Science.gov

Sample records for covariate-adjusted nonlinear regression

  1. Covariate-adjusted response-adaptive designs for binary response.

    PubMed

    Rosenberger, W F; Vidyashankar, A N; Agarwal, D K

    2001-11-01

    An adaptive allocation design for phase III clinical trials that incorporates covariates is described. The allocation scheme maps the covariate-adjusted odds ratio from a logistic regression model onto [0, 1]. Simulations assume that both staggered entry and time to response are random and follow a known probability distribution that can depend on the treatment assigned, the patient's response, a covariate, or a time trend. Confidence intervals on the covariate-adjusted odds ratio are slightly anticonservative for the adaptive design under the null hypothesis, but power is similar to equal allocation under various alternatives for n = 200. For similar power, the net savings in terms of expected number of treatment failures is modest, but enough to make this design attractive for certain studies where known covariates are expected to be important, stratification is not desired, and treatment failures have a high ethical cost.
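
    A minimal sketch of this kind of allocation rule, assuming statsmodels is available; mapping the covariate-adjusted odds ratio onto [0, 1] via OR/(1 + OR) is an illustrative choice, not necessarily the paper's:

    ```python
    # Sketch: covariate-adjusted response-adaptive allocation (illustrative only).
    # Assumes a binary response y, treatment indicator trt, and one covariate x.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    trt = rng.integers(0, 2, n)          # treatment assignments accrued so far
    x = rng.normal(size=n)               # baseline covariate
    logit_p = -0.5 + 0.8 * trt + 0.4 * x
    y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    # Logistic regression of response on treatment and covariate.
    X = sm.add_constant(np.column_stack([trt, x]))
    fit = sm.Logit(y, X).fit(disp=0)
    adj_or = np.exp(fit.params[1])       # covariate-adjusted odds ratio for treatment

    # Map the adjusted odds ratio onto [0, 1] to get the probability that the
    # next patient is allocated to the treatment arm (hypothetical mapping).
    alloc_prob = adj_or / (1 + adj_or)
    print(f"adjusted OR = {adj_or:.2f}, next allocation P(treatment) = {alloc_prob:.2f}")
    ```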

  2. Covariate adjustment increased power in randomized controlled trials: an example in traumatic brain injury

    PubMed Central

    Turner, Elizabeth L.; Perel, Pablo; Clayton, Tim; Edwards, Phil; Hernández, Adrian V.; Roberts, Ian; Shakur, Haleema; Steyerberg, Ewout W.

    2013-01-01

    Objective We aimed to determine to what extent covariate adjustment could affect power in a randomized controlled trial (RCT) of a heterogeneous population with traumatic brain injury (TBI). Study Design and Setting We analyzed 14-day mortality in 9497 participants in the Corticosteroid Randomisation After Significant Head Injury (CRASH) RCT of corticosteroid vs. placebo. Adjustment was made using logistic regression for baseline covariates of two validated risk models derived from external data (IMPACT) and from the CRASH data. The relative sample size (RESS) measure, defined as the ratio of the sample size required by an adjusted analysis to attain the same power as the unadjusted reference analysis, was used to assess the impact of adjustment. Results Corticosteroid was associated with higher mortality compared to placebo (OR=1.25, 95% CI: 1.13, 1.39). RESS values of 0.79 and 0.73 were obtained by adjustment using the IMPACT and CRASH models, respectively, which implies, for example, an increase in power from 80% to 88% and 91%, respectively. Conclusion Moderate gains in power may be obtained using covariate adjustment from logistic regression in heterogeneous conditions such as TBI. Although analyses of RCTs might consider covariate adjustment to improve power, we caution against this approach in the planning of RCTs. PMID:22169080
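
    Under the usual normal approximation, the power gain implied by a given RESS can be checked directly; the short calculation below (scipy assumed) recovers the 88% and 91% figures quoted above from RESS values of 0.79 and 0.73 at a two-sided alpha of 0.05:

    ```python
    # Power implied by a relative sample size (RESS) when the unadjusted analysis
    # has 80% power, using the usual normal approximation.
    from scipy.stats import norm

    alpha, base_power = 0.05, 0.80
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(base_power)                 # 0.8416 for 80% power

    for ress in (0.79, 0.73):
        # Adjustment is equivalent to inflating the effective sample size by 1/RESS,
        # which scales the noncentrality parameter by 1/sqrt(RESS).
        power = norm.cdf((z_alpha + z_beta) / ress**0.5 - z_alpha)
        print(f"RESS = {ress:.2f} -> approximate power = {power:.2f}")
    ```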

  3. On variance estimate for covariate adjustment by propensity score analysis.

    PubMed

    Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

    2016-09-10

    Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimator of the treatment effect estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
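
    The paper's two-stage variance estimator is not reproduced here, but the empirical bootstrap it also describes is straightforward to sketch; the fragment below (statsmodels assumed, simulated observational data) adjusts for an estimated propensity score as a covariate in the outcome model and bootstraps the standard error of the treatment effect:

    ```python
    # Bootstrap variance for covariate adjustment by propensity score (sketch).
    import numpy as np
    import statsmodels.api as sm

    def ps_adjusted_effect(y, trt, X):
        """Treatment effect from an outcome model adjusting for the estimated PS."""
        ps = sm.Logit(trt, sm.add_constant(X)).fit(disp=0).predict()
        design = sm.add_constant(np.column_stack([trt, ps]))
        return sm.OLS(y, design).fit().params[1]

    def bootstrap_se(y, trt, X, n_boot=500, seed=0):
        rng = np.random.default_rng(seed)
        n = len(y)
        est = [ps_adjusted_effect(y[idx], trt[idx], X[idx])
               for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
        return np.std(est, ddof=1)

    # Simulated observational data: confounders X affect both treatment and outcome.
    rng = np.random.default_rng(1)
    n = 500
    X = rng.normal(size=(n, 2))
    trt = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))
    y = 1.0 * trt + X @ np.array([0.8, -0.6]) + rng.normal(size=n)

    print("effect:", ps_adjusted_effect(y, trt, X))
    print("bootstrap SE:", bootstrap_se(y, trt, X))
    ```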

  4. Method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1972-01-01

    Two computer programs, developed according to two general types of exponential models for conducting nonlinear exponential regression analysis, are described. A least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.

  5. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and then applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
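
    The linearize-and-correct cycle described above can be illustrated with a short Gauss-Newton iteration for a single-exponential decay model; this Python sketch conveys the general idea and is not the original FORTRAN program:

    ```python
    # Gauss-Newton fit of y = a * exp(-b * t) by repeated Taylor-series linearization.
    import numpy as np

    def fit_exponential(t, y, a0, b0, tol=1e-8, max_iter=50):
        beta = np.array([a0, b0], dtype=float)        # nominal parameter estimates
        for _ in range(max_iter):
            a, b = beta
            f = a * np.exp(-b * t)
            J = np.column_stack([np.exp(-b * t),              # df/da
                                 -a * t * np.exp(-b * t)])    # df/db
            # Correction from the linearized least-squares problem J*delta ~ (y - f).
            delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
            beta += delta
            if np.max(np.abs(delta)) < tol:           # predetermined stopping criterion
                break
        return beta

    t = np.linspace(0, 5, 50)
    rng = np.random.default_rng(2)
    y = 3.0 * np.exp(-1.2 * t) + 0.02 * rng.normal(size=t.size)
    print(fit_exponential(t, y, a0=2.0, b0=1.0))      # ~ [3.0, 1.2]
    ```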

  6. Differential correction schemes in nonlinear regression

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.; Speed, F. M.

    1972-01-01

    Classical iterative methods in nonlinear regression are reviewed and improved upon. This is accomplished by discussion of the geometrical and theoretical motivation for introducing modifications using generalized matrix inversion. Examples having inherent pitfalls are presented and compared in terms of results obtained using classical and modified techniques. The modification is shown to be useful alone or in conjunction with other modifications appearing in the literature.

  7. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, along with an application of the proposed method to a real data example. PMID:24038458
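
    A compact sketch of covariate adjustment via inverse probability weighting in a randomized study (statsmodels assumed): the treatment model uses only baseline covariates, so the adjustment is fixed before the outcome is examined, in the spirit of the two-stage procedure described above.

    ```python
    # Inverse probability weighting for covariate adjustment in a randomized study.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 400
    X = rng.normal(size=(n, 3))                    # baseline covariates
    A = rng.binomial(1, 0.5, n)                    # randomized assignment
    Y = 0.5 * A + X @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=n)

    # Stage 1 (outcome not used): model treatment on baseline covariates.
    ps = sm.Logit(A, sm.add_constant(X)).fit(disp=0).predict()
    w = A / ps + (1 - A) / (1 - ps)                # inverse probability weights

    # Stage 2: weighted comparison of the treatment arms.
    design = sm.add_constant(A)
    effect = sm.WLS(Y, design, weights=w).fit().params[1]
    print(f"IPW-adjusted treatment effect: {effect:.3f}")   # true effect is 0.5
    ```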

  8. Estimating covariate-adjusted measures of diagnostic accuracy based on pooled biomarker assessments.

    PubMed

    McMahan, Christopher S; McLain, Alexander C; Gallagher, Colin M; Schisterman, Enrique F

    2016-07-01

    There is a need for epidemiological and medical researchers to identify new biomarkers (biological markers) that are useful in determining exposure levels and/or for the purposes of disease detection. Often this process is stunted by high testing costs associated with evaluating new biomarkers. Traditionally, biomarker assessments are individually tested within a target population. Pooling has been proposed to help alleviate the testing costs, where pools are formed by combining several individual specimens. Methods for using pooled biomarker assessments to estimate discriminatory ability have been developed. However, all these procedures have failed to acknowledge confounding factors. In this paper, we propose a regression methodology based on pooled biomarker measurements that allows the assessment of the discriminatory ability of a biomarker of interest. In particular, we develop covariate-adjusted estimators of the receiver-operating characteristic curve, the area under the curve, and Youden's index. We establish the asymptotic properties of these estimators and develop inferential techniques that allow one to assess whether a biomarker is a good discriminator between cases and controls, while controlling for confounders. The finite sample performance of the proposed methodology is illustrated through simulation. We apply our methods to analyze myocardial infarction (MI) data, with the goal of determining whether the pro-inflammatory cytokine interleukin-6 is a good predictor of MI after controlling for the subjects' cholesterol levels. PMID:26927583
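
    The pooled-measurement estimators themselves are beyond a short example, but the target quantities are easy to illustrate with individual-level data: below, a biomarker is adjusted for a confounder by regressing the confounder out, and the ROC curve, AUC and Youden's index are computed on the residuals (a simplification using scikit-learn, not the paper's pooled estimator):

    ```python
    # Illustrative covariate-adjusted ROC, AUC, and Youden's index (individual data).
    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(4)
    n = 600
    chol = rng.normal(size=n)                        # confounder (e.g., cholesterol)
    case = rng.binomial(1, 0.4, n)                   # case/control status
    marker = 0.8 * case + 0.6 * chol + rng.normal(size=n)   # biomarker (e.g., IL-6)

    # Remove the confounder's contribution, then assess discrimination on residuals.
    resid = sm.OLS(marker, sm.add_constant(chol)).fit().resid
    fpr, tpr, _ = roc_curve(case, resid)
    auc = roc_auc_score(case, resid)
    youden = np.max(tpr - fpr)                       # Youden's J = max(sens + spec - 1)
    print(f"adjusted AUC = {auc:.2f}, Youden's index = {youden:.2f}")
    ```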

  9. Cardiovascular Response Identification Based on Nonlinear Support Vector Regression

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Su, Steven W.; Chan, Gregory S. H.; Celler, Branko G.; Cheng, Teddy M.; Savkin, Andrey V.

    This study experimentally investigates the relationships between central cardiovascular variables and oxygen uptake based on nonlinear analysis and modeling. Ten healthy subjects were studied using cycle-ergometry exercise tests with constant workloads ranging from 25 W to 125 W. Breath-by-breath gas exchange, heart rate, cardiac output, stroke volume and blood pressure were measured at each stage. The modeling results showed that the nonlinear modeling method (support vector regression) outperforms the traditional regression method (reducing the estimation error by 59% to 80% and the testing error by 53% to 72%) and is well suited to the modeling of physiological data, especially with small training data sets.
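
    A minimal sketch of nonlinear support vector regression against an ordinary linear fit, with synthetic workload/oxygen-uptake data standing in for the measured cardiovascular variables (scikit-learn assumed):

    ```python
    # Nonlinear support vector regression vs. ordinary linear regression (sketch).
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(5)
    workload = rng.uniform(25, 125, size=(200, 1))            # Watts
    vo2 = 0.01 * workload[:, 0] ** 1.3 + 0.3 + 0.05 * rng.normal(size=200)  # oxygen uptake

    X_tr, X_te, y_tr, y_te = train_test_split(workload, vo2, random_state=0)
    svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)
    lin = LinearRegression().fit(X_tr, y_tr)

    for name, model in [("SVR", svr), ("linear", lin)]:
        print(name, "test MSE:", mean_squared_error(y_te, model.predict(X_te)))
    ```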

  10. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
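
    One practical way to approximate kernel PLS is to build an explicit approximate RKHS feature map and run ordinary PLS in that space; the sketch below uses a Nystroem RBF expansion from scikit-learn and is an approximation of the idea, not the exact algorithm summarized above:

    ```python
    # Approximate kernel PLS: map inputs into an (approximate) RKHS feature space
    # with a Nystroem RBF expansion, then run ordinary PLS regression in that space.
    import numpy as np
    from sklearn.kernel_approximation import Nystroem
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(6)
    X = rng.uniform(-3, 3, size=(300, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.normal(size=300)

    model = make_pipeline(
        Nystroem(kernel="rbf", gamma=0.5, n_components=100, random_state=0),
        PLSRegression(n_components=10),
    )
    model.fit(X, y)
    print("training R^2:", model.score(X, y))
    ```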

  11. Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…

  12. Covariate Adjustment Strategy Increases Power in the Randomized Controlled Trial With Discrete-Time Survival Endpoints

    ERIC Educational Resources Information Center

    Safarkhani, Maryam; Moerbeek, Mirjam

    2013-01-01

    In a randomized controlled trial, a decision needs to be made about the total number of subjects for adequate statistical power. One way to increase the power of a trial is by including a predictive covariate in the model. In this article, the effects of various covariate adjustment strategies on increasing the power is studied for discrete-time…

  13. Validity of a Residualized Dependent Variable after Pretest Covariance Adjustments: Still the Same Variable?

    ERIC Educational Resources Information Center

    Nimon, Kim; Henson, Robin K.

    2015-01-01

    The authors empirically examined whether the validity of a residualized dependent variable after covariance adjustment is comparable to that of the original variable of interest. When variance of a dependent variable is removed as a result of one or more covariates, the residual variance may not reflect the same meaning. Using the pretest-posttest…

  14. An Excel Solver Exercise to Introduce Nonlinear Regression

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Business students taking business analytics courses that have significant predictive modeling components, such as marketing research, data mining, forecasting, and advanced financial modeling, are introduced to nonlinear regression using application software that is a "black box" to the students. Thus, although correct models are…

  15. Detecting influential observations in nonlinear regression modeling of groundwater flow

    USGS Publications Warehouse

    Yager, R.M.

    1998-01-01

    Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
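
    For a model that is approximately linear, the same influence statistics can be computed directly; the sketch below (statsmodels assumed, with a linear stand-in for the groundwater model) shows where Cook's D and DFBETAS come from:

    ```python
    # Cook's D and DFBETAS for detecting influential observations (linear illustration).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 60
    X = sm.add_constant(rng.normal(size=(n, 2)))
    y = X @ np.array([1.0, 2.0, -1.5]) + rng.normal(size=n)
    y[5] += 8.0                                   # plant an observation with high influence

    infl = sm.OLS(y, X).fit().get_influence()
    cooks_d = infl.cooks_distance[0]              # effect of deleting each observation
    dfbetas = infl.dfbetas                        # per-parameter influence of each observation

    print("most influential observation:", np.argmax(cooks_d))
    print("its DFBETAS:", dfbetas[np.argmax(cooks_d)])
    ```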

  16. S-PLUS Library For Nonlinear Bayesian Regression Analysis

    SciTech Connect

    Heasler, Patrick G.; Anderson, Kevin K.; Hylden, Jeff L.

    2002-09-25

    This document describes a library of S-PLUS functions used for nonlinear Bayesian regression in general and IR estimation in particular. This library has been developed to solve a general class of problems described by the nonlinear regression model: Y = F(beta, data) + E, where Y represents a vector of measurements and F(beta, data) represents an S-PLUS function that has been constructed to describe the measurements. The function F(beta, data) depends upon beta, a vector of parameters to be estimated, while data is an S-PLUS object containing any other information needed by the model. The errors, E, are assumed to be independent, normal, unbiased, and to have known standard deviations of stdev(E) = sd.E. The components in beta are split into two groups: estimation parameters and nuisance parameters. The Bayesian prior on the estimation parameters will generally be non-informative, while the prior on the nuisance parameters will be constructed to reflect the information we have about them. We hope an extended beta distribution is general enough to adequately represent the information we have on them. While we expect these functions to be improved and revised, this library is mature enough to be used without major modification.
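
    Not the S-PLUS library itself, but a minimal Python sketch of the same model structure - Y = F(beta, data) + E with known error standard deviations and a Gaussian prior on beta - fit here by maximizing the log-posterior with scipy:

    ```python
    # Minimal Bayesian nonlinear regression: Gaussian likelihood with known sd.E
    # plus a prior on beta, maximized to obtain the posterior mode (sketch only).
    import numpy as np
    from scipy.optimize import minimize

    def F(beta, t):                                   # user-supplied model function
        return beta[0] * np.exp(-beta[1] * t)

    rng = np.random.default_rng(8)
    t = np.linspace(0, 4, 40)
    sd_E = 0.05 * np.ones_like(t)                     # known measurement standard deviations
    y = F([2.0, 0.7], t) + sd_E * rng.normal(size=t.size)

    prior_mean, prior_sd = np.array([1.5, 1.0]), np.array([1.0, 1.0])

    def neg_log_posterior(beta):
        resid = (y - F(beta, t)) / sd_E
        prior = (beta - prior_mean) / prior_sd
        return 0.5 * np.sum(resid**2) + 0.5 * np.sum(prior**2)

    fit = minimize(neg_log_posterior, x0=prior_mean, method="Nelder-Mead")
    print("posterior mode:", fit.x)
    ```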

  17. Variable selection for covariate-adjusted semiparametric inference in randomized clinical trials

    PubMed Central

    Yuan, Shuai; Zhang, Hao Helen; Davidian, Marie

    2013-01-01

    Extensive baseline covariate information is routinely collected on participants in randomized clinical trials, and it is well-recognized that a proper covariate-adjusted analysis can improve the efficiency of inference on the treatment effect. However, such covariate adjustment has engendered considerable controversy, as post hoc selection of covariates may involve subjectivity and lead to biased inference, while prior specification of the adjustment may exclude important variables from consideration. Accordingly, how to select covariates objectively to gain maximal efficiency is of broad interest. We propose and study the use of modern variable selection methods for this purpose in the context of a semiparametric framework, under which variable selection in modeling the relationship between outcome and covariates is separated from estimation of the treatment effect, circumventing the potential for selection bias associated with standard analysis of covariance methods. We demonstrate that such objective variable selection techniques combined with this framework can identify key variables and lead to unbiased and efficient inference on the treatment effect. A critical issue in finite samples is validity of estimators of uncertainty, such as standard errors and confidence intervals for the treatment effect. We propose an approach to estimation of sampling variation of estimated treatment effect and show its superior performance relative to that of existing methods. PMID:22733628

  18. Development and Application of Nonlinear Land-Use Regression Models

    NASA Astrophysics Data System (ADS)

    Champendal, Alexandre; Kanevski, Mikhail; Huguenot, Pierre-Emmanuel

    2014-05-01

    The problem of air pollution modelling in urban zones is of great importance both from scientific and applied points of view. At present there are several fundamental approaches, either based on science-based modelling (air pollution dispersion) or on the application of space-time geostatistical methods (e.g. the family of kriging models or conditional stochastic simulations). Recently, there have been important developments in so-called Land Use Regression (LUR) models. These models take into account geospatial information (e.g. traffic network, sources of pollution, average traffic, population census, land use, etc.) at different scales, for example, using buffering operations. Usually the dimension of the input space (number of independent variables) is within the range of 10-100. It was shown that LUR models have some potential to model complex and highly variable patterns of air pollution in urban zones. Most LUR models currently in use are linear. In the present research, nonlinear LUR models are developed and applied to the city of Geneva. Two nonlinear data-driven models were elaborated: a multilayer perceptron and a random forest. An important part of the research also deals with a comprehensive exploratory data analysis using statistical, geostatistical and time series tools. Unsupervised self-organizing maps were applied to better understand space-time patterns of the pollution. The real data case study deals with spatial-temporal air pollution data of Geneva (2002-2011). Nitrogen dioxide (NO2) is the focus: it affects human health and plants, contributes to acid rain, reduces plant growth, production and pesticide resistance, and accelerates the corrosion of materials. The data used for this study consist of a set of 106 NO2 passive sensors. 80 were used to build the models and the remaining 36 have constituted
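
    A compact sketch of a nonlinear LUR model of the kind described, fitting a random forest to synthetic buffered predictors (scikit-learn assumed); the Geneva NO2 data are not reproduced here:

    ```python
    # Nonlinear land-use regression with a random forest (sketch on synthetic data).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(9)
    n = 106                                            # e.g., passive NO2 sensors
    features = rng.uniform(size=(n, 12))               # buffered traffic, land use, ...
    no2 = 40 * features[:, 0] ** 2 + 15 * features[:, 1] + 5 * rng.normal(size=n)

    X_tr, X_te, y_tr, y_te = train_test_split(features, no2, train_size=80, random_state=0)
    lur = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    print("validation R^2:", r2_score(y_te, lur.predict(X_te)))
    ```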

  19. Linear and nonlinear structural identifications using the support vector regression

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Sato, Tadanobu

    2006-03-01

    Robust and efficient identification methods are needed in the structural health monitoring field, especially when the I/O data are contaminated by high-level noise and the structure studied is a large-scale one. Support vector regression (SVR) is a promising nonlinear modeling method that has been found to work very well in many fields and has strong potential for application in system identification. SVR-based methods are provided in this article for linear large-scale structural identification and nonlinear hysteretic structural identification. The least squares (LS) estimator is a cornerstone of statistics but is not robust to outliers. Instead of the classical Gaussian loss function without regularization used in the LS method, a novel ε-insensitive loss function is employed in the SVR. Meanwhile, the SVR adopts the max-margin idea to search for an optimum hyperplane separating the training data into two subsets by maximizing the margin between them. Therefore, the SVR-based structural identification approach is robust and accurate even though the observation data involve different kinds of high-level noise. By means of the local strategy, the linear large-scale structural identification approach based on the SVR is first investigated. The SVR can identify structural parameters directly by writing the structural observation equations as linear equations with respect to the unknown structural parameters. Furthermore, the substructural idea employed substantially reduces the number of unknown parameters, guaranteeing that the SVR works in a low dimension and focusing the identification on an arbitrary local subsystem. Nonlinear structural identification is also crucial, because structures exhibit highly nonlinear behavior under severe loads such as strong seismic excitations. The Bouc-Wen model is often utilized to describe structural nonlinear properties; the power parameter of the model, however, is often assumed to be known even though it is unknown

  20. Nonlinear Identification Using Orthogonal Forward Regression With Nested Optimal Regularization.

    PubMed

    Hong, Xia; Chen, Sheng; Gao, Junbin; Harris, Chris J

    2015-12-01

    An efficient data based-modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since like our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the same LOOMSE is adopted for model selection, our proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike our previous LROLS algorithm which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of support vector machine and least absolute shrinkage and selection operator as well as the LROLS algorithm.

  1. Fast nonlinear regression method for CT brain perfusion analysis.

    PubMed

    Bennink, Edwin; Oosterbroek, Jaap; Kudo, Kohsuke; Viergever, Max A; Velthuis, Birgitta K; de Jong, Hugo W A M

    2016-04-01

    Although computed tomography (CT) perfusion (CTP) imaging enables rapid diagnosis and prognosis of ischemic stroke, current CTP analysis methods have several shortcomings. We propose a fast nonlinear regression method with a box-shaped model (boxNLR) that has important advantages over the current state-of-the-art method, block-circulant singular value decomposition (bSVD). These advantages include improved robustness to attenuation curve truncation, extensibility, and unified estimation of perfusion parameters. The method is compared with bSVD and with a commercial SVD-based method. The three methods were quantitatively evaluated by means of a digital perfusion phantom described by Kudo et al., and qualitatively with the aid of 50 clinical CTP scans. All three methods yielded high Pearson correlation coefficients ([Formula: see text]) with the ground truth in the phantom. The boxNLR perfusion maps of the clinical scans showed higher correlation with bSVD than the perfusion maps from the commercial method. Furthermore, it was shown that boxNLR estimates are robust to noise, truncation, and tracer delay. The proposed method provides a fast and reliable way of estimating perfusion parameters from CTP scans. This suggests it could be a viable alternative to current commercial and academic methods. PMID:27413770

  2. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    ERIC Educational Resources Information Center

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the…

  3. Comparison between Linear and Nonlinear Regression in a Laboratory Heat Transfer Experiment

    ERIC Educational Resources Information Center

    Gonçalves, Carine Messias; Schwaab, Marcio; Pinto, José Carlos

    2013-01-01

    In order to interpret laboratory experimental data, undergraduate students are accustomed to performing linear regression through linearized versions of nonlinear models. However, the use of linearized models can lead to statistically biased parameter estimates. Even so, it is not an easy task to introduce nonlinear regression and show for the students…

  4. Confidence region estimation techniques for nonlinear regression: three case studies.

    SciTech Connect

    Swiler, Laura Painton (Sandia National Laboratories, Albuquerque, NM); Sullivan, Sean P. (University of Texas, Austin, TX); Stucky-Mack, Nicholas J. (Harvard University, Cambridge, MA); Roberts, Randall Mark; Vugrin, Kay White

    2005-10-01

    This work focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a Log-Likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. Although the F-test and Log-Likelihood methods result in similar regions, there are differences between these regions and the regions generated by the linear approximation method for nonlinear problems. The differing results, capabilities, and drawbacks of all three methods are discussed.
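
    The F-test region mentioned above is the contour SSR(theta) <= SSR(theta_hat) * [1 + p/(n - p) * F(p, n - p; 1 - alpha)]; a minimal sketch for a two-parameter exponential model, assuming scipy:

    ```python
    # F-test confidence region for a two-parameter nonlinear regression (sketch).
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import f as f_dist

    def model(t, a, b):
        return a * np.exp(-b * t)

    rng = np.random.default_rng(10)
    t = np.linspace(0, 5, 30)
    y = model(t, 2.0, 0.8) + 0.05 * rng.normal(size=t.size)

    popt, _ = curve_fit(model, t, y, p0=[1.0, 1.0])
    n, p = t.size, 2
    ssr_hat = np.sum((y - model(t, *popt)) ** 2)
    threshold = ssr_hat * (1 + p / (n - p) * f_dist.ppf(0.95, p, n - p))

    # Grid of (a, b) values; points with SSR below the threshold lie inside the
    # 95% joint confidence region.
    a_grid = np.linspace(popt[0] - 0.2, popt[0] + 0.2, 200)
    b_grid = np.linspace(popt[1] - 0.2, popt[1] + 0.2, 200)
    A, B = np.meshgrid(a_grid, b_grid)
    ssr = ((y[None, None, :] - A[..., None] * np.exp(-B[..., None] * t)) ** 2).sum(axis=-1)
    inside = ssr <= threshold
    print("fraction of grid inside region:", inside.mean())
    ```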

  5. Predicting dissolved oxygen concentration using kernel regression modeling approaches with nonlinear hydro-chemical data.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali

    2014-05-01

    Kernel function-based regression models were constructed and applied to a nonlinear hydro-chemical dataset pertaining to surface water for predicting dissolved oxygen levels. Initial features were selected using a nonlinear approach. Nonlinearity in the data was tested using the BDS statistic, which revealed that the data have a nonlinear structure. Kernel ridge regression, kernel principal component regression, kernel partial least squares regression, and support vector regression models were developed using the Gaussian kernel function, and their generalization and predictive abilities were compared in terms of several statistical parameters. Model parameters were optimized using the cross-validation procedure. The proposed kernel regression methods successfully captured the nonlinear features of the original data by transforming it to a high-dimensional feature space using the kernel function. The performance of all the kernel-based modeling methods used here was comparable both in terms of predictive and generalization abilities. Values of the performance criteria suggested that the constructed models adequately fit the nonlinear data and have good predictive capabilities. PMID:24338099

  6. Head Pose Estimation by a Stepwise Nonlinear Regression

    NASA Astrophysics Data System (ADS)

    Bailly, Kevin; Milgram, Maurice; Phothisane, Philippe

    Head pose estimation is a crucial step for numerous face applications such as gaze tracking and face recognition. In this paper, we introduce a new method to learn the mapping between a set of features and the corresponding head pose. It combines a filter based feature selection and a Generalized Regression Neural Network where inputs are sequentially selected through a boosting process. We propose the Fuzzy Functional Criterion, a new filter used to select relevant features. At each step, features are evaluated using weights on examples computed using the error produced by the neural network at the previous step. This boosting strategy helps to focus on hard examples and selects a set of complementary features. Results are compared with three state-of-the-art methods on the Pointing 04 database.

  7. Covariate adjustment for two-sample treatment comparisons in randomized clinical trials: a principled yet flexible approach.

    PubMed

    Tsiatis, Anastasios A; Davidian, Marie; Zhang, Min; Lu, Xiaomin

    2008-10-15

    There is considerable debate regarding whether and how covariate-adjusted analyses should be used in the comparison of treatments in randomized clinical trials. Substantial baseline covariate information is routinely collected in such trials, and one goal of adjustment is to exploit covariates associated with outcome to increase precision of estimation of the treatment effect. However, concerns are routinely raised over the potential for bias when the covariates used are selected post hoc and the potential for adjustment based on a model of the relationship between outcome, covariates, and treatment to invite a 'fishing expedition' for the model leading to the most dramatic effect estimate. By appealing to the theory of semiparametrics, we are led naturally to a characterization of all treatment effect estimators and to principled, practically feasible methods for covariate adjustment that yield the desired gains in efficiency and that allow covariate relationships to be identified and exploited while circumventing the usual concerns. The methods and strategies for their implementation in practice are presented. Simulation studies and an application to data from an HIV clinical trial demonstrate the performance of the techniques relative to the existing methods. PMID:17960577
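
    A simple member of the class of estimators this semiparametric theory characterizes is the augmented difference in means sketched below (statsmodels assumed); it adjusts a randomized comparison using arm-specific outcome regressions and is an illustration of the general form, not the authors' exact implementation:

    ```python
    # Augmented (covariate-adjusted) estimator of a treatment difference in an RCT.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    n, pi = 500, 0.5                                  # pi = randomization probability
    X = rng.normal(size=(n, 3))
    A = rng.binomial(1, pi, n)
    Y = 1.0 * A + X @ np.array([0.7, -0.4, 0.2]) + rng.normal(size=n)

    # Arm-specific regressions of outcome on baseline covariates, predicted for everyone.
    h1 = sm.OLS(Y[A == 1], sm.add_constant(X[A == 1])).fit().predict(sm.add_constant(X))
    h0 = sm.OLS(Y[A == 0], sm.add_constant(X[A == 0])).fit().predict(sm.add_constant(X))

    # Augmented estimator: inverse-probability-weighted means plus augmentation terms
    # that exploit the covariate-outcome relationship to reduce variance.
    mu1 = np.mean(A * Y / pi - (A - pi) / pi * h1)
    mu0 = np.mean((1 - A) * Y / (1 - pi) + (A - pi) / (1 - pi) * h0)
    print("adjusted treatment difference:", mu1 - mu0)     # true difference is 1.0
    ```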

  8. Incorporation of prior information on parameters into nonlinear regression groundwater flow models. 1. Theory.

    USGS Publications Warehouse

    Cooley, R.L.

    1982-01-01

    Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: 1) prior information having known reliability (that is, bias and random error structure), and 2) prior information consisting of best available estimates of unknown reliability. It is shown that if both scales of prior information are available, then a combined regression analysis may be made. -from Author

  9. An Investigation of the Effect of Nonlinearity of Regression on the ANCOVA F Test.

    ERIC Educational Resources Information Center

    Harwell, Michael

    The effect of a nonlinear regression term on the behavior of the standard analysis of covariance (ANCOVA) F test was investigated for balanced and randomized designs through a Monte Carlo study. The results indicate that the use of the standard analysis of covariance model when a quadratic term is present has little effect on Type I error rates…

  10. A stepwise regression tree for nonlinear approximation: applications to estimating subpixel land cover

    USGS Publications Warehouse

    Huang, C.; Townshend, J.R.G.

    2003-01-01

    A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm represents an improvement over SLR in that it can approximate nonlinear relationships, and over BRT in that it gives more realistic predictions. The applicability of this method to estimating subpixel forest cover was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed better than or at least as well as BRT at all 10 equal forest proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.

  11. Nonlinear logistic regression model for outcomes after endourologic procedures: a novel predictor.

    PubMed

    Kadlec, Adam O; Ohlander, Samuel; Hotaling, James; Hannick, Jessica; Niederberger, Craig; Turk, Thomas M

    2014-08-01

    The purpose of this study was to design a thorough and practical nonlinear logistic regression model that can be used for outcome prediction after various forms of endourologic intervention. Input variables and outcome data from 382 renal units endourologically treated at a single institution were used to build and cross-validate an independently designed nonlinear logistic regression model. Model outcomes were stone-free status and need for a secondary procedure. The model predicted stone-free status with sensitivity 75.3% and specificity 60.4%, yielding a positive predictive value (PPV) of 75.3% and negative predictive value (NPV) of 60.4%, with classification accuracy of 69.6%. Receiver operating characteristic area under the curve (ROC AUC) was 0.749. The model predicted the need for a secondary procedure with sensitivity 30% and specificity 98.3%, yielding a PPV of 60% and NPV of 94.2%. ROC AUC was 0.863. The model had equivalent predictive value to a traditional logistic regression model for the secondary procedure outcome. This study is proof-of-concept that a nonlinear regression model adequately predicts key clinical outcomes after shockwave lithotripsy, ureteroscopic lithotripsy, and percutaneous nephrolithotomy. This model holds promise for further optimization via dataset expansion, preferably with multi-institutional data, and could be developed into a predictive nomogram in the future.

  12. A comparison of several methods of solving nonlinear regression groundwater flow problems.

    USGS Publications Warehouse

    Cooley, R.L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems.-from Author
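
    The comparison can be reproduced in miniature with scipy's general-purpose optimizers standing in for the original implementations (Levenberg-Marquardt for the Marquardt method, BFGS for quasi-Newton, and scipy's conjugate-gradient solver in place of Fletcher-Reeves):

    ```python
    # Comparing minimization methods on a nonlinear least-squares objective (sketch).
    import numpy as np
    from scipy.optimize import least_squares, minimize

    rng = np.random.default_rng(12)
    t = np.linspace(0, 10, 50)
    y = 5.0 * np.exp(-0.3 * t) + 1.0 + 0.1 * rng.normal(size=t.size)

    def residuals(beta):
        return y - (beta[0] * np.exp(-beta[1] * t) + beta[2])

    def sse(beta):
        return np.sum(residuals(beta) ** 2)

    x0 = np.array([1.0, 1.0, 0.0])
    lm = least_squares(residuals, x0, method="lm")              # Marquardt-type solver
    print("LM:  ", lm.x, "nfev =", lm.nfev)
    for method in ("BFGS", "CG"):                               # quasi-Newton, conjugate gradient
        res = minimize(sse, x0, method=method)
        print(f"{method}:", res.x, "nfev =", res.nfev)
    ```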

  13. Multilayer Perceptron for Robust Nonlinear Interval Regression Analysis Using Genetic Algorithms

    PubMed Central

    2014-01-01

    On the basis of fuzzy regression, computational intelligence models such as neural networks can be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches are related to whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron to construct a robust nonlinear interval regression model using a genetic algorithm. Outliers beyond or beneath the data interval impose only a slight effect on the determination of the data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets. PMID:25110755

  14. A Nonlinear Causality Estimator Based on Non-Parametric Multiplicative Regression

    PubMed Central

    Nicolaou, Nicoletta; Constandinou, Timothy G.

    2016-01-01

    Causal prediction has become a popular tool for neuroscience applications, as it allows the study of relationships between different brain areas during rest, cognitive tasks or brain disorders. We propose a nonparametric approach for the estimation of nonlinear causal prediction for multivariate time series. In the proposed estimator, CNPMR, autoregressive modeling is replaced by Nonparametric Multiplicative Regression (NPMR). NPMR quantifies interactions between a response variable (effect) and a set of predictor variables (cause); here, we modified NPMR for model prediction. We also demonstrate how a particular measure, the sensitivity Q, could be used to reveal the structure of the underlying causal relationships. We apply CNPMR on artificial data with known ground truth (5 datasets), as well as physiological data (2 datasets). CNPMR correctly identifies both linear and nonlinear causal connections that are present in the artificial data, as well as physiologically relevant connectivity in the real data, and does not seem to be affected by filtering. The sensitivity measure also provides useful information about the latent connectivity. The proposed estimator addresses many of the limitations of linear Granger causality and other nonlinear causality estimators. CNPMR is compared with pairwise and conditional Granger causality (linear) and Kernel-Granger causality (nonlinear). The proposed estimator can be applied to pairwise or multivariate estimations without any modifications to the main method. Its nonparametric nature, its ability to capture nonlinear relationships and its robustness to filtering make it appealing for a number of applications. PMID:27378901

  16. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    SciTech Connect

    Harlim, John; Mahdi, Adam; Majda, Andrew J.

    2014-01-15

    A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.

  17. Aboveground biomass and carbon stocks modelling using non-linear regression model

    NASA Astrophysics Data System (ADS)

    Ain Mohd Zaki, Nurul; Abd Latif, Zulkiflee; Nazip Suratman, Mohd; Zainee Zainal, Mohd

    2016-06-01

    Aboveground biomass (AGB) is an important source of uncertainty in carbon estimation for tropical forests due to the variation in species biodiversity and the complex structure of tropical rain forests. Nevertheless, the tropical rainforest holds the most extensive forest in the world, with a vast diversity of trees with layered canopies. The use of optical sensors integrated with empirical models is a common way to assess AGB. Using regression, a linkage between remote sensing and biophysical parameters of the forest may be made. Therefore, this paper examines the accuracy of a non-linear regression equation with a quadratic function to estimate the AGB and carbon stocks for the tropical lowland Dipterocarp forest of the Ayer Hitam forest reserve, Selangor. The main aim of this investigation is to obtain the relationship between biophysical parameters from field plots and the remotely sensed data using a nonlinear regression model. The result showed that there is a good relationship between crown projection area (CPA) and carbon stocks (CS), with a Pearson correlation coefficient (r) of 0.671 (p < 0.01). The study concluded that the integration of Worldview-3 imagery with a canopy height model (CHM) raster derived from LiDAR was useful for quantifying the AGB and carbon stocks for a larger sample area of the lowland Dipterocarp forest.
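
    A minimal sketch of a quadratic regression of carbon stock on crown projection area, with synthetic values standing in for the Ayer Hitam field data (numpy/scipy assumed):

    ```python
    # Quadratic regression of carbon stock on crown projection area (sketch).
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(13)
    cpa = rng.uniform(5, 80, 120)                       # crown projection area, m^2
    cs = 0.02 * cpa**2 + 0.5 * cpa + 10 * rng.normal(size=cpa.size)  # carbon stock

    coeffs = np.polyfit(cpa, cs, deg=2)                 # fit CS = a*CPA^2 + b*CPA + c
    predicted = np.polyval(coeffs, cpa)
    r, p = pearsonr(cs, predicted)
    print("quadratic coefficients:", coeffs)
    print(f"Pearson r = {r:.3f} (p = {p:.3g})")
    ```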

  18. KINETIC ANALYSIS OF HIGH-NITROGEN ENERGETIC MATERIALS USING MULTIVARIATE NONLINEAR REGRESSION

    SciTech Connect

    Campbell, M. S.; Rabie, R. L.; Diaz-Acosta, I.; Pulay, P.

    2001-01-01

    New high-nitrogen energetic materials were synthesized by Hiskey and Naud. J. Opfermann reported a new tool for finding the probable model of complex reactions using multivariate non-linear regression analysis of DSC and TGA data from several measurements run at different heating rates. This study takes the kinetic parameters from the different steps and aims to discover which reaction step is responsible for the runaway reaction by comparing predicted results from the Frank-Kamenetskii equation with the critical temperature found experimentally using the modified Henkin test.

  19. Inference of dense spectral reflectance images from sparse reflectance measurement using non-linear regression modeling

    NASA Astrophysics Data System (ADS)

    Deglint, Jason; Kazemzadeh, Farnoud; Wong, Alexander; Clausi, David A.

    2015-09-01

    One method to acquire multispectral images is to sequentially capture a series of images where each image contains information from a different bandwidth of light. Another method is to use a series of beamsplitters and dichroic filters to guide different bandwidths of light onto different cameras. However, these methods are very time consuming and expensive and perform poorly in dynamic scenes or when observing transient phenomena. An alternative strategy to capturing multispectral data is to infer this data using sparse spectral reflectance measurements captured using an imaging device with overlapping bandpass filters, such as a consumer digital camera using a Bayer filter pattern. Currently the only method of inferring dense reflectance spectra is the Wiener adaptive filter, which makes Gaussian assumptions about the data. However, these assumptions may not always hold true for all data. We propose a new technique to infer dense reflectance spectra from sparse spectral measurements through the use of a non-linear regression model. The non-linear regression model used in this technique is the random forest model, which is an ensemble of decision trees and trained via the spectral characterization of the optical imaging system and spectral data pair generation. This model is then evaluated by spectrally characterizing different patches on the Macbeth color chart, as well as by reconstructing inferred multispectral images. Results show that the proposed technique can produce inferred dense reflectance spectra that correlate well with the true dense reflectance spectra, which illustrates the merits of the technique.
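
    A hedged sketch of the idea: a multi-output random forest trained to map sparse sensor responses back to dense spectra, with synthetic spectra and a synthetic three-channel sensor standing in for the spectral characterization data (scikit-learn assumed):

    ```python
    # Inferring a dense reflectance spectrum from sparse camera responses with a
    # random forest (sketch; synthetic spectra stand in for the characterization data).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(19)
    wavelengths = np.linspace(400, 700, 31)            # 10 nm spacing
    n_train = 500

    # Synthetic smooth reflectance spectra and a three-channel (Bayer-like) sensor model.
    centers = rng.uniform(450, 650, (n_train, 1))
    spectra = np.exp(-((wavelengths - centers) / 60.0) ** 2)
    sensor = np.exp(-((wavelengths[None, :] - np.array([[460], [530], [600]])) / 50.0) ** 2)
    responses = spectra @ sensor.T                     # sparse measurements (R, G, B)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(responses, spectra)                      # multi-output regression
    recovered = model.predict(responses[:1])           # inferred dense spectrum
    print("recovered spectrum shape:", recovered.shape)   # (1, 31)
    ```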

  20. Evaluation of absolute quantitation by nonlinear regression in probe-based real-time PCR

    PubMed Central

    Goll, Rasmus; Olsen, Trine; Cui, Guanglin; Florholmen, Jon

    2006-01-01

    Background In real-time PCR data analysis, the cycle threshold (CT) method is currently the gold standard. This method is based on an assumption of equal PCR efficiency in all reactions, and precision may suffer if this condition is not met. Nonlinear regression analysis (NLR) or curve fitting has therefore been suggested as an alternative to the cycle threshold method for absolute quantitation. The advantages of NLR are that the individual sample efficiency is simulated by the model and that absolute quantitation is possible without a standard curve, releasing reaction wells for unknown samples. However, the calculation method has not been evaluated systematically and has not previously been applied to a TaqMan platform. Aim: To develop and evaluate an automated NLR algorithm capable of generating batch production regression analysis. Results Total RNA samples extracted from human gastric mucosa were reverse transcribed and analysed for TNFA, IL18 and ACTB by TaqMan real-time PCR. Fluorescence data were analysed by the regular CT method with a standard curve, and by NLR with a positive control for conversion of fluorescence intensity to copy number, and for this purpose an automated algorithm was written in SPSS syntax. Eleven separate regression models were tested, and the output data was subjected to Altman-Bland analysis. The Altman-Bland analysis showed that the best regression model yielded quantitative data with an intra-assay variation of 58% vs. 24% for the CT derived copy numbers, and with a mean inter-method deviation of × 0.8. Conclusion NLR can be automated for batch production analysis, but the CT method is more precise for absolute quantitation in the present setting. The observed inter-method deviation is an indication that assessment of the fluorescence conversion factor used in the regression method can be improved. However, the versatility depends on the level of precision required, and in some settings the increased cost effectiveness of NLR
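
    NLR approaches of this kind typically fit a sigmoidal model to each amplification curve so that per-sample efficiency is captured by the fit; a sketch with a four-parameter logistic model and scipy's curve_fit (not the authors' SPSS implementation):

    ```python
    # Nonlinear regression of a real-time PCR amplification curve (sketch).
    import numpy as np
    from scipy.optimize import curve_fit

    def four_param_logistic(cycle, f0, fmax, c_half, slope):
        """Baseline fluorescence f0 rising sigmoidally to f0 + fmax."""
        return f0 + fmax / (1 + np.exp(-(cycle - c_half) / slope))

    rng = np.random.default_rng(14)
    cycles = np.arange(1, 41)
    truth = four_param_logistic(cycles, 0.05, 3.0, 24.0, 1.6)
    fluorescence = truth + 0.02 * rng.normal(size=cycles.size)

    popt, _ = curve_fit(four_param_logistic, cycles, fluorescence,
                        p0=[0.0, 2.0, 20.0, 2.0])
    print("fitted parameters (f0, fmax, c_half, slope):", popt)
    ```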

  1. Linear mixed-effect multivariate adaptive regression splines applied to nonlinear pharmacokinetics data.

    PubMed

    Gries, J M; Verotta, D

    2000-08-01

    In a frequently performed pharmacokinetics study, different subjects are given different doses of a drug. After each dose is given, drug concentrations are observed according to the same sampling design. The goal of the experiment is to obtain a representation for the pharmacokinetics of the drug, and to determine if drug concentrations observed at different times after a dose are linear in respect to dose. The goal of this paper is to obtain a representation for concentration as a function of time and dose, which (a) makes no assumptions on the underlying pharmacokinetics of the drug; (b) takes into account the repeated measure structure of the data; and (c) detects nonlinearities in respect to dose. To address (a) we use a multivariate adaptive regression splines representation (MARS), which we recast into a linear mixed-effects model, addressing (b). To detect nonlinearity we describe a general algorithm that obtains nested (mixed-effect) MARS representations. In the pharmacokinetics application, the algorithm obtains representations containing time, and time and dose, respectively, with the property that the bases functions of the first representation are a subset of the second. Standard statistical model selection criteria are used to select representations linear or nonlinear in respect to dose. The method can be applied to a variety of pharmacokinetics (and pharmacodynamic) preclinical and phase I-III trials. Examples of applications of the methodology to real and simulated data are reported.

  2. Nonlinear regression modeling of nutrient loads in streams: A Bayesian approach

    USGS Publications Warehouse

    Qian, S.S.; Reckhow, K.H.; Zhai, J.; McMahon, G.

    2005-01-01

    A Bayesian nonlinear regression modeling method is introduced and compared with the least squares method for modeling nutrient loads in stream networks. The objective of the study is to better model spatial correlation in river basin hydrology and land use for improving the model as a forecasting tool. The Bayesian modeling approach is introduced in three steps, each with a more complicated model and data error structure. The approach is illustrated using a data set from three large river basins in eastern North Carolina. Results indicate that the Bayesian model better accounts for model and data uncertainties than does the conventional least squares approach. Applications of the Bayesian models for ambient water quality standards compliance and TMDL assessment are discussed. Copyright 2005 by the American Geophysical Union.

  3. Frequency-domain nonlinear regression algorithm for spectral analysis of broadband SFG spectroscopy.

    PubMed

    He, Yuhan; Wang, Ying; Wang, Jingjing; Guo, Wei; Wang, Zhaohui

    2016-03-01

    The resonant spectral bands of broadband sum frequency generation (BB-SFG) spectra are often distorted by the nonresonant portion and the lineshapes of the laser pulses. A frequency-domain nonlinear regression (FDNLR) algorithm was proposed to retrieve the first-order polarization induced by the infrared pulse and to improve the analysis of SFG spectra through simultaneous fitting of a series of time-resolved BB-SFG spectra. The principle of FDNLR was presented, and its validity and reliability were tested by the analysis of virtual and measured SFG spectra. The relative phase, dephasing time, and lineshapes of the resonant vibrational SFG bands can be retrieved without any preset assumptions about the SFG bands and the incident laser pulses. PMID:26974068

  4. Estimate error of frequency-dependent Q introduced by linear regression and its nonlinear implementation

    NASA Astrophysics Data System (ADS)

    Li, Guofa; Huang, Wei; Zheng, Hao; Zhang, Baoqing

    2016-02-01

    The spectral ratio method (SRM) is widely used to estimate the quality factor Q via linear regression of seismic attenuation under the assumption of a constant Q. However, an estimate error is introduced when this assumption is violated. For a frequency-dependent Q described by a power-law function, we derived the analytical expression of the estimate error as a function of the power-law exponent γ and the ratio of the bandwidth to the central frequency σ. Based on the theoretical analysis, we found that the estimate errors are mainly dominated by the exponent γ, and are less affected by the ratio σ. This implies that the accuracy of the Q estimate can hardly be improved by adjusting the width and range of the frequency band. Hence, we proposed a two-parameter regression method to estimate the frequency-dependent Q from nonlinear seismic attenuation. The proposed method was tested using the direct waves acquired by a near-surface cross-hole survey, and its reliability was evaluated in comparison with the result of the SRM.
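
    For a constant Q the spectral ratio method reduces to a straight-line fit of the log spectral ratio against frequency, ln[A2(f)/A1(f)] = const - pi*f*dt/Q; a short numpy sketch of that baseline estimate (the paper's two-parameter extension for frequency-dependent Q is not reproduced):

    ```python
    # Constant-Q estimate via the spectral ratio method (sketch).
    import numpy as np

    freqs = np.linspace(10, 80, 50)                  # usable frequency band, Hz
    true_q, dt = 60.0, 0.1                           # quality factor, travel-time difference (s)

    # Synthetic log spectral ratio between two receivers, plus noise.
    rng = np.random.default_rng(15)
    log_ratio = 0.3 - np.pi * freqs * dt / true_q + 0.02 * rng.normal(size=freqs.size)

    slope, intercept = np.polyfit(freqs, log_ratio, 1)
    q_est = -np.pi * dt / slope                      # slope = -pi * dt / Q
    print(f"estimated Q = {q_est:.1f}")
    ```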

  5. A nonlinear regression approach to test for size-dependence of competitive ability.

    PubMed

    Lamb, Eric G; Cahill, James F; Dale, Mark R T

    2006-06-01

    An individual's competitive ability is often dependent on its size, but the methods commonly used to analyze plant competition experiments generally assume that the outcomes of interactions are size independent. A method for the analysis of experiments with paired competition treatments, based on nonlinear regression with a power function, is presented. This method allows straightforward tests of whether a competitive interaction is size dependent and of the significance of experimental treatments. The method is applied to three example data sets: (1) an experiment where pairs of plants were grown with and without competition at five fertilization levels, (2) an experiment where the fecundity of two snail species was compared between environments at two densities, and (3) an addition series experiment where two plant species were grown in proportional mixtures at several densities. Competitive ability was size dependent in two of these examples, which demonstrates that a wide range of ecologically important information can be lost when size dependence is ignored. Regression with a power curve should always be used to test whether competitive interactions are size independent, and for the further analysis of size-dependent interactions. PMID:16869420
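
    The power-function regression at the heart of the method can be sketched directly: fit, for example, size_with_competition = a * size_alone^b and ask whether the exponent differs from 1 (size independence); scipy assumed, synthetic data:

    ```python
    # Test for size-dependent competitive ability with a power-function fit (sketch).
    import numpy as np
    from scipy.optimize import curve_fit

    def power_fn(size_alone, a, b):
        return a * size_alone ** b

    rng = np.random.default_rng(16)
    size_alone = rng.uniform(1, 20, 80)                       # target size without neighbours
    size_comp = 0.6 * size_alone ** 0.8 * np.exp(0.05 * rng.normal(size=80))

    popt, pcov = curve_fit(power_fn, size_alone, size_comp, p0=[1.0, 1.0])
    b, se_b = popt[1], np.sqrt(pcov[1, 1])
    print(f"exponent b = {b:.2f} +/- {se_b:.2f}")
    # An exponent clearly different from 1 indicates size-dependent competition.
    ```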

  6. A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression

    PubMed Central

    Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin

    2012-01-01

To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on a nonlinear least-squares support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time, and the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. Analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves a near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, compared to other recently proposed robust beamforming techniques.

  7. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws

    USGS Publications Warehouse

    Xiao, X.; White, E.P.; Hooten, M.B.; Durham, S.L.

    2011-01-01

Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain. © 2011 by the Ecological Society of America.
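
    A compact Monte Carlo sketch of the comparison described above (illustrative only, not the authors' code): the same power law is simulated under multiplicative lognormal error and under additive normal error, and each data set is fitted both by linear regression on log-transformed values and by nonlinear regression.

```python
# Simulate y = a*x**b under two error structures and fit each data set by
# LR on logs and by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

rng = np.random.default_rng(1)
a_true, b_true = 2.0, 0.75
x = rng.uniform(10, 100, 200)

def fit_both(y):
    slope, intercept, *_ = linregress(np.log(x), np.log(y))      # LR on logs
    (a_nlr, b_nlr), _ = curve_fit(lambda x, a, b: a * x**b, x, y,
                                  p0=(1.0, 1.0))                 # NLR
    return (np.exp(intercept), slope), (a_nlr, b_nlr)

y_mult = a_true * x**b_true * rng.lognormal(0.0, 0.3, x.size)    # lognormal error
y_add = a_true * x**b_true + rng.normal(0.0, 2.0, x.size)        # additive error
print(fit_both(y_mult))   # LR on logs should recover b more reliably here
print(fit_both(y_add))    # NLR should do better here
```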

  8. Non-linear regression model for spatial variation in precipitation chemistry for South India

    NASA Astrophysics Data System (ADS)

    Siva Soumya, B.; Sekhar, M.; Riotte, J.; Braun, Jean-Jacques

Chemical composition of rainwater changes from sea to inland under the influence of several major factors: the topographic location of the area, its distance from the sea, and the annual rainfall. A model is developed here to quantify the variation in precipitation chemistry under the influence of inland distance and rainfall amount. Various sites in India categorized as 'urban', 'suburban' and 'rural' have been considered for model development. pH, HCO3, NO3 and Mg do not change much from coast to inland, while changes in SO4 and Ca are subject to local emissions. Cl and Na originate solely from sea salinity and are the chemistry parameters in the model. Non-linear multiple regressions performed for the various categories revealed that both rainfall amount and precipitation chemistry obey a power-law reduction with distance from the sea. Cl and Na decrease rapidly over the first 100 km from the sea, decrease marginally over the next 100 km, and later stabilize. Regression parameters estimated for the different cases were found to be consistent (R2 ≈ 0.8), and variation in one of the parameters accounted for urbanization. The model was validated using data points from the southern peninsular region of the country, and the estimates were found to lie within the 99.9% confidence interval. Finally, this relationship between the three parameters - rainfall amount, coastline distance, and concentration (in terms of Cl and Na) - was validated with experiments conducted in a small experimental watershed in south-west India. Chemistry estimated using the model correlated well with observed values, with a relative error of about 5%. Monthly variation in the chemistry was predicted from a downscaling model and compared with the observed data. Hence, the model developed for rain chemistry is useful for estimating concentrations at different spatio-temporal scales and is especially applicable to the south-west region of India.

  9. A Bayesian Nonlinear Mixed-Effects Regression Model for the Characterization of Early Bactericidal Activity of Tuberculosis Drugs

    PubMed Central

    Burger, Divan Aristo; Schall, Robert

    2015-01-01

    Trials of the early bactericidal activity (EBA) of tuberculosis (TB) treatments assess the decline, during the first few days to weeks of treatment, in colony forming unit (CFU) count of Mycobacterium tuberculosis in the sputum of patients with smear-microscopy-positive pulmonary TB. Profiles over time of CFU data have conventionally been modeled using linear, bilinear, or bi-exponential regression. We propose a new biphasic nonlinear regression model for CFU data that comprises linear and bilinear regression models as special cases and is more flexible than bi-exponential regression models. A Bayesian nonlinear mixed-effects (NLME) regression model is fitted jointly to the data of all patients from a trial, and statistical inference about the mean EBA of TB treatments is based on the Bayesian NLME regression model. The posterior predictive distribution of relevant slope parameters of the Bayesian NLME regression model provides insight into the nature of the EBA of TB treatments; specifically, the posterior predictive distribution allows one to judge whether treatments are associated with monolinear or bilinear decline of log(CFU) count, and whether CFU count initially decreases fast, followed by a slower rate of decrease, or vice versa. PMID:25322214

  10. Using Recursive Regression to Explore Nonlinear Relationships and Interactions: A Tutorial Applied to a Multicultural Education Study

    ERIC Educational Resources Information Center

    Strang, Kenneth David

    2009-01-01

This paper discusses how a seldom-used statistical procedure, recursive regression (RR), can numerically and graphically illustrate data-driven nonlinear relationships and interactions of variables. This routine falls into the family of exploratory techniques, yet a few interesting features make it a valuable complement to factor analysis and…

  11. Statistically Differentiating between Interaction and Nonlinearity in Multiple Regression Analysis: A Monte Carlo Investigation of a Recommended Strategy.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Foster-Johnson, Lynn

    1999-01-01

    Shows that the procedure recommended by D. Lubinski and L. Humphreys (1990) for differentiating between moderated and nonlinear regression models evidences statistical problems characteristic of stepwise procedures. Interprets Monte Carlo results in terms of the researchers' need to differentiate between exploratory and confirmatory aspects of…

  12. A comparative study between nonlinear regression and artificial neural network approaches for modelling wild oat (Avena fatua) field emergence

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...

  13. A comparative study between non-linear regression and non-parametric approaches for modelling Phalaris paradoxa seedling emergence

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...

  14. A fast nonlinear regression method for estimating permeability in CT perfusion imaging

    PubMed Central

    Bennink, Edwin; Riordan, Alan J; Horsch, Alexander D; Dankbaar, Jan Willem; Velthuis, Birgitta K; de Jong, Hugo W

    2013-01-01

    Blood–brain barrier damage, which can be quantified by measuring vascular permeability, is a potential predictor for hemorrhagic transformation in acute ischemic stroke. Permeability is commonly estimated by applying Patlak analysis to computed tomography (CT) perfusion data, but this method lacks precision. Applying more elaborate kinetic models by means of nonlinear regression (NLR) may improve precision, but is more time consuming and therefore less appropriate in an acute stroke setting. We propose a simplified NLR method that may be faster and still precise enough for clinical use. The aim of this study is to evaluate the reliability of in total 12 variations of Patlak analysis and NLR methods, including the simplified NLR method. Confidence intervals for the permeability estimates were evaluated using simulated CT attenuation–time curves with realistic noise, and clinical data from 20 patients. Although fixating the blood volume improved Patlak analysis, the NLR methods yielded significantly more reliable estimates, but took up to 12 × longer to calculate. The simplified NLR method was ∼4 × faster than other NLR methods, while maintaining the same confidence intervals (CIs). In conclusion, the simplified NLR method is a new, reliable way to estimate permeability in stroke, fast enough for clinical application in an acute stroke setting. PMID:23881247
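
    For context, the sketch below shows only the standard Patlak step that the paper compares against (the simplified NLR model itself is not reproduced here); the permeability-related parameter K is the slope of the Patlak plot built from the arterial and tissue attenuation-time curves, and the variable names are illustrative.

```python
# Illustrative Patlak analysis of CT perfusion curves (not the paper's code).
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.stats import linregress

def patlak_permeability(t, c_art, c_tis, fit_from=None):
    """t in seconds; c_art/c_tis in HU above baseline. fit_from selects the
    late (steady-state) part of the curve where Patlak's model is valid and
    the arterial curve is well above zero."""
    x = cumulative_trapezoid(c_art, t, initial=0.0) / c_art
    y = c_tis / c_art
    sel = slice(fit_from, None)
    slope, intercept, *_ = linregress(x[sel], y[sel])
    return slope, intercept   # K (permeability-related) and a blood-volume-like intercept
```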

  15. Growth curve by Gompertz nonlinear regression model in female and males in tambaqui (Colossoma macropomum).

    PubMed

    De Mello, Fernanda; Oliveira, Carlos A L; Ribeiro, Ricardo P; Resende, Emiko K; Povh, Jayme A; Fornari, Darci C; Barreto, Rogério V; McManus, Concepta; Streit, Danilo

    2015-01-01

The growth pattern of female and male tambaqui was evaluated using the Gompertz nonlinear regression model. Five traits of economic importance were measured on 145 animals over three years, totaling 981 morphometric records. Different curves were fitted for males and females for body weight, height and head length, while a single curve was fitted for body width and body length. The asymptotic weight (a) and the relative growth rate to maturity (k) differed between sexes in animals of about 5 kg, the slaughter weight demanded by a specific and very profitable niche market. However, there was no difference between males and females up to about 2 kg, the slaughter weight established to supply the larger consumer market. Females reached greater weights than males (by about 280 g) and are therefore more suitable for fish farming aimed at the niche market for larger animals. In general, males had a lower maximum growth rate (8.66 g/day) than females (9.34 g/day) but reached it earlier, at 476 versus 486 days. Body height and length were the traits that contributed most to weight at 516 days (P < 0.001). PMID:26628036
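
    A brief sketch of the core fit (one common Gompertz parameterization is assumed; the paper may use an equivalent form): the asymptotic weight, the age at the inflection point and the maximum growth rate follow directly from the fitted parameters.

```python
# Gompertz growth curve W(t) = a*exp(-b*exp(-k*t)) fitted by nonlinear
# least squares; starting values are illustrative guesses.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    # a: asymptotic weight, k: relative growth rate to maturity
    return a * np.exp(-b * np.exp(-k * t))

def fit_growth(age_days, weight_g):
    popt, _ = curve_fit(gompertz, age_days, weight_g,
                        p0=(weight_g.max() * 1.2, 3.0, 0.005), maxfev=10000)
    a, b, k = popt
    t_inflection = np.log(b) / k      # age at maximum growth rate
    max_rate = a * k / np.e           # maximum growth rate (g/day)
    return a, b, k, t_inflection, max_rate
```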

  16. Nonlinear regression and ARIMA models for precipitation chemistry in East Central Florida from 1978 to 1997.

    PubMed

    Nickerson, David M; Madsen, Brooks C

    2005-06-01

    Continuous monitoring of precipitation in East Central Florida has occurred since 1978 at a sampling site located on the University of Central Florida (UCF) campus. Monthly volume-weighted average (VWA) concentration for several major analytes that are present in precipitation samples was calculated from samples collected daily. Monthly VWA concentration and wet deposition of H(+), NH(4)(+), Ca(2+), Mg(2+), NO(3)(-), Cl(-) and SO(4)(2-) were evaluated by a nonlinear regression (NLR) model that considered 10-year data (from 1978 to 1987) and 20-year data (from 1978 to 1997). Little change in the NLR parameter estimates was indicated among the 10-year and 20-year evaluations except for general decreases in the predicted trends from the 10-year to the 20-year fits. Box-Jenkins autoregressive integrated moving average (ARIMA) models with linear trend were considered as an alternative to the NLR models for these data. The NLR and ARIMA model forecasts for 1998 were compared to the actual 1998 data. For monthly VWA concentration values, the two models gave similar results. For the wet deposition values, the ARIMA models performed considerably better.

  18. Trace analysis of acids and bases by conductometric titration with multiparametric non-linear regression.

    PubMed

    Coelho, Lúcia H G; Gutz, Ivano G R

    2006-03-15

A chemometric method for the analysis of conductometric titration data was introduced to extend its applicability to lower concentrations and more complex acid-base systems. Auxiliary pH measurements were made during the titration to assist the calculation of the distribution of protonatable species on the basis of known or guessed equilibrium constants. Conductivity values of each ionized or ionizable species possibly present in the sample were introduced into a general equation in which the only unknown parameters were the total concentrations of (conjugate) bases and of strong electrolytes not involved in acid-base equilibria. All these concentrations were adjusted by a multiparametric nonlinear regression (NLR) method based on the Levenberg-Marquardt algorithm. This first conductometric titration method with NLR analysis (CT-NLR) was successfully applied to simulated conductometric titration data and to synthetic samples with multiple components at concentrations as low as those found in rainwater (approximately 10 micromol L(-1)). It was possible to resolve and quantify mixtures containing a strong acid, formic acid, acetic acid, ammonium ion, bicarbonate and an inert electrolyte with an accuracy of 5% or better.

  19. Variable Selection for Sparse High-Dimensional Nonlinear Regression Models by Combining Nonnegative Garrote and Sure Independence Screening

    PubMed Central

    Xue, Hongqi; Wu, Yichao; Wu, Hulin

    2013-01-01

    In many regression problems, the relations between the covariates and the response may be nonlinear. Motivated by the application of reconstructing a gene regulatory network, we consider a sparse high-dimensional additive model with the additive components being some known nonlinear functions with unknown parameters. To identify the subset of important covariates, we propose a new method for simultaneous variable selection and parameter estimation by iteratively combining a large-scale variable screening (the nonlinear independence screening, NLIS) and a moderate-scale model selection (the nonnegative garrote, NNG) for the nonlinear additive regressions. We have shown that the NLIS procedure possesses the sure screening property and it is able to handle problems with non-polynomial dimensionality; and for finite dimension problems, the NNG for the nonlinear additive regressions has selection consistency for the unimportant covariates and also estimation consistency for the parameter estimates of the important covariates. The proposed method is applied to simulated data and a real data example for identifying gene regulations to illustrate its numerical performance. PMID:25170239

  20. The mechanical properties of high speed GTAW weld and factors of nonlinear multiple regression model under external transverse magnetic field

    NASA Astrophysics Data System (ADS)

    Lu, Lin; Chang, Yunlong; Li, Yingmin; He, Youyou

    2013-05-01

A transverse magnetic field was applied to the arc plasma during high-speed tungsten inert gas (TIG) welding of stainless steel tubes without filler wire, and the influence of the external magnetic field on welding quality was investigated. Nine parameter sets were designed by means of an orthogonal experiment. The tensile strength of the welded joint and the form factor of the weld were taken as the main measures of welding quality. A binary quadratic nonlinear regression equation was established with magnetic induction and Ar gas flow rate as the independent variables, and the residual standard deviation was calculated to assess the accuracy of the regression model. The results showed that the regression model was correct and effective in predicting the tensile strength and aspect ratio of the weld. Two 3D regression models were then constructed, and the influence of magnetic induction on welding quality was examined.

  1. Comparison of Linear and Non-Linear Regression Models to Estimate Leaf Area Index of Dryland Shrubs.

    NASA Astrophysics Data System (ADS)

    Dashti, H.; Glenn, N. F.; Ilangakoon, N. T.; Mitchell, J.; Dhakal, S.; Spaete, L.

    2015-12-01

Leaf area index (LAI) is a key parameter in global ecosystem studies. LAI is considered a forcing variable in land surface process models since ecosystem dynamics are highly correlated with LAI. In response to environmental limitations, plants in semiarid ecosystems have smaller leaf area, making accurate estimation of LAI by remote sensing a challenging issue. Optical remote sensing (400-2500 nm) techniques to estimate LAI are based either on radiative transfer models (RTMs) or on statistical approaches. Considering the complex radiation field of dry ecosystems, simple 1-D RTMs lead to poor results, while inversion of more complex 3-D RTMs is a demanding task that requires the specification of many variables. A good alternative to physical approaches is to use statistical methods. As with many natural phenomena, there is a non-linear relationship between LAI and the top-of-canopy electromagnetic radiation reflected to optical sensors, and non-linear regression models can better capture this relationship. Given the small number of observations relative to the dimension of the feature space, both linear and non-linear regression techniques were investigated to estimate LAI. Our study area is located in southwestern Idaho, in the Great Basin. Sagebrush (Artemisia tridentata spp.) serves a critical role in maintaining the structure of this ecosystem. Using a leaf area meter (Accupar LP-80), LAI values were measured in the field. Linear partial least squares (PLS) regression and non-linear, tree-based random forest regression were implemented to estimate the LAI of sagebrush from hyperspectral data (AVIRIS-ng) collected in late summer 2014. Cross-validation of the results indicates that PLS can provide results comparable to random forest.
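
    A minimal sketch of such a linear-vs-nonlinear comparison (variable names and hyper-parameters are illustrative, not taken from the study): PLS regression and a random forest are scored by cross-validated R² on the same reflectance matrix.

```python
# Compare a linear PLS model with a non-linear random forest for LAI
# prediction from hyperspectral reflectance.
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def compare_models(X_reflectance, y_lai):
    pls = PLSRegression(n_components=10)
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    pls_r2 = cross_val_score(pls, X_reflectance, y_lai, cv=5, scoring="r2")
    rf_r2 = cross_val_score(rf, X_reflectance, y_lai, cv=5, scoring="r2")
    return pls_r2.mean(), rf_r2.mean()
```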

  2. Nonlinear regression in environmental sciences by support vector machines combined with evolutionary strategy

    NASA Astrophysics Data System (ADS)

    Lima, Aranildo R.; Cannon, Alex J.; Hsieh, William W.

    2013-01-01

    A hybrid algorithm combining support vector regression with evolutionary strategy (SVR-ES) is proposed for predictive models in the environmental sciences. SVR-ES uses uncorrelated mutation with p step sizes to find the optimal SVR hyper-parameters. Three environmental forecast datasets used in the WCCI-2006 contest - surface air temperature, precipitation and sulphur dioxide concentration - were tested. We used multiple linear regression (MLR) as benchmark and a variety of machine learning techniques including bootstrap-aggregated ensemble artificial neural network (ANN), SVR-ES, SVR with hyper-parameters given by the Cherkassky-Ma estimate, the M5 regression tree, and random forest (RF). We also tested all techniques using stepwise linear regression (SLR) first to screen out irrelevant predictors. We concluded that SVR-ES is an attractive approach because it tends to outperform the other techniques and can also be implemented in an almost automatic way. The Cherkassky-Ma estimate is a useful approach for minimizing the mean absolute error and saving computational time related to the hyper-parameter search. The ANN and RF are also good options to outperform multiple linear regression (MLR). Finally, the use of SLR for predictor selection can dramatically reduce computational time and often help to enhance accuracy.
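
    The sketch below conveys the general idea under a substitution: SciPy's differential evolution stands in for the authors' evolutionary strategy (uncorrelated mutation with p step sizes), tuning the SVR hyper-parameters C, gamma and epsilon by cross-validated mean absolute error; bounds and settings are illustrative.

```python
# Evolutionary tuning of SVR hyper-parameters (differential evolution used
# here as a stand-in for the paper's evolutionary strategy).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def tune_svr(X, y):
    def neg_score(log_params):
        c, gamma, eps = np.exp(log_params)        # search in log space
        svr = SVR(C=c, gamma=gamma, epsilon=eps)
        mae = -cross_val_score(svr, X, y, cv=5,
                               scoring="neg_mean_absolute_error").mean()
        return mae

    bounds = [(-2, 6), (-8, 2), (-6, 1)]          # log-space bounds
    result = differential_evolution(neg_score, bounds, maxiter=20, seed=0)
    return np.exp(result.x), result.fun
```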

  3. Application of nonlinear-regression methods to a ground-water flow model of the Albuquerque Basin, New Mexico

    USGS Publications Warehouse

    Tiedeman, C.R.; Kernodle, J.M.; McAda, D.P.

    1998-01-01

    This report documents the application of nonlinear-regression methods to a numerical model of ground-water flow in the Albuquerque Basin, New Mexico. In the Albuquerque Basin, ground water is the primary source for most water uses. Ground-water withdrawal has steadily increased since the 1940's, resulting in large declines in water levels in the Albuquerque area. A ground-water flow model was developed in 1994 and revised and updated in 1995 for the purpose of managing basin ground- water resources. In the work presented here, nonlinear-regression methods were applied to a modified version of the previous flow model. Goals of this work were to use regression methods to calibrate the model with each of six different configurations of the basin subsurface and to assess and compare optimal parameter estimates, model fit, and model error among the resulting calibrations. The Albuquerque Basin is one in a series of north trending structural basins within the Rio Grande Rift, a region of Cenozoic crustal extension. Mountains, uplifts, and fault zones bound the basin, and rock units within the basin include pre-Santa Fe Group deposits, Tertiary Santa Fe Group basin fill, and post-Santa Fe Group volcanics and sediments. The Santa Fe Group is greater than 14,000 feet (ft) thick in the central part of the basin. During deposition of the Santa Fe Group, crustal extension resulted in development of north trending normal faults with vertical displacements of as much as 30,000 ft. Ground-water flow in the Albuquerque Basin occurs primarily in the Santa Fe Group and post-Santa Fe Group deposits. Water flows between the ground-water system and surface-water bodies in the inner valley of the basin, where the Rio Grande, a network of interconnected canals and drains, and Cochiti Reservoir are located. Recharge to the ground-water flow system occurs as infiltration of precipitation along mountain fronts and infiltration of stream water along tributaries to the Rio Grande; subsurface

  4. Creating a non-linear total sediment load formula using polynomial best subset regression model

    NASA Astrophysics Data System (ADS)

    Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali

    2016-08-01

The aim of this study is to derive a new total sediment load formula that is more accurate and has fewer application constraints than the well-known formulae in the literature. The five best-known stream-power-concept sediment formulae approved by ASCE are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called polynomial best subset regression (PBSR) analysis. The aim of the PBSR analysis is to fit and test all possible combinations of the input variables and to select the best subset. All input variables, together with their second and third powers, are included in the regression to test possible relations between the explanatory variables and the dependent variable. The best subset is selected with a multistep approach that depends on significance values and on the multicollinearity of the inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab subsets of this holdout data. Several goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After the detailed comparisons, the most accurate equation, which is applicable to both flume and river data, was identified. In particular, on the field dataset the proposed formula outperformed the benchmark formulations.
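
    A minimal sketch of a polynomial best-subset search in the spirit described above (illustrative only; the paper's multistep selection also uses significance and multicollinearity checks): each subset of the inputs and their second and third powers is fitted by ordinary least squares and ranked by adjusted R².

```python
# Exhaustive best-subset search over inputs and their powers (small feature
# counts only; the search grows combinatorially with max_terms).
import itertools
import numpy as np

def best_subset(X, y, max_terms=4):
    n = len(y)
    terms = np.column_stack([X, X**2, X**3])          # inputs plus powers
    best = (-np.inf, None, None)
    for k in range(1, max_terms + 1):
        for cols in itertools.combinations(range(terms.shape[1]), k):
            A = np.column_stack([np.ones(n), terms[:, cols]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
            adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
            if adj_r2 > best[0]:
                best = (adj_r2, cols, coef)
    return best   # (adjusted R^2, selected columns, OLS coefficients)
```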

  5. Bayesian regression analysis of data with random effects covariates from nonlinear longitudinal measurements

    PubMed Central

    De la Cruz, Rolando; Meza, Cristian; Arribas-Gil, Ana; Carroll, Raymond J.

    2016-01-01

Joint models for a wide class of response variables and longitudinal measurements consist of a mixed-effects model to fit longitudinal trajectories whose random effects enter as covariates in a generalized linear model for the primary response. They provide a useful way to assess the association between these two kinds of data, which in clinical studies are often collected jointly on a series of individuals and may help in understanding, for instance, the mechanisms of recovery of a certain disease or the efficacy of a given therapy. When a nonlinear mixed-effects model is used to fit the longitudinal trajectories, the existing estimation strategies based on likelihood approximations have been shown to exhibit some computational efficiency problems (De la Cruz et al., 2011). In this article we consider a Bayesian estimation procedure for the joint model with a nonlinear mixed-effects model for the longitudinal data and a generalized linear model for the primary response. The proposed prior structure allows for the implementation of an MCMC sampler. Moreover, we consider that the errors in the longitudinal model may be correlated. We apply our method to the analysis of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. We also conduct a simulation study to assess the importance of modelling correlated errors and quantify the consequences of model misspecification. PMID:27274601

  6. Non-linear Regression and Machine Learning for Streamflow Prediction and Climate Change Impact Analysis

    NASA Astrophysics Data System (ADS)

    Shortridge, J.; Guikema, S.; Zaitchik, B. F.

    2015-12-01

In the past decade, machine-learning methods for empirical rainfall-runoff modeling have seen extensive development. However, the majority of research has focused on a small number of methods, such as artificial neural networks (ANNs), while not considering other approaches for non-parametric regression that have been developed in recent years. These methods may be able to achieve predictive accuracy comparable to ANNs and more easily provide physical insights into the system of interest through evaluation of covariate influence. Additionally, these methods could provide a straightforward, computationally efficient way of evaluating climate change impacts in basins where data to support physical hydrologic models are limited. In this paper, we use multiple regression and machine-learning approaches to predict monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia. We find that generalized additive models, random forests, and cubist models achieve better predictive accuracy than ANNs in many of the basins assessed and are also able to outperform physical models developed for the same region. We discuss some challenges that could hinder the use of such models for climate impact assessment, such as biases resulting from model formulation and prediction under extreme climate conditions, and suggest methods for preventing and addressing these challenges. Finally, we demonstrate how predictor variable influence can be assessed to provide insights into the physical functioning of data-sparse watersheds.

  7. Application of nonlinear regression in the development of a wide range formulation for HCFC-22

    NASA Astrophysics Data System (ADS)

    Kamei, A.; Beyerlein, S. W.; Jacobsen, R. T.

    1995-09-01

    An equation of state has been developed for HCFC-22 for temperatures from the triple point (115.73 K) to 550 K, at pressures up to 60 MPa. Based on comparisons between experimental data and calculated properties, the accuracy of the wide-range equation of state is ±0.1% in density, ±0.3% in speed of sound, and ±1.0% in isobaric heat capacity, except in the critical region. Nonlinear fitting techniques were used to fit a liquid equation of state based on P-ρ-T, speed of sound, and isobaric heat capacity data. Properties calculated from the liquid equation of state were then used to expand the range of validity of the wide range equation of state for HCFC-22.

  8. An Efficient Nonlinear Regression Approach for Genome-wide Detection of Marginal and Interacting Genetic Variations.

    PubMed

    Lee, Seunghak; Lozano, Aurélie; Kambadur, Prabhanjan; Xing, Eric P

    2016-05-01

Genome-wide association studies have revealed individual genetic variants associated with phenotypic traits such as disease risk and gene expressions. However, detecting pairwise interaction effects of genetic variants on traits still remains a challenge due to a large number of combinations of variants (∼10^11 SNP pairs in the human genome), and relatively small sample sizes (typically <10^4). Despite recent breakthroughs in detecting interaction effects, there are still several open problems, including: (1) how to quickly process a large number of SNP pairs, (2) how to distinguish between true signals and SNPs/SNP pairs merely correlated with true signals, (3) how to detect nonlinear associations between SNP pairs and traits given small sample sizes, and (4) how to control false positives. In this article, we present a unified framework, called SPHINX, which addresses the aforementioned challenges. We first propose a piecewise linear model for interaction detection, because it is simple enough to estimate model parameters given small sample sizes but complex enough to capture nonlinear interaction effects. Then, based on the piecewise linear model, we introduce randomized group lasso under stability selection, and a screening algorithm to address the statistical and computational challenges mentioned above. In our experiments, we first demonstrate that SPHINX achieves better power than existing methods for interaction detection under false positive control. We further applied SPHINX to late-onset Alzheimer's disease dataset, and report 16 SNPs and 17 SNP pairs associated with gene traits. We also present a highly scalable implementation of our screening algorithm, which can screen ∼118 billion candidates of associations on a 60-node cluster in <5.5 hours. PMID:27159633

  9. Determination of the pKa of ionizable enzyme groups by nonlinear regression using a second degree equation.

    PubMed

    O'Reilly, S; Riveros, M C

    1994-01-01

    A second degree equation fitted by nonlinear regression for the analysis of the pH effect on enzyme activity is proposed for diprotic enzyme systems. This method allows the calculation of two molecular dissociation constants (KE1 and KE2 for the free enzyme, KES1 and KES2 for the ES complex) and the pH independent parameters (Vmax and Vmax/Km). The method is validated by bibliographic (alpha-chymotrypsin) and experimental data (almond beta-D-glucosidase). No significant differences were found between present data and those previously reported in the literature using similar experimental conditions. This method works using comparatively few [H+] concentration values within a narrow pH range, preferentially around the optimum, being adequate for diprotic systems with close pKa values. PMID:8728828
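
    For illustration, the sketch below fits the familiar bell-shaped diprotic pH-activity model by nonlinear regression and reports the two molecular pKa values with approximate standard errors; this is a related standard formulation, not necessarily the paper's exact second-degree equation.

```python
# Diprotic pH-activity model v = Vmax / (1 + [H]/K1 + K2/[H]) fitted by
# nonlinear least squares (pH and activity are assumed to be numpy arrays).
import numpy as np
from scipy.optimize import curve_fit

def diprotic_activity(pH, vmax, pk1, pk2):
    h = 10.0**(-pH)
    k1, k2 = 10.0**(-pk1), 10.0**(-pk2)
    return vmax / (1.0 + h / k1 + k2 / h)

def fit_pkas(pH, activity):
    pH_opt = pH[np.argmax(activity)]                  # rough optimum pH
    popt, pcov = curve_fit(diprotic_activity, pH, activity,
                           p0=(activity.max(), pH_opt - 1.0, pH_opt + 1.0))
    return popt, np.sqrt(np.diag(pcov))               # estimates and std errors
```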

  11. Determination of Constitutive Equation for Thermo-mechanical Processing of INCONEL 718 Through Double Multivariate Nonlinear Regression Analysis

    NASA Astrophysics Data System (ADS)

    Hussain, Mirza Zahid; Li, Fuguo; Wang, Jing; Yuan, Zhanwei; Li, Pan; Wu, Tao

    2015-07-01

The present study comprises the determination of a constitutive relationship for thermo-mechanical processing of INCONEL 718 through double multivariate nonlinear regression, a newly developed approach which not only considers the effects of strain, strain rate, and temperature on flow stress but also accounts for the interaction effects of these thermo-mechanical parameters on the flow behavior of the alloy. Hot isothermal compression experiments were performed on a Gleeble-3500 thermo-mechanical testing machine in the temperature range of 1153 to 1333 K and the strain rate range of 0.001 to 10 s^-1. The deformation behavior of INCONEL 718 is analyzed and summarized by establishing a high-temperature deformation constitutive equation. The calculated correlation coefficient (R) and average absolute relative error (AARE) underline the precision of the proposed constitutive model.

  12. Deconvolution of antibody affinities and concentrations by non-linear regression analysis of competitive ELISA data.

    SciTech Connect

    Stevens, F. J.; Bobrovnik, S. A.; Biosciences Division; Palladin Inst. Biochemistry

    2007-12-01

    Physiological responses of the adaptive immune system are polyclonal in nature whether induced by a naturally occurring infection, by vaccination to prevent infection or, in the case of animals, by challenge with antigen to generate reagents of research or commercial significance. The composition of the polyclonal responses is distinct to each individual or animal and changes over time. Differences exist in the affinities of the constituents and their relative proportion of the responsive population. In addition, some of the antibodies bind to different sites on the antigen, whereas other pairs of antibodies are sterically restricted from concurrent interaction with the antigen. Even if generation of a monoclonal antibody is the ultimate goal of a project, the quality of the resulting reagent is ultimately related to the characteristics of the initial immune response. It is probably impossible to quantitatively parse the composition of a polyclonal response to antigen. However, molecular regression allows further parameterization of a polyclonal antiserum in the context of certain simplifying assumptions. The antiserum is described as consisting of two competing populations of high- and low-affinity and unknown relative proportions. This simple model allows the quantitative determination of representative affinities and proportions. These parameters may be of use in evaluating responses to vaccines, to evaluating continuity of antibody production whether in vaccine recipients or animals used for the production of antisera, or in optimizing selection of donors for the production of monoclonal antibodies.

  13. Inference of nonlinear gene regulatory networks through optimized ensemble of support vector regression and dynamic Bayesian networks.

    PubMed

    Akutekwe, Arinze; Seker, Huseyin

    2015-08-01

Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in systems biology. Most methods for modeling and inferring the dynamics of GRNs, such as those based on state space models, vector autoregressive models and the G1DBN algorithm, assume linear dependencies among genes. However, this strong assumption does not give a true representation of the time-course relationships across genes, which are inherently nonlinear. Nonlinear modeling methods such as the S-systems and causal structure identification (CSI) have been proposed, but are known to be statistically inefficient and analytically intractable in high dimensions. To overcome these limitations, we propose an optimized ensemble approach based on support vector regression (SVR) and dynamic Bayesian networks (DBNs). The method, called SVR-DBN, uses nonlinear kernels of the SVR to infer the temporal relationships among genes within the DBN framework. The two-stage ensemble is further improved by SVR parameter optimization using particle swarm optimization. Results on eight in silico-generated datasets and two real-world datasets of Drosophila melanogaster and Escherichia coli show that our method outperformed the G1DBN algorithm by a total average accuracy of 12%. We further applied our method to model the time-course relationships of ovarian carcinoma. From our results, four hub genes were discovered. Stratified analysis further showed that the expression levels of the prostate differentiation factor and BTG family member 2 genes were significantly increased by the cisplatin and oxaliplatin platinum drugs, while the expression levels of the Polo-like kinase and Cyclin B1 genes were both decreased by the platinum drugs. These hub genes might be potential biomarkers for ovarian carcinoma. PMID:26738192

  14. Inverse Tasks In The Tsunami Problem: Nonlinear Regression With Inaccurate Input Data

    NASA Astrophysics Data System (ADS)

    Lavrentiev, M.; Shchemel, A.; Simonov, K.

A variant of a modified training functional that allows inaccurate input data to be taken into account is suggested. A limiting case, in which part of the input data is completely undefined and a problem of reconstructing hidden parameters must therefore be solved, is also considered, and some numerical experiments are presented. The classic problem definition, widely used in the majority of neural network algorithms, assumes that a dependence of known output variables on known input variables is to be found. The quality of the approximation is evaluated by a performance function; often the error is evaluated as the squared distance between known data and predicted data, multiplied by weighting coefficients, which may be called "precision coefficients". When the inputs are not known exactly, a natural generalization of the performance function is to add a term responsible for the distance between the known inputs and shifted inputs that reduce the model error. It is desirable that the set of variable parameters be compact for the training to converge. In the above problem it is possible to choose a priori compactness requirements that allow a meaningful interpretation in terms of the smoothness of the model dependence. Two kinds of regularization were used: the first limits the squares of the coefficients responsible for nonlinearity, and the second limits the products of these coefficients and the linear coefficients. The asymptotic universality of neural networks, i.e. their ability to approximate various smooth functions with any accuracy as the number of tunable parameters increases, is often the basis for selecting a type of neural network approximation. It can be shown that the neural network used here approaches a Fourier integral transform, whose approximation abilities are known, as the number of tunable parameters increases. In the limiting case, when the input data are given with zero precision, the problem of reconstructing hidden parameters from the observed output data appears. The

  15. A strategy for multivariate calibration based on modified single-index signal regression: Capturing explicit non-linearity and improving prediction accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Li, Qingbo; Zhang, Guangjun

    2013-11-01

In this paper, a modified single-index signal regression (mSISR) method is proposed to construct a nonlinear and practical model with high accuracy. The mSISR method defines the optimal penalty tuning parameter in P-spline signal regression (PSR) as the initial tuning parameter and chooses the number of cycles by minimizing the root mean squared error of cross-validation (RMSECV). mSISR is superior to single-index signal regression (SISR) in terms of accuracy, computation time and convergence, and it can characterize the non-linearity between spectra and responses more precisely than SISR. Two spectral data sets from basic research experiments, including nondestructive measurement of plant chlorophyll and noninvasive measurement of human blood glucose, are employed to illustrate the advantages of mSISR. The results indicate that the mSISR method (i) obtains a smooth and informative regression coefficient vector, (ii) explicitly exhibits the type and amount of the non-linearity, (iii) can take advantage of nonlinear features of the signals to improve prediction performance and (iv) has distinct adaptability for complex spectral models in comparison with other calibration methods. It is validated that mSISR is a promising nonlinear modeling strategy for multivariate calibration.

  16. Characterization of acid functional groups of carbon dots by nonlinear regression data fitting of potentiometric titration curves

    NASA Astrophysics Data System (ADS)

    Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.

    2016-05-01

The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curves using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total amount of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in the experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.

  17. Non-Linear Wavelet Regression and Branch & Bound Optimization for the Full Identification of Bivariate Operator Fractional Brownian Motion

    NASA Astrophysics Data System (ADS)

    Frecon, Jordan; Didier, Gustavo; Pustelnik, Nelly; Abry, Patrice

    2016-08-01

    Self-similarity is widely considered the reference framework for modeling the scaling properties of real-world data. However, most theoretical studies and their practical use have remained univariate. Operator Fractional Brownian Motion (OfBm) was recently proposed as a multivariate model for self-similarity. Yet it has remained seldom used in applications because of serious issues that appear in the joint estimation of its numerous parameters. While the univariate fractional Brownian motion requires the estimation of two parameters only, its mere bivariate extension already involves 7 parameters which are very different in nature. The present contribution proposes a method for the full identification of bivariate OfBm (i.e., the joint estimation of all parameters) through an original formulation as a non-linear wavelet regression coupled with a custom-made Branch & Bound numerical scheme. The estimation performance (consistency and asymptotic normality) is mathematically established and numerically assessed by means of Monte Carlo experiments. The impact of the parameters defining OfBm on the estimation performance as well as the associated computational costs are also thoroughly investigated.

  18. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    PubMed Central

    2011-01-01

    Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. Conclusions HC

  19. Curvilinear Relationships in Special Education Research: How Multiple Regression Analysis Can Be Used To Investigate Nonlinear Effects.

    ERIC Educational Resources Information Center

    Barringer, Mary S.

    Researchers are becoming increasingly aware of the advantages of using multiple regression as opposed to analysis of variance (ANOVA) or analysis of covariance (ANCOVA). Multiple regression is more versatile and does not force the researcher to throw away variance by categorizing intervally scaled data. Polynomial regression analysis offers the…

  20. Nonlinear-regression flow model of the Gulf Coast aquifer systems in the south-central United States

    USGS Publications Warehouse

    Kuiper, L.K.

    1994-01-01

A multiple-regression methodology was used to help answer questions concerning model reliability, and to calibrate a time-dependent variable-density ground-water flow model of the gulf coast aquifer systems in the south-central United States. More than 40 regression models with 2 to 31 regression parameters are used and detailed results are presented for 12 of the models. More than 3,000 values for grid-element volume-averaged head and hydraulic conductivity are used for the regression model observations. Calculated prediction interval half widths, though perhaps inaccurate due to a lack of normality of the residuals, are the smallest for models with only four regression parameters. In addition, the root-mean weighted residual decreases very little with an increase in the number of regression parameters. The various models showed considerable overlap between the prediction intervals for shallow head and hydraulic conductivity. Approximate 95-percent prediction interval half widths for volume-averaged freshwater head exceed 108 feet; for volume-averaged base 10 logarithm hydraulic conductivity, they exceed 0.89. All of the models are unreliable for the prediction of head and ground-water flow in the deeper parts of the aquifer systems, including the amount of flow coming from the underlying geopressured zone. Truncating the domain of solution of one model to exclude that part of the system having a ground-water density greater than 1.005 grams per cubic centimeter or to exclude that part of the systems below a depth of 3,000 feet, and setting the density to that of freshwater does not appreciably change the results for head and ground-water flow, except for locations close to the truncation surface.

  1. Four-parametric non-linear regression fit of isovolumic relaxation in isolated ejecting rat and guinea pig hearts.

    PubMed

    Langer, S F

    2000-02-01

Left ventricular isovolumic pressure fall is characterized by the time constant tau obtained by fitting the exponential p(t) = p(infinity) + (p(0) - p(infinity)) * exp(-t/tau) to the pressure fall. It has been shown that tau calculated from the first half of pressure fall differs considerably from that found at late relaxation in normal and pathophysiological conditions. The present study aims to test for such differences statistically and to quantify changes in tau during relaxation. Two improvements of the common regression procedure are introduced for that purpose: the use of the four-parametric regression function p(t) = p(infinity) + (p(0) - p(infinity)) * exp[-t/(tau(0) + b(tau)*t)], and an optimal, data-dependent split of the isovolumic pressure fall interval. The residual regression errors of the methods are statistically compared in one hundred isolated working rat and one hundred guinea pig hearts, additionally including a logistic regression method. The regression error is significantly reduced by introducing b(tau). b(tau) is negative in most cases, indicating accelerated relaxation during isovolumic pressure fall, but zero and positive b(tau) are occasionally seen. Optimal interval tripartition further improves the regression error in most cases. The statistically proven acceleration of the time constant during isovolumic relaxation justifies the factor b(tau) as a direct and continuous measure of the difference between early and late relaxation. This difference between early and late isovolumic relaxation is probably caused by residually contracted myocardium at the beginning of pressure fall, and is therefore important for describing pathophysiological effects on relaxation phases.
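
    A minimal sketch of the four-parameter fit described above, in which the time constant is allowed to vary linearly during pressure fall, tau(t) = tau0 + b_tau·t (starting values are illustrative):

```python
# Four-parameter isovolumic relaxation model fitted by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def relax4(t, p_inf, p0, tau0, b_tau):
    return p_inf + (p0 - p_inf) * np.exp(-t / (tau0 + b_tau * t))

def fit_relaxation(t_ms, pressure):
    popt, _ = curve_fit(relax4, t_ms, pressure,
                        p0=(pressure.min(), pressure.max(), 30.0, -0.05),
                        maxfev=10000)
    return dict(zip(("p_inf", "p0", "tau0", "b_tau"), popt))
```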

  2. Application of a New Hybrid Model with Seasonal Auto-Regressive Integrated Moving Average (ARIMA) and Nonlinear Auto-Regressive Neural Network (NARNN) in Forecasting Incidence Cases of HFMD in Shenzhen, China

    PubMed Central

    Tan, Li; Jiang, Hongbo; Wang, Ying; Wei, Sheng; Nie, Shaofa

    2014-01-01

Background Outbreaks of hand-foot-mouth disease (HFMD) have been reported many times in Asia during the last decades, and this emerging disease has drawn worldwide attention and vigilance. Nowadays, the prevention and control of HFMD has become an imperative issue in China. Early detection and response, supported by modern information technology, can be helpful before and during an epidemic. Method In this paper, a hybrid model combining a seasonal auto-regressive integrated moving average (ARIMA) model and a nonlinear auto-regressive neural network (NARNN) is proposed to predict the expected incidence cases from December 2012 to May 2013, using retrospective observations obtained from the China Information System for Disease Control and Prevention from January 2008 to November 2012. Results The best-fitting hybrid model combined a seasonal ARIMA model and a NARNN with 15 hidden units and 5 delays. The hybrid model shows good forecasting performance and estimates the expected incidence cases from December 2012 to May 2013 as −965.03, −1879.58, 4138.26, 1858.17, 4061.86 and 6163.16, respectively, with an obviously increasing trend. Conclusion The model proposed in this paper can predict the incidence trend of HFMD effectively, which could be helpful to policy makers. The expected cases of HFMD are useful not only for detecting outbreaks or providing probability statements, but also for providing decision makers with a probable trend in the variability of future observations that contains both historical and recent information. PMID:24893000
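
    A rough sketch of the hybrid idea under stated substitutions: a seasonal ARIMA from statsmodels captures the linear and seasonal structure, and a small feed-forward network (standing in for the NARNN; 15 hidden units as in the abstract) is trained on lagged ARIMA residuals to model the remaining nonlinearity. Orders, lags and variable names are illustrative.

```python
# Hybrid seasonal ARIMA + neural-network residual model (illustrative).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def fit_hybrid(series, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12), lags=5):
    arima = ARIMA(series, order=order, seasonal_order=seasonal_order).fit()
    resid = np.asarray(arima.resid)
    # Build a lag matrix: predict resid[t] from resid[t-lags .. t-1]
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    y = resid[lags:]
    nn = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000,
                      random_state=0).fit(X, y)
    return arima, nn   # combine arima forecasts with nn residual corrections
```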

  4. Support vector machine regression (SVR/LS-SVM)--an alternative to neural networks (ANN) for analytical chemistry? Comparison of nonlinear methods on near infrared (NIR) spectroscopy data.

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2011-04-21

    In this study, we make a general comparison of the accuracy and robustness of five multivariate calibration models: partial least squares (PLS) regression or projection to latent structures, polynomial partial least squares (Poly-PLS) regression, artificial neural networks (ANNs), and two novel techniques based on support vector machines (SVMs) for multivariate data analysis: support vector regression (SVR) and least-squares support vector machines (LS-SVMs). The comparison is based on fourteen (14) different datasets: seven sets of gasoline data (density, benzene content, and fractional composition/boiling points), two sets of ethanol gasoline fuel data (density and ethanol content), one set of diesel fuel data (total sulfur content), three sets of petroleum (crude oil) macromolecules data (weight percentages of asphaltenes, resins, and paraffins), and one set of petroleum resins data (resins content). Vibrational (near-infrared, NIR) spectroscopic data are used to predict the properties and quality coefficients of gasoline, biofuel/biodiesel, diesel fuel, and other samples of interest. The four systems presented here range greatly in composition, properties, strength of intermolecular interactions (e.g., van der Waals forces, H-bonds), colloid structure, and phase behavior. Due to the high diversity of chemical systems studied, general conclusions about SVM regression methods can be made. We try to answer the following question: to what extent can SVM-based techniques replace ANN-based approaches in real-world (industrial/scientific) applications? The results show that both SVR and LS-SVM methods are comparable to ANNs in accuracy. Due to the much higher robustness of the former, the SVM-based approaches are recommended for practical (industrial) application. This has been shown to be especially true for complicated, highly nonlinear objects. PMID:21350755
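
    A compact way to reproduce the flavour of this benchmark is to cross-validate PLS and SVR on the same response. The sketch below uses synthetic spectra rather than the gasoline/diesel NIR data sets, and scikit-learn stands in for the authors' implementations; the component count and SVR hyperparameters are illustrative assumptions.

```python
# Cross-validated comparison of PLS regression and SVR, echoing the NIR calibration
# benchmark above. The "spectra" are synthetic stand-ins for the gasoline/diesel data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 120, 200
X = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)    # smooth pseudo-spectra
y = 0.4 * X[:, 50] - 0.2 * X[:, 120] + 0.05 * X[:, 50] ** 2 + rng.normal(0, 0.5, n_samples)

models = {
    "PLS (10 components)": PLSRegression(n_components=10),
    "SVR (RBF kernel)": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:20s} mean cross-validated R^2 = {scores.mean():.3f}")
```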

  5. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve.
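
    The same Monte Carlo idea transfers directly from Excel/SOLVER to any nonlinear least-squares routine: fit once, then refit many "virtual" data sets generated by adding residual-scale noise to the fitted curve, and read off percentile intervals. The sketch below uses a Zwietering-type logistic growth curve and simulated data as stand-ins for the paper's examples.

```python
# Monte Carlo parameter confidence intervals for a nonlinear fit, transposed from
# Excel/SOLVER to scipy: fit once, then refit many "virtual" data sets generated by
# adding residual-scale noise to the fitted curve. Model and data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def growth(t, a, mu, lam):
    # Zwietering-type logistic growth curve (illustrative model choice)
    return a / (1.0 + np.exp(4 * mu * (lam - t) / a + 2))

rng = np.random.default_rng(2)
t = np.linspace(0, 24, 25)
y = growth(t, 9.0, 0.8, 4.0) + rng.normal(0, 0.15, t.size)

popt, _ = curve_fit(growth, t, y, p0=(8, 1, 3), maxfev=10000)
resid_sd = np.std(y - growth(t, *popt), ddof=3)

draws = []
for _ in range(500):                                  # virtual data sets
    y_sim = growth(t, *popt) + rng.normal(0, resid_sd, t.size)
    p_sim, _ = curve_fit(growth, t, y_sim, p0=popt, maxfev=10000)
    draws.append(p_sim)
draws = np.array(draws)

lo, hi = np.percentile(draws, 2.5, axis=0), np.percentile(draws, 97.5, axis=0)
for name, est, l, u in zip(("A", "mu_max", "lambda"), popt, lo, hi):
    print(f"{name:7s} = {est:6.3f}   95% CI [{l:6.3f}, {u:6.3f}]")
```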

  6. Retrieval of aerosol optical depth from surface solar radiation measurements using machine learning algorithms, non-linear regression and a radiative transfer-based look-up table

    NASA Astrophysics Data System (ADS)

    Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti

    2016-07-01

    In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge on past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure for aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in the aerosol composition. High AODs were in most cases those with high water vapour content which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. This would also mean that machine learning methods could have potential in reproducing AOD from SSR even though SSA would have changed during
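
    A toy version of the comparison reads as follows: a look-up-table retrieval inverts a forward model of SSR over an AOD grid, while a machine-learning regressor is trained directly on (SSR, solar zenith angle) pairs. The forward model, noise level and data below are synthetic placeholders, not the Thessaloniki pyranometer/AERONET records.

```python
# Toy contrast of a look-up-table (LUT) retrieval with a machine-learning regressor.
# The "radiative transfer" forward model and all data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

def ssr_model(aod, sza):
    # toy forward model: SSR decreases with AOD and with solar zenith angle
    return 1000.0 * np.cos(np.radians(sza)) * np.exp(-0.35 * aod)

n = 500
aod_true = rng.uniform(0.05, 1.0, n)
sza = rng.uniform(20, 70, n)
ssr_obs = ssr_model(aod_true, sza) * rng.normal(1.0, 0.03, n)   # noise stands in for SSA effects

# 1) LUT retrieval: invert the forward model over an AOD grid for each observation
aod_grid = np.linspace(0.0, 1.5, 301)
aod_lut = np.array([aod_grid[np.argmin(np.abs(ssr_model(aod_grid, z) - s))]
                    for s, z in zip(ssr_obs, sza)])

# 2) Machine-learning retrieval: random forest trained on (SSR, SZA) pairs
X = np.column_stack([ssr_obs, sza])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:400], aod_true[:400])
aod_rf = rf.predict(X[400:])

print("LUT RMSE:", round(float(np.sqrt(np.mean((aod_lut[400:] - aod_true[400:]) ** 2))), 3))
print("RF  RMSE:", round(float(np.sqrt(np.mean((aod_rf - aod_true[400:]) ** 2))), 3))
```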

  7. Fouling resistance prediction using artificial neural network nonlinear auto-regressive with exogenous input model based on operating conditions and fluid properties correlations

    NASA Astrophysics Data System (ADS)

    Biyanto, Totok R.

    2016-06-01

    Fouling in the heat exchangers of a Crude Preheat Train (CPT) refinery is an unsolved problem that reduces plant efficiency and increases fuel consumption and CO2 emissions. The fouling resistance behavior is very complex, and it is difficult to develop a first-principles model that predicts the fouling resistance under different operating conditions and different crude blends. In this paper, an Artificial Neural Network (ANN) MultiLayer Perceptron (MLP) with a Nonlinear Auto-Regressive with eXogenous input (NARX) structure is utilized to build the fouling resistance model for a shell and tube heat exchanger (STHX). The input data of the model are the flow rates and temperatures of the heat exchanger streams, the physical properties of the product, and crude blend data. This model serves as a predicting tool for optimizing operating conditions and preventive maintenance of the STHX. The results show that the model can capture the complexity of fouling characteristics in the heat exchanger due to thermodynamic conditions and variations in crude oil properties (blends). The Root Mean Square Error (RMSE) values obtained during the training and validation phases indicate that the model captures the nonlinearity and complexity of the STHX fouling resistance.
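
    The NARX structure amounts to regressing the current fouling resistance on its own recent lags together with lags of the exogenous process variables. The minimal sketch below uses a small feed-forward network and a simulated heat-exchanger series; the lag order, network size and data-generating process are illustrative assumptions, not the paper's configuration.

```python
# Minimal NARX-style sketch: regress the current fouling resistance on its own lags
# plus lags of exogenous inputs (flow, temperature). Data, lag order and network
# size are illustrative assumptions, not the paper's configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 600
flow = 50 + 5 * np.sin(np.arange(n) / 30) + rng.normal(0, 1, n)     # exogenous input u1
temp = 200 + 10 * np.cos(np.arange(n) / 45) + rng.normal(0, 2, n)   # exogenous input u2
rf = np.zeros(n)                                                    # fouling resistance y
for k in range(1, n):
    rf[k] = 0.995 * rf[k - 1] + 0.002 * temp[k] / flow[k] + rng.normal(0, 0.001)

lags = 3
rows = [np.r_[rf[k - lags:k], flow[k - lags:k], temp[k - lags:k]] for k in range(lags, n)]
X, y = np.array(rows), rf[lags:]

split = int(0.8 * len(y))
narx = make_pipeline(StandardScaler(),
                     MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
narx.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((narx.predict(X[split:]) - y[split:]) ** 2))
print(f"validation RMSE = {rmse:.4f}")
```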

  8. Non-linear partial least square regression increases the estimation accuracy of grass nitrogen and phosphorus using in situ hyperspectral and environmental data

    NASA Astrophysics Data System (ADS)

    Ramoelo, A.; Skidmore, A. K.; Cho, M. A.; Mathieu, R.; Heitkönig, I. M. A.; Dudeni-Tlhone, N.; Schlerf, M.; Prins, H. H. T.

    2013-08-01

    Grass nitrogen (N) and phosphorus (P) concentrations are direct indicators of rangeland quality and provide imperative information for sound management of wildlife and livestock. It is challenging to estimate grass N and P concentrations using remote sensing in the savanna ecosystems. These areas are diverse and heterogeneous in soil and plant moisture, soil nutrients, grazing pressures, and human activities. The objective of the study is to test the performance of non-linear partial least squares regression (PLSR) for predicting grass N and P concentrations through integrating in situ hyperspectral remote sensing and environmental variables (climatic, edaphic and topographic). Data were collected along a land use gradient in the greater Kruger National Park region. The data consisted of: (i) in situ-measured hyperspectral spectra, (ii) environmental variables and measured grass N and P concentrations. The hyperspectral variables included published starch, N and protein spectral absorption features, red edge position, narrow-band indices such as simple ratio (SR) and normalized difference vegetation index (NDVI). The results of the non-linear PLSR were compared to those of conventional linear PLSR. Using non-linear PLSR, integrating in situ hyperspectral and environmental variables yielded the highest grass N and P estimation accuracy (R2 = 0.81, root mean square error (RMSE) = 0.08, and R2 = 0.80, RMSE = 0.03, respectively) as compared to using remote sensing variables only, and conventional PLSR. The study demonstrates the importance of an integrated modeling approach for estimating grass quality which is a crucial effort towards effective management and planning of protected and communal savanna ecosystems.

  9. Comparison of multiple linear and nonlinear regression, autoregressive integrated moving average, artificial neural network, and wavelet artificial neural network methods for urban water demand forecasting in Montreal, Canada

    NASA Astrophysics Data System (ADS)

    Adamowski, Jan; Fung Chan, Hiu; Prasher, Shiv O.; Ozga-Zielinski, Bogdan; Sliusarieva, Anna

    2012-01-01

    Daily water demand forecasts are an important component of cost-effective and sustainable management and optimization of urban water supply systems. In this study, a method based on coupling discrete wavelet transforms (WA) and artificial neural networks (ANNs) for urban water demand forecasting applications is proposed and tested. Multiple linear regression (MLR), multiple nonlinear regression (MNLR), autoregressive integrated moving average (ARIMA), ANN and WA-ANN models for urban water demand forecasting at lead times of one day for the summer months (May to August) were developed, and their relative performance was compared using the coefficient of determination, root mean square error, relative root mean square error, and efficiency index. The key variables used to develop and validate the models were daily total precipitation, daily maximum temperature, and daily water demand data from 2001 to 2009 in the city of Montreal, Canada. The WA-ANN models were found to provide more accurate urban water demand forecasts than the MLR, MNLR, ARIMA, and ANN models. The results of this study indicate that coupled wavelet-neural network models are a potentially promising new method of urban water demand forecasting that merit further study.

  10. The covariate-adjusted frequency plot.

    PubMed

    Holling, Heinz; Böhning, Walailuck; Böhning, Dankmar; Formann, Anton K

    2016-04-01

    Count data arise in numerous fields of interest. Analysis of these data frequently requires distributional assumptions. Although the graphical display of a fitted model is straightforward in the univariate scenario, this becomes more complex if covariate information needs to be included in the model. Stratification is one way to proceed, but has its limitations if the covariate has many levels or the number of covariates is large. The article suggests a marginal method which works even in the case that all possible covariate combinations are different (i.e. no covariate combination occurs more than once). For each covariate combination the fitted model value is computed and then summed over the entire data set. The technique is quite general and works with all count distributional models as well as with all forms of covariate modelling. The article provides illustrations of the method for various situations and also shows that the proposed estimator as well as the empirical count frequency are consistent with respect to the same parameter.
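
    The marginal construction is easy to sketch for a Poisson regression: each observation contributes its fitted probability of every count value, and summing those probabilities over the data set gives the covariate-adjusted expected frequencies to plot against the observed ones. The model family and simulated data below are illustrative choices, not the article's examples.

```python
# Covariate-adjusted frequency idea for a Poisson regression: sum each observation's
# fitted probability of every count value over the data set and compare with the
# observed frequencies. Model family and simulated data are illustrative choices.
import numpy as np
import statsmodels.api as sm
from scipy.stats import poisson

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.5 + 0.8 * x))

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = fit.predict(X)                                # one fitted mean per covariate pattern

counts = np.arange(y.max() + 1)
observed = np.array([(y == k).sum() for k in counts])
adjusted = np.array([poisson.pmf(k, mu_hat).sum() for k in counts])   # marginal sums

for k, obs, adj in zip(counts, observed, adjusted):
    print(f"count {k:2d}: observed {obs:4d}   covariate-adjusted expected {adj:7.1f}")
```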

  12. Novel approaches to the calculation and comparison of thermoregulatory parameters: Non-linear regression of metabolic rate and evaporative water loss in Australian rodents.

    PubMed

    Tomlinson, Sean

    2016-04-01

    The calculation and comparison of physiological characteristics of thermoregulation has provided insight into patterns of ecology and evolution for over half a century. Thermoregulation has typically been explored using linear techniques; I explore the application of non-linear scaling to more accurately calculate and compare characteristics and thresholds of thermoregulation, including the basal metabolic rate (BMR), peak metabolic rate (PMR) and the lower (Tlc) and upper (Tuc) critical limits to the thermo-neutral zone (TNZ) for Australian rodents. An exponentially-modified logistic function accurately characterised the response of metabolic rate to ambient temperature, while evaporative water loss was accurately characterised by a Michaelis-Menten function. When these functions were used to resolve unique parameters for the nine species studied here, the estimates of BMR and TNZ were consistent with the previously published estimates. The approach resolved differences in rates of metabolism and water loss between subfamilies of Australian rodents that haven't been quantified before. I suggest that non-linear scaling is not only more effective than the established segmented linear techniques, but also is more objective. This approach may allow broader and more flexible comparison of characteristics of thermoregulation, but it needs testing with a broader array of taxa than those used here.
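
    As an illustration of the kind of nonlinear fit described above, the sketch below fits a Michaelis-Menten curve for evaporative water loss against ambient temperature with ordinary nonlinear least squares. The data and starting values are invented; the exponentially modified logistic for metabolic rate would be handled the same way with a different model function.

```python
# Nonlinear fit of a Michaelis-Menten curve for evaporative water loss (EWL) against
# ambient temperature. Data and starting values are invented; the exponentially
# modified logistic for metabolic rate would be fitted the same way.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(t_a, ewl_max, k_half):
    return ewl_max * t_a / (k_half + t_a)

rng = np.random.default_rng(6)
t_a = np.linspace(5, 40, 30)                           # ambient temperature (deg C)
ewl = michaelis_menten(t_a, 3.0, 12.0) + rng.normal(0, 0.1, t_a.size)

(ewl_max, k_half), cov = curve_fit(michaelis_menten, t_a, ewl, p0=(2.0, 10.0))
se = np.sqrt(np.diag(cov))
print(f"EWL_max = {ewl_max:.2f} +/- {se[0]:.2f}   K = {k_half:.2f} +/- {se[1]:.2f}")
```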

  13. Tutorial on a chemical model building by least-squares non-linear regression of multiwavelength spectrophotometric pH-titration data.

    PubMed

    Meloun, Milan; Bordovská, Sylva; Syrový, Tomás; Vrána, Ales

    2006-10-27

    Although modern instrumentation enables an increased amount of data to be delivered in a shorter time, computer-assisted spectra analysis is limited by the intelligence and programmed logic of the software tools. The proposed tutorial covers the main steps of data processing involved in chemical model building, from calculating the concentration profiles to fitting, by spectra regression, the protonation constants of the chemical model to the measured multiwavelength and multivariate data. The suggested diagnostics are examined to see whether the chemical model hypothesis can be accepted, as an incorrect model with false stoichiometric indices may lead to slow convergence, cyclization or divergence of the regression minimization. The diagnostics concern the physical meaning of the unknown parameters beta(qr) and epsilon(qr), the physical sense of the associated species concentrations, parametric correlation coefficients, goodness-of-fit tests, error analyses and spectra deconvolution, and determination of the correct number of light-absorbing species. All of the benefits of spectrophotometric data analysis are demonstrated on the protonation constants of the ionizable anticancer drug 7-ethyl-10-hydroxycamptothecine, using data double-checked with the SQUAD(84) and SPECFIT/32 regression programs and with factor analysis from the INDICES program. The experimental determination of the protonation constants was complemented by their computational prediction based on knowledge of the chemical structure of the drug, using the combined MARVIN and PALLAS programs. If the proposed model adequately represents the data, the residuals should form a random pattern with a normal distribution N(0, s2), with the residual mean equal to zero and the standard deviation of residuals near to the experimental noise. Examination of residual plots may be assisted by a graphical analysis of residuals, and systematic departures from randomness indicate that the model and parameter estimates are not

  14. Autistic Regression

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  15. Examining Non-Linear Associations between Accelerometer-Measured Physical Activity, Sedentary Behavior, and All-Cause Mortality Using Segmented Cox Regression.

    PubMed

    Lee, Paul H

    2016-01-01

    Healthy adults are advised to perform at least 150 min of moderate-intensity physical activity weekly, but this advice is based on studies using self-reports of questionable validity. This study examined the dose-response relationship of accelerometer-measured physical activity and sedentary behaviors on all-cause mortality using segmented Cox regression to empirically determine the break-points of the dose-response relationship. Data from 7006 adult participants aged 18 or above in the National Health and Nutrition Examination Survey waves 2003-2004 and 2005-2006 were included in the analysis and linked with death certificate data using a probabilistic matching approach in the National Death Index through December 31, 2011. Physical activity and sedentary behavior were measured using an ActiGraph model 7164 accelerometer worn over the right hip for 7 consecutive days. Each minute with an accelerometer count <100, 1952-5724, or ≥5725 was classified as sedentary, moderate-intensity physical activity, or vigorous-intensity physical activity, respectively. Segmented Cox regression was used to estimate the hazard ratio (HR) of time spent in sedentary behaviors, moderate-intensity physical activity, and vigorous-intensity physical activity and all-cause mortality, adjusted for demographic characteristics, health behaviors, and health conditions. Data were analyzed in 2016. During 47,119 person-years of follow-up, 608 deaths occurred. Each additional hour per day of sedentary behaviors was associated with a HR of 1.15 (95% CI 1.01, 1.31) among participants who spent at least 10.9 h per day on sedentary behaviors, and each additional minute per day spent on moderate-intensity physical activity was associated with a HR of 0.94 (95% CI 0.91, 0.96) among participants with daily moderate-intensity physical activity ≤14.1 min. Associations of moderate physical activity and sedentary behaviors with all-cause mortality were independent of each other. To conclude, evidence from this
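
    A segmented effect of this kind can be encoded in a standard Cox model by adding a hinge term max(0, x − c), so that the log-hazard slope is allowed to change at the break-point. The sketch below fixes the break-point at the reported 10.9 h/day rather than estimating it, and uses lifelines with simulated data; it is a schematic of the idea, not a re-analysis of NHANES.

```python
# Schematic segmented Cox model: add a hinge term max(0, x - c) so the log-hazard
# slope can change at the break-point c. The break-point is fixed at the reported
# 10.9 h/day rather than estimated, and all data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 2000
sedentary_h = rng.uniform(4, 16, n)                   # hours/day of sedentary behavior
break_point = 10.9
hinge = np.clip(sedentary_h - break_point, 0, None)

log_hazard = 0.14 * hinge                             # effect only above the break-point
time = rng.exponential(10 * np.exp(-log_hazard))
event = (time < 8).astype(int)
time = np.minimum(time, 8)                            # administrative censoring at 8 years

df = pd.DataFrame({"time": time, "event": event,
                   "sedentary": sedentary_h, "sedentary_hinge": hinge})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```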

  16. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740

  17. Resolving model parameter values from carbon and nitrogen stock measurements in a wide range of tropical mature forests using nonlinear inversion and regression trees

    USGS Publications Warehouse

    Liu, S.; Anderson, P.; Zhou, G.; Kauffman, B.; Hughes, F.; Schimel, D.; Watson, Vicente; Tosi, Joseph

    2008-01-01

    Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in seven life zones in Costa Rica. Net primary productivity from the Moderate-Resolution Imaging Spectroradiometer (MODIS), C and N stocks in aboveground live biomass, litter, coarse woody debris (CWD), and in soils were used to calibrate the model. To investigate the resolution of available observations on the number of adjustable parameters, inversion was performed using nine setups of adjustable parameters. Statistics including observation sensitivity, parameter correlation coefficient, parameter sensitivity, and parameter confidence limits were used to evaluate the information content of observations, resolution of model parameters, and overall model performance. Results indicated that soil organic carbon content, soil nitrogen content, and total aboveground biomass carbon had the highest information contents, while measurements of carbon in litter and nitrogen in CWD contributed little to the parameter estimation processes. The available information could resolve the values of 2-4 parameters. Adjusting just one parameter resulted in under-fitting and unacceptable model performance, while adjusting five parameters simultaneously led to over-fitting. Results further indicated that the MODIS NPP values were compressed as compared with the spatial variability of net primary production (NPP) values inferred from inverse modeling. Using inverse modeling to infer NPP and other sensitive model parameters from C and N stock observations provides an opportunity to utilize data collected by national to regional forest inventory systems to reduce the uncertainties in the carbon cycle and generate valuable

  18. Wild bootstrap for quantile regression.

    PubMed

    Feng, Xingdong; He, Xuming; Hu, Jianhua

    2011-12-01

    The existing theory of the wild bootstrap has focused on linear estimators. In this note, we broaden its validity by providing a class of weight distributions that is asymptotically valid for quantile regression estimators. As most weight distributions in the literature lead to biased variance estimates for nonlinear estimators of linear regression, we propose a modification of the wild bootstrap that admits a broader class of weight distributions for quantile regression. A simulation study on median regression is carried out to compare various bootstrap methods. With a simple finite-sample correction, the wild bootstrap is shown to account for general forms of heteroscedasticity in a regression model with fixed design points. PMID:23049133

  19. Generalized REGression Package for Nonlinear Parameter Estimation

    1995-05-15

    GREG computes modal (maximum-posterior-density) and interval estimates of the parameters in a user-provided Fortran subroutine MODEL, using a user-provided vector OBS of single-response observations or matrix OBS of multiresponse observations. GREG can also select the optimal next experiment from a menu of simulated candidates, so as to minimize the volume of the parametric inference region based on the resulting augmented data set.

  20. Deriving the Regression Equation without Using Calculus

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    2004-01-01

    Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…

  1. Regression: A Bibliography.

    ERIC Educational Resources Information Center

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has been the subject of much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  2. Regression Discontinuity Designs in Epidemiology

    PubMed Central

    Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till

    2014-01-01

    When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
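
    A minimal sharp-RD estimate can be obtained by fitting separate linear trends on each side of the threshold within a bandwidth and reading the treatment effect off the jump at the cut-off. The sketch below does this with OLS on simulated data; the CD4-count application above additionally requires a survival model, and the bandwidth and functional form here are illustrative assumptions.

```python
# Minimal sharp regression discontinuity estimate via local linear fits on each side
# of the threshold. Simulated data; bandwidth and functional form are illustrative,
# and the CD4 application above would use a survival model instead of OLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 3000
cd4 = rng.uniform(50, 350, n)                         # running variable
treated = (cd4 < 200).astype(float)                   # eligibility below the threshold
outcome = 1.0 - 0.003 * cd4 - 0.25 * treated + rng.normal(0, 0.3, n)

bandwidth = 75
keep = np.abs(cd4 - 200) <= bandwidth
centered = cd4[keep] - 200
X = sm.add_constant(np.column_stack([
    treated[keep],                                    # jump at the threshold = RD estimate
    centered,                                         # running-variable trend
    treated[keep] * centered,                         # allow a different slope on each side
]))
fit = sm.OLS(outcome[keep], X).fit()
lo, hi = fit.conf_int()[1]
print(f"RD estimate of the treatment effect: {fit.params[1]:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```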

  3. Regression modeling of ground-water flow

    USGS Publications Warehouse

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  4. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  5. NCCS Regression Test Harness

    SciTech Connect

    Tharrington, Arnold N.

    2015-09-09

    The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.

  7. Fully Regressive Melanoma

    PubMed Central

    Ehrsam, Eric; Kallini, Joseph R.; Lebas, Damien; Modiano, Philippe; Cotten, Hervé

    2016-01-01

    Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of host immune response. Although 10 to 35 percent of cases of cutaneous melanomas may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis in a patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418

  8. Improved Regression Calibration

    ERIC Educational Resources Information Center

    Skrondal, Anders; Kuha, Jouni

    2012-01-01

    The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…

  9. Prediction in Multiple Regression.

    ERIC Educational Resources Information Center

    Osborne, Jason W.

    2000-01-01

    Presents the concept of prediction via multiple regression (MR) and discusses the assumptions underlying multiple regression analyses. Also discusses shrinkage, cross-validation, and double cross-validation of prediction equations and describes how to calculate confidence intervals around individual predictions. (SLD)

  10. Morse–Smale Regression

    SciTech Connect

    Gerber, Samuel; Rubel, Oliver; Bremer, Peer -Timo; Pascucci, Valerio; Whitaker, Ross T.

    2012-01-19

    This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.

  11. Regression problems for magnitudes

    NASA Astrophysics Data System (ADS)

    Castellaro, S.; Mulargia, F.; Kagan, Y. Y.

    2006-06-01

    Least-squares linear regression is so popular that it is sometimes applied without checking whether its basic requirements are satisfied. In particular, in studying earthquake phenomena, the conditions (a) that the uncertainty on the independent variable is at least one order of magnitude smaller than the one on the dependent variable, (b) that both data and uncertainties are normally distributed and (c) that residuals are constant are at times disregarded. This may easily lead to wrong results. As an alternative to least squares, when the ratio between errors on the independent and the dependent variable can be estimated, orthogonal regression can be applied. We test the performance of orthogonal regression in its general form against Gaussian and non-Gaussian data and error distributions and compare it with standard least-square regression. General orthogonal regression is found to be superior or equal to the standard least squares in all the cases investigated and its use is recommended. We also compare the performance of orthogonal regression versus standard regression when, as often happens in the literature, the ratio between errors on the independent and the dependent variables cannot be estimated and is arbitrarily set to 1. We apply these results to magnitude scale conversion, which is a common problem in seismology, with important implications in seismic hazard evaluation, and analyse it through specific tests. Our analysis concludes that the commonly used standard regression may induce systematic errors in magnitude conversion as high as 0.3-0.4, and, even more importantly, this can introduce apparent catalogue incompleteness, as well as a heavy bias in estimates of the slope of the frequency-magnitude distributions. All this can be avoided by using the general orthogonal regression in magnitude conversions.
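
    The contrast between ordinary least squares and general orthogonal regression is easy to demonstrate numerically: when both variables carry comparable errors, OLS attenuates the slope while the orthogonal (total least squares) fit does not. The sketch below uses simulated magnitude-like data with an error-variance ratio of 1; for another ratio, the x variable would simply be rescaled before the SVD step.

```python
# OLS versus orthogonal (total least squares) regression when both variables carry
# comparable errors, as in magnitude conversion. Simulated data; with an error-variance
# ratio other than 1, the x variable would be rescaled before the SVD step.
import numpy as np

rng = np.random.default_rng(9)
n = 500
m_true = rng.uniform(3.0, 7.0, n)                     # "true" magnitudes
x = m_true + rng.normal(0, 0.2, n)                    # magnitude scale 1, with error
y = 0.9 * m_true + 0.5 + rng.normal(0, 0.2, n)        # magnitude scale 2, with error

# Ordinary least squares (assumes error only in y): attenuated slope
slope_ols = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# General orthogonal regression: direction of maximum variance of the centred data
xc, yc = x - x.mean(), y - y.mean()
_, _, vt = np.linalg.svd(np.column_stack([xc, yc]), full_matrices=False)
slope_orth = vt[0, 1] / vt[0, 0]

print(f"true slope 0.900 | OLS slope {slope_ols:.3f} | orthogonal slope {slope_orth:.3f}")
```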

  12. Multivariate Regression with Calibration*

    PubMed Central

    Liu, Han; Wang, Lie; Zhao, Tuo

    2014-01-01

    We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models. Compared to existing methods, CMR calibrates the regularization for each regression task with respect to its noise level so that it is simultaneously tuning insensitive and achieves an improved finite-sample performance. Computationally, we develop an efficient smoothed proximal gradient algorithm which has a worst-case iteration complexity O(1/ε), where ε is a pre-specified numerical accuracy. Theoretically, we prove that CMR achieves the optimal rate of convergence in parameter estimation. We illustrate the usefulness of CMR by thorough numerical simulations and show that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR on a brain activity prediction problem and find that CMR is as competitive as the handcrafted model created by human experts. PMID:25620861

  13. Metamorphic geodesic regression.

    PubMed

    Hong, Yi; Joshi, Sarang; Sanchez, Mar; Styner, Martin; Niethammer, Marc

    2012-01-01

    We propose a metamorphic geodesic regression approach approximating spatial transformations for image time-series while simultaneously accounting for intensity changes. Such changes occur for example in magnetic resonance imaging (MRI) studies of the developing brain due to myelination. To simplify computations we propose an approximate metamorphic geodesic regression formulation that only requires pairwise computations of image metamorphoses. The approximated solution is an appropriately weighted average of initial momenta. To obtain initial momenta reliably, we develop a shooting method for image metamorphosis.

  14. Latent Regression Analysis.

    PubMed

    Tarpey, Thaddeus; Petkova, Eva

    2010-07-01

    Finite mixture models have come to play a very prominent role in modelling data. The finite mixture model is predicated on the assumption that distinct latent groups exist in the population. The finite mixture model therefore is based on a categorical latent variable that distinguishes the different groups. Often in practice distinct sub-populations do not actually exist. For example, disease severity (e.g. depression) may vary continuously and therefore, a distinction of diseased and not-diseased may not be based on the existence of distinct sub-populations. Thus, what is needed is a generalization of the finite mixture's discrete latent predictor to a continuous latent predictor. We cast the finite mixture model as a regression model with a latent Bernoulli predictor. A latent regression model is proposed by replacing the discrete Bernoulli predictor by a continuous latent predictor with a beta distribution. Motivation for the latent regression model arises from applications where distinct latent classes do not exist, but instead individuals vary according to a continuous latent variable. The shapes of the beta density are very flexible and can approximate the discrete Bernoulli distribution. Examples and a simulation are provided to illustrate the latent regression model. In particular, the latent regression model is used to model placebo effect among drug treated subjects in a depression study. PMID:20625443

  15. Semiparametric Regression Pursuit.

    PubMed

    Huang, Jian; Wei, Fengrong; Ma, Shuangge

    2012-10-01

    The semiparametric partially linear model allows flexible modeling of covariate effects on the response variable in regression. It combines the flexibility of nonparametric regression and the parsimony of linear regression. The most important assumption in the existing estimation methods for this model is that it is known a priori which covariates have a linear effect and which do not. However, in applied work, this is rarely known in advance. We consider the problem of estimation in partially linear models without assuming a priori which covariates have linear effects. We propose a semiparametric regression pursuit method for identifying the covariates with a linear effect. Our proposed method is a penalized regression approach using a group minimax concave penalty. Under suitable conditions we show that the proposed approach is model-pursuit consistent, meaning that it can correctly determine which covariates have a linear effect and which do not with high probability. The performance of the proposed method is evaluated using simulation studies, which support our theoretical results. A real data example is used to illustrate the application of the proposed method. PMID:23559831

  16. [Understanding logistic regression].

    PubMed

    El Sanharawi, M; Naudet, F

    2013-10-01

    Logistic regression is one of the most common multivariate analysis models utilized in epidemiology. It allows the measurement of the association between the occurrence of an event (qualitative dependent variable) and factors susceptible to influence it (explicative variables). The choice of explicative variables that should be included in the logistic regression model is based on prior knowledge of the disease physiopathology and the statistical association between the variable and the event, as measured by the odds ratio. The main steps for the procedure, the conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables that must be included and retained in the regression model in order to avoid the omission of important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.
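
    As a small companion to the points above, the sketch below fits a logistic regression on simulated data and reports the odds ratios (exponentiated coefficients) with confidence intervals; the event, explicative variables and effect sizes are placeholders.

```python
# Fit a logistic regression and report odds ratios with confidence intervals.
# The event, explicative variables and effect sizes are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 1000
age = rng.normal(55, 10, n)
smoker = rng.binomial(1, 0.3, n)
p_event = 1 / (1 + np.exp(-(-6 + 0.07 * age + 0.9 * smoker)))
event = rng.binomial(1, p_event)

X = sm.add_constant(np.column_stack([age, smoker]))
fit = sm.Logit(event, X).fit(disp=False)
odds_ratios, ci = np.exp(fit.params), np.exp(fit.conf_int())
for name, oratio, (lo, hi) in zip(("intercept", "age (per year)", "smoker"), odds_ratios, ci):
    print(f"{name:15s} OR = {oratio:6.3f}   95% CI [{lo:.3f}, {hi:.3f}]")
```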

  17. Practical Session: Logistic Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    An exercise is proposed to illustrate the logistic regression. One investigates the different risk factors in the apparition of coronary heart disease. It has been proposed in Chapter 5 of the book of D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.

  18. Explorations in Statistics: Regression

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2011-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…

  19. Modern Regression Discontinuity Analysis

    ERIC Educational Resources Information Center

    Bloom, Howard S.

    2012-01-01

    This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…

  20. Multiple linear regression analysis

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

  1. Mechanisms of neuroblastoma regression

    PubMed Central

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  2. Ridge Regression Signal Processing

    NASA Technical Reports Server (NTRS)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.

  3. Orthogonal Regression: A Teaching Perspective

    ERIC Educational Resources Information Center

    Carr, James R.

    2012-01-01

    A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…

  4. Structural regression trees

    SciTech Connect

    Kramer, S.

    1996-12-31

    In many real-world domains the task of machine learning algorithms is to learn a theory for predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with nondeterminate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems. SRT integrates the statistical method of regression trees into ILP. It constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy.

  5. Regression Segmentation for M³ Spinal Images.

    PubMed

    Wang, Zhijie; Zhen, Xiantong; Tay, KengYeow; Osman, Said; Romano, Walter; Li, Shuo

    2015-08-01

    Clinical routine often requires to analyze spinal images of multiple anatomic structures in multiple anatomic planes from multiple imaging modalities (M(3)). Unfortunately, existing methods for segmenting spinal images are still limited to one specific structure, in one specific plane or from one specific modality (S(3)). In this paper, we propose a novel approach, Regression Segmentation, that is for the first time able to segment M(3) spinal images in one single unified framework. This approach formulates the segmentation task innovatively as a boundary regression problem: modeling a highly nonlinear mapping function from substantially diverse M(3) images directly to desired object boundaries. Leveraging the advancement of sparse kernel machines, regression segmentation is fulfilled by a multi-dimensional support vector regressor (MSVR) which operates in an implicit, high dimensional feature space where M(3) diversity and specificity can be systematically categorized, extracted, and handled. The proposed regression segmentation approach was thoroughly tested on images from 113 clinical subjects including both disc and vertebral structures, in both sagittal and axial planes, and from both MRI and CT modalities. The overall result reaches a high dice similarity index (DSI) 0.912 and a low boundary distance (BD) 0.928 mm. With our unified and expendable framework, an efficient clinical tool for M(3) spinal image segmentation can be easily achieved, and will substantially benefit the diagnosis and treatment of spinal diseases.

  6. CSWS-related autistic regression versus autistic regression without CSWS.

    PubMed

    Tuchman, Roberto

    2009-08-01

    Continuous spike-waves during slow-wave sleep (CSWS) and Landau-Kleffner syndrome (LKS) are two clinical epileptic syndromes that are associated with the electroencephalography (EEG) pattern of electrical status epilepticus during slow wave sleep (ESES). Autistic regression occurs in approximately 30% of children with autism and is associated with an epileptiform EEG in approximately 20%. The behavioral phenotypes of CSWS, LKS, and autistic regression overlap. However, the differences in age of regression, degree and type of regression, and frequency of epilepsy and EEG abnormalities suggest that these are distinct phenotypes. CSWS with autistic regression is rare, as is autistic regression associated with ESES. The pathophysiology and as such the treatment implications for children with CSWS and autistic regression are distinct from those with autistic regression without CSWS.

  7. Testing in Microbiome-Profiling Studies with MiRKAT, the Microbiome Regression-Based Kernel Association Test

    PubMed Central

    Zhao, Ni; Chen, Jun; Carroll, Ian M.; Ringel-Kulka, Tamar; Epstein, Michael P.; Zhou, Hua; Zhou, Jin J.; Ringel, Yehuda; Li, Hongzhe; Wu, Michael C.

    2015-01-01

    High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Distance-based analysis is a popular strategy for evaluating the overall association between microbiome diversity and outcome, wherein the phylogenetic distance between individuals’ microbiome profiles is computed and tested for association via permutation. Despite their practical popularity, distance-based approaches suffer from important challenges, especially in selecting the best distance and extending the methods to alternative outcomes, such as survival outcomes. We propose the microbiome regression-based kernel association test (MiRKAT), which directly regresses the outcome on the microbiome profiles via the semi-parametric kernel machine regression framework. MiRKAT allows for easy covariate adjustment and extension to alternative outcomes while non-parametrically modeling the microbiome through a kernel that incorporates phylogenetic distance. It uses a variance-component score statistic to test for the association with analytical p value calculation. The model also allows simultaneous examination of multiple distances, alleviating the problem of choosing the best distance. Our simulations demonstrated that MiRKAT provides correctly controlled type I error and adequate power in detecting overall association. “Optimal” MiRKAT, which considers multiple candidate distances, is robust in that it suffers from little power loss in comparison to when the best distance is used and can achieve tremendous power gain in comparison to when a poor distance is chosen. Finally, we applied MiRKAT to real microbiome datasets to show that microbial communities are associated with smoking and with fecal protease levels after confounders are controlled for. PMID:25957468
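
    The core computation behind a kernel association test of this kind can be sketched in a few lines: convert a between-sample distance matrix into a kernel by Gower centring, residualise the outcome on the covariates, and assess a score-type statistic Q = r'Kr. The version below uses a permutation p-value and simulated compositions as a simplification; MiRKAT itself uses an analytic (Davies) p-value, handles multiple kernels, and is distributed as an R package.

```python
# Schematic kernel association test: Gower-centre a distance matrix into a kernel,
# residualise the outcome on covariates, and evaluate Q = r'Kr. A permutation p-value
# replaces MiRKAT's analytic (Davies) p-value, and all data are simulated toys.
import numpy as np

rng = np.random.default_rng(11)
n = 100
profiles = rng.dirichlet(np.ones(20), size=n)                     # toy compositions
covars = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 0.5 * covars[:, 1] + 8.0 * profiles[:, 0] + rng.normal(0, 1, n)

# Bray-Curtis distance (compositions sum to 1) and Gower-centred kernel K = -1/2 J D^2 J
D = np.array([[np.abs(profiles[i] - profiles[j]).sum() / 2 for j in range(n)] for i in range(n)])
J = np.eye(n) - np.ones((n, n)) / n
K = -0.5 * J @ (D ** 2) @ J

beta = np.linalg.lstsq(covars, y, rcond=None)[0]                  # adjust for covariates
r = y - covars @ beta
q_obs = r @ K @ r                                                 # score-type statistic

perm = []
for _ in range(999):
    rp = rng.permutation(r)
    perm.append(rp @ K @ rp)
p_value = (1 + np.sum(np.array(perm) >= q_obs)) / (1 + 999)
print(f"Q = {q_obs:.2f}, permutation p-value = {p_value:.3f}")
```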

  10. Logarithmic Transformations in Regression: Do You Transform Back Correctly?

    ERIC Educational Resources Information Center

    Dambolena, Ismael G.; Eriksen, Steven E.; Kopcso, David P.

    2009-01-01

    The logarithmic transformation is often used in regression analysis for a variety of purposes such as the linearization of a nonlinear relationship between two or more variables. We have noticed that when this transformation is applied to the response variable, the computation of the point estimate of the conditional mean of the original response…
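
    The pitfall this article targets is worth a concrete illustration: exponentiating a prediction made on the log scale estimates the conditional median of the original response, and recovering the conditional mean requires a correction term. A minimal sketch, assuming normally distributed errors on the log scale so that the standard lognormal adjustment exp(sigma^2/2) applies; the simulated data are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    x = rng.uniform(1, 10, 500)
    y = np.exp(1.0 + 0.3 * x + rng.normal(0, 0.5, 500))   # lognormal response

    X = sm.add_constant(x)
    fit = sm.OLS(np.log(y), X).fit()                      # regress log(y) on x
    log_pred = fit.predict(X)

    naive = np.exp(log_pred)                  # estimates the conditional median of y
    s2 = fit.mse_resid                        # residual variance on the log scale
    corrected = np.exp(log_pred + s2 / 2)     # estimates the conditional mean of y

    print(y.mean(), naive.mean(), corrected.mean())   # the naive back-transform underestimates
    ```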

  11. Multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS), is a nonparametric method that estimates complex nonlinear relationships by a seri...

  12. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing their results with those from an interaction term in linear regression. The research questions that each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and that regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
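
    For contrast with the mixture approach, a direct test of a specific differential-effect hypothesis via an interaction term takes only a few lines. A minimal sketch with simulated data; the variable names y, x, and z are placeholders, not variables from the studies discussed above.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({"x": rng.normal(size=n), "z": rng.integers(0, 2, n)})
    df["y"] = 1 + 0.5 * df.x + 0.8 * df.x * df.z + rng.normal(size=n)  # effect of x differs by z

    model = smf.ols("y ~ x * z", data=df).fit()   # main effects plus the x:z interaction
    print(model.summary())                        # the x:z coefficient tests the differential effect
    ```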

  13. Nonlinear optics and nonlinear dynamics

    NASA Astrophysics Data System (ADS)

    Chen, C. H.

    1990-08-01

    The author was invited by the Institute of Atomic and Molecular Sciences, Academia Sinica, in Taiwan to give six lectures on nonlinear optics. The participants included graduate students, postdoctoral fellows, research staff, and professors from several research organizations and universities. Extensive discussion followed each lecture. Since both the Photophysics Group at Oak Ridge National Laboratory (ORNL) and Institute of Atomic and Molecular Sciences in Taiwan have been actively participating in nonlinear optics research, the discussions are very beneficial to ORNL programs. The author also visited several laboratories at IAMS to exchange research ideas on nonlinear optics.

  14. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
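
    A minimal sketch of item (1), an unweighted regression line with bootstrap resampling of the slope, applied to simulated stand-in data rather than an actual distance-scale relation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 5, 80)
    y = 2.0 + 1.5 * x + rng.normal(0, 1.0, 80)     # simulated calibration-type data

    def ols_slope(x, y):
        return np.polyfit(x, y, 1)[0]

    slopes = []
    for _ in range(2000):
        idx = rng.integers(0, len(x), len(x))      # resample (x, y) pairs with replacement
        slopes.append(ols_slope(x[idx], y[idx]))
    slopes = np.array(slopes)

    print(ols_slope(x, y))                                    # point estimate of the slope
    print(slopes.std(), np.percentile(slopes, [2.5, 97.5]))   # bootstrap error and interval
    ```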

  15. Quantile regression for climate data

    NASA Astrophysics Data System (ADS)

    Marasinghe, Dilhani Shalika

    Quantile regression is a developing statistical tool used to explain the relationship between response and predictor variables. This thesis describes two examples of its use in climatology. Our main goal is to estimate derivatives of a conditional mean and/or conditional quantile function. We introduce a method to handle autocorrelation in the framework of quantile regression and apply it to the temperature data. We also examine some properties of the tornado data, which are non-normally distributed. Even though quantile regression provides a more comprehensive view, when the residuals satisfy the normality and constant-variance assumptions we would prefer least-squares regression for our temperature analysis. When normality and constant variance do not hold, quantile regression is the better candidate for estimating the derivative.
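
    A minimal sketch of fitting several conditional quantiles alongside a least-squares fit, using simulated heteroscedastic data; the autocorrelation handling developed in the thesis is not reproduced here, and the data and names are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 30, 400)                                # e.g. an air-temperature covariate
    y = 10 + 0.8 * x + rng.standard_t(3, 400) * (1 + 0.1 * x)  # heavy-tailed, heteroscedastic

    X = sm.add_constant(x)
    for q in (0.1, 0.5, 0.9):
        res = sm.QuantReg(y, X).fit(q=q)           # conditional quantile fit
        print(q, res.params)
    print("OLS", sm.OLS(y, X).fit().params)        # mean regression, for comparison
    ```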

  16. Nonparametric Covariate-Adjusted Association Tests Based on the Generalized Kendall’s Tau*

    PubMed Central

    Zhu, Wensheng; Jiang, Yuan; Zhang, Heping

    2012-01-01

    Identifying the risk factors for comorbidity is important in psychiatric research. Empirically, studies have shown that testing multiple, correlated traits simultaneously is more powerful than testing a single trait at a time in association analysis. Furthermore, for complex diseases, especially mental illnesses and behavioral disorders, the traits are often recorded in different scales such as dichotomous, ordinal and quantitative. In the absence of covariates, nonparametric association tests have been developed for multiple complex traits to study comorbidity. However, genetic studies generally contain measurements of some covariates that may affect the relationship between the risk factors of major interest (such as genes) and the outcomes. While it is relatively easy to adjust these covariates in a parametric model for quantitative traits, it is challenging for multiple complex traits with possibly different scales. In this article, we propose a nonparametric test for multiple complex traits that can adjust for covariate effects. The test aims to achieve an optimal scheme of adjustment by using a maximum statistic calculated from multiple adjusted test statistics. We derive the asymptotic null distribution of the maximum test statistic, and also propose a resampling approach, both of which can be used to assess the significance of our test. Simulations are conducted to compare the type I error and power of the nonparametric adjusted test to the unadjusted test and other existing adjusted tests. The empirical results suggest that our proposed test increases the power through adjustment for covariates when there exist environmental effects, and is more robust to model misspecifications than some existing parametric adjusted tests. We further demonstrate the advantage of our test by analyzing a data set on genetics of alcoholism. PMID:22745516

  17. A covariate adjusted two-stage allocation design for binary responses in randomized clinical trials.

    PubMed

    Bandyopadhyay, Uttam; Biswas, Atanu; Bhattacharya, Rahul

    2007-10-30

    In the present work, we develop a two-stage allocation rule for binary response using the log-odds ratio within the Bayesian framework, allowing the current allocation to depend on the covariate value of the current subject. We study, both numerically and theoretically, several exact and limiting properties of this design. The applicability of the proposed methodology is illustrated using a real data set. We compare this rule with some of the existing rules by computing various performance measures.

  18. Covariate adjustment of event histories estimated from Markov chains: the additive approach.

    PubMed

    Aalen, O O; Borgan, O; Fekjaer, H

    2001-12-01

    Markov chain models are frequently used for studying event histories that include transitions between several states. An empirical transition matrix for nonhomogeneous Markov chains has previously been developed, including a detailed statistical theory based on counting processes and martingales. In this article, we show how to estimate transition probabilities dependent on covariates. This technique may, e.g., be used for making estimates of individual prognosis in epidemiological or clinical studies. The covariates are included through nonparametric additive models on the transition intensities of the Markov chain. The additive model allows for estimation of covariate-dependent transition intensities, and again a detailed theory exists based on counting processes. The martingale setting now allows for a very natural combination of the empirical transition matrix and the additive model, resulting in estimates that can be expressed as stochastic integrals, and hence their properties are easily evaluated. Two medical examples will be given. In the first example, we study how the lung cancer mortality of uranium miners depends on smoking and radon exposure. In the second example, we study how the probability of being in response depends on patient group and prophylactic treatment for leukemia patients who have had a bone marrow transplantation. A program in R and S-PLUS that can carry out the analyses described here has been developed and is freely available on the Internet. PMID:11764270

  19. Effects of Participation in a Post-Secondary Honors Program with Covariate Adjustment Using Propensity Score

    ERIC Educational Resources Information Center

    Furtwengler, Scott R.

    2015-01-01

    The present study sought to determine the extent to which participation in a post-secondary honors program affected academic achievement. Archival data were collected on three cohorts of high-achieving students at a large public university. Propensity scores were calculated on factors predicting participation in honors and used as the covariate.…
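
    The design described above, estimating a propensity score and entering it as the covariate in the outcome model, can be sketched as follows with simulated data. The factor names and effect sizes are illustrative assumptions, not values from the archival study.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 1000
    x1, x2 = rng.normal(size=n), rng.normal(size=n)         # factors predicting participation
    p = 1 / (1 + np.exp(-(0.8 * x1 - 0.5 * x2)))
    honors = rng.binomial(1, p)                             # participation indicator
    gpa = 3.0 + 0.2 * honors + 0.3 * x1 + 0.1 * x2 + rng.normal(0, 0.3, n)

    Xps = sm.add_constant(np.column_stack([x1, x2]))
    ps = sm.Logit(honors, Xps).fit(disp=0).predict(Xps)     # estimated propensity score

    Xout = sm.add_constant(np.column_stack([honors, ps]))   # treatment plus PS as the covariate
    print(sm.OLS(gpa, Xout).fit().params)                   # adjusted participation effect
    ```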

  20. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  1. Retro-regression--another important multivariate regression improvement.

    PubMed

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA. PMID:11410035

  2. Nonlinear channelizer.

    PubMed

    In, Visarath; Longhini, Patrick; Kho, Andy; Neff, Joseph D; Leung, Daniel; Liu, Norman; Meadows, Brian K; Gordon, Frank; Bulsara, Adi R; Palacios, Antonio

    2012-12-01

    The nonlinear channelizer is an integrated circuit made up of large parallel arrays of analog nonlinear oscillators, which, collectively, serve as a broad-spectrum analyzer with the ability to receive complex signals containing multiple frequencies and instantaneously lock-on or respond to a received signal in a few oscillation cycles. The concept is based on the generation of internal oscillations in coupled nonlinear systems that do not normally oscillate in the absence of coupling. In particular, the system consists of unidirectionally coupled bistable nonlinear elements, where the frequency and other dynamical characteristics of the emergent oscillations depend on the system's internal parameters and the received signal. These properties and characteristics are being employed to develop a system capable of locking onto any arbitrary input radio frequency signal. The system is efficient by eliminating the need for high-speed, high-accuracy analog-to-digital converters, and compact by making use of nonlinear coupled systems to act as a channelizer (frequency binning and channeling), a low noise amplifier, and a frequency down-converter in a single step which, in turn, will reduce the size, weight, power, and cost of the entire communication system. This paper covers the theory, numerical simulations, and some engineering details that validate the concept at the frequency band of 1-4 GHz.

  3. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…

  4. Can luteal regression be reversed?

    PubMed Central

    Telleria, Carlos M

    2006-01-01

    The corpus luteum is an endocrine gland whose limited lifespan is hormonally programmed. This debate article summarizes findings of our research group that challenge the principle that the end of function of the corpus luteum or luteal regression, once triggered, cannot be reversed. Overturning luteal regression by pharmacological manipulations may be of critical significance in designing strategies to improve fertility efficacy. PMID:17074090

  5. Logistic Regression: Concept and Application

    ERIC Educational Resources Information Center

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…

  6. Support vector machines for classification and regression.

    PubMed

    Brereton, Richard G; Lloyd, Gavin R

    2010-02-01

    The increasing interest in Support Vector Machines (SVMs) over the past 15 years is described. Methods are illustrated using simulated case studies, and 4 experimental case studies, namely mass spectrometry for studying pollution, near infrared analysis of food, thermal analysis of polymers and UV/visible spectroscopy of polyaromatic hydrocarbons. The basis of SVMs as two-class classifiers is shown with extensive visualisation, including learning machines, kernels and penalty functions. The influence of the penalty error and radial basis function radius on the model is illustrated. Multiclass implementations including one vs. all, one vs. one, fuzzy rules and Directed Acyclic Graph (DAG) trees are described. One-class Support Vector Domain Description (SVDD) is described and contrasted to conventional two- or multi-class classifiers. The use of Support Vector Regression (SVR) is illustrated including its application to multivariate calibration, and why it is useful when there are outliers and non-linearities.
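
    A minimal sketch of support vector regression with an RBF kernel in scikit-learn, showing the penalty parameter C, the kernel width, and the epsilon-insensitive band discussed above; the synthetic sine curve stands in for the spectroscopic case studies.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    X = np.sort(rng.uniform(0, 6, 200)).reshape(-1, 1)
    y = np.sin(X).ravel() + rng.normal(0, 0.2, 200)    # noisy nonlinear signal

    model = make_pipeline(StandardScaler(),
                          SVR(kernel="rbf", C=10.0, gamma="scale", epsilon=0.1))
    model.fit(X, y)
    print(model.predict(np.array([[1.0], [3.0], [5.0]])))
    ```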

  7. [Regression grading in gastrointestinal tumors].

    PubMed

    Tischoff, I; Tannapfel, A

    2012-02-01

    Preoperative neoadjuvant chemoradiation therapy is a well-established and essential part of the interdisciplinary treatment of gastrointestinal tumors. Neoadjuvant treatment leads to regressive changes in tumors. To evaluate the histological tumor response, different scoring systems describing regressive changes are used, known as tumor regression grading. Tumor regression grading is usually based on the presence of residual vital tumor cells in proportion to the total tumor size. Currently, no nationally or internationally accepted grading systems exist. In general, common guidelines should be used in the pathohistological diagnostics of tumors after neoadjuvant therapy. In particular, the standard tumor grading will be replaced by tumor regression grading. Furthermore, tumors after neoadjuvant treatment are marked with the prefix "y" in the TNM classification. PMID:22293790

  8. Fungible weights in logistic regression.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record

  9. Nonlinear Systems.

    ERIC Educational Resources Information Center

    Seider, Warren D.; Ungar, Lyle H.

    1987-01-01

    Describes a course in nonlinear mathematics courses offered at the University of Pennsylvania which provides an opportunity for students to examine the complex solution spaces that chemical engineers encounter. Topics include modeling many chemical processes, especially those involving reaction and diffusion, auto catalytic reactions, phase…

  10. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the lm R command and get coefficient estimates, the standard error of the error, R2, and residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).

  11. Multiple Regression and Its Discontents

    ERIC Educational Resources Information Center

    Snell, Joel C.; Marsh, Mitchell

    2012-01-01

    Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.

  12. Regression methods for spatial data

    NASA Technical Reports Server (NTRS)

    Yakowitz, S. J.; Szidarovszky, F.

    1982-01-01

    The kriging approach, a parametric regression method used by hydrologists and mining engineers, among others, also provides an error estimate for the integral of the regression function. The kriging method is explored and some of its statistical characteristics are described. The Watson method and theory are extended so that the kriging features are displayed. Theoretical and computational comparisons of the kriging and Watson approaches are offered.

  13. Wrong Signs in Regression Coefficients

    NASA Technical Reports Server (NTRS)

    McGee, Holly

    1999-01-01

    When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
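
    The multicollinearity cause is easy to demonstrate: with two nearly collinear predictors whose true effects are both positive, the individual coefficient estimates become unstable enough that one can come out negative. A minimal simulated sketch; inspecting the standard errors and the predictor correlation is one way to diagnose the problem.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 60
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(0, 0.05, n)                          # nearly identical to x1
    y = 1.0 + 1.0 * x1 + 1.0 * x2 + rng.normal(0, 1.0, n)     # both true effects positive

    fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
    print(fit.params)                               # individual signs can come out wrong
    print(fit.bse)                                  # inflated standard errors flag the problem
    print(np.corrcoef(x1, x2)[0, 1])                # predictor correlation near 1
    ```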

  14. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.

  15. Interpretation of Standardized Regression Coefficients in Multiple Regression.

    ERIC Educational Resources Information Center

    Thayer, Jerome D.

    The extent to which standardized regression coefficients (beta values) can be used to determine the importance of a variable in an equation was explored. The beta value and the part correlation coefficient--also called the semi-partial correlation coefficient and reported in squared form as the incremental "r squared"--were compared for variables…
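
    A minimal sketch of the two quantities being compared: standardized (beta) weights obtained by regressing z-scored variables, and the squared part (semipartial) correlation of a predictor obtained as its increment to R squared. The simulated data and names are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 500
    x1 = rng.normal(size=n)
    x2 = 0.5 * x1 + rng.normal(size=n)
    y = 0.4 * x1 + 0.3 * x2 + rng.normal(size=n)

    z = lambda v: (v - v.mean()) / v.std(ddof=1)    # z-score
    beta = sm.OLS(z(y), np.column_stack([z(x1), z(x2)])).fit().params
    print("beta weights:", beta)

    full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit().rsquared
    reduced = sm.OLS(y, sm.add_constant(x2)).fit().rsquared
    print("squared part correlation of x1 (incremental R^2):", full - reduced)
    ```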

  16. Regressive Evolution in Astyanax Cavefish

    PubMed Central

    Jeffery, William R.

    2013-01-01

    A diverse group of animals, including members of most major phyla, have adapted to life in the perpetual darkness of caves. These animals are united by the convergence of two regressive phenotypes, loss of eyes and pigmentation. The mechanisms of regressive evolution are poorly understood. The teleost Astyanax mexicanus is of special significance in studies of regressive evolution in cave animals. This species includes an ancestral surface-dwelling form and many conspecific cave-dwelling forms, some of which have evolved their regressive phenotypes independently. Recent advances in Astyanax development and genetics have provided new information about how eyes and pigment are lost during cavefish evolution; namely, they have revealed some of the molecular and cellular mechanisms involved in trait modification, the number and identity of the underlying genes and mutations, the molecular basis of parallel evolution, and the evolutionary forces driving adaptation to the cave environment. PMID:19640230

  17. Laplace regression with censored data.

    PubMed

    Bottai, Matteo; Zhang, Jiajia

    2010-08-01

    We consider a regression model where the error term is assumed to follow a type of asymmetric Laplace distribution. We explore its use in the estimation of conditional quantiles of a continuous outcome variable given a set of covariates in the presence of random censoring. Censoring may depend on covariates. Estimation of the regression coefficients is carried out by maximizing a non-differentiable likelihood function. In the scenarios considered in a simulation study, the Laplace estimator showed correct coverage and shorter computation time than the alternative methods considered, some of which occasionally failed to converge. We illustrate the use of Laplace regression with an application to survival time in patients with small cell lung cancer.

  18. Survival Data and Regression Models

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

    We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right-censored data and develop two types of regression models. The first one concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariates. The second one is a semiparametric model, where the covariates enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology and, although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.
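
    A minimal sketch of the second model type, a multiplicative (proportional) hazard model fit by partial likelihood. The third-party lifelines package and the column names used here are assumptions made for illustration, not tools referenced in the chapter.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter   # third-party package, assumed available

    rng = np.random.default_rng(8)
    n = 300
    age = rng.normal(60, 10, n)
    trt = rng.integers(0, 2, n)
    t_event = rng.exponential(scale=np.exp(3 - 0.02 * age - 0.5 * trt))  # latent event times
    t_cens = rng.exponential(scale=20, size=n)                           # censoring times

    df = pd.DataFrame({
        "T": np.minimum(t_event, t_cens),          # observed time
        "E": (t_event <= t_cens).astype(int),      # 1 = event observed, 0 = right censored
        "age": age,
        "trt": trt,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="T", event_col="E")   # covariates act multiplicatively on the hazard
    cph.print_summary()
    ```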

  19. Interquantile Shrinkage in Regression Models

    PubMed Central

    Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.

    2012-01-01

    Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546

  20. [Is regression of atherosclerosis possible?].

    PubMed

    Thomas, D; Richard, J L; Emmerich, J; Bruckert, E; Delahaye, F

    1992-10-01

    Experimental studies have shown the regression of atherosclerosis in animals given a cholesterol-rich diet and then given a normal diet or hypolipidemic therapy. Despite favourable results of clinical trials of primary prevention modifying the lipid profile, the concept of atherosclerosis regression in man remains very controversial. The methodological approach is difficult: this is based on angiographic data and requires strict standardisation of angiographic views and reliable quantitative techniques of analysis which are available with image processing. Several methodologically acceptable clinical coronary studies have shown not only stabilisation but also regression of atherosclerotic lesions with reductions of about 25% in total cholesterol levels and of about 40% in LDL cholesterol levels. These reductions were obtained either by drugs as in CLAS (Cholesterol Lowering Atherosclerosis Study), FATS (Familial Atherosclerosis Treatment Study) and SCOR (Specialized Center of Research Intervention Trial), by profound modifications in dietary habits as in the Lifestyle Heart Trial, or by surgery (ileo-caecal bypass) as in POSCH (Program On the Surgical Control of the Hyperlipidemias). On the other hand, trials with non-lipid lowering drugs such as the calcium antagonists (INTACT, MHIS) have not shown significant regression of existing atherosclerotic lesions but only a decrease on the number of new lesions. The clinical benefits of these regression studies are difficult to demonstrate given the limited period of observation, relatively small population numbers and the fact that in some cases the subjects were asymptomatic. The decrease in the number of cardiovascular events therefore seems relatively modest and concerns essentially subjects who were symptomatic initially. The clinical repercussion of studies of prevention involving a single lipid factor is probably partially due to the reduction in progression and anatomical regression of the atherosclerotic plaque

  1. Nonlinear analysis of pupillary dynamics.

    PubMed

    Onorati, Francesco; Mainardi, Luca Tommaso; Sirca, Fabiola; Russo, Vincenzo; Barbieri, Riccardo

    2016-02-01

    Pupil size reflects autonomic response to different environmental and behavioral stimuli, and its dynamics have been linked to other autonomic correlates such as cardiac and respiratory rhythms. The aim of this study is to assess the nonlinear characteristics of pupil size of 25 normal subjects who participated in a psychophysiological experimental protocol with four experimental conditions, namely “baseline”, “anger”, “joy”, and “sadness”. Nonlinear measures, such as sample entropy, correlation dimension, and largest Lyapunov exponent, were computed on reconstructed signals of spontaneous fluctuations of pupil dilation. Nonparametric statistical tests were performed on surrogate data to verify that the nonlinear measures are an intrinsic characteristic of the signals. We then developed and applied a piecewise linear regression model to detrended fluctuation analysis (DFA). Two joinpoints and three scaling intervals were identified: slope α0, at slow time scales, represents a persistent nonstationary long-range correlation, whereas α1 and α2, at middle and fast time scales, respectively, represent long-range power-law correlations, similarly to DFA applied to heart rate variability signals. Of the computed complexity measures, α0 showed statistically significant differences among experimental conditions (p<0.001). Our results suggest that (a) pupil size at constant light condition is characterized by nonlinear dynamics, (b) three well-defined and distinct long-memory processes exist at different time scales, and (c) autonomic stimulation is partially reflected in nonlinear dynamics. PMID:26351899

  2. Correlation Weights in Multiple Regression

    ERIC Educational Resources Information Center

    Waller, Niels G.; Jones, Jeff A.

    2010-01-01

    A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…

  3. Weighting Regressions by Propensity Scores

    ERIC Educational Resources Information Center

    Freedman, David A.; Berk, Richard A.

    2008-01-01

    Regressions can be weighted by propensity scores in order to reduce bias. However, weighting is likely to increase random error in the estimates, and to bias the estimated standard errors downward, even when selection mechanisms are well understood. Moreover, in some cases, weighting will increase the bias in estimated causal parameters. If…
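
    A minimal sketch of the procedure under discussion: a logistic propensity model, inverse-probability-of-treatment weights, and a weighted outcome regression. The data are simulated, the weights are not truncated, and, in line with the caution above, the naive standard errors from the weighted fit should not be trusted.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    n = 2000
    x = rng.normal(size=n)
    e_true = 1 / (1 + np.exp(-x))                  # true propensity depends on x
    t = rng.binomial(1, e_true)
    y = 2.0 + 1.0 * t + 1.5 * x + rng.normal(size=n)

    ps = sm.Logit(t, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))
    w = t / ps + (1 - t) / (1 - ps)                # inverse probability of treatment weights

    wls = sm.WLS(y, sm.add_constant(t), weights=w).fit()
    print(wls.params)                              # weighted estimate of the treatment effect
    # note: wls.bse is not trustworthy here; use a robust or bootstrap variance instead
    ```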

  4. Multiple Regression: A Leisurely Primer.

    ERIC Educational Resources Information Center

    Daniel, Larry G.; Onwuegbuzie, Anthony J.

    Multiple regression is a useful statistical technique when the researcher is considering situations in which variables of interest are theorized to be multiply caused. It may also be useful in those situations in which the researchers is interested in studies of predictability of phenomena of interest. This paper provides an introduction to…

  5. Cactus: An Introduction to Regression

    ERIC Educational Resources Information Center

    Hyde, Hartley

    2008-01-01

    When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…

  6. Ridge Regression for Interactive Models.

    ERIC Educational Resources Information Center

    Tate, Richard L.

    1988-01-01

    An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are favorable to…

  7. Quantile Regression with Censored Data

    ERIC Educational Resources Information Center

    Lin, Guixian

    2009-01-01

    The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…

  8. Logistic regression: a brief primer.

    PubMed

    Stoltzfus, Jill C

    2011-10-01

    Regression techniques are versatile in their application to medical research because they can measure associations, predict outcomes, and control for confounding variable effects. As one such technique, logistic regression is an efficient and powerful way to analyze the effect of a group of independent variables on a binary outcome by quantifying each independent variable's unique contribution. Using components of linear regression reflected in the logit scale, logistic regression iteratively identifies the strongest linear combination of variables with the greatest probability of detecting the observed outcome. Important considerations when conducting logistic regression include selecting independent variables, ensuring that relevant assumptions are met, and choosing an appropriate model building strategy. For independent variable selection, one should be guided by such factors as accepted theory, previous empirical investigations, clinical considerations, and univariate statistical analyses, with acknowledgement of potential confounding variables that should be accounted for. Basic assumptions that must be met for logistic regression include independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers. Additionally, there should be an adequate number of events per independent variable to avoid an overfit model, with commonly recommended minimum "rules of thumb" ranging from 10 to 20 events per covariate. Regarding model building strategies, the three general types are direct/standard, sequential/hierarchical, and stepwise/statistical, with each having a different emphasis and purpose. Before reaching definitive conclusions from the results of any of these methods, one should formally quantify the model's internal validity (i.e., replicability within the same data set) and external validity (i.e., generalizability beyond the current sample). The resulting logistic regression model
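
    A minimal sketch tying two of the recommendations above together: checking events per variable against the commonly quoted 10 to 20 rule of thumb before fitting, then fitting a standard logistic model and reporting odds ratios. The data and the choice of five candidate variables are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(10)
    n, k = 400, 5
    X = rng.normal(size=(n, k))                            # candidate independent variables
    p = 1 / (1 + np.exp(-(-1.0 + X[:, 0] + 0.5 * X[:, 1])))
    y = rng.binomial(1, p)                                 # binary outcome

    events = min(y.sum(), n - y.sum())                     # count of the rarer outcome class
    print("events per variable:", events / k)              # compare against the 10-20 rule of thumb

    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    print(np.exp(fit.params))                              # odds ratios (first entry is the intercept)
    ```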

  9. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program

  10. A Second Generation Nonlinear Factor Analysis.

    ERIC Educational Resources Information Center

    Etezadi-Amoli, Jamshid; McDonald, Roderick P.

    1983-01-01

    Nonlinear common factor models with polynomial regression functions, including interaction terms, are fitted by simultaneously estimating the factor loadings and common factor scores, using maximum likelihood and least squares methods. A Monte Carlo study gives support to a conjecture about the form of the distribution of the likelihood ratio…

  11. Non-crossing weighted kernel quantile regression with right censored data.

    PubMed

    Bang, Sungwan; Eo, Soo-Heang; Cho, Yong Mee; Jhun, Myoungshic; Cho, HyungJun

    2016-01-01

    Regarding survival data analysis in regression modeling, multiple conditional quantiles are useful summary statistics to assess covariate effects on survival times. In this study, we consider an estimation problem of multiple nonlinear quantile functions with right censored survival data. To account for censoring in estimating a nonlinear quantile function, weighted kernel quantile regression (WKQR) has been developed by using the kernel trick and inverse-censoring-probability weights. However, the individually estimated quantile functions based on the WKQR often cross each other and consequently violate the basic properties of quantiles. To avoid this problem of quantile crossing, we propose the non-crossing weighted kernel quantile regression (NWKQR), which estimates multiple nonlinear conditional quantile functions simultaneously by enforcing the non-crossing constraints on kernel coefficients. The numerical results are presented to demonstrate the competitive performance of the proposed NWKQR over the WKQR.

  12. Quantile Regression With Measurement Error

    PubMed Central

    Wei, Ying; Carroll, Raymond J.

    2010-01-01

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802

  13. Precision and Recall for Regression

    NASA Astrophysics Data System (ADS)

    Torgo, Luis; Ribeiro, Rita

    Cost sensitive prediction is a key task in many real world applications. Most existing research in this area deals with classification problems. This paper addresses a related regression problem: the prediction of rare extreme values of a continuous variable. These values are often regarded as outliers and removed from posterior analysis. However, for many applications (e.g. in finance, meteorology, biology, etc.) these are the key values that we want to accurately predict. Any learning method obtains models by optimizing some preference criteria. In this paper we propose new evaluation criteria that are more adequate for these applications. We describe a generalization for regression of the concepts of precision and recall often used in classification. Using these new evaluation metrics we are able to focus the evaluation of predictive models on the cases that really matter for these applications. Our experiments indicate the advantages of the use of these new measures when comparing predictive models in the context of our target applications.

  14. Sliced Inverse Regression for Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Li-Sue

    1995-11-01

    In this thesis, general nonlinear models for time series data are considered. A basic form is $x_t = f(\beta_1^T X_{t-1}, \beta_2^T X_{t-1}, \ldots, \beta_k^T X_{t-1}, \varepsilon_t)$, where $x_t$ is the observed time series, $X_t$ is the vector of the first $d$ time lags, $(x_t, x_{t-1}, \ldots, x_{t-d+1})$, $f$ is an unknown function, the $\beta_i$ are unknown vectors, and the $\varepsilon_t$ are independently distributed. Special cases include AR and TAR models. We investigate the feasibility of applying SIR/PHD (Li 1990, 1991) (the sliced inverse regression and principal Hessian methods) in estimating the $\beta_i$. PCA (principal component analysis) is brought in to check one critical condition for SIR/PHD. Through simulation and a study of three well-known data sets (the Canadian lynx, U.S. unemployment rate, and sunspot numbers), we demonstrate how SIR/PHD can effectively retrieve the interesting low-dimensional structures in time series data.

  15. Deep Wavelet Scattering for Quantum Energy Regression

    NASA Astrophysics Data System (ADS)

    Hirn, Matthew

    Physical functionals are usually computed as solutions of variational problems or from solutions of partial differential equations, which may require huge computations for complex systems. Quantum chemistry calculations of ground state molecular energies are such an example. Indeed, if $x$ is a quantum molecular state, then the ground state energy $E_0(x)$ is the minimum eigenvalue solution of the time independent Schrödinger equation, which is computationally intensive for large systems. Machine learning algorithms do not simulate the physical system but estimate solutions by interpolating values provided by a training set of known examples $\{(x_i, E_0(x_i))\}_{i \le n}$. However, precise interpolations may require a number of examples that is exponential in the system dimension, and are thus intractable. This curse of dimensionality may be circumvented by computing interpolations in smaller approximation spaces, which take advantage of physical invariants. Linear regressions of $E_0$ over a dictionary $\Phi = \{\phi_k\}_k$ compute an approximation $\tilde{E}_0$ as $\tilde{E}_0(x) = \sum_k w_k \phi_k(x)$, where the weights $\{w_k\}_k$ are selected to minimize the error between $E_0$ and $\tilde{E}_0$ on the training set. The key to such a regression approach then lies in the design of the dictionary $\Phi$. It must be intricate enough to capture the essential variability of $E_0(x)$ over the molecular states $x$ of interest, while simple enough so that evaluation of $\Phi(x)$ is significantly less intensive than a direct quantum mechanical computation (or approximation) of $E_0(x)$. In this talk we present a novel dictionary $\Phi$ for the regression of quantum mechanical energies based on the scattering transform of an intermediate, approximate electron density representation $\rho_x$ of the state $x$. The scattering transform has the architecture of a deep convolutional network, composed of an alternating sequence of linear filters and nonlinear maps. Whereas in many deep learning tasks the linear filters are learned from the training data, here

  16. New Nonlinear Multigrid Analysis

    NASA Technical Reports Server (NTRS)

    Xie, Dexuan

    1996-01-01

    The nonlinear multigrid is an efficient algorithm for solving the system of nonlinear equations arising from the numerical discretization of nonlinear elliptic boundary problems. In this paper, we present a new nonlinear multigrid analysis as an extension of the linear multigrid theory presented by Bramble. In particular, we prove the convergence of the nonlinear V-cycle method for a class of mildly nonlinear second order elliptic boundary value problems which do not have full elliptic regularity.

  17. [Nonlinear magnetohydrodynamics

    SciTech Connect

    Not Available

    1994-01-01

    Resistive MHD equilibrium, even for small resistivity, differs greatly from ideal equilibrium, as do the dynamical consequences of its instabilities. The requirement, imposed by Faraday's law, that time independent magnetic fields imply curl-free electric fields, greatly restricts the electric fields allowed inside a finite-resistivity plasma. If there is no flow and the implications of Ohm's law are taken into account (and they need not be, for ideal equilibria), the electric field must equal the resistivity times the current density. The vanishing of the divergence of the current density then provides a partial differential equation which, together with boundary conditions, uniquely determines the scalar potential, the electric field, and the current density, for any given resistivity profile. The situation parallels closely that of driven shear flows in hydrodynamics, in that while dissipative steady states are somewhat more complex than ideal ones, there are vastly fewer of them to consider. Seen in this light, the vast majority of ideal MHD equilibria are just irrelevant, incapable of being set up in the first place. The steady state whose stability thresholds and nonlinear behavior needs to be investigated ceases to be an arbitrary ad hoc exercise dependent upon the whim of the investigator, but is determined by boundary conditions and choice of resistivity profile.

  18. Comparison Between Linear and Non-parametric Regression Models for Genome-Enabled Prediction in Wheat

    PubMed Central

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-01-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882

  19. Regression analysis of cytopathological data

    SciTech Connect

    Whittemore, A.S.; McLarty, J.W.; Fortson, N.; Anderson, K.

    1982-12-01

    Epithelial cells from the human body are frequently labelled according to one of several ordered levels of abnormality, ranging from normal to malignant. The label of the most abnormal cell in a specimen determines the score for the specimen. This paper presents a model for the regression of specimen scores against continuous and discrete variables, as in host exposure to carcinogens. Application to data and tests for adequacy of model fit are illustrated using sputum specimens obtained from a cohort of former asbestos workers.

  20. Multiatlas Segmentation as Nonparametric Regression

    PubMed Central

    Awate, Suyash P.; Whitaker, Ross T.

    2015-01-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator’s convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528

  1. Variable Selection in ROC Regression

    PubMed Central

    2013-01-01

    Regression models are introduced into the receiver operating characteristic (ROC) analysis to accommodate effects of covariates, such as genes. If many covariates are available, the variable selection issue arises. The traditional induced methodology separately models outcomes of diseased and nondiseased groups; thus, separate application of variable selections to two models will bring barriers in interpretation, due to differences in selected models. Furthermore, in the ROC regression, the accuracy of area under the curve (AUC) should be the focus instead of aiming at the consistency of model selection or the good prediction performance. In this paper, we obtain one single objective function with the group SCAD to select grouped variables, which adapts to popular criteria of model selection, and propose a two-stage framework to apply the focused information criterion (FIC). Some asymptotic properties of the proposed methods are derived. Simulation studies show that the grouped variable selection is superior to separate model selections. Furthermore, the FIC improves the accuracy of the estimated AUC compared with other criteria. PMID:24312135

  2. Adaptive support vector regression for UAV flight control.

    PubMed

    Shin, Jongho; Jin Kim, H; Kim, Youdan

    2011-01-01

    This paper explores an application of support vector regression for adaptive control of an unmanned aerial vehicle (UAV). Unlike neural networks, support vector regression (SVR) generates global solutions, because SVR basically solves quadratic programming (QP) problems. With this advantage, the input-output feedback-linearized inverse dynamic model and the compensation term for the inversion error are identified off-line, which we call I-SVR (inversion SVR) and C-SVR (compensation SVR), respectively. In order to compensate for the inversion error and the unexpected uncertainty, an online adaptation algorithm for the C-SVR is proposed. Then, the stability of the overall error dynamics is analyzed by the uniformly ultimately bounded property in the nonlinear system theory. In order to validate the effectiveness of the proposed adaptive controller, numerical simulations are performed on the UAV model.

  3. Robust and efficient estimation with weighted composite quantile regression

    NASA Astrophysics Data System (ADS)

    Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng

    2016-09-01

    In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.

  4. Practical Session: Multiple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Three exercises are proposed to illustrate multiple linear regression. The first one investigates the influence of several factors on atmospheric pollution. It has been proposed by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 have been proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).

  5. Determination of airplane model structure from flight data by using modified stepwise regression

    NASA Technical Reports Server (NTRS)

    Klein, V.; Batterson, J. G.; Murphy, P. C.

    1981-01-01

    The linear and stepwise regressions are briefly introduced, and then the problem of determining airplane model structure is addressed. The modified stepwise regression (MSR) was constructed to force a linear model for the aerodynamic coefficient first, then add significant nonlinear terms and delete nonsignificant terms from the model. In addition to the statistical criteria in the stepwise regression, the prediction sum of squares (PRESS) criterion and the analysis of residuals were examined for the selection of an adequate model. The procedure is used in examples with simulated and real flight data. It is shown that the MSR performs better than the ordinary stepwise regression and that the technique can also be applied to large-amplitude maneuvers.
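
    The PRESS criterion mentioned above can be computed for any candidate linear model without refitting, using the leave-one-out identity that divides each ordinary residual by one minus its leverage. A minimal sketch on simulated data in which a quadratic term is the candidate nonlinear addition; this illustrates the criterion only, not the paper's aerodynamic models.

    ```python
    import numpy as np

    def press(X, y):
        """Prediction sum of squares for a linear model y ~ X (X includes the intercept)."""
        H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix
        e = y - H @ y                            # ordinary residuals
        loo = e / (1 - np.diag(H))               # leave-one-out residuals
        return np.sum(loo ** 2)

    rng = np.random.default_rng(11)
    x = rng.uniform(-1, 1, 100)
    y = 1 + 2 * x + 0.5 * x**2 + rng.normal(0, 0.2, 100)

    X_lin = np.column_stack([np.ones_like(x), x])          # linear candidate model
    X_quad = np.column_stack([np.ones_like(x), x, x**2])   # adds a nonlinear term
    print(press(X_lin, y), press(X_quad, y))               # lower PRESS favors the quadratic model
    ```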

  6. Getting Straight: Everything You Always Wanted to Know about the Title I Regression Model and Curvilinearity.

    ERIC Educational Resources Information Center

    Echternacht, Gary; Swinton, Spencer

    Title I evaluations using the RMC Model C design depend for their interpretation on the assumption that the regression of posttest on pretest is linear across the cut score level when there is no treatment; but there are many instances where nonlinearities may occur. If one applies the analysis of covariance, or model C analysis, large errors may…

  7. Nonlinear Acoustics in Fluids

    NASA Astrophysics Data System (ADS)

    Lauterborn, Werner; Kurz, Thomas; Akhatov, Iskander

    At high sound intensities or over long propagation distances in fluids with sufficiently low damping, acoustic phenomena become nonlinear. This chapter focuses on nonlinear acoustic wave properties in gases and liquids. The origin of nonlinearity, equations of state, simple nonlinear waves, nonlinear acoustic wave equations, shock-wave formation, and interaction of waves are presented and discussed. Tables are given for the nonlinearity parameter B/A for water and a range of organic liquids, liquid metals and gases. Acoustic cavitation with its nonlinear bubble oscillations, pattern formation and sonoluminescence (light from sound) are modern examples of nonlinear acoustics. The language of nonlinear dynamics needed for understanding chaotic dynamics and acoustic chaotic systems is introduced.

  8. Semiparametric regression during 2003–2007*

    PubMed Central

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2010-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800

  9. Modeling maximum daily temperature using a varying coefficient regression model

    NASA Astrophysics Data System (ADS)

    Li, Han; Deng, Xinwei; Kim, Dong-Yun; Smith, Eric P.

    2014-04-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature. A good predictive model for daily maximum temperature is required because daily maximum temperature is an important measure for predicting survival of temperature sensitive fish. To appropriately model the strong relationship between water and air temperatures at a daily time step, it is important to incorporate information related to the time of the year into the modeling. In this work, a time-varying coefficient model is used to study the relationship between air temperature and water temperature. The time-varying coefficient model enables dynamic modeling of the relationship, and can be used to understand how the air-water temperature relationship varies over time. The proposed model is applied to 10 streams in Maryland, West Virginia, Virginia, North Carolina, and Georgia using daily maximum temperatures. It provides a better fit and better predictions than those produced by a simple linear regression model or a nonlinear logistic model.
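
    A minimal way to see what a time-varying coefficient fit does is to let the intercept and the air-temperature slope vary over the year through a small Fourier basis in day-of-year and estimate everything by least squares, as sketched below. This is only a rough stand-in for the model in the paper (which may use a different smoother and real stream data); all numbers are simulated.

```python
# Time-varying coefficient sketch: water temperature regressed on air temperature
# with intercept and slope allowed to vary smoothly over the year.
import numpy as np

rng = np.random.default_rng(2)
day = np.arange(1, 366)
air = 15 + 10 * np.sin(2 * np.pi * (day - 110) / 365) + rng.normal(0, 3, day.size)
# Synthetic water temperature whose sensitivity to air temperature varies seasonally.
true_slope = 0.5 + 0.3 * np.cos(2 * np.pi * day / 365)
water = 5 + true_slope * air + rng.normal(0, 1, day.size)

# Fourier basis in day-of-year, used for both the intercept and the slope.
s, c = np.sin(2 * np.pi * day / 365), np.cos(2 * np.pi * day / 365)
basis = np.column_stack([np.ones_like(day, dtype=float), s, c])
X = np.column_stack([basis, basis * air[:, None]])   # [b0(t) terms | b1(t)*air terms]

coef, *_ = np.linalg.lstsq(X, water, rcond=None)
slope_t = basis @ coef[3:]                           # estimated time-varying slope b1(t)
print("Estimated slope on day 1 and day 180:", slope_t[0], slope_t[179])
```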

  10. Developmental Regression in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Rogers, Sally J.

    2004-01-01

    The occurrence of developmental regression in autism is one of the more puzzling features of this disorder. Although several studies have documented the validity of parental reports of regression using home videos, accumulating data suggest that most children who demonstrate regression also demonstrated previous, subtle, developmental differences.…

  11. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  12. Regression Analysis by Example. 5th Edition

    ERIC Educational Resources Information Center

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  13. Bayesian Unimodal Density Regression for Causal Inference

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2011-01-01

    Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…

  14. Standards for Standardized Logistic Regression Coefficients

    ERIC Educational Resources Information Center

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  15. Streamflow forecasting using functional regression

    NASA Astrophysics Data System (ADS)

    Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.

    2016-07-01

    Streamflow, as a natural phenomenon, is continuous in time and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of points (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They allow consideration of the whole process instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.

  16. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
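
    The sketch below shows the basic mechanics in Python with statsmodels: fit a quantile regression of the response on a group indicator at an upper quantile, take the confidence interval for the group coefficient, and check whether it lies inside a pre-specified equivalence region. The data and the equivalence bound are invented, and the one- versus two-tailed distinctions discussed above are not elaborated.

```python
# Equivalence-style check on an upper quantile via quantile regression (statsmodels).
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)
n = 120
group = np.repeat([0, 1], n // 2)                   # 0 = reference site, 1 = treated site
# Heterogeneous data: the treated group differs mainly in its upper tail.
y = rng.lognormal(mean=1.0 + 0.15 * group, sigma=0.6 + 0.2 * group, size=n)

X = sm.add_constant(group.astype(float))
res = QuantReg(y, X).fit(q=0.90)                    # difference at the 90th percentile
lo, hi = res.conf_int(alpha=0.10)[1]                # 90% CI for the group effect

equiv = 2.0                                         # hypothetical equivalence bound
print(f"90th-percentile difference: {res.params[1]:.2f}  CI: ({lo:.2f}, {hi:.2f})")
print("Equivalent at this quantile:", (-equiv < lo) and (hi < equiv))
```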

  17. Insulin resistance: regression and clustering.

    PubMed

    Yoon, Sangho; Assimes, Themistocles L; Quertermous, Thomas; Hsiao, Chin-Fu; Chuang, Lee-Ming; Hwu, Chii-Min; Rajaratnam, Bala; Olshen, Richard A

    2014-01-01

    In this paper we try to define insulin resistance (IR) precisely for a group of Chinese women. Our definition deliberately does not depend upon body mass index (BMI) or age, although in other studies, with particular random effects models quite different from models used here, BMI accounts for a large part of the variability in IR. We accomplish our goal through application of Gauss mixture vector quantization (GMVQ), a technique for clustering that was developed for application to lossy data compression. Defining data come from measurements that play major roles in medical practice. A precise statement of what the data are is in Section 1. Their family structures are described in detail. They concern levels of lipids and the results of an oral glucose tolerance test (OGTT). We apply GMVQ to residuals obtained from regressions of outcomes of an OGTT and lipids on functions of age and BMI that are inferred from the data. A bootstrap procedure developed for our family data supplemented by insights from other approaches leads us to believe that two clusters are appropriate for defining IR precisely. One cluster consists of women who are IR, and the other of women who seem not to be. Genes and other features are used to predict cluster membership. We argue that prediction with "main effects" is not satisfactory, but prediction that includes interactions may be. PMID:24887437

  18. Time series regression model for infectious disease and weather.

    PubMed

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context.
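
    A minimal version of the modified time series regression described above might look like the following: weekly counts regressed on the logarithm of the previous week's counts (a proxy for dependence due to contagion), a weather term, seasonal harmonics, and a negative binomial family for overdispersion. Everything here is simulated and simplified; distributed lag non-linear terms and the susceptible-pool adjustment are omitted.

```python
# Time series regression sketch for weekly infectious-disease counts and weather.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
weeks = 260
temp = 20 + 8 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 2, weeks)
cases = np.empty(weeks)
cases[0] = 10
for t in range(1, weeks):
    mu = np.exp(0.5 + 0.6 * np.log(cases[t - 1] + 1) + 0.05 * (temp[t] - 20))
    cases[t] = rng.negative_binomial(n=5, p=5 / (5 + mu))   # overdispersed counts

y = cases[1:]
week = np.arange(1, weeks)
X = sm.add_constant(np.column_stack([
    np.log(cases[:-1] + 1),            # lagged log cases (proxy for contagion)
    temp[1:],                          # same-week temperature
    np.sin(2 * np.pi * week / 52),     # seasonality adjustment
    np.cos(2 * np.pi * week / 52),
]))

fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(fit.params)                      # the temperature coefficient is the weather term
```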

  19. Fully Regressive Melanoma: A Case Without Metastasis.

    PubMed

    Ehrsam, Eric; Kallini, Joseph R; Lebas, Damien; Khachemoune, Amor; Modiano, Philippe; Cotten, Hervé

    2016-08-01

    Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of host immune response. Although 10 to 35 percent of cases of cutaneous melanomas may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis on a patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418

  20. Developmental regression in autism spectrum disorder

    PubMed Central

    Al Backer, Nouf Backer

    2015-01-01

    The occurrence of developmental regression in autism spectrum disorder (ASD) is one of the most puzzling phenomena of this disorder. Little is known about the nature and mechanism of developmental regression in ASD. About one-third of young children with ASD lose some skills during the preschool period, usually speech, but sometimes nonverbal communication, social, or play skills as well. There is considerable evidence suggesting that most children who demonstrate regression also had previous, subtle developmental differences. It is difficult to predict the prognosis of autistic children with developmental regression. It seems that the earlier development of social, language, and attachment behaviors followed by regression does not predict the later recovery of skills or better developmental outcomes. The underlying mechanisms that lead to regression in autism are unknown. The role of subclinical epilepsy in the developmental regression of children with autism remains unclear. PMID:27493417

  1. Nonlinear Hysteretic Torsional Waves

    NASA Astrophysics Data System (ADS)

    Cabaret, J.; Béquin, P.; Theocharis, G.; Andreev, V.; Gusev, V. E.; Tournat, V.

    2015-07-01

    We theoretically study and experimentally report the propagation of nonlinear hysteretic torsional pulses in a vertical granular chain made of cm-scale, self-hanged magnetic beads. As predicted by contact mechanics, the torsional coupling between two beads is found to be nonlinear hysteretic. This results in a nonlinear pulse distortion essentially different from the distortion predicted by classical nonlinearities and in a complex dynamic response depending on the history of the wave particle angular velocity. Both are consistent with the predictions of purely hysteretic nonlinear elasticity and the Preisach-Mayergoyz hysteresis model, providing the opportunity to study the phenomenon of nonlinear dynamic hysteresis in the absence of other types of material nonlinearities. The proposed configuration reveals a plethora of interesting phenomena including giant amplitude-dependent attenuation, short-term memory, as well as dispersive properties. Thus, it could find interesting applications in nonlinear wave control devices such as strong amplitude-dependent filters.

  2. A nonlinear oscillator

    SciTech Connect

    Tomlin, R.

    1990-01-27

    A nonlinear oscillator design was imported from Cornell, modified, and built for the purpose of simulating the chaotic states of a forced pendulum. Similar circuits have been investigated during the recent explosion of interest in nonlinear dynamics.

  3. LRGS: Linear Regression by Gibbs Sampling

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2016-02-01

    LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
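
    For readers unfamiliar with the general idea, the sketch below is a bare-bones Gibbs sampler for ordinary Bayesian simple linear regression, alternately drawing the coefficients and the error variance from their conditional distributions. It is far simpler than LRGS itself, which additionally handles measurement errors in all quantities, intrinsic scatter, multiple response variables, and a Dirichlet process prior on the covariates.

```python
# Bare-bones Gibbs sampler for Bayesian simple linear regression (flat priors on
# the coefficients, scaled inverse chi-square style update for the variance).
import numpy as np

rng = np.random.default_rng(5)
n = 80
x = rng.normal(size=n)
y = 1.0 + 2.5 * x + rng.normal(0, 0.8, n)
X = np.column_stack([np.ones(n), x])

n_iter, burn = 5000, 1000
sigma2 = 1.0
samples = np.empty((n_iter, 3))
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

for it in range(n_iter):
    # beta | sigma2, y  ~  Normal(beta_hat, sigma2 * (X'X)^-1)
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    # sigma2 | beta, y  sampled via a gamma draw on the precision
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(shape=n / 2.0, scale=2.0 / (resid @ resid))
    samples[it] = [beta[0], beta[1], sigma2]

post = samples[burn:]
print("Posterior means (intercept, slope, sigma^2):", post.mean(axis=0))
```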

  4. Quantile regression applied to spectral distance decay

    USGS Publications Warehouse

    Rocchini, D.; Cade, B.S.

    2008-01-01

    Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.

  5. Regression Calibration with Heteroscedastic Error Variance

    PubMed Central

    Spiegelman, Donna; Logan, Roger; Grove, Douglas

    2011-01-01

    The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses’ Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice. PMID:22848187
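
    The basic (homoscedastic) regression calibration step that this estimator builds on can be sketched as follows: in a validation subset where the gold standard is observed, model the true covariate given its surrogate, predict the covariate for everyone, and rerun the primary regression on the predicted values. The heteroscedastic-variance correction proposed in the paper is not reproduced, and the data and names below are simulated.

```python
# Standard regression calibration sketch for a logistic model with a mismeasured exposure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2000
x_true = rng.normal(0, 1, n)                       # gold-standard exposure
w = x_true + rng.normal(0, 0.8, n)                 # error-prone surrogate
p = 1 / (1 + np.exp(-(-1.0 + 0.7 * x_true)))
y = rng.binomial(1, p)

valid = np.arange(n) < 300                         # validation subset with X observed

# Step 1: calibration model E[X | W] fit in the validation data.
cal = sm.OLS(x_true[valid], sm.add_constant(w[valid])).fit()
x_hat = cal.predict(sm.add_constant(w))            # predicted exposure for everyone

# Step 2: primary logistic regression on the calibrated exposure.
naive = sm.Logit(y, sm.add_constant(w)).fit(disp=0)
corrected = sm.Logit(y, sm.add_constant(x_hat)).fit(disp=0)
print("Naive log-odds ratio:     ", naive.params[1])
print("Calibrated log-odds ratio:", corrected.params[1])
```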

  6. Process modeling with the regression network.

    PubMed

    van der Walt, T; Barnard, E; van Deventer, J

    1995-01-01

    A new connectionist network topology called the regression network is proposed. The structural and underlying mathematical features of the regression network are investigated. Emphasis is placed on the intricacies of the optimization process for the regression network and some measures to alleviate these difficulties of optimization are proposed and investigated. The ability of the regression network algorithm to perform either nonparametric or parametric optimization, as well as a combination of both, is also highlighted. It is further shown how the regression network can be used to model systems which are poorly understood on the basis of sparse data. A semi-empirical regression network model is developed for a metallurgical processing operation (a hydrocyclone classifier) by building mechanistic knowledge into the connectionist structure of the regression network model. Poorly understood aspects of the process are provided for by use of nonparametric regions within the structure of the semi-empirical connectionist model. The performance of the regression network model is compared to the corresponding generalization performance results obtained by some other nonparametric regression techniques.

  7. Hybrid fuzzy regression with trapezoidal fuzzy data

    NASA Astrophysics Data System (ADS)

    Razzaghnia, T.; Danesh, S.; Maleki, A.

    2011-12-01

    This research deals with a method for hybrid fuzzy least-squares regression. The extension of symmetric triangular fuzzy coefficients to asymmetric trapezoidal fuzzy coefficients is considered an effective measure for removing unnecessary fuzziness from the linear fuzzy model. First, a trapezoidal fuzzy variable is applied to derive a bivariate regression model. Next, normal equations are formulated to solve the four parts of the hybrid regression coefficients. The model is then extended to multiple regression analysis. Finally, the method is compared with Y.-H. O. Chang's model.

  8. [From clinical judgment to linear regression model].

    PubMed

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we assume that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease in "Y" that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R(2)) indicates the importance of the independent variables in the outcome.
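
    A short numerical illustration of the quantities described above (the constant "a", the slope "b", and the coefficient of determination) on simulated data:

```python
# Fit Y = a + bX by least squares and compute R^2 on simulated data.
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(50, 10, 100)                  # e.g. a clinical measurement
y = 10 + 0.4 * x + rng.normal(0, 5, 100)     # outcome to be predicted

b, a = np.polyfit(x, y, deg=1)               # slope and intercept of the line
y_hat = a + b * x
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Y = {a:.2f} + {b:.2f} X,  R^2 = {r2:.2f}")
```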

  9. Geodesic least squares regression on information manifolds

    SciTech Connect

    Verdoolaege, Geert

    2014-12-05

    We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.

  10. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    ERIC Educational Resources Information Center

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…

  11. Using GA-Ridge regression to select hydro-geological parameters influencing groundwater pollution vulnerability.

    PubMed

    Ahn, Jae Joon; Kim, Young Min; Yoo, Keunje; Park, Joonhong; Oh, Kyong Joo

    2012-11-01

    For groundwater conservation and management, it is important to accurately assess groundwater pollution vulnerability. This study proposed an integrated model using ridge regression and a genetic algorithm (GA) to effectively select the major hydro-geological parameters influencing groundwater pollution vulnerability in an aquifer. The GA-Ridge regression method determined that depth to water, net recharge, topography, and the impact of vadose zone media were the hydro-geological parameters that influenced trichloroethene pollution vulnerability in a Korean aquifer. When using these selected hydro-geological parameters, the accuracy was improved for various statistical nonlinear and artificial intelligence (AI) techniques, such as multinomial logistic regression, decision trees, artificial neural networks, and case-based reasoning. These results provide a proof of concept that the GA-Ridge regression is effective at determining influential hydro-geological parameters for the pollution vulnerability of an aquifer, and in turn, improves the AI performance in assessing groundwater pollution vulnerability.

  12. [Local Regression Algorithm Based on Net Analyte Signal and Its Application in Near Infrared Spectral Analysis].

    PubMed

    Zhang, Hong-guang; Lu, Jian-gang

    2016-02-01

    To overcome the problems of significant differences among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method is first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of a sample and the net analyte signals of the calibration samples is then calculated and used as a similarity index. According to the defined similarity index, a local calibration set is individually selected for each unknown sample. Finally, a local PLS regression model is built on each local calibration set for each unknown sample. The proposed method was applied to a set of near infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
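
    The overall flow of such a local strategy can be sketched as follows: for each unknown sample, rank the calibration spectra by a similarity index, keep the closest ones as the local calibration set, and fit a small PLS model on that subset only. For brevity the sketch uses plain Euclidean distance between raw spectra as the similarity index rather than the net analyte signal distance proposed in the paper, and all data are simulated.

```python
# Local PLS sketch: nearest calibration samples selected per unknown spectrum.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)
n_cal, n_new, n_wl = 120, 5, 50
spectra = rng.normal(size=(n_cal, n_wl))
prop = spectra[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_cal)   # reference property
new_spectra = rng.normal(size=(n_new, n_wl))

k = 30                                                   # size of each local calibration set
preds = []
for s in new_spectra:
    d = np.linalg.norm(spectra - s, axis=1)              # similarity index (Euclidean here)
    local = np.argsort(d)[:k]                            # nearest calibration samples
    pls = PLSRegression(n_components=3).fit(spectra[local], prop[local])
    preds.append(float(pls.predict(s.reshape(1, -1))[0, 0]))

print("Local PLS predictions:", np.round(preds, 2))
```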

  13. Suppression Situations in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…

  14. A Practical Guide to Regression Discontinuity

    ERIC Educational Resources Information Center

    Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard

    2012-01-01

    Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…

  15. Dealing with Outliers: Robust, Resistant Regression

    ERIC Educational Resources Information Center

    Glasser, Leslie

    2007-01-01

    Least-squares linear regression is the best of statistics and it is the worst of statistics. The reasons for this paradoxical claim, arising from possible inapplicability of the method and the excessive influence of "outliers", are discussed and substitute regression methods based on median selection, which is both robust and resistant, are…

  16. Cross-Validation, Shrinkage, and Multiple Regression.

    ERIC Educational Resources Information Center

    Hynes, Kevin

    One aspect of multiple regression--the shrinkage of the multiple correlation coefficient on cross-validation is reviewed. The paper consists of four sections. In section one, the distinction between a fixed and a random multiple regression model is made explicit. In section two, the cross-validation paradigm and an explanation for the occurrence…

  17. Application and Interpretation of Hierarchical Multiple Regression.

    PubMed

    Jeong, Younhee; Jung, Mi Jung

    2016-01-01

    The authors reported the association between motivation and self-management behavior of individuals with chronic low back pain after adjusting for control variables using hierarchical multiple regression. This article describes the details of that hierarchical regression, applying the actual data used in the original article, including how to test assumptions, run the statistical tests, and report the results. PMID:27648796

  18. Regression Analysis: Legal Applications in Institutional Research

    ERIC Educational Resources Information Center

    Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.

    2008-01-01

    This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…

  19. A Simulation Investigation of Principal Component Regression.

    ERIC Educational Resources Information Center

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  20. Incremental Net Effects in Multiple Regression

    ERIC Educational Resources Information Center

    Lipovetsky, Stan; Conklin, Michael

    2005-01-01

    A regular problem in regression analysis is estimating the comparative importance of the predictors in the model. This work considers the 'net effects', or shares of the predictors in the coefficient of multiple determination, which is a widely used characteristic of the quality of a regression model. Estimation of the net effects can be a…

  1. Illustration of Regression towards the Means

    ERIC Educational Resources Information Center

    Govindaraju, K.; Haslett, S. J.

    2008-01-01

    This article presents a procedure for generating a sequence of data sets which will yield exactly the same fitted simple linear regression equation y = a + bx. Unless rescaled, the generated data sets will have progressively smaller variability for the two variables, and the associated response and covariate will "regress" towards their…

  2. Regression Analysis and the Sociological Imagination

    ERIC Educational Resources Information Center

    De Maio, Fernando

    2014-01-01

    Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.

  3. Three-Dimensional Modeling in Linear Regression.

    ERIC Educational Resources Information Center

    Herman, James D.

    Linear regression examines the relationship between one or more independent (predictor) variables and a dependent variable. By using a particular formula, regression determines the weights needed to minimize the error term for a given set of predictors. With one predictor variable, the relationship between the predictor and the dependent variable…

  4. Symplectic geometry spectrum regression for prediction of noisy time series.

    PubMed

    Xie, Hong-Bo; Dokos, Socrates; Sivakumar, Bellie; Mengersen, Kerrie

    2016-05-01

    We present the symplectic geometry spectrum regression (SGSR) technique as well as a regularized method based on SGSR for prediction of nonlinear time series. The main tool of analysis is the symplectic geometry spectrum analysis, which decomposes a time series into the sum of a small number of independent and interpretable components. The key to successful regularization is to damp higher order symplectic geometry spectrum components. The effectiveness of SGSR and its superiority over local approximation using ordinary least squares are demonstrated through prediction of two noisy synthetic chaotic time series (Lorenz and Rössler series), and then tested for prediction of three real-world data sets (Mississippi River flow data and electromyographic and mechanomyographic signal recorded from human body). PMID:27300890

  5. Symplectic geometry spectrum regression for prediction of noisy time series

    NASA Astrophysics Data System (ADS)

    Xie, Hong-Bo; Dokos, Socrates; Sivakumar, Bellie; Mengersen, Kerrie

    2016-05-01

    We present the symplectic geometry spectrum regression (SGSR) technique as well as a regularized method based on SGSR for prediction of nonlinear time series. The main tool of analysis is the symplectic geometry spectrum analysis, which decomposes a time series into the sum of a small number of independent and interpretable components. The key to successful regularization is to damp higher order symplectic geometry spectrum components. The effectiveness of SGSR and its superiority over local approximation using ordinary least squares are demonstrated through prediction of two noisy synthetic chaotic time series (Lorenz and Rössler series), and then tested for prediction of three real-world data sets (Mississippi River flow data and electromyographic and mechanomyographic signal recorded from human body).

  6. Nodule Regression in Adults With Nodular Gastritis

    PubMed Central

    Kim, Ji Wan; Lee, Sun-Young; Kim, Jeong Hwan; Sung, In-Kyung; Park, Hyung Seok; Shim, Chan-Sup; Han, Hye Seung

    2015-01-01

    Background Nodular gastritis (NG) is associated with the presence of Helicobacter pylori infection, but there are controversies on nodule regression in adults. The aim of this study was to analyze the factors that are related to the nodule regression in adults diagnosed as NG. Methods Adult population who were diagnosed as NG with H. pylori infection during esophagogastroduodenoscopy (EGD) at our center were included. Changes in the size and location of the nodules, status of H. pylori infection, upper gastrointestinal (UGI) symptom, EGD and pathology findings were analyzed between the initial and follow-up tests. Results Of the 117 NG patients, 66.7% (12/18) of the eradicated NG patients showed nodule regression after H. pylori eradication, whereas 9.9% (9/99) of the non-eradicated NG patients showed spontaneous nodule regression without H. pylori eradication (P < 0.001). Nodule regression was more frequent in NG patients with antral nodule location (P = 0.010), small-sized nodules (P = 0.029), H. pylori eradication (P < 0.001), UGI symptom (P = 0.007), and a long-term follow-up period (P = 0.030). On the logistic regression analysis, nodule regression was inversely correlated with the persistent H. pylori infection on the follow-up test (odds ratio (OR): 0.020, 95% confidence interval (CI): 0.003 - 0.137, P < 0.001) and short-term follow-up period < 30.5 months (OR: 0.140, 95% CI: 0.028 - 0.700, P = 0.017). Conclusions In adults with NG, H. pylori eradication is the most significant factor associated with nodule regression. Long-term follow-up period is also correlated with nodule regression, but is less significant than H. pylori eradication. Our findings suggest that H. pylori eradication should be considered to promote nodule regression in NG patients with H. pylori infection.

  7. Diamond nonlinear photonics

    NASA Astrophysics Data System (ADS)

    Hausmann, B. J. M.; Bulu, I.; Venkataraman, V.; Deotare, P.; Lončar, M.

    2014-05-01

    Despite progress towards integrated diamond photonics, studies of optical nonlinearities in diamond have been limited to Raman scattering in bulk samples. Diamond nonlinear photonics, however, could enable efficient, in situ frequency conversion of single photons emitted by diamond's colour centres, as well as stable and high-power frequency microcombs operating at new wavelengths. Both of these applications depend crucially on efficient four-wave mixing processes enabled by diamond's third-order nonlinearity. Here, we have realized a diamond nonlinear photonics platform by demonstrating optical parametric oscillation via four-wave mixing using single-crystal ultrahigh-quality-factor (1 × 106) diamond ring resonators operating at telecom wavelengths. Threshold powers as low as 20 mW are measured, and up to 20 new wavelengths are generated from a single-frequency pump laser. We also report the first measurement of the nonlinear refractive index due to the third-order nonlinearity in diamond at telecom wavelengths.

  8. Nonlinear rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Day, W. B.

    1985-01-01

    The special nonlinearities of the Jeffcott equations in rotordynamics are examined. The immediate application of this analysis is directed toward understanding the excessive vibrations recorded in the LOX pump of the SSME during hot firing ground testing. Deadband, side force and rubbing are three possible sources of inducing nonlinearity in the Jeffcott equations. The present analysis initially reduces these problems to the same mathematical description. A special frequency, named the nonlinear natural frequency, is defined and used to develop the solutions of the nonlinear Jeffcott equations as asymptotic expansions. This nonlinear natural frequency, which is the ratio of the cross-stiffness and the damping, plays a major role in determining response frequencies. Numerical solutions are included for comparison with the analysis. Also, nonlinear frequency-response tables are made for a typical range of values.

  9. Estimation of treatment effects based on possibly misspecified Cox regression.

    PubMed

    Hattori, Satoshi; Henmi, Masayuki

    2012-10-01

    In randomized clinical trials, a treatment effect on a time-to-event endpoint is often estimated by the Cox proportional hazards model. The maximum partial likelihood estimator does not make sense if the proportional hazard assumption is violated. Xu and O'Quigley (Biostatistics 1:423-439, 2000) proposed an estimating equation, which provides an interpretable estimator for the treatment effect under model misspecification. Namely it provides a consistent estimator for the log-hazard ratio among the treatment groups if the model is correctly specified, and it is interpreted as an average log-hazard ratio over time even if misspecified. However, the method requires the assumption that censoring is independent of treatment group, which is more restricted than that for the maximum partial likelihood estimator and is often violated in practice. In this paper, we propose an alternative estimating equation. Our method provides an estimator of the same property as that of Xu and O'Quigley under the usual assumption for the maximum partial likelihood estimation. We show that our estimator is consistent and asymptotically normal, and derive a consistent estimator of the asymptotic variance. If the proportional hazards assumption holds, the efficiency of the estimator can be improved by applying the covariate adjustment method based on the semiparametric theory proposed by Lu and Tsiatis (Biometrika 95:679-694, 2008). PMID:22527680
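
    For orientation, the quantity being targeted is the treatment coefficient of a standard Cox model fit, which in Python can be obtained with the lifelines package as sketched below. This is the usual maximum partial likelihood estimator, not the authors' alternative estimating equation or the covariate-adjusted efficiency improvement, and the trial data are simulated.

```python
# Standard Cox proportional hazards fit for a two-arm trial (lifelines).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 400
treat = rng.integers(0, 2, n)
time = rng.exponential(scale=np.where(treat == 1, 12.0, 8.0))   # longer survival on treatment
cens = rng.exponential(scale=15.0, size=n)
df = pd.DataFrame({
    "T": np.minimum(time, cens),
    "E": (time <= cens).astype(int),
    "treatment": treat,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
print(cph.summary[["coef", "se(coef)"]])     # log-hazard ratio for treatment
```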

  10. Stationary nonlinear Airy beams

    SciTech Connect

    Lotti, A.; Faccio, D.; Couairon, A.; Papazoglou, D. G.; Panagiotopoulos, P.; Tzortzakis, S.; Abdollahpour, D.

    2011-08-15

    We demonstrate the existence of an additional class of stationary accelerating Airy wave forms that exist in the presence of third-order (Kerr) nonlinearity and nonlinear losses. Numerical simulations and experiments, in agreement with the analytical model, highlight how these stationary solutions sustain the nonlinear evolution of Airy beams. The generic nature of the Airy solution allows extension of these results to other settings, and a variety of applications are suggested.

  11. Organic nonlinear optical materials

    NASA Technical Reports Server (NTRS)

    Umegaki, S.

    1987-01-01

    Recently, it has become clear that organic compounds with delocalized pi electrons show a strong nonlinear optical response. In particular, second-order nonlinear optical constants more than two orders of magnitude larger than those of existing inorganic crystals such as LiNbO3 are often observed at the molecular level. Crystallization of these compounds has been pursued continuously. Organic nonlinear optical crystals have a promising future as materials for applications in applied physics such as photomodulation, optical frequency transformation, opto-bistabilization, and phase conjugation optics. Organic nonlinear optical materials, e.g., urea, O2NC6H4NH2, I, II, are reviewed with 50 references.

  12. Nonlinear optics at interfaces

    SciTech Connect

    Chen, C.K.

    1980-12-01

    Two aspects of surface nonlinear optics are explored in this thesis. The first part is a theoretical and experimental study of nonlinear interaction of surface plasmons and bulk photons at metal-dielectric interfaces. The second part is a demonstration and study of surface enhanced second harmonic generation at rough metal surfaces. A general formulation for nonlinear interaction of surface plasmons at metal-dielectric interfaces is presented and applied to both second and third order nonlinear processes. Experimental results for coherent second and third harmonic generation by surface plasmons and surface coherent anti-Stokes Raman spectroscopy (CARS) are shown to be in good agreement with the theory.

  13. Testing the product of slopes in related regressions.

    PubMed

    Morrell, Christopher H; Shetty, Veena; Phillips, Terry; Arumugam, Thiruma V; Mattson, Mark P; Wan, Ruiqian

    2013-09-01

    A study was conducted of the relationships among neuroprotective factors and cytokines in the brain tissue of mice of different ages that were used to examine the effect of dietary restriction on protection after experimentally induced stroke. It was of interest to assess whether the cross-product of the slopes of pairs of variables vs. age was positive or negative. To accomplish this, the product of the slopes was estimated and tested to determine whether it is significantly different from zero. Since the measurements are taken on the same animals, the models used must account for the non-independence of the measurements within animals. A number of approaches are illustrated. First, a multivariate multiple regression model is employed. Since we are interested in a nonlinear function of the parameters (the product), the delta method is used to obtain the standard error of the estimate of the product. Second, a linear mixed-effects model is fit that allows for the specification of an appropriate correlation structure among repeated measurements. The delta method is again used to obtain the standard error. Finally, a nonlinear mixed-effects approach is taken to fit the linear mixed-effects model and conduct the test. A simulation study investigates the properties of the procedure. PMID:25346580
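
    The delta-method step for the product of two slopes is short enough to sketch directly: if b1 and b2 are the estimated slopes with standard errors s1 and s2, then Var(b1*b2) is approximately b2^2 s1^2 + b1^2 s2^2, plus a covariance term when the slopes are estimated from the same animals. The toy code below treats the slopes as independent, which the article's multivariate and mixed-effects models deliberately do not; the variables and data are invented.

```python
# Delta-method test for the product of two regression slopes (independence assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
age = np.repeat([3, 6, 12, 18, 24], 8).astype(float)          # months
factor_a = 2.0 - 0.05 * age + rng.normal(0, 0.3, age.size)    # e.g. a neuroprotective factor
cytokine = 1.0 + 0.08 * age + rng.normal(0, 0.3, age.size)    # e.g. a cytokine level

fit_a = stats.linregress(age, factor_a)
fit_c = stats.linregress(age, cytokine)

prod = fit_a.slope * fit_c.slope
# Delta method: Var(b1*b2) ~= b2^2 Var(b1) + b1^2 Var(b2)  (covariance term omitted).
se = np.sqrt(fit_c.slope**2 * fit_a.stderr**2 + fit_a.slope**2 * fit_c.stderr**2)
z = prod / se
p = 2 * stats.norm.sf(abs(z))
print(f"product = {prod:.4f}, SE = {se:.4f}, z = {z:.2f}, p = {p:.3g}")
```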

  14. REGRESSION APPROXIMATIONS FOR TRANSPORT MODEL CONSTRAINT SETS IN COMBINED AQUIFER SIMULATION-OPTIMIZATION STUDIES.

    USGS Publications Warehouse

    Alley, William M.

    1986-01-01

    Problems involving the combined use of contaminant transport models and nonlinear optimization schemes can be very expensive to solve. This paper explores the use of transport models with ordinary regression and regression on ranks to develop approximate response functions of concentrations at critical locations as a function of pumping and recharge at decision wells. These response functions combined with other constraints can often be solved very easily and may suggest reasonable starting points for combined simulation-management modeling or even relatively efficient operating schemes in themselves.

  15. A regression technique for evaluation and quantification for water quality parameters from remote sensing data

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Kuo, C. Y.

    1979-01-01

    The objective of this paper is to define optical physics and/or environmental conditions under which the linear multiple-regression should be applicable. An investigation of the signal-response equations is conducted and the concept is tested by application to actual remote sensing data from a laboratory experiment performed under controlled conditions. Investigation of the signal-response equations shows that the exact solution for a number of optical physics conditions is of the same form as a linearized multiple-regression equation, even if nonlinear contributions from surface reflections, atmospheric constituents, or other water pollutants are included. Limitations on achieving this type of solution are defined.

  16. A comparison of regression and regression-kriging for soil characterization using remote sensing imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In precision agriculture regression has been used widely to quantify the relationship between soil attributes and other environmental variables. However, spatial correlation existing in soil samples usually makes the regression model suboptimal. In this study, a regression-kriging method was attemp...

  17. Modelling of filariasis in East Java with Poisson regression and generalized Poisson regression models

    NASA Astrophysics Data System (ADS)

    Darnah

    2016-04-01

    Poisson regression is used when the response variable is count data based on the Poisson distribution. The Poisson distribution assumes equal dispersion. In practice, count data are often over-dispersed or under-dispersed, in which case Poisson regression is inappropriate: it may underestimate the standard errors and overstate the significance of the regression parameters, and consequently give misleading inference about them. This paper suggests the generalized Poisson regression model to handle over-dispersion and under-dispersion in the Poisson regression model. The Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Based on the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. The Poisson regression model exhibits over-dispersion, so the generalized Poisson regression is used. The best generalized Poisson regression model shows that the factor influencing filariasis is the percentage of families who do not have a healthy house. The interpretation of the model is that each additional percentage point of families without a healthy house adds one filariasis patient.
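
    The workflow described above can be sketched in Python with statsmodels: fit a Poisson regression, check the Pearson dispersion statistic, and refit with a generalized Poisson model if the equidispersion assumption fails. The district-level data below are simulated stand-ins for the East Java covariates, and the availability of the GeneralizedPoisson class is assumed for recent statsmodels versions.

```python
# Poisson regression, dispersion check, and generalized Poisson refit (sketch).
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import GeneralizedPoisson

rng = np.random.default_rng(11)
n_dist = 38                                              # e.g. districts
pct_unhealthy_house = rng.uniform(10, 60, n_dist)
X = sm.add_constant(pct_unhealthy_house)
mu = np.exp(0.2 + 0.03 * pct_unhealthy_house)
cases = rng.negative_binomial(n=2, p=2 / (2 + mu))       # deliberately overdispersed counts

pois = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
dispersion = pois.pearson_chi2 / pois.df_resid
print("Pearson dispersion:", round(dispersion, 2))       # >> 1 indicates overdispersion

if dispersion > 1.5:
    gp = GeneralizedPoisson(cases, X).fit(disp=0)
    print(gp.params)                                     # interpreted as in the abstract
```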

  18. The Current and Future Use of Ridge Regression for Prediction in Quantitative Genetics.

    PubMed

    de Vlaming, Ronald; Groenen, Patrick J F

    2015-01-01

    In recent years, there has been a considerable amount of research on the use of regularization methods for inference and prediction in quantitative genetics. Such research mostly focuses on selection of markers and shrinkage of their effects. In this review paper, the use of ridge regression for prediction in quantitative genetics using single-nucleotide polymorphism data is discussed. In particular, we consider (i) the theoretical foundations of ridge regression, (ii) its link to commonly used methods in animal breeding, (iii) the computational feasibility, and (iv) the scope for constructing prediction models with nonlinear effects (e.g., dominance and epistasis). Based on a simulation study we gauge the current and future potential of ridge regression for prediction of human traits using genome-wide SNP data. We conclude that, for outcomes with a relatively simple genetic architecture, given current sample sizes in most cohorts (i.e., N < 10,000) the predictive accuracy of ridge regression is slightly higher than the classical genome-wide association study approach of repeated simple regression (i.e., one regression per SNP). However, both capture only a small proportion of the heritability. Nevertheless, we find evidence that for large-scale initiatives, such as biobanks, sample sizes can be achieved where ridge regression compared to the classical approach improves predictive accuracy substantially.
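
    The contrast drawn above between whole-genome ridge regression and repeated simple regression can be sketched on simulated genotypes as below; the closed-form ridge solution (X'X + lambda*I)^(-1) X'y is used directly. Sample size, marker count, penalty, and effect sizes are arbitrary toy values far smaller than in real cohorts.

```python
# Whole-genome ridge regression vs. one-SNP-at-a-time regression on simulated genotypes.
import numpy as np

rng = np.random.default_rng(12)
n, p, n_causal = 1000, 2000, 100
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)      # 0/1/2 genotype matrix
G -= G.mean(axis=0)                                      # centre each SNP
beta = np.zeros(p)
beta[rng.choice(p, n_causal, replace=False)] = rng.normal(0, 0.07, n_causal)
y = G @ beta + rng.normal(0, 1, n)

train, test = slice(0, 800), slice(800, None)

# Ridge regression: all SNPs fitted jointly with an L2 penalty.
lam = 2000.0
b_ridge = np.linalg.solve(G[train].T @ G[train] + lam * np.eye(p),
                          G[train].T @ y[train])

# Classical approach: one simple regression per SNP.
b_marginal = (G[train] * y[train][:, None]).sum(axis=0) / (G[train] ** 2).sum(axis=0)

def r2(pred, obs):
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("Ridge prediction R^2:  ", round(r2(G[test] @ b_ridge, y[test]), 3))
print("Per-SNP prediction R^2:", round(r2(G[test] @ b_marginal, y[test]), 3))
```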

  19. Regression of altitude-produced cardiac hypertrophy.

    NASA Technical Reports Server (NTRS)

    Sizemore, D. A.; Mcintyre, T. W.; Van Liere, E. J.; Wilson , M. F.

    1973-01-01

    The rate of regression of cardiac hypertrophy with time has been determined in adult male albino rats. The hypertrophy was induced by intermittent exposure to simulated high altitude. The percentage hypertrophy was much greater (46%) in the right ventricle than in the left (16%). The regression could be adequately fitted to a single exponential function with a half-time of 6.73 plus or minus 0.71 days (90% CI). There was no significant difference in the rates of regression for the two ventricles.

  20. Trends in Mean and Variability of Hydrologic Series Using Regression

    NASA Astrophysics Data System (ADS)

    Read, L.; Vogel, R.; Lacombe, G.

    2013-12-01

    Concern for design and prediction under nonstationarity has led to research into trend detection and development of nonstationary probabilistic models. This work introduces a method for using least squares regression to test for trends in the mean and variance, which can be an appropriate tool for water managers and decision makers. Regression has the advantages of (1) ease of application, (2) application for both nonlinear and linear trends, (3) graphic visualization of trends, (4) analytically estimating the power of the trend, and (5) analytically estimating the prediction intervals related to trend extrapolation. Though this general method can be applied for a variety of hydrologic variables, we present a case based on annual maximum flows from the Mekong basin. We outline a generalized method for hypothesis testing and modeling trends for a log normal variable. We also document development of a nonstationary model to assess the impact of trends in both the mean and variance on the future magnitude and frequency of floods in the Mekong basin.
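
    A minimal version of the procedure for a lognormal variable is sketched below: regress the log of the annual maxima on time to test for a trend in the mean, then regress the log of the squared residuals on time to test for a trend in the variability. The series is simulated, not the Mekong data, and the power and prediction-interval calculations described above are omitted.

```python
# Regression-based trend tests for the mean and variability of log annual maxima.
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
year = np.arange(1960, 2011)
t = year - year.mean()
# Lognormal annual maxima with upward trends in both the mean and spread of log Q.
log_q = 7.0 + 0.006 * t + rng.normal(0, 0.25 + 0.002 * (t - t.min()), year.size)

# Trend in the mean of log Q.
mean_fit = stats.linregress(t, log_q)
print(f"Mean trend: {mean_fit.slope:.4f} per year (p = {mean_fit.pvalue:.3f})")

# Trend in variability: regress the log of squared residuals on time.
resid = log_q - (mean_fit.intercept + mean_fit.slope * t)
var_fit = stats.linregress(t, np.log(resid ** 2))
print(f"Variance trend: {var_fit.slope:.4f} per year (p = {var_fit.pvalue:.3f})")
```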

  1. Robust visual tracking via speedup multiple kernel ridge regression

    NASA Astrophysics Data System (ADS)

    Qian, Cheng; Breckon, Toby P.; Li, Hui

    2015-09-01

    Most of the tracking methods attempt to build up feature spaces to represent the appearance of a target. However, limited by the complex structure of the distribution of features, the feature spaces constructed in a linear manner cannot characterize the nonlinear structure well. We propose an appearance model based on kernel ridge regression for visual tracking. Dense sampling is fulfilled around the target image patches to collect the training samples. In order to obtain a kernel space in favor of describing the target appearance, multiple kernel learning is introduced into the selection of kernels. Under the framework, instead of a single kernel, a linear combination of kernels is learned from the training samples to create a kernel space. Resorting to the circulant property of a kernel matrix, a fast interpolate iterative algorithm is developed to seek coefficients that are assigned to these kernels so as to give an optimal combination. After the regression function is learned, all candidate image patches gathered are taken as the input of the function, and the candidate with the maximal response is regarded as the object image patch. Extensive experimental results demonstrate that the proposed method outperforms other state-of-the-art tracking methods.
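
    Stripped of the multiple kernel learning and the circulant-matrix speedups, the core appearance model is ordinary kernel ridge regression from sampled patch features to a response map, as sketched below with scikit-learn and a single fixed RBF kernel; features, responses, and parameter values are simulated stand-ins.

```python
# Kernel ridge regression appearance model (single fixed RBF kernel, scikit-learn).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(14)
# Pretend "features" of 200 sampled image patches and a response map peaked at
# the true target patch, as in regression-based trackers.
features = rng.normal(size=(200, 64))
target_feature = features[42] + 0.05 * rng.normal(size=64)
response = np.exp(-0.5 * np.linalg.norm(features - target_feature, axis=1) ** 2 / 4.0)

krr = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.02).fit(features, response)

# At test time, the candidate patch with the maximal predicted response wins.
candidates = features + 0.1 * rng.normal(size=features.shape)
scores = krr.predict(candidates)
print("Selected candidate index:", int(np.argmax(scores)))
```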

  2. Fuzzy regression modeling for tool performance prediction and degradation detection.

    PubMed

    Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L

    2010-10-01

    In this paper, the viability of using Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem to a simplified linear format in order to further increase the accuracy in prediction and rate of convergence. The efficacy of the proposed FRM is tested through a case study - namely to predict the remaining useful life of a ball nose milling cutter during a dry machining process of hardened tool steel with a hardness of 52-54 HRc. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior as compared with conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.

  3. Nonparametric survival analysis using Bayesian Additive Regression Trees (BART).

    PubMed

    Sparapani, Rodney A; Logan, Brent R; McCulloch, Robert E; Laud, Purushottam W

    2016-07-20

    Bayesian additive regression trees (BART) provide a framework for flexible nonparametric modeling of relationships of covariates to outcomes. Recently, BART models have been shown to provide excellent predictive performance, for both continuous and binary outcomes, and exceeding that of its competitors. Software is also readily available for such outcomes. In this article, we introduce modeling that extends the usefulness of BART in medical applications by addressing needs arising in survival analysis. Simulation studies of one-sample and two-sample scenarios, in comparison with long-standing traditional methods, establish face validity of the new approach. We then demonstrate the model's ability to accommodate data from complex regression models with a simulation study of a nonproportional hazards scenario with crossing survival functions and survival function estimation in a scenario where hazards are multiplicatively modified by a highly nonlinear function of the covariates. Using data from a recently published study of patients undergoing hematopoietic stem cell transplantation, we illustrate the use and some advantages of the proposed method in medical investigations. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26854022

  4. A regression technique for evaluation and quantification for water quality parameters from remote sensing data

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Kuo, C. Y.

    1979-01-01

    The paper attempts to define the optical physics and/or environmental conditions under which linear multiple regression should be applicable. It is reported that investigation of the signal response shows that the exact solution for a number of optical physics conditions is of the same form as a linearized multiple-regression equation, even if nonlinear contributions from surface reflections, atmospheric constituents, or other water pollutants are included. Limitations on achieving this type of solution are defined. Laboratory data are used to demonstrate that the technique is applicable to water mixtures which contain constituents with both linear and nonlinear radiance gradients. Finally, it is concluded that instrument noise, ground-truth placement, and time lapse between remote sensor overpass and water sampling operations are serious barriers to successful use of the technique.

  5. Nonlinear Simulation of the Tooth Enamel Spectrum for EPR Dosimetry

    NASA Astrophysics Data System (ADS)

    Kirillov, V. A.; Dubovsky, S. V.

    2016-07-01

    Software was developed in which initial EPR spectra of tooth enamel are deconvoluted by nonlinear simulation: line shapes and signal amplitudes in the model initial spectrum are calculated, the regression coefficient is evaluated, and the individual spectra are summed. Validation of the software demonstrated that doses calculated with it agreed closely with the applied radiation doses and with the doses reconstructed by the method of additive doses.
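
    A sketch of the kind of nonlinear spectral fit involved, assuming a simple two-component model of first-derivative Gaussian lines (a background signal plus a radiation-induced signal); the line-shape model, magnetic-field axis and parameter values are hypothetical and are not taken from the software described above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def dgauss(B, A, B0, w):
        # First-derivative Gaussian line, a common approximation for EPR line shapes.
        return -A * (B - B0) / w ** 2 * np.exp(-((B - B0) ** 2) / (2 * w ** 2))

    def model(B, A_bg, B_bg, w_bg, A_rad, B_rad, w_rad):
        # Initial spectrum modelled as background (native) signal + radiation-induced signal.
        return dgauss(B, A_bg, B_bg, w_bg) + dgauss(B, A_rad, B_rad, w_rad)

    # Synthetic "measured" tooth-enamel spectrum on a hypothetical field axis (mT).
    B = np.linspace(344.0, 352.0, 400)
    rng = np.random.default_rng(0)
    spec = model(B, 1.0, 348.0, 0.8, 0.4, 347.6, 0.35) + 0.02 * rng.normal(size=B.size)

    p0 = [1.0, 348.0, 0.8, 0.3, 347.6, 0.3]        # starting values for the nonlinear fit
    popt, pcov = curve_fit(model, B, spec, p0=p0)
    dose_proxy = popt[3]                            # fitted radiation-signal amplitude
    print(popt, dose_proxy)
    ```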

  6. Non-linear calibration models for near infrared spectroscopy.

    PubMed

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten

    2014-02-27

    Different calibration techniques are available for spectroscopic applications that show nonlinear behavior. This study presents a comprehensive comparison of different nonlinear calibration techniques: kernel PLS (KPLS), support vector machines (SVM), least-squares SVM (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods to traditional calibration by ridge regression (RR) is also considered. The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting when using nonlinear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small. The LS-SVM is also attractive due to its good predictive performance for both linear and nonlinear calibrations.
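
    A minimal sketch of how such a comparison can be set up, assuming synthetic "spectra" with a mildly nonlinear response in place of the three NIR data sets used in the study; only PLS (the linear benchmark) and GPR are included here, both via scikit-learn.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.model_selection import cross_val_score

    # Synthetic data: 100 samples x 50 "wavelengths" with a mildly nonlinear response.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 50))
    y = X[:, :5].sum(axis=1) + 0.3 * X[:, 0] ** 2 + 0.1 * rng.normal(size=100)

    pls = PLSRegression(n_components=5)
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                                   normalize_y=True)

    # Cross-validated RMSE for the linear benchmark and the nonlinear GPR model.
    for name, model in [("PLS", pls), ("GPR", gpr)]:
        score = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
        print(name, -score.mean())
    ```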

  7. Nonlinear Optics and Applications

    NASA Technical Reports Server (NTRS)

    Abdeldayem, Hossin A. (Editor); Frazier, Donald O. (Editor)

    2007-01-01

    Nonlinear optics is the result of laser beam interaction with materials and started with the advent of lasers in the early 1960s. The field is growing daily and plays a major role in emerging photonic technology. Nonlinear optics plays a major role in many optical applications such as optical signal processing, optical computers, ultrafast switches, ultra-short pulsed lasers, sensors, laser amplifiers, and many others. This special review volume on Nonlinear Optics and Applications is intended for those who want to be aware of the most recent technology. This book presents a survey of the recent advances of nonlinear optical applications. Emphasis will be on novel devices and materials, switching technology, optical computing, and important experimental results. Recent developments in topics which are of historical interest to researchers, and at the same time of potential use in the fields of all-optical communication and computing technologies, are also included. Additionally, a few new related topics which might provoke discussion are presented. The book includes chapters on nonlinear optics and applications; the nonlinear Schrodinger and associated equations that model spatio-temporal propagation; the supercontinuum light source; wideband ultrashort pulse fiber laser sources; lattice fabrication as well as their linear and nonlinear light guiding properties; the second-order EO effect (Pockels), the third-order (Kerr) and thermo-optical effects in optical waveguides and their applications in optical communication; and the effect of magnetic field and its role in nonlinear optics, among other chapters.

  8. Nonlinearly realized extended supergravity

    SciTech Connect

    Izawa, K.-I.; Nakai, Y.; Takahashi, Ryo

    2010-10-01

    We provide a nonlinear realization of supergravity with an arbitrary number of supersymmetries by means of coset construction. The number of gravitino degrees of freedom counts the number of supersymmetries, which will possibly be probed in future experiments. We also consider Goldstino embedding in the construction to discuss the relation to nonlinear realizations with rigid supersymmetries.

  9. Friction and nonlinear dynamics

    NASA Astrophysics Data System (ADS)

    Manini, N.; Braun, O. M.; Tosatti, E.; Guerra, R.; Vanossi, A.

    2016-07-01

    The nonlinear dynamics associated with sliding friction forms a broad interdisciplinary research field that involves complex dynamical processes and patterns covering a broad range of time and length scales. Progress in experimental techniques and computational resources has stimulated the development of more refined and accurate mathematical and numerical models, capable of capturing many of the essentially nonlinear phenomena involved in friction.

  10. Mathematical nonlinear optics

    NASA Astrophysics Data System (ADS)

    McLaughlin, David W.

    1994-01-01

    The principal investigator, together with two post-doctoral fellows, several graduate students, and colleagues, has applied the modern mathematical theory of nonlinear waves to problems in nonlinear optics. Projects included the interaction of laser light with nematic liquid crystals, propagation through random nonlinear media, cross polarization instabilities and optical shocks for propagation along nonlinear optical fibers, and the dynamics of bistable optical switches coupled through both diffusion and diffraction. In the first project the extremely strong nonlinear response of a CW laser beam in a nematic liquid crystal medium produced striking undulation and filamentation of the CW beam which was observed experimentally and explained theoretically. In the second project the interaction of randomness with nonlinearity was investigated, as well as an effective randomness due to the simultaneous presence of many nonlinear instabilities. In the polarization problems theoretical hyperbolic structure (instabilities and homoclinic orbits) in the coupled nonlinear Schroedinger (NLS) equations was identified and used to explain cross polarization instabilities in both the focusing and defocusing cases, as well as to describe optical shocking phenomena. For the coupled bistable optical switches, a numerical code was carefully developed in two spatial and one temporal dimensions. The code was used to study the decay of temporal transients to 'on-off' steady states in a geometry which includes forward and backward longitudinal propagation, together with one dimensional transverse coupling of both electromagnetic diffraction and carrier diffusion.

  11. Spacecraft nonlinear control

    NASA Technical Reports Server (NTRS)

    Sheen, Jyh-Jong; Bishop, Robert H.

    1992-01-01

    The feedback linearization technique is applied to the problem of spacecraft attitude control and momentum management with control moment gyros (CMGs). The feedback linearization consists of a coordinate transformation, which transforms the system to a companion form, and a nonlinear feedback control law to cancel the nonlinear dynamics resulting in a linear equivalent model. Pole placement techniques are then used to place the closed-loop poles. The coordinate transformation proposed here evolves from three output functions of relative degree four, three, and two, respectively. The nonlinear feedback control law is presented. Stability in a neighborhood of a controllable torque equilibrium attitude (TEA) is guaranteed and this fact is demonstrated by the simulation results. An investigation of the nonlinear control law shows that singularities exist in the state space outside the neighborhood of the controllable TEA. The nonlinear control law is simplified by a standard linearization technique and it is shown that the linearized nonlinear controller provides a natural way to select control gains for the multiple-input, multiple-output system. Simulation results using the linearized nonlinear controller show good performance relative to the nonlinear controller in the neighborhood of the TEA.
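
    The cancel-then-place-poles structure of feedback linearization can be illustrated on a much simpler single-axis example than the CMG spacecraft model treated above; the pendulum plant, gains and integration scheme below are all illustrative assumptions.

    ```python
    import numpy as np

    # Plant: a pendulum, theta_ddot = -(g/l) sin(theta) - c theta_dot + u
    # (a toy stand-in, not the spacecraft attitude/CMG model of the record).
    g_l, c = 9.81, 0.1

    def control(theta, omega, theta_ref, k1=4.0, k2=4.0):
        # Cancel the nonlinear dynamics, then place the closed-loop poles of the
        # resulting double integrator with a PD law (poles of s^2 + k2 s + k1 = 0).
        v = -k1 * (theta - theta_ref) - k2 * omega
        return g_l * np.sin(theta) + c * omega + v

    theta, omega, dt = 1.0, 0.0, 1e-3
    for _ in range(10000):                   # 10 s of simulation, explicit Euler steps
        u = control(theta, omega, theta_ref=0.0)
        domega = -g_l * np.sin(theta) - c * omega + u
        theta, omega = theta + dt * omega, omega + dt * domega
    print(round(theta, 4))                   # close to 0: the attitude settles at the reference
    ```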

  12. Lasers for nonlinear microscopy.

    PubMed

    Wise, Frank

    2013-03-01

    Various versions of nonlinear microscopy are revolutionizing the life sciences, almost all of which are made possible because of the development of ultrafast lasers. In this article, the main properties and technical features of short-pulse lasers used in nonlinear microscopy are summarized. Recent research results on fiber lasers that will impact future instruments are also discussed.

  13. REGRESSION ESTIMATES FOR TOPOLOGICAL-HYDROGRAPH INPUT.

    USGS Publications Warehouse

    Karlinger, Michael R.; Guertin, D. Phillip; Troutman, Brent M.

    1988-01-01

    Physiographic, hydrologic, and rainfall data from 18 small drainage basins in semiarid, central Wyoming were used to calibrate topological, unit-hydrograph models for celerity, the average rate of travel of a flood wave through the basin. The data set consisted of basin characteristics and hydrologic data for the 18 basins and rainfall data for 68 storms. Calibrated values of celerity and peak discharge were subsequently regressed as functions of the basin characteristics and excess rainfall volume. Predicted values obtained in this way can be used as input for estimating hydrographs in ungaged basins. The regression models included ordinary least squares and seemingly unrelated regression; the latter jointly estimated celerity and peak discharge.

  14. TWSVR: Regression via Twin Support Vector Machine.

    PubMed

    Khemchandani, Reshma; Goyal, Keshav; Chandra, Suresh

    2016-02-01

    Taking motivation from the Twin Support Vector Machine (TWSVM) formulation, Peng (2010) attempted to propose Twin Support Vector Regression (TSVR), where the regressor is obtained via solving a pair of quadratic programming problems (QPPs). In this paper we argue that the TSVR formulation is not in the true spirit of TWSVM. Further, taking motivation from Bi and Bennett (2003), we propose an alternative approach to find a formulation for Twin Support Vector Regression (TWSVR) which is in the true spirit of TWSVM. We show that our proposed TWSVR can be derived from TWSVM for an appropriately constructed classification problem. To check the efficacy of our proposed TWSVR we compare its performance with TSVR and classical Support Vector Regression (SVR) on various regression datasets.

  16. Some Simple Computational Formulas for Multiple Regression

    ERIC Educational Resources Information Center

    Aiken, Lewis R., Jr.

    1974-01-01

    Short-cut formulas are presented for direct computation of the beta weights, the standard errors of the beta weights, and the multiple correlation coefficient for multiple regression problems involving three independent variables and one dependent variable. (Author)

  17. The Geometry of Enhancement in Multiple Regression

    ERIC Educational Resources Information Center

    Waller, Niels G.

    2011-01-01

    In linear multiple regression, "enhancement" is said to occur when R^2 = b'r > r'r, where b is a p x 1 vector of standardized regression coefficients and r is a p x 1 vector of correlations between a criterion y and a set of standardized regressors, x. When p = 1 then b ≅ r and enhancement cannot occur…
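
    A small numerical illustration of the quantities in this record, computed from an assumed correlation structure (the numbers below are made up, not taken from the article): the standardized weights are b = Rxx^{-1} r and the squared multiple correlation is R^2 = b'r, which in this example exceeds r'r.

    ```python
    import numpy as np

    # Correlations among two standardized predictors (Rxx) and with the criterion (rxy).
    Rxx = np.array([[1.0, -0.3],
                    [-0.3, 1.0]])
    rxy = np.array([0.5, 0.4])

    beta = np.linalg.solve(Rxx, rxy)   # standardized regression weights b = Rxx^{-1} r
    R2 = beta @ rxy                    # multiple R^2 = b'r
    print(beta, R2, rxy @ rxy)         # enhancement: R^2 exceeds r'r, the sum of squared validities
    ```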

  18. There is No Quantum Regression Theorem

    SciTech Connect

    Ford, G.W.; O'Connell, R.F.

    1996-07-01

    The Onsager regression hypothesis states that the regression of fluctuations is governed by macroscopic equations describing the approach to equilibrium. It is here asserted that this hypothesis fails in the quantum case. This is shown first by explicit calculation for the example of quantum Brownian motion of an oscillator and then in general from the fluctuation-dissipation theorem. It is asserted that the correct generalization of the Onsager hypothesis is the fluctuation-dissipation theorem. © 1996 The American Physical Society.

  19. Nonlinear filter design

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Whitney, Paul

    1987-01-01

    A technique for identifying nonlinear systems was introduced, beginning with a single input-single output system. Assuming the system is initially at rest, the first kernel (first convolution integral in the continuous case or first convolution sum in the discrete case) was calculated. A controllable and observable linear realization was then obtained in a particular canonical form. The actual nonlinear system was probed with an appropriate input (or inputs) and the output (or outputs) determined. For the linear system, the input was computed that produces the same output. In the difference between the inputs to the nonlinear and linear systems, basic information was found about the nonlinear system. There is an interesting class of nonlinear systems for which this type of identification scheme should prove to be accurate.

  20. Synthesizing regression results: a factored likelihood method.

    PubMed

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-06-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported in the regression studies to calculate synthesized standardized slopes. It uses available correlations to estimate missing ones through a series of regressions, allowing us to synthesize correlations among variables as if each included study contained all the same variables. Great accuracy and stability of this method under fixed-effects models were found through Monte Carlo simulation. An example was provided to demonstrate the steps for calculating the synthesized slopes through sweep operators. By rearranging the predictors in the included regression models or omitting a relatively small number of correlations from those models, we can easily apply the factored likelihood method to many situations involving synthesis of linear models. Limitations and other possible methods for synthesizing more complicated models are discussed. Copyright © 2012 John Wiley & Sons, Ltd. PMID:26053653

  1. Post-processing through linear regression

    NASA Astrophysics Data System (ADS)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast, and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
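
    A toy sketch of regression-based post-processing, assuming synthetic forecasts with a known bias and using plain ridge regression as a simplified stand-in for the time-dependent Tikhonov (TDTR) scheme named above; the data, coefficients and penalty are all illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    # Toy setting: three correlated, biased forecasts of the same truth. Post-processing
    # regresses the observations on the raw forecasts (OLS), or with a Tikhonov penalty
    # (here ordinary ridge regression, standing in for the time-dependent TDTR scheme).
    rng = np.random.default_rng(2)
    truth = rng.normal(size=500)
    forecasts = np.column_stack([1.2 * truth + 0.5 + 0.4 * rng.normal(size=500)
                                 for _ in range(3)])

    ols = LinearRegression().fit(forecasts, truth)
    tik = Ridge(alpha=5.0).fit(forecasts, truth)
    print(np.std(truth - ols.predict(forecasts)),
          np.std(truth - tik.predict(forecasts)))   # errors of the corrected forecasts
    ```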

  2. Review of the Software Package "Scientist": Mathematical Modeling/Differential and Nonlinear Equations.

    ERIC Educational Resources Information Center

    Scheidt, Douglas M.

    1995-01-01

    Reviews three functions of the "Scientist" software package useful for the social sciences: nonlinear curve fitting, parameter estimation, and data/regression plotting. Social scientists are likely to find limitations and unfamiliar procedures in "Scientist". Its value lies in its visual presentation of data and regression curves and the…

  3. A Convenient Spreadsheet Method for Fitting the Nonlinear Langmuir Equation to Sorption Data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Langmuir model is a commonly used model for describing solute and metal sorption to soils. This model can be fit to data using nonlinear regression or, alternatively, a linearized version of the model can be fit to the data using linear regression. Although linearized versions of the Langmuir equation…
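
    Both routes can be sketched as follows, with hypothetical sorption data (the concentrations and sorbed amounts below are made up); the nonlinear fit applies scipy's curve_fit to the Langmuir equation q = Qmax*K*C/(1 + K*C), and the linearized fit regresses C/q on C.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(C, Qmax, K):
        # Langmuir isotherm: sorbed amount q as a function of solution concentration C.
        return Qmax * K * C / (1.0 + K * C)

    # Hypothetical sorption data (C in mg/L, q in mg/kg).
    C = np.array([0.5, 1, 2, 5, 10, 20, 50], dtype=float)
    q = np.array([11, 20, 33, 55, 72, 85, 95], dtype=float)

    # Nonlinear regression on the Langmuir equation itself.
    (Qmax, K), _ = curve_fit(langmuir, C, q, p0=[100.0, 0.1])

    # Linearized alternative (C/q versus C): slope = 1/Qmax, intercept = 1/(Qmax*K).
    slope, intercept = np.polyfit(C, C / q, 1)
    Qmax_lin, K_lin = 1.0 / slope, slope / intercept
    print(Qmax, K, Qmax_lin, K_lin)
    ```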

  4. Efficient Inference of Parsimonious Phenomenological Models of Cellular Dynamics Using S-Systems and Alternating Regression

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    The nonlinearity of dynamics in systems biology makes it hard to infer them from experimental data. Simple linear models are computationally efficient, but cannot incorporate these important nonlinearities. An adaptive method based on the S-system formalism, which is a sensible representation of nonlinear mass-action kinetics typically found in cellular dynamics, maintains the efficiency of linear regression. We combine this approach with adaptive model selection to obtain efficient and parsimonious representations of cellular dynamics. The approach is tested by inferring the dynamics of yeast glycolysis from simulated data. With little computing time, it produces dynamical models with high predictive power and with structural complexity adapted to the difficulty of the inference problem. PMID:25806510

  5. Influence of storm magnitude and watershed size on runoff nonlinearity

    NASA Astrophysics Data System (ADS)

    Lee, Kwan Tun; Huang, Jen-Kuo

    2016-06-01

    The inherent nonlinear characteristics of the watershed runoff process related to storm magnitude and watershed size are discussed in detail in this study. The first type of nonlinearity refers to the rainfall-runoff dynamic process, and the second concerns a power-law relation between peak discharge and upstream drainage area. The dynamic nonlinearity induced by storm magnitude was first demonstrated by inspecting rainfall-runoff records at three watersheds in Taiwan. Derivation of the watershed unit hydrograph (UH) using two linear hydrological models then shows that the peak discharge and time to peak that characterize the shape of the UH vary from event to event. Hence, the intention of deriving a unique and universal UH for all rainfall-runoff simulation cases is questionable. In contrast, the UHs from the other two adopted nonlinear hydrological models were responsive to rainfall intensity without relying on the linear proportionality principle and captured the dynamic nonlinearity well. Based on two-segment regression, the scaling nonlinearity between peak discharge and drainage area was investigated by analyzing the variation of the power-law exponent. The results demonstrate that the scaling nonlinearity is particularly significant for watersheds with larger areas subject to small storms. For the three study watersheds, a large tributary that contributes relatively great drainage area or inflow is found to cause a transition break in the scaling relationship, converting it from linear to nonlinear.
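
    The scaling relation referred to above, Qp = c*A^theta, is conventionally estimated by linear regression in log-log space; the sketch below uses made-up area and peak-discharge values, and a two-segment analysis such as the one in this study would simply fit separate slopes below and above a breakpoint area.

    ```python
    import numpy as np

    # Peak discharge Qp versus upstream drainage area A, summarized by Qp = c * A**theta.
    A = np.array([2, 5, 12, 30, 80, 200, 450.0])      # km^2 (illustrative values)
    Qp = np.array([4, 8, 16, 33, 70, 140, 260.0])     # m^3/s (illustrative values)

    # theta and log(c) from a linear fit in log-log space; a two-segment regression
    # would repeat this separately below and above a breakpoint area.
    theta, log_c = np.polyfit(np.log(A), np.log(Qp), 1)
    print(theta, np.exp(log_c))
    ```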

  6. Nonlinear cochlear mechanics.

    PubMed

    Zweig, George

    2016-05-01

    An earlier paper characterizing the linear mechanical response of the organ of Corti [J. Acoust. Soc. Am. 138, 1102-1121 (2015)] is extended to the nonlinear domain. Assuming the existence of nonlinear oscillators nonlocally coupled through the pressure they help create, the oscillator equations are derived and examined when the stimuli are modulated tones and clicks. The nonlinearities are constrained by the requirements of oscillator stability and the invariance of zero crossings in the click response to changes in click amplitude. The nonlinear oscillator equations for tones are solved in terms of the fluid pressure that drives them, and its time derivative, presumably a proxy for forces created by outer hair cells. The pressure equation is reduced to quadrature, the integrand depending on the oscillators' responses. The resulting nonlocally coupled nonlinear equations for the pressure, and oscillator amplitudes and phases, are solved numerically in terms of the fluid pressure at the stapes. Methods for determining the nonlinear damping directly from measurements are described. Once the oscillators have been characterized from their tone and click responses, the mechanical response of the cochlea to natural sounds may be computed numerically. Signal processing inspired by cochlear mechanics opens up a new area of nonlocal nonlinear time-frequency analysis.

  8. Logistic regression by means of evolutionary radial basis function neural networks.

    PubMed

    Gutierrez, Pedro Antonio; Hervas-Martinez, César; Martinez-Estudillo, Francisco J

    2011-02-01

    This paper proposes a hybrid multilogistic methodology, named logistic regression using initial and radial basis function (RBF) covariates. The process for obtaining the coefficients is carried out in three steps. First, an evolutionary programming (EP) algorithm is applied, in order to produce an RBF neural network (RBFNN) with a reduced number of RBF transformations and the simplest structure possible. Then, the initial attribute space (or, as commonly known as in logistic regression literature, the covariate space) is transformed by adding the nonlinear transformations of the input variables given by the RBFs of the best individual in the final generation. Finally, a maximum likelihood optimization method determines the coefficients associated with a multilogistic regression model built in this augmented covariate space. In this final step, two different multilogistic regression algorithms are applied: one considers all initial and RBF covariates (multilogistic initial-RBF regression) and the other one incrementally constructs the model and applies cross validation, resulting in an automatic covariate selection [simplelogistic initial-RBF regression (SLIRBF)]. Both methods include a regularization parameter, which has been also optimized. The methodology proposed is tested using 18 benchmark classification problems from well-known machine learning problems and two real agronomical problems. The results are compared with the corresponding multilogistic regression methods applied to the initial covariate space, to the RBFNNs obtained by the EP algorithm, and to other probabilistic classifiers, including different RBFNN design methods [e.g., relaxed variable kernel density estimation, support vector machines, a sparse classifier (sparse multinomial logistic regression)] and a procedure similar to SLIRBF but using product unit basis functions. The SLIRBF models are found to be competitive when compared with the corresponding multilogistic regression methods and the
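
    A rough sketch of the augmented covariate space idea, with k-means-selected Gaussian RBF centres standing in for the evolutionary programming step and scikit-learn's logistic regression fit on the initial covariates plus the RBF transformations; the data set, number of centres and kernel width are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_moons
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = make_moons(n_samples=400, noise=0.25, random_state=0)

    # RBF centres chosen by clustering (the paper evolves them with an EP algorithm).
    centres = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X).cluster_centers_
    width = 1.0
    rbf = np.exp(-((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1) / (2 * width ** 2))

    # Augmented covariate space: initial covariates plus their RBF transformations.
    X_aug = np.hstack([X, rbf])
    clf = LogisticRegression(max_iter=1000).fit(X_aug, y)
    print(accuracy_score(y, clf.predict(X_aug)))      # in-sample accuracy only
    ```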

  9. Nonlinear GARCH model and 1 / f noise

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Ruseckas, J.

    2015-06-01

    Autoregressive conditionally heteroskedastic (ARCH) family models are still used by practitioners in business and economic policy making as conditional volatility forecasting models, and they continue to attract research interest. In this contribution we consider the well known GARCH(1,1) process and its nonlinear modifications, reminiscent of the NGARCH model. We investigate the possibility of reproducing power-law statistics, namely the probability density function and the power spectral density, using ARCH family models. For this purpose we derive stochastic differential equations from the GARCH processes in consideration. We find the obtained equations to be similar to a general class of stochastic differential equations known to reproduce power-law statistics. We show that the linear GARCH(1,1) process has a power-law distribution, but its power spectral density is Brownian-noise-like. However, the nonlinear modifications exhibit both a power-law distribution and a power spectral density of the 1/f^β form, including 1/f noise.
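
    For reference, the GARCH(1,1) process discussed here is simple to simulate directly; the parameter values below are illustrative, and the same loop can be modified (for example by making the reaction to past returns asymmetric or nonlinear) to explore the NGARCH-like variants mentioned above.

    ```python
    import numpy as np

    # GARCH(1,1): r_t = sigma_t * eps_t, sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2.
    omega, alpha, beta = 1e-5, 0.1, 0.85          # illustrative parameter values
    n = 100_000
    rng = np.random.default_rng(3)
    eps = rng.normal(size=n)
    r = np.empty(n)
    sigma2 = omega / (1 - alpha - beta)           # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * eps[t]
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

    # Sample kurtosis above 3 indicates heavier-than-Gaussian tails; the spectrum of
    # |r| or r**2 can be examined with np.fft to compare against 1/f^beta behaviour.
    print(((r - r.mean()) ** 4).mean() / r.var() ** 2)
    ```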

  10. Nonlinear ordinary difference equations

    NASA Technical Reports Server (NTRS)

    Caughey, T. K.

    1979-01-01

    Future space vehicles will be relatively large and flexible, and active control will be necessary to maintain geometrical configuration. While the stresses and strains in these space vehicles are not expected to be excessively large, their cumulative effects will cause significant geometrical nonlinearities to appear in the equations of motion, in addition to the nonlinearities caused by material properties. Since the only effective tool for the analysis of such large complex structures is the digital computer, it will be necessary to gain a better understanding of the nonlinear ordinary difference equations which result from the time discretization of the semidiscrete equations of motion for such structures.

  11. Nonlinear mill control.

    PubMed

    Martin, G; McGarel, S

    2001-01-01

    A mill is a mechanical device that grinds mined or processed material into small particles. The process is known to display significant deadtime, and, more notably, severe nonlinear behavior. Over the past 25 years attempts at continuous mill control have met varying degrees of failure, mainly due to model mismatch caused by changes in the mill process gains. This paper describes an on-line control application on a closed-circuit cement mill that uses nonlinear model predictive control technology. The nonlinear gains for the control model are calculated on-line from a neural network model of the process.

  12. Multipole nonlinearity of metamaterials

    SciTech Connect

    Petschulat, J.; Chipouline, A.; Tuennermann, A.; Pertsch, T.; Menzel, C.; Rockstuhl, C.; Lederer, F.

    2009-12-15

    We report on the linear and nonlinear optical response of metamaterials evoked by first- and second-order multipoles. The analytical ground on which our approach is based permits new insights into the functionality of metamaterials. For the sake of clarity we focus here on a key geometry, namely the split-ring resonator, although the introduced formalism can be applied to arbitrary structures. We derive the equations that describe linear and nonlinear light propagation, where special emphasis is put on second-harmonic generation. This contribution basically aims at extending versatile, existing concepts for describing light propagation in nonlinear media toward the realm of metamaterials.

  13. An empirical evaluation of spatial regression models

    NASA Astrophysics Data System (ADS)

    Gao, Xiaolu; Asami, Yasushi; Chung, Chang-Jo F.

    2006-10-01

    Conventional statistical methods are often ineffective for evaluating spatial regression models. One reason is that spatial regression models usually have more parameters or smaller sample sizes than a simple model, so their degrees of freedom are reduced. Thus, it is often not feasible to evaluate them with traditional tests. Another reason, which is theoretically associated with statistical methods, is that statistical criteria depend crucially on such assumptions as normality, independence, and homogeneity. This may create problems because these assumptions are themselves open to question. In view of these problems, this paper proposes an alternative empirical evaluation method. To illustrate the idea, a few hedonic regression models for a house and land price data set are evaluated, including a simple ordinary linear regression model and three spatial models. Their performance in predicting the price of the house and land is examined. With a cross-validation technique, the price at each sample point is predicted with a model estimated from all samples except the one concerned. Empirical criteria are then established whereby the predicted prices are compared with the real, observed prices. The proposed method provides objective guidance for the selection of a suitable model specification for a data set. Moreover, the method can be seen as an alternative way to test the significance of the spatial relationships considered in spatial regression models.

  14. Response-adaptive regression for longitudinal data.

    PubMed

    Wu, Shuang; Müller, Hans-Georg

    2011-09-01

    We propose a response-adaptive model for functional linear regression, which is adapted to sparsely sampled longitudinal responses. Our method aims at predicting response trajectories and models the regression relationship by directly conditioning the sparse and irregular observations of the response on the predictor, which can be of scalar, vector, or functional type. This obviates the need to model the response trajectories, a task that is challenging for sparse longitudinal data and was previously required for functional regression implementations for longitudinal data. The proposed approach turns out to be superior to previous functional regression approaches in terms of prediction error. It encompasses a variety of regression settings that are relevant for the functional modeling of longitudinal data in the life sciences. The improved prediction of response trajectories with the proposed response-adaptive approach is illustrated for a longitudinal study of Kiwi weight growth and by an analysis of the dynamic relationship between viral load and CD4 cell counts observed in AIDS clinical trials. PMID:21133880

  15. Mental chronometry with simple linear regression.

    PubMed

    Chen, J Y

    1997-10-01

    Typically, mental chronometry is performed by means of introducing an independent variable postulated to affect selectively some stage of a presumed multistage process. However, the effect could be a global one that spreads proportionally over all stages of the process. Currently, there is no method to test this possibility although simple linear regression might serve the purpose. In the present study, the regression approach was tested with tasks (memory scanning and mental rotation) that involved a selective effect and with a task (word superiority effect) that involved a global effect, by the dominant theories. The results indicate (1) the manipulation of the size of a memory set or of angular disparity affects the intercept of the regression function that relates the times for memory scanning with different set sizes or for mental rotation with different angular disparities and (2) the manipulation of context affects the slope of the regression function that relates the times for detecting a target character under word and nonword conditions. These ratify the regression approach as a useful method for doing mental chronometry. PMID:9347535

  16. Hierarchical regression for analyses of multiple outcomes.

    PubMed

    Richardson, David B; Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R; Chu, Haitao

    2015-09-01

    In cohort mortality studies, there often is interest in associations between an exposure of primary interest and mortality due to a range of different causes. A standard approach to such analyses involves fitting a separate regression model for each type of outcome. However, the statistical precision of some estimated associations may be poor because of sparse data. In this paper, we describe a hierarchical regression model for estimation of parameters describing outcome-specific relative rate functions and associated credible intervals. The proposed model uses background stratification to provide flexible control for the outcome-specific associations of potential confounders, and it employs a hierarchical "shrinkage" approach to stabilize estimates of an exposure's associations with mortality due to different causes of death. The approach is illustrated in analyses of cancer mortality in 2 cohorts: a cohort of dioxin-exposed US chemical workers and a cohort of radiation-exposed Japanese atomic bomb survivors. Compared with standard regression estimates of associations, hierarchical regression yielded estimates with improved precision that tended to have less extreme values. The hierarchical regression approach also allowed the fitting of models with effect-measure modification. The proposed hierarchical approach can yield estimates of association that are more precise than conventional estimates when one wishes to estimate associations with multiple outcomes. PMID:26232395

  17. MULTILINEAR TENSOR REGRESSION FOR LONGITUDINAL RELATIONAL DATA

    PubMed Central

    Hoff, Peter D.

    2016-01-01

    A fundamental aspect of relational data, such as from a social network, is the possibility of dependence among the relations. In particular, the relations between members of one pair of nodes may have an effect on the relations between members of another pair. This article develops a type of regression model to estimate such effects in the context of longitudinal and multivariate relational data, or other data that can be represented in the form of a tensor. The model is based on a general multilinear tensor regression model, a special case of which is a tensor autoregression model in which the tensor of relations at one time point are parsimoniously regressed on relations from previous time points. This is done via a separable, or Kronecker-structured, regression parameter along with a separable covariance model. In the context of an analysis of longitudinal multivariate relational data, it is shown how the multilinear tensor regression model can represent patterns that often appear in relational and network data, such as reciprocity and transitivity. PMID:27458495

  18. Estimation of diffusion coefficients from voltammetric signals by support vector and gaussian process regression

    PubMed Central

    2014-01-01

    Background: Support vector regression (SVR) and Gaussian process regression (GPR) were used for the analysis of electroanalytical experimental data to estimate diffusion coefficients. Results: For simulated cyclic voltammograms based on the EC, Eqr, and EqrC mechanisms these regression algorithms in combination with nonlinear kernel/covariance functions yielded diffusion coefficients with higher accuracy as compared to the standard approach of calculating diffusion coefficients relying on the Nicholson-Shain equation. The level of accuracy achieved by SVR and GPR is virtually independent of the rate constants governing the respective reaction steps. Further, the reduction of high-dimensional voltammetric signals by manual selection of typical voltammetric peak features decreased the performance of both regression algorithms compared to a reduction by downsampling or principal component analysis. After training on simulated data sets, diffusion coefficients were estimated by the regression algorithms for experimental data comprising voltammetric signals for three organometallic complexes. Conclusions: Estimated diffusion coefficients closely matched the values determined by the parameter fitting method, but reduced the required computational time considerably for one of the reaction mechanisms. The automated processing of voltammograms according to the regression algorithms yields better results than the conventional analysis of peak-related data. PMID:24987463

  19. Fatigue design of a cellular phone folder using regression model-based multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Kim, Young Gyun; Lee, Jongsoo

    2016-08-01

    In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.

  20. Uncertainty quantification in DIC with Kriging regression

    NASA Astrophysics Data System (ADS)

    Wang, Dezhi; DiazDelaO, F. A.; Wang, Weizhuo; Lin, Xiaoshan; Patterson, Eann A.; Mottershead, John E.

    2016-03-01

    A Kriging regression model is developed as a post-processing technique for the treatment of measurement uncertainty in classical subset-based Digital Image Correlation (DIC). Regression is achieved by regularising the sample-point correlation matrix using a local, subset-based, assessment of the measurement error with assumed statistical normality and based on the Sum of Squared Differences (SSD) criterion. This leads to a Kriging-regression model in the form of a Gaussian process representing uncertainty on the Kriging estimate of the measured displacement field. The method is demonstrated using numerical and experimental examples. Kriging estimates of displacement fields are shown to be in excellent agreement with 'true' values for the numerical cases and in the experimental example uncertainty quantification is carried out using the Gaussian random process that forms part of the Kriging model. The root mean square error (RMSE) on the estimated displacements is produced and standard deviations on local strain estimates are determined.
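
    Conceptually this is Gaussian-process (Kriging) regression with a noise term playing the role of the subset-based measurement-error estimate; a generic one-dimensional sketch using scikit-learn is given below, with made-up "displacement" samples rather than DIC data.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Noisy 1-D displacement samples; the WhiteKernel regularises the correlation
    # matrix in the spirit of the local, subset-based measurement-error assessment.
    rng = np.random.default_rng(4)
    x = np.linspace(0, 10, 40)[:, None]
    u = 0.1 * x.ravel() + 0.05 * np.sin(2 * x.ravel()) + 0.01 * rng.normal(size=40)

    gp = GaussianProcessRegressor(RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-4),
                                  normalize_y=True).fit(x, u)
    x_new = np.linspace(0, 10, 200)[:, None]
    u_hat, u_std = gp.predict(x_new, return_std=True)   # Kriging estimate and its uncertainty
    print(u_hat[:3], u_std[:3])
    ```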

  1. A tutorial on Bayesian Normal linear regression

    NASA Astrophysics Data System (ADS)

    Klauenberg, Katy; Wübbeler, Gerd; Mickan, Bodo; Harris, Peter; Elster, Clemens

    2015-12-01

    Regression is a common task in metrology and often applied to calibrate instruments, evaluate inter-laboratory comparisons or determine fundamental constants, for example. Yet, a regression model cannot be uniquely formulated as a measurement function, and consequently the Guide to the Expression of Uncertainty in Measurement (GUM) and its supplements are not applicable directly. Bayesian inference, however, is well suited to regression tasks, and has the advantage of accounting for additional a priori information, which typically robustifies analyses. Furthermore, it is anticipated that future revisions of the GUM shall also embrace the Bayesian view. Guidance on Bayesian inference for regression tasks is largely lacking in metrology. For linear regression models with Gaussian measurement errors this tutorial gives explicit guidance. Divided into three steps, the tutorial first illustrates how a priori knowledge, which is available from previous experiments, can be translated into prior distributions from a specific class. These prior distributions have the advantage of yielding analytical, closed form results, thus avoiding the need to apply numerical methods such as Markov Chain Monte Carlo. Secondly, formulas for the posterior results are given, explained and illustrated, and software implementations are provided. In the third step, Bayesian tools are used to assess the assumptions behind the suggested approach. These three steps (prior elicitation, posterior calculation, and robustness to prior uncertainty and model adequacy) are critical to Bayesian inference. The general guidance given here for Normal linear regression tasks is accompanied by a simple, but real-world, metrological example. The calibration of a flow device serves as a running example and illustrates the three steps. It is shown that prior knowledge from previous calibrations of the same sonic nozzle enables robust predictions even for extrapolations.
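
    A reduced sketch of the closed-form Bayesian update for Normal linear regression, assuming a Gaussian prior on the coefficients and, for simplicity, a known measurement variance (the tutorial's conjugate prior also treats the variance as unknown); the data and prior values below are illustrative, not the flow-calibration example.

    ```python
    import numpy as np

    # Simulated straight-line calibration data with known noise variance.
    rng = np.random.default_rng(5)
    x = np.linspace(0, 1, 15)
    X = np.column_stack([np.ones_like(x), x])          # design matrix: intercept + slope
    sigma2 = 0.05 ** 2
    y = 0.2 + 1.5 * x + np.sqrt(sigma2) * rng.normal(size=x.size)

    m0 = np.array([0.0, 1.0])                          # prior mean, e.g. from earlier calibrations
    V0 = np.diag([1.0, 0.25])                          # prior covariance

    Vn = np.linalg.inv(np.linalg.inv(V0) + X.T @ X / sigma2)   # posterior covariance
    mn = Vn @ (np.linalg.inv(V0) @ m0 + X.T @ y / sigma2)      # posterior mean
    print(mn, np.sqrt(np.diag(Vn)))                    # closed form, no MCMC needed
    ```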

  2. Nonlinear optomechanical pressure

    NASA Astrophysics Data System (ADS)

    Conti, Claudio; Boyd, Robert

    2014-03-01

    A transparent material exhibits ultrafast optical nonlinearity and is subject to optical pressure if irradiated by a laser beam. However, the effect of nonlinearity on optical pressure is often overlooked, even though a nonlinear optical pressure may potentially be employed in many applications, such as optical manipulation, biophysics, cavity optomechanics, quantum optics, and optical tractors, and is relevant in fundamental problems such as the Abraham-Minkowski dilemma or the Casimir effect. Here, we show that an ultrafast nonlinear polarization does indeed contribute to the optical pressure, and that this contribution is negative in certain spectral ranges; the theoretical analysis is confirmed by first-principles simulations. An order-of-magnitude estimate shows that the effect can be observed by measuring the deflection of a membrane made of graphene.

  3. Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Nonlinear structural analysis techniques for engine structures and components are addressed. The finite element method and boundary element method are discussed in terms of stress and structural analyses of shells, plates, and laminates.

  4. Library for Nonlinear Optimization

    2001-10-09

    OPT++ is a C++ object-oriented library for nonlinear optimization. This incorporates an improved implementation of an existing capability and two new algorithmic capabilities based on existing journal articles and freely available software.

  5. Nonlinear Dynamics in Cardiology

    PubMed Central

    Krogh-Madsen, Trine; Christini, David J.

    2013-01-01

    The dynamics of many cardiac arrhythmias, as well as the nature of transitions between different heart rhythms, have long been considered evidence of nonlinear phenomena playing a direct role in cardiac arrhythmogenesis. In most types of cardiac disease, the pathology develops slowly and gradually, often over many years. In contrast, arrhythmias often occur suddenly. In nonlinear systems, sudden changes in qualitative dynamics can, counter-intuitively, result from a gradual change in a system parameter; this is known as a bifurcation. Here, we review how nonlinearities in cardiac electrophysiology influence normal and abnormal rhythms and how bifurcations change the dynamics. In particular, we focus on the many recent developments in computational modeling at the cellular level focused on intracellular calcium dynamics. We discuss two areas where recent experimental and modeling work have suggested the importance of nonlinearities in calcium dynamics: repolarization alternans and pacemaker cell automaticity. PMID:22524390

  6. Nonlinear regression method for estimating neutral wind and temperature from Fabry-Perot interferometer data.

    PubMed

    Harding, Brian J; Gehrels, Thomas W; Makela, Jonathan J

    2014-02-01

    The Earth's thermosphere plays a critical role in driving electrodynamic processes in the ionosphere and in transferring solar energy to the atmosphere, yet measurements of thermospheric state parameters, such as wind and temperature, are sparse. One of the most popular techniques for measuring these parameters is to use a Fabry-Perot interferometer to monitor the Doppler shift and broadening of naturally occurring airglow emissions in the thermosphere. In this work, we present a technique for estimating upper-atmospheric winds and temperatures from images of Fabry-Perot fringes captured by a CCD detector. We estimate instrument parameters from fringe patterns of a frequency-stabilized laser, and we use these parameters to estimate winds and temperatures from airglow fringe patterns. A unique feature of this technique is the model used for the laser and airglow fringe patterns, which fits all fringes simultaneously and attempts to model the effects of optical defects. This technique yields accurate estimates for winds, temperatures, and the associated uncertainties in these parameters, as we show with a Monte Carlo simulation. PMID:24514183

  7. Studying Individual Differences in Predictability with Gamma Regression and Nonlinear Multilevel Models

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2010-01-01

    Statistical prediction remains an important tool for decisions in a variety of disciplines. An equally important issue is identifying factors that contribute to more or less accurate predictions. The time series literature includes well developed methods for studying predictability and volatility over time. This article develops…

  8. A Nonlinear Regression Model Estimating Single Source Concentrations of Primary and Secondarily Formed PM2.5

    EPA Science Inventory

    Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian-based dispersion models and complex grid-based Eulerian photochemical transport models. These approaches…

  10. Photonic Nonlinear Transient Computing with Multiple-Delay Wavelength Dynamics

    NASA Astrophysics Data System (ADS)

    Martinenghi, Romain; Rybalko, Sergei; Jacquot, Maxime; Chembo, Yanne K.; Larger, Laurent

    2012-06-01

    We report on the experimental demonstration of a hybrid optoelectronic neuromorphic computer based on a complex nonlinear wavelength dynamics including multiple delayed feedbacks with randomly defined weights. This neuromorphic approach is based on a new paradigm of a brain-inspired computational unit, intrinsically differing from Turing machines. This recent paradigm consists in expanding the input information to be processed into a higher dimensional phase space, through the nonlinear transient response of a complex dynamics excited by the input information. The computed output is then extracted via a linear separation of the transient trajectory in the complex phase space. The hyperplane separation is derived from a learning phase consisting of the resolution of a regression problem. The processing capability originates from the nonlinear transient, resulting in nonlinear transient computing. The computational performance is successfully evaluated on a standard benchmark test, namely, a spoken digit recognition task.

  11. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps.

    PubMed

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard; Hansen, Lars Kai

    2011-04-01

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification models. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher discriminant, and the SVM, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging.

  12. MLREG, stepwise multiple linear regression program

    SciTech Connect

    Carder, J.H.

    1981-09-01

    This program is written in FORTRAN for an IBM computer and performs multiple linear regressions according to a stepwise procedure. The program transforms and combines old variables into new variables, and prints input and transformed data, sums, raw sums of squares, residual sums of squares, means and standard deviations, correlation coefficients, regression results at each step, ANOVA at each step, and predicted response results at each step. This package contains an EXEC used to execute the program, sample input data and output listing, source listing, documentation, and card decks containing the EXEC, sample input, and FORTRAN source.

  13. Salience Assignment for Multiple-Instance Regression

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Lane, Terran

    2007-01-01

    We present a Multiple-Instance Learning (MIL) algorithm for determining the salience of each item in each bag with respect to the bag's real-valued label. We use an alternating-projections constrained optimization approach to simultaneously learn a regression model and estimate all salience values. We evaluate this algorithm on a significant real-world problem, crop yield modeling, and demonstrate that it provides more extensive, intuitive, and stable salience models than Primary-Instance Regression, which selects a single relevant item from each bag.

  14. Spontaneous regression of a conjunctival naevus.

    PubMed

    Haldar, Shreya; Leyland, Martin

    2016-01-01

    Conjunctival naevi are one of the most common lesions affecting the conjunctiva. While benign in the vast majority of cases, the risk of malignant transformation necessitates regular follow-up. They are well known to increase in size; however, we present the first photo-documented case of spontaneous regression of conjunctival naevus. In most cases, surgical excision is performed due to the clinician's concerns over malignancy. However, a substantial proportion of patients request excision. Highlighting the potential for regression of the lesion is important to ensure patients make an informed decision when contemplating such surgery. PMID:27581234

  15. Removing Malmquist bias from linear regressions

    NASA Technical Reports Server (NTRS)

    Verter, Frances

    1993-01-01

    Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Those sources which can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but distorts the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. A demonstration of the corrected linear regression that is computed in four steps is presented.

  16. Multicollinearity in cross-sectional regressions

    NASA Astrophysics Data System (ADS)

    Lauridsen, Jørgen; Mur, Jesùs

    2006-10-01

    The paper examines the robustness of results from cross-sectional regression, paying attention to the impact of multicollinearity. It is well known that the reliability of estimators (least-squares or maximum-likelihood) deteriorates as the linear relationships between the regressors become more acute. We address the issue in a spatial context, looking closely into the behaviour shown, under several unfavourable conditions, by the most prominent misspecification tests when collinear variables are added to the regression. A Monte Carlo simulation is performed. The conclusions point to the fact that these statistics react in different ways to the problems posed.

  17. Spontaneous hypnotic age regression: case report.

    PubMed

    Spiegel, D; Rosenfeld, A

    1984-12-01

    Age regression--reliving the past as though it were occurring in the present, with age appropriate vocabulary, mental content, and affect--can occur with instruction in highly hypnotizable individuals, but has rarely been reported to occur spontaneously, especially as a primary symptom. The psychiatric presentation and treatment of a 16-year-old girl with spontaneous age regressions accessible and controllable with hypnosis and psychotherapy are described. Areas of overlap and divergence between this patient's symptoms and those found in patients with hysterical fugue and multiple personality syndrome are also discussed.

  18. Nonlinear Refractive Properties

    NASA Technical Reports Server (NTRS)

    Vikram, Chandra S.; Witherow, William K.

    2001-01-01

    Using nonlinear refractive properties of a salt-water solution at two wavelengths, numerical analysis has been performed to extract temperature and concentration from interferometric fringe data. The theoretical study, using commercially available equation-solving software, starts with critical fringe counting needs and the role of nonlinear refractive properties in such measurements. Finally, the methodology of the analysis, codes, fringe-counting accuracy needs, etc. is described in detail.

  19. Nonlinear systems in medicine.

    PubMed Central

    Higgins, John P.

    2002-01-01

    Many achievements in medicine have come from applying linear theory to problems. Most current methods of data analysis use linear models, which are based on proportionality between two variables and/or relationships described by linear differential equations. However, nonlinear behavior commonly occurs within human systems due to their complex dynamic nature; this cannot be described adequately by linear models. Nonlinear thinking has grown among physiologists and physicians over the past century, and non-linear system theories are beginning to be applied to assist in interpreting, explaining, and predicting biological phenomena. Chaos theory describes elements manifesting behavior that is extremely sensitive to initial conditions, does not repeat itself and yet is deterministic. Complexity theory goes one step beyond chaos and is attempting to explain complex behavior that emerges within dynamic nonlinear systems. Nonlinear modeling still has not been able to explain all of the complexity present in human systems, and further models still need to be refined and developed. However, nonlinear modeling is helping to explain some system behaviors that linear systems cannot and thus will augment our understanding of the nature of complex dynamic systems within the human body in health and in disease states. PMID:14580107

  20. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  1. Logistic regression when binary predictor variables are highly correlated.

    PubMed

    Barker, L; Brown, C

    Standard logistic regression can produce estimates having large mean square error when predictor variables are multicollinear. Ridge regression and principal components regression can reduce the impact of multicollinearity in ordinary least squares regression. Generalizations of these, applicable in the logistic regression framework, are alternatives to standard logistic regression. It is shown that estimates obtained via ridge and principal components logistic regression can have smaller mean square error than estimates obtained through standard logistic regression. Recommendations for choosing among standard, ridge and principal components logistic regression are developed. Published in 2001 by John Wiley & Sons, Ltd.
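
    As a rough illustration of the kind of comparison described (not the paper's exact estimators), one can contrast an essentially unpenalized logistic fit, an L2 (ridge-type) penalized fit, and a principal-components fit on highly correlated binary predictors; the sketch below uses scikit-learn and synthetic data.

      # Illustrative sketch (not the paper's exact estimators): contrast an essentially
      # unpenalized logistic fit, an L2 (ridge-type) penalized fit, and a principal-
      # components fit on two nearly identical binary predictors.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      n = 300
      z = rng.binomial(1, 0.5, n)
      x1 = z.astype(float)
      x2 = np.where(rng.random(n) < 0.95, z, 1 - z).astype(float)   # almost a copy of x1
      X = np.column_stack([x1, x2])
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x1))))

      standard = LogisticRegression(C=1e6).fit(X, y)                # effectively no penalty
      ridge = LogisticRegression(C=0.5).fit(X, y)                   # stronger L2 shrinkage
      pc = make_pipeline(PCA(n_components=1), LogisticRegression(C=1e6)).fit(X, y)

      print("standard coefficients:", standard.coef_.round(2))
      print("ridge coefficients:   ", ridge.coef_.round(2))
      print("PC-logistic accuracy: ", round(pc.score(X, y), 2))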

  2. Bootstrap inference longitudinal semiparametric regression model

    NASA Astrophysics Data System (ADS)

    Pane, Rahmawati; Otok, Bambang Widjanarko; Zain, Ismaini; Budiantara, I. Nyoman

    2016-02-01

    Semiparametric regression contains two components, a parametric and a nonparametric component. The longitudinal semiparametric regression model is represented by y_ti = μ(x'_ti, z_ti) + ε_ti, where μ(x'_ti, z_ti) = x'_ti β + g(z_ti) and y_ti is the response variable, assumed to have a linear relationship with the predictor variables x'_ti = (x_ti1, x_ti2, …, x_tir). The random errors ε_ti, i = 1, …, n, t = 1, …, T, are normally distributed with zero mean and variance σ², and g(z_ti) is the nonparametric component. The results of this study show that the PLS approach to longitudinal semiparametric regression models yields the estimators β̂ = [X'H(λ)X]^(-1) X'H(λ)ỹ and ĝ_λ(z) = M(λ)ỹ. The results also show that the bootstrap is valid for the longitudinal semiparametric regression model, with ĝ_λ^(b)(z) as the nonparametric component estimator.

  3. Assessing risk factors for periodontitis using regression

    NASA Astrophysics Data System (ADS)

    Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa

    2013-10-01

    Multivariate statistical analysis is indispensable for assessing the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we assess the relevance, as risk factors for periodontitis, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking Status and Plaque Index. A multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL), yielding the regression coefficients together with the p-values from the corresponding significance tests. The classification of a case (individual) adopted in the logistic model was the extent of destruction of periodontal tissues, defined as an Attachment Loss greater than or equal to 4 mm in at least 25% (AL≥4mm/≥25%) of the sites surveyed. The association measures include the Odds Ratios together with the corresponding 95% confidence intervals.

  4. Nodular fasciitis with degeneration and regression.

    PubMed

    Yanagisawa, Akihiro; Okada, Hideki

    2008-07-01

    Nodular fasciitis is a benign reactive proliferation that is frequently misdiagnosed as a sarcoma. This article describes a case of nodular fasciitis of 6-month duration located in the cheek, which degenerated and spontaneously regressed after biopsy. The nodule was fixed to the zygoma but was free from the overlying skin. The mass was 3.0 cm in diameter and demonstrated high signal intensity on T2-weighted magnetic resonance imaging. A small part of the lesion was biopsied. Pathological and immunohistochemical examinations identified the nodule as nodular fasciitis with myxoid histology. One month after the biopsy, the mass showed decreased signal intensity on T2-weighted images and measured 2.2 cm in size. The signal on T2-weighted images showed time-dependent decreases, and the mass continued to reduce in size throughout the follow-up period. The lesion presented as hypointense to the surrounding muscles on T2-weighted images and was 0.4 cm in size at 2 years of follow-up. This case demonstrates that nodular fasciitis with myxoid histology can change to that with fibrous appearance gradually with time, thus bringing about spontaneous regression. Degeneration may be involved in the spontaneous regression of nodular fasciitis with myxoid appearance. The mechanism of regression, unclarified at present, should be further studied. PMID:18650753

  5. A New Sample Size Formula for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…

  6. Prediction of dynamical systems by symbolic regression.

    PubMed

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K; Noack, Bernd R

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast. PMID:27575130

  7. Assumptions of Multiple Regression: Correcting Two Misconceptions

    ERIC Educational Resources Information Center

    Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason

    2013-01-01

    In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…

  8. Commonality Analysis for the Regression Case.

    ERIC Educational Resources Information Center

    Murthy, Kavita

    Commonality analysis is a procedure for decomposing the coefficient of determination (R superscript 2) in multiple regression analyses into the percent of variance in the dependent variable associated with each independent variable uniquely, and the proportion of explained variance associated with the common effects of predictors in various…

  9. Multiple Regression Analysis and Automatic Interaction Detection.

    ERIC Educational Resources Information Center

    Koplyay, Janos B.

    The Automatic Interaction Detector (AID) is discussed as to its usefulness in multiple regression analysis. The algorithm of AID-4 is a reversal of the model building process; it starts with the ultimate restricted model, namely, the whole group as a unit. By a unique splitting process maximizing the between sum of squares for the categories of…

  10. A Spline Regression Model for Latent Variables

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.

    2014-01-01

    Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…

  11. Prediction of dynamical systems by symbolic regression

    NASA Astrophysics Data System (ADS)

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.

  12. Using Regression Analysis: A Guided Tour.

    ERIC Educational Resources Information Center

    Shelton, Fred Ames

    1987-01-01

    Discusses the use and interpretation of multiple regression analysis with computer programs and presents a flow chart of the process. A general explanation of the flow chart is provided, followed by an example showing the development of a linear equation which could be used in estimating manufacturing overhead cost. (Author/LRW)

  13. Genetic Programming Transforms in Linear Regression Situations

    NASA Astrophysics Data System (ADS)

    Castillo, Flor; Kordon, Arthur; Villa, Carlos

    The chapter summarizes the use of Genetic Programming (GP) in Multiple Linear Regression (MLR) to address multicollinearity and Lack of Fit (LOF). The basis of the proposed method is applying appropriate input transforms (model respecification) that deal with these issues while preserving the information content of the original variables. The transforms are selected from symbolic regression models with optimal trade-off between accuracy of prediction and expressional complexity, generated by multiobjective Pareto-front GP. The chapter includes a comparative study of the GP-generated transforms with Ridge Regression, a variant of ordinary Multiple Linear Regression, which has been a useful and commonly employed approach for reducing multicollinearity. The advantages of GP-generated model respecification are clearly defined and demonstrated. Some recommendations for transforms selection are given as well. The application benefits of the proposed approach are illustrated with a real industrial application in one of the broadest empirical modeling areas in manufacturing - robust inferential sensors. The chapter contributes to increasing the awareness of the potential of GP in statistical model building by MLR.

  14. The M Word: Multicollinearity in Multiple Regression.

    ERIC Educational Resources Information Center

    Morrow-Howell, Nancy

    1994-01-01

    Notes that existence of substantial correlation between two or more independent variables creates problems of multicollinearity in multiple regression. Discusses multicollinearity problem in social work research in which independent variables are usually intercorrelated. Clarifies problems created by multicollinearity, explains detection of…

  15. Revisiting Regression in Autism: Heller's "Dementia Infantilis"

    ERIC Educational Resources Information Center

    Westphal, Alexander; Schelinski, Stefanie; Volkmar, Fred; Pelphrey, Kevin

    2013-01-01

    Theodor Heller first described a severe regression of adaptive function in normally developing children, something he termed dementia infantilis, over 100 years ago. Dementia infantilis is most closely related to the modern diagnosis, childhood disintegrative disorder. We translate Heller's paper, Über Dementia Infantilis, and discuss…

  16. Prediction of dynamical systems by symbolic regression.

    PubMed

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K; Noack, Bernd R

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.

  17. Design Coding and Interpretation in Multiple Regression.

    ERIC Educational Resources Information Center

    Lunneborg, Clifford E.

    The multiple regression or general linear model (GLM) is a parameter estimation and hypothesis testing model which encompasses and approaches the more familiar fixed effects analysis of variance (ANOVA). The transition from ANOVA to GLM is accomplished, roughly, by coding treatment level or group membership to produce a set of predictor or…

  18. Predicting Social Trust with Binary Logistic Regression

    ERIC Educational Resources Information Center

    Adwere-Boamah, Joseph; Hufstedler, Shirley

    2015-01-01

    This study used binary logistic regression to predict social trust with five demographic variables from a national sample of adult individuals who participated in The General Social Survey (GSS) in 2012. The five predictor variables were respondents' highest degree earned, race, sex, general happiness and the importance of personally assisting…

  19. Code System to Calculate Correlation & Regression Coefficients.

    1999-11-23

    Version 00 PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model.
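
    The two measures can be re-implemented in a few lines for illustration (this is our own sketch, not the released PCC/SRC code): SRCs are the coefficients from regressing the standardized output on the standardized inputs, and each PCC is the correlation between the residuals of an input and of the output after both are regressed on the remaining inputs.

      # Re-implementation sketch (not the released PCC/SRC package): standardized
      # regression coefficients and partial correlation coefficients relating a
      # model's sampled inputs X to its output y.
      import numpy as np

      def src(X, y):
          Xs = (X - X.mean(0)) / X.std(0, ddof=1)
          ys = (y - y.mean()) / y.std(ddof=1)
          A = np.column_stack([np.ones(len(ys)), Xs])
          beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
          return beta[1:]                                  # coefficients on standardized inputs

      def pcc(X, y):
          n, p = X.shape
          out = np.empty(p)
          for j in range(p):
              others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
              rx = X[:, j] - others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
              ry = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
              out[j] = np.corrcoef(rx, ry)[0, 1]           # correlation of the two residual series
          return out

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 3))                        # sampled model inputs
      y = 1.5 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=200)
      print("SRC:", src(X, y).round(2))
      print("PCC:", pcc(X, y).round(2))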

  20. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination.
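
    A hedged numerical sketch of the equivalent two-sample idea for logistic regression is given below (our own implementation with assumed example values, not the authors' code): the two groups' log-odds differ by twice the slope times the covariate standard deviation, and the group probabilities are chosen so that the overall event rate is preserved before a standard two-proportion power approximation is applied.

      # Our own numerical sketch of the equivalent two-sample idea for logistic
      # regression (example values are assumptions, not from the paper): the two
      # groups' log-odds differ by 2*beta*sigma, and the group probabilities are
      # chosen so that the overall event rate is preserved.
      import numpy as np
      from scipy.optimize import brentq
      from scipy.stats import norm

      def logistic_power(beta, sigma, p_overall, n_total, alpha=0.05):
          delta = 2.0 * beta * sigma
          expit = lambda a: 1.0 / (1.0 + np.exp(-a))
          # pick the group log-odds so the average response probability equals p_overall
          a0 = brentq(lambda a: 0.5 * (expit(a) + expit(a + delta)) - p_overall, -20, 20)
          p1, p2 = expit(a0), expit(a0 + delta)
          m = n_total / 2.0                                # per-group size
          se = np.sqrt(p1 * (1 - p1) / m + p2 * (1 - p2) / m)
          return norm.cdf(abs(p2 - p1) / se - norm.ppf(1 - alpha / 2))

      # e.g. log odds ratio of 0.4 per SD of the covariate, 30% overall response, n = 400
      print(round(logistic_power(beta=0.4, sigma=1.0, p_overall=0.30, n_total=400), 3))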

  1. Logistic models--an odd(s) kind of regression.

    PubMed

    Jupiter, Daniel C

    2013-01-01

    The logistic regression model bears some similarity to the multivariable linear regression with which we are familiar. However, the differences are great enough to warrant a discussion of the need for and interpretation of logistic regression.

  2. Embedded Sensors for Measuring Surface Regression

    NASA Technical Reports Server (NTRS)

    Gramer, Daniel J.; Taagen, Thomas J.; Vermaak, Anton G.

    2006-01-01

    The development and evaluation of new hybrid and solid rocket motors requires accurate characterization of the propellant surface regression as a function of key operational parameters. These characteristics establish the propellant flow rate and are prime design drivers affecting the propulsion system geometry, size, and overall performance. There is a similar need for the development of advanced ablative materials, and the use of conventional ablatives exposed to new operational environments. The Miniature Surface Regression Sensor (MSRS) was developed to serve these applications. It is designed to be cast or embedded in the material of interest and regresses along with it. During this process, the resistance of the sensor is related to its instantaneous length, allowing the real-time thickness of the host material to be established. The time derivative of this data reveals the instantaneous surface regression rate. The MSRS could also be adapted to perform similar measurements for a variety of other host materials when it is desired to monitor thicknesses and/or regression rate for purposes of safety, operational control, or research. For example, the sensor could be used to monitor the thicknesses of brake linings or racecar tires and indicate when they need to be replaced. At the time of this reporting, over 200 of these sensors have been installed into a variety of host materials. An MSRS can be made in either of two configurations, denoted ladder and continuous (see Figure 1). A ladder MSRS includes two highly electrically conductive legs, across which narrow strips of electrically resistive material are placed at small increments of length. These strips resemble the rungs of a ladder and are electrically equivalent to many tiny resistors connected in parallel. A substrate material provides structural support for the legs and rungs. The instantaneous sensor resistance is read by an external signal conditioner via wires attached to the conductive legs on the
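
    As a rough illustration of how a ladder-type reading might be interpreted (the rung resistance, rung spacing, and measured values below are invented for the example, not the sensor's actual calibration), the intact rungs behave like identical resistors in parallel, so the measured resistance indicates how many rungs remain and therefore the remaining host-material thickness:

      # Hedged illustration of reading a ladder-type MSRS (the rung resistance,
      # spacing, and measured values below are invented, not the sensor's real
      # calibration): intact rungs act as identical resistors in parallel, so the
      # measured resistance tells how many rungs, and hence how much thickness, remain.
      import numpy as np

      R_RUNG = 1000.0          # ohms per rung (assumed)
      SPACING = 0.5            # mm of host material per rung (assumed)

      def thickness_from_resistance(r_measured):
          n_remaining = R_RUNG / r_measured                # parallel resistors: R = R_RUNG / n
          return n_remaining * SPACING                     # remaining thickness, mm

      t = np.array([0.0, 1.0, 2.0, 3.0])                   # time, s
      r = np.array([50.0, 62.5, 83.3, 125.0])              # measured resistance, ohms
      thickness = thickness_from_resistance(r)
      rate = -np.gradient(thickness, t)                    # surface regression rate, mm/s
      print(thickness.round(2), rate.round(2))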

  3. Regression analyses of stock-recruitment relationships in three fish populations. [Morone saxatilis; Brevoortia tyrannus; Alosa sapidissima

    SciTech Connect

    Yoshiyama, R.M.; Van Winkle, W.; Kirk, B.L.; Stevens, D.E.

    1981-06-01

    The statistical dependence of recruitment level upon stock size and selected environmental variables was examined for three fish stocks: California striped bass (Morone saxatilis), Atlantic menhaden (Brevoortia tyrannus), and American shad (Alosa sapidissima). The analysis involved: (a) single and multiple linear regressions of recruitment against stock size and environmental variables; (b) nonlinear regressions of recruitment against stock size using unmodified Ricker and Beverton-Holt stock-recruitment models, followed by linear regression of residuals on environmental variables; and (c) nonlinear regressions using Ricker and Beverton-Holt models modified to include an environmental variable. The relative effectiveness of these three regression approaches in describing variation in recruitment level of the three fish stocks was evaluated, with effectiveness of regression models gauged by the magnitude of residual mean square values and by whether or not regression models reduced to simpler forms (due to parameter estimates not significantly different from 0.0) after being fitted to data. No single regression approach was consistently superior to the others in explaining variation in recruitment for all three fish stocks. Linear models appeared more effective than the other two regression approaches for striped bass, while modified stock-recruitment models showed the best fit to data for Atlantic menhaden and American shad. Although detailed aspects of the results may be specific to the analytical procedures and time series of data utilized, general features are still evident. Striped bass and Atlantic menhaden recruitment showed stronger statistical relationships to environmental variables than to stock size, whereas stock size apparently has been an important determinant of recruitment variation in American shad. 24 refs., 4 figs., 4 tabs.
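
    A minimal sketch of the second approach on synthetic data (our own toy version, not the original analysis) fits an unmodified Ricker curve R = aS·exp(-bS) by nonlinear least squares and then regresses the residuals on an environmental variable:

      # Toy version of approach (b) with synthetic data (not the original analysis):
      # fit an unmodified Ricker model R = a*S*exp(-b*S) by nonlinear least squares,
      # then regress the (log-scale) residuals on an environmental variable.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import linregress

      def ricker(S, a, b):
          return a * S * np.exp(-b * S)

      rng = np.random.default_rng(3)
      S = rng.uniform(10, 200, 30)                         # stock size
      env = rng.normal(size=30)                            # e.g. a flow or temperature anomaly
      R = ricker(S, 2.0, 0.01) * np.exp(0.3 * env + rng.normal(scale=0.1, size=30))

      (a_hat, b_hat), _ = curve_fit(ricker, S, R, p0=[1.0, 0.005])
      residuals = np.log(R) - np.log(ricker(S, a_hat, b_hat))
      fit = linregress(env, residuals)
      print(f"a={a_hat:.2f}, b={b_hat:.4f}, environmental slope={fit.slope:.2f} (p={fit.pvalue:.3f})")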

  4. Progression and regression of the atherosclerotic plaque.

    PubMed

    de Feyter, P J; Vos, J; Deckers, J W

    1995-08-01

    In animals in which atherosclerosis was induced experimentally (by a high cholesterol diet) regression of the atherosclerotic lesion was demonstrated after serum cholesterol was reduced by cholesterol-lowering drugs or a low-fat diet. Whether regression of advanced coronary artery lesions also takes place in humans after a similar intervention remains conjectural. However, several randomized studies, primarily employing lipid-lowering intervention or comprehensive changes in lifestyle, have demonstrated, using serial angiograms, that it is possible to achieve less progression, arrest or even (small) regression of atherosclerotic lesions. The lipid-lowering trials (NHLBI, CLAS, POSCH, FATS, SCOR and STARS) studied 1240 symptomatic patients, mostly men, with moderately elevated cholesterol levels and moderately severe angiographic-proven coronary artery disease. A variety of lipid-lowering drugs, in addition to a diet, were used over an intervention period ranging from 2 to 3 years. In all but one study (NHLBI), the progression of coronary atherosclerosis was less in the treated group, but regression was induced in only a few patients. The overall relative risks of progression and of regression of coronary atherosclerosis were 0.62 and 2.13, respectively. The induced angiographic differences were small and did not produce any significant haemodynamic benefit. The most important result was that the disease process could be stabilized in the majority of patients. Three comprehensive lifestyle change trials (the Lifestyle Heart study, STARS and the Heidelberg Study) studied 183 patients, who were subjected to stress management, and/or intensive exercise, in addition to a low fat diet, over a period ranging from 1 to 3 years. All three trials demonstrated less progression, and more regression, with overall relative risks of 0.40 and 2.35 respectively, in the intervention groups. Angiographic trials demonstrated that retardation or arrest of coronary atherosclerosis was possible

  5. Nanodispersion, nonlinear image filtering, and materials classification

    NASA Astrophysics Data System (ADS)

    Crosta, Giovanni F.; Lee, Jun S.

    2011-06-01

    Polyethylene terephthalate-alumina nano-composites from two production processes gave rise to materials H and T, further divided into four and three classes, respectively. Electron microscope images of the materials had been visually scored by an expert in terms of an index, β, aimed at assessing filler dispersion and distribution. These properties characterize the nano-composite. Herewith a classification algorithm which includes image spatial differentiation and non-linear filtering interlaced with multivariate statistics is applied to the same images of materials H and T. The classification algorithm depends on a few parameters, which are automatically determined by maximizing a figure of merit in the supervised training stage. The classifier output is a display on the plane of the first two principal components. By regressing the 1st principal component affinely against β, a remarkable agreement is found between automated classification and visual scoring of material H. The regression result for material T is not significant, because the assigned classes reduce from 3 to 2, both by visual and automated scoring. The output from the non-linear image filter can be related to filler dispersion and distribution.

  6. Representation of exposures in regression analysis and interpretation of regression coefficients: basic concepts and pitfalls.

    PubMed

    Leffondré, Karen; Jager, Kitty J; Boucquemont, Julie; Stel, Vianda S; Heinze, Georg

    2014-10-01

    Regression models are being used to quantify the effect of an exposure on an outcome, while adjusting for potential confounders. While the type of regression model to be used is determined by the nature of the outcome variable, e.g. linear regression has to be applied for continuous outcome variables, all regression models can handle any kind of exposure variables. However, some fundamentals of representation of the exposure in a regression model and also some potential pitfalls have to be kept in mind in order to obtain meaningful interpretation of results. The objective of this educational paper was to illustrate these fundamentals and pitfalls, using various multiple regression models applied to data from a hypothetical cohort of 3000 patients with chronic kidney disease. In particular, we illustrate how to represent different types of exposure variables (binary, categorical with two or more categories and continuous), and how to interpret the regression coefficients in linear, logistic and Cox models. We also discuss the linearity assumption in these models, and show how wrongly assuming linearity may produce biased results and how flexible modelling using spline functions may provide better estimates.

  7. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the use of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model for Turkey. The considered methods have been compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  8. Using Edge Voxel Information to Improve Motion Regression for rs-fMRI Connectivity Studies.

    PubMed

    Patriat, Rémi; Molloy, Erin K; Birn, Rasmus M

    2015-11-01

    Recent fMRI studies have outlined the critical impact of in-scanner head motion, particularly on estimates of functional connectivity. Common strategies to reduce the influence of motion include realignment as well as the inclusion of nuisance regressors, such as the 6 realignment parameters, their first derivatives, time-shifted versions of the realignment parameters, and the squared parameters. However, these regressors have limited success at noise reduction. We hypothesized that using nuisance regressors consisting of the principal components (PCs) of edge voxel time series would be better able to capture slice-specific and nonlinear signal changes, thus explaining more variance, improving data quality (i.e., lower DVARS and temporal SNR), and reducing the effect of motion on default-mode network connectivity. Functional MRI data from 22 healthy adult subjects were preprocessed using typical motion regression approaches as well as nuisance regression derived from edge voxel time courses. Results were evaluated in the presence and absence of both global signal regression and motion censoring. Nuisance regressors derived from signal intensity time courses at the edge of the brain significantly improved motion correction compared to using only the realignment parameters and their derivatives. Of the models tested, only the edge voxel regression models were able to eliminate significant differences in default-mode network connectivity between high- and low-motion subjects regardless of the use of global signal regression or censoring.
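
    Conceptually, the nuisance strategy can be sketched as follows on synthetic data (this is not the authors' pipeline; the array names and component count are illustrative): extract the leading principal components of the edge-voxel time courses and regress them out of every in-brain voxel time series.

      # Conceptual sketch on synthetic data (not the authors' pipeline): take the
      # leading principal components of edge-voxel time courses and regress them
      # out of every in-brain voxel time series.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(4)
      T = 200
      motion = np.cumsum(rng.normal(size=(T, 1)), axis=0)          # shared motion-like artifact
      edge = 2.0 * motion + rng.normal(size=(T, 50))               # edge voxels: mostly artifact
      brain = 0.5 * motion + rng.normal(size=(T, 500))             # brain voxels: signal + artifact

      pcs = PCA(n_components=5).fit_transform(edge)                # nuisance regressors
      design = np.column_stack([np.ones(T), pcs])
      beta, *_ = np.linalg.lstsq(design, brain, rcond=None)
      cleaned = brain - design @ beta                              # residuals = denoised time series
      print("voxel variance before/after:", round(brain.var(), 2), round(cleaned.var(), 2))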

  9. Nonlinear optomechanics with graphene

    NASA Astrophysics Data System (ADS)

    Shaffer, Airlia; Patil, Yogesh Sharad; Cheung, Hil F. H.; Wang, Ke; Vengalattore, Mukund

    2016-05-01

    To date, studies of cavity optomechanics have been limited to exploiting the linear interactions between the light and mechanics. However, investigations of quantum signal transduction, quantum-enhanced metrology and many-body physics with optomechanics each require strong, nonlinear interactions. Graphene nanomembranes are an exciting prospect for realizing such studies due to their inherently nonlinear nature and low mass. We fabricate large graphene nanomembranes and study their mechanical and optical properties. By using dark ground imaging techniques, we correlate their eigenmode shapes with the measured dissipation. We study their hysteretic response present even at low driving amplitudes, and their nonlinear dissipation. Finally, we discuss ongoing efforts to use these resonators for studies of quantum optomechanics and force sensing. This work is supported by the DARPA QuASAR program through a grant from the ARO.

  10. Competing risks regression for clustered data.

    PubMed

    Zhou, Bingqing; Fine, Jason; Latouche, Aurelien; Labopin, Myriam

    2012-07-01

    A population average regression model is proposed to assess the marginal effects of covariates on the cumulative incidence function when there is dependence across individuals within a cluster in the competing risks setting. This method extends the Fine-Gray proportional hazards model for the subdistribution to situations, where individuals within a cluster may be correlated due to unobserved shared factors. Estimators of the regression parameters in the marginal model are developed under an independence working assumption where the correlation across individuals within a cluster is completely unspecified. The estimators are consistent and asymptotically normal, and variance estimation may be achieved without specifying the form of the dependence across individuals. A simulation study evidences that the inferential procedures perform well with realistic sample sizes. The practical utility of the methods is illustrated with data from the European Bone Marrow Transplant Registry.

  11. Competing risks regression for stratified data.

    PubMed

    Zhou, Bingqing; Latouche, Aurelien; Rocha, Vanderson; Fine, Jason

    2011-06-01

    For competing risks data, the Fine-Gray proportional hazards model for subdistribution has gained popularity for its convenience in directly assessing the effect of covariates on the cumulative incidence function. However, in many important applications, proportional hazards may not be satisfied, including multicenter clinical trials, where the baseline subdistribution hazards may not be common due to varying patient populations. In this article, we consider a stratified competing risks regression, to allow the baseline hazard to vary across levels of the stratification covariate. According to the relative size of the number of strata and strata sizes, two stratification regimes are considered. Using partial likelihood and weighting techniques, we obtain consistent estimators of regression parameters. The corresponding asymptotic properties and resulting inferences are provided for the two regimes separately. Data from a breast cancer clinical trial and from a bone marrow transplantation registry illustrate the potential utility of the stratified Fine-Gray model.

  12. Emptiness as defense in severe regressive states.

    PubMed

    LaFarge, L

    1989-01-01

    This paper examines the empty states experienced by severely ill borderline patients. At times of stressful regression, these patients use complaints of emptiness to describe profound disturbances of affect, cognition, object relations, and bodily experience. Empty states may be seen as complex defensive configurations which protect a borderline level of psychic structure from the impact of aggressively charged object relations, and ward off further regression to states of fragmentation or fusion. Severely ill borderline patients consolidate an empty screen by means of a characteristic repertoire of primitive defenses consisting of various forms of projective identification, including bitriangulation and projective identification of psychic agencies, somatization, acting out, and specific alterations in cognition. The author describes the highly deviant organizations of the object world seen in empty states, and the complex and disturbing countertransferences which these states evoke.

  13. Are increases in cigarette taxation regressive?

    PubMed

    Borren, P; Sutton, M

    1992-12-01

    Using the latest published data from Tobacco Advisory Council surveys, this paper re-evaluates the question of whether or not increases in cigarette taxation are regressive in the United Kingdom. The extended data set shows no evidence of increasing price-elasticity by social class as found in a major previous study. To the contrary, there appears to be no clear pattern in the price responsiveness of smoking behaviour across different social classes. Increases in cigarette taxation, while reducing smoking levels in all groups, fall most heavily on men and women in the lowest social class. Men and women in social class five can expect to pay eight and eleven times more of a tax increase respectively, than their social class one counterparts. Taken as a proportion of relative incomes, the regressive nature of increases in cigarette taxation is even more pronounced.

  14. Forecasting volatility with neural regression: a contribution to model adequacy.

    PubMed

    Refenes, A N; Holt, W T

    2001-01-01

    Neural nets' usefulness for forecasting is limited by problems of overfitting and the lack of rigorous procedures for model identification, selection and adequacy testing. This paper describes a methodology for neural model misspecification testing. We introduce a generalization of the Durbin-Watson statistic for neural regression and discuss the general issues of misspecification testing using residual analysis. We derive a generalized influence matrix for neural estimators which enables us to evaluate the distribution of the statistic. We deploy Monte Carlo simulation to compare the power of the test for neural and linear regressors. While residual testing is not a sufficient condition for model adequacy, it is nevertheless a necessary condition to demonstrate that the model is a good approximation to the data generating process, particularly as neural-network estimation procedures are susceptible to partial convergence. The work is also an important step toward developing rigorous procedures for neural model identification, selection and adequacy testing which have started to appear in the literature. We demonstrate its applicability in the nontrivial problem of forecasting implied volatility innovations using high-frequency stock index options. Each step of the model building process is validated using statistical tests to verify variable significance and model adequacy with the results confirming the presence of nonlinear relationships in implied volatility innovations.
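
    A minimal sketch in the same spirit (using the standard Durbin-Watson statistic on neural-network residuals, rather than the authors' generalized statistic and influence-matrix correction) might look like:

      # Minimal residual-diagnostics sketch in the same spirit (ordinary
      # Durbin-Watson on neural-net residuals, not the paper's generalized
      # statistic or influence-matrix correction).
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from statsmodels.stats.stattools import durbin_watson

      rng = np.random.default_rng(5)
      x = np.linspace(0, 4 * np.pi, 300)
      y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

      net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
      net.fit(x.reshape(-1, 1), y)
      residuals = y - net.predict(x.reshape(-1, 1))

      # values far from 2 indicate serially correlated residuals, i.e. possible misspecification
      print("Durbin-Watson:", round(durbin_watson(residuals), 2))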

  15. Multiple regression analyses in the prediction of aerospace instrument costs

    NASA Astrophysics Data System (ADS)

    Tran, Linh

    The aerospace industry has been investing for decades in ways to improve its efficiency in estimating the project life cycle cost (LCC). One of the major focuses in the LCC is the cost prediction of aerospace instruments performed during the early conceptual design phase of the project. The accuracy of early cost predictions affects the project scheduling and funding, and it is often the major cause for project cost overruns. The prediction of instruments' cost is based on the statistical analysis of these independent variables: Mass (kg), Power (watts), Instrument Type, Technology Readiness Level (TRL), Destination: earth orbiting or planetary, Data rates (kbps), Number of bands, Number of channels, Design life (months), and Development duration (months). This author proposes a cost prediction approach for aerospace instruments based on these statistical analyses: Clustering Analysis, Principal Components Analysis (PCA), Bootstrap, and multiple regressions (both linear and non-linear). In the proposed approach, the Cost Estimating Relationship (CER) will be developed for the dependent variable Instrument Cost by using a combination of multiple independent variables. "The Full Model" will be developed and executed to estimate the full set of nine variables. The SAS program, Excel, Automatic Cost Estimating Integrate Tool (ACEIT) and Minitab are the tools to aid the analysis. Through the analysis, the cost drivers will be identified, which will help develop an ultimate cost estimating software tool for Instrument Cost prediction and optimization of future missions.

  16. Modeling confounding by half-sibling regression.

    PubMed

    Schölkopf, Bernhard; Hogg, David W; Wang, Dun; Foreman-Mackey, Daniel; Janzing, Dominik; Simon-Gabriel, Carl-Johann; Peters, Jonas

    2016-07-01

    We describe a method for removing the effect of confounders to reconstruct a latent quantity of interest. The method, referred to as "half-sibling regression," is inspired by recent work in causal inference using additive noise models. We provide a theoretical justification, discussing both independent and identically distributed as well as time series data, respectively, and illustrate the potential of the method in a challenging astronomy application. PMID:27382154
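
    A minimal half-sibling regression sketch under assumed notation (a target series shares an instrumental systematic with "sibling" series that do not contain its signal) is: regress the target on the siblings and keep the residual as the estimate of the latent quantity.

      # Minimal half-sibling regression sketch with assumed variable names
      # (target = series of interest, siblings = series that share the systematics
      # but not the target's signal): regress the target on the siblings and keep
      # the residual as the estimate of the latent quantity.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(6)
      T = 500
      systematics = np.cumsum(rng.normal(size=(T, 1)), axis=0)     # shared confounder
      siblings = systematics @ rng.normal(size=(1, 20)) + rng.normal(scale=0.3, size=(T, 20))
      signal = 0.5 * np.sin(np.linspace(0, 20, T))                 # latent quantity of interest
      target = signal + systematics[:, 0] + rng.normal(scale=0.1, size=T)

      model = LinearRegression().fit(siblings, target)
      recovered = target - model.predict(siblings)                 # residual ≈ signal, up to an offset
      print("corr(recovered, signal):", round(np.corrcoef(recovered, signal)[0, 1], 3))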

  17. Model selection for logistic regression models

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2012-09-01

    Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. The second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions will be answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.

  18. Modeling confounding by half-sibling regression.

    PubMed

    Schölkopf, Bernhard; Hogg, David W; Wang, Dun; Foreman-Mackey, Daniel; Janzing, Dominik; Simon-Gabriel, Carl-Johann; Peters, Jonas

    2016-07-01

    We describe a method for removing the effect of confounders to reconstruct a latent quantity of interest. The method, referred to as "half-sibling regression," is inspired by recent work in causal inference using additive noise models. We provide a theoretical justification, discussing both independent and identically distributed as well as time series data, respectively, and illustrate the potential of the method in a challenging astronomy application.

  19. Modeling confounding by half-sibling regression

    PubMed Central

    Schölkopf, Bernhard; Hogg, David W.; Wang, Dun; Foreman-Mackey, Daniel; Janzing, Dominik; Simon-Gabriel, Carl-Johann; Peters, Jonas

    2016-01-01

    We describe a method for removing the effect of confounders to reconstruct a latent quantity of interest. The method, referred to as “half-sibling regression,” is inspired by recent work in causal inference using additive noise models. We provide a theoretical justification, discussing both independent and identically distributed as well as time series data, respectively, and illustrate the potential of the method in a challenging astronomy application. PMID:27382154

  20. Realization of Ridge Regression in MATLAB

    NASA Astrophysics Data System (ADS)

    Dimitrov, S.; Kovacheva, S.; Prodanova, K.

    2008-10-01

    The least squares estimator (LSE) of the coefficients in the classical linear regression model is unbiased. In the case of multicollinearity among the columns of the design matrix, the LSE has a very large variance, i.e., the estimator is unstable. A more stable (but biased) estimator can be constructed using the ridge estimator (RE). In this paper the basic methods of obtaining ridge estimators and numerical procedures for their realization in MATLAB are considered. An application to a pharmacokinetics problem is considered.
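
    The ridge estimator referred to here is β̂(k) = (X'X + kI)^(-1) X'y; a compact sketch (shown below in Python/NumPy rather than MATLAB) illustrates how it stabilizes the coefficients of a nearly collinear design:

      # Compact ridge-estimator sketch, shown in Python/NumPy rather than MATLAB:
      # beta_ridge(k) = (X'X + k*I)^(-1) X'y on a nearly collinear design.
      import numpy as np

      def ridge(X, y, k):
          p = X.shape[1]
          return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

      rng = np.random.default_rng(7)
      x1 = rng.normal(size=100)
      X = np.column_stack([x1, x1 + rng.normal(scale=0.01, size=100)])   # two nearly identical columns
      y = 1.0 * x1 + rng.normal(scale=0.5, size=100)

      print("k = 0   (OLS):  ", ridge(X, y, 0.0).round(2))   # unstable under multicollinearity
      print("k = 0.1 (ridge):", ridge(X, y, 0.1).round(2))   # shrunken, more stable estimates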

  1. Transfer Learning Based on Logistic Regression

    NASA Astrophysics Data System (ADS)

    Paul, A.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    In this paper we address the problem of classification of remote sensing images in the framework of transfer learning with a focus on domain adaptation. The main novel contribution is a method for transductive transfer learning in remote sensing on the basis of logistic regression. Logistic regression is a discriminative probabilistic classifier of low computational complexity, which can deal with multiclass problems. This research area deals with methods that solve problems in which labelled training data sets are assumed to be available only for a source domain, while classification is needed in the target domain with different, yet related characteristics. Classification takes place with a model of weight coefficients for hyperplanes which separate features in the transformed feature space. In terms of logistic regression, our domain adaptation method adjusts the model parameters by iterative labelling of the target test data set. These labelled data features are iteratively added to the current training set which, at the beginning, only contains source features and, simultaneously, a number of source features are deleted from the current training set. Experimental results based on a test series with synthetic and real data constitute a first proof-of-concept of the proposed method.
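
    A rough sketch of the iterative self-labelling loop (a simplified version with invented parameters, not the authors' exact update rules) is given below: train on source labels, repeatedly add the most confidently classified target samples with their predicted labels, and drop some source samples each round.

      # Simplified sketch of the iterative self-labelling loop (our own loop with
      # invented parameters, not the authors' exact update rules).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def adapt(Xs, ys, Xt, rounds=5, add_per_round=20, drop_per_round=20):
          clf = LogisticRegression(max_iter=1000)
          src_idx = list(range(len(ys)))           # source samples still in the training set
          tgt_idx, tgt_lab = [], []                # self-labelled target samples added so far
          unused = set(range(len(Xt)))
          for _ in range(rounds):
              X_train = np.vstack([Xs[src_idx], Xt[tgt_idx]]) if tgt_idx else Xs[src_idx]
              y_train = np.concatenate([ys[src_idx], tgt_lab]) if tgt_idx else ys[src_idx]
              clf.fit(X_train, y_train)
              proba = clf.predict_proba(Xt)
              ranked = [i for i in np.argsort(proba.max(axis=1))[::-1] if i in unused]
              for i in ranked[:add_per_round]:     # most confident, not yet used target samples
                  tgt_idx.append(i)
                  tgt_lab.append(int(clf.classes_[proba[i].argmax()]))
                  unused.discard(i)
              src_idx = src_idx[drop_per_round:]   # remove some source samples
          return clf

      rng = np.random.default_rng(8)
      Xs = rng.normal(size=(200, 2))
      ys = (Xs[:, 0] > 0).astype(int)
      Xt = rng.normal(loc=0.5, size=(200, 2))      # shifted (related) target domain
      print(adapt(Xs, ys, Xt).predict(Xt[:5]))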

  2. Face Alignment via Regressing Local Binary Features.

    PubMed

    Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian

    2016-03-01

    This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves the state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features are computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozens of landmarks. We also study a key issue that is important but has received little attention in the previous research, which is the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation on how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.

  3. Satellite rainfall retrieval by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.

    1986-01-01

    The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in terms of radiances from different remote sensors. The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome from the logistic model is the probability that the rainrate of a satellite pixel is above a certain threshold. By varying the thresholds, a rainrate histogram can be obtained, from which the mean and the variance can be estimated. A logistic model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement which is deduced from a microwave temperature-rainrate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistic model, simulated rain fields generated by rainfield models with prescribed parameters are needed. A stringent test of the logistic model is its ability to recover the prescribed parameters of simulated rain fields. A rain field simulation model which preserves the fractional rain area and lognormality of rainrates as found in GATE is developed. A stochastic regression model of branching and immigration whose solutions are lognormally distributed in some asymptotic limits has also been developed.

  4. General Regression and Representation Model for Classification

    PubMed Central

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2014-01-01

    Recently, the regularized coding-based classification methods (e.g. SRC and CRC) show a great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms. PMID:25531882

  5. Engineered nonlinear lattices.

    PubMed

    Clausen, C B; Christiansen, P L; Torner, L; Gaididei, Y B

    1999-11-01

    We show that with the quasi-phase-matching technique it is possible to fabricate stripes of nonlinearity that trap and guide light like waveguides. We investigate an array of such stripes and find that when the stripes are sufficiently narrow, the beam dynamics is governed by a quadratic nonlinear discrete equation. The proposed structure therefore provides an experimental setting for exploring discrete effects in a controlled manner. In particular, we show propagation of breathers that are eventually trapped by discreteness. When the stripes are wide the beams evolve in a structure we term a quasilattice, which interpolates between a lattice system and a continuous system. PMID:11970457

  6. Nonlinear aerodynamic wing design

    NASA Technical Reports Server (NTRS)

    Bonner, Ellwood

    1985-01-01

    The applicability of new nonlinear theoretical techniques is demonstrated for supersonic wing design. The new technology was utilized to define outboard panels for an existing advanced tactical fighter model. Mach 1.6 maneuver point design and multi-operating point compromise surfaces were developed and tested. High aerodynamic efficiency was achieved at the design conditions. A corollary result was that only modest supersonic penalties were incurred to meet multiple aerodynamic requirements. The nonlinear potential analysis of a practical configuration arrangement correlated well with experimental data.

  7. Nonlinear trajectory navigation

    NASA Astrophysics Data System (ADS)

    Park, Sang H.

    Trajectory navigation entails the solution of many different problems that arise due to uncertain knowledge of the spacecraft state, including orbit prediction, correction maneuver design, and trajectory estimation. In practice, these problems are usually solved based on an assumption that linear dynamical models sufficiently approximate the local trajectory dynamics and their associated statistics. However, astrodynamics problems are nonlinear in general and linear spacecraft dynamics models can fail to characterize the true trajectory dynamics when the system is subject to a highly unstable environment or when mapped over a long time period. This limits the performance of traditional navigation techniques and can make it difficult to perform precision analysis or robust navigation. This dissertation presents an alternate method for spacecraft trajectory navigation based on a nonlinear local trajectory model and their statistics in an analytic framework. For a given reference trajectory, we first solve for the higher order Taylor series terms that describe the localized nonlinear motion and develop an analytic expression for the relative solution flow. We then discuss the nonlinear dynamical mapping of a spacecraft's probability density function by solving the Fokker-Planck equation for a deterministic system. From this result we derive an analytic method for orbit uncertainty propagation which can replicate Monte-Carlo simulations with the benefit of added flexibility in initial orbit statistics. Using this approach, we introduce the concept of the statistically correct trajectory where we directly incorporate statistical information about an orbit state into the trajectory design process. As an extension of this concept, we define a nonlinear statistical targeting method where we solve for a correction maneuver which intercepts the desired target on average. Then we apply our results to a Bayesian filtering problem to obtain a general filtering algorithm for

  8. Improving near-infrared prediction model robustness with support vector machine regression: a pharmaceutical tablet assay example.

    PubMed

    Igne, Benoît; Drennen, James K; Anderson, Carl A

    2014-01-01

    Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences that can come from differences in particle size or density. The usefulness of support vector machine (SVM) regression for handling nonlinearity and improving the robustness of calibration models, in scenarios where the calibration set did not include all the variability present in the test set, was evaluated. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work is yet to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods. PMID:25358108
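
    As a rough illustration of the comparison described above, the sketch below fits scikit-learn's PLSRegression and SVR to synthetic "spectra" whose test set carries a path-length-like shift absent from calibration; the data sizes, the simulated nonlinearity, and the hyperparameters are invented and do not reproduce the tablet assay.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(0)
      n_cal, n_test, n_wl = 120, 60, 200

      # Synthetic "spectra"; the test set carries a multiplicative (path-length-like)
      # shift that the calibration set never saw.
      X_cal = rng.normal(size=(n_cal, n_wl))
      y_cal = X_cal[:, 10] + 0.5 * X_cal[:, 50]**2 + rng.normal(scale=0.1, size=n_cal)
      X_test = 1.3 * rng.normal(size=(n_test, n_wl))
      y_test = X_test[:, 10] + 0.5 * X_test[:, 50]**2 + rng.normal(scale=0.1, size=n_test)

      pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
      svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05)).fit(X_cal, y_cal)

      for name, model in (("PLS", pls), ("SVR", svr)):
          rmsep = mean_squared_error(y_test, model.predict(X_test).ravel()) ** 0.5
          print(f"{name} RMSEP on shifted test set: {rmsep:.3f}")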

  9. Quantile regression modeling for Malaysian automobile insurance premium data

    NASA Astrophysics Data System (ADS)

    Fuzi, Mohd Fadzli Mohd; Ismail, Noriszura; Jemain, Abd Aziz

    2015-09-01

    Quantile regression is more robust to outliers than mean regression models. Traditional mean regression models such as the Generalized Linear Model (GLM) are not able to capture the entire distribution of premium data. In this paper we demonstrate how a quantile regression approach can be used to model net premium data to study the effects of changes in the estimates of regression parameters (rating classes) on the magnitude of the response variable (pure premium). We then compare the results of the quantile regression model with a Gamma regression model. The results from quantile regression show that the estimates for some rating classes increase with increasing quantile while others decrease. Further, we found that the confidence interval of median regression (τ = 0.5) is always smaller than that of Gamma regression for all risk factors.
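
    A toy statsmodels sketch of the comparison described above, fitting a Gamma GLM and quantile regressions at several quantiles; the covariates, the simulated premiums, and the chosen quantiles are illustrative assumptions rather than the paper's Malaysian data.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 500
      df = pd.DataFrame({"vehicle_age": rng.integers(0, 15, n),
                         "engine_cc": rng.normal(1600, 300, n)})
      # Skewed, strictly positive "pure premium" (toy data, not the Malaysian portfolio).
      mu = np.exp(5 + 0.03 * df["vehicle_age"] + 0.0004 * df["engine_cc"])
      df["premium"] = rng.gamma(shape=2.0, scale=mu.to_numpy() / 2.0)

      # Gamma GLM with log link models only the conditional mean of the premium.
      gamma_fit = smf.glm("premium ~ vehicle_age + engine_cc", data=df,
                          family=sm.families.Gamma(link=sm.families.links.Log())).fit()
      print(f"Gamma GLM vehicle_age (log scale): {gamma_fit.params['vehicle_age']:.4f}")

      # Quantile regression traces how a rating factor's effect changes across the distribution
      # (coefficients here are on the raw premium scale, unlike the log-link GLM).
      for q in (0.25, 0.5, 0.75):
          qfit = smf.quantreg("premium ~ vehicle_age + engine_cc", data=df).fit(q=q)
          print(f"tau={q}: vehicle_age: {qfit.params['vehicle_age']:.4f}")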

  10. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    Over the past decade, computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks have been proposed for estimating human affective states from direct observations, including facial, vocal, gestural, physiological, and central nervous signals. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly rely on complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects' affective states and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method obtains correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal originate in the brain's motivational circuits. Thus, the proposed method can serve as a novel tool for efficiently estimating human affective states.
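
    The core mechanics, a polynomial basis expansion of the physiological inputs followed by least squares, can be sketched with scikit-learn as below; the skin-conductance features, targets, and polynomial order are placeholders, not the IAPS experiment.

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      n = 30                                    # e.g. one affective pattern per subject
      X = rng.normal(size=(n, 3))               # placeholder skin-conductance features
      valence = (1 + 2 * X[:, 0] - 0.8 * X[:, 1] * X[:, 2]
                 + 0.5 * X[:, 0]**2 + rng.normal(scale=0.2, size=n))

      # Higher-order multivariable polynomial regression = polynomial basis expansion
      # (powers and cross terms) followed by ordinary least squares; the order is a tuning choice.
      model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
      print(f"CV R^2: {cross_val_score(model, X, valence, cv=5).mean():.2f}")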

  11. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States.

    PubMed

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-01-01

    From direct observations, facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural network, have been proposed in the past decade. In these models, linear models are generally lack of precision because of ignoring intrinsic nonlinearities of complex psychophysiological processes; and nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify model, we introduce a new computational modeling method named as higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures in the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidences that valence and arousal have their brain's motivational circuit origins. Thus, the proposed method can serve as a novel one for efficiently estimating human affective states. PMID:26996254

  12. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    PubMed Central

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-01-01

    From direct observations, facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural network, have been proposed in the past decade. In these models, linear models are generally lack of precision because of ignoring intrinsic nonlinearities of complex psychophysiological processes; and nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify model, we introduce a new computational modeling method named as higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures in the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidences that valence and arousal have their brain’s motivational circuit origins. Thus, the proposed method can serve as a novel one for efficiently estimating human affective states. PMID:26996254

  13. Flexible Link Functions in Nonparametric Binary Regression with Gaussian Process Priors

    PubMed Central

    Li, Dan; Lin, Lizhen; Dey, Dipak K.

    2015-01-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction of future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing a nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. PMID:26686333
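
    To see why a GEV link adds flexibility, the short sketch below evaluates scipy's GEV distribution function as a candidate inverse link for a few shape parameters and contrasts it with the symmetric probit; the parameterization and shape values are chosen for illustration and are not taken from the paper.

      import numpy as np
      from scipy.stats import genextreme, norm

      eta = np.linspace(-3, 3, 7)                 # values of the linear predictor

      # Probit (like logit) is symmetric: the curve approaches 0 and 1 at the same rate.
      print("probit:", norm.cdf(eta).round(3))

      # A GEV CDF used as the inverse link is asymmetric, and its shape parameter lets
      # the data choose the degree and direction of skewness instead of fixing it.
      for c in (-0.4, 0.0, 0.4):                  # scipy's GEV shape parameter (c = 0 is the Gumbel limit)
          print(f"GEV c={c:+.1f}:", genextreme.cdf(eta, c).round(3))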

  14. Flexible link functions in nonparametric binary regression with Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K

    2016-09-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. PMID:26686333

  15. Noise model based ν-support vector regression with its application to short-term wind speed forecasting.

    PubMed

    Hu, Qinghua; Zhang, Shiguang; Xie, Zongxia; Mi, Jusheng; Wan, Jie

    2014-09-01

    Support vector regression (SVR) techniques are aimed at discovering a linear or nonlinear structure hidden in sample data. Most existing regression techniques assume that the error distribution is Gaussian. However, it has been observed that the noise in some real-world applications, such as wind power forecasting and direction-of-arrival estimation, does not follow a Gaussian distribution, but rather a beta distribution, a Laplacian distribution, or another model. In these cases the current regression techniques are not optimal. Following the Bayesian approach, we derive a general loss function and develop a unified ν-support vector regression technique for the general noise model (N-SVR). The Augmented Lagrange Multiplier method is introduced to solve N-SVR. Numerical experiments on artificial data sets, UCI data, and short-term wind speed prediction are conducted. The results show the effectiveness of the proposed technique.
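
    A minimal sketch of one-step-ahead wind speed prediction from lagged values using scikit-learn's standard NuSVR (plain ε-insensitive loss, not the paper's general-noise N-SVR); the synthetic series, lag order, and hyperparameters are assumptions.

      import numpy as np
      from sklearn.svm import NuSVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(3)
      hours = np.arange(600)
      speed = 8 + 2 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=1.0, size=hours.size)

      lags = 6                                    # predict the next hour from the previous 6 hours
      X = np.column_stack([speed[i:speed.size - lags + i] for i in range(lags)])
      y = speed[lags:]
      X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

      model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
      model.fit(X_train, y_train)
      print(f"test MAE: {mean_absolute_error(y_test, model.predict(X_test)):.3f}")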

  16. Nonlinear silicon photonics

    NASA Astrophysics Data System (ADS)

    Tsia, Kevin K.; Jalali, Bahram

    2010-05-01

    An intriguing optical property of silicon is that it exhibits a large third-order optical nonlinearity, orders of magnitude larger than that of silica glass in the telecommunication band. This allows efficient nonlinear optical interaction at relatively low power levels in a small footprint. Indeed, we have witnessed stunning progress in harnessing the Raman and Kerr effects in silicon as the mechanisms for enabling chip-scale optical amplification, lasing, and wavelength conversion - functions that until recently were perceived to be beyond the reach of silicon. With continued efforts to develop novel techniques, nonlinear silicon photonics is expected to reach even beyond these prior achievements. Instead of providing a comprehensive overview of this field, this manuscript highlights a number of new branches of nonlinear silicon photonics which have not been fully recognized in the past. In particular, they are the two-photon photovoltaic effect, mid-wave infrared (MWIR) silicon photonics, broadband Raman effects, inverse Raman scattering, and periodically-poled silicon (PePSi). These novel effects and techniques could create a new paradigm for silicon photonics and extend its utility beyond the traditionally anticipated applications.

  17. Intramolecular and nonlinear dynamics

    SciTech Connect

    Davis, M.J.

    1993-12-01

    Research in this program focuses on three interconnected areas. The first involves the study of intramolecular dynamics, particularly of highly excited systems. The second area involves the use of nonlinear dynamics as a tool for the study of molecular dynamics and complex kinetics. The third area is the study of the classical/quantum correspondence for highly excited systems, particularly systems exhibiting classical chaos.

  18. Generalized Nonlinear Yule Models

    NASA Astrophysics Data System (ADS)

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-10-01

    With the aim of considering models related to random graphs growth exhibiting persistent memory, we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth rates. Among the main results we derive the explicit distribution of the number of in-links of a webpage chosen uniformly at random recognizing the contribution to the asymptotics and the finite time correction. The mean value of the latter distribution is also calculated explicitly in the most general case. Furthermore, in order to show the usefulness of our results, we particularize them in the case of specific birth rates giving rise to a saturating behaviour, a property that is often observed in nature. The further specialization to the non-fractional case allows us to extend the Yule model accounting for a nonlinear growth.

  19. Nonlinear and Nonideal MHD

    SciTech Connect

    Callen, J. D.

    2002-11-04

    The primary efforts this year have focused on exploring the nonlinear evolution of localized interchange instabilities, some extensions of neoclassical tearing mode theory, and developing a model for the dynamic electrical conductivity in a bumpy cylinder magnetic field. In addition, we have vigorously participated in the computationally-focused NIMROD and CEMM projects.

  20. Universal nonlinear entanglement witnesses

    SciTech Connect

    Kotowski, Marcin; Kotowski, Michal

    2010-06-15

    We give a universal recipe for constructing nonlinear entanglement witnesses able to detect nonclassical correlations in arbitrary systems of distinguishable and/or identical particles for an arbitrary number of constituents. The constructed witnesses are expressed in terms of expectation values of observables. As such, they are, at least in principle, measurable in experiments.

  1. Nonlinear growing neutrino cosmology

    NASA Astrophysics Data System (ADS)

    Ayaita, Youness; Baldi, Marco; Führer, Florian; Puchwein, Ewald; Wetterich, Christof

    2016-03-01

    The energy scale of dark energy, ~2 × 10⁻³ eV, is a long way off compared to all known fundamental scales—except for the neutrino masses. If dark energy is dynamical and couples to neutrinos, this is no longer a coincidence. The time at which dark energy starts to behave as an effective cosmological constant can be linked to the time at which the cosmic neutrinos become nonrelativistic. This naturally places the onset of the Universe's accelerated expansion in recent cosmic history, addressing the why-now problem of dark energy. We show that these mechanisms indeed work in the growing neutrino quintessence model—even if the fully nonlinear structure formation and backreaction are taken into account, which were previously suspected of spoiling the cosmological evolution. The attractive force between neutrinos arising from their coupling to dark energy grows as large as 10⁶ times the gravitational strength. This induces very rapid dynamics of neutrino fluctuations which are nonlinear at redshift z ≈ 2. Nevertheless, a nonlinear stabilization phenomenon ensures only mildly nonlinear oscillating neutrino overdensities with a large-scale gravitational potential substantially smaller than that of cold dark matter perturbations. Depending on model parameters, the signals of large-scale neutrino lumps may render the cosmic neutrino background observable.

  2. Teaching the Nonlinear Pendulum.

    ERIC Educational Resources Information Center

    Zheng, T. F.; And Others

    1994-01-01

    Emphasizes two aspects for a calculus-based physics course: applying calculus and numerical integral methods to determine the theoretical period of a pendulum with nonlinear motion, and achieving theoretical and experimental results by using "MathCad" software and a microcomputer-based laboratory (MBL) system. (MVL)
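
    The period computation referred to above can be reproduced numerically; the sketch below compares the small-angle period with the exact nonlinear period, written via the complete elliptic integral, for a few amplitudes (pendulum length and amplitudes are arbitrary).

      import numpy as np
      from scipy.special import ellipk   # complete elliptic integral of the first kind K(m), m = k^2

      g, L = 9.81, 1.0                   # arbitrary pendulum length
      T0 = 2 * np.pi * np.sqrt(L / g)    # small-angle (linear) period

      for theta0_deg in (5, 30, 60, 90, 150):
          theta0 = np.radians(theta0_deg)
          # Exact period of the undamped nonlinear pendulum:
          # T = 4*sqrt(L/g)*K(sin^2(theta0/2))   (scipy's ellipk takes m = k^2)
          T = 4 * np.sqrt(L / g) * ellipk(np.sin(theta0 / 2)**2)
          print(f"{theta0_deg:3d} deg: T/T0 = {T/T0:.4f}")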

  3. Nonlinear Theory and Breakdown

    NASA Technical Reports Server (NTRS)

    Smith, Frank

    2007-01-01

    The main points of recent theoretical and computational studies on boundary-layer transition and turbulence are to be highlighted. The work is based on high Reynolds numbers and attention is drawn to nonlinear interactions, breakdowns and scales. The research focuses in particular on truly nonlinear theories, i.e. those for which the mean-flow profile is completely altered from its original state. There appear to be three such theories dealing with unsteady nonlinear pressure-displacement interactions (I), with vortex/wave interactions (II), and with Euler-scale flows (III). Specific recent findings noted for these three, and in quantitative agreement with experiments, are the following. Nonlinear finite-time break-ups occur in I, leading to sublayer eruption and vortex formation; here the theory agrees with experiments (Nishioka) regarding the first spike. II gives rise to finite-distance blowup of displacement thickness, then interaction and break-up as above; this theory agrees with experiments (Klebanoff, Nishioka) on the formation of three-dimensional streets. III leads to the prediction of turbulent boundary-layer micro-scale, displacement-and stress-sublayer-thicknesses.

  4. Analyzing Historical Count Data: Poisson and Negative Binomial Regression Models.

    ERIC Educational Resources Information Center

    Beck, E. M.; Tolnay, Stewart E.

    1995-01-01

    Asserts that traditional approaches to multivariate analysis, including standard linear regression techniques, ignore the special character of count data. Explicates three suitable alternatives to standard regression techniques, a simple Poisson regression, a modified Poisson regression, and a negative binomial model. (MJP)
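
    A brief statsmodels sketch of the two count-data alternatives named above, fitted to simulated overdispersed counts; the single covariate and the dispersion level are invented for illustration.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 400
      x = rng.normal(size=n)
      X = sm.add_constant(x)

      # Overdispersed counts: variance = mu + mu^2/2, which violates the Poisson
      # assumption that the variance equals the mean.
      mu = np.exp(0.5 + 0.8 * x)
      counts = rng.negative_binomial(n=2, p=2 / (2 + mu))

      poisson_fit = sm.Poisson(counts, X).fit(disp=0)
      negbin_fit = sm.NegativeBinomial(counts, X).fit(disp=0)
      print("Poisson coefficients:", poisson_fit.params)
      print("NegBin  coefficients:", negbin_fit.params)   # last entry is the dispersion parameter alpha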

  5. The Regression Trunk Approach to Discover Treatment Covariate Interaction

    ERIC Educational Resources Information Center

    Dusseldorp, Elise; Meulman, Jacqueline J.

    2004-01-01

    The regression trunk approach (RTA) is an integration of regression trees and multiple linear regression analysis. In this paper RTA is used to discover treatment covariate interactions, in the regression of one continuous variable on a treatment variable with "multiple" covariates. The performance of RTA is compared to the classical method of…

  6. Nonlinear Pricing in Energy and Environmental Markets

    NASA Astrophysics Data System (ADS)

    Ito, Koichiro

    This dissertation consists of three empirical studies on nonlinear pricing in energy and environmental markets. The first investigates how consumers respond to multi-tier nonlinear price schedules for residential electricity. Chapter 2 asks a similar research question for residential water pricing. Finally, I examine the effect of nonlinear financial rewards for energy conservation by applying a regression discontinuity design to a large-scale electricity rebate program that was implemented in California. Economic theory generally assumes that consumers respond to marginal prices when making economic decisions, but this assumption may not hold for complex price schedules. The chapter "Do Consumers Respond to Marginal or Average Price? Evidence from Nonlinear Electricity Pricing" provides empirical evidence that consumers respond to average price rather than marginal price when faced with nonlinear electricity price schedules. Nonlinear price schedules, such as progressive income tax rates and multi-tier electricity prices, complicate economic decisions by creating multiple marginal prices for the same good. Evidence from laboratory experiments suggests that consumers facing such price schedules may respond to average price as a heuristic. I empirically test this prediction using field data by exploiting price variation across a spatial discontinuity in electric utility service areas. The territory border of two electric utilities lies within several city boundaries in southern California. As a result, nearly identical households experience substantially different nonlinear electricity price schedules. Using monthly household-level panel data from 1999 to 2008, I find strong evidence that consumers respond to average price rather than marginal or expected marginal price. I show that even though this sub-optimizing behavior has a minimal impact on individual welfare, it can critically alter the policy implications of nonlinear pricing. The second chapter " How Do

  7. Nonlinear modelling and control for heart rate response to exercise.

    PubMed

    Zhang, Y; Chen, W; Su, S W; Celler, B

    2012-01-01

    In order to accurately regulate the cardiovascular response to exercise for individual exercisers, this study proposes an integrated modelling and control approach based on ε-insensitive Support Vector Regression (SVR) and a switching control strategy. Firstly, a control-oriented modelling approach is proposed to depict the nonlinear behaviour of the cardiovascular response at both the onset and offset of treadmill exercise by using support vector machine regression. Then, based on the established nonlinear time-variant model, a novel switching Model Predictive Control (MPC) algorithm is proposed for the optimisation of exercise effort. The designed controller can take into account both coefficient drifting and parameter jumps by embedding the identified model coefficients into the optimiser and adopting a switching strategy during the transition between the onset and offset of exercise. The effectiveness of the proposed modelling and control approach was demonstrated by regulating the dynamic heart rate response to exercise in simulations using MATLAB.

  8. Phase retrieval using nonlinear diversity.

    PubMed

    Lu, Chien-Hung; Barsi, Christopher; Williams, Matthew O; Kutz, J Nathan; Fleischer, Jason W

    2013-04-01

    We extend the Gerchberg-Saxton algorithm to phase retrieval in a nonlinear system. Using a tunable photorefractive crystal, we experimentally demonstrate the noninterferometric technique by reconstructing an unknown phase object from optical intensity measurements taken at different nonlinear strengths.
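
    For orientation, here is a sketch of the classical Gerchberg-Saxton loop with linear (plain FFT) propagation, which is the baseline the paper extends to nonlinear media; the phase object, grid size, and iteration count are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      N = 64
      obj = np.exp(1j * rng.uniform(-np.pi, np.pi, size=(N, N)))   # unit-amplitude phase object

      # "Measured" amplitudes in the two planes (linear propagation modelled as a plain FFT).
      amp_in = np.abs(obj)
      amp_out = np.abs(np.fft.fft2(obj))

      # Gerchberg-Saxton: alternate between planes, keeping the measured amplitude
      # and the current phase estimate in each plane.
      phase = np.zeros((N, N))
      for _ in range(200):
          field_in = amp_in * np.exp(1j * phase)
          field_out = np.fft.fft2(field_in)
          field_out = amp_out * np.exp(1j * np.angle(field_out))
          phase = np.angle(np.fft.ifft2(field_out))

      err = np.abs(np.abs(np.fft.fft2(amp_in * np.exp(1j * phase))) - amp_out).mean()
      print("residual amplitude error:", float(err))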

  9. Multiple linear regression for isotopic measurements

    NASA Astrophysics Data System (ADS)

    Garcia Alonso, J. I.

    2012-04-01

    There are two typical applications of isotopic measurements: the detection of natural variations in isotopic systems and the detection of man-made variations using enriched isotopes as indicators. For both types of measurements, accurate and precise isotope ratio measurements are required. For the so-called non-traditional stable isotopes, multicollector ICP-MS instruments are usually applied. In many cases, chemical separation procedures are required before accurate isotope measurements can be performed. The off-line separation of Rb and Sr or Nd and Sm is the classical procedure employed to eliminate isobaric interferences before multicollector ICP-MS measurement of Sr and Nd isotope ratios. Also, this procedure allows matrix separation, so that precise and accurate Sr and Nd isotope ratios can be obtained. In our laboratory we have evaluated the separation of Rb-Sr and Nd-Sm isobars by liquid chromatography and on-line multicollector ICP-MS detection. The combination of this chromatographic procedure with multiple linear regression of the raw chromatographic data resulted in Sr and Nd isotope ratios with precisions and accuracies typical of off-line sample preparation procedures. On the other hand, methods for the labelling of individual organisms (such as a given plant, fish or animal) are required for population studies. We have developed a dual isotope labelling procedure which can be unique to a given individual, can be inherited in living organisms, and is stable. The detection of the isotopic signature is also based on multiple linear regression. The labelling of fish and its detection in otoliths by Laser Ablation ICP-MS will be discussed using trout and salmon as examples. In conclusion, isotope measurement procedures based on multiple linear regression can be a viable alternative in multicollector ICP-MS measurements.

  10. Mapping geogenic radon potential by regression kriging.

    PubMed

    Pásztor, László; Szabó, Katalin Zsuzsanna; Szatmári, Gábor; Laborczi, Annamária; Horváth, Ákos

    2016-02-15

    Radon ((222)Rn) gas is produced in the radioactive decay chain of uranium ((238)U), an element that is naturally present in soils. Radon is transported mainly by diffusion and convection mechanisms through the soil, depending mainly on the physical and meteorological parameters of the soil, and can enter and accumulate in buildings. Health risks originating from indoor radon concentration can be attributed to natural factors and are characterized by the geogenic radon potential (GRP). Identification of areas with high health risks requires spatial modeling, that is, mapping of radon risk. In addition to geology and meteorology, physical soil properties play a significant role in the determination of GRP. In order to compile a reliable GRP map for a model area in central Hungary, spatial auxiliary information representing GRP-forming environmental factors was taken into account to support the spatial inference of the locally measured GRP values. Since the number of measured sites was limited, efficient spatial prediction methodologies were sought to construct a reliable map for a larger area. Regression kriging (RK) was applied for the interpolation, using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. Firstly, the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which is interpolated by kriging. The final map is the sum of the two component predictions. The overall accuracy of the map was tested by leave-one-out cross-validation. Furthermore, the spatial reliability of the resultant map is also estimated by the calculation of the 90% prediction interval of the local prediction values. The applicability of the applied method as well as that of the map is discussed briefly. PMID:26706761
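
    A schematic two-stage regression-kriging workflow, with a linear trend on covariates and a Gaussian process standing in for ordinary kriging of the residuals; the coordinates, covariates, and kernel settings are invented and are not the Hungarian GRP data.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(6)
      n = 150
      coords = rng.uniform(0, 100, size=(n, 2))          # site coordinates (e.g. km)
      covars = rng.normal(size=(n, 3))                   # stand-ins for soil/geology/terrain layers
      grp = (2 + covars @ np.array([1.0, -0.5, 0.3])
             + np.sin(coords[:, 0] / 15) + rng.normal(scale=0.2, size=n))

      # Stage 1: deterministic trend from the environmental covariates.
      trend = LinearRegression().fit(covars, grp)
      resid = grp - trend.predict(covars)

      # Stage 2: spatial interpolation of the correlated residuals from the coordinates.
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(0.05),
                                    normalize_y=True).fit(coords, resid)

      # Prediction at an unsampled site = regression trend + interpolated residual.
      new_covar, new_coord = np.array([[0.2, -0.1, 0.5]]), np.array([[50.0, 50.0]])
      print(trend.predict(new_covar) + gp.predict(new_coord))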

  11. Monthly streamflow forecasting using Gaussian Process Regression

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Wang, Dingbao; Xu, Xianli

    2014-04-01

    Streamflow forecasting plays a critical role in nearly all aspects of water resources planning and management. In this work, Gaussian Process Regression (GPR), an effective kernel-based machine learning algorithm, is applied to probabilistic streamflow forecasting. GPR is built on Gaussian process, which is a stochastic process that generalizes multivariate Gaussian distribution to infinite-dimensional space such that distributions over function values can be defined. The GPR algorithm provides a tractable and flexible hierarchical Bayesian framework for inferring the posterior distribution of streamflows. The prediction skill of the algorithm is tested for one-month-ahead prediction using the MOPEX database, which includes long-term hydrometeorological time series collected from 438 basins across the U.S. from 1948 to 2003. Comparisons with linear regression and artificial neural network models indicate that GPR outperforms both regression methods in most cases. The GPR prediction of MOPEX basins is further examined using the Budyko framework, which helps to reveal the close relationships among water-energy partitions, hydrologic similarity, and predictability. Flow regime modification and the resulting loss of predictability have been a major concern in recent years because of climate change and anthropogenic activities. The persistence of streamflow predictability is thus examined by extending the original MOPEX data records to 2012. Results indicate relatively strong persistence of streamflow predictability in the extended period, although the low-predictability basins tend to show more variations. Because many low-predictability basins are located in regions experiencing fast growth of human activities, the significance of sustainable development and water resources management can be even greater for those regions.
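
    A compact illustration of probabilistic one-month-ahead forecasting with a Gaussian process that returns both a mean and a predictive standard deviation; the synthetic seasonal flow series, lag structure, and kernel are assumptions, not the MOPEX setup.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

      rng = np.random.default_rng(7)
      months = np.arange(240)
      flow = 50 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(scale=5, size=months.size)

      # One-month-ahead: predict flow[t] from the previous 12 months.
      lags = 12
      X = np.column_stack([flow[i:flow.size - lags + i] for i in range(lags)])
      y = flow[lags:]
      X_train, y_train, X_test, y_test = X[:-24], y[:-24], X[-24:], y[-24:]

      kernel = ConstantKernel() * RBF(length_scale=10.0) + WhiteKernel(noise_level=25.0)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

      mean, std = gpr.predict(X_test, return_std=True)      # probabilistic forecast
      coverage = np.mean(np.abs(y_test - mean) < 1.96 * std) # empirical 95% interval coverage
      print(f"coverage of 95% intervals: {coverage:.2f}")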

  12. Mapping geogenic radon potential by regression kriging.

    PubMed

    Pásztor, László; Szabó, Katalin Zsuzsanna; Szatmári, Gábor; Laborczi, Annamária; Horváth, Ákos

    2016-02-15

    Radon ((222)Rn) gas is produced in the radioactive decay chain of uranium ((238)U), an element that is naturally present in soils. Radon is transported mainly by diffusion and convection mechanisms through the soil, depending mainly on the physical and meteorological parameters of the soil, and can enter and accumulate in buildings. Health risks originating from indoor radon concentration can be attributed to natural factors and are characterized by the geogenic radon potential (GRP). Identification of areas with high health risks requires spatial modeling, that is, mapping of radon risk. In addition to geology and meteorology, physical soil properties play a significant role in the determination of GRP. In order to compile a reliable GRP map for a model area in central Hungary, spatial auxiliary information representing GRP-forming environmental factors was taken into account to support the spatial inference of the locally measured GRP values. Since the number of measured sites was limited, efficient spatial prediction methodologies were sought to construct a reliable map for a larger area. Regression kriging (RK) was applied for the interpolation, using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. Firstly, the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which is interpolated by kriging. The final map is the sum of the two component predictions. The overall accuracy of the map was tested by leave-one-out cross-validation. Furthermore, the spatial reliability of the resultant map is also estimated by the calculation of the 90% prediction interval of the local prediction values. The applicability of the applied method as well as that of the map is discussed briefly.

  13. Convex Regression with Interpretable Sharp Partitions

    PubMed Central

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set.

  14. SPE dose prediction using locally weighted regression.

    PubMed

    Hines, J W; Townsend, L W; Nichols, T F

    2005-01-01

    When astronauts are outside Earth's protective magnetosphere, they are subject to large radiation doses resulting from solar particle events. The total dose received from a major solar particle event in deep space could cause severe radiation poisoning. The dose is usually received over a 20-40 h time interval but the event's effects may be reduced with an early warning system. This paper presents a method to predict the total dose early in the event. It uses a locally weighted regression model, which is easier to train, and provides predictions as accurate as the neural network models that were used previously. PMID:16604613
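
    A minimal locally weighted (kernel-weighted) linear regression sketch of the general kind referred to above; the dose-versus-time curve, Gaussian kernel, and bandwidth are placeholders rather than the authors' SPE model.

      import numpy as np

      def loess_point(x_query, x, y, tau=1.0):
          """Predict y at x_query with a locally weighted linear fit (Gaussian kernel)."""
          w = np.exp(-(x - x_query)**2 / (2 * tau**2))   # weights decay with distance from x_query
          X = np.column_stack([np.ones_like(x), x])
          W = np.diag(w)
          beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
          return beta[0] + beta[1] * x_query

      rng = np.random.default_rng(8)
      t = np.linspace(0, 40, 120)                        # hours into the event
      dose = 100 / (1 + np.exp(-(t - 15) / 4)) + rng.normal(scale=3, size=t.size)

      print(loess_point(10.0, t, dose), loess_point(30.0, t, dose))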

  15. An operational GLS model for hydrologic regression

    USGS Publications Warehouse

    Tasker, Gary D.; Stedinger, J.R.

    1989-01-01

    Recent Monte Carlo studies have documented the value of generalized least squares (GLS) procedures to estimate empirical relationships between streamflow statistics and physiographic basin characteristics. This paper presents a number of extensions of the GLS method that deal with realities and complexities of regional hydrologic data sets that were not addressed in the simulation studies. These extensions include: (1) a more realistic model of the underlying model errors; (2) smoothed estimates of cross correlation of flows; (3) procedures for including historical flow data; (4) diagnostic statistics describing leverage and influence for GLS regression; and (5) the formulation of a mathematical program for evaluating future gaging activities. © 1989.
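
    The GLS estimator at the core of these extensions can be written in a few lines; the error covariance below (constant model-error variance plus uniform cross-correlation) is a simple stand-in, not the operational structure developed in the paper.

      import numpy as np

      rng = np.random.default_rng(9)
      n_sites, p = 40, 3
      X = np.column_stack([np.ones(n_sites), rng.normal(size=(n_sites, p - 1))])  # basin characteristics
      beta_true = np.array([1.0, 0.6, -0.3])

      # Assumed error covariance: model-error variance on the diagonal plus
      # cross-correlation of concurrent flow records off the diagonal.
      Omega = 0.25 * np.eye(n_sites) + 0.05 * np.ones((n_sites, n_sites))
      y = X @ beta_true + rng.multivariate_normal(np.zeros(n_sites), Omega)

      # GLS estimator: beta_hat = (X' Omega^-1 X)^-1 X' Omega^-1 y
      Oi = np.linalg.inv(Omega)
      beta_gls = np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)
      cov_beta = np.linalg.inv(X.T @ Oi @ X)               # sampling covariance of the estimates
      print(beta_gls, np.sqrt(np.diag(cov_beta)))

    For a given full error covariance matrix, statsmodels' sm.GLS produces the same estimator.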

  16. Convex Regression with Interpretable Sharp Partitions

    PubMed Central

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120

  17. Triton,... electron,... cosmon,...: An infinite regression?

    PubMed

    Dehmelt, H

    1989-11-01

    I propose an elementary particle model in which the simplest near-Dirac particles triton, proton, and electron are members of the three top layers of a bottomless stack. Each particle is a composite of three particles from the next layer below in an infinite regression approaching Dirac point particles. The cosmon, an immensely heavy lower layer subquark, is the elementary particle. The world-atom, a tightly bound cosmon/anticosmon pair of zero relativistic total mass, arose from the nothing state in a quantum jump. Rapid decay of the pair launched the big bang and created the universe. PMID:16594084

  18. Significant Scoliosis Regression following Syringomyelia Decompression

    PubMed Central

    Mollano, Anthony V; Weinstein, Stuart L; Menezes, Arnold H

    2005-01-01

    We present the case of a 5-year-old boy presenting with a 54-degree scoliosis secondary to a Chiari I malformation with a holocord syringomyelia extending from C1 to T10. Neurosurgical treatment involved posterior fossa craniectomy with decompression, and partial C1 laminectomy. At follow-up 7 years later, at age 12, radiographs revealed only a 4-degree scoliosis, and follow-up MRI revealed a deflated syrinx. We report this case to reveal the most significant scoliosis regression seen in our experience that may occur in younger patients after neurosurgical syringomyelia decompression for Chiari I hindbrain herniation. PMID:16089074

  19. Regression analysis of growth responses to water depth in three wetland plant species

    PubMed Central

    Sorrell, Brian K.; Tanner, Chris C.; Brix, Hans

    2012-01-01

    Background and aims Plant species composition in wetlands and on lakeshores often shows dramatic zonation, which is frequently ascribed to differences in flooding tolerance. This study compared the growth responses to water depth of three species (Phormium tenax, Carex secta and Typha orientalis) differing in depth preferences in wetlands, using non-linear and quantile regression analyses to establish how flooding tolerance can explain field zonation. Methodology Plants were established for 8 months in outdoor cultures in waterlogged soil without standing water, and then randomly allocated to water depths from 0 to 0.5 m. Morphological and growth responses to depth were followed for 54 days before harvest, and then analysed by repeated-measures analysis of covariance, and non-linear and quantile regression analysis (QRA), to compare flooding tolerances. Principal results Growth responses to depth differed between the three species, and were non-linear. Phormium tenax growth decreased rapidly in standing water >0.25 m depth, C. secta growth increased initially with depth but then decreased at depths >0.30 m, accompanied by increased shoot height and decreased shoot density, and T. orientalis was unaffected by the 0- to 0.50-m depth range. In P. tenax the decrease in growth was associated with a decrease in the number of leaves produced per ramet and in C. secta the effect of water depth was greatest for the tallest shoots. Allocation patterns were unaffected by depth. Conclusions The responses are consistent with the principle that zonation in the field is primarily structured by competition in shallow water and by physiological flooding tolerance in deep water. Regression analyses, especially QRA, proved to be powerful tools in distinguishing genuine phenotypic responses to water depth from non-phenotypic variation due to size and developmental differences. PMID:23259044

  20. Cubication of Conservative Nonlinear Oscillators

    ERIC Educational Resources Information Center

    Belendez, Augusto; Alvarez, Mariela L.; Fernandez, Elena; Pascual, Immaculada

    2009-01-01

    A cubication procedure of the nonlinear differential equation for conservative nonlinear oscillators is analysed and discussed. This scheme is based on the Chebyshev series expansion of the restoring force, and this allows us to approximate the original nonlinear differential equation by a Duffing equation in which the coefficients for the linear…

  1. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Φθ*, where Φ is an interpolation matrix and θ* is a stochastic vector of parameters. Vector θ* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Φθ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Φθ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate θ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Φθ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is

  2. An Investigation of Sleep Characteristics, EEG Abnormalities and Epilepsy in Developmentally Regressed and Non-Regressed Children with Autism

    ERIC Educational Resources Information Center

    Giannotti, Flavia; Cortesi, Flavia; Cerquiglini, Antonella; Miraglia, Daniela; Vagnoni, Cristina; Sebastiani, Teresa; Bernabei, Paola

    2008-01-01

    This study investigated sleep of children with autism and developmental regression and the possible relationship with epilepsy and epileptiform abnormalities. Participants were 104 children with autism (70 non-regressed, 34 regressed) and 162 typically developing children (TD). Results suggested that the regressed group had higher incidence of…

  3. Probabilistic seismic demand analysis of nonlinear structures

    NASA Astrophysics Data System (ADS)

    Shome, Nilesh

    Recent earthquakes in California have initiated improvements in current design philosophy, and at present the civil engineering community is working towards the development of performance-based earthquake engineering of structures. The objective of this study is to develop efficient but accurate procedures for probabilistic analysis of the nonlinear seismic behavior of structures. The proposed procedures help the near-term development of seismic-building assessments, which require an estimation of seismic demand at a given intensity level. We also develop procedures to estimate the probability of exceedance of any specified nonlinear response level due to future ground motions at a specific site. This is referred to as Probabilistic Seismic Demand Analysis (PSDA). The latter procedure prepares the way for the next-stage development of seismic assessment that considers the uncertainties in nonlinear response and capacity. The proposed procedures require structure-specific nonlinear analyses for a relatively small set of recorded accelerograms and (site-specific or USGS-map-like) seismic hazard analyses. We have addressed some of the important issues of nonlinear seismic demand analysis, such as the selection of records for structural analysis, the number of records to be used, and the scaling of records. Initially these issues are studied through nonlinear analysis of structures for a number of magnitude-distance bins of records. Subsequently we introduce regression analysis of response results against spectral acceleration, magnitude, duration, etc., which helps to resolve these issues more systematically. We illustrate the demand-hazard calculations through two major example problems: a 5-story and a 20-story SMRF building. Several simple, but quite accurate closed-form solutions have also been proposed to expedite the demand-hazard calculations. We find that vector-valued (e.g., 2-D) PSDA estimates demand hazard more accurately. This procedure, however, requires information about 2

  4. Shape regression for vertebra fracture quantification

    NASA Astrophysics Data System (ADS)

    Lund, Michael Tillge; de Bruijne, Marleen; Tanko, Laszlo B.; Nielsen, Mads

    2005-04-01

    Accurate and reliable identification and quantification of vertebral fractures constitute a challenge both in clinical trials and in diagnosis of osteoporosis. Various efforts have been made to develop reliable, objective, and reproducible methods for assessing vertebral fractures, but at present there is no consensus concerning a universally accepted diagnostic definition of vertebral fractures. In this project we want to investigate whether or not it is possible to accurately reconstruct the shape of a normal vertebra, using a neighbouring vertebra as prior information. The reconstructed shape can then be used to develop a novel vertebra fracture measure, by comparing the segmented vertebra shape with its reconstructed normal shape. The vertebrae in lateral x-rays of the lumbar spine were manually annotated by a medical expert. With this dataset we built a shape model, with equidistant point distribution between the four corner points. Based on the shape model, a multiple linear regression model of a normal vertebra shape was developed for each dataset using leave-one-out cross-validation. The reconstructed shape was calculated for each dataset using these regression models. The average prediction error for the annotated shape was on average 3%.

  5. A Gibbs sampler for multivariate linear regression

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2016-04-01

    Kelly described an efficient algorithm, using Gibbs sampling, for performing linear regression in the fairly general case where non-zero measurement errors exist for both the covariates and response variables, where these measurements may be correlated (for the same data point), where the response variable is affected by intrinsic scatter in addition to measurement error, and where the prior distribution of covariates is modelled by a flexible mixture of Gaussians rather than assumed to be uniform. Here, I extend the Kelly algorithm in two ways. First, the procedure is generalized to the case of multiple response variables. Secondly, I describe how to model the prior distribution of covariates using a Dirichlet process, which can be thought of as a Gaussian mixture where the number of mixture components is learned from the data. I present an example of multivariate regression using the extended algorithm, namely fitting scaling relations of the gas mass, temperature, and luminosity of dynamically relaxed galaxy clusters as a function of their mass and redshift. An implementation of the Gibbs sampler in the R language, called LRGS, is provided.
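
    For the flavor of the approach, below is a minimal Gibbs sampler for ordinary Bayesian linear regression with conjugate normal and inverse-gamma conditionals; it deliberately omits the covariate measurement errors, intrinsic scatter, multiple responses, and Dirichlet-process prior that the extended (LRGS) algorithm handles.

      import numpy as np

      rng = np.random.default_rng(10)
      n, p = 200, 2
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      beta_true, sigma_true = np.array([1.0, 2.0]), 0.5
      y = X @ beta_true + rng.normal(scale=sigma_true, size=n)

      # Priors: beta ~ N(0, 100 I),  sigma^2 ~ InvGamma(a0, b0)
      V0_inv = np.eye(p) / 100.0
      a0, b0 = 1.0, 1.0

      beta, sigma2, draws = np.zeros(p), 1.0, []
      for it in range(3000):
          # beta | sigma^2, y ~ N(Vn X'y/sigma^2, Vn),  Vn = (X'X/sigma^2 + V0^-1)^-1
          Vn = np.linalg.inv(X.T @ X / sigma2 + V0_inv)
          beta = rng.multivariate_normal(Vn @ (X.T @ y / sigma2), Vn)
          # sigma^2 | beta, y ~ InvGamma(a0 + n/2, b0 + ||y - X beta||^2 / 2)
          resid = y - X @ beta
          sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))
          if it >= 1000:                          # discard burn-in draws
              draws.append(np.append(beta, np.sqrt(sigma2)))

      print(np.mean(draws, axis=0))               # posterior means of (intercept, slope, sigma)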

  6. Supporting Regularized Logistic Regression Privately and Efficiently

    PubMed Central

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738

  7. Regression Analysis Of Zernike Polynomials Part II

    NASA Astrophysics Data System (ADS)

    Grey, Louis D.

    1989-01-01

    In an earlier paper, entitled "Regression Analysis of Zernike Polynomials" (Proceedings of SPIE, Vol. 18, pp. 392-398), the least-squares fitting process of Zernike polynomials was examined from the point of view of linear statistical regression theory. Among the topics discussed were measures for determining how good the fit was, tests for the underlying assumptions of normality and constant variance, the treatment of outliers, the analysis of residuals, and the computation of confidence intervals for the coefficients. The present paper is a continuation of the earlier paper and concerns applications of relatively new advances in certain areas of statistical theory made possible by the advent of the high-speed computer. Among these are: 1. Jackknife - a technique for improving the accuracy of any statistical estimate. 2. Bootstrap - increasing the accuracy of an estimate by generating new samples of data from a given data set. 3. Cross-validation - the division of a data set into two halves, the first half of which is used to fit the model and the second half to see how well the fitted model predicts the data. The exposition is mainly by examples.
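
    As an example of the resampling ideas listed above, the sketch below computes case-resampling bootstrap confidence intervals for least-squares coefficients of a toy one-dimensional fit loosely standing in for a Zernike expansion; the basis, noise level, and number of resamples are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(14)
      n = 80
      x = rng.uniform(-1, 1, n)
      wavefront = 0.3 + 1.2 * x + 0.7 * (2 * x**2 - 1) + rng.normal(scale=0.05, size=n)
      X = np.column_stack([np.ones(n), x, 2 * x**2 - 1])   # toy polynomial basis

      beta_hat = np.linalg.lstsq(X, wavefront, rcond=None)[0]

      # Case-resampling bootstrap: refit on resampled rows to get coefficient intervals.
      boot = np.array([np.linalg.lstsq(X[idx], wavefront[idx], rcond=None)[0]
                       for idx in (rng.integers(0, n, n) for _ in range(2000))])
      lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
      for b, l, h in zip(beta_hat, lo, hi):
          print(f"{b:.3f}  [{l:.3f}, {h:.3f}]")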

  8. Supporting Regularized Logistic Regression Privately and Efficiently.

    PubMed

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738

  9. Supporting Regularized Logistic Regression Privately and Efficiently.

    PubMed

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  10. Jackknife bias reduction for polychotomous logistic regression.

    PubMed

    Bull, S B; Greenwood, C M; Hauck, W W

    1997-03-15

    Despite theoretical and empirical evidence that the usual MLEs can be misleading in finite samples and some evidence that bias reduced estimates are less biased and more efficient, they have not seen a wide application in practice. One can obtain bias reduced estimates by jackknife methods, with or without full iteration, or by use of higher order terms in a Taylor series expansion of the log-likelihood to approximate asymptotic bias. We provide details of these methods for polychotomous logistic regression with a nominal categorical response. We conducted a Monte Carlo comparison of the jackknife and Taylor series estimates in moderate sample sizes in a general logistic regression setting, to investigate dichotomous and trichotomous responses and a mixture of correlated and uncorrelated binary and normal covariates. We found an approximate two-step jackknife and the Taylor series methods useful when the ratio of the number of observations to the number of parameters is greater than 15, but we cannot recommend the two-step and the fully iterated jackknife estimates when this ratio is less than 20, especially when there are large effects, binary covariates, or multicollinearity in the covariates.
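
    A small sketch of the fully iterated jackknife bias correction, applied here to binary (rather than polychotomous) logistic regression coefficients from statsmodels MLE fits for brevity; the data are simulated, and the paper's two-step jackknife and Taylor-series variants are not reproduced.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      n = 120
      X = sm.add_constant(rng.normal(size=(n, 2)))
      p = 1 / (1 + np.exp(-(X @ np.array([0.5, 1.0, -1.5]))))
      y = rng.binomial(1, p)

      theta_full = sm.Logit(y, X).fit(disp=0).params

      # Leave-one-out refits, then the standard jackknife bias-corrected estimate:
      # theta_jack = n*theta_hat - (n-1)*mean(theta_(-i))
      loo = np.array([sm.Logit(np.delete(y, i), np.delete(X, i, axis=0)).fit(disp=0).params
                      for i in range(n)])
      theta_jack = n * theta_full - (n - 1) * loo.mean(axis=0)

      print("MLE:      ", theta_full)
      print("jackknife:", theta_jack)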

  11. Regression Models For Saffron Yields in Iran

    NASA Astrophysics Data System (ADS)

    Sanaeinejad, S. H.; Hosseini, S. N.

    Saffron is an important crop, in both social and economic terms, in Khorassan Province (northeast of Iran). In this research we tried to evaluate trends in saffron yield in recent years and to study the relationship between saffron yield and climate change. A regression analysis was used to predict saffron yield based on 20 years of yield data in Birjand, Ghaen and Ferdows cities. Climatological data for the same periods were provided by the database of the Khorassan Climatology Center. The climatological data included temperature, rainfall, relative humidity and sunshine hours for Model I, and temperature and rainfall for Model II. The results showed that the coefficients of determination for Birjand, Ferdows and Ghaen for Model I were 0.69, 0.50 and 0.81, respectively. The coefficients of determination for the same cities for Model II were 0.53, 0.50 and 0.72, respectively. Multiple regression analysis indicated that among the weather variables, temperature was the key parameter for the variation of saffron yield. It was concluded that increasing spring temperature was the main cause of the decline in saffron yield during recent years across the province. Finally, the yield trend was predicted for the last 5 years using time series analysis.

  12. HOS network-based classification of power quality events via regression algorithms

    NASA Astrophysics Data System (ADS)

    Palomares Salas, José Carlos; González de la Rosa, Juan José; Sierra Fernández, José María; Pérez, Agustín Agüera

    2015-12-01

    This work compares seven regression algorithms implemented in artificial neural networks (ANNs) supported by 14 power-quality features based on higher-order statistics. Combining time- and frequency-domain estimators to deal with non-stationary measurement sequences, the final goal of the system is implementation in the future smart grid to guarantee compatibility among all connected equipment. The principal results are based on spectral kurtosis measurements, which adapt easily to the impulsive nature of power quality events. These results verify that the proposed technique is capable of offering interesting results for power quality (PQ) disturbance classification. The best results are obtained using radial basis networks, generalized regression, and multilayer perceptrons, mainly due to the non-linear nature of the data.

  13. Multilogistic regression by means of evolutionary product-unit neural networks.

    PubMed

    Hervás-Martínez, C; Martínez-Estudillo, F J; Carbonero-Ruz, M

    2008-09-01

    We propose a multilogistic regression model based on the combination of linear and product-unit models, where the product-unit nonlinear functions are constructed with the product of the inputs raised to arbitrary powers. The estimation of the coefficients of the model is carried out in two phases. First, the number of product-unit basis functions and the exponents' vector are determined by means of an evolutionary neural network algorithm. Afterwards, a standard maximum likelihood optimization method determines the rest of the coefficients in the new space given by the initial variables and the product-unit basis functions previously estimated. We compare the performance of our approach with the logistic regression built on the initial variables and several learning classification techniques. The statistical test carried out on twelve benchmark datasets shows that the proposed model is competitive in terms of the accuracy of the classifier.
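
    A rough sketch of the two-phase idea under simplifying assumptions: random exponents stand in for the evolutionary search of the first phase, and a binary response replaces the multi-class case; only the second-phase maximum-likelihood fit on the augmented covariates follows the description above directly.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n, p, n_units = 300, 2, 3
      X = rng.uniform(0.1, 2.0, size=(n, p))                 # positive inputs for product units
      y = (X[:, 0] ** 1.5 * X[:, 1] ** -0.7 > 1.0).astype(int)

      # Product-unit basis functions: products of the inputs raised to real-valued powers.
      # The paper estimates the exponents with an evolutionary neural network algorithm;
      # random exponents stand in for that search here (illustrative assumption).
      W = rng.uniform(-2.0, 2.0, size=(n_units, p))
      Z = np.exp(np.log(X) @ W.T)                            # prod_k x_k^{w_jk}

      # Second phase: maximum-likelihood logistic fit on the initial variables plus the
      # product-unit covariates.
      model = LogisticRegression(max_iter=2000)
      print(cross_val_score(model, np.hstack([X, Z]), y, cv=5).mean())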

  14. Biological Parametric Mapping Accounting for Random Regressors with Regression Calibration and Model II Regression

    PubMed Central

    Yang, Xue; Lauzon, Carolyn B.; Crainiceanu, Ciprian; Caffo, Brian; Resnick, Susan M.; Landman, Bennett A.

    2012-01-01

    Massively univariate regression and inference in the form of statistical parametric mapping have transformed the way in which multi-dimensional imaging data are studied. In functional and structural neuroimaging, the de facto standard “design matrix”-based general linear regression model and its multi-level cousins have enabled investigation of the biological basis of the human brain. With modern study designs, it is possible to acquire multi-modal three-dimensional assessments of the same individuals — e.g., structural, functional and quantitative magnetic resonance imaging, alongside functional and ligand binding maps with positron emission tomography. Largely, current statistical methods in the imaging community assume that the regressors are non-random. For more realistic multi-parametric assessment (e.g., voxel-wise modeling), distributional consideration of all observations is appropriate. Herein, we discuss two unified regression and inference approaches, model II regression and regression calibration, for use in massively univariate inference with imaging data. These methods use the design matrix paradigm and account for both random and non-random imaging regressors. We characterize these methods in simulation and illustrate their use on an empirical dataset. Both methods have been made readily available as a toolbox plug-in for the SPM software. PMID:22609453

  15. Photonic single nonlinear-delay dynamical node for information processing

    NASA Astrophysics Data System (ADS)

    Ortín, Silvia; San-Martín, Daniel; Pesquera, Luis; Gutiérrez, José Manuel

    2012-06-01

    An electro-optical system with a delay loop based on semiconductor lasers is investigated for information processing by performing numerical simulations. This system can replace a complex network of many nonlinear elements for the implementation of Reservoir Computing. We show that a single nonlinear-delay dynamical system has the basic properties to perform as reservoir: short-term memory and separation property. The computing performance of this system is evaluated for two prediction tasks: Lorenz chaotic time series and nonlinear auto-regressive moving average (NARMA) model. We sweep the parameters of the system to find the best performance. The results achieved for the Lorenz and the NARMA-10 tasks are comparable to those obtained by other machine learning methods.
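
    For reference, the NARMA-10 benchmark mentioned above can be generated as follows (standard formulation of the task); the reservoir itself, i.e. the delay-based photonic node studied in the paper, is not modelled here.

      import numpy as np

      def narma10(length, seed=0):
          """Generate the NARMA-10 benchmark series driven by uniform input u in [0, 0.5]."""
          rng = np.random.default_rng(seed)
          u = rng.uniform(0.0, 0.5, size=length)
          y = np.zeros(length)
          for t in range(9, length - 1):
              y[t + 1] = (0.3 * y[t]
                          + 0.05 * y[t] * y[t - 9:t + 1].sum()   # sum over the last 10 outputs
                          + 1.5 * u[t - 9] * u[t]
                          + 0.1)
          return u, y

      u, y = narma10(2000)
      # A reservoir would be trained to map u(t) to y(t); normalized mean squared error
      # on held-out data is the usual performance score for this task.
      print(y[-5:])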

  16. Reduced-size kernel models for nonlinear hybrid system identification.

    PubMed

    Le, Van Luong; Bloch, Gérard; Lauer, Fabien

    2011-12-01

    This brief paper focuses on the identification of nonlinear hybrid dynamical systems, i.e., systems switching between multiple nonlinear dynamical behaviors. Thus the aim is to learn an ensemble of submodels from a single set of input-output data in a regression setting with no prior knowledge on the grouping of the data points into similar behaviors. To be able to approximate arbitrary nonlinearities, kernel submodels are considered. However, in order to maintain efficiency when applying the method to large data sets, a preprocessing step is required in order to fix the submodel sizes and limit the number of optimization variables. This brief paper proposes four approaches, respectively inspired by the fixed-size least-squares support vector machines, the feature vector selection method, the kernel principal component regression and a modification of the latter, in order to deal with this issue and build sparse kernel submodels. These are compared in numerical experiments, which show that the proposed approach achieves the simultaneous classification of data points and approximation of the nonlinear behaviors in an efficient and accurate manner.

  17. Nonlinear metamaterials for holography.

    PubMed

    Almeida, Euclides; Bitton, Ora; Prior, Yehiam

    2016-01-01

    A hologram is an optical element storing phase and possibly amplitude information, enabling the reconstruction of a three-dimensional image of an object by illumination and scattering of a coherent beam of light; the image is generated at the same wavelength as the input laser beam. In recent years, it was shown that information can be stored in nanometric antennas, giving rise to ultrathin components. Here we demonstrate nonlinear multilayer metamaterial holograms. A background-free image is formed at a new frequency: the third harmonic of the illuminating beam. Using e-beam lithography of multilayer plasmonic nanoantennas, we fabricate polarization-sensitive nonlinear elements such as blazed gratings, lenses and other computer-generated holograms. These holograms are analysed and prospects for future device applications are discussed. PMID:27545581

  18. Nonlinear metamaterials for holography

    NASA Astrophysics Data System (ADS)

    Almeida, Euclides; Bitton, Ora; Prior, Yehiam

    2016-08-01

    A hologram is an optical element storing phase and possibly amplitude information, enabling the reconstruction of a three-dimensional image of an object by illumination and scattering of a coherent beam of light; the image is generated at the same wavelength as the input laser beam. In recent years, it was shown that information can be stored in nanometric antennas, giving rise to ultrathin components. Here we demonstrate nonlinear multilayer metamaterial holograms. A background-free image is formed at a new frequency: the third harmonic of the illuminating beam. Using e-beam lithography of multilayer plasmonic nanoantennas, we fabricate polarization-sensitive nonlinear elements such as blazed gratings, lenses and other computer-generated holograms. These holograms are analysed and prospects for future device applications are discussed.

  19. Nonlinear Photonics 2014: introduction.

    PubMed

    Akhmediev, N; Kartashov, Yaroslav

    2015-01-12

    The International Conference "Nonlinear Photonics 2014" took place in Barcelona, Spain, on July 27-31, 2014. It was part of the Advanced Photonics Congress, which is becoming a traditional notable event in the world of photonics. The current focus issue of Optics Express contains contributions from the participants of the Conference and the Congress. The articles in this focus issue by no means represent the total number of congress contributions (around 400), but they do demonstrate the wide range of topics covered at the event. The next conference of this series is to be held in 2016 in Australia, which is home to many researchers working in the field of photonics in general and nonlinear photonics in particular.

  20. Nonlinear metamaterials for holography

    PubMed Central

    Almeida, Euclides; Bitton, Ora

    2016-01-01

    A hologram is an optical element storing phase and possibly amplitude information enabling the reconstruction of a three-dimensional image of an object by illumination and scattering of a coherent beam of light, and the image is generated at the same wavelength as the input laser beam. In recent years, it was shown that information can be stored in nanometric antennas giving rise to ultrathin components. Here we demonstrate nonlinear multilayer metamaterial holograms. A background free image is formed at a new frequency—the third harmonic of the illuminating beam. Using e-beam lithography of multilayer plasmonic nanoantennas, we fabricate polarization-sensitive nonlinear elements such as blazed gratings, lenses and other computer-generated holograms. These holograms are analysed and prospects for future device applications are discussed. PMID:27545581

  1. Nonlinear differential equations

    SciTech Connect

    Dresner, L.

    1988-01-01

    This report is the text of a graduate course on nonlinear differential equations given by the author at the University of Wisconsin-Madison during the summer of 1987. The topics covered are: direction fields of first-order differential equations; the Lie (group) theory of ordinary differential equations; similarity solutions of second-order partial differential equations; maximum principles and differential inequalities; monotone operators and iteration; complementary variational principles; and stability of numerical methods. The report should be of interest to graduate students, faculty, and practicing scientists and engineers. No prior knowledge is required beyond a good working knowledge of the calculus. The emphasis is on practical results. Most of the illustrative examples are taken from the fields of nonlinear diffusion, heat and mass transfer, applied superconductivity, and helium cryogenics.

  2. Nonlinear terahertz superconducting plasmonics

    SciTech Connect

    Wu, Jingbo; Liang, Lanju; Jin, Biaobing E-mail: tonouchi@ile.osaka-u.ac.jp Kang, Lin; Xu, Weiwei; Chen, Jian; Wu, Peiheng E-mail: tonouchi@ile.osaka-u.ac.jp; Zhang, Caihong; Kawayama, Iwao; Murakami, Hironaru; Tonouchi, Masayoshi E-mail: tonouchi@ile.osaka-u.ac.jp; Wang, Huabing

    2014-10-20

    Nonlinear terahertz (THz) transmission through subwavelength hole array in superconducting niobium nitride (NbN) film is experimentally investigated using intense THz pulses. The good agreement between the measurement and numerical simulations indicates that the field strength dependent transmission mainly arises from the nonlinear properties of the superconducting film. Under weak THz pulses, the transmission peak can be tuned over a frequency range of 145 GHz which is attributed to the high kinetic inductance of 50 nm-thick NbN film. Utilizing the THz pump-THz probe spectroscopy, we study the dynamic process of transmission spectra and demonstrate that the transition time of such superconducting plasmonic device is within 5 ps.

  3. Nonlinear chiral transport phenomena

    NASA Astrophysics Data System (ADS)

    Chen, Jiunn-Wei; Ishii, Takeaki; Pu, Shi; Yamamoto, Naoki

    2016-06-01

    We study the nonlinear responses of relativistic chiral matter to the external fields such as the electric field E , gradients of temperature and chemical potential, ∇T and ∇μ . Using the kinetic theory with Berry curvature corrections under the relaxation time approximation, we compute the transport coefficients of possible new electric currents that are forbidden in usual chirally symmetric matter but are allowed in chirally asymmetric matter by parity. In particular, we find a new type of electric current proportional to ∇μ ×E due to the interplay between the effects of the Berry curvature and collisions. We also derive an analog of the "Wiedemann-Franz" law specific for anomalous nonlinear transport in relativistic chiral matter.

  4. Multicollinearity and correlation among local regression coefficients in geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Wheeler, David; Tiefelsdorf, Michael

    2005-06-01

    Present methodological research on geographically weighted regression (GWR) focuses primarily on extensions of the basic GWR model, while ignoring well-established diagnostic tests commonly used in standard global regression analysis. This paper investigates multicollinearity issues surrounding the local GWR coefficients at a single location and the overall correlation between GWR coefficients associated with two different exogenous variables. Results indicate that the local regression coefficients are potentially collinear even if the underlying exogenous variables in the data-generating process are uncorrelated (see the sketch below). Based on these findings, applied GWR research should exercise caution in substantively interpreting the spatial patterns of local GWR coefficients. An empirical disease-mapping example is used to motivate the GWR multicollinearity problem. Controlled experiments are performed to systematically explore coefficient dependency issues in GWR. These experiments specify global models that use eigenvectors from a spatial link matrix as exogenous variables.
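
    A small sketch of the local estimation step and the coefficient-correlation diagnostic discussed above, using a Gaussian kernel and synthetic uncorrelated covariates; the bandwidth and data-generating values are illustrative assumptions, not those of the paper's experiments.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 200
      coords = rng.uniform(0, 1, size=(n, 2))                 # point locations
      x1, x2 = rng.normal(size=n), rng.normal(size=n)         # uncorrelated exogenous variables
      y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(scale=0.5, size=n)
      X = np.column_stack([np.ones(n), x1, x2])

      def gwr_coefficients(bandwidth=0.2):
          """Local coefficients from geographically weighted least squares (Gaussian kernel)."""
          betas = np.empty((n, 3))
          for i in range(n):
              d = np.linalg.norm(coords - coords[i], axis=1)
              w = np.exp(-0.5 * (d / bandwidth) ** 2)
              XtW = X.T * w                                   # weight each observation
              betas[i] = np.linalg.solve(XtW @ X, XtW @ y)
          return betas

      betas = gwr_coefficients()
      # Even with uncorrelated x1 and x2, the two local coefficient surfaces can be
      # correlated, which is the dependency the paper warns about when mapping GWR results.
      print(np.corrcoef(betas[:, 1], betas[:, 2])[0, 1])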

  5. Optothermal nonlinearity of silica aerogel

    NASA Astrophysics Data System (ADS)

    Braidotti, Maria Chiara; Gentilini, Silvia; Fleming, Adam; Samuels, Michiel C.; Di Falco, Andrea; Conti, Claudio

    2016-07-01

    We report on the characterization of the thermal optical nonlinearity of silica aerogel, obtained by the z-scan technique. The results show that typical silica aerogels have a nonlinear optical coefficient similar to that of glass (≃10⁻¹² m²/W), with negligible nonlinear optical absorption. The nonlinear coefficient can be increased to values in the range of 10⁻¹⁰ m²/W by embedding an absorbing dye in the aerogel. This value is one order of magnitude higher than that observed in the pure dye and in typical highly nonlinear materials like liquid crystals.

  6. Estimation of Reservoir Porosity and Water Saturation Based on Seismic Attributes Using Support Vector Regression Approach

    NASA Astrophysics Data System (ADS)

    Na'imi, S. R.; Shadizadeh, S. R.; Riahi, M. A.; Mirzakhanian, M.

    2014-08-01

    Porosity and fluid saturation distributions are crucial properties of hydrocarbon reservoirs and are involved in almost all calculations related to reservoirs and production. True measurements of these parameters, derived from laboratory analysis, are only available at isolated localities of a reservoir and are also expensive and time-consuming. Therefore, methodologies that are simple, inexpensive, and robust are needed. The support vector regression approach is a relatively novel method for functional estimation in regression problems. Contrary to conventional neural networks, which minimize the error on the training data through the usual Empirical Risk Minimization principle, support vector regression minimizes an upper bound on the expected risk by means of the Structural Risk Minimization principle. This difference, which is central to statistical learning theory, gives the approach greater ability in generalization tasks. In this study, we first extract appropriate seismic attributes that have an underlying dependency with reservoir porosity and water saturation. Subsequently, a non-linear support vector regression algorithm is utilized to obtain a quantitative formulation between the porosity and water saturation parameters and the selected seismic attributes. For an undrilled reservoir, in which there are insufficient core and log data, it is still possible to characterize the hydrocarbon-bearing formation by means of this method.
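
    A minimal sketch of the regression step with scikit-learn's epsilon-SVR on synthetic stand-ins for the selected seismic attributes; the attribute construction, kernel choice and hyperparameters are illustrative assumptions, not the study's settings.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n = 400
      attrs = rng.normal(size=(n, 4))                         # stand-ins for selected seismic attributes
      porosity = (0.15 + 0.05 * np.tanh(attrs[:, 0]) - 0.03 * attrs[:, 1] ** 2
                  + 0.02 * attrs[:, 2] + rng.normal(scale=0.01, size=n))

      # epsilon-SVR with an RBF kernel: minimizes a bound on the expected risk rather than
      # the empirical training error alone, the point the abstract emphasizes.
      svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
      print(cross_val_score(svr, attrs, porosity, cv=5, scoring="r2").mean())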

  7. Longitudinal clinical score prediction in Alzheimer's disease with soft-split sparse regression based random forest.

    PubMed

    Huang, Lei; Jin, Yan; Gao, Yaozong; Thung, Kim-Han; Shen, Dinggang

    2016-10-01

    Alzheimer's disease (AD) is an irreversible neurodegenerative disease that affects a large population in the world. Cognitive scores at multiple time points can be reliably used to evaluate the progression of the disease clinically. In recent studies, machine learning techniques have shown promising results for the prediction of AD clinical scores. However, there are multiple limitations in current models, such as the linearity assumption and the exclusion of missing data. Here, we present a nonlinear supervised sparse regression-based random forest (RF) framework to predict a variety of longitudinal AD clinical scores. Furthermore, we propose a soft-split technique to assign probabilistic paths to a test sample in RF for more accurate predictions. In order to benefit from the longitudinal scores in the study, unlike previous studies that often removed subjects with missing scores, we first estimate those missing scores with our proposed soft-split sparse regression-based RF and then utilize the estimated longitudinal scores at all previous time points to predict the scores at the next time point. The experimental results demonstrate that our proposed method is superior to the traditional RF and outperforms other state-of-the-art regression models. Our method can also be extended as a general regression framework to predict other disease scores. PMID:27500865

  8. Is this scaling nonlinear?

    PubMed

    Leitão, J C; Miotto, J M; Gerlach, M; Altmann, E G

    2016-07-01

    One of the most celebrated findings in complex systems in the last decade is that different indexes y (e.g. patents) scale nonlinearly with the population x of the cities in which they appear, i.e. y ∼ x^β, β ≠ 1. More recently, the generality of this finding has been questioned in studies that used new databases and different definitions of city boundaries. In this paper, we investigate the existence of nonlinear scaling, using a probabilistic framework in which fluctuations are accounted for explicitly. In particular, we show that this allows not only to (i) estimate β and confidence intervals, but also to (ii) quantify the evidence in favour of β ≠ 1 and (iii) test the hypothesis that the observations are compatible with the nonlinear scaling. We employ this framework to compare five different models to 15 different datasets and we find that the answers to points (i)-(iii) crucially depend on the fluctuations contained in the data, on how they are modelled, and on the fact that the city sizes are heavy-tailed distributed. PMID:27493764
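
    As a point of comparison for the probabilistic framework above, the naive log-log least-squares estimate of β (with a normal-approximation interval) can be computed as follows on synthetic city data; the paper's contribution is precisely to go beyond this kind of fit by modelling the fluctuations explicitly.

      import numpy as np

      rng = np.random.default_rng(4)
      x = 10 ** rng.uniform(4, 7, size=300)                   # city populations (heavy-tailed range)
      beta_true = 1.15
      y = 2e-4 * x ** beta_true * rng.lognormal(sigma=0.4, size=300)   # index, e.g. patents

      # Point estimate of beta from ordinary least squares on the log-log scale.
      slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
      resid = np.log(y) - (slope * np.log(x) + intercept)
      se = resid.std(ddof=2) / (np.log(x).std() * np.sqrt(len(x)))     # standard error of the slope
      print(f"beta = {slope:.3f} +/- {1.96 * se:.3f}")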

  9. Is this scaling nonlinear?

    PubMed Central

    2016-01-01

    One of the most celebrated findings in complex systems in the last decade is that different indexes y (e.g. patents) scale nonlinearly with the population x of the cities in which they appear, i.e. y ∼ x^β, β ≠ 1. More recently, the generality of this finding has been questioned in studies that used new databases and different definitions of city boundaries. In this paper, we investigate the existence of nonlinear scaling, using a probabilistic framework in which fluctuations are accounted for explicitly. In particular, we show that this allows not only to (i) estimate β and confidence intervals, but also to (ii) quantify the evidence in favour of β ≠ 1 and (iii) test the hypothesis that the observations are compatible with the nonlinear scaling. We employ this framework to compare five different models to 15 different datasets and we find that the answers to points (i)-(iii) crucially depend on the fluctuations contained in the data, on how they are modelled, and on the fact that the city sizes are heavy-tailed distributed. PMID:27493764

  10. Ultrafast Thermal Nonlinearity

    PubMed Central

    Khurgin, Jacob B.; Sun, Greg; Chen, Wei Ting; Tsai, Wei-Yi; Tsai, Din Ping

    2015-01-01

    Third order nonlinear optical phenomena explored in the last half century have been predicted to find wide range of applications in many walks of life, such as all-optical switching, routing, and others, yet this promise has not been fulfilled primarily because the strength of nonlinear effects is too low when they are to occur on the picosecond scale required in today’s signal processing applications. The strongest of the third-order nonlinearities, engendered by thermal effects, is considered to be too slow for the above applications. In this work we show that when optical fields are concentrated into the volumes on the scale of few tens of nanometers, the speed of the thermo-optical effects approaches picosecond scale. Such a sub-diffraction limit concentration of field can be accomplished with the use of plasmonic effects in metal nanoparticles impregnating the thermo-optic dielectric (e.g. amorphous Si) and leads to phase shifts sufficient for all optical switching on ultrafast scale. PMID:26644322

  11. Tissue non-linearity.

    PubMed

    Duck, F

    2010-01-01

    The propagation of acoustic waves is a fundamentally non-linear process, and only waves with infinitesimally small amplitudes may be described by linear expressions. In practice, all ultrasound propagation is associated with a progressive distortion in the acoustic waveform and the generation of frequency harmonics. At the frequencies and amplitudes used for medical diagnostic scanning, the waveform distortion can result in the formation of acoustic shocks, excess deposition of energy, and acoustic saturation. These effects occur most strongly when ultrasound propagates within liquids with comparatively low acoustic attenuation, such as water, amniotic fluid, or urine. Attenuation by soft tissues limits but does not extinguish these non-linear effects. Harmonics may be used to create tissue harmonic images. These offer improvements over conventional B-mode images in spatial resolution and, more significantly, in the suppression of acoustic clutter and side-lobe artefacts. The quantity B/A has promise as a parameter for tissue characterization, but methods for imaging B/A have shown only limited success. Standard methods for the prediction of tissue in-situ exposure from acoustic measurements in water, whether for regulatory purposes, for safety assessment, or for planning therapeutic regimes, may be in error because of unaccounted non-linear losses. Biological effects mechanisms are altered by finite-amplitude effects. PMID:20349813

  12. Nonlinearity without superluminality

    NASA Astrophysics Data System (ADS)

    Kent, Adrian

    2005-07-01

    Quantum theory is compatible with special relativity. In particular, though measurements on entangled systems are correlated in a way that cannot be reproduced by local hidden variables, they cannot be used for superluminal signaling. As Czachor, Gisin, and Polchinski pointed out, this is not generally true of general nonlinear modifications of the Schrödinger equation. Excluding superluminal signaling has thus been taken to rule out most nonlinear versions of quantum theory. The no-superluminal-signaling constraint has also been used for alternative derivations of the optimal fidelities attainable for imperfect quantum cloning and other operations. These results apply to theories satisfying the rule that their predictions for widely separated and slowly moving entangled systems can be approximated by nonrelativistic equations of motion with respect to a preferred time coordinate. This paper describes a natural way in which this rule might fail to hold. In particular, it is shown that quantum readout devices which display the values of localized pure states need not allow superluminal signaling, provided that the devices display the values of the states of entangled subsystems as defined in a nonstandard, although natural, way. It follows that any locally defined nonlinear evolution of pure states can be made consistent with Minkowski causality.

  13. Nonlinearity without superluminality

    SciTech Connect

    Kent, Adrian

    2005-07-15

    Quantum theory is compatible with special relativity. In particular, though measurements on entangled systems are correlated in a way that cannot be reproduced by local hidden variables, they cannot be used for superluminal signaling. As Czachor, Gisin, and Polchinski pointed out, this is not generally true of general nonlinear modifications of the Schroedinger equation. Excluding superluminal signaling has thus been taken to rule out most nonlinear versions of quantum theory. The no-superluminal-signaling constraint has also been used for alternative derivations of the optimal fidelities attainable for imperfect quantum cloning and other operations. These results apply to theories satisfying the rule that their predictions for widely separated and slowly moving entangled systems can be approximated by nonrelativistic equations of motion with respect to a preferred time coordinate. This paper describes a natural way in which this rule might fail to hold. In particular, it is shown that quantum readout devices which display the values of localized pure states need not allow superluminal signaling, provided that the devices display the values of the states of entangled subsystems as defined in a nonstandard, although natural, way. It follows that any locally defined nonlinear evolution of pure states can be made consistent with Minkowski causality.

  14. Nonlinear gyrokinetic equations

    SciTech Connect

    Dubin, D.H.E.; Krommes, J.A.; Oberman, C.; Lee, W.W.

    1983-03-01

    Nonlinear gyrokinetic equations are derived from a systematic Hamiltonian theory. The derivation employs Lie transforms and a noncanonical perturbation theory first used by Littlejohn for the simpler problem of asymptotically small gyroradius. For definiteness, we emphasize the limit of electrostatic fluctuations in slab geometry; however, there is a straightforward generalization to arbitrary field geometry and electromagnetic perturbations. An energy invariant for the nonlinear system is derived, and several of its limits are considered. The weak turbulence theory of the equations is examined. In particular, the wave kinetic equation of Galeev and Sagdeev is derived from an unsystematic truncation of the equations, implying that this equation fails to consider all gyrokinetic effects. The equations are simplified for the case of small but finite gyroradius and put in a form suitable for efficient computer simulation. Although it is possible to derive the Terry-Horton and Hasegawa-Mima equations as limiting cases of our theory, several new nonlinear terms absent from conventional theories appear and are discussed.

  15. Filamentation with nonlinear Bessel vortices.

    PubMed

    Jukna, V; Milián, C; Xie, C; Itina, T; Dudley, J; Courvoisier, F; Couairon, A

    2014-10-20

    We present a new type of ring-shaped filament characterized by stationary nonlinear high-order Bessel solutions to the laser beam propagation equation. Two different regimes are identified by direct numerical simulations of the nonlinear propagation of axicon-focused Gaussian beams carrying helicity in a Kerr medium with multiphoton absorption: the stable nonlinear propagation regime corresponds to a slow beam reshaping into one of the stationary nonlinear high-order Bessel solutions, called nonlinear Bessel vortices. The region of existence of nonlinear Bessel vortices is found semi-analytically. The influence of the Kerr nonlinearity and nonlinear losses on the beam shape is presented. Direct numerical simulations highlight the role of attractors played by nonlinear Bessel vortices in the stable propagation regime. Large input powers or small cone angles lead to the unstable propagation regime, where nonlinear Bessel vortices break up into a helical multiple-filament pattern or a more irregular structure. Nonlinear Bessel vortices are shown to be sufficiently intense to generate a ring-shaped filamentary ionized channel in the medium, which is foreseen as opening the way to novel applications in laser material processing of transparent dielectrics. PMID:25401574

  16. Research in nonlinear structural and solid mechanics

    NASA Technical Reports Server (NTRS)

    Mccomb, H. G., Jr. (Compiler); Noor, A. K. (Compiler)

    1980-01-01

    Nonlinear analysis of building structures and numerical solution of nonlinear algebraic equations and Newton's method are discussed. Other topics include: nonlinear interaction problems; solution procedures for nonlinear problems; crash dynamics and advanced nonlinear applications; material characterization, contact problems, and inelastic response; and formulation aspects and special software for nonlinear analysis.

  17. Tolerance bounds for log gamma regression models

    NASA Technical Reports Server (NTRS)

    Jones, R. A.; Scholz, F. W.; Ossiander, M.; Shorack, G. R.

    1985-01-01

    The present procedure for finding lower confidence bounds for the quantiles of Weibull populations, on the basis of the solution of a quadratic equation, is more accurate than current Monte Carlo tables and extends to any location-scale family. It is shown that this method is accurate for all members of the log gamma(K) family, where K = 1/2 to infinity, and works well for censored data, while also extending to regression data. An even more accurate procedure involving an approximation to the Lawless (1982) conditional procedure, with numerical integrations whose tables are independent of the data, is also presented. These methods are applied to the case of failure strengths of ceramic specimens from each of three billets of Si3N4, which have undergone flexural strength testing.

  18. Sparse brain network using penalized linear regression

    NASA Astrophysics Data System (ADS)

    Lee, Hyekyoung; Lee, Dong Soo; Kang, Hyejin; Kim, Boong-Nyun; Chung, Moo K.

    2011-03-01

    Sparse partial correlation is a useful connectivity measure for brain networks when it is difficult to compute the exact partial correlation in the small-n large-p setting. In this paper, we formulate the problem of estimating partial correlation as a sparse linear regression with a l1-norm penalty. The method is applied to brain network consisting of parcellated regions of interest (ROIs), which are obtained from FDG-PET images of the autism spectrum disorder (ASD) children and the pediatric control (PedCon) subjects. To validate the results, we check their reproducibilities of the obtained brain networks by the leave-one-out cross validation and compare the clustered structures derived from the brain networks of ASD and PedCon.
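
    A minimal sketch of the l1-penalized formulation (neighborhood-selection style) on synthetic data standing in for the ROI measures; the penalty level, the injected connections and the AND-rule symmetrization are illustrative assumptions of this sketch.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(5)
      n_subjects, n_rois = 30, 20                        # small-n large-p setting
      data = rng.normal(size=(n_subjects, n_rois))       # stand-in for ROI-level measures
      data[:, 1] += 0.8 * data[:, 0]                     # inject a couple of true connections
      data[:, 2] += 0.8 * data[:, 1]
      data -= data.mean(axis=0)

      # l1-penalized regression of each ROI on all the others; the nonzero coefficients
      # approximate a sparse partial-correlation (conditional dependence) structure.
      alpha = 0.1
      coefs = np.zeros((n_rois, n_rois))
      for i in range(n_rois):
          others = np.delete(np.arange(n_rois), i)
          coefs[i, others] = Lasso(alpha=alpha).fit(data[:, others], data[:, i]).coef_

      # Symmetrize with an AND rule: keep an edge only if both directed fits select it.
      network = (coefs != 0) & (coefs.T != 0)
      print(network.sum() // 2, "edges; edge 0-1 present:", bool(network[0, 1]))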

  19. Macrophages, dendritic cells, and regression of atherosclerosis.

    PubMed

    Feig, Jonathan E; Feig, Jessica L

    2012-01-01

    Atherosclerosis is the number one cause of death in the Western world. It results from the interaction between modified lipoproteins and cells such as macrophages, dendritic cells (DCs), T cells, and other cellular elements present in the arterial wall. This inflammatory process can ultimately lead to the development of complex lesions, or plaques, that protrude into the arterial lumen. Ultimately, plaque rupture and thrombosis can occur leading to the clinical complications of myocardial infarction or stroke. Although each of the cell types plays roles in the pathogenesis of atherosclerosis, the focus of this review will be primarily on the macrophages and DCs. The role of these two cell types in atherosclerosis is discussed, with a particular emphasis on their involvement in atherosclerosis regression.

  20. [Logistic regression against a divergent Bayesian network].

    PubMed

    Sánchez Trujillo, Noel Antonio

    2015-02-03

    This article is a discussion of two statistical tools used for prediction and causality assessment: logistic regression and Bayesian networks. Using data from a simulated example from a study assessing factors that might predict pulmonary emphysema (where fingertip pigmentation and smoking are considered), we posed the following questions. Is pigmentation a confounding, causal or predictive factor? Is there perhaps another factor, like smoking, that confounds? Is there a synergy between pigmentation and smoking? The results, in terms of prediction, are similar with the two techniques; regarding causation, differences arise. We conclude that, in decision-making, the combination of a statistical tool used with common sense and prior evidence that has taken years or even centuries to accumulate is better than the automatic and exclusive use of statistical resources.

  1. Estimates on compressed neural networks regression.

    PubMed

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing

    2015-03-01

    When the number of neural elements n of a neural network is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A, which does not need to satisfy the Restricted Isometry Property (RIP) condition. By applying probability inequalities and approximation properties of feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound on the excess error is given.
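
    A rough sketch of the idea under stated assumptions: random single-hidden-layer features play the role of the neural elements, and a plain Gaussian matrix serves as the compressed projection A; the paper's theoretical error bounds are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(6)
      m, n_hidden, k = 100, 500, 60                           # samples, neural elements, compressed size
      x = rng.uniform(-1, 1, size=(m, 1))
      y = np.sin(3 * x[:, 0]) + rng.normal(scale=0.1, size=m)

      # Random single-hidden-layer features: more units (500) than samples (100), the
      # overfitting regime the paper addresses.
      W, b = rng.normal(size=(1, n_hidden)), rng.normal(size=n_hidden)
      H = np.tanh(x @ W + b)

      # Compressed projection A (a plain Gaussian matrix; no RIP condition is assumed,
      # in line with the abstract), then least squares in the compressed domain.
      A = rng.normal(size=(n_hidden, k)) / np.sqrt(k)
      coef, *_ = np.linalg.lstsq(H @ A, y, rcond=None)
      y_hat = H @ A @ coef
      print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))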

  2. Cervical Cancer Regression Measured Using Weekly Magnetic Resonance Imaging During Fractionated Radiotherapy: Radiobiologic Modeling and Correlation With Tumor Hypoxia

    SciTech Connect

    Lim, Karen; Chan, Philip; Haider, Masoom; Cho, Young-Bin; Hill, Richard P.; Milosevic, Michael

    2008-01-01

    Purpose: To measure regression of cancer of the uterine cervix during external beam radiotherapy using magnetic resonance imaging, derive radiobiologic parameters from a mathematical model of tumor regression, and compare these parameters with the pretreatment measurements of tumor hypoxia. Methods and Materials: A total of 27 eligible patients undergoing external beam radiotherapy for cervical cancer underwent weekly magnetic resonance imaging scans. The tumor volume was assessed on each of these scans and the rate of regression plotted. A radiobiologic model was formulated to simulate the effect on tumor regression of the surviving proportion of cells after 2 Gy (SP2), the cell clearance constant (clearance of irreparably damaged cells from the tumor [Tc]), and accelerated repopulation. Nonlinear regression analysis was used to fit the radiobiologic model to the magnetic resonance imaging-derived tumor volumes and to derive the estimates of SP2 and Tc for each patient. These were compared to the pretreatment hypoxia measurements. Results: The initial tumor volume was 8-209 cm³. The relative reduction in volume during treatment was 0.02-0.79. The simulations using representative values of the independent biologic variables derived from published data showed SP2 and Tc to strongly influence the shape of the volume-response curves. Nonlinear regression analysis yielded a median SP2 of 0.71 and median Tc of 10 days. Tumors with a high SP2 >0.71 were significantly more hypoxic at diagnosis (p = 0.02). Conclusion: The results of our study have shown that cervical cancer regresses during external beam radiotherapy, although marked variability is present among patients and is influenced by underlying biologic processes, including cellular sensitivity to radiotherapy and proliferation. Better understanding of the biologic mechanisms might facilitate novel adaptive treatment strategies in future studies.

  3. Identification of nonlinear boundary effects using nonlinear normal modes

    NASA Astrophysics Data System (ADS)

    Ahmadian, Hamid; Zamani, Arash

    2009-08-01

    Local nonlinear effects due to micro-slip/slap introduced at the boundaries of structures have a dominant influence on their lower modal model. This paper studies these effects by experimentally observing the behavior of a clamped-free beam structure with local nonlinearities due to micro-slip at the clamped end. The structure is excited near one of its resonance frequencies, and the recorded responses are employed to identify the nonlinear effects at the boundary. The nonlinear response of the structure is defined using an amplitude-dependent nonlinear normal mode identified from measured responses. A new method for reconstructing the nonlinear normal mode is presented in this paper by relating the nonlinear normal mode to the displacement-dependent stiffness parameters of the clamped end using an eigensensitivity analysis. Solving the obtained equations yields equivalent stiffness models at different vibration amplitudes, and the corresponding nonlinear normal mode is identified. The approach yields nonlinear modes that are effective in predicting the dynamical behavior of the structure under different loading conditions. To evaluate the efficiency of the identified model, the structure is excited at higher excitation load levels than those employed in the identification procedure, and the observed responses are compared with the predictions of the model at the corresponding input force levels. The predictions are in good agreement with the observed behavior, indicating the success of the identification procedure in capturing the physics involved in the boundary local nonlinearities.

  4. Collaborative regression-based anatomical landmark detection

    NASA Astrophysics Data System (ADS)

    Gao, Yaozong; Shen, Dinggang

    2015-12-01

    Anatomical landmark detection plays an important role in medical image analysis, e.g. for registration, segmentation and quantitative analysis. Among the various existing methods for landmark detection, regression-based methods have recently attracted much attention due to their robustness and efficiency. In these methods, landmarks are localised through voting from all image voxels, which is completely different from the classification-based methods that use voxel-wise classification to detect landmarks. Despite their robustness, the accuracy of regression-based landmark detection methods is often limited due to (1) the inclusion of uninformative image voxels in the voting procedure, and (2) the lack of effective ways to incorporate inter-landmark spatial dependency into the detection step. In this paper, we propose a collaborative landmark detection framework to address these limitations. The concept of collaboration is reflected in two aspects. (1) Multi-resolution collaboration. A multi-resolution strategy is proposed to hierarchically localise landmarks by gradually excluding uninformative votes from faraway voxels. Moreover, for informative voxels near the landmark, a spherical sampling strategy is also designed at the training stage to improve their prediction accuracy. (2) Inter-landmark collaboration. A confidence-based landmark detection strategy is proposed to improve the detection accuracy of ‘difficult-to-detect’ landmarks by using spatial guidance from ‘easy-to-detect’ landmarks. To evaluate our method, we conducted experiments extensively on three datasets for detecting prostate landmarks and head & neck landmarks in computed tomography images, and also dental landmarks in cone beam computed tomography images. The results show the effectiveness of our collaborative landmark detection framework in improving landmark detection accuracy, compared to other state-of-the-art methods.

  5. Computing confidence intervals for standardized regression coefficients.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2013-12-01

    With fixed predictors, the standard method (Cohen, Cohen, West, & Aiken, 2003, p. 86; Harris, 2001, p. 80; Hays, 1994, p. 709) for computing confidence intervals (CIs) for standardized regression coefficients fails to account for the sampling variability of the criterion standard deviation. With random predictors, this method also fails to account for the sampling variability of the predictor standard deviations. Nevertheless, under some conditions the standard method will produce CIs with accurate coverage rates. To delineate these conditions, we used a Monte Carlo simulation to compute empirical CI coverage rates in samples drawn from 36 populations with a wide range of data characteristics. We also computed the empirical CI coverage rates for 4 alternative methods that have been discussed in the literature: noncentrality interval estimation, the delta method, the percentile bootstrap, and the bias-corrected and accelerated bootstrap. Our results showed that for many data-parameter configurations--for example, sample size, predictor correlations, coefficient of determination (R²), orientation of β with respect to the eigenvectors of the predictor correlation matrix R_X--the standard method produced coverage rates that were close to their expected values. However, when population R² was large and when β approached the last eigenvector of R_X, then the standard method coverage rates were frequently below the nominal rate (sometimes by a considerable amount). In these conditions, the delta method and the 2 bootstrap procedures were consistently accurate. Results using noncentrality interval estimation were inconsistent. In light of these findings, we recommend that researchers use the delta method to evaluate the sampling variability of standardized regression coefficients.
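
    A small sketch of the percentile-bootstrap alternative discussed above, on synthetic correlated predictors (the correlation structure and 2,000 resamples are arbitrary choices); re-standardizing within each resample is what propagates the variability of the standard deviations into the interval.

      import numpy as np

      rng = np.random.default_rng(7)
      n, p = 150, 3
      corr = np.array([[1.0, 0.4, 0.2], [0.4, 1.0, 0.3], [0.2, 0.3, 1.0]])
      X = rng.normal(size=(n, p)) @ np.linalg.cholesky(corr).T     # random, correlated predictors
      y = X @ np.array([0.5, 0.3, -0.2]) + rng.normal(size=n)

      def standardized_beta(Xs, ys):
          # Standardize predictors and criterion within the (re)sample, then solve OLS.
          Z = (Xs - Xs.mean(0)) / Xs.std(0)
          w = (ys - ys.mean()) / ys.std()
          return np.linalg.lstsq(Z, w, rcond=None)[0]

      # Percentile bootstrap over cases.
      boot = np.array([standardized_beta(X[idx], y[idx])
                       for idx in rng.integers(0, n, size=(2000, n))])
      lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
      print(np.round(standardized_beta(X, y), 3))
      print(np.round(lower, 3), np.round(upper, 3))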

  6. Logistic Regression Applied to Seismic Discrimination

    SciTech Connect

    BG Amindan; DN Hagedorn

    1998-10-08

    The usefulness of logistic discrimination was examined in an effort to learn how it performs in a regional seismic setting. Logistic discrimination provides an easily understood method, works with user-defined models and few assumptions about the population distributions, and handles both continuous and discrete data. Seismic event measurements from a data set compiled by Los Alamos National Laboratory (LANL) of Chinese events recorded at station WMQ were used in this demonstration study. PNNL applied logistic regression techniques to the data. All possible combinations of the Lg and Pg measurements were tried, and a best-fit logistic model was created. The best combination of Lg and Pg frequencies for predicting the source of a seismic event (earthquake or explosion) used Lg(3.0-6.0 Hz) and Pg(3.0-6.0 Hz) as the predictor variables. A cross-validation test showed that this model was able to correctly predict 99.7% of earthquakes and 98.0% of explosions in the given data set. Two other models were identified that used Pg and Lg measurements from the 1.5 to 3.0 Hz frequency range. Although these other models did a good job of correctly predicting the earthquakes, they were not as effective at predicting the explosions. Two possible biases were discovered that affect the predicted probabilities for each outcome. The first bias arises because this is a case-controlled study: the sampling fractions bias the probabilities calculated from the models. The second bias is caused by a change in the proportions for each event: if at a later date the proportions (a priori probabilities) of explosions versus earthquakes change, the predicted probability for an event will be biased. When using logistic regression, the user needs to be aware of these possible biases and the effect they will have on the predicted probabilities.
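
    A hedged sketch of logistic discrimination with a case-control intercept correction of the kind implied by the first bias above, on synthetic Lg/Pg-style measurements; the feature distributions and the assumed population prior pi1 are illustrative, not values from the LANL data set.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      # Synthetic amplitude measurements in the 3.0-6.0 Hz band: earthquakes (y=0), explosions (y=1).
      n_eq, n_ex = 300, 100
      lg = np.concatenate([rng.normal(1.0, 0.3, n_eq), rng.normal(0.4, 0.3, n_ex)])
      pg = np.concatenate([rng.normal(0.8, 0.3, n_eq), rng.normal(1.1, 0.3, n_ex)])
      y = np.concatenate([np.zeros(n_eq), np.ones(n_ex)])

      X = sm.add_constant(np.column_stack([lg, pg]))
      fit = sm.Logit(y, X).fit(disp=0)

      # Case-control (sampling-fraction) bias only shifts the intercept.  If the population
      # prior probability of an explosion is pi1 (an assumed value here), the intercept can
      # be corrected by log(pi1/(1-pi1)) - log(n_ex/n_eq), the standard prior correction.
      pi1 = 0.02
      beta = fit.params.copy()
      beta[0] += np.log(pi1 / (1 - pi1)) - np.log(n_ex / n_eq)
      print("corrected explosion probabilities, first 3 events:",
            1 / (1 + np.exp(-(X[:3] @ beta))))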

  7. Least-Squares Linear Regression and Schrodinger's Cat: Perspectives on the Analysis of Regression Residuals.

    ERIC Educational Resources Information Center

    Hecht, Jeffrey B.

    The analysis of regression residuals and detection of outliers are discussed, with emphasis on determining how deviant an individual data point must be to be considered an outlier and the impact that multiple suspected outlier data points have on the process of outlier determination and treatment. Only bivariate (one dependent and one independent)…

  8. Nonlinear models for estimating GSFC travel requirements

    NASA Technical Reports Server (NTRS)

    Buffalano, C.; Hagan, F. J.

    1974-01-01

    A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a data base of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each of several types of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.

  9. Atavistic regression as a factor in the remission of cancer.

    PubMed

    Meares, A

    1977-07-23

    It is suggested that the atavistic regression of the mind in intensive meditation is accompanied by a similar physiological regression, and that this may involve the immune system and so influence the patient's defences against cancer.

  10. Vacuum Rabi splitting effect in nanomechanical QED system with nonlinear resonator

    NASA Astrophysics Data System (ADS)

    Zhao, MingYue; Gao, YiBo

    2016-08-01

    Considering the intrinsic nonlinearity in a nanomechanical resonator coupled to a charge qubit, the vacuum Rabi splitting effect is studied in a nanomechanical QED (qubit-resonator) system. A driven nonlinear Jaynes-Cummings model describes the dynamics of this qubit-resonator system. Using the quantum regression theorem and a master equation approach, we calculate the two-time correlation spectrum analytically. In the weak driving limit, these analytical results clarify the influence of the driving strength and the nonlinearity parameter on the correlation spectrum. Numerical calculations confirm these analytical results.

  11. Spatial vulnerability assessments by regression kriging

    NASA Astrophysics Data System (ADS)

    Pásztor, László; Laborczi, Annamária; Takács, Katalin; Szatmári, Gábor

    2016-04-01

    information representing IEW or GRP forming environmental factors was taken into account to support the spatial inference of the locally experienced IEW frequency and the measured GRP values, respectively. An efficient spatial prediction methodology was applied to construct reliable maps, namely regression kriging (RK), using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. First, the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which is interpolated by kriging. The final map is the sum of the two component predictions. Application of RK also provides the possibility of inherent accuracy assessment. The resulting maps are characterized by global and local measures of their accuracy. Additionally, the method enables interval estimation for the spatial extension of areas in predefined risk categories. All of these outputs provide a useful contribution to spatial planning, action planning and decision making. Acknowledgement: Our work was partly supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
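
    A minimal sketch of the two-part RK workflow on synthetic data; the residual kriging step is emulated here with a Gaussian-process regressor rather than a fitted variogram model, which is an assumption of this sketch, not the study's implementation.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(9)
      n = 200
      coords = rng.uniform(0, 10, size=(n, 2))                        # observation locations
      covars = rng.normal(size=(n, 3))                                # soil / terrain / climate covariates
      target = (2.0 + covars @ np.array([1.0, -0.5, 0.3])
                + np.sin(coords[:, 0]) + rng.normal(scale=0.2, size=n))  # spatially structured residual

      # Step 1: deterministic trend from multiple linear regression on the auxiliary covariates.
      trend = LinearRegression().fit(covars, target)
      resid = target - trend.predict(covars)

      # Step 2: interpolate the regression residuals from the coordinates.
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(), normalize_y=True)
      gp.fit(coords, resid)

      # Final regression-kriging prediction = trend component + interpolated residual component.
      new_coords, new_covars = rng.uniform(0, 10, size=(5, 2)), rng.normal(size=(5, 3))
      prediction = trend.predict(new_covars) + gp.predict(new_coords)
      print(np.round(prediction, 2))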

  12. A new form of bivariate generalized Poisson regression model

    NASA Astrophysics Data System (ADS)

    Faroughi, Pouya; Ismail, Noriszura

    2014-09-01

    This paper introduces a new form of bivariate generalized Poisson (BGP) regression which can be fitted to bivariate and correlated count data with covariates. The BGP regression suggested in this study can be fitted not only to bivariate count data with positive, zero or negative correlations, but also to underdispersed or overdispersed bivariate count data. Applications of bivariate Poisson (BP) regression and the new BGP regression are illustrated on Malaysian motor insurance data.

  13. Frequency domain nonlinear optics

    NASA Astrophysics Data System (ADS)

    Legare, Francois

    2016-05-01

    The universal dilemma of gain narrowing occurring in femtosecond amplifiers prevents ultra-high-power lasers from delivering few-cycle pulses. This problem is overcome by a new amplification concept: Frequency domain Optical Parametric Amplification (FOPA). It enables simultaneous up-scaling of peak power and amplified spectral bandwidth and can be performed at any wavelength range of conventional amplification schemes, but with the capability to amplify single cycles of light. The key idea for amplification of octave-spanning spectra without loss of spectral bandwidth is to amplify the broad spectrum "slice by slice" in the frequency domain, i.e. in the Fourier plane of a 4f setup. The striking advantage of this scheme is its capability to amplify (more than) one octave of bandwidth without sacrificing the correspondingly short pulse duration. This is because ultrabroadband phase matching is not defined by the properties of the nonlinear crystal employed but by the number of crystals employed. In the same manner, to increase the output energy one simply has to increase the spectral extension in the Fourier plane and add one more crystal. Thus, increasing the pulse energy and shortening its duration accompany each other. A proof-of-principle experiment was carried out at ALLS on the sub-two-cycle IR beamline and yielded record-breaking performance in the field of few-cycle IR lasers: 100 μJ two-cycle pulses from a hollow-core fibre compression setup were amplified to 1.43 mJ without distorting their spatial or temporal properties. The pulse duration at the input of the FOPA and after the FOPA remains the same. Recently, we have started upgrading this system to be pumped by 250 mJ to reach 40 mJ two-cycle IR pulses, and the latest results will be presented at the conference. Furthermore, the extension of the FOPA concept to other nonlinear optical processes will be discussed.

  14. Nonlinear magnetohydrodynamics from gravity

    NASA Astrophysics Data System (ADS)

    Hansen, James; Kraus, Per

    2009-04-01

    We apply the recently established connection between nonlinear fluid dynamics and AdS gravity to the case of the dyonic black brane in AdS4. This yields the equations of fluid dynamics for a 2+1 dimensional charged fluid in a background magnetic field. We construct the gravity solution to second order in the derivative expansion. From this we find the fluid dynamical stress tensor and charge current to second and third order in derivatives respectively, along with values for the associated transport coefficients.

  15. Optical correlator tracking nonlinearity

    NASA Astrophysics Data System (ADS)

    Gregory, Don A.; Kirsch, James C.; Johnson, John L.

    1987-01-01

    A limitation observed in the tracking ability of optical correlators is reported. It is shown by calculations that an inherent nonlinearity exists in many optical correlator configurations, with the problem manifesting itself in a mismatch of the input scene with the position of the correlation signal. Results indicate that some care must be given to the selection of components and their configuration in constructing an optical correlator which exhibits true translational invariance. An input test scene is shown along with the correlation spot and cross hairs from a contrast detector; the offset is apparent.

  16. Nonlinear methods for communications

    NASA Astrophysics Data System (ADS)

    1992-08-01

    An innovative communication system has been developed. This system has the potential for improved secure communication for covert operations. By modulating data on the chaotic signal used to synchronize two nonlinear systems, the researchers have created a Low Probability of Intercept (LPI) communications system. They derived the equations which govern the system, made models of the system, and performed numerical simulations to test these models. The theoretical and numerical studies of this system have been validated by experiment. A recent design improvement has led to a system that synchronizes at 0 dB signal-to-noise ratio. This development holds the promise of a Low Probability of Detection (LPD) system.

  17. Logistic Regression: Going beyond Point-and-Click.

    ERIC Educational Resources Information Center

    King, Jason E.

    A review of the literature reveals that important statistical algorithms and indices pertaining to logistic regression are being underused. This paper describes logistic regression in comparison with discriminant analysis and linear regression, and suggests that some techniques only accessible through computer syntax should be consulted in…

  18. Using Time-Series Regression to Predict Academic Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Four methods were used to forecast monthly circulation totals in 15 midwestern academic libraries: dummy time-series regression, lagged time-series regression, simple average (straight-line forecasting), monthly average (naive forecasting). In tests of forecasting accuracy, dummy regression method and monthly mean method exhibited smallest average…

  19. Using Leverage and Influence to Introduce Regression Diagnostics.

    ERIC Educational Resources Information Center

    Hoaglin, David C.

    1988-01-01

    Techniques for teaching linear regression are provided. Discussed are leverage and the hat matrix in simple regression, residuals, the notion of leaving out each observation individually, and use of this to study influence on fitted values and to define residuals. Finally, corresponding diagnostics for multiple regression are discussed. (MNS)
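
    A short sketch of the quantities discussed above for simple regression: the hat matrix, leverages, internally studentized residuals and Cook's distance as an influence measure (synthetic data; the formulas are the standard textbook ones).

      import numpy as np

      rng = np.random.default_rng(10)
      n = 30
      x = rng.uniform(0, 10, size=n)
      y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)
      X = np.column_stack([np.ones(n), x])                    # simple regression design

      H = X @ np.linalg.inv(X.T @ X) @ X.T                    # hat matrix
      leverage = np.diag(H)                                   # h_ii, large when x_i is far from its mean

      resid = y - H @ y                                       # ordinary residuals
      p = X.shape[1]
      s2 = resid @ resid / (n - p)
      studentized = resid / np.sqrt(s2 * (1 - leverage))      # internally studentized residuals
      cooks_d = studentized ** 2 * leverage / (p * (1 - leverage))   # influence on fitted values

      print(np.argsort(cooks_d)[-3:])                         # three most influential observations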

  20. Applications of statistics to medical science, III. Correlation and regression.

    PubMed

    Watanabe, Hiroshi

    2012-01-01

    In this third part of a series surveying medical statistics, the concepts of correlation and regression are reviewed. In particular, methods of linear regression and logistic regression are discussed. Arguments related to survival analysis will be made in a subsequent paper.

  1. Testing Different Model Building Procedures Using Multiple Regression.

    ERIC Educational Resources Information Center

    Thayer, Jerome D.

    The stepwise regression method of selecting predictors for computer assisted multiple regression analysis was compared with forward, backward, and best subsets regression, using 16 data sets. The results indicated the stepwise method was preferred because of its practical nature, when the models chosen by different selection methods were similar…

  2. Relationship between Multiple Regression and Selected Multivariable Methods.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.

    The relationship of multiple linear regression to various multivariate statistical techniques is discussed. The importance of the standardized partial regression coefficient (beta weight) in multiple linear regression as it is applied in path, factor, LISREL, and discriminant analyses is emphasized. The multivariate methods discussed in this paper…

  3. Using Dominance Analysis to Determine Predictor Importance in Logistic Regression

    ERIC Educational Resources Information Center

    Azen, Razia; Traxel, Nicole

    2009-01-01

    This article proposes an extension of dominance analysis that allows researchers to determine the relative importance of predictors in logistic regression models. Criteria for choosing logistic regression R² analogues were determined and measures were selected that can be used to perform dominance analysis in logistic regression. A…

  4. Father regression. Clinical narratives and theoretical reflections.

    PubMed

    Stein, Ruth

    2006-08-01

    The author deals with love-hate enthrallment and submission to a primitive paternal object. This is a father-son relationship that extends through increasing degrees of 'primitiveness' or extremeness, and is illustrated through three different constellations that constitute a continuum. One pole of the continuum encompasses certain male patients who show a loving, de-individuated connection to a father experienced as trustworthy, soft, and in need of protection. Further along the continuum is the case of a transsexual patient whose analysis revealed an intense 'God-transference', a bondage to an idealized, feared, and ostensibly protective father-God introject. A great part of this patient's analysis consisted in a fierce struggle to liberate himself from this figure. The other end of the continuum is occupied by religious terrorists, who exemplify the most radical thralldom to a persecutory, godly object, a regressive submission that banishes woman and enthrones a cruel superego, and that ends in destruction and self-destruction. Psychoanalytic thinking has traditionally dealt with the oedipal father and recently with the nurturing father, but there is a gap in thinking about the phallic, archaic father, and his relations with his son(s). The author aims at filling this gap, at the same time as she also raises the very question of 'What is a father?' linking it with literary and religious themes. PMID:16877249

  5. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic ×10³ speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n⁴) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9] also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization
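
    The forward (ORMP-style) pass of greedy sparse least squares can be sketched in a few lines: at each step, add the column whose inclusion most reduces the residual sum of squares, refitting on the current support. The sketch below omits the partitioned-matrix-inverse speed-ups and the backward-elimination pass that the paper emphasizes.

```python
# Sketch of greedy forward selection for sparse least squares (ORMP-style forward
# pass), without the paper's partitioned-inverse speed-ups or backward pass.
import numpy as np

def greedy_sls(X, y, k):
    support = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(X.shape[1]):
            if j in support:
                continue
            cols = support + [j]
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            err = np.sum((y - X[:, cols] @ beta) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        support.append(best_j)
    return sorted(support)

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
beta_true = np.zeros(20)
beta_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(greedy_sls(X, y, 3))   # should recover columns 2, 7, 11
```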

  6. Flexible regression models over river networks

    PubMed Central

    O’Donnell, David; Rushworth, Alastair; Bowman, Adrian W; Marian Scott, E; Hallard, Mark

    2014-01-01

    Many statistical models are available for spatial data but the vast majority of these assume that spatial separation can be measured by Euclidean distance. Data which are collected over river networks constitute a notable and commonly occurring exception, where distance must be measured along complex paths and, in addition, account must be taken of the relative flows of water into and out of confluences. Suitable models for this type of data have been constructed based on covariance functions. The aim of the paper is to place the focus on underlying spatial trends by adopting a regression formulation and using methods which allow smooth but flexible patterns. Specifically, kernel methods and penalized splines are investigated, with the latter proving more suitable from both computational and modelling perspectives. In addition to their use in a purely spatial setting, penalized splines also offer a convenient route to the construction of spatiotemporal models, where data are available over time as well as over space. Models which include main effects and spatiotemporal interactions, as well as seasonal terms and interactions, are constructed for data on nitrate pollution in the River Tweed. The results give valuable insight into the changes in water quality in both space and time. PMID:25653460

  7. A rotor optimization using regression analysis

    NASA Technical Reports Server (NTRS)

    Giansante, N.

    1984-01-01

    The design and development of helicopter rotors is subject to the many design variables and their interactions that affect rotor operation. Until recently, selection of rotor design variables to achieve specified rotor operational qualities has been a costly, time consuming, repetitive task. For the past several years, Kaman Aerospace Corporation has successfully applied multiple linear regression analysis, coupled with optimization and sensitivity procedures, in the analytical design of rotor systems. It is concluded that approximating equations can be developed rapidly for a multiplicity of objective and constraint functions and optimizations can be performed in a rapid and cost effective manner; the number and/or range of design variables can be increased by expanding the data base and developing approximating functions to reflect the expanded design space; the order of the approximating equations can be expanded easily to improve correlation between analyzer results and the approximating equations; gradients of the approximating equations can be calculated easily and these gradients are smooth functions reducing the risk of numerical problems in the optimization; the use of approximating functions allows the problem to be started easily and rapidly from various initial designs to enhance the probability of finding a global optimum; and the approximating equations are independent of the analysis or optimization codes used.

  8. Cyclodextrin promotes atherosclerosis regression via macrophage reprogramming

    PubMed Central

    Zimmer, Sebastian; Grebe, Alena; Bakke, Siril S.; Bode, Niklas; Halvorsen, Bente; Ulas, Thomas; Skjelland, Mona; De Nardo, Dominic; Labzin, Larisa I.; Kerksiek, Anja; Hempel, Chris; Heneka, Michael T.; Hawxhurst, Victoria; Fitzgerald, Michael L; Trebicka, Jonel; Gustafsson, Jan-Åke; Westerterp, Marit; Tall, Alan R.; Wright, Samuel D.; Espevik, Terje; Schultze, Joachim L.; Nickenig, Georg; Lütjohann, Dieter; Latz, Eicke

    2016-01-01

    Atherosclerosis is an inflammatory disease linked to elevated blood cholesterol levels. Despite ongoing advances in the prevention and treatment of atherosclerosis, cardiovascular disease remains the leading cause of death worldwide. Continuous retention of apolipoprotein B-containing lipoproteins in the subendothelial space causes a local overabundance of free cholesterol. Since cholesterol accumulation and deposition of cholesterol crystals (CCs) triggers a complex inflammatory response, we tested the efficacy of the cyclic oligosaccharide 2-hydroxypropyl-β-cyclodextrin (CD), a compound that increases cholesterol solubility, in preventing and reversing atherosclerosis. Here we show that CD treatment of murine atherosclerosis reduced atherosclerotic plaque size and CC load, and promoted plaque regression even with a continued cholesterol-rich diet. Mechanistically, CD increased oxysterol production in both macrophages and human atherosclerotic plaques, and promoted liver X receptor (LXR)-mediated transcriptional reprogramming to improve cholesterol efflux and exert anti-inflammatory effects. In vivo, this CD-mediated LXR agonism was required for the anti-atherosclerotic and anti-inflammatory effects of CD as well as for augmented reverse cholesterol transport. Since CD treatment in humans is safe and CD beneficially affects key mechanisms of atherogenesis, it may therefore be used clinically to prevent or treat human atherosclerosis. PMID:27053774

  9. Cyclodextrin promotes atherosclerosis regression via macrophage reprogramming.

    PubMed

    Zimmer, Sebastian; Grebe, Alena; Bakke, Siril S; Bode, Niklas; Halvorsen, Bente; Ulas, Thomas; Skjelland, Mona; De Nardo, Dominic; Labzin, Larisa I; Kerksiek, Anja; Hempel, Chris; Heneka, Michael T; Hawxhurst, Victoria; Fitzgerald, Michael L; Trebicka, Jonel; Björkhem, Ingemar; Gustafsson, Jan-Åke; Westerterp, Marit; Tall, Alan R; Wright, Samuel D; Espevik, Terje; Schultze, Joachim L; Nickenig, Georg; Lütjohann, Dieter; Latz, Eicke

    2016-04-01

    Atherosclerosis is an inflammatory disease linked to elevated blood cholesterol concentrations. Despite ongoing advances in the prevention and treatment of atherosclerosis, cardiovascular disease remains the leading cause of death worldwide. Continuous retention of apolipoprotein B-containing lipoproteins in the subendothelial space causes a local overabundance of free cholesterol. Because cholesterol accumulation and deposition of cholesterol crystals (CCs) trigger a complex inflammatory response, we tested the efficacy of the cyclic oligosaccharide 2-hydroxypropyl-β-cyclodextrin (CD), a compound that increases cholesterol solubility in preventing and reversing atherosclerosis. We showed that CD treatment of murine atherosclerosis reduced atherosclerotic plaque size and CC load and promoted plaque regression even with a continued cholesterol-rich diet. Mechanistically, CD increased oxysterol production in both macrophages and human atherosclerotic plaques and promoted liver X receptor (LXR)-mediated transcriptional reprogramming to improve cholesterol efflux and exert anti-inflammatory effects. In vivo, this CD-mediated LXR agonism was required for the antiatherosclerotic and anti-inflammatory effects of CD as well as for augmented reverse cholesterol transport. Because CD treatment in humans is safe and CD beneficially affects key mechanisms of atherogenesis, it may therefore be used clinically to prevent or treat human atherosclerosis. PMID:27053774

  10. Z-scan theory for nonlocal nonlinear media with simultaneous nonlinear refraction and nonlinear absorption.

    PubMed

    Rashidian Vaziri, Mohammad Reza

    2013-07-10

    In this paper, the Z-scan theory for nonlocal nonlinear media has been further developed when nonlinear absorption and nonlinear refraction appear simultaneously. To this end, the nonlinear photoinduced phase shift between the impinging and outgoing Gaussian beams from a nonlocal nonlinear sample has been generalized. It is shown that this kind of phase shift will reduce correctly to its known counterpart for the case of pure refractive nonlinearity. Using this generalized form of phase shift, the basic formulas for closed- and open-aperture beam transmittances in the far field have been provided, and a simple procedure for interpreting the Z-scan results has been proposed. In this procedure, by separately performing open- and closed-aperture Z-scan experiments and using the represented relations for the far-field transmittances, one can measure the nonlinear absorption coefficient and nonlinear index of refraction as well as the order of nonlocality. Theoretically, it is shown that when the absorptive nonlinearity is present in addition to the refractive nonlinearity, the sample nonlocal response can noticeably suppress the peak and enhance the valley of the Z-scan closed-aperture transmittance curves, which is due to the nonlocal action's ability to change the beam transverse dimensions.
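
    For orientation, the sketch below evaluates the textbook closed-aperture Z-scan transmittance of a thin, local, purely refractive medium, the limiting case that the generalized nonlocal theory must reduce to. It is not the paper's generalized expression, and the parameter values are illustrative.

```python
# Textbook closed-aperture Z-scan transmittance for a thin *local* medium with a
# purely refractive nonlinearity (the limiting case the paper generalizes):
#   T(x) = 1 + 4*dPhi0*x / ((x**2 + 9)*(x**2 + 1)),   x = z / z0
import numpy as np

def closed_aperture_T(z, z0, dphi0):
    x = z / z0
    return 1.0 + 4.0 * dphi0 * x / ((x**2 + 9.0) * (x**2 + 1.0))

z = np.linspace(-5.0, 5.0, 201)                    # scan positions (illustrative units)
T = closed_aperture_T(z, z0=1.0, dphi0=0.3)        # on-axis nonlinear phase shift of 0.3 rad
print("peak-valley transmittance difference:", T.max() - T.min())
```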

  11. Nonlinear refraction in vitreous humor.

    PubMed

    Rockwell, B A; Roach, W P; Rogers, M E; Mayo, M W; Toth, C A; Cain, C P; Noojin, G D

    1993-11-01

    We extend the application of the z-scan technique to determine the nonlinear refractive index (n(2)) for human and rabbit vitreous humor, water, and physiological saline. In these measurements there were nonlinear contributions to the measured signal from the aqueous samples and the quartz cell that held the sample. Measurements were made with 60-ps pulses at 532 nm. To our knowledge, this is the first measurement of the nonlinear refractive properties of biological material. PMID:19829406

  12. Nonlinear ultrasonic phased array imaging.

    PubMed

    Potter, J N; Croxford, A J; Wilcox, P D

    2014-10-01

    This Letter reports a technique for the imaging of acoustic nonlinearity. By contrasting the energy of the diffuse field produced through the focusing of an ultrasonic array by delayed parallel element transmission with that produced by postprocessing of sequential transmission data, acoustic nonlinearity local to the focal point is measured. Spatially isolated wave distortion is inferred without requiring interrogation of the wave at the inspection point, thereby allowing nonlinear imaging through depth.

  13. NONLINEAR ATOM OPTICS

    SciTech Connect

    T. MILONNI; G. CSANAK; ET AL

    1999-07-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The project objectives were to explore theoretically various aspects of nonlinear atom optics effects in cold-atom waves and traps. During the project a major development occurred: the observation, by as many as a dozen experimental groups, of Bose-Einstein condensation (BEC) in cold-atom traps. This stimulated us to focus our attention on those aspects of nonlinear atom optics relating to BEC, in addition to continuing our work on a nonequilibrium formalism for dealing with the interaction of an electromagnetic field with multi-level atomic systems, allowing for macroscopic coherence effects such as BEC. Studies of several problems in BEC physics have been completed or are near completion, including the suggested use of external electric fields to modify the nature of the interatomic interaction in cold-atom traps; properties of two-phase condensates; and molecular loss processes associated with BEC experiments involving a so-called Feshbach resonance.

  14. Improved nonlinear prediction method

    NASA Astrophysics Data System (ADS)

    Adenan, Nur Hamiza; Md Noorani, Mohd Salmi

    2014-06-01

    The analysis and prediction of time series data have been addressed by many researchers, and techniques have been developed for various areas, such as weather forecasting, financial markets and hydrological phenomena involving data contaminated by noise. Given the importance of the analysis and the accuracy of the prediction results, a study was undertaken to test the effectiveness of an improved nonlinear prediction method for data that contain noise. The improved method involves forming a composite series from the successive differences of the time series. Phase space reconstruction is then performed on the one-dimensional composite data to reconstruct a number of embedding dimensions. Finally, the local linear approximation method is employed to make a prediction based on the phase space. The improved method was tested on logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that the predictions obtained with the improved method are in close agreement with the observed values, and the correlation coefficient was close to one for data with up to 10% noise. Thus, the method allows noisy time series to be analyzed and predicted without applying any separate noise reduction step.
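
    A compact sketch of the described pipeline (differencing, delay embedding, and local prediction from nearest neighbours) on a noisy logistic map series is shown below. The embedding parameters are assumed values, and a local average of the neighbours' successors stands in for the full local linear approximation.

```python
# Sketch: composite series of successive differences, delay-embedding phase space
# reconstruction, and a nearest-neighbour local prediction (local average here,
# as a simplification of the local linear approximation).
import numpy as np

rng = np.random.default_rng(4)
x = np.empty(600)
x[0] = 0.4
for i in range(599):                              # logistic map series
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
x += rng.normal(0, 0.01, size=x.shape)            # additive noise

d = np.diff(x)                                    # composite series of successive differences
m, tau = 3, 1                                     # embedding dimension and delay (assumed)
emb = np.column_stack([d[i:len(d) - (m - 1) * tau + i] for i in range(0, m * tau, tau)])

query, history = emb[-1], emb[:-1]
dist = np.linalg.norm(history - query, axis=1)
nn = np.argsort(dist)[:5]                         # five nearest neighbours in phase space
pred_diff = np.mean(d[nn + (m - 1) * tau + 1])    # average of the neighbours' successors
print("predicted next value:", x[-1] + pred_diff)
```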

  15. Nonlinear Attitude Filtering Methods

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Crassidis, John L.; Cheng, Yang

    2005-01-01

    This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.

  16. Problems in nonlinear resistive MHD

    SciTech Connect

    Turnbull, A.D.; Strait, E.J.; La Haye, R.J.; Chu, M.S.; Miller, R.L.

    1998-12-31

    Two experimentally relevant problems can relatively easily be tackled by nonlinear MHD codes. Both problems require plasma rotation in addition to the nonlinear mode coupling and full geometry already incorporated into the codes, but no additional physics seems to be crucial. The problems discussed here are: (1) nonlinear coupling and interaction of multiple MHD modes near the beta limit, and (2) nonlinear coupling of the m/n = 1/1 sawtooth mode with higher-n modes and the development of seed islands outside q = 1.

  17. Nonlinear ptychographic coherent diffractive imaging.

    PubMed

    Odstrcil, M; Baksh, P; Gawith, C; Vrcelj, R; Frey, J G; Brocklesby, W S

    2016-09-01

    Ptychographic Coherent diffractive imaging (PCDI) is a significant advance in imaging allowing the measurement of the full electric field at a sample without use of any imaging optics. So far it has been confined solely to imaging of linear optical responses. In this paper we show that because of the coherence-preserving nature of nonlinear optical interactions, PCDI can be generalised to nonlinear optical imaging. We demonstrate second harmonic generation PCDI, directly revealing phase information about the nonlinear coefficients, and showing the general applicability of PCDI to nonlinear interactions. PMID:27607631

  18. Rank-preserving regression: a more robust rank regression model against outliers.

    PubMed

    Chen, Tian; Kowalski, Jeanne; Chen, Rui; Wu, Pan; Zhang, Hui; Feng, Changyong; Tu, Xin M

    2016-08-30

    Mean-based semi-parametric regression models such as the popular generalized estimating equations are widely used to improve robustness of inference over parametric models. Unfortunately, such models are quite sensitive to outlying observations. The Wilcoxon-score-based rank regression (RR) provides more robust estimates over generalized estimating equations against outliers. However, the RR and its extensions do not sufficiently address missing data arising in longitudinal studies. In this paper, we propose a new approach to address outliers under a different framework based on the functional response models. This functional-response-model-based alternative not only addresses limitations of the RR and its extensions for longitudinal data, but, with its rank-preserving property, even provides more robust estimates than these alternatives. The proposed approach is illustrated with both real and simulated data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26934999

  20. [Spatial interpolation of soil organic matter using regression Kriging and geographically weighted regression Kriging].

    PubMed

    Yang, Shun-hua; Zhang, Hai-tao; Guo, Long; Ren, Yan

    2015-06-01

    Relative elevation and stream power index were selected as auxiliary variables based on correlation analysis for mapping soil organic matter. Geographically weighted regression Kriging (GWRK) and regression Kriging (RK) were used for spatial interpolation of soil organic matter and compared with ordinary Kriging (OK), which acts as a control. The results indicated that soil organic matter was significantly positively correlated with relative elevation whilst it had a significantly negative correlation with stream power index. Semivariance analysis showed that both soil organic matter content and its residuals (including ordinary least square regression residual and GWR residual) had strong spatial autocorrelation. Interpolation accuracies by different methods were estimated based on a data set of 98 validation samples. Results showed that the mean error (ME), mean absolute error (MAE) and root mean square error (RMSE) of RK were respectively 39.2%, 17.7% and 20.6% lower than the corresponding values of OK, with a relative improvement (RI) of 20.63. GWRK showed a similar tendency, having its ME, MAE and RMSE to be respectively 60.6%, 23.7% and 27.6% lower than those of OK, with a RI of 59.79. Therefore, both RK and GWRK significantly improved the accuracy of OK interpolation of soil organic matter due to their incorporation of auxiliary variables. In addition, GWRK performed obviously better than RK did in this study, and its improved performance should be attributed to the consideration of sample spatial locations. PMID:26572015
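
    The regression kriging idea itself, a regression trend on auxiliary variables plus spatial interpolation of the residuals, can be sketched as below. Here a Gaussian-process regressor stands in for ordinary kriging of the residuals, and the data and auxiliary variables are synthetic placeholders rather than the soil data of the study; the paper's GWRK variant additionally lets the regression coefficients vary in space.

```python
# Sketch of regression kriging (RK): OLS trend on auxiliary variables plus spatial
# interpolation of the residuals (Gaussian process used as a stand-in for kriging).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
coords = rng.uniform(0, 100, size=(150, 2))                            # sample locations
aux = np.column_stack([rng.normal(size=150), rng.normal(size=150)])    # e.g. relative elevation, SPI
som = 20 + 3 * aux[:, 0] - 2 * aux[:, 1] + np.sin(coords[:, 0] / 15) + rng.normal(0, 0.3, 150)

trend = LinearRegression().fit(aux, som)               # regression part
resid = som - trend.predict(aux)
gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.1)).fit(coords, resid)

# Prediction at a new location = trend from auxiliaries + interpolated residual
new_coord, new_aux = np.array([[50.0, 50.0]]), np.array([[0.2, -0.1]])
print(trend.predict(new_aux) + gp.predict(new_coord))
```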

  1. Regional flood frequency analysis using spatial proximity and basin characteristics: Quantile regression vs. parameter regression technique

    NASA Astrophysics Data System (ADS)

    Ahn, Kuk-Hyun; Palmer, Richard

    2016-09-01

    Despite wide use of regression-based regional flood frequency analysis (RFFA) methods, the majority are based on either ordinary least squares (OLS) or generalized least squares (GLS). This paper proposes 'spatial proximity' based RFFA methods using the spatial lagged model (SLM) and spatial error model (SEM). The proposed methods are represented by two frameworks: the quantile regression technique (QRT) and parameter regression technique (PRT). The QRT develops prediction equations for flooding quantiles in average recurrence intervals (ARIs) of 2, 5, 10, 20, and 100 years whereas the PRT provides prediction of three parameters for the selected distribution. The proposed methods are tested using data incorporating 30 basin characteristics from 237 basins in Northeastern United States. Results show that generalized extreme value (GEV) distribution properly represents flood frequencies in the study gages. Also, basin area, stream network, and precipitation seasonality are found to be the most effective explanatory variables in prediction modeling by the QRT and PRT. 'Spatial proximity' based RFFA methods provide reliable flood quantile estimates compared to simpler methods. Compared to the QRT, the PRT may be recommended due to its accuracy and computational simplicity. The results presented in this paper may serve as one possible guidepost for hydrologists interested in flood analysis at ungaged sites.

  2. Sirenomelia and severe caudal regression syndrome

    PubMed Central

    Seidahmed, Mohammed Z.; Abdelbasit, Omer B.; Alhussein, Khalid A.; Miqdad, Abeer M.; Khalil, Mohammed I.; Salih, Mustafa A.

    2014-01-01

    Objective: To describe cases of sirenomelia and severe caudal regression syndrome (CRS), to report the prevalence of sirenomelia, and compare our findings with the literature. Methods: Retrospective data was retrieved from the medical records of infants with the diagnosis of sirenomelia and CRS and their mothers from 1989 to 2010 (22 years) at the Security Forces Hospital, Riyadh, Saudi Arabia. A perinatologist, neonatologist, pediatric neurologist, and radiologist ascertained the diagnoses. The cases were identified as part of a study of neural tube defects during that period. A literature search was conducted using MEDLINE. Results: During the 22-year study period, the total number of deliveries was 124,933 out of whom, 4 patients with sirenomelia, and 2 patients with severe forms of CRS were identified. All the patients with sirenomelia had single umbilical artery, and none were the infant of a diabetic mother. One patient was a twin, and another was one of triplets. The 2 patients with CRS were sisters, their mother suffered from type II diabetes mellitus and morbid obesity on insulin, and neither of them had a single umbilical artery. Other associated anomalies with sirenomelia included an absent radius, thumb, and index finger in one patient, Potter’s syndrome, abnormal ribs, microphthalmia, congenital heart disease, hypoplastic lungs, and diaphragmatic hernia. Conclusion: The prevalence of sirenomelia (3.2 per 100,000) is high compared with the international prevalence of one per 100,000. Both cases of CRS were infants of type II diabetic mother with poor control, supporting the strong correlation of CRS and maternal diabetes. PMID:25551110

  3. Deep Human Parsing with Active Template Regression.

    PubMed

    Liang, Xiaodan; Liu, Si; Shen, Xiaohui; Yang, Jianchao; Liu, Luoqi; Dong, Jian; Lin, Liang; Yan, Shuicheng

    2015-12-01

    In this work, the human parsing task, namely decomposing a human image into semantic fashion/body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion/body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-arts for human parsing. In particular, the F1-score reaches 64.38 percent by our ATR framework, significantly higher than 44.76 percent based on the state-of-the-art algorithm [28]. PMID:26539846

  5. TOPICAL REVIEW: Nonlinear photonic crystals: III. Cubic nonlinearity

    NASA Astrophysics Data System (ADS)

    Babin, Anatoli; Figotin, Alexander

    2003-10-01

    Weakly nonlinear interactions between wavepackets in a lossless periodic dielectric medium are studied based on the classical Maxwell equations with a cubic nonlinearity. We consider nonlinear processes such that: (i) the amplitude of the wave component due to the nonlinearity does not exceed the amplitude of its linear component; (ii) the spatial range of a probing wavepacket is much smaller than the dimension of the medium sample, and it is not too small compared with the dimension of the primitive cell. These nonlinear processes are naturally described in terms of the cubic interaction phase function based on the dispersion relations of the underlying linear periodic medium. It turns out that only a few quadruplets of modes have significant nonlinear interactions. They are singled out by a system of selection rules including the group velocity, frequency and phase matching conditions. It turns out that the intrinsic symmetries of the cubic interaction phase stemming from assumed inversion symmetry of the dispersion relations play a significant role in the cubic nonlinear interactions. We also study canonical forms of the cubic interaction phase leading to a complete quantitative classification of all possible significant cubic interactions. The classification is ultimately based on a universal system of indices reflecting the intensity of nonlinear interactions.

  6. Dislocation nonlinearity and nonlinear wave processes in polycrystals with dislocations

    NASA Astrophysics Data System (ADS)

    Nazarov, V. E.

    2016-09-01

    Based on the modification of the linear part of the Granato-Lücke dislocation theory of absorption, the equation of state of polycrystalline solids with dissipative and reactive nonlinearity has been derived. The nonlinear effects of the interaction and self-action of longitudinal elastic waves in such media have been theoretically studied.

  7. Predictive Regression Models of Monthly Seismic Energy Emissions Induced by Longwall Mining

    NASA Astrophysics Data System (ADS)

    Jakubowski, Jacek; Tajduś, Antoni

    2014-10-01

    This article presents the development and validation of predictive regression models of longwall mining-induced seismicity, based on observations in 63 longwalls, in 12 seams, in the Bielszowice colliery in the Upper Silesian Coal Basin, which took place between 1992 and 2012. A predicted variable is the logarithm of the monthly sum of seismic energy induced in a longwall area. The set of predictors include seven quantitative and qualitative variables describing some mining and geological conditions and earlier seismicity in longwalls. Two machine learning methods have been used to develop the models: boosted regression trees and neural networks. Two types of model validation have been applied: on a random validation sample and on a time-based validation sample. The set of a few selected variables enabled nonlinear regression models to be built which gave relatively small prediction errors, taking the complex and strongly stochastic nature of the phenomenon into account. The article presents both the models of periodic forecasting for the following month as well as long-term forecasting.
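
    A minimal sketch of one of the two approaches, boosted regression trees predicting the logarithm of monthly seismic energy, is given below. The predictors and data are hypothetical stand-ins for the longwall variables, and withholding the last 20% of records emulates the time-based validation sample.

```python
# Sketch: gradient-boosted regression trees predicting log monthly seismic energy
# from a few hypothetical mining/geological predictors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 500
X = np.column_stack([
    rng.uniform(100, 400, n),    # e.g. depth of cover
    rng.uniform(1, 5, n),        # e.g. seam thickness
    rng.normal(size=n),          # e.g. previous month's log energy
])
log_energy = 0.01 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.5, n)

# shuffle=False keeps temporal order, emulating a time-based validation split
X_tr, X_te, y_tr, y_te = train_test_split(X, log_energy, test_size=0.2, shuffle=False)
brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3).fit(X_tr, y_tr)
print("validation R^2:", brt.score(X_te, y_te))
```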

  8. Comparative analysis of neural network and regression based condition monitoring approaches for wind turbine fault detection

    NASA Astrophysics Data System (ADS)

    Schlechtingen, Meik; Ferreira Santos, Ilmar

    2011-07-01

    This paper presents the research results of a comparison of three different model based approaches for wind turbine fault detection in online SCADA data, by applying developed models to five real measured faults and anomalies. The regression based model as the simplest approach to build a normal behavior model is compared to two artificial neural network based approaches, which are a full signal reconstruction and an autoregressive normal behavior model. Based on a real time series containing two generator bearing damages the capabilities of identifying the incipient fault prior to the actual failure are investigated. The period after the first bearing damage is used to develop the three normal behavior models. The developed or trained models are used to investigate how the second damage manifests in the prediction error. Furthermore the full signal reconstruction and the autoregressive approach are applied to further real time series containing gearbox bearing damages and stator temperature anomalies. The comparison revealed all three models being capable of detecting incipient faults. However, they differ in the effort required for model development and the remaining operational time after first indication of damage. The general nonlinear neural network approaches outperform the regression model. The remaining seasonality in the regression model prediction error makes it difficult to detect abnormality and leads to increased alarm levels and thus a shorter remaining operational period. For the bearing damages and the stator anomalies under investigation the full signal reconstruction neural network gave the best fault visibility and thus led to the highest confidence level.

  9. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patniak, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

    Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.

  10. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique, which show the output formats and typical plots comparing computer results to each set of input data.
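
    The two model forms, a full Pth-degree polynomial with linear cross products versus pure powers of each variable only, can be contrasted in a few lines, as sketched below on synthetic data; this illustrates the idea with modern tooling rather than the original computer programs.

```python
# Sketch of the two model forms: a full degree-P polynomial expansion with cross
# products versus pure powers x_i, x_i^2, ..., x_i^P of each variable only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(200, 2))
y = 1 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + 3 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)
P = 2

X_full = PolynomialFeatures(degree=P, include_bias=False).fit_transform(X)   # with cross products
X_pure = np.hstack([X ** d for d in range(1, P + 1)])                        # pure powers only

for name, Xd in [("with cross products", X_full), ("pure powers only", X_pure)]:
    print(name, "R^2:", LinearRegression().fit(Xd, y).score(Xd, y))
```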

  11. Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression

    NASA Astrophysics Data System (ADS)

    Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.

    2015-12-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information on planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of changing climate, it is possible that the assumption of stationarity would no longer be valid and the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-Splines quantile regression, which allows non-stationary data with a dependence on covariates to be modelled. Such covariates may have linear or nonlinear dependence. A Markov Chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimation of the proposed model for each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities. Keywords: Quantile regression, B-Splines functions, MCMC, Streamflow, Climate indices, non-stationarity.
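
    The modelling idea, a flow quantile expressed through a B-spline basis of a climate covariate, is sketched below with a frequentist statsmodels fit on synthetic data; the paper itself estimates the spline quantile model with MCMC, and all variable names here are hypothetical.

```python
# Sketch: quantile regression of annual maximum flow on a B-spline basis of a
# climate covariate (frequentist fit for illustration; the paper uses MCMC).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 300
climate_index = rng.normal(size=n)                       # e.g. an ENSO/NAO-type index
annual_max = 200 + 40 * np.tanh(climate_index) + rng.gamma(2.0, 15.0, n)
df = pd.DataFrame({"annual_max": annual_max, "climate_index": climate_index})

# 0.95 non-exceedance quantile, nonlinear in the covariate via a B-spline basis
fit = smf.quantreg("annual_max ~ bs(climate_index, df=4)", data=df).fit(q=0.95)
print(fit.params)
```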

  12. Oil and gas pipeline construction cost analysis and developing regression models for cost estimation

    NASA Astrophysics Data System (ADS)

    Thaduri, Ravi Kiran

    In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, Labor, ROW and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on different pipeline lengths, diameter, location, pipeline volume and year of completion. In a pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The Compressor stations are analyzed based on the capacity, year of completion and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of compressor station, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor station for various capacities and locations.

  13. On Insensitivity of the Chi-Square Model Test to Nonlinear Misspecification in Structural Equation Models

    ERIC Educational Resources Information Center

    Mooijaart, Ab; Satorra, Albert

    2009-01-01

    In this paper, we show that for some structural equation models (SEM), the classical chi-square goodness-of-fit test is unable to detect the presence of nonlinear terms in the model. As an example, we consider a regression model with latent variables and interaction terms. Not only does the model test have zero power against that type of…

  14. Bayesian Analysis of Nonlinear Structural Equation Models with Nonignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum

    2006-01-01

    A Bayesian approach is developed for analyzing nonlinear structural equation models with nonignorable missing data. The nonignorable missingness mechanism is specified by a logistic regression model. A hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm is used to produce the joint Bayesian estimates of…

  15. Can Flexible Non-Linear Modeling Tell Us Anything New about Educational Productivity?

    ERIC Educational Resources Information Center

    Baker, Bruce D.

    2001-01-01

    Explores whether flexible nonlinear models (including neural networks and genetic algorithms) can reveal otherwise unexpected patterns of relationship in typical school-productivity data. Applying three types of algorithms alongside regression modeling to school-level data in 183 elementary schools proves the hypothesis and reveals new directions…

  16. Visualizing Confidence Bands for Semiparametrically Estimated Nonlinear Relations among Latent Variables

    ERIC Educational Resources Information Center

    Pek, Jolynn; Chalmers, R. Philip; Kok, Bethany E.; Losardo, Diane

    2015-01-01

    Structural equation mixture models (SEMMs), when applied as a semiparametric model (SPM), can adequately recover potentially nonlinear latent relationships without their specification. This SPM is useful for exploratory analysis when the form of the latent regression is unknown. The purpose of this article is to help users familiar with structural…

  17. Multimodal Nonlinear Optical Microscopy

    PubMed Central

    Yue, Shuhua; Slipchenko, Mikhail N.; Cheng, Ji-Xin

    2013-01-01

    Because each nonlinear optical (NLO) imaging modality is sensitive to specific molecules or structures, multimodal NLO imaging capitalizes on the potential of NLO microscopy for studies of complex biological tissues. The coupling of multiphoton fluorescence, second harmonic generation, and coherent anti-Stokes Raman scattering (CARS) has allowed investigation of a broad range of biological questions concerning lipid metabolism, cancer development, cardiovascular disease, and skin biology. Moreover, recent research shows the great potential of using a CARS microscope as a platform to develop more advanced NLO modalities such as electronic-resonance-enhanced four-wave mixing, stimulated Raman scattering, and pump-probe microscopy. This article reviews the various approaches developed for realization of multimodal NLO imaging as well as developments of new NLO modalities on a CARS microscope. Applications to various aspects of biological and biomedical research are discussed. PMID:24353747

  18. Nonlinear integrable ion traps

    SciTech Connect

    Nagaitsev, S.; Danilov, V.; /SNS Project, Oak Ridge

    2011-10-01

    Quadrupole ion traps can be transformed into nonlinear traps with integrable motion by adding special electrostatic potentials. This can be done with both stationary potentials (electrostatic plus a uniform magnetic field) and with time-dependent electric potentials. These potentials are chosen such that the single particle Hamilton-Jacobi equations of motion are separable in some coordinate systems. The electrostatic potentials have several free adjustable parameters allowing for a quadrupole trap to be transformed into, for example, a double-well or a toroidal-well system. The particle motion remains regular, non-chaotic, integrable in quadratures, and stable for a wide range of parameters. We present two examples of how to realize such a system in case of a time-independent (the Penning trap) as well as a time-dependent (the Paul trap) configuration.

  19. Spherically symmetric nonlinear structures

    NASA Astrophysics Data System (ADS)

    Calzetta, Esteban A.; Kandus, Alejandra

    1997-02-01

    We present an analytical method to extract observational predictions about the nonlinear evolution of perturbations in a Tolman universe. We assume no a priori profile for them. We solve perturbatively a Hamilton-Jacobi equation for a timelike geodesic and obtain the null one as a limiting case in two situations: for an observer located in the center of symmetry and for a noncentered one. In the first case we find expressions to evaluate the density contrast and the number count and luminosity distance versus redshift relationships up to second order in the perturbations. In the second situation we calculate the CMBR anisotropies at large angular scales produced by the density contrast and by the asymmetry of the observer's location, up to first order in the perturbations. We develop our argument in such a way that the formulas are valid for any shape of the primordial spectrum.

  20. Nonlinearities in vegetation functioning

    NASA Astrophysics Data System (ADS)

    Ceballos-Núñez, Verónika; Müller, Markus; Metzler, Holger; Sierra, Carlos

    2016-04-01

    Given the current drastic changes in climate and atmospheric CO2 concentrations, and the role of vegetation in the global carbon cycle, there is increasing attention to the carbon allocation component in terrestrial biosphere models. Improving the representation of C allocation in models could be the key to having better predictions of the fate of C once it enters the vegetation and is partitioned to C pools of different residence times. C allocation has often been modeled using systems of ordinary differential equations, and it has been hypothesized that most models can be generalized with a specific form of a linear dynamical system. However, several studies have highlighted discrepancies between empirical observations and model predictions, attributing these differences to problems with model structure. Although efforts have been made to compare different models, the outcome of these qualitative assessments has been a conceptual categorization of them. In this contribution, we introduce a new effort to identify the main properties of groups of models by studying their mathematical structure. For this purpose, we performed a literature review of the relevant models of carbon allocation in vegetation and developed a database with their representation in symbolic mathematics. We used the Python package SymPy for symbolic mathematics as a common language and manipulated the models to calculate their Jacobian matrix at fixed points and their eigenvalues, among other mathematical analyses. Our preliminary results show a tendency of inverse proportionality between model complexity and size of time/space scale; complex interactions between the variables controlling carbon allocation in vegetation tend to operate at shorter time/space scales, and vice-versa. Most importantly, we found that although the linear structure is common, other structures with non-linearities have also been proposed. We, therefore, propose a new General Model that can accommodate these
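
    A minimal example of the symbolic workflow described, writing a small carbon-allocation system in SymPy and inspecting its fixed point, Jacobian and eigenvalues, is given below; the two-pool model is an illustrative linear form, not one of the surveyed published models.

```python
# Minimal example of the symbolic workflow: define a two-pool carbon allocation
# model, solve for its fixed point, and inspect the Jacobian and its eigenvalues.
import sympy as sp

C1, C2 = sp.symbols("C1 C2", positive=True)
u, a, k1, k2 = sp.symbols("u a k1 k2", positive=True)   # input, allocation fraction, turnover rates

# Linear two-pool system: input u split by fraction a, first-order turnover, with
# a tenth of pool-1 losses transferred to pool 2 (illustrative structure).
f = sp.Matrix([a * u - k1 * C1,
               (1 - a) * u + sp.Rational(1, 10) * k1 * C1 - k2 * C2])

fixed_point = sp.solve(f, [C1, C2], dict=True)[0]
J = f.jacobian(sp.Matrix([C1, C2]))
print(fixed_point)
print(J.eigenvals())        # {-k1: 1, -k2: 1}: residence times set by the turnover rates
```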

  1. Adaptive nonlinear flight control

    NASA Astrophysics Data System (ADS)

    Rysdyk, Rolf Theoduor

    1998-08-01

    Research under the supervision of Dr. Calise and Dr. Prasad at the Georgia Institute of Technology, School of Aerospace Engineering, has demonstrated the applicability of an adaptive controller architecture. The architecture successfully combines model inversion control with adaptive neural network (NN) compensation to cancel the inversion error. The tiltrotor aircraft provides a particularly interesting control design challenge. The tiltrotor aircraft is capable of converting from stable responsive fixed wing flight to unstable sluggish hover in helicopter configuration. It is desirable to provide the pilot with consistency in handling qualities through a conversion from fixed wing flight to hover. The linear model inversion architecture was adapted by providing frequency separation in the command filter and the error-dynamics, without exciting the actuator modes. This design of the architecture provides for a model following setup with guaranteed performance. This in turn allowed for convenient implementation of guaranteed handling qualities. A rigorous proof of boundedness is presented making use of compact sets and the LaSalle-Yoshizawa theorem. The analysis allows for the addition of the e-modification which guarantees boundedness of the NN weights in the absence of persistent excitation. The controller is demonstrated on the Generic Tiltrotor Simulator of Bell-Textron and NASA Ames R.C. The model inversion implementation is robustified with respect to unmodeled input dynamics, by adding dynamic nonlinear damping. A proof of boundedness of signals in the system is included. The effectiveness of the robustification is also demonstrated on the XV-15 tiltrotor. The SHL Perceptron NN provides a more powerful application, based on the universal approximation property of this type of NN. The SHL NN based architecture is also robustified with the dynamic nonlinear damping. A proof of boundedness extends the SHL NN augmentation with robustness to unmodeled actuator

  2. Kepler AutoRegressive Planet Search

    NASA Astrophysics Data System (ADS)

    Caceres, Gabriel Antonio; Feigelson, Eric

    2016-01-01

    The Kepler AutoRegressive Planet Search (KARPS) project uses statistical methodology associated with autoregressive (AR) processes to model Kepler lightcurves in order to improve exoplanet transit detection in systems with high stellar variability. We also introduce a planet-search algorithm to detect transits in time-series residuals after application of the AR models. One of the main obstacles in detecting faint planetary transits is the intrinsic stellar variability of the host star. The variability displayed by many stars may have autoregressive properties, wherein later flux values are correlated with previous ones in some manner. Our analysis procedure consists of three steps: pre-processing of the data to remove discontinuities, gaps and outliers; AR-type model selection and fitting; and transit signal search of the residuals using a new Transit Comb Filter (TCF) that replaces traditional box-finding algorithms. The analysis procedures of the project are applied to a portion of the publicly available Kepler light curve data for the full 4-year mission duration. Tests of the methods have been made on a subset of Kepler Objects of Interest (KOI) systems, classified both as planetary `candidates' and `false positives' by the Kepler Team, as well as a random sample of unclassified systems. We find that the ARMA-type modeling successfully reduces the stellar variability, by a factor of 10 or more in active stars and by smaller factors in more quiescent stars. A typical quiescent Kepler star has an interquartile range (IQR) of ~10 e-/sec, which may improve slightly after modeling, while those with IQR ranging from 20 to 50 e-/sec have improvements from 20% up to 70%. High activity stars (IQR exceeding 100) markedly improve. A periodogram based on the TCF is constructed to concentrate the signal of these periodic spikes. When a periodic transit is found, the model is displayed on a standard period-folded averaged light curve. Our findings to date on real
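
    The AR-modelling step can be sketched as below: fit an ARMA-type model to a synthetic light curve with autoregressive variability and transit-like dips, then compare the interquartile range before and after modelling. This is an illustration with statsmodels, not the KARPS pipeline itself.

```python
# Sketch: fit an ARMA-type model to a synthetic light curve and work with the
# residuals, the step performed before the Transit Comb Filter search.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(9)
n = 2000
flux = np.zeros(n)
for i in range(2, n):                      # AR(2)-like stellar variability
    flux[i] = 0.7 * flux[i - 1] + 0.2 * flux[i - 2] + rng.normal(0, 10.0)
flux[::200] -= 40.0                        # crude periodic transit-like dips

fit = ARIMA(flux, order=(2, 0, 1)).fit()
resid = fit.resid                          # variability largely removed; dips remain
print("IQR before:", np.subtract(*np.percentile(flux, [75, 25])),
      "after:", np.subtract(*np.percentile(resid, [75, 25])))
```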

  3. Linearization of Conservative Nonlinear Oscillators

    ERIC Educational Resources Information Center

    Belendez, A.; Alvarez, M. L.; Fernandez, E.; Pascual, I.

    2009-01-01

    A linearization method of the nonlinear differential equation for conservative nonlinear oscillators is analysed and discussed. This scheme is based on the Chebyshev series expansion of the restoring force which allows us to obtain a frequency-amplitude relation which is valid not only for small but also for large amplitudes and, sometimes, for…

  4. Nonlinear diffusion and superconducting hysteresis

    SciTech Connect

    Mayergoyz, I.D.

    1996-12-31

    Nonlinear diffusion of electromagnetic fields in superconductors with ideal and gradual resistive transitions is studied. Analytical results obtained for linear and nonlinear polarizations of electromagnetic fields are reported. These results lead to various extensions of the critical state model for superconducting hysteresis.

  5. Passive linearization of nonlinear resonances

    NASA Astrophysics Data System (ADS)

    Habib, G.; Grappasonni, C.; Kerschen, G.

    2016-07-01

    The objective of this paper is to demonstrate that the addition of properly tuned nonlinearities to a nonlinear system can increase the range over which a specific resonance responds linearly. Specifically, we seek to enforce two important properties of linear systems, namely, the force-displacement proportionality and the invariance of resonance frequencies. Numerical simulations and experiments are used to validate the theoretical findings.

  6. Solving Nonlinear Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Mitchell, L.; David, J.

    1986-01-01

    Harmonic balance method developed to obtain approximate steady-state solutions for nonlinear coupled ordinary differential equations. Method usable with transfer matrices commonly used to analyze shaft systems. Solution to nonlinear equation, with periodic forcing function represented as sum of series similar to Fourier series but with form of terms suggested by equation itself.
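
    A one-harmonic version of the harmonic balance method is easy to sketch for a forced Duffing oscillator: substituting a single cosine term and balancing the fundamental harmonic reduces the differential equation to an algebraic amplitude equation, solved numerically below. The oscillator and parameter values are illustrative, not the shaft systems of the record.

```python
# One-harmonic harmonic-balance sketch for an undamped Duffing oscillator
#   x'' + x + eps*x**3 = F*cos(w*t).
# Substituting x ~ A*cos(w*t) and balancing the cos(w*t) terms gives the algebraic
# amplitude equation (1 - w**2)*A + (3/4)*eps*A**3 - F = 0.
import numpy as np
from scipy.optimize import fsolve

eps, F = 0.5, 0.3   # illustrative nonlinearity and forcing amplitude

def amplitude_eq(A, w):
    return (1.0 - w**2) * A + 0.75 * eps * A**3 - F

for w in (0.8, 1.0, 1.2):
    A = fsolve(amplitude_eq, x0=0.5, args=(w,))[0]
    print(f"w = {w}: steady-state amplitude A = {A:.3f}")
```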

  7. Understanding Child Stunting in India: A Comprehensive Analysis of Socio-Economic, Nutritional and Environmental Determinants Using Additive Quantile Regression

    PubMed Central

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A.

    2013-01-01

    Background Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. Objective We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Design Using cross-sectional data for children aged 0–24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. Results At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Conclusions Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role. PMID:24223839
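
    As a rough illustration of the regression machinery (not the paper's full additive model with spatial and age-varying effects), the sketch below fits a quantile regression at the 35th percentile, the quantile highlighted in the abstract. The data, variable names and effect sizes are invented.

      # Quantile regression of a height-for-age Z-score on a few determinants.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 1000
      df = pd.DataFrame({
          "age_months": rng.uniform(0, 24, n),
          "maternal_bmi": rng.normal(21, 3, n),
          "wealth_index": rng.normal(0, 1, n),
      })
      # Synthetic Z-score with a non-linear age effect plus noise.
      df["haz"] = (-0.06 * df.age_months + 0.001 * df.age_months**2
                   + 0.05 * df.maternal_bmi + 0.4 * df.wealth_index
                   + rng.normal(0, 1, n))

      model = smf.quantreg("haz ~ age_months + I(age_months**2) + maternal_bmi + wealth_index", df)
      print(model.fit(q=0.35).params)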

  8. Nonlinear Oscillators in Space Physics

    NASA Technical Reports Server (NTRS)

    Lester, Daniel; Thronson, Harley

    2011-01-01

    We discuss dynamical systems that produce an oscillation without an external time-dependent source. Numerical results are presented for nonlinear oscillators in the Earth's atmosphere, foremost the quasi-biennial oscillation (QBO). These fluid dynamical oscillators, like the solar dynamo, have in common that one of the variables in a governing equation is strongly nonlinear and that the nonlinearity, to first order, has a particular form, of 3rd or odd power. It is shown that this form of nonlinearity can produce the fundamental frequency of the internal oscillation, which has a period that is favored by the dynamical condition of the fluid. The fundamental frequency maintains the oscillation, with no energy input to the system at that particular frequency. Nonlinearities of 2nd or even power could not maintain the oscillation.

  9. Properties of Nonlinear Dynamo Waves

    NASA Technical Reports Server (NTRS)

    Tobias, S. M.

    1997-01-01

    Dynamo theory offers the most promising explanation of the generation of the sun's magnetic cycle. Mean field electrodynamics has provided the platform for linear and nonlinear models of solar dynamos. However, the nonlinearities included are (necessarily) arbitrarily imposed in these models. This paper conducts a systematic survey of the role of nonlinearities in the dynamo process, by considering the behaviour of dynamo waves in the nonlinear regime. It is demonstrated that only by considering realistic nonlinearities that are non-local in space and time can modulation of the basic dynamo wave be achieved. Moreover, this modulation is greatest when there is a large separation of timescales provided by including a low magnetic Prandtl number in the equation for the velocity perturbations.

  10. Solar cycle in current reanalyses: (non)linear attribution study

    NASA Astrophysics Data System (ADS)

    Kuchar, A.; Sacha, P.; Miksovsky, J.; Pisoft, P.

    2014-12-01

    This study focusses on the variability of temperature, ozone and circulation characteristics in the stratosphere and lower mesosphere with regard to the influence of the 11-year solar cycle. It is based on an attribution analysis using multiple nonlinear techniques (Support Vector Regression, Neural Networks) besides the traditional linear approach. The analysis was applied to several current reanalysis datasets for the 1979-2013 period, including MERRA, ERA-Interim and JRA-55, with the aim of comparing how well these data resolve the double-peaked solar response in temperature and ozone and the consequent changes induced by these anomalies. Equatorial temperature signals in the lower and upper stratosphere were found to be sufficiently robust and in qualitative agreement with previous observational studies. The analysis also showed that the solar signal in the ozone datasets (i.e. MERRA and ERA-Interim) is not consistent with the observed double-peaked ozone anomaly extracted from satellite measurements. The results obtained by linear regression were confirmed by the nonlinear approach across all datasets, suggesting that linear regression is an adequate tool for resolving the solar signal in the middle atmosphere. Furthermore, the seasonal dependence of the solar response was discussed, mainly as a source of dynamical causality in the wave propagation characteristics of the zonal wind and the induced meridional circulation in the winter hemispheres. The hypothesized mechanism of a weaker Brewer-Dobson circulation was reviewed together with a discussion of polar vortex stability.
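
    A minimal sketch of the attribution idea, comparing an ordinary linear fit and Support Vector Regression of a temperature-like series on a solar-cycle proxy. The data are synthetic and the regressor set is far smaller than in the study, which also includes ENSO, QBO, volcanic and trend terms.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.svm import SVR

      rng = np.random.default_rng(2)
      months = np.arange(420)                       # 35 years of monthly data
      solar = np.sin(2 * np.pi * months / 132)      # ~11-year cycle proxy
      temp = 0.5 * solar + rng.normal(0, 0.3, months.size)

      X = solar.reshape(-1, 1)
      lin = LinearRegression().fit(X, temp)
      svr = SVR(kernel="rbf", C=10.0).fit(X, temp)

      # Compare the fitted solar responses on a grid of proxy values.
      grid = np.linspace(-1, 1, 5).reshape(-1, 1)
      print("linear response:", lin.predict(grid).round(2))
      print("SVR response:   ", svr.predict(grid).round(2))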

  11. Enhancement of Visual Field Predictions with Pointwise Exponential Regression (PER) and Pointwise Linear Regression (PLR)

    PubMed Central

    Morales, Esteban; de Leon, John Mark S.; Abdollahi, Niloufar; Yu, Fei; Nouri-Mahdavi, Kouros; Caprioli, Joseph

    2016-01-01

    Purpose The study was conducted to evaluate threshold smoothing algorithms to enhance prediction of the rates of visual field (VF) worsening in glaucoma. Methods We studied 798 patients with primary open-angle glaucoma and 6 or more years of follow-up who underwent 8 or more VF examinations. Thresholds at each VF location for the first 4 years or first half of the follow-up time (whichever was greater) were smoothed with clusters defined by the nearest neighbor (NN), Garway-Heath, and Glaucoma Hemifield Test (GHT) maps, and by weighting by the correlation of rates at all other VF locations. Thresholds were regressed with a pointwise exponential regression (PER) model and a pointwise linear regression (PLR) model. Smaller root mean square error (RMSE) values of the differences between the observed and the predicted thresholds at the last two follow-ups indicated better model predictions. Results The mean (SD) follow-up times for the smoothing and prediction phases were 5.3 (1.5) and 10.5 (3.9) years. The mean RMSE values for the PER and PLR models were unsmoothed data, 6.09 and 6.55; NN, 3.40 and 3.42; Garway-Heath, 3.47 and 3.48; GHT, 3.57 and 3.74; and correlation of rates, 3.59 and 3.64. Conclusions Smoothed VF data predicted better than unsmoothed data. Nearest neighbor provided the best predictions; PER also predicted consistently more accurately than PLR. Smoothing algorithms should be used when forecasting VF results with PER or PLR. Translational Relevance The application of smoothing algorithms to VF data can improve forecasting at VF points to assist in treatment decisions. PMID:26998405
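
    A minimal sketch of PER versus PLR at a single visual-field location, using a synthetic threshold series and holding out the last two visits; the clustering and smoothing pipeline of the paper is not reproduced.

      import numpy as np
      from scipy.optimize import curve_fit

      years = np.arange(0, 12, 1.0)
      true = 30 * np.exp(-0.08 * years)             # exponentially decaying sensitivity (dB)
      thresholds = true + np.random.default_rng(3).normal(0, 1, years.size)

      train, test = slice(0, -2), slice(-2, None)   # hold out the last two visits

      # PER: threshold = a * exp(b * t)
      per = lambda t, a, b: a * np.exp(b * t)
      (a, b), _ = curve_fit(per, years[train], thresholds[train], p0=(30, -0.05))
      per_pred = per(years[test], a, b)

      # PLR: ordinary straight-line fit
      slope, intercept = np.polyfit(years[train], thresholds[train], 1)
      plr_pred = slope * years[test] + intercept

      rmse = lambda p: np.sqrt(np.mean((p - thresholds[test]) ** 2))
      print(f"PER RMSE: {rmse(per_pred):.2f}  PLR RMSE: {rmse(plr_pred):.2f}")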

  12. [The net analyte preprocessing combined with radial basis partial least squares regression applied in noninvasive measurement of blood glucose].

    PubMed

    Li, Qing-Bo; Huang, Zheng-Wei

    2014-02-01

    In order to improve the prediction accuracy of quantitative analysis models for the near-infrared spectroscopy of blood glucose, this paper combines the net analyte preprocessing (NAP) algorithm with radial basis function partial least squares (RBFPLS) regression to build a nonlinear modeling method suited to human glucose measurement, named NAP-RBFPLS. First, NAP is used to pre-process the near-infrared spectra so as to extract only the information related to the glucose signal from the original spectra. This weakens the spurious correlations between glucose changes and interfering factors caused by the absorption of water, albumin, hemoglobin, fat and other blood components, changes in body temperature, instrument drift, and changes in the measurement environment and conditions. A nonlinear quantitative analysis model is then built from the NAP-processed spectra to capture the nonlinear relationship between glucose concentration and the near-infrared spectra that arises from strong scattering in body tissue. The new method is compared with three other quantitative analysis models built on partial least squares (PLS), net analyte preprocessing partial least squares (NAP-PLS), and RBFPLS, respectively. The experimental results show that the nonlinear calibration model combining the NAP algorithm with RBFPLS regression greatly improves prediction accuracy on the prediction sets, demonstrating that this nonlinear modeling method has practical applications in research on non-invasive detection of human glucose concentrations.
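
    The exact NAP-RBFPLS algorithm is not reproduced here; the sketch below shows one common way to combine radial basis functions with PLS (a kernel-PLS construction in scikit-learn) on synthetic spectra, purely to illustrate the nonlinear-PLS idea. The kernel width, component count and data are all invented.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(4)
      n_samples, n_wavelengths = 80, 200
      X = rng.normal(size=(n_samples, n_wavelengths))          # stand-in "spectra"
      y = np.sin(X[:, 10]) + 0.3 * X[:, 50] ** 2 + rng.normal(0, 0.1, n_samples)

      X_train, X_test = X[:60], X[60:]
      y_train, y_test = y[:60], y[60:]

      # Gaussian (RBF) kernel matrices used as PLS inputs.
      K_train = rbf_kernel(X_train, X_train, gamma=1e-3)
      K_test = rbf_kernel(X_test, X_train, gamma=1e-3)

      pls = PLSRegression(n_components=5).fit(K_train, y_train)
      pred = pls.predict(K_test).ravel()
      print("RMSEP:", np.sqrt(np.mean((pred - y_test) ** 2)))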

  13. Nonlinear optical whispering gallery mode resonators

    NASA Technical Reports Server (NTRS)

    Ilchenko, Vladimir (Inventor); Matsko, Andrey B. (Inventor); Savchenkov, Anatoliy (Inventor); Maleki, Lutfollah (Inventor)

    2005-01-01

    Whispering gallery mode (WGM) optical resonators comprising nonlinear optical materials, where the nonlinear optical material of a WGM resonator includes a plurality of sectors within the optical resonator and nonlinear coefficients of two adjacent sectors are oppositely poled.

  14. Local Linear Regression for Data with AR Errors

    PubMed Central

    Li, Runze; Li, Yan

    2009-01-01

    In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into the local linear regression. Under the assumption that the error process is an auto-regressive process, a new estimation procedure is proposed for the nonparametric regression by using the local linear regression method and the profile least squares techniques. We further propose the SCAD penalized profile least squares method to determine the order of the auto-regressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure, and to compare the performance of the proposed procedures with the existing one. From our empirical studies, the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independent error structure. We illustrate the proposed methodology by an analysis of a real data set. PMID:20161374
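
    A minimal sketch of plain local linear regression with a Gaussian kernel; the AR-error and profile least squares refinements of the paper are omitted, and the data are synthetic.

      import numpy as np

      def local_linear(x, y, x_eval, bandwidth):
          """At each x0, fit a kernel-weighted straight line; the intercept is m(x0)."""
          fitted = np.empty_like(x_eval, dtype=float)
          for i, x0 in enumerate(x_eval):
              w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # kernel weights
              X = np.column_stack([np.ones_like(x), x - x0])   # local design matrix
              W = np.diag(w)
              beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y) # weighted least squares
              fitted[i] = beta[0]                              # intercept = m(x0)
          return fitted

      rng = np.random.default_rng(5)
      x = np.sort(rng.uniform(0, 10, 200))
      y = np.sin(x) + rng.normal(0, 0.3, x.size)
      print(local_linear(x, y, np.array([2.5, 5.0, 7.5]), bandwidth=0.8).round(2))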

  15. Estimating peak flow characteristics at ungaged sites by ridge regression

    USGS Publications Warehouse

    Tasker, Gary D.

    1982-01-01

    A regression simulation model is combined with a multisite streamflow generator to simulate a regional regression of 50-year peak discharge against a set of basin characteristics. Monte Carlo experiments are used to compare the unbiased ordinary least squares parameter estimator with Hoerl and Kennard's (1970a) ridge estimator in which the biasing parameter is that proposed by Hoerl, Kennard, and Baldwin (1975). The simulation results indicate a substantial improvement in parameter estimation using ridge regression when the correlation between basin characteristics is more than about 0.90. In addition, results indicate a strong potential for improving the mean square error of prediction of a peak-flow characteristic versus basin characteristics regression model when the basin characteristics are approximately collinear. The simulation covers a range of regression parameters, streamflow statistics, and basin characteristics commonly found in regional regression studies.
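
    A minimal sketch of the core OLS-versus-ridge comparison with two nearly collinear simulated basin characteristics; the ridge parameter is chosen here by cross-validation rather than by the Hoerl-Kennard-Baldwin formula used in the study, and all variables are invented.

      import numpy as np
      from sklearn.linear_model import LinearRegression, RidgeCV

      rng = np.random.default_rng(6)
      n = 60
      area = rng.lognormal(3, 1, n)                      # "drainage area"
      la = np.log(area)
      slope = 0.95 * (la - la.mean()) / la.std() + 0.2 * rng.normal(size=n)
      X = np.column_stack([la, slope])                   # highly correlated predictors
      log_q50 = 1.0 + 0.6 * la + 0.2 * slope + rng.normal(0, 0.3, n)

      print("predictor correlation:", np.corrcoef(X.T)[0, 1].round(3))
      ols = LinearRegression().fit(X, log_q50)
      ridge = RidgeCV(alphas=np.logspace(-3, 2, 30)).fit(X, log_q50)
      print("OLS coefficients:  ", ols.coef_.round(3))
      print("Ridge coefficients:", ridge.coef_.round(3))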

  16. Algorithm For Solution Of Subset-Regression Problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, Michel

    1991-01-01

    Reliable and flexible algorithm for solution of subset-regression problem performs QR decomposition with new column-pivoting strategy, enables selection of subset directly from originally defined regression parameters. This feature, in combination with number of extensions, makes algorithm very flexible for use in analysis of subset-regression problems in which parameters have physical meanings. Also extended to enable joint processing of columns contaminated by noise with those free of noise, without using scaling techniques.
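
    A minimal sketch of subset selection via QR decomposition with column pivoting using SciPy; the extensions described in the abstract (joint processing of noisy and noise-free columns, physically meaningful parameter handling) are not included, and the data are synthetic.

      import numpy as np
      from scipy.linalg import qr

      rng = np.random.default_rng(7)
      n, p = 100, 6
      X = rng.normal(size=(n, p))
      X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=n)    # column 3 nearly duplicates column 0
      y = 2 * X[:, 0] - X[:, 2] + rng.normal(0, 0.1, n)

      # Column pivoting orders regressors by how much new information each adds,
      # so one of the two nearly identical columns ends up near the back.
      Q, R, piv = qr(X, mode="economic", pivoting=True)
      print("column order from pivoting:", piv)

      k = 3                                            # keep the first k pivoted columns
      subset = piv[:k]
      beta, *_ = np.linalg.lstsq(X[:, subset], y, rcond=None)
      print("selected columns:", subset, "coefficients:", beta.round(2))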

  17. Nonlinear Elastic Wave NDE II. Nonlinear Wave Modulation Spectroscopy and Nonlinear Time Reversed Acoustics

    NASA Astrophysics Data System (ADS)

    Sutin, A. M.; Johnson, P. A.

    2005-04-01

    This paper presents the second part of the review of Nonlinear Elastic Wave Spectroscopy (NEWS) in NDE, and describes two different methods of nonlinear NDE that provide not only damage detection but location as well. Nonlinear Wave Modulation Spectroscopy is based on the application of an ultrasonic probe signal modulated by a low frequency vibration. Damage location can be obtained by application of Impulse Modulation Techniques that exploit the modulation of a short pulse reflected from a damage feature (e.g. crack) by low frequency vibration. Nonlinear Time Reversed Acoustic methods provide the means to focus acoustic energy to any point in a solid. In combination, we are applying the focusing properties of TRA and the nonlinear properties of cracks to locate them.

  18. Regression tree modeling of forest NPP using site conditions and climate variables across eastern USA

    NASA Astrophysics Data System (ADS)

    Kwon, Y.

    2013-12-01

    As evidence of global warming continues to increase, being able to predict forest response to climate changes, such as the expected rise of temperature and precipitation, will be vital for maintaining the sustainability and productivity of forests. Mapping forest species redistribution under climate change scenarios has been successful; however, most species redistribution maps lack the mechanistic understanding to explain why trees grow under the novel conditions of a changing climate. A distribution map can only predict under the equilibrium assumption that the communities would exist following a prolonged period under the new climate. In this context, forest NPP, as a surrogate for growth rate, the most important facet that determines stand dynamics, can lead to valid prediction of the transition stage to a new vegetation-climate equilibrium, as it represents changes in forest structure reflecting site conditions and climate factors. The objective of this study is to develop a forest growth map using regression tree analysis by extracting large-scale non-linear structures from both the field-based FIA and the remotely sensed MODIS data sets. The major issue addressed in this approach is the non-linear spatial pattern of forest attributes. Forest inventory data show complex spatial patterns that reflect environmental states and processes originating at different spatial scales. At broad scales, non-linear spatial trends in forest attributes and a mixture of continuous and discrete environmental variables make traditional statistical (multivariate regression) and geostatistical (kriging) models inefficient. This calls into question some traditional underlying assumptions about spatial trends that are uncritically accepted for forest data. To address the controversy surrounding the suitability of forest data, regression tree analysis is performed using the software See5 and Cubist. Four publicly available data sets were obtained: first, field-based Forest Inventory and Analysis (USDA
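
    A minimal sketch of a regression tree relating forest NPP to site and climate variables. Synthetic data stand in for the FIA/MODIS variables, and scikit-learn is used instead of the See5/Cubist software named in the abstract; the point is the non-linear, rule-based structure of the fit.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor, export_text

      rng = np.random.default_rng(8)
      n = 500
      temp = rng.uniform(5, 25, n)                   # mean annual temperature (C)
      precip = rng.uniform(600, 1800, n)             # annual precipitation (mm)
      elev = rng.uniform(0, 1500, n)                 # elevation (m)
      # Non-linear "NPP": increases with warmth and moisture, drops at high elevation.
      npp = (300 + 20 * temp + 0.2 * precip - 0.1 * elev
             - 15 * np.maximum(temp - 20, 0) ** 2 + rng.normal(0, 50, n))

      X = np.column_stack([temp, precip, elev])
      tree = DecisionTreeRegressor(max_depth=3).fit(X, npp)
      print(export_text(tree, feature_names=["temp", "precip", "elev"]))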

  19. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects

    NASA Astrophysics Data System (ADS)

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-01

    Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD (dubbed improved SPREAD, or iSPREAD) method by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method has been improved substantially by adapting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.
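
    A minimal 2D Perona-Malik sketch of the nonlinear anisotropic diffusion filtering that iSPREAD adds for edge-preserving smoothing; the paper applies a 3D variant to DTI-derived maps, and the parameters and test image here are invented.

      import numpy as np

      def perona_malik(img, n_iter=20, kappa=0.5, dt=0.2):
          """Edge-preserving smoothing by nonlinear (Perona-Malik) diffusion."""
          u = img.astype(float).copy()
          for _ in range(n_iter):
              # Finite differences to the four neighbours (periodic boundaries via
              # np.roll, which is adequate for this illustration).
              dn = np.roll(u, 1, axis=0) - u
              ds = np.roll(u, -1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              # Edge-stopping conductance: small where gradients are large.
              g = lambda d: np.exp(-(d / kappa) ** 2)
              u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
          return u

      rng = np.random.default_rng(9)
      img = np.zeros((64, 64))
      img[:, 32:] = 1.0                              # a sharp edge to preserve
      noisy = img + rng.normal(0, 0.2, img.shape)
      smoothed = perona_malik(noisy)
      print("noise std in flat region:", noisy[:, :20].std().round(3),
            "->", smoothed[:, :20].std().round(3))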

  20. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects.

    PubMed

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-21

    Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD (dubbed improved SPREAD, or iSPREAD) method by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method has been improved substantially by adapting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI. PMID:26948513