Sample records for adaptive regression modeling

  1. Bayesian Adaptive Lasso for Ordinal Regression with Latent Variables

    ERIC Educational Resources Information Center

    Feng, Xiang-Nan; Wu, Hao-Tian; Song, Xin-Yuan

    2017-01-01

    We consider an ordinal regression model with latent variables to investigate the effects of observable and latent explanatory variables on the ordinal responses of interest. Each latent variable is characterized by correlated observed variables through a confirmatory factor analysis model. We develop a Bayesian adaptive lasso procedure to conduct…
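
    The Bayesian adaptive lasso above places coefficient-specific shrinkage priors on the regression parameters. A rough frequentist analogue is the two-step adaptive lasso, in which a pilot fit supplies coefficient-specific penalty weights; the sketch below with scikit-learn is only illustrative (the ridge pilot, penalty values, and variable names are assumptions, not the authors' Bayesian ordinal latent-variable procedure).

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])
y = X @ beta + rng.normal(scale=0.5, size=200)

# Step 1: pilot estimate (ridge) gives adaptive weights w_j = 1 / |beta_j|
pilot = Ridge(alpha=1.0).fit(X, y)
w = 1.0 / (np.abs(pilot.coef_) + 1e-6)

# Step 2: rescale columns by 1/w_j, run an ordinary lasso, then rescale back.
# Up to sklearn's 1/(2n) factor, this minimizes ||y - Xb||^2 + alpha * sum_j w_j |b_j|.
X_scaled = X / w
fit = Lasso(alpha=0.1).fit(X_scaled, y)
beta_hat = fit.coef_ / w
print(np.round(beta_hat, 2))
```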

  2. Validation of cross-sectional time series and multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents using doubly labeled water

    USDA-ARS's Scientific Manuscript database

    Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant cha...

  3. Newer classification and regression tree techniques: Bagging and Random Forests for ecological prediction

    Treesearch

    Anantha M. Prasad; Louis R. Iverson; Andy Liaw

    2006-01-01

    We evaluated four statistical models - Regression Tree Analysis (RTA), Bagging Trees (BT), Random Forests (RF), and Multivariate Adaptive Regression Splines (MARS) - for predictive vegetation mapping under current and future climate scenarios according to the Canadian Climate Centre global circulation model.
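
    For readers who want to reproduce a comparison of this kind on their own data, single regression trees, bagging, and random forests are all available in scikit-learn; the sketch below uses synthetic data and illustrative names (MARS would require a separate package and is omitted here).

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 6))                 # stand-in for climate predictors
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

models = {
    "RTA (single tree)": DecisionTreeRegressor(max_depth=5),
    "Bagging trees": BaggingRegressor(n_estimators=100, random_state=0),
    "Random forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```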

  4. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices in this study. The MARS model is a nonparametric, adaptive regression method that is well suited to high-dimensional problems with several variables. The semi-parametric technique used here is based on smoothing splines, a nonparametric regression method. In this study, we used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices with the MARS model and with the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock prices with the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.

  5. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When faced with the problem of estimating a regression model for fuzzy inputs that have been derived from different distributions, the resulting model is termed the 'switching regression model'. Here l_i indicates the class number of each independent variable and p denotes the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by combining the models obtained for each class. Some methods suggest the class numbers of the independent variables heuristically; alternatively, a validity criterion for fuzzy clustering can be used to define the optimal class number of the independent variables, which is the approach aimed at here. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after an optimal membership function, suitable for the exponential distribution, has been obtained.

  6. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail

    2015-04-01

    The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied for feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN with their ability to select features and efficient modelling of complex high dimensional data can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press. With a CD: data, software, guides. (2009). 2. Kanevski M. Spatial Predictions of Soil Contamination Using General Regression Neural Networks. Systems Research and Information Systems, Volume 8, number 4, 1999. 3. Robert S., Foresti L., Kanevski M. Spatial prediction of monthly wind speeds in complex terrain with adaptive general regression neural networks. International Journal of Climatology, 33 pp. 1793-1804, 2013.
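
    A GRNN prediction is essentially a Nadaraya-Watson kernel regression with Gaussian kernels, where the bandwidth (sigma) is tuned, for example by leave-one-out cross-validation, and anisotropic variants use one bandwidth per input dimension. A minimal isotropic NumPy sketch follows; it is illustrative only and does not reproduce the adaptive, anisotropic GRNN of the abstract.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """Nadaraya-Watson / GRNN prediction with an isotropic Gaussian kernel."""
    # Squared Euclidean distances between query and training points
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
    return (w @ y_train) / w.sum(axis=1)          # weighted average of targets

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1]) + rng.normal(scale=0.05, size=300)
Xq = rng.uniform(-1, 1, size=(5, 2))
print(grnn_predict(X, y, Xq))
```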

  7. Automated time series forecasting for biosurveillance.

    PubMed

    Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit

    2007-09-30

    For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
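
    The Holt-Winters forecasts discussed above can be reproduced with statsmodels' exponential smoothing. The sketch below uses a synthetic daily series with a weekly cycle; the additive components and the seasonal period of 7 are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(3)
n = 16 * 7
t = np.arange(n)
counts = 50 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(scale=3, size=n)
series = pd.Series(counts, index=pd.date_range("2006-01-01", periods=n, freq="D"))

# Additive trend and additive day-of-week seasonality
fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                           seasonal_periods=7).fit()
forecast = fit.forecast(7)                 # one week ahead
residuals = series - fit.fittedvalues      # algorithmic input after detrending
print(forecast.round(1))
```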

  8. Mixed geographically weighted regression (MGWR) model with weighted adaptive bi-square for case of dengue hemorrhagic fever (DHF) in Surakarta

    NASA Astrophysics Data System (ADS)

    Astuti, H. N.; Saputro, D. R. S.; Susanti, Y.

    2017-06-01

    The MGWR model is a combination of the linear regression model and the geographically weighted regression (GWR) model; it can therefore produce parameter estimates that are partly global and partly local, in accordance with the observation locations. The linkage between the locations of the observations is expressed through a specific weighting, here the adaptive bi-square. In this research, we applied the MGWR model with adaptive bi-square weighting to the case of DHF in Surakarta based on 10 factors (variables) assumed to influence the number of people with DHF. The observation units are 51 urban villages and the variables are the number of inhabitants, number of houses, house index, number of public places, number of healthy homes, number of Posyandu, area width, population density level, family welfare, and region elevation. Based on this research, we obtained 51 MGWR models. The models fall into 4 groups; the house index is significant as a global variable, area width as a local variable, and the remaining significant variables vary by location. Global variables are variables that significantly affect all locations, while local variables are variables that significantly affect a specific location.
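
    The adaptive bi-square weighting gives each regression point a kernel whose bandwidth is the distance to its k-th nearest neighbour, so dense areas get narrow kernels and sparse areas wide ones. A small NumPy sketch of the weight computation and one local weighted least-squares fit follows (variable names and the choice of k are illustrative; the full GWR/MGWR calibration is not reproduced).

```python
import numpy as np

def adaptive_bisquare_weights(coords, i, k=10):
    """Bi-square weights at location i with an adaptive (k-nearest-neighbour) bandwidth."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    bandwidth = np.sort(d)[k]                    # distance to the k-th neighbour
    return np.where(d < bandwidth, (1.0 - (d / bandwidth) ** 2) ** 2, 0.0)

def gwr_local_fit(X, y, coords, i, k=10):
    """Weighted least squares at observation i (one local GWR-style fit)."""
    w = adaptive_bisquare_weights(coords, i, k)
    Xw = X * w[:, None]
    beta_i, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)
    return beta_i

rng = np.random.default_rng(4)
coords = rng.uniform(size=(51, 2))               # e.g. 51 urban-village centroids
X = np.column_stack([np.ones(51), rng.normal(size=(51, 3))])
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(scale=0.1, size=51)
print(gwr_local_fit(X, y, coords, i=0))
```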

  9. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

    There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.

  10. PM10 modeling in the Oviedo urban area (Northern Spain) by using multivariate adaptive regression splines

    NASA Astrophysics Data System (ADS)

    Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza

    2014-10-01

    The aim of this research work is to build a regression model of particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at local scale. MARS is a nonparametric regression algorithm that is able to approximate the relationship between inputs and outputs and to express that relationship mathematically. In this sense, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, an experimental dataset of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) was collected over 3 years (2006-2008) and used to create a highly nonlinear model of PM10 in the Oviedo urban nucleus (Northern Spain) based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence between the PM10 pollutant and the other pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish limit values for the main pollutants in the atmosphere in order to protect human health. Firstly, this MARS regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of these numerical calculations using the MARS technique, the conclusions of this research work are presented.
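
    MARS builds the regression surface from hinge (piecewise-linear) basis functions such as max(0, x - t) and max(0, t - x), choosing knots and pruning terms adaptively. The fully adaptive forward/backward MARS algorithm requires a dedicated package (for example the third-party py-earth), but the flavour of the model can be shown with a fixed set of hinge bases fitted by ordinary least squares; the sketch below is self-contained and the knots, predictor, and data are purely illustrative.

```python
import numpy as np

def hinge(x, knot, direction):
    """MARS hinge basis: max(0, x - knot) if direction=+1, max(0, knot - x) if -1."""
    return np.maximum(0.0, direction * (x - knot))

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, size=400)                 # e.g. a single pollutant predictor
y = np.where(x < 4, 2 * x, 8 + 0.5 * (x - 4)) + rng.normal(scale=0.5, size=400)

# Fixed candidate knots; real MARS selects knots and prunes terms adaptively
knots = [2.0, 4.0, 6.0, 8.0]
B = np.column_stack([np.ones_like(x)] +
                    [hinge(x, t, s) for t in knots for s in (+1, -1)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
y_hat = B @ coef
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)).round(3))
```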

  11. On the interest of combining an analog model to a regression model for the adaptation of the downscaling link. Application to probabilistic prediction of precipitation over France.

    NASA Astrophysics Data System (ADS)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2016-04-01

    Scenarios of surface weather required for impact studies have to be unbiased and adapted to the space and time scales of the considered hydro-systems. Hence, surface weather scenarios obtained from global climate models and/or numerical weather prediction models are not really appropriate. Outputs of these models have to be post-processed, which is often carried out with Statistical Downscaling Methods (SDMs). Among those SDMs, approaches based on regression are often applied. For a given station, a regression link can be established between a set of large scale atmospheric predictors and the surface weather variable. These links are then used for the prediction of the latter. However, physical processes generating surface weather vary in time. This is well known for precipitation, for instance. The most relevant predictors and the regression link are also likely to vary in time. A better prediction skill is thus classically obtained with a seasonal stratification of the data. Another strategy is to identify the most relevant predictor set and establish the regression link from dates that are similar - or analog - to the target date. In practice, these dates can be selected thanks to an analog model. In this study, we explore the possibility of improving the local performance of an analog model - where the analogy is applied to the geopotential heights at 1000 and 500 hPa - using additional local scale predictors for the probabilistic prediction of the Safran precipitation over France. For each prediction day, the prediction is obtained from two GLM regression models - for both the occurrence and the quantity of precipitation - for which predictors and parameters are estimated from the analog dates. Firstly, the resulting combined model noticeably increases the prediction performance by adapting the downscaling link for each prediction day. Secondly, the selected predictors for a given prediction depend on the large scale situation and on the considered region. Finally, even with such an adaptive predictor identification, the downscaling link appears to be robust: for the same prediction day, predictors selected for different locations of a given region are similar and the regression parameters are consistent within the region of interest.

  12. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
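
    The variability the authors describe is easy to see by repeating m-fold cross-validated penalized fits with different fold splits and counting the selected variables each time. The sketch below uses an ordinary lasso via scikit-learn's LassoCV as a stand-in (SCAD and the adaptive lasso would need the corresponding weighted or non-convex solvers); the data dimensions and signal sizes are illustrative of the "sparse but weak" SNP setting.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n, p = 100, 200                                  # SNP-like: sparse, weak signals
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 0.3                                   # small true signals
y = X @ beta + rng.normal(size=n)

n_selected = []
for seed in range(20):                           # 20 independent 10-fold CV runs
    folds = KFold(n_splits=10, shuffle=True, random_state=seed)
    model = LassoCV(cv=folds).fit(X, y)
    n_selected.append(int(np.sum(model.coef_ != 0)))

print("selected-variable counts across 20 CV runs:", n_selected)
```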

  13. Domain-Invariant Partial-Least-Squares Regression.

    PubMed

    Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne

    2018-05-11

    Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.

  14. An adaptive two-stage analog/regression model for probabilistic prediction of small-scale precipitation in France

    NASA Astrophysics Data System (ADS)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2018-01-01

    Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions which are based on a statistical link identified from observations between local weather and a set of large-scale predictors. As physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation for instance and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the value of the corresponding regression coefficients can vary from one prediction day to another. The model allows thus for a day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for peculiar and non-frequent weather configurations.
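
    The regression stage described above can be sketched with two generalized linear models from statsmodels: a binomial GLM for precipitation occurrence and a gamma GLM for non-zero amounts, both fitted only on the analog days of the current prediction day. In the sketch below the analog-selection step is a placeholder (the real model identifies analogs from 1000 and 500 hPa geopotential fields) and all variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
predictors = rng.normal(size=(n, 3))             # stand-in large-scale predictors
occ = rng.binomial(1, p=1 / (1 + np.exp(-(0.5 * predictors[:, 0] - 0.2))))
amount = np.where(occ == 1, rng.gamma(shape=2.0, scale=2.0, size=n), 0.0)

# Stage 1 (placeholder): indices of the analog days of the target day
analog_idx = rng.choice(n, size=200, replace=False)

X = sm.add_constant(predictors[analog_idx])
# Stage 2a: occurrence model (binomial GLM, logit link)
occ_fit = sm.GLM(occ[analog_idx], X, family=sm.families.Binomial()).fit()
# Stage 2b: amount model (gamma GLM, log link) on wet analog days only
wet = occ[analog_idx] == 1
amt_fit = sm.GLM(amount[analog_idx][wet], X[wet],
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(occ_fit.params.round(2), amt_fit.params.round(2))
```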

  15. Discriminating between adaptive and carcinogenic liver hypertrophy in rat studies using logistic ridge regression analysis of toxicogenomic data: The mode of action and predictive models.

    PubMed

    Liu, Shujie; Kawamoto, Taisuke; Morita, Osamu; Yoshinari, Kouichi; Honda, Hiroshi

    2017-03-01

    Chemical exposure often results in liver hypertrophy in animal tests, characterized by increased liver weight, hepatocellular hypertrophy, and/or cell proliferation. While most of these changes are considered adaptive responses, there is concern that they may be associated with carcinogenesis. In this study, we have employed a toxicogenomic approach using a logistic ridge regression model to identify genes responsible for liver hypertrophy and hypertrophic hepatocarcinogenesis and to develop a predictive model for assessing hypertrophy-inducing compounds. Logistic regression models have previously been used in the quantification of epidemiological risk factors. DNA microarray data from the Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System were used to identify hypertrophy-related genes that are expressed differently in hypertrophy induced by carcinogens and non-carcinogens. Data were collected for 134 chemicals (72 non-hypertrophy-inducing chemicals, 27 hypertrophy-inducing non-carcinogenic chemicals, and 15 hypertrophy-inducing carcinogenic compounds). After applying logistic ridge regression analysis, 35 genes for liver hypertrophy (e.g., Acot1 and Abcc3) and 13 genes for hypertrophic hepatocarcinogenesis (e.g., Asns and Gpx2) were selected. The predictive models built using these genes were 94.8% and 82.7% accurate, respectively. Pathway analysis of the genes indicates that, aside from a xenobiotic metabolism-related pathway as an adaptive response for liver hypertrophy, amino acid biosynthesis and oxidative responses appear to be involved in hypertrophic hepatocarcinogenesis. Early detection and toxicogenomic characterization of liver hypertrophy using our models may be useful for predicting carcinogenesis. In addition, the identified genes provide novel insight into discrimination between adverse hypertrophy associated with carcinogenesis and adaptive hypertrophy in risk assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
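
    The classifier described here is a logistic regression with an L2 (ridge) penalty over many gene-expression features. A minimal, hedged sketch with scikit-learn on synthetic data follows; the feature matrix, labels, and regularization strength are illustrative and are not the Toxicogenomics Project data or the authors' fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_chemicals, n_genes = 134, 500
X = rng.normal(size=(n_chemicals, n_genes))       # stand-in expression values
y = rng.integers(0, 2, size=n_chemicals)          # 1 = hypertrophy-inducing (illustrative)

# Ridge-penalized (L2) logistic regression; C is the inverse penalty strength
clf = LogisticRegression(penalty="l2", C=0.1, max_iter=5000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()

# Genes with the largest absolute coefficients are candidate markers
clf.fit(X, y)
top_genes = np.argsort(np.abs(clf.coef_[0]))[::-1][:10]
print(f"CV accuracy: {acc:.2f}; top coefficient indices: {top_genes}")
```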

  16. Multivariate adaptive regression splines analysis to predict biomarkers of spontaneous preterm birth.

    PubMed

    Menon, Ramkumar; Bhat, Geeta; Saade, George R; Spratt, Heidi

    2014-04-01

    To develop classification models of demographic/clinical factors and biomarker data from spontaneous preterm birth in African Americans and Caucasians. Secondary analysis of biomarker data using multivariate adaptive regression splines (MARS), a supervised machine learning algorithm method. Analysis of data on 36 biomarkers from 191 women was reduced by MARS to develop predictive models for preterm birth in African Americans and Caucasians. Maternal plasma, cord plasma collected at admission for preterm or term labor and amniotic fluid at delivery. Data were partitioned into training and testing sets. Variable importance, a relative indicator (0-100%) and area under the receiver operating characteristic curve (AUC) characterized results. Multivariate adaptive regression splines generated models for combined and racially stratified biomarker data. Clinical and demographic data did not contribute to the model. Racial stratification of data produced distinct models in all three compartments. In African Americans maternal plasma samples IL-1RA, TNF-α, angiopoietin 2, TNFRI, IL-5, MIP1α, IL-1β and TGF-α modeled preterm birth (AUC train: 0.98, AUC test: 0.86). In Caucasians TNFR1, ICAM-1 and IL-1RA contributed to the model (AUC train: 0.84, AUC test: 0.68). African Americans cord plasma samples produced IL-12P70, IL-8 (AUC train: 0.82, AUC test: 0.66). Cord plasma in Caucasians modeled IGFII, PDGFBB, TGF-β1 , IL-12P70, and TIMP1 (AUC train: 0.99, AUC test: 0.82). Amniotic fluid in African Americans modeled FasL, TNFRII, RANTES, KGF, IGFI (AUC train: 0.95, AUC test: 0.89) and in Caucasians, TNF-α, MCP3, TGF-β3 , TNFR1 and angiopoietin 2 (AUC train: 0.94 AUC test: 0.79). Multivariate adaptive regression splines models multiple biomarkers associated with preterm birth and demonstrated racial disparity. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.

  17. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
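
    The third adaptivity level, stepwise regression over candidate polynomial terms, can be sketched with a simple greedy forward selection that at each step adds the basis function most correlated with the current residual and refits by least squares. The sketch below uses univariate Legendre polynomials as stand-ins for the PDD basis and only illustrates the selection loop, not the full PDD/ANOVA machinery; all names and sizes are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(9)
n, d = 300, 4
X = rng.uniform(-1, 1, size=(n, d))              # input random variables on [-1, 1]
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.01, size=n)

# Candidate univariate basis functions: Legendre P_k(x_j), k = 1..3, j = 1..d
candidates = [(j, k) for j in range(d) for k in range(1, 4)]
def basis(j, k):
    return legendre.legval(X[:, j], [0] * k + [1])

selected, design = [], [np.ones(n)]
residual = y.copy()
for _ in range(5):                               # keep at most 5 terms (sketch)
    scores = [abs(basis(j, k) @ residual) for (j, k) in candidates]
    j, k = candidates[int(np.argmax(scores))]
    candidates.remove((j, k))
    selected.append((j, k))
    design.append(basis(j, k))
    A = np.column_stack(design)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ coef

print("selected (variable, degree) terms:", selected)
```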

  18. GLOBALLY ADAPTIVE QUANTILE REGRESSION WITH ULTRA-HIGH DIMENSIONAL DATA

    PubMed Central

    Zheng, Qi; Peng, Limin; He, Xuming

    2015-01-01

    Quantile regression has become a valuable tool to analyze heterogeneous covariate-response associations that are often encountered in practice. The development of quantile regression methodology for high dimensional covariates primarily focuses on examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantile levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal. PMID:26604424
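
    Penalized quantile regression at a given quantile level can be sketched with scikit-learn's QuantileRegressor, which minimizes the check loss with an L1 penalty. Fitting a grid of quantile levels gives a rough feel for how the selected variables change across levels; the uniform tuning-parameter selector proposed in the paper is not reproduced here, and the data and penalty value are illustrative.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(10)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -0.8, 0.5]
y = X @ beta + rng.standard_t(df=3, size=n)      # heavy-tailed noise

for tau in (0.25, 0.5, 0.75):
    fit = QuantileRegressor(quantile=tau, alpha=0.05, solver="highs").fit(X, y)
    active = np.flatnonzero(fit.coef_)
    print(f"tau={tau}: {active.size} active coefficients -> {active[:10]}")
```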

  19. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100

  20. An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.

    ERIC Educational Resources Information Center

    Bockenholt, Ulf

    1999-01-01

    Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining INAR(1) representation with a random effects approach.…

  1. Discriminating between adaptive and carcinogenic liver hypertrophy in rat studies using logistic ridge regression analysis of toxicogenomic data: The mode of action and predictive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Shujie; Kawamoto, Taisuke; Morita, Osamu

    Chemical exposure often results in liver hypertrophy in animal tests, characterized by increased liver weight, hepatocellular hypertrophy, and/or cell proliferation. While most of these changes are considered adaptive responses, there is concern that they may be associated with carcinogenesis. In this study, we have employed a toxicogenomic approach using a logistic ridge regression model to identify genes responsible for liver hypertrophy and hypertrophic hepatocarcinogenesis and to develop a predictive model for assessing hypertrophy-inducing compounds. Logistic regression models have previously been used in the quantification of epidemiological risk factors. DNA microarray data from the Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System were used to identify hypertrophy-related genes that are expressed differently in hypertrophy induced by carcinogens and non-carcinogens. Data were collected for 134 chemicals (72 non-hypertrophy-inducing chemicals, 27 hypertrophy-inducing non-carcinogenic chemicals, and 15 hypertrophy-inducing carcinogenic compounds). After applying logistic ridge regression analysis, 35 genes for liver hypertrophy (e.g., Acot1 and Abcc3) and 13 genes for hypertrophic hepatocarcinogenesis (e.g., Asns and Gpx2) were selected. The predictive models built using these genes were 94.8% and 82.7% accurate, respectively. Pathway analysis of the genes indicates that, aside from a xenobiotic metabolism-related pathway as an adaptive response for liver hypertrophy, amino acid biosynthesis and oxidative responses appear to be involved in hypertrophic hepatocarcinogenesis. Early detection and toxicogenomic characterization of liver hypertrophy using our models may be useful for predicting carcinogenesis. In addition, the identified genes provide novel insight into discrimination between adverse hypertrophy associated with carcinogenesis and adaptive hypertrophy in risk assessment. - Highlights: • Hypertrophy (H) and hypertrophic carcinogenesis (C) were studied by toxicogenomics. • Important genes for H and C were selected by logistic ridge regression analysis. • Amino acid biosynthesis and oxidative responses may be involved in C. • Predictive models for H and C provided 94.8% and 82.7% accuracy, respectively. • The identified genes could be useful for assessment of liver hypertrophy.

  2. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720

  3. Evaluation of logistic regression models and effect of covariates for case-control study in RNA-Seq analysis.

    PubMed

    Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L

    2017-02-06

    Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.

  4. Pointwise influence matrices for functional-response regression.

    PubMed

    Reiss, Philip T; Huang, Lei; Wu, Pei-Shien; Chen, Huaihou; Colcombe, Stan

    2017-12-01

    We extend the notion of an influence or hat matrix to regression with functional responses and scalar predictors. For responses depending linearly on a set of predictors, our definition is shown to reduce to the conventional influence matrix for linear models. The pointwise degrees of freedom, the trace of the pointwise influence matrix, are shown to have an adaptivity property that motivates a two-step bivariate smoother for modeling nonlinear dependence on a single predictor. This procedure adapts to varying complexity of the nonlinear model at different locations along the function, and thereby achieves better performance than competing tensor product smoothers in an analysis of the development of white matter microstructure in the brain. © 2017, The International Biometric Society.
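
    For an ordinary linear model the influence (hat) matrix and its trace, the degrees of freedom, follow directly from the design matrix; this is the object the authors extend pointwise along the functional response. A small NumPy reminder of the scalar-response case follows (illustrative only, with made-up data).

```python
import numpy as np

rng = np.random.default_rng(11)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ np.array([1.0, 0.5, -0.2, 0.3]) + rng.normal(scale=0.1, size=n)

# Hat (influence) matrix H maps observed y to fitted values: y_hat = H y
H = X @ np.linalg.solve(X.T @ X, X.T)
y_hat = H @ y
df = np.trace(H)      # degrees of freedom = number of design-matrix columns for OLS
print(df, np.allclose(y_hat, X @ np.linalg.lstsq(X, y, rcond=None)[0]))
```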

  5. Analysis on the adaptive countermeasures to ecological management under changing environment in the Tarim River Basin, China

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Xue, Lianqing; Zhang, Luochen; Chen, Xinfang; Chi, Yixia

    2017-12-01

    This article aims to explore adaptive utilization strategies for the flow regime versus traditional practices in the context of climate change and human activities in an arid area. The study presents a quantitative analysis of climatic and anthropogenic contributions to streamflow alteration in the Tarim River Basin (TRB) using the Budyko method, and develops adaptive utilization strategies for the eco-hydrological regime by comparing the applicability of the autoregressive moving average (ARMA) model and a combined regression model. Our results suggest that human activities played a dominant role in the streamflow reduction in the mainstream, with a contribution of 120.7%~190.1%, while in the headstreams climatic variables were the primary determinant, accounting for 56.5%~152.6% of the increase in streamflow. The comparison revealed that the combined regression model performed better than the ARMA model, with a qualified rate of 80.49%~90.24%. Based on the forecasts of streamflow for different purposes, an adaptive utilization scheme for water flow is established from the perspective of time and space. Our study presents an effective water resources scheduling scheme for the ecological environment and provides references for ecological protection and water allocation in the arid area.

  6. On the Latent Regression Model of Item Response Theory. Research Report. ETS RR-07-12

    ERIC Educational Resources Information Center

    Antal, Tamás

    2007-01-01

    A full account of the latent regression model for the National Assessment of Educational Progress is given. The treatment includes derivation of the EM algorithm, the Newton-Raphson method, and the asymptotic standard errors. The paper also features the use of the adaptive Gauss-Hermite numerical integration method as a basic tool to evaluate…

  7. Prediction models for clustered data: comparison of a random intercept and standard regression model

    PubMed Central

    2013-01-01

    Background: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods: Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results: The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436

  8. Prediction models for clustered data: comparison of a random intercept and standard regression model.

    PubMed

    Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne

    2013-02-15

    When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.

  9. Human Language Technology: Opportunities and Challenges

    DTIC Science & Technology

    2005-01-01

    because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ...to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using...maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with

  10. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and Analysis-of-Variance, PDD is able to provide simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. In order to address this problem of the curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse-PDD with PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD only contains few terms, so that the cost to resolve repeatedly the linear system of the least-square regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  11. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    NASA Astrophysics Data System (ADS)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
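
    The core of the reweighting idea can be shown in a few lines: for independent scaled t errors, the weight of each observation in the M-step is (ν+1)/(ν + squared standardized residual), and the regression coefficients are re-estimated by weighted least squares until convergence. The NumPy sketch below is a simplified version without the AR error model and with the degree of freedom ν held fixed; both are estimated in the paper's full ECME algorithm, and the Fourier design and data are illustrative.

```python
import numpy as np

def irls_t_regression(X, y, nu=4.0, n_iter=50):
    """IRLS for linear regression with scaled t-distributed errors (nu held fixed)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary LS start
    sigma2 = np.var(y - X @ beta)
    for _ in range(n_iter):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)    # downweights outlying residuals
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y) # weighted least squares
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / len(y)
    return beta, sigma2

rng = np.random.default_rng(12)
t = np.linspace(0, 1, 500)
X = np.column_stack([np.ones_like(t), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
y = X @ np.array([0.5, 1.0, -0.7]) + 0.05 * rng.standard_t(df=3, size=t.size)
y[::50] += 2.0                                     # a few gross outliers
print(irls_t_regression(X, y)[0].round(3))
```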

  12. A New Predictive Model of Centerline Segregation in Continuous Cast Steel Slabs by Using Multivariate Adaptive Regression Splines Approach

    PubMed Central

    García Nieto, Paulino José; González Suárez, Victor Manuel; Álvarez Antón, Juan Carlos; Mayo Bayón, Ricardo; Sirgo Blanco, José Ángel; Díaz Fernández, Ana María

    2015-01-01

    The aim of this study was to obtain a predictive model able to perform an early detection of central segregation severity in continuous cast steel slabs. Segregation in steel cast products is an internal defect that can be very harmful when slabs are rolled in heavy plate mills. In this research work, the central segregation was studied with success using the data mining methodology based on multivariate adaptive regression splines (MARS) technique. For this purpose, the most important physical-chemical parameters are considered. The results of the present study are two-fold. In the first place, the significance of each physical-chemical variable on the segregation is presented through the model. Second, a model for forecasting segregation is obtained. Regression with optimal hyperparameters was performed and coefficients of determination equal to 0.93 for continuity factor estimation and 0.95 for average width were obtained when the MARS technique was applied to the experimental dataset, respectively. The agreement between experimental data and the model confirmed the good performance of the latter.

  13. Pan evaporation modeling using six different heuristic computing methods in different climates of China

    NASA Astrophysics Data System (ADS)

    Wang, Lunche; Kisi, Ozgur; Zounemat-Kermani, Mohammad; Li, Hui

    2017-01-01

    Pan evaporation (Ep) plays important roles in agricultural water resources management. One of the basic challenges is modeling Ep using limited climatic parameters because there are a number of factors affecting the evaporation rate. This study investigated the abilities of six different soft computing methods, multi-layer perceptron (MLP), generalized regression neural network (GRNN), fuzzy genetic (FG), least square support vector machine (LSSVM), multivariate adaptive regression spline (MARS), adaptive neuro-fuzzy inference systems with grid partition (ANFIS-GP), and two regression methods, multiple linear regression (MLR) and Stephens and Stewart model (SS) in predicting monthly Ep. Long-term climatic data at various sites crossing a wide range of climates during 1961-2000 are used for model development and validation. The results showed that the models have different accuracies in different climates and the MLP model performed superior to the other models in predicting monthly Ep at most stations using local input combinations (for example, the MAE (mean absolute errors), RMSE (root mean square errors), and determination coefficient (R2) are 0.314 mm/day, 0.405 mm/day and 0.988, respectively for HEB station), while GRNN model performed better in Tibetan Plateau (MAE, RMSE and R2 are 0.459 mm/day, 0.592 mm/day and 0.932, respectively). The accuracies of above models ranked as: MLP, GRNN, LSSVM, FG, ANFIS-GP, MARS and MLR. The overall results indicated that the soft computing techniques generally performed better than the regression methods, but MLR and SS models can be more preferred at some climatic zones instead of complex nonlinear models, for example, the BJ (Beijing), CQ (Chongqing) and HK (Haikou) stations. Therefore, it can be concluded that Ep could be successfully predicted using above models in hydrological modeling studies.

  14. RRegrs: an R package for computer-aided model selection with multiple regression models.

    PubMed

    Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L

    2015-01-01

    Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models and therefore raise model reproducibility and comparison issues. Cheminformatics and bioinformatics make extensive use of predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance as well as its adaptability in terms of parameter optimization could make RRegrs a popular framework to assist the initial exploration of predictive models and, with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun, E-mail: ktg@illinois.edu; Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
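    The stepwise-regression level of adaptivity can be illustrated with a short sketch: candidate polynomial terms are added greedily, keeping only those that most reduce the least-squares residual, so the surrogate stays sparse. This is a simplified forward selection on a monomial basis (illustrative Python, not the authors' PDD implementation).

      # Sketch: greedy forward selection of polynomial terms for a sparse surrogate
      # (simplified illustration of the stepwise-regression idea, not the PDD code).
      import numpy as np
      from itertools import combinations_with_replacement

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(300, 4))             # four input random variables
      y = (1.0 + 2 * X[:, 0] - 3 * X[:, 1] * X[:, 2] + 0.5 * X[:, 3] ** 2
           + 0.05 * rng.standard_normal(300))

      def monomial(X, idx):
          # product of the selected input columns (empty tuple -> constant term)
          out = np.ones(X.shape[0])
          for j in idx:
              out = out * X[:, j]
          return out

      # candidate basis: all monomials up to total degree 2
      candidates = [()]
      for deg in (1, 2):
          candidates += list(combinations_with_replacement(range(X.shape[1]), deg))

      selected = []
      for _ in range(6):                                # keep at most six terms
          sse = []
          for c in candidates:
              B = np.column_stack([monomial(X, t) for t in selected + [c]])
              coef, *_ = np.linalg.lstsq(B, y, rcond=None)
              sse.append(np.sum((y - B @ coef) ** 2))
          best = candidates[int(np.argmin(sse))]        # term giving the largest SSE drop
          selected.append(best)
          candidates.remove(best)

      print("retained terms:", selected)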

  16. FPGA implementation of predictive degradation model for engine oil lifetime

    NASA Astrophysics Data System (ADS)

    Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.

    2018-03-01

    This paper presents the implementation of a linear regression model for degradation prediction on Register Transfer Logic (RTL) using Quartus II. A stationary model was identified in the degradation trend of the engine oil in a vehicle using a time series method. For the RTL implementation, the degradation model is written in Verilog HDL and the input data are sampled at fixed times. A clock divider was designed to support the timing sequence of the input data. For every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Only negative slope values are considered for prediction purposes, which reduces the number of logic gates. The least-squares method is used to obtain the best linear model based on the mean values of the time series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result shows the predicted time to change the engine oil.
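    Before committing such logic to RTL, the per-window arithmetic can be checked in software. The sketch below (Python, with made-up data and a hypothetical degradation threshold) fits a least-squares line to every block of five samples, keeps only negative slopes, and extrapolates the crossing time.

      # Sketch: least-squares slope over blocks of five samples, as a software
      # reference for the RTL design (data and threshold are illustrative only).
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(25, dtype=float)                           # sample times
      quality = 100.0 - 0.8 * t + rng.normal(0, 0.5, t.size)   # degrading signal
      THRESHOLD = 70.0                                         # hypothetical "change oil" level

      for start in range(0, t.size - 4, 5):                    # every five data points
          tw, yw = t[start:start + 5], quality[start:start + 5]
          slope, intercept = np.polyfit(tw, yw, 1)             # least-squares line
          if slope < 0:                                        # only negative slopes are used
              t_cross = (THRESHOLD - intercept) / slope
              print(f"window at t={tw[0]:.0f}: slope={slope:.2f}, "
                    f"predicted threshold crossing at t={t_cross:.1f}")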

  17. A comparison of adaptive sampling designs and binary spatial models: A simulation study using a census of Bromus inermis

    USGS Publications Warehouse

    Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa

    2013-01-01

    Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9 % prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.

  18. Modelling daily dissolved oxygen concentration using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    NASA Astrophysics Data System (ADS)

    Heddam, Salim; Kisi, Ozgur

    2018-04-01

    In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T), are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The selected water quality data consisted of daily measurements of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm) and discharge (DI, cfs), which were used as inputs to the LSSVM, MARS and M5T models. The three models were applied to each station separately and compared to each other. According to the results obtained, it was found that: (i) the DO concentration could be successfully estimated using the three models and (ii) the best model among all others differs from one station to another.

  19. A hybrid neural network model for noisy data regression.

    PubMed

    Lee, Eric W M; Lim, Chee Peng; Yuen, Richard K K; Lo, S M

    2004-04-01

    A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (FA ART) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted as GRNNFA, is able to retain these advantages and, at the same time, to reduce the computational requirements in calculating and storing information of the kernels. A clustering version of the GRNN is designed with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised. Convergence of the gradient descent algorithm can be accelerated by the geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets has been conducted to assess and compare the effectiveness of GRNNFA with that of other approaches. The GRNNFA model is also employed in a novel application task for predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA in noisy data regression problems.
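    The GRNN part of the hybrid reduces to Nadaraya-Watson kernel regression: a prediction is a Gaussian-weighted average of stored training targets, with the kernel width as the only free parameter. A minimal sketch (fixed width, without the FA clustering or the adaptive width optimization described above):

      # Sketch: the GRNN prediction step as Nadaraya-Watson kernel regression
      # (kernel width fixed here; the paper optimizes it adaptively).
      import numpy as np

      def grnn_predict(X_train, y_train, X_query, sigma=0.3):
          # squared Euclidean distances between query and training points
          d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
          w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
          return (w @ y_train) / w.sum(axis=1)          # weighted average of targets

      rng = np.random.default_rng(0)
      X = rng.uniform(0, 1, (100, 1))
      y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(100)
      Xq = np.linspace(0, 1, 5)[:, None]
      print(grnn_predict(X, y, Xq))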

  20. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
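    The quadrature idea behind these likelihood-based procedures can be sketched with ordinary (non-adaptive) Gauss-Hermite quadrature: the cluster-level likelihood of a random-intercept logistic model is integrated over the random effect by a weighted sum over quadrature nodes. A simplified illustration in Python, not the glmer/NLMIXED implementation:

      # Sketch: marginal log-likelihood of a random-intercept logistic model via
      # (non-adaptive) Gauss-Hermite quadrature; simplified relative to glmer/NLMIXED.
      import numpy as np
      from scipy.special import expit

      nodes, weights = np.polynomial.hermite.hermgauss(15)   # 15-point quadrature rule

      def cluster_loglik(y, x, beta0, beta1, sigma_u):
          # integrate over u ~ N(0, sigma_u^2) via the substitution u = sqrt(2)*sigma_u*node
          u = np.sqrt(2.0) * sigma_u * nodes
          eta = beta0 + beta1 * x[:, None] + u[None, :]       # n_obs x n_nodes
          p = expit(eta)
          lik_given_u = np.prod(np.where(y[:, None] == 1, p, 1 - p), axis=0)
          return np.log((weights * lik_given_u).sum() / np.sqrt(np.pi))

      # toy data: one cluster with 5 binary observations and one covariate
      y = np.array([1, 0, 1, 1, 0])
      x = np.array([0.2, -1.0, 0.5, 1.5, -0.3])
      print(cluster_loglik(y, x, beta0=0.1, beta1=0.8, sigma_u=1.0))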

  1. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.

  2. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  3. Estimating suspended sediment load with multivariate adaptive regression spline, teaching-learning based optimization, and artificial bee colony models.

    PubMed

    Yilmaz, Banu; Aras, Egemen; Nacar, Sinan; Kankal, Murat

    2018-05-23

    The functional life of a dam is often determined by the rate of sediment delivery to its reservoir. Therefore, an accurate estimate of the sediment load in rivers with dams is essential for designing and predicting a dam's useful lifespan. The most credible method is direct measurement of sediment input, but this can be very costly and it cannot always be implemented at all gauging stations. In this study, we tested various regression models to estimate suspended sediment load (SSL) at two gauging stations on the Çoruh River in Turkey, including artificial bee colony (ABC), teaching-learning-based optimization algorithm (TLBO), and multivariate adaptive regression splines (MARS). These models were also compared with one another and with classical regression analyses (CRA). Streamflow values and previously collected SSL data were used as model inputs with predicted SSL data as output. Two different training and testing dataset configurations were used to reinforce the model accuracy. For the MARS method, the root mean square error value was found to range between 35% and 39% for the two test gauging stations, which was lower than the errors for the other models. Error values were even lower (7% to 15%) using another dataset. Our results indicate that simultaneous measurements of streamflow with SSL provide the most effective parameter for obtaining accurate predictive models and that MARS is the most accurate model for predicting SSL. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Predicting Quantitative Traits With Regression Models for Dense Molecular Markers and Pedigree

    PubMed Central

    de los Campos, Gustavo; Naya, Hugo; Gianola, Daniel; Crossa, José; Legarra, Andrés; Manfredi, Eduardo; Weigel, Kent; Cotes, José Miguel

    2009-01-01

    The availability of genomewide dense markers brings opportunities and challenges to breeding programs. An important question concerns the ways in which dense markers and pedigrees, together with phenotypic records, should be used to arrive at predictions of genetic values for complex traits. If a large number of markers are included in a regression model, marker-specific shrinkage of regression coefficients may be needed. For this reason, the Bayesian least absolute shrinkage and selection operator (LASSO) (BL) appears to be an interesting approach for fitting marker effects in a regression model. This article adapts the BL to arrive at a regression model where markers, pedigrees, and covariates other than markers are considered jointly. Connections between BL and other marker-based regression models are discussed, and the sensitivity of BL with respect to the choice of prior distributions assigned to key parameters is evaluated using simulation. The proposed model was fitted to two data sets from wheat and mouse populations, and evaluated using cross-validation methods. Results indicate that inclusion of markers in the regression further improved the predictive ability of models. An R program that implements the proposed model is freely available. PMID:19293140
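    A frequentist cousin of this marker-based shrinkage regression is easy to sketch: marker genotypes enter a lasso regression whose penalty is chosen by cross-validation. The snippet below uses simulated genotypes and phenotypes and is only an illustration of the shrinkage idea, not the Bayesian LASSO itself.

      # Sketch: marker-based shrinkage regression with the lasso (a frequentist
      # analogue of the Bayesian LASSO described above); data are simulated.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(42)
      n_lines, n_markers = 300, 1000
      X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)   # 0/1/2 genotypes
      true_effects = np.zeros(n_markers)
      true_effects[rng.choice(n_markers, 20, replace=False)] = rng.normal(0, 0.5, 20)
      y = X @ true_effects + rng.normal(0, 1.0, n_lines)                # phenotype

      model = LassoCV(cv=5, random_state=0).fit(X, y)
      print("non-zero marker effects:", np.sum(model.coef_ != 0))
      print("cross-validated alpha:", model.alpha_)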

  5. A New Approach of Juvenile Age Estimation using Measurements of the Ilium and Multivariate Adaptive Regression Splines (MARS) Models for Better Age Prediction.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal

    2017-01-01

    Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models was tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module and area give overall better and statistically valid age estimates. These models integrate punctual nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase of individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.

  6. [Multivariate Adaptive Regression Splines (MARS), an alternative for the analysis of time series].

    PubMed

    Vanegas, Jairo; Vásquez, Fabián

    Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model, incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: selecting relevant variables, transforming the predictor variables, processing missing values and preventing overfitting using a self-test. It is also able to predict, taking into account structural factors that might influence the outcome variable, thereby generating hypothetical models. The end result can identify relevant cut-off points in data series. It is rarely used in health research, so it is proposed here as a tool for the evaluation of relevant public health indicators. For demonstrative purposes, data series on the mortality of children under 5 years of age in Costa Rica were used, covering the period 1978-2008. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
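    The hinge ("spline") basis functions at the core of MARS are simple to write down: for a knot t, the mirrored pair max(0, x - t) and max(0, t - x) lets the fitted slope change at t. The sketch below builds such a basis at a few fixed candidate knots and fits it by ordinary least squares; the real MARS algorithm additionally searches knots and variables in a forward pass and prunes terms in a backward pass.

      # Sketch: MARS-style hinge basis functions fitted by least squares
      # (illustrative only; the full MARS algorithm adds forward/backward knot selection).
      import numpy as np

      def hinge_basis(x, knots):
          # columns: intercept, then max(0, x - t) and max(0, t - x) for each knot t
          cols = [np.ones_like(x)]
          for t in knots:
              cols.append(np.maximum(0.0, x - t))
              cols.append(np.maximum(0.0, t - x))
          return np.column_stack(cols)

      rng = np.random.default_rng(3)
      x = np.sort(rng.uniform(0, 10, 200))
      y = np.piecewise(x, [x < 4, x >= 4],
                       [lambda v: 2 * v, lambda v: 8 - 0.5 * (v - 4)])
      y = y + rng.normal(0, 0.3, x.size)

      knots = np.quantile(x, [0.25, 0.5, 0.75])        # candidate knots
      B = hinge_basis(x, knots)
      coef, *_ = np.linalg.lstsq(B, y, rcond=None)
      print("fitted coefficients:", np.round(coef, 2))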

  7. The Use of Multiple Regression and Trend Analysis to Understand Enrollment Fluctuations. AIR Forum 1979 Paper.

    ERIC Educational Resources Information Center

    Campbell, S. Duke; Greenberg, Barry

    The development of a predictive equation capable of explaining a significant percentage of enrollment variability at Florida International University is described. A model utilizing trend analysis and a multiple regression approach to enrollment forecasting was adapted to investigate enrollment dynamics at the university. Four independent…

  8. Adaptive variation in Pinus ponderosa from Intermountain regions. II. Middle Columbia River system

    Treesearch

    Gerald Rehfeldt

    1986-01-01

    Seedling populations were grown and compared in common environments. Statistical analyses detected genetic differences between populations for numerous traits reflecting growth potential and periodicity of shoot elongation. Multiple regression models described an adaptive landscape in which populations from low elevations have a high growth potential while those from...

  9. Idea Bank.

    ERIC Educational Resources Information Center

    Science Teacher, 1989

    1989-01-01

    Describes classroom activities and models for migration, mutation, and isolation; a diffusion model; Bernoulli's principle; sound in a vacuum; time regression mystery of DNA; seating chart lesson plan; algae mystery laboratory; water as mass; science fair; flipped book; making a cloud; wet mount slide; timer adaptation; thread slide model; and…

  10. Adaptive Modeling Procedure Selection by Data Perturbation.

    PubMed

    Zhang, Yongli; Shen, Xiaotong

    2015-10-01

    Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherited in a selection process by perturbing the data. Critical to data perturbation is the size of perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.

  11. A Developmental Sequence Model to University Adjustment of International Undergraduate Students

    ERIC Educational Resources Information Center

    Chavoshi, Saeid; Wintre, Maxine Gallander; Dentakos, Stella; Wright, Lorna

    2017-01-01

    The current study proposes a Developmental Sequence Model to University Adjustment and uses a multifaceted measure, including academic, social and psychological adjustment, to examine factors predictive of undergraduate international student adjustment. A hierarchic regression model is carried out on the Student Adaptation to College Questionnaire…

  12. Predicting Potential Changes in Suitable Habitat and Distribution by 2100 for Tree Species of the Eastern United States

    Treesearch

    Louis R Iverson; Anantha M. Prasad; Mark W. Schwartz; Mark W. Schwartz

    2005-01-01

    We predict current distribution and abundance for tree species present in eastern North America, and subsequently estimate potential suitable habitat for those species under a changed climate with 2 x CO2. We used a series of statistical models (i.e., Regression Tree Analysis (RTA), Multivariate Adaptive Regression Splines (MARS), Bagging Trees (...

  13. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

    This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin on the Yamuna River in Delhi, India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and that they performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the MARS model was 1.47% and 19.1% lower than that of the LSSVM and M5Tree models, respectively. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels by using the AMM, TKN and WT parameters as inputs.

  14. Electricity Consumption in the Industrial Sector of Jordan: Application of Multivariate Linear Regression and Adaptive Neuro-Fuzzy Techniques

    NASA Astrophysics Data System (ADS)

    Samhouri, M.; Al-Ghandoor, A.; Fouad, R. H.

    2009-08-01

    In this study two techniques for modeling electricity consumption of the Jordanian industrial sector are presented: (i) multivariate linear regression and (ii) neuro-fuzzy models. Electricity consumption is modeled as a function of different variables such as number of establishments, number of employees, electricity tariff, prevailing fuel prices, production outputs, capacity utilizations, and structural effects. It was found that industrial production and capacity utilization are the most important variables with a significant effect on future electrical power demand. The results showed that the multivariate linear regression and neuro-fuzzy models are generally comparable and can be used adequately to simulate industrial electricity consumption. However, a comparison based on the root mean square error of the data suggests that the neuro-fuzzy model performs slightly better for future prediction of electricity consumption than the multivariate linear regression model. Such results are in full agreement with similar work, using different methods, for other countries.

  15. Restoration of Monotonicity Respecting in Dynamic Regression

    PubMed Central

    Huang, Yijian

    2017-01-01

    Dynamic regression models, including the quantile regression model and Aalen’s additive hazards model, are widely adopted to investigate evolving covariate effects. Yet lack of monotonicity respecting with standard estimation procedures remains an outstanding issue. Advances have recently been made, but none provides a complete resolution. In this article, we propose a novel adaptive interpolation method to restore monotonicity respecting, by successively identifying and then interpolating nearest monotonicity-respecting points of an original estimator. Under mild regularity conditions, the resulting regression coefficient estimator is shown to be asymptotically equivalent to the original. Our numerical studies have demonstrated that the proposed estimator is much more smooth and may have better finite-sample efficiency than the original as well as, when available as only in special cases, other competing monotonicity-respecting estimators. Illustration with a clinical study is provided. PMID:29430068

  16. Integrating Growth Variability of the Ilium, Fifth Lumbar Vertebra, and Clavicle with Multivariate Adaptive Regression Splines Models for Subadult Age Estimation.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Telmon, Norbert; Chaumoitre, Kathia; Adalian, Pascal

    2018-05-31

    Subadult age estimation should rely on sampling and statistical protocols capturing development variability for more accurate age estimates. In this perspective, measurements were taken on the fifth lumbar vertebrae and/or clavicles of 534 French males and females aged 0-19 years and the ilia of 244 males and females aged 0-12 years. These variables were fitted in nonparametric multivariate adaptive regression splines (MARS) models with 95% prediction intervals (PIs) of age. The models were tested on two independent samples from Marseille and the Luis Lopes reference collection from Lisbon. Models using ilium width and module, maximum clavicle length, and lateral vertebral body heights were more than 92% accurate. Precision was lower for postpubertal individuals. Integrating punctual nonlinearities of the relationship between age and the variables and dynamic prediction intervals incorporated the normal increase in interindividual growth variability (heteroscedasticity of variance) with age for more biologically accurate predictions. © 2018 American Academy of Forensic Sciences.

  17. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
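    The three-step procedure can be mimicked compactly: transform each subject's repeated measures (here with the FFT), regress each retained coefficient on the scalar predictor, and examine the coefficient statistics. In this sketch plain t-tests stand in for the adaptive Neyman and thresholding statistics, and the data are simulated.

      # Sketch of the three-step FLRM procedure: FFT of each subject's curve,
      # per-coefficient regression on a group indicator, then coefficient tests
      # (plain t-tests stand in for the adaptive Neyman / thresholding statistics).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      n_subj, n_time = 40, 64
      group = np.repeat([0, 1], n_subj // 2)                        # scalar predictor
      time = np.linspace(0, 1, n_time)
      curves = (np.sin(2 * np.pi * time)                            # shared shape
                + 0.5 * group[:, None] * np.cos(4 * np.pi * time)   # group effect
                + rng.normal(0, 0.5, (n_subj, n_time)))

      # Step 1: transform to the frequency domain, keep the first few coefficients
      coefs = np.fft.rfft(curves, axis=1).real[:, :8]

      # Steps 2-3: regress each coefficient on group and test the slope
      for k in range(coefs.shape[1]):
          slope, _, _, pval, _ = stats.linregress(group, coefs[:, k])
          print(f"coefficient {k}: slope={slope: .2f}, p={pval:.3f}")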

  18. Adaptive Nonparametric Kinematic Modeling of Concentric Tube Robots.

    PubMed

    Fagogenis, Georgios; Bergeles, Christos; Dupont, Pierre E

    2016-10-01

    Concentric tube robots comprise telescopic precurved elastic tubes. The robot's tip and shape are controlled via relative tube motions, i.e. tube rotations and translations. Non-linear interactions between the tubes, e.g. friction and torsion, as well as uncertainty in the physical properties of the tubes themselves, e.g. the Young's modulus, curvature, or stiffness, hinder accurate kinematic modelling. In this paper, we present a machine-learning-based methodology for kinematic modelling of concentric tube robots and in situ model adaptation. Our approach is based on Locally Weighted Projection Regression (LWPR). The model comprises an ensemble of linear models, each of which locally approximates the original complex kinematic relation. LWPR can accommodate for model deviations by adjusting the respective local models at run-time, resulting in an adaptive kinematics framework. We evaluated our approach on data gathered from a three-tube robot, and report high accuracy across the robot's configuration space.

  19. Modelling lecturer performance index of private university in Tulungagung by using survival analysis with multivariate adaptive regression spline

    NASA Astrophysics Data System (ADS)

    Hasyim, M.; Prastyo, D. D.

    2018-03-01

    Survival analysis models the relationship between independent variables and survival time as the dependent variable. In practice, not all survival data can be recorded completely, for various reasons; in such situations, the data are called censored data. Moreover, several models for survival analysis require assumptions. One of the approaches in survival analysis is the nonparametric approach, which uses more relaxed assumptions. In this research, the nonparametric approach that is employed is Multivariate Adaptive Regression Splines (MARS). This study aims to measure the performance of private university lecturers. The survival time in this study is the duration needed by a lecturer to obtain their professional certificate. The results show that research activity is a significant factor, along with developing course materials, good publications in international or national journals, and participation in research collaborations.

  20. Polynomial order selection in random regression models via penalizing adaptively the likelihood.

    PubMed

    Corrales, J D; Munilla, S; Cantet, R J C

    2015-08-01

    Orthogonal Legendre polynomials (LP) are used to model the shape of additive genetic and permanent environmental effects in random regression models (RRM). Frequently, the Akaike (AIC) and the Bayesian (BIC) information criteria are employed to select LP order. However, it has been theoretically shown that neither AIC nor BIC is simultaneously optimal in terms of consistency and efficiency. Thus, the goal was to introduce a method, 'penalizing adaptively the likelihood' (PAL), as a criterion to select LP order in RRM. Four simulated data sets and real data (60,513 records, 6675 Colombian Holstein cows) were employed. Nested models were fitted to the data, and AIC, BIC and PAL were calculated for all of them. Results showed that PAL and BIC identified with probability of one the true LP order for the additive genetic and permanent environmental effects, but AIC tended to favour over parameterized models. Conversely, when the true model was unknown, PAL selected the best model with higher probability than AIC. In the latter case, BIC never favoured the best model. To summarize, PAL selected a correct model order regardless of whether the 'true' model was within the set of candidates. © 2015 Blackwell Verlag GmbH.

  1. Ensemble habitat mapping of invasive plant species

    USGS Publications Warehouse

    Stohlgren, T.J.; Ma, P.; Kumar, S.; Rocca, M.; Morisette, J.T.; Jarnevich, C.S.; Benson, N.

    2010-01-01

    Ensemble species distribution models combine the strengths of several species environmental matching models, while minimizing the weakness of any one model. Ensemble models may be particularly useful in risk analysis of recently arrived, harmful invasive species because species may not yet have spread to all suitable habitats, leaving species-environment relationships difficult to determine. We tested five individual models (logistic regression, boosted regression trees, random forest, multivariate adaptive regression splines (MARS), and maximum entropy model or Maxent) and ensemble modeling for selected nonnative plant species in Yellowstone and Grand Teton National Parks, Wyoming; Sequoia and Kings Canyon National Parks, California, and areas of interior Alaska. The models are based on field data provided by the park staffs, combined with topographic, climatic, and vegetation predictors derived from satellite data. For the four invasive plant species tested, ensemble models were the only models that ranked in the top three models for both field validation and test data. Ensemble models may be more robust than individual species-environment matching models for risk analysis. © 2010 Society for Risk Analysis.
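    The ensemble step itself is just an (optionally weighted) average of the per-model habitat suitability predictions. A minimal sketch with three of the model types named above, on placeholder presence/absence data (MARS and Maxent omitted):

      # Sketch: ensemble of species-distribution models by averaging predicted
      # presence probabilities (placeholder data; MARS and Maxent omitted here).
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

      X, y = make_classification(n_samples=500, n_features=8, weights=[0.9],
                                 random_state=0)          # sparse presence/absence data
      models = [LogisticRegression(max_iter=1000),
                RandomForestClassifier(n_estimators=200, random_state=0),
                GradientBoostingClassifier(random_state=0)]  # boosted trees stand-in

      probs = [m.fit(X, y).predict_proba(X)[:, 1] for m in models]
      ensemble = np.mean(probs, axis=0)                   # simple unweighted ensemble
      print("ensemble suitability for first 5 sites:", np.round(ensemble[:5], 2))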

  2. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.

  3. Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training

    ERIC Educational Resources Information Center

    Baschera, Gian-Marco; Gross, Markus

    2010-01-01

    We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…

  4. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the “derivation cohort” to develop dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.

  5. Adaptive kernel regression for freehand 3D ultrasound reconstruction

    NASA Astrophysics Data System (ADS)

    Alshalalfah, Abdel-Latif; Daoud, Mohammad I.; Al-Najar, Mahasen

    2017-03-01

    Freehand three-dimensional (3D) ultrasound imaging enables low-cost and flexible 3D scanning of arbitrary-shaped organs, where the operator can freely move a two-dimensional (2D) ultrasound probe to acquire a sequence of tracked cross-sectional images of the anatomy. Often, the acquired 2D ultrasound images are irregularly and sparsely distributed in the 3D space. Several 3D reconstruction algorithms have been proposed to synthesize 3D ultrasound volumes based on the acquired 2D images. A challenging task during the reconstruction process is to preserve the texture patterns in the synthesized volume and ensure that all gaps in the volume are correctly filled. This paper presents an adaptive kernel regression algorithm that can effectively reconstruct high-quality freehand 3D ultrasound volumes. The algorithm employs a kernel regression model that enables nonparametric interpolation of the voxel gray-level values. The kernel size of the regression model is adaptively adjusted based on the characteristics of the voxel that is being interpolated. In particular, when the algorithm is employed to interpolate a voxel located in a region with dense ultrasound data samples, the size of the kernel is reduced to preserve the texture patterns. On the other hand, the size of the kernel is increased in areas that include large gaps to enable effective gap filling. The performance of the proposed algorithm was compared with seven previous interpolation approaches by synthesizing freehand 3D ultrasound volumes of a benign breast tumor. The experimental results show that the proposed algorithm outperforms the other interpolation approaches.
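    The core mechanism, kernel regression whose bandwidth widens where samples are sparse, can be sketched in one dimension: the kernel size at each query point is set from the distance to its k-th nearest sample, so dense regions keep small kernels (preserving texture) while gaps get large ones (enabling gap filling). An illustrative simplification, not the reconstruction algorithm itself:

      # Sketch (1D): adaptive-bandwidth kernel regression, with the bandwidth at each
      # query point set by the distance to its k-th nearest sample.
      import numpy as np

      def adaptive_kernel_regress(x_s, y_s, x_q, k=5):
          preds = np.empty_like(x_q)
          for i, xq in enumerate(x_q):
              d = np.abs(x_s - xq)
              h = np.sort(d)[k - 1] + 1e-6          # bandwidth from k-th nearest sample
              w = np.exp(-0.5 * (d / h) ** 2)       # Gaussian kernel weights
              preds[i] = np.sum(w * y_s) / np.sum(w)
          return preds

      rng = np.random.default_rng(5)
      x_s = np.concatenate([rng.uniform(0, 4, 80), rng.uniform(7, 10, 20)])  # gap in (4, 7)
      y_s = np.sin(x_s) + 0.1 * rng.standard_normal(x_s.size)
      x_q = np.linspace(0, 10, 11)
      print(np.round(adaptive_kernel_regress(x_s, y_s, x_q), 2))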

  6. Predicting recreational water quality advisories: A comparison of statistical methods

    USGS Publications Warehouse

    Brooks, Wesley R.; Corsi, Steven R.; Fienen, Michael N.; Carvin, Rebecca B.

    2016-01-01

    Epidemiological studies indicate that fecal indicator bacteria (FIB) in beach water are associated with illnesses among people having contact with the water. In order to mitigate public health impacts, many beaches are posted with an advisory when the concentration of FIB exceeds a beach action value. The most commonly used method of measuring FIB concentration takes 18–24 h before returning a result. In order to avoid the 24 h lag, it has become common to “nowcast” the FIB concentration using statistical regressions on environmental surrogate variables. Most commonly, nowcast models are estimated using ordinary least squares regression, but other regression methods from the statistical and machine learning literature are sometimes used. This study compares 14 regression methods across 7 Wisconsin beaches to identify which consistently produces the most accurate predictions. A random forest model is identified as the most accurate, followed by multiple regression fit using the adaptive LASSO.
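    The adaptive LASSO mentioned above reweights each coefficient's penalty by an initial estimate (for example, ordinary least squares), which can be implemented by rescaling the predictor columns around a standard lasso fit. A small sketch on simulated data, unrelated to the beach data sets themselves:

      # Sketch: adaptive LASSO via column rescaling around a standard lasso fit
      # (simulated data; not the beach nowcast models themselves).
      import numpy as np
      from sklearn.linear_model import LinearRegression, LassoCV

      rng = np.random.default_rng(11)
      X = rng.normal(size=(150, 10))
      y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(0, 1, 150)

      beta_init = LinearRegression().fit(X, y).coef_          # initial OLS estimates
      w = 1.0 / (np.abs(beta_init) + 1e-6)                    # adaptive penalty weights
      lasso = LassoCV(cv=5, random_state=0).fit(X / w, y)     # rescale columns by 1/w
      beta_adaptive = lasso.coef_ / w                         # transform back to original scale
      print("adaptive LASSO coefficients:", np.round(beta_adaptive, 2))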

  7. The Therapeutic Effect of Anti-HER2/neu Antibody Depends on Both Innate and Adaptive Immunity

    PubMed Central

    Park, SaeGwang; Jiang, Zhujun; Mortenson, Eric D.; Deng, Liufu; Radkevich-Brown, Olga; Yang, Xuanming; Sattar, Husain; Wang, Yang; Brown, Nicholas K.; Greene, Mark; Liu, Yang; Tang, Jie; Wang, Shengdian; Fu, Yang-Xin

    2010-01-01

    SUMMARY Anti-HER2/neu antibody therapy is reported to mediate tumor regression by interrupting oncogenic signals and/or inducing FcR-mediated cytotoxicity. Here, we demonstrate that the mechanisms of tumor regression by this therapy also require the adaptive immune response. Activation of innate immunity and T cells, initiated by antibody treatment, was necessary. Intriguingly, the addition of chemotherapeutic drugs, while capable of enhancing the reduction of tumor burden, could abrogate antibody-initiated immunity leading to decreased resistance to re-challenge or earlier relapse. Increased influx of both innate and adaptive immune cells into the tumor microenvironment by a selected immunotherapy further enhanced subsequent antibody-induced immunity, leading to increased tumor eradication and resistance to re-challenge. Therefore, this study proposes a model and strategy for anti-HER2/neu antibody-mediated tumor clearance. PMID:20708157

  8. Genomic Prediction Accounting for Genotype by Environment Interaction Offers an Effective Framework for Breeding Simultaneously for Adaptation to an Abiotic Stress and Performance Under Normal Cropping Conditions in Rice.

    PubMed

    Ben Hassen, Manel; Bartholomé, Jérôme; Valè, Giampiero; Cao, Tuong-Vi; Ahmadi, Nourollah

    2018-05-09

    Developing rice varieties adapted to alternate wetting and drying water management is crucial for the sustainability of irrigated rice cropping systems. Here we report the first study exploring the feasibility of breeding rice for adaptation to alternate wetting and drying using genomic prediction methods that account for genotype by environment interactions. Two breeding populations (a reference panel of 284 accessions and a progeny population of 97 advanced lines) were evaluated under alternate wetting and drying and continuous flooding management systems. The predictive ability of genomic prediction for response variables (index of relative performance and the slope of the joint regression) and for multi-environment genomic prediction models were compared. For the three traits considered (days to flowering, panicle weight and nitrogen-balance index), significant genotype by environment interactions were observed in both populations. In cross validation, predictive ability for the index was on average lower (0.31) than that of the slope of the joint regression (0.64), whatever the trait considered. Similar results were found for progeny validation. Both cross-validation and progeny validation experiments showed that the performance of multi-environment models predicting unobserved phenotypes of untested entries was similar to the performance of single environment models, with differences in predictive ability ranging from -6% to 4% depending on the trait and on the statistical model concerned. The predictive ability of multi-environment models predicting unobserved phenotypes of entries evaluated under both water management systems outperformed single environment models by an average of 30%. Practical implications for breeding rice for adaptation to the alternate wetting and drying system are discussed. Copyright © 2018, G3: Genes, Genomes, Genetics.

  9. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age.

    PubMed

    Wilke, Marko

    2018-02-01

    This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.

  10. Estimation of soil cation exchange capacity using Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS)

    NASA Astrophysics Data System (ADS)

    Emamgolizadeh, S.; Bateni, S. M.; Shahsavani, D.; Ashrafi, T.; Ghorbani, H.

    2015-10-01

    The soil cation exchange capacity (CEC) is one of the main soil chemical properties, which is required in various fields such as environmental and agricultural engineering as well as soil science. In situ measurement of CEC is time consuming and costly. Hence, numerous studies have used traditional regression-based techniques to estimate CEC from more easily measurable soil parameters (e.g., soil texture, organic matter (OM), and pH). However, these models may not be able to adequately capture the complex and highly nonlinear relationship between CEC and its influential soil variables. In this study, Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS) were employed to estimate CEC from more readily measurable soil physical and chemical variables (e.g., OM, clay, and pH) by developing functional relations. The GEP- and MARS-based functional relations were tested at two field sites in Iran. Results showed that GEP and MARS can provide reliable estimates of CEC. Also, it was found that the MARS model (with root-mean-square error (RMSE) of 0.318 Cmol+ kg-1 and correlation coefficient (R2) of 0.864) generated slightly better results than the GEP model (with RMSE of 0.270 Cmol+ kg-1 and R2 of 0.807). The performance of the GEP and MARS models was compared with two existing approaches, namely artificial neural network (ANN) and multiple linear regression (MLR). The comparison indicated that MARS and GEP outperformed the MLR model, but they did not perform as well as ANN. Finally, a sensitivity analysis was conducted to determine the most and the least influential variables affecting CEC. It was found that OM and pH have the most and least significant effect on CEC, respectively.

  11. Bias in logistic regression due to imperfect diagnostic test results and practical correction approaches.

    PubMed

    Valle, Denis; Lima, Joanna M Tucker; Millar, Justin; Amratia, Punam; Haque, Ubydul

    2015-11-04

    Logistic regression is a statistical model widely used in cross-sectional and cohort studies to identify and quantify the effects of potential disease risk factors. However, the impact of imperfect tests on adjusted odds ratios (and thus on the identification of risk factors) is under-appreciated. The purpose of this article is to draw attention to the problem associated with modelling imperfect diagnostic tests, and propose simple Bayesian models to adequately address this issue. A systematic literature review was conducted to determine the proportion of malaria studies that appropriately accounted for false-negatives/false-positives in a logistic regression setting. Inference from the standard logistic regression was also compared with that from three proposed Bayesian models using simulations and malaria data from the western Brazilian Amazon. A systematic literature review suggests that malaria epidemiologists are largely unaware of the problem of using logistic regression to model imperfect diagnostic test results. Simulation results reveal that statistical inference can be substantially improved when using the proposed Bayesian models versus the standard logistic regression. Finally, analysis of original malaria data with one of the proposed Bayesian models reveals that microscopy sensitivity is strongly influenced by how long people have lived in the study region, and an important risk factor (i.e., participation in forest extractivism) is identified that would have been missed by standard logistic regression. Given the numerous diagnostic methods employed by malaria researchers and the ubiquitous use of logistic regression to model the results of these diagnostic tests, this paper provides critical guidelines to improve data analysis practice in the presence of misclassification error. Easy-to-use code that can be readily adapted to WinBUGS is provided, enabling straightforward implementation of the proposed Bayesian models.
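    The correction the proposed models make can be written down directly: if the true prevalence follows a logistic model p = expit(Xb), the probability of a positive test is Se*p + (1 - Sp)*(1 - p), and this adjusted likelihood is what gets maximized (or sampled from) instead of the naive one. A minimal non-Bayesian sketch with assumed, fixed sensitivity and specificity and simulated data:

      # Sketch: logistic regression adjusted for known test sensitivity/specificity by
      # maximizing the misclassification-adjusted likelihood (a simplified, non-Bayesian
      # stand-in for the proposed models; Se and Sp are assumed fixed here).
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      SE, SP = 0.85, 0.95                                  # assumed test properties

      def neg_loglik(beta, X, y_obs):
          p_true = expit(X @ beta)                         # true infection probability
          p_pos = SE * p_true + (1 - SP) * (1 - p_true)    # P(test positive)
          p_pos = np.clip(p_pos, 1e-9, 1 - 1e-9)
          return -np.sum(y_obs * np.log(p_pos) + (1 - y_obs) * np.log(1 - p_pos))

      rng = np.random.default_rng(2)
      n = 1000
      X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one risk factor
      true_status = rng.random(n) < expit(X @ np.array([-1.0, 0.8]))
      y_obs = np.where(true_status, rng.random(n) < SE, rng.random(n) < 1 - SP).astype(int)

      fit = minimize(neg_loglik, x0=np.zeros(2), args=(X, y_obs), method="BFGS")
      print("adjusted coefficient estimates:", np.round(fit.x, 2))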

  12. Convex Regression with Interpretable Sharp Partitions

    PubMed Central

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120

  13. STEP and STEPSPL: Computer programs for aerodynamic model structure determination and parameter estimation

    NASA Technical Reports Server (NTRS)

    Batterson, J. G.

    1986-01-01

    The successful parametric modeling of the aerodynamics for an airplane operating at high angles of attack or sideslip is performed in two phases. First the aerodynamic model structure must be determined and second the associated aerodynamic parameters (stability and control derivatives) must be estimated for that model. The purpose of this paper is to document two versions of a stepwise regression computer program which were developed for the determination of airplane aerodynamic model structure and to provide two examples of their use on computer-generated data. References are provided for the application of the programs to real flight data. The two computer programs that are the subject of this report, STEP and STEPSPL, are written in FORTRAN IV (ANSI 1966) compatible with a CDC FTN4 compiler. Both programs are adaptations of a standard forward stepwise regression algorithm. The purpose of the adaptation is to facilitate the selection of an adequate mathematical model of the aerodynamic force and moment coefficients of an airplane from flight test data. The major difference between STEP and STEPSPL is in the basis for the model. The basis for the model in STEP is the standard polynomial Taylor's series expansion of the aerodynamic function about some steady-state trim condition. Program STEPSPL utilizes a set of spline basis functions.

  14. Robust face alignment under occlusion via regional predictive power estimation.

    PubMed

    Heng Yang; Xuming He; Xuhui Jia; Patras, Ioannis

    2015-08-01

    Face alignment has been well studied in recent years, however, when a face alignment model is applied on facial images with heavy partial occlusion, the performance deteriorates significantly. In this paper, instead of training an occlusion-aware model with visibility annotation, we address this issue via a model adaptation scheme that uses the result of a local regression forest (RF) voting method. In the proposed scheme, the consistency of the votes of the local RF in each of several oversegmented regions is used to determine the reliability of predicting the location of the facial landmarks. The latter is what we call regional predictive power (RPP). Subsequently, we adapt a holistic voting method (cascaded pose regression based on random ferns) by putting weights on the votes of each fern according to the RPP of the regions used in the fern tests. The proposed method shows superior performance over existing face alignment models in the most challenging data sets (COFW and 300-W). Moreover, it can also estimate with high accuracy (72.4% overlap ratio) which image areas belong to the face or nonface objects, on the heavily occluded images of the COFW data set, without explicit occlusion modeling.

  15. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    NASA Astrophysics Data System (ADS)

    Storm, Emma; Weniger, Christoph; Calore, Francesca

    2017-08-01

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
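    A toy illustration of the penalized Poisson likelihood regression described above, minimized with the L-BFGS-B routine from SciPy (this is only a sketch under simplifying assumptions: a single template, pixel-wise multiplicative nuisance parameters, and a quadratic penalty standing in for the maximum-entropy regularizers; none of it is the SkyFACT code):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
template = rng.uniform(1.0, 5.0, size=200)       # predicted counts per pixel
counts = rng.poisson(1.1 * template)             # observed counts (toy data)
lam = 10.0                                       # regularization strength

def objective(delta):
    """Negative penalized Poisson log-likelihood (up to a constant).

    mu_i = template_i * (1 + delta_i); the quadratic penalty pulls the
    pixel-wise modulation parameters toward zero."""
    mu = template * (1.0 + delta)
    nll = np.sum(mu - counts * np.log(mu))
    return nll + lam * np.sum(delta ** 2)

res = minimize(objective, x0=np.zeros(template.size), method="L-BFGS-B",
               bounds=[(-0.9, 10.0)] * template.size)   # bounds keep mu > 0
print(res.fun, res.x[:5])
```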

  16. Adaptation can explain evidence for encoding of probabilistic information in macaque inferior temporal cortex.

    PubMed

    Vinken, Kasper; Vogels, Rufin

    2017-11-20

    In predictive coding theory, the brain is conceptualized as a prediction machine that constantly constructs and updates expectations of the sensory environment [1]. In the context of this theory, Bell et al. [2] recently studied the effect of the probability of task-relevant stimuli on the activity of macaque inferior temporal (IT) neurons and observed a reduced population response to expected faces in face-selective neurons. They concluded that "IT neurons encode long-term, latent probabilistic information about stimulus occurrence", supporting predictive coding. They manipulated expectation by the frequency of face versus fruit stimuli in blocks of trials. With such a design, stimulus repetition is confounded with expectation. As previous studies showed that IT neurons decrease their response with repetition [3], such adaptation (or repetition suppression), instead of expectation suppression as assumed by the authors, could explain their effects. The authors attempted to control for this alternative interpretation with a multiple regression approach. Here we show by using simulation that adaptation can still masquerade as the expectation effects reported in [2]. Further, the results from the regression model used for most analyses cannot be trusted, because the model is not uniquely defined. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Prediction of energy expenditure and physical activity in preschoolers

    USDA-ARS?s Scientific Manuscript database

    Accurate, nonintrusive, and feasible methods are needed to predict energy expenditure (EE) and physical activity (PA) levels in preschoolers. Herein, we validated cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on accelerometry and heart rate (HR) ...

  18. Adaptation of a Weighted Regression Approach to Evaluate Water Quality Trends in an Estuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, we adapted a weighted regression approach to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach, originally developed to resolve pollutant transport trends in rivers...

  19. Adaptation of a weighted regression approach to evaluate water quality trends in an estuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, a weighted regression approach developed to describe trends in pollutant transport in rivers was adapted to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach allows...

  20. Computing mammographic density from a multiple regression model constructed with image-acquisition parameters from a full-field digital mammographic unit

    PubMed Central

    Lu, Lee-Jane W.; Nishino, Thomas K.; Khamapirad, Tuenchit; Grady, James J; Leonard, Morton H.; Brunder, Donald G.

    2009-01-01

    Breast density (the percentage of fibroglandular tissue in the breast) has been suggested to be a useful surrogate marker for breast cancer risk. It is conventionally measured using screen-film mammographic images by a labor-intensive histogram segmentation method (HSM). We have adapted and modified the HSM for measuring breast density from raw digital mammograms acquired by full-field digital mammography. Multiple regression model analyses showed that many of the instrument parameters for acquiring the screening mammograms (e.g. breast compression thickness, radiological thickness, radiation dose, compression force, etc.) and image pixel intensity statistics of the imaged breasts were strong predictors of the observed threshold values (model R² = 0.93) and %-density (R² = 0.84). The intra-class correlation coefficient of the %-density for duplicate images was estimated to be 0.80, using the regression model-derived threshold values, and 0.94 if estimated directly from the parameter estimates of the %-density prediction regression model. Therefore, with additional research, these mathematical models could be used to compute breast density objectively, automatically bypassing the HSM step, and could greatly facilitate breast cancer research studies. PMID:17671343

  1. Multiple role adaptation among women who have children and re-enter nursing school in Taiwan.

    PubMed

    Lin, Li-Ling

    2005-03-01

    This study assessed multiple role adaptation within maternal and student roles among female RNs who had children and returned to school for baccalaureate degrees in Taiwan. Using Roy's Adaptation Model as the theoretical framework, relationships were explored among demographic (number of children, age of youngest child, employment status), physical (sleep quality, health perception, activity), and psychosocial factors (self-identity, role expectation, role involvement, social support) and multiple role adaptation (role accumulation). The sample included 118 mother-students who had at least one child younger than age 18 and who were studying in nursing programs in Taiwan. The highest correlation was found between activity and role accumulation followed by significant correlations between sleep quality, health perception, maternal role expectation, and age of youngest child and role accumulation. In regression analyses, the complete model explained 46% of the variance in role accumulation. Implications for education and future research are identified.

  2. Cavefish and the basis for eye loss.

    PubMed

    Krishnan, Jaya; Rohner, Nicolas

    2017-02-05

    Animals have colonized the entire world from rather moderate to the harshest environments, some of these so extreme that only few animals are able to survive. Cave environments present such a challenge and obligate cave animals have adapted to perpetual darkness by evolving a multitude of traits. The most common and most studied cave characteristics are the regression of eyes and the overall reduction in pigmentation. Studying these traits can provide important insights into how evolutionary forces drive convergent and regressive adaptation. The blind Mexican cavefish (Astyanax mexicanus) has emerged as a useful model to study cave evolution owing to the availability of genetic and genomic resources, and the amenability of embryonic development as the different populations remain fertile with each other. In this review, we give an overview of our current knowledge underlying the process of regressive and convergent evolution using eye degeneration in cavefish as an example. This article is part of the themed issue 'Evo-devo in the genomics era, and the origins of morphological diversity'. © 2016 The Author(s).

  3. Cavefish and the basis for eye loss

    PubMed Central

    Krishnan, Jaya

    2017-01-01

    Animals have colonized the entire world from rather moderate to the harshest environments, some of these so extreme that only few animals are able to survive. Cave environments present such a challenge and obligate cave animals have adapted to perpetual darkness by evolving a multitude of traits. The most common and most studied cave characteristics are the regression of eyes and the overall reduction in pigmentation. Studying these traits can provide important insights into how evolutionary forces drive convergent and regressive adaptation. The blind Mexican cavefish (Astyanax mexicanus) has emerged as a useful model to study cave evolution owing to the availability of genetic and genomic resources, and the amenability of embryonic development as the different populations remain fertile with each other. In this review, we give an overview of our current knowledge underlying the process of regressive and convergent evolution using eye degeneration in cavefish as an example. This article is part of the themed issue ‘Evo-devo in the genomics era, and the origins of morphological diversity’. PMID:27994128

  4. “Smooth” Semiparametric Regression Analysis for Arbitrarily Censored Time-to-Event Data

    PubMed Central

    Zhang, Min; Davidian, Marie

    2008-01-01

    Summary A general framework for regression analysis of time-to-event data subject to arbitrary patterns of censoring is proposed. The approach is relevant when the analyst is willing to assume that distributions governing model components that are ordinarily left unspecified in popular semiparametric regression models, such as the baseline hazard function in the proportional hazards model, have densities satisfying mild “smoothness” conditions. Densities are approximated by a truncated series expansion that, for fixed degree of truncation, results in a “parametric” representation, which makes likelihood-based inference coupled with adaptive choice of the degree of truncation, and hence flexibility of the model, computationally and conceptually straightforward with data subject to any pattern of censoring. The formulation allows popular models, such as the proportional hazards, proportional odds, and accelerated failure time models, to be placed in a common framework; provides a principled basis for choosing among them; and renders useful extensions of the models straightforward. The utility and performance of the methods are demonstrated via simulations and by application to data from time-to-event studies. PMID:17970813

  5. Lessons learned: Optimization of a murine small bowel resection model

    PubMed Central

    Taylor, Janice A.; Martin, Colin A.; Nair, Rajalakshmi; Guo, Jun; Erwin, Christopher R.; Warner, Brad W.

    2008-01-01

    Background/Purpose Central to the use of murine models of disease is the ability to derive reproducible data. The purpose of this study was to determine factors contributing to variability in our murine model of small bowel resection (SBR). Methods Male C57Bl/6 mice were randomized to sham or 50% SBR. The effect of housing type (pathogen-free versus standard housing), nutrition (reconstituted powder versus tube feeding formulation), and correlates of intestinal morphology with gene expression changes were investigated. Multiple linear regression modeling or one-way ANOVA was used for data analysis. Results Pathogen-free mice had significantly shorter ileal villi at baseline and demonstrated greater villus growth after SBR compared to mice housed in standard rooms. Food type did not affect adaptation. Gene expression changes were more consistent and significant in isolated crypt cells that demonstrated adaptive growth when compared with crypts that did not deepen after SBR. Conclusion Maintenance of mice in pathogen-free conditions and restricting gene expression analysis to individual animals exhibiting morphologic adaptation enhances sensitivity and specificity of data derived from this model. These refinements will minimize experimental variability and lead to improved understanding of the complex process of intestinal adaptation. PMID:18558176

  6. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storm, Emma; Weniger, Christoph; Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.

  7. Novel forecasting approaches using combination of machine learning and statistical models for flood susceptibility mapping.

    PubMed

    Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah

    2018-07-01

    In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic curve (AUROC) belonged to boosted regression trees (0.975) and the lowest value was recorded for the generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. Despite the outstanding performance of some models, variability among the predictions of the individual models was considerable. Therefore, to reduce uncertainty and create more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Estimation of streamflow, base flow, and nitrate-nitrogen loads in Iowa using multiple linear regression models

    USGS Publications Warehouse

    Schilling, K.E.; Wolter, C.F.

    2005-01-01

    Nineteen variables, including precipitation, soils and geology, land use, and basin morphologic characteristics, were evaluated to develop Iowa regression models to predict total streamflow (Q), base flow (Qb), storm flow (Qs) and base flow percentage (%Qb) in gauged and ungauged watersheds in the state. Discharge records from a set of 33 watersheds across the state for the 1980 to 2000 period were separated into Qb and Qs. Multiple linear regression found that 75.5 percent of long-term average Q was explained by rainfall, sand content, and row crop percentage variables, whereas 88.5 percent of Qb was explained by these three variables plus permeability and floodplain area variables. Qs was explained by average rainfall and %Qb was a function of row crop percentage, permeability, and basin slope variables. Regional regression models developed for long-term average Q and Qb were adapted to annual rainfall and showed good correlation between measured and predicted values. Combining the regression model for Q with an estimate of mean annual nitrate concentration, a map of potential nitrate loads in the state was produced. Results from this study have important implications for understanding geomorphic and land use controls on streamflow and base flow in Iowa watersheds and similar agriculture-dominated watersheds in the glaciated Midwest. (JAWRA) (Copyright © 2005).
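    A minimal sketch of the kind of multiple linear regression described above, fit by ordinary least squares with NumPy (the watershed table and predictor values are invented for illustration; the R² computed here plays the role of the reported "percent of Q explained" figures):

```python
import numpy as np

# Invented watershed table: mean annual rainfall, sand content (%), row crop (%)
X = np.array([[80., 35., 60.],
              [95., 20., 75.],
              [70., 50., 40.],
              [88., 30., 65.],
              [76., 45., 55.]])
y = np.array([9.5, 12.1, 7.2, 10.8, 8.3])        # long-term average Q

A = np.column_stack([np.ones(len(y)), X])        # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(coef, r2)    # r2 is the fraction of variance in Q explained by the predictors
```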

  9. Modeling thermal sensation in a Mediterranean climate—a comparison of linear and ordinal models

    NASA Astrophysics Data System (ADS)

    Pantavou, Katerina; Lykoudis, Spyridon

    2014-08-01

    A simple thermo-physiological model of outdoor thermal sensation, adjusted with psychological factors, is developed with the aim of predicting thermal sensation in Mediterranean climates. Microclimatic measurements, together with interviews on personal and psychological conditions, were carried out in a square, a street canyon and a coastal location of the greater urban area of Athens, Greece. Multiple linear and ordinal regression were applied in order to estimate thermal sensation, making allowance for either all the recorded parameters or specific, empirically selected subsets, producing so-called extensive and empirical models, respectively. Meteorological, thermo-physiological and overall models - considering psychological factors as well - were developed. Predictions were improved when personal and psychological factors were taken into account as compared to meteorological models. The model based on ordinal regression reproduced extreme values of thermal sensation vote more adequately than the linear regression one, while the empirical model produced satisfactory results in relation to the extensive model. The effects of adaptation and expectation on thermal sensation vote were introduced in the models by means of the exposure time, season and preference related to air temperature and irradiation. The assessment of thermal sensation could be a useful criterion in decision making regarding public health, outdoor spaces planning and tourism.

  10. Early Vocabulary and Gestures in Estonian Children

    ERIC Educational Resources Information Center

    Schults, Astra; Tulviste, Tiia; Konstabel, Kenn

    2012-01-01

    Parents of 592 children between the ages of 0;8 and 1;4 completed the Estonian adaptation of the MacArthur-Bates Communicative Development Inventory (ECDI Infant Form). The relationships between comprehension and production of different categories of words and gestures were examined. According to the results of regression modelling the…

  11. An Investigation of Multivariate Adaptive Regression Splines for Modeling and Analysis of Univariate and Semi-Multivariate Time Series Systems

    DTIC Science & Technology

    1991-09-01

    However, there is no guarantee that this would work; for instance, if the data were generated by an ARCH model (Tong, 1990, pp. 116-117) then a simple...

  12. Adaptation of health care for migrants: whose responsibility?

    PubMed

    Dauvrin, Marie; Lorant, Vincent

    2014-07-08

    In a context of increasing ethnic diversity, culturally competent strategies have been recommended to improve care quality and access to health care for ethnic minorities and migrants; their implementation by health professionals, however, has remained patchy. Most programs of cultural competence assume that health professionals accept that they have a responsibility to adapt to migrants, but this assumption has often remained at the level of theory. In this paper, we surveyed health professionals' views on their responsibility to adapt. Five hundred-and-sixty-nine health professionals from twenty-four inpatient and outpatient health services were selected according to their geographic location. All health care professionals were requested to complete a questionnaire about who should adapt to ethnic diversity: health professionals or patients. After a factorial analysis to identify the underlying responsibility dimensions, we performed a multilevel regression model in order to investigate individual and service covariates of responsibility attribution. Three dimensions emerged from the factor analysis: responsibility for the adaptation of communication, responsibility for the adaptation to the negotiation of values, and responsibility for the adaptation to health beliefs. Our results showed that the sense of responsibility for the adaptation of health care depended on the nature of the adaptation required: when the adaptation directly concerned communication with the patient, health professionals declared that they should be the ones to adapt; in relation to cultural preferences, however, the responsibility fell on the patient's shoulders. Most respondents were unclear in relation to adaptation to health beliefs. Regression indicated that being Belgian, not being a physician, and working in a primary-care service were associated with placing the burden of responsibility on the patient. Health care professionals do not consider it to be their responsibility to adapt to ethnic diversity. If health professionals do not feel a responsibility to adapt, they are less likely to be involved in culturally competent health care.

  13. Adaptation of health care for migrants: whose responsibility?

    PubMed Central

    2014-01-01

    Background In a context of increasing ethnic diversity, culturally competent strategies have been recommended to improve care quality and access to health care for ethnic minorities and migrants; their implementation by health professionals, however, has remained patchy. Most programs of cultural competence assume that health professionals accept that they have a responsibility to adapt to migrants, but this assumption has often remained at the level of theory. In this paper, we surveyed health professionals’ views on their responsibility to adapt. Methods Five hundred-and-sixty-nine health professionals from twenty-four inpatient and outpatient health services were selected according to their geographic location. All health care professionals were requested to complete a questionnaire about who should adapt to ethnic diversity: health professionals or patients. After a factorial analysis to identify the underlying responsibility dimensions, we performed a multilevel regression model in order to investigate individual and service covariates of responsibility attribution. Results Three dimensions emerged from the factor analysis: responsibility for the adaptation of communication, responsibility for the adaptation to the negotiation of values, and responsibility for the adaptation to health beliefs. Our results showed that the sense of responsibility for the adaptation of health care depended on the nature of the adaptation required: when the adaptation directly concerned communication with the patient, health professionals declared that they should be the ones to adapt; in relation to cultural preferences, however, the responsibility fell on the patient’s shoulders. Most respondents were unclear in relation to adaptation to health beliefs. Regression indicated that being Belgian, not being a physician, and working in a primary-care service were associated with placing the burden of responsibility on the patient. Conclusions Health care professionals do not consider it to be their responsibility to adapt to ethnic diversity. If health professionals do not feel a responsibility to adapt, they are less likely to be involved in culturally competent health care. PMID:25005021

  14. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as a surrogate of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating methods, namely, bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. Normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrated the feasibility of this highly efficient calibration framework.
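    The surrogate-based calibration loop can be sketched as follows; a bagged ensemble of regression trees (scikit-learn's RandomForestRegressor) stands in for the BMARS surrogate, a cheap analytic function stands in for the MODFLOW run, and the parameter names and bounds are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import differential_evolution

def physical_model(params):
    """Cheap stand-in for an expensive MODFLOW run: parameters -> simulated heads."""
    k, recharge = params
    x = np.linspace(0.0, 1.0, 20)
    return 50.0 + 10.0 * np.exp(-k * x) + recharge * x

rng = np.random.default_rng(0)
observed = physical_model([2.0, 5.0]) + rng.normal(0.0, 0.2, 20)

# Training set for the surrogate: sample the parameter space, run the "model"
P = rng.uniform([0.5, 0.0], [5.0, 10.0], size=(200, 2))
H = np.array([physical_model(p) for p in P])
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(P, H)

def nrmse(params):
    """Objective: normalized RMSE between surrogate-predicted and observed heads."""
    sim = surrogate.predict(np.atleast_2d(params))[0]
    return np.sqrt(np.mean((sim - observed) ** 2)) / (observed.max() - observed.min())

result = differential_evolution(nrmse, bounds=[(0.5, 5.0), (0.0, 10.0)], seed=0)
print(result.x, result.fun)   # calibrated parameters should be near (2.0, 5.0)
```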

  15. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    PubMed

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved autoregressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the measured FOG signal is employed instead of the zero-mean signal. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering on the FOG signals. Finally, static and dynamic experiments are done to verify the effectiveness. The filtering results are analyzed with Allan variance. The analysis results show that the improved AR model has high fitting accuracy and strong adaptability, and the minimum fitting accuracy of single noise is 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, improving the denoising effect by more than 30%. The random drift error of FOG is reduced effectively, and the precision of the FOG is improved.
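    The following sketch illustrates the idea of an AR(3) signal model filtered by a Kalman filter whose measurement-noise variance is updated recursively with a fading factor, in the spirit of the modified Sage-Husa filter (a simplified stand-in for the authors' method; the AR coefficients, noise levels, and fading factor are assumptions):

```python
import numpy as np

def sage_husa_kf(z, a, q=1e-4, r0=0.01, b=0.95):
    """Kalman filter for an AR(3) signal with Sage-Husa-style adaptive
    estimation of the measurement noise variance R (simplified sketch).

    z : measured FOG signal samples; a : AR(3) coefficients [a1, a2, a3]."""
    z = np.asarray(z, dtype=float)
    F = np.array([[a[0], a[1], a[2]],
                  [1.0,  0.0,  0.0],
                  [0.0,  1.0,  0.0]])          # AR(3) in state-space form
    H = np.array([[1.0, 0.0, 0.0]])
    Q = q * np.eye(3)
    x, P, R = np.zeros((3, 1)), np.eye(3), r0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        # Prediction step
        x = F @ x
        P = F @ P @ F.T + Q
        # Innovation and adaptive update of R with fading factor b
        e = zk - float(H @ x)
        d = (1.0 - b) / (1.0 - b ** (k + 1))
        R = max((1.0 - d) * R + d * (e * e - float(H @ P @ H.T)), 1e-8)
        # Measurement update
        S = float(H @ P @ H.T) + R
        K = (P @ H.T) / S
        x = x + K * e
        P = (np.eye(3) - K @ H) @ P
        out[k] = float(x[0])
    return out
```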

  16. Inference for multivariate regression model based on multiply imputed synthetic data generated via posterior predictive sampling

    NASA Astrophysics Data System (ADS)

    Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.

    2017-06-01

    The recent popularity of the use of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods of generating and analyzing such data, but almost always relying on asymptotic distributions and consequently not being adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may even be used with small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.

  17. Estimating Soil Cation Exchange Capacity from Soil Physical and Chemical Properties

    NASA Astrophysics Data System (ADS)

    Bateni, S. M.; Emamgholizadeh, S.; Shahsavani, D.

    2014-12-01

    The soil Cation Exchange Capacity (CEC) is an important soil characteristic that has many applications in soil science and environmental studies. For example, CEC influences soil fertility by controlling the exchange of ions in the soil. Measurement of CEC is costly and difficult. Consequently, several studies attempted to obtain CEC from readily measurable soil physical and chemical properties such as soil pH, organic matter, soil texture, bulk density, and particle size distribution. These studies have often used multiple regression or artificial neural network models. Regression-based models cannot capture the intricate relationship between CEC and soil physical and chemical attributes and provide inaccurate CEC estimates. Although neural network models perform better than regression methods, they act like a black box and cannot generate an explicit expression for retrieval of CEC from soil properties. In a departure from regression and neural network models, this study uses Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS) to estimate CEC from easily measurable soil variables such as clay, pH, and OM. CEC estimates from GEP and MARS are compared with measurements at two field sites in Iran. Results show that GEP and MARS can estimate CEC accurately. Also, the MARS model performs slightly better than GEP. Finally, a sensitivity test indicates that organic matter and pH have respectively the least and the most significant impact on CEC.

  18. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
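    The AUROC comparison used to score the two models can be reproduced with scikit-learn on synthetic labels and suitability scores (all values below are invented for illustration, not the Mumbai data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
urban = rng.integers(0, 2, size=1000)                 # 1 = cell became urban by 2010
# Invented suitability scores from two competing models
score_mars = 0.8 * urban + rng.normal(0.0, 0.35, 1000)
score_ann = 0.9 * urban + rng.normal(0.0, 0.35, 1000)

print("MARS-like AUC:", roc_auc_score(urban, score_mars))
print("ANN-like  AUC:", roc_auc_score(urban, score_ann))
```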

  19. Area-to-point regression kriging for pan-sharpening

    NASA Astrophysics Data System (ADS)

    Wang, Qunming; Shi, Wenzhong; Atkinson, Peter M.

    2016-04-01

    Pan-sharpening is a technique to combine the fine spatial resolution panchromatic (PAN) band with the coarse spatial resolution multispectral bands of the same satellite to create a fine spatial resolution multispectral image. In this paper, area-to-point regression kriging (ATPRK) is proposed for pan-sharpening. ATPRK considers the PAN band as the covariate. Moreover, ATPRK is extended with a local approach, called adaptive ATPRK (AATPRK), which fits a regression model using a local, non-stationary scheme such that the regression coefficients change across the image. The two geostatistical approaches, ATPRK and AATPRK, were compared to the 13 state-of-the-art pan-sharpening approaches summarized in Vivone et al. (2015) in experiments on three separate datasets. ATPRK and AATPRK produced more accurate pan-sharpened images than the 13 benchmark algorithms in all three experiments. Unlike the benchmark algorithms, the two geostatistical solutions precisely preserved the spectral properties of the original coarse data. Furthermore, ATPRK can be enhanced by a local scheme in AATPRK, in cases where the residuals from a global regression model are such that their spatial character varies locally.

  20. Prediction failure of a wolf landscape model

    USGS Publications Warehouse

    Mech, L.D.

    2006-01-01

    I compared 101 wolf (Canis lupus) pack territories formed in Wisconsin during 1993-2004 to the logistic regression predictive model of Mladenoff et al. (1995, 1997, 1999). Of these, 60% were located in areas of putative habitat suitability below 50%, while much of the area with suitability above 50% remained unoccupied by known packs after 24 years of recolonization. This model was a poor predictor of wolf re-colonizing locations in Wisconsin, apparently because it failed to consider the adaptability of wolves. Such models should be used cautiously in wolf-management or restoration plans.

  1. The advantage of flexible neuronal tunings in neural network models for motor learning

    PubMed Central

    Marongelli, Ellisha N.; Thoroughman, Kurt A.

    2013-01-01

    Human motor adaptation to novel environments is often modeled by a basis function network that transforms desired movement properties into estimated forces. This network employs a layer of nodes that have fixed broad tunings that generalize across the input domain. Learning is achieved by updating the weights of these nodes in response to training experience. This conventional model is unable to account for rapid flexibility observed in human spatial generalization during motor adaptation. However, added plasticity in the widths of the basis function tunings can achieve this flexibility, and several neurophysiological experiments have revealed flexibility in tunings of sensorimotor neurons. We found a model, Locally Weighted Projection Regression (LWPR), which uniquely possesses the structure of a basis function network in which both the weights and tuning widths of the nodes are updated incrementally during adaptation. We presented this LWPR model with training functions of different spatial complexities and monitored incremental updates to receptive field widths. An inverse pattern of dependence of receptive field adaptation on experienced error became evident, underlying both a relationship between generalization and complexity, and a unique behavior in which generalization always narrows after a sudden switch in environmental complexity. These results implicate a model that is flexible in both basis function widths and weights, like LWPR, as a viable alternative model for human motor adaptation that can account for previously observed plasticity in spatial generalization. This theory can be tested by using the behaviors observed in our experiments as novel hypotheses in human studies. PMID:23888141

  2. Coestimation of recombination, substitution and molecular adaptation rates by approximate Bayesian computation.

    PubMed

    Lopes, J S; Arenas, M; Posada, D; Beaumont, M A

    2014-03-01

    The estimation of parameters in molecular evolution may be biased when some processes are not considered. For example, the estimation of selection at the molecular level using codon-substitution models can have an upward bias when recombination is ignored. Here we address the joint estimation of recombination, molecular adaptation and substitution rates from coding sequences using approximate Bayesian computation (ABC). We describe the implementation of a regression-based strategy for choosing subsets of summary statistics for coding data, and show that this approach can accurately infer recombination allowing for intracodon recombination breakpoints, molecular adaptation and codon substitution rates. We demonstrate that our ABC approach can outperform other analytical methods under a variety of evolutionary scenarios. We also show that although the choice of the codon-substitution model is important, our inferences are robust to a moderate degree of model misspecification. In addition, we demonstrate that our approach can accurately choose the evolutionary model that best fits the data, providing an alternative for when the use of full-likelihood methods is impracticable. Finally, we applied our ABC method to co-estimate recombination, substitution and molecular adaptation rates from 24 published human immunodeficiency virus 1 coding data sets.
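    The regression step in ABC can be illustrated by the standard rejection-plus-local-linear-regression adjustment on a toy one-parameter model (a generic, unweighted sketch; it is not the authors' implementation, which handles codon data and summary-statistic selection):

```python
import numpy as np

def abc_regression(observed_stats, simulate, prior_draws, keep=200):
    """Rejection ABC with a local-linear regression adjustment.

    simulate(theta) returns a vector of summary statistics; prior_draws are
    parameter samples from the prior.  Returns adjusted posterior draws."""
    stats = np.array([simulate(t) for t in prior_draws])
    dist = np.linalg.norm(stats - observed_stats, axis=1)
    idx = np.argsort(dist)[:keep]                 # accept the closest simulations
    S, theta = stats[idx], prior_draws[idx]
    # Regress accepted parameters on (centred) summary statistics ...
    A = np.column_stack([np.ones(keep), S - observed_stats])
    coef, *_ = np.linalg.lstsq(A, theta, rcond=None)
    # ... and shift the accepted draws toward the observed statistics
    return theta - (S - observed_stats) @ coef[1:]

# Toy example: infer the mean of a normal (sd known) from mean/variance summaries
rng = np.random.default_rng(0)

def simulate(mu):
    x = rng.normal(mu, 1.0, size=100)
    return np.array([x.mean(), x.var()])

posterior = abc_regression(np.array([0.3, 1.0]), simulate, rng.normal(0.0, 3.0, 5000))
print(posterior.mean(), posterior.std())
```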

  3. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.

    PubMed

    Liu, Zhiyuan; Wang, Changhui

    2015-10-23

    In this paper, a new method is developed for mass air flow (MAF) sensor error compensation and online updating of the error map (or lookup table), accounting for installation and aging effects in a diesel engine. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against the simulation data from engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method.
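    The key representational trick, writing the 2D lookup map as a dot product between a membership (regression) vector and a parameter vector, can be sketched as follows (breakpoints, units, and values are hypothetical):

```python
import numpy as np

# Hypothetical grid of breakpoints for the two map inputs
FUEL = np.array([5., 10., 20., 40.])            # fuel injection quantity [mg/stroke]
SPEED = np.array([800., 1600., 2400., 3200.])   # engine speed [rpm]

def membership(fuel, speed):
    """Regression vector phi: bilinear-interpolation weights over the grid nodes.

    The map value is phi @ theta, where theta holds the node values, so theta
    can be updated online by a recursive/adaptive estimator."""
    i = int(np.clip(np.searchsorted(FUEL, fuel) - 1, 0, len(FUEL) - 2))
    j = int(np.clip(np.searchsorted(SPEED, speed) - 1, 0, len(SPEED) - 2))
    u = (fuel - FUEL[i]) / (FUEL[i + 1] - FUEL[i])
    v = (speed - SPEED[j]) / (SPEED[j + 1] - SPEED[j])
    phi = np.zeros((len(FUEL), len(SPEED)))
    phi[i, j], phi[i + 1, j] = (1 - u) * (1 - v), u * (1 - v)
    phi[i, j + 1], phi[i + 1, j + 1] = (1 - u) * v, u * v
    return phi.ravel()

theta = np.zeros(len(FUEL) * len(SPEED))   # node values (MAF error) to be adapted
phi = membership(12.0, 2000.0)
map_value = phi @ theta                    # dot-product evaluation of the 2D map
```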

  4. Study of cyanotoxins presence from experimental cyanobacteria concentrations using a new data mining methodology based on multivariate adaptive regression splines in Trasona reservoir (Northern Spain).

    PubMed

    Garcia Nieto, P J; Sánchez Lasheras, F; de Cos Juez, F J; Alonso Fernández, J R

    2011-11-15

    There is an increasing need to describe cyanobacteria blooms since some cyanobacteria produce toxins, termed cyanotoxins. The latter can be toxic and dangerous to humans as well as other animals and life in general. It must be remarked that cyanobacteria reproduce explosively under certain conditions. This results in algae blooms, which can become harmful to other species if the cyanobacteria involved produce cyanotoxins. In this research work, the evolution of cyanotoxins in Trasona reservoir (Principality of Asturias, Northern Spain) was successfully studied using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique. The results of the present study are two-fold. On the one hand, the importance of the different kinds of cyanobacteria for the presence of cyanotoxins in the reservoir is presented through the MARS model, and on the other hand a predictive model able to forecast the possible presence of cyanotoxins in the short term was obtained. The agreement of the MARS model with experimental data confirmed its good performance. Finally, conclusions of this innovative research are presented. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors.

    PubMed

    Woodard, Dawn B; Crainiceanu, Ciprian; Ruppert, David

    2013-01-01

    We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape varies stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes. Software and technical appendices are provided in online supplemental materials.

  6. A novel and simple test of gait adaptability predicts gold standard measures of functional mobility in stroke survivors.

    PubMed

    Hollands, K L; Pelton, T A; van der Veen, S; Alharbi, S; Hollands, M A

    2016-01-01

    Although there is evidence that stroke survivors have reduced gait adaptability, the underlying mechanisms and the relationship to functional recovery are largely unknown. We explored the relationships between walking adaptability and clinical measures of balance, motor recovery and functional ability in stroke survivors. Stroke survivors (n=42) stepped to targets, on a 6m walkway, placed to elicit step lengthening, shortening and narrowing on paretic and non-paretic sides. The number of targets missed during six walks and target stepping speed were recorded. Fugl-Meyer (FM), Berg Balance Scale (BBS), self-selected walking speed (SSWS) and single support (SS) and step length (SL) symmetry (using GaitRite when not walking to targets) were also assessed. Stepwise multiple-linear regression was used to model the relationships between: total targets missed, number missed with paretic and non-paretic legs, target stepping speed, and each clinical measure. Regression revealed a significant model for each outcome variable that included only one independent variable. Targets missed by the paretic limb was a significant predictor of FM (F(1,40)=6.54, p=0.014). Speed of target stepping was a significant predictor of both BBS (F(1,40)=26.36, p<0.0001) and SSWS (F(1,40)=37.00, p<0.0001). No variables were significant predictors of SL or SS asymmetry. Speed of target stepping was significantly predictive of BBS and SSWS, and paretic targets missed predicted FM, suggesting that fast target stepping requires good balance and accurate stepping demands good paretic leg function. The relationships between these parameters indicate gait adaptability is a clinically meaningful target for measurement and treatment of functionally adaptive walking ability in stroke survivors. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Bio-inspired adaptive feedback error learning architecture for motor control.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, such as a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme, comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. We evaluate how the scheme scales for simulated plants with many degrees of freedom (7 DOFs).

  8. Fast Identification of Biological Pathways Associated with a Quantitative Trait Using Group Lasso with Overlaps

    PubMed Central

    Silver, Matt; Montana, Giovanni

    2012-01-01

    Where causal SNPs (single nucleotide polymorphisms) tend to accumulate within biological pathways, the incorporation of prior pathways information into a statistical model is expected to increase the power to detect true associations in a genetic association study. Most existing pathways-based methods rely on marginal SNP statistics and do not fully exploit the dependence patterns among SNPs within pathways. We use a sparse regression model, with SNPs grouped into pathways, to identify causal pathways associated with a quantitative trait. Notable features of our “pathways group lasso with adaptive weights” (P-GLAW) algorithm include the incorporation of all pathways in a single regression model, an adaptive pathway weighting procedure that accounts for factors biasing pathway selection, and the use of a bootstrap sampling procedure for the ranking of important pathways. P-GLAW takes account of the presence of overlapping pathways and uses a novel combination of techniques to optimise model estimation, making it fast to run, even on whole genome datasets. In a comparison study with an alternative pathways method based on univariate SNP statistics, our method demonstrates high sensitivity and specificity for the detection of important pathways, showing the greatest relative gains in performance where marginal SNP effect sizes are small. PMID:22499682
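    The core mechanism of group-lasso methods such as P-GLAW, shrinking whole groups (pathways) of coefficients to zero together, can be sketched with a plain proximal-gradient group lasso (non-overlapping groups, no adaptive weights or bootstrap ranking, so only a simplified stand-in for the algorithm described above):

```python
import numpy as np

def group_lasso(X, y, groups, lam=0.1, n_iter=500):
    """Proximal gradient descent for least squares with a group-lasso penalty.

    groups is a list of index arrays (assumed to partition the columns of X),
    one per pathway; whole groups are shrunk to zero together."""
    n, p = X.shape
    beta = np.zeros(p)
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # conservative step size
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        for g in groups:                          # block soft-thresholding (prox)
            norm_g = np.linalg.norm(z[g])
            shrink = max(0.0, 1.0 - step * lam * np.sqrt(len(g)) / (norm_g + 1e-12))
            beta[g] = shrink * z[g]
    return beta
```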

  9. Using Faculty Characteristics to Predict Attitudes toward Developmental Education

    ERIC Educational Resources Information Center

    Sides, Meredith Louise Carr

    2017-01-01

    The study adapted Astin's I-E-O model and utilized multiple regression analyses to predict faculty attitudes toward developmental education. The study utilized a cross-sectional survey design to survey faculty members at 27 different higher education institutions in the state of Alabama. The survey instrument was a self-designed questionnaire that…

  10. Dreams Fulfilled, Dreams Shattered: Determinants of Segmented Assimilation in the Second Generation

    ERIC Educational Resources Information Center

    Haller, William; Portes, Alejandro; Lynch, Scott M.

    2011-01-01

    We summarize prior theories on the adaptation process of the contemporary immigrant second generation as a prelude to presenting additive and interactive models showing the impact of family variables, school contexts and academic outcomes on the process. For this purpose, we regress indicators of educational and occupational achievement in early…

  11. Prediction of energy expenditure from heart rate and accelerometry in children and adolescents using multivariate adaptive regression splines modeling

    USDA-ARS?s Scientific Manuscript database

    Free-living measurements of 24-h total energy expenditure (TEE) and activity energy expenditure (AEE) are required to better understand the metabolic, physiological, behavioral, and environmental factors affecting energy balance and contributing to the global epidemic of childhood obesity. The spec...

  12. On state-of-charge determination for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Huang, Jun; Liaw, Bor Yann; Zhang, Jianbo

    2017-04-01

    Accurate estimation of the state of charge (SOC) of a battery throughout its life remains challenging in battery research. Although improved precision continues to be reported, almost all estimates are based empirically on regression methods, while the accuracy is often not properly addressed. Here, a comprehensive review is set to address such issues, from fundamental principles that are supposed to define SOC to methodologies to estimate SOC for practical use. It covers topics from calibration and regression (including modeling methods) to validation in terms of precision and accuracy. At the end, we intend to answer the following questions: 1) Can SOC estimation be self-adaptive without bias? 2) Why is Ah-counting a necessity in almost all battery-model-assisted regression methods? 3) How can a consistent framework of coupling be established in multi-physics battery models? 4) To assess the accuracy of SOC estimation, statistical methods should be employed to analyze the factors that contribute to the uncertainty. We hope that, through this proper discussion of the principles, accurate SOC estimation can be widely achieved.
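    Ah-counting, mentioned in question 2, is simply the running integral of current relative to capacity; a minimal sketch (cell capacity, current profile, and efficiency are made-up values):

```python
import numpy as np

def soc_ah_counting(current_a, dt_s, capacity_ah, soc0=1.0, eta=1.0):
    """Coulomb (Ah) counting: SOC_k = SOC_0 - (eta / C) * sum(I * dt).

    current_a : discharge-positive current samples [A]; dt_s : sample period [s];
    capacity_ah : nominal capacity [Ah]; eta : coulombic efficiency."""
    charge_ah = np.cumsum(current_a) * dt_s / 3600.0
    return soc0 - eta * charge_ah / capacity_ah

# Hypothetical 1 C discharge of a 2.5 Ah cell sampled at 1 s
soc = soc_ah_counting(np.full(3600, 2.5), dt_s=1.0, capacity_ah=2.5)
print(soc[-1])   # ~0.0 after one hour of discharge
```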

  13. The effect of occlusion on the semantics of projective spatial terms: a case study in grounding language in perception.

    PubMed

    Kelleher, John D; Ross, Robert J; Sloan, Colm; Mac Namee, Brian

    2011-02-01

    Although data-driven spatial template models provide a practical and cognitively motivated mechanism for characterizing spatial term meaning, the influence of perceptual rather than solely geometric and functional properties has yet to be systematically investigated. In the light of this, in this paper, we investigate the effects of the perceptual phenomenon of object occlusion on the semantics of projective terms. We did this by conducting a study to test whether object occlusion had a noticeable effect on the acceptance values assigned to projective terms with respect to a 2.5-dimensional visual stimulus. Based on the data collected, a regression model was constructed and presented. Subsequent analysis showed that the regression model that included the occlusion factor outperformed an adaptation of Regier & Carlson's well-regarded AVS model for that same spatial configuration.

  14. Summer and winter habitat suitability of Marco Polo argali in southeastern Tajikistan: A modeling approach.

    PubMed

    Salas, Eric Ariel L; Valdez, Raul; Michel, Stefan

    2017-11-01

    We modeled summer and winter habitat suitability of Marco Polo argali in the Pamir Mountains in southeastern Tajikistan using these statistical algorithms: Generalized Linear Model, Random Forest, Boosted Regression Tree, Maxent, and Multivariate Adaptive Regression Splines. Using sheep occurrence data collected from 2009 to 2015 and a set of selected habitat predictors, we produced summer and winter habitat suitability maps and determined the important habitat suitability predictors for both seasons. Our results demonstrated that argali selected proximity to riparian areas and greenness as the two most relevant variables for summer, and the degree of slope (gentler slopes between 0° to 20°) and Landsat temperature band for winter. The terrain roughness was also among the most important variables in summer and winter models. Aspect was only significant for winter habitat, with argali preferring south-facing mountain slopes. We evaluated various measures of model performance such as the Area Under the Curve (AUC) and the True Skill Statistic (TSS). Comparing the five algorithms, the AUC scored highest for Boosted Regression Tree in summer (AUC = 0.94) and winter model runs (AUC = 0.94). In contrast, Random Forest underperformed in both model runs.

  15. Locoregional Control of Non-Small Cell Lung Cancer in Relation to Automated Early Assessment of Tumor Regression on Cone Beam Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brink, Carsten, E-mail: carsten.brink@rsyd.dk; Laboratory of Radiation Physics, Odense University Hospital; Bernchou, Uffe

    2014-07-15

    Purpose: Large interindividual variations in volume regression of non-small cell lung cancer (NSCLC) are observable on standard cone beam computed tomography (CBCT) during fractionated radiation therapy. Here, a method for automated assessment of tumor volume regression is presented and its potential use in response adapted personalized radiation therapy is evaluated empirically. Methods and Materials: Automated deformable registration with calculation of the Jacobian determinant was applied to serial CBCT scans in a series of 99 patients with NSCLC. Tumor volume at the end of treatment was estimated on the basis of the first one third and two thirds of the scans. The concordance between estimated and actual relative volume at the end of radiation therapy was quantified by Pearson's correlation coefficient. On the basis of the estimated relative volume, the patients were stratified into 2 groups having volume regressions below or above the population median value. Kaplan-Meier plots of locoregional disease-free rate and overall survival in the 2 groups were used to evaluate the predictive value of tumor regression during treatment. Cox proportional hazards model was used to adjust for other clinical characteristics. Results: Automatic measurement of the tumor regression from standard CBCT images was feasible. Pearson's correlation coefficient between manual and automatic measurement was 0.86 in a sample of 9 patients. Most patients experienced tumor volume regression, and this could be quantified early into the treatment course. Interestingly, patients with pronounced volume regression had worse locoregional tumor control and overall survival. This was significant in patients with non-adenocarcinoma histology. Conclusions: Evaluation of routinely acquired CBCT images during radiation therapy provides biological information on the specific tumor. This could potentially form the basis for personalized response adaptive therapy.
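    The volume-regression measurement rests on the Jacobian determinant of the deformable registration; a minimal 2-D finite-difference sketch on a synthetic displacement field (not the clinical registration pipeline) is:

```python
import numpy as np

def jacobian_determinant_2d(ux, uy, spacing=1.0):
    """Jacobian determinant of the deformation x -> x + u(x) on a 2-D grid.

    ux, uy : displacement components sampled on the grid.  Values below 1
    indicate local shrinkage (regression), above 1 local expansion."""
    duy_dy, duy_dx = np.gradient(uy, spacing)
    dux_dy, dux_dx = np.gradient(ux, spacing)
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

# Synthetic uniform 10% shrinkage toward the image centre
yy, xx = np.mgrid[0:64, 0:64].astype(float)
ux, uy = -0.1 * (xx - 32), -0.1 * (yy - 32)
jac = jacobian_determinant_2d(ux, uy)
mask = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2   # stand-in "tumor" region
print(jac[mask].mean())                            # ~0.81 = relative area after shrinkage
```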

  16. Analysis and improvement measures of flight delay in China

    NASA Astrophysics Data System (ADS)

    Zang, Yuhang

    2017-03-01

    First, this paper establishes a principal component regression model to analyze the data quantitatively, using principal component analysis to obtain the three principal component factors of flight delays. The least squares method is then used to analyze these factors and obtain the regression equation by substitution; the results show that the main cause of flight delays is the airlines themselves, followed by weather and traffic. To address these problems, the paper focuses on the controllable aspect, traffic flow control, and an adaptive genetic queuing model is established for the runway terminal area. An optimization method is established in which fifteen planes land simultaneously on three runways, based on Beijing Capital International Airport; comparison of the results with the existing FCFS algorithm demonstrates the superiority of the model.
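    Principal component regression of the kind used above amounts to regressing the response on the leading principal-component scores and mapping the coefficients back to the original variables; a generic sketch on synthetic data (the predictor names in the comment are only illustrative):

```python
import numpy as np

def pcr_fit(X, y, n_components=3):
    """Principal component regression: PCA on standardized X, then OLS on scores."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sigma
    # Principal directions from the SVD of the standardized design matrix
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    V = Vt[:n_components].T                       # loadings of the leading components
    scores = Z @ V
    A = np.column_stack([np.ones(len(y)), scores])
    gamma, *_ = np.linalg.lstsq(A, y, rcond=None)
    beta = V @ gamma[1:] / sigma                  # coefficients in original units
    intercept = gamma[0] - mu @ beta
    return intercept, beta

# Synthetic delay data: columns might stand for airline load, weather index, traffic, ...
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 300)
b0, b = pcr_fit(X, y)
print(b0, b)
```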

  17. Consistent model identification of varying coefficient quantile regression with BIC tuning parameter selection

    PubMed Central

    Zheng, Qi; Peng, Limin

    2016-01-01

    Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index. We establish the oracle properties of the resulting estimator of coefficient function. Furthermore, we formally investigate a BIC-type uniform tuning parameter selector and show that it can ensure consistent model selection. Our numerical studies confirm the theoretical findings and illustrate an application of the new variable selection procedure. PMID:28008212
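    A rough sense of penalized quantile regression across a band of quantile levels can be conveyed with scikit-learn's QuantileRegressor (pinball loss with an L1 penalty, available in recent scikit-learn versions). The adaptive weighting via a pilot median fit below is an illustrative stand-in for the paper's adaptively weighted LASSO, not its actual algorithm, and the data are simulated.

```python
# Sketch: adaptively weighted L1-penalized quantile regression at several quantile levels.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(2)
n, p = 300, 5
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + rng.standard_t(df=3, size=n)     # only the first covariate has an effect

pilot = QuantileRegressor(quantile=0.5, alpha=0.0).fit(X, y)   # unpenalized pilot fit
weights = 1.0 / (np.abs(pilot.coef_) + 1e-6)                   # adaptive weights (assumed form)

for tau in (0.25, 0.5, 0.75):
    fit = QuantileRegressor(quantile=tau, alpha=0.05).fit(X / weights, y)
    print(tau, np.round(fit.coef_ / weights, 3))               # coefficients on the original scale
```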

  18. Severity-Based Adaptation with Limited Data for ASR to Aid Dysarthric Speakers

    PubMed Central

    Mustafa, Mumtaz Begum; Salim, Siti Salwah; Mohamed, Noraini; Al-Qatab, Bassam; Siong, Chng Eng

    2014-01-01

    Automatic speech recognition (ASR) is currently used in many assistive technologies, such as helping individuals with speech impairment in their communication ability. One challenge in ASR for speech-impaired individuals is the difficulty in obtaining a good speech database of impaired speakers for building an effective speech acoustic model. Because there are very few existing databases of impaired speech, which are also limited in size, the obvious solution to build a speech acoustic model of impaired speech is by employing adaptation techniques. However, issues that have not been addressed in existing studies in the area of adaptation for speech impairment are as follows: (1) identifying the most effective adaptation technique for impaired speech; and (2) the use of suitable source models to build an effective impaired-speech acoustic model. This research investigates these two issues for dysarthria, a type of speech impairment affecting millions of people. We applied both unimpaired and impaired speech as the source model with well-known adaptation techniques such as maximum likelihood linear regression (MLLR) and constrained MLLR (C-MLLR). The recognition accuracy of each impaired speech acoustic model is measured in terms of word error rate (WER), with further assessments, including phoneme insertion, substitution and deletion rates. Unimpaired speech, when combined with limited high-quality speech-impaired data, improves the performance of ASR systems in recognising severely impaired dysarthric speech. The C-MLLR adaptation technique was also found to be better than MLLR in recognising mildly and moderately impaired speech, based on the statistical analysis of the WER. It was found that phoneme substitution was the biggest contributing factor to WER in dysarthric speech for all levels of severity. The results show that the speech acoustic models derived from suitable adaptation techniques improve the performance of ASR systems in recognising impaired speech with limited adaptation data. PMID:24466004

  19. Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.

    PubMed

    Hoya, T; Chambers, J A

    2001-01-01

    In many pattern classification problems, an intelligent neural system is required which can learn the newly encountered but misclassified patterns incrementally, while keeping a good classification performance over the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models are considered in the network shrinking, which are motivated by biological studies of the brain. The learning capability of the proposed scheme is investigated through extensive simulation studies.
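    At its core, a generalized regression neural network is a kernel-weighted average of stored targets (Nadaraya-Watson regression); the compact sketch below shows only that predictor, not the paper's growing and dual-stage shrinking machinery, and all data and the smoothing parameter are assumptions.

```python
# Minimal GRNN-style predictor: Gaussian-kernel-weighted average of stored training targets.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Predict each query point as a kernel-weighted mean of the stored targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(3)
X_train = rng.random((100, 2))
y_train = np.sin(4 * X_train[:, 0]) + X_train[:, 1]
X_query = rng.random((3, 2))
print(np.round(grnn_predict(X_train, y_train, X_query), 3))
```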

  20. Automatic segmentation and classification of mycobacterium tuberculosis with conventional light microscopy

    NASA Astrophysics Data System (ADS)

    Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui

    2015-12-01

    This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis with conventional light microscopy. First, the candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by an adaptive threshold segmentation based on the adaptive scale Gaussian filter. The scale of the Gaussian filter is determined according to the color model of the bacillus objects. Then the candidate objects are extracted integrally after region merging and contamination elimination. Second, the shape features of the bacillus objects are characterized by the Hu moments, compactness, eccentricity, and roughness, which are used to classify the single, touching and non-bacillus objects. We evaluated logistic regression, random forest, and intersection kernel support vector machine classifiers for classifying the bacillus objects. Experimental results demonstrate that the proposed method yields high robustness and accuracy. The logistic regression classifier performs best, with an accuracy of 91.68%.

  1. Reduced rank regression via adaptive nuclear norm penalization

    PubMed Central

    Chen, Kun; Dong, Hongbo; Chan, Kung-Sik

    2014-01-01

    Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
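    The key computational step named in this abstract, an adaptively soft-thresholded singular value decomposition, is easy to illustrate: each singular value is shrunk by a weight that decreases as the singular value grows. The weighting scheme and tuning constants below are assumptions for demonstration, not the authors' exact choices.

```python
# Sketch of adaptive singular value soft-thresholding for low-rank matrix approximation.
import numpy as np

def adaptive_svt(C, lam=0.5, gamma=2.0):
    """Shrink singular values of C with weights ~ 1/d**gamma (larger values penalized less)."""
    U, d, Vt = np.linalg.svd(C, full_matrices=False)
    w = 1.0 / (d + 1e-8) ** gamma
    d_shrunk = np.maximum(d - lam * w, 0.0)
    return U @ np.diag(d_shrunk) @ Vt

rng = np.random.default_rng(4)
B = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 8))   # true rank-3 coefficient matrix
C_hat = B + 0.1 * rng.normal(size=B.shape)               # noisy unrestricted estimate
print("estimated rank:", np.linalg.matrix_rank(adaptive_svt(C_hat)))
```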

  2. An immune clock of human pregnancy

    PubMed Central

    Aghaeepour, Nima; Ganio, Edward A.; Mcilwain, David; Tsai, Amy S.; Tingle, Martha; Van Gassen, Sofie; Gaudilliere, Dyani K.; Baca, Quentin; McNeil, Leslie; Okada, Robin; Ghaemi, Mohammad S.; Furman, David; Wong, Ronald J.; Winn, Virginia D.; Druzin, Maurice L.; El-Sayed, Yaser Y.; Quaintance, Cecele; Gibbs, Ronald; Darmstadt, Gary L.; Shaw, Gary M.; Stevenson, David K.; Tibshirani, Robert; Nolan, Garry P.; Lewis, David B.; Angst, Martin S.; Gaudilliere, Brice

    2017-01-01

    The maintenance of pregnancy relies on finely tuned immune adaptations. We demonstrate that these adaptations are precisely timed, reflecting an immune clock of pregnancy in women delivering at term. Using mass cytometry, the abundance and functional responses of all major immune cell subsets were quantified in serial blood samples collected throughout pregnancy. Cell signaling–based Elastic Net, a regularized regression method adapted from the elastic net algorithm, was developed to infer and prospectively validate a predictive model of interrelated immune events that accurately captures the chronology of pregnancy. Model components highlighted existing knowledge and revealed previously unreported biology, including a critical role for the interleukin-2–dependent STAT5ab signaling pathway in modulating T cell function during pregnancy. These findings unravel the precise timing of immunological events occurring during a term pregnancy and provide the analytical framework to identify immunological deviations implicated in pregnancy-related pathologies. PMID:28864494

  3. The challenge of staying happier: testing the Hedonic Adaptation Prevention model.

    PubMed

    Sheldon, Kennon M; Lyubomirsky, Sonja

    2012-05-01

    The happiness that comes from a particular success or change in fortune abates with time. The Hedonic Adaptation Prevention (HAP) model specifies two routes by which the well-being gains derived from a positive life change are eroded--the first involving bottom-up processes (i.e., declining positive emotions generated by the positive change) and the second involving top-down processes (i.e., increased aspirations for even more positivity). The model also specifies two moderators that can forestall these processes--continued appreciation of the original life change and continued variety in change-related experiences. The authors formally tested the predictions of the HAP model in a 3-month three-wave longitudinal study of 481 students. Temporal path analyses and moderated regression analyses provided good support for the model. Implications for the stability of well-being, the feasibility of "the pursuit of happiness," and the appeal of overconsumption are discussed.

  4. Multi-parameters monitoring during traditional Chinese medicine concentration process with near infrared spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Liu, Ronghua; Sun, Qiaofeng; Hu, Tian; Li, Lian; Nie, Lei; Wang, Jiayue; Zhou, Wanhui; Zang, Hengchang

    2018-03-01

    As a powerful process analytical technology (PAT) tool, near infrared (NIR) spectroscopy has been widely used in real-time monitoring. In this study, NIR spectroscopy was applied to monitor multiple parameters of the traditional Chinese medicine (TCM) Shenzhiling oral liquid during the concentration process to guarantee product quality. Five lab-scale batches were employed to construct quantitative models to determine five chemical ingredients and a physical property (sample density) during the concentration process. Paeoniflorin, albiflorin, liquiritin and sample density were modeled by partial least squares regression (PLSR), while the contents of glycyrrhizic acid and cinnamic acid were modeled by support vector machine regression (SVMR). Standard normal variate (SNV) and/or Savitzky-Golay (SG) smoothing with derivative methods were adopted for spectral pretreatment. Variable selection methods including correlation coefficient (CC), competitive adaptive reweighted sampling (CARS) and interval partial least squares regression (iPLS) were applied to optimize the models. The results indicated that NIR spectroscopy was an effective tool for monitoring the concentration process of Shenzhiling oral liquid.
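    The pretreatment-plus-PLSR pipeline described above can be sketched with standard tools: standard normal variate scaling, a Savitzky-Golay derivative, and partial least squares regression. The synthetic spectra, reference values, window length, and number of latent variables are all illustrative assumptions.

```python
# Chemometrics-style sketch: SNV + Savitzky-Golay first derivative, then PLS regression.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 200)).cumsum(axis=1)                 # fake NIR spectra (40 x 200 wavelengths)
y = 0.01 * X[:, 50] + rng.normal(scale=0.05, size=40)         # fake reference concentration

X_pre = savgol_filter(snv(X), window_length=11, polyorder=2, deriv=1, axis=1)
pls = PLSRegression(n_components=5).fit(X_pre, y)
print("calibration R^2:", round(pls.score(X_pre, y), 3))
```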

  5. Determining minimum staffing levels during snowstorms using an integrated simulation, regression, and reliability model.

    PubMed

    Kunkel, Amber; McLay, Laura A

    2013-03-01

    Emergency medical services (EMS) provide life-saving care and hospital transport to patients with severe trauma or medical conditions. Severe weather events, such as snow events, may lead to adverse patient outcomes by increasing call volumes and service times. Adequate staffing levels during such weather events are critical for ensuring that patients receive timely care. To determine staffing levels that depend on weather, we propose a model that uses a discrete event simulation of a reliability model to identify minimum staffing levels that provide timely patient care, with regression used to provide the input parameters. The system is said to be reliable if there is a high degree of confidence that ambulances can immediately respond to a given proportion of patients (e.g., 99 %). Four weather scenarios capture varying levels of snow falling and snow on the ground. An innovative feature of our approach is that we evaluate the mitigating effects of different extrinsic response policies and intrinsic system adaptation. The models use data from Hanover County, Virginia to quantify how snow reduces EMS system reliability and necessitates increasing staffing levels. The model and its analysis can assist in EMS preparedness by providing a methodology to adjust staffing levels during weather events. A key observation is that when it is snowing, intrinsic system adaptation has similar effects on system reliability as one additional ambulance.

  6. [Hungarian health resource allocation from the viewpoint of the English methodology].

    PubMed

    Fadgyas-Freyler, Petra

    2018-02-01

    This paper describes both the English health resource allocation and the attempt of its Hungarian adaptation. We describe calculations for a Hungarian regression model using the English 'weighted capitation formula'. The model has proven statistically correct. New independent variables and homogenous regional units have to be found for Hungary. The English method can be used with adequate variables. Hungarian patient-level health data can support a much more sophisticated model. Further research activities are needed. Orv Hetil. 2018; 159(5): 183-191.

  7. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515

  8. Hypnotizability as a Function of Repression, Adaptive Regression, and Mood

    ERIC Educational Resources Information Center

    Silver, Maurice Joseph

    1974-01-01

    Forty male undergraduates were assessed in a personality assessment session and a hypnosis session. The personality traits studied were repressive style and adaptive regression, while the transitory variable was mood prior to hypnosis. Hypnotizability was a significant interactive function of repressive style and mood, but not of adaptive…

  9. Adaptive variation in Pinus ponderosa from Intermountain Regions. I. Snake and Salmon River Basins

    Treesearch

    Gerald Rehfeldt

    1986-01-01

    Genetic differentiation of 64 populations from central Idaho was studied in field, greenhouse and laboratory tests. Analyses of variables reflecting growth potential, phenology, morphology, cold hardiness and periodicity of shoot elongation revealed population differentiation for a variety of traits. Regression models related as much as 61 percent of the variance among...

  10. Approaches to Forecasting Demands for Library Network Services. Report No. 10.

    ERIC Educational Resources Information Center

    Kang, Jong Hoa

    The problem of forecasting monthly demands for library network services is considered in terms of using forecasts as inputs to policy analysis models, and in terms of using forecasts to aid in the making of budgeting and staffing decisions. Box-Jenkins time-series methodology, adaptive filtering, and regression approaches are examined and compared…

  11. Cross-sectional time series and multivariate adaptive regression splines models using accelerometry and heart rate predict energy expenditure of preschoolers

    USDA-ARS's Scientific Manuscript database

    Prediction equations of energy expenditure (EE) using accelerometers and miniaturized heart rate (HR) monitors have been developed in older children and adults but not in preschool-aged children. Because the relationships between accelerometer counts (ACs), HR, and EE are confounded by growth and ma...

  12. Adaptive and predictive control of a simulated robot arm.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Garrido, Jesús A; Luque, Niceto R; Ros, Eduardo

    2013-06-01

    In this work, a basic cerebellar neural layer and a machine learning engine are embedded in a recurrent loop that avoids dealing with the motor error or distal error problem. The presented approach learns the motor control based on available sensor error estimates (position, velocity, and acceleration) without explicitly knowing the motor errors. The paper focuses on how to decompose the input into different components in order to facilitate the learning process using an automatic incremental learning model (locally weighted projection regression (LWPR) algorithm). LWPR incrementally learns the forward model of the robot arm and provides the cerebellar module with optimal pre-processed signals. We present a recurrent adaptive control architecture in which an adaptive feedback (AF) controller guarantees a precise, compliant, and stable control during the manipulation of objects. Therefore, this approach efficiently integrates a bio-inspired module (cerebellar circuitry) with a machine learning component (LWPR). The cerebellar-LWPR synergy makes the robot adaptable to changing conditions. We evaluate how this scheme scales for robot arms with a high number of degrees of freedom (DOFs) using a simulated model of a robot arm of the new generation of lightweight robots (LWRs).

  13. Improvement of near infrared spectroscopic (NIRS) analysis of caffeine in roasted Arabica coffee by variable selection method of stability competitive adaptive reweighted sampling (SCARS).

    PubMed

    Zhang, Xuan; Li, Wei; Yin, Bin; Chen, Weizhong; Kelly, Declan P; Wang, Xiaoxin; Zheng, Kaiyi; Du, Yiping

    2013-10-01

    Coffee is the most heavily consumed beverage in the world after water, and quality is a key consideration in its commercial trade. Therefore, caffeine content, which has a significant effect on the final quality of coffee products, needs to be determined quickly and reliably by new analytical techniques. The main purpose of this work was to establish a powerful and practical analytical method based on near infrared spectroscopy (NIRS) and chemometrics for quantitative determination of caffeine content in roasted Arabica coffees. Ground coffee samples spanning a wide range of roast levels were analyzed by NIR, while their caffeine contents were quantitatively determined by the commonly used HPLC-UV method to provide the reference values. Calibration models based on chemometric analyses of the NIR spectral data and reference concentrations of coffee samples were then developed. Partial least squares (PLS) regression was used to construct the models. Furthermore, diverse spectra pretreatment and variable selection techniques were applied in order to obtain robust and reliable reduced-spectrum regression models. Comparing the quality of the different models constructed, the application of second-derivative pretreatment and stability competitive adaptive reweighted sampling (SCARS) variable selection provided a notably improved regression model, with a root mean square error of cross validation (RMSECV) of 0.375 mg/g and a correlation coefficient (R) of 0.918 at a PLS factor of 7. An independent test set was used to assess the model, yielding a root mean square error of prediction (RMSEP) of 0.378 mg/g, a mean relative error of 1.976% and a mean relative standard deviation (RSD) of 1.707%. Thus, the results provided by the high-quality calibration model revealed the feasibility of NIR spectroscopy for at-line application to predict the caffeine content of unknown roasted coffee samples, thanks to the short analysis time of a few seconds and the non-destructive nature of NIRS. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Improvement of near infrared spectroscopic (NIRS) analysis of caffeine in roasted Arabica coffee by variable selection method of stability competitive adaptive reweighted sampling (SCARS)

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Li, Wei; Yin, Bin; Chen, Weizhong; Kelly, Declan P.; Wang, Xiaoxin; Zheng, Kaiyi; Du, Yiping

    2013-10-01

    Coffee is the most heavily consumed beverage in the world after water, and quality is a key consideration in its commercial trade. Therefore, caffeine content, which has a significant effect on the final quality of coffee products, needs to be determined quickly and reliably by new analytical techniques. The main purpose of this work was to establish a powerful and practical analytical method based on near infrared spectroscopy (NIRS) and chemometrics for quantitative determination of caffeine content in roasted Arabica coffees. Ground coffee samples spanning a wide range of roast levels were analyzed by NIR, while their caffeine contents were quantitatively determined by the commonly used HPLC-UV method to provide the reference values. Calibration models based on chemometric analyses of the NIR spectral data and reference concentrations of coffee samples were then developed. Partial least squares (PLS) regression was used to construct the models. Furthermore, diverse spectra pretreatment and variable selection techniques were applied in order to obtain robust and reliable reduced-spectrum regression models. Comparing the quality of the different models constructed, the application of second-derivative pretreatment and stability competitive adaptive reweighted sampling (SCARS) variable selection provided a notably improved regression model, with a root mean square error of cross validation (RMSECV) of 0.375 mg/g and a correlation coefficient (R) of 0.918 at a PLS factor of 7. An independent test set was used to assess the model, yielding a root mean square error of prediction (RMSEP) of 0.378 mg/g, a mean relative error of 1.976% and a mean relative standard deviation (RSD) of 1.707%. Thus, the results provided by the high-quality calibration model revealed the feasibility of NIR spectroscopy for at-line application to predict the caffeine content of unknown roasted coffee samples, thanks to the short analysis time of a few seconds and the non-destructive nature of NIRS.

  15. Analysing Twitter and web queries for flu trend prediction.

    PubMed

    Santos, José Carlos; Matos, Sérgio

    2014-05-07

    Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p<0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.
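    The second modeling stage described here, a multiple linear regression from tweet and query-log frequencies to an incidence series evaluated by correlation, can be sketched as follows; all series, the train/test split, and the feature set are synthetic assumptions.

```python
# Sketch: linear regression from social media signal frequencies to flu incidence, scored by correlation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
weeks = 100
tweet_freq = rng.random(weeks)                          # weekly fraction of flu-related tweets
query_freq = 0.8 * tweet_freq + 0.2 * rng.random(weeks) # weekly fraction of flu-related queries
incidence = 50 * tweet_freq + 30 * query_freq + rng.normal(scale=2, size=weeks)

X = np.column_stack([tweet_freq, query_freq])
model = LinearRegression().fit(X[:70], incidence[:70])  # "current season"
pred = model.predict(X[70:])                            # "following season"
print("held-out correlation:", round(pearsonr(pred, incidence[70:])[0], 2))
```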

  16. Physician leadership styles and effectiveness: an empirical study.

    PubMed

    Xirasagar, Sudha; Samuels, Michael E; Stoskopf, Carleen H

    2005-12-01

    The authors study the association between physician leadership styles and leadership effectiveness. Executive directors of community health centers were surveyed (269 respondents; response rate = 40.9 percent) for their perceptions of the medical director's leadership behaviors and effectiveness, using an adapted Multifactor Leadership Questionnaire (43 items on a 0-4 point Likert-type scale), with additional questions on demographics and the center's clinical goals and achievements. The authors hypothesize that transformational leadership would be more positively associated with executive directors' ratings of effectiveness, satisfaction with the leader, and subordinate extra effort, as well as the center's clinical goal achievement, than transactional or laissez-faire leadership. Separate ordinary least squares regressions were used to model each of the effectiveness measures, and general linear model regression was used to model clinical goal achievement. Results support the hypothesis and suggest that physician leadership development using the transformational leadership model may result in improved health care quality and cost control.

  17. Real-time quality monitoring in debutanizer column with regression tree and ANFIS

    NASA Astrophysics Data System (ADS)

    Siddharth, Kumar; Pathak, Amey; Pani, Ajaya Kumar

    2018-05-01

    A debutanizer column is an integral part of any petroleum refinery. Online composition monitoring of debutanizer column outlet streams is highly desirable in order to maximize the production of liquefied petroleum gas. In this article, data-driven models for the debutanizer column are developed for real-time composition monitoring. The dataset used has seven process variables as inputs, and the output is the butane concentration in the debutanizer column bottom product. The input-output dataset is divided equally into a training (calibration) set and a validation (testing) set. The training set data were used to develop fuzzy inference, adaptive neuro-fuzzy inference system (ANFIS) and regression tree models for the debutanizer column. The accuracy of the developed models was evaluated by simulating the models with the validation dataset. It is observed that the ANFIS model has better estimation accuracy than the other models developed in this work and many data-driven models proposed so far in the literature for the debutanizer column.
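    One of the soft sensors compared above, the regression tree, is simple to sketch for this setting: seven process variables in, bottom-product butane concentration out, with half the data held back for validation. The simulated data, tree depth, and error metric are assumptions.

```python
# Minimal regression-tree soft-sensor sketch for an inferential composition estimator.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 7))                                      # seven process variables
y = 0.4 * X[:, 0] - 0.2 * X[:, 3] + 0.05 * rng.normal(size=500)    # butane concentration proxy

X_train, X_test, y_train, y_test = X[:250], X[250:], y[:250], y[250:]
tree = DecisionTreeRegressor(max_depth=5).fit(X_train, y_train)
print("validation MSE:", round(mean_squared_error(y_test, tree.predict(X_test)), 4))
```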

  18. Coping with post-war mental health problems among survivors of violence in Northern Uganda: Findings from the WAYS study.

    PubMed

    Amone-P'Olak, Kennedy; Omech, Bernard

    2018-05-01

    Cognitive emotion regulation strategies (CERS) and mental health problems were assessed in a sample of war-affected youth in Northern Uganda. Univariable and multivariable regression models were fitted to assess the influence of CERS on mental health problems. Maladaptive cognitive emotion regulation strategies (e.g., rumination) were significantly associated with more mental health problems, while adaptive cognitive emotion regulation strategies (e.g., putting into perspective) were associated with reporting fewer symptoms of mental health problems. The youth with significant scores on mental health problems (scores ≥ 85th percentile) reported more frequent use of maladaptive than adaptive strategies. Interventions to reduce mental health problems should focus on enhancing the use of adaptive strategies.

  19. Body Fat Percentage Prediction Using Intelligent Hybrid Approaches

    PubMed Central

    Shao, Yuehjen E.

    2014-01-01

    Excess of body fat often leads to obesity. Obesity is typically associated with serious medical diseases, such as cancer, heart disease, and diabetes. Accordingly, knowing the body fat is an extremely important issue since it affects everyone's health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods are often associated with hassle and/or high costs. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches to obtain fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling includes the use of MR and MARS to obtain fewer but more important sets of explanatory variables. In the second stage, the remaining important variables are served as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models. PMID:24723804
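    The two-stage idea, screen explanatory variables first and then feed only the retained ones to a second forecasting method, can be sketched as below. Using LassoCV for the screening stage and SVR for the forecasting stage is an illustrative substitution for the paper's MR/MARS and ANN/SVR components, and the data are simulated body measurements.

```python
# Two-stage sketch: sparse screening of explanatory variables, then support vector regression.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(8)
X = rng.normal(size=(250, 13))                                          # candidate body measurements
y = 0.6 * X[:, 0] + 0.3 * X[:, 4] + rng.normal(scale=0.2, size=250)     # body fat percentage proxy

stage1 = LassoCV(cv=5).fit(X, y)
keep = np.flatnonzero(np.abs(stage1.coef_) > 1e-3)                      # stage 1: retained variables
stage2 = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X[:, keep], y)
print("retained variables:", keep, "stage-2 R^2:", round(stage2.score(X[:, keep], y), 3))
```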

  20. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine

    PubMed Central

    Liu, Zhiyuan; Wang, Changhui

    2015-01-01

    In this paper, a new method for mass air flow (MAF) sensor error compensation and an online updating error map (or lookup table) due to installation and aging in a diesel engine is developed. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against the simulation data from engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method. PMID:26512675
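    The central representational trick in this abstract is that a 2D lookup table evaluated by piecewise bilinear interpolation can be written as a dot product between a membership (regression) vector and a parameter vector, which is what makes online estimation of the map possible. The grid axes, their ranges, and the query point below are assumptions for illustration.

```python
# Sketch: a 2D map as a dot product of bilinear membership weights and map parameters.
import numpy as np

def membership_vector(x, y, x_grid, y_grid):
    """Bilinear membership weights over a rectangular grid, flattened into one regressor."""
    i = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)
    j = np.clip(np.searchsorted(y_grid, y) - 1, 0, len(y_grid) - 2)
    tx = (x - x_grid[i]) / (x_grid[i + 1] - x_grid[i])
    ty = (y - y_grid[j]) / (y_grid[j + 1] - y_grid[j])
    phi = np.zeros((len(x_grid), len(y_grid)))
    phi[i, j] = (1 - tx) * (1 - ty)
    phi[i + 1, j] = tx * (1 - ty)
    phi[i, j + 1] = (1 - tx) * ty
    phi[i + 1, j + 1] = tx * ty
    return phi.ravel()

fuel_grid = np.linspace(0.0, 60.0, 7)       # fuel injection quantity axis (assumed units/range)
speed_grid = np.linspace(800.0, 4000.0, 9)  # engine speed axis (assumed units/range)
theta = np.zeros(fuel_grid.size * speed_grid.size)   # map parameters to be estimated online
phi = membership_vector(25.0, 2100.0, fuel_grid, speed_grid)
print("interpolated MAF error at the query point:", float(phi @ theta))
```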

  1. Spatio-Temporal Regression Based Clustering of Precipitation Extremes in a Presence of Systematically Missing Covariates

    NASA Astrophysics Data System (ADS)

    Kaiser, Olga; Martius, Olivia; Horenko, Illia

    2017-04-01

    Regression-based Generalized Pareto Distribution (GPD) models are often used to describe the dynamics of hydrological threshold excesses, relying on the explicit availability of all of the relevant covariates. However, in real applications the complete set of relevant covariates might not be available. In this context, it has been shown that under weak assumptions the influence of systematically missing covariates can be reflected by nonstationary and nonhomogeneous dynamics. We present a data-driven, semiparametric and adaptive approach for spatio-temporal regression-based clustering of threshold excesses in the presence of systematically missing covariates. The nonstationary and nonhomogeneous behavior of threshold excesses is described by a set of locally stationary GPD models, whose parameters are expressed as regression models, and a non-parametric spatio-temporal hidden switching process. Exploiting the nonparametric Finite Element time-series analysis Methodology (FEM) with Bounded Variation of the model parameters (BV) for resolving the spatio-temporal switching process, the approach goes beyond the strong a priori assumptions made in standard latent class models such as Mixture Models and Hidden Markov Models. Additionally, the presented FEM-BV-GPD provides a pragmatic description of the corresponding spatial dependence structure by grouping together all locations that exhibit similar behavior of the switching process. The performance of the framework is demonstrated on daily accumulated precipitation series over 17 different locations in Switzerland from 1981 to 2013, showing that the introduced approach allows for a better description of the historical data.
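    The elementary building block here, a Generalized Pareto Distribution fitted to threshold excesses, is easy to sketch; the clustering and FEM-BV switching machinery is not reproduced. The synthetic precipitation series and the 95th-percentile threshold are assumptions.

```python
# Sketch: fit a GPD to excesses of a daily precipitation series over a high threshold.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(9)
precip = rng.gamma(shape=0.8, scale=5.0, size=12000)     # stand-in daily precipitation (mm)
threshold = np.quantile(precip, 0.95)
excesses = precip[precip > threshold] - threshold

shape, loc, scale = genpareto.fit(excesses, floc=0.0)    # location fixed at zero for excesses
print("GPD shape:", round(shape, 3), "scale:", round(scale, 3))
```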

  2. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data

    PubMed Central

    García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio

    2016-01-01

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with the multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC–MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC–MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing milling machine improvements. Finally, the conclusions of this study are presented. PMID:28787882

  3. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data.

    PubMed

    García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio

    2016-01-28

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with the multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC-MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC-MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing milling machine improvements. Finally, the conclusions of this study are presented.

  4. Robust colour calibration of an imaging system using a colour space transform and advanced regression modelling.

    PubMed

    Jackman, Patrick; Sun, Da-Wen; Elmasry, Gamal

    2012-08-01

    A new algorithm for the conversion of device dependent RGB colour data into device independent L*a*b* colour data without introducing noticeable error has been developed. By combining a linear colour space transform and advanced multiple regression methodologies it was possible to predict L*a*b* colour data with less than 2.2 colour units of error (CIE 1976). By transforming the red, green and blue colour components into new variables that better reflect the structure of the L*a*b* colour space, a low colour calibration error was immediately achieved (ΔE(CAL) = 14.1). Application of a range of regression models on the data further reduced the colour calibration error substantially (multilinear regression ΔE(CAL) = 5.4; response surface ΔE(CAL) = 2.9; PLSR ΔE(CAL) = 2.6; LASSO regression ΔE(CAL) = 2.1). Only the PLSR models deteriorated substantially under cross validation. The algorithm is adaptable and can be easily recalibrated to any working computer vision system. The algorithm was tested on a typical working laboratory computer vision system and delivered only a very marginal loss of colour information ΔE(CAL) = 2.35. Colour features derived on this system were able to safely discriminate between three classes of ham with 100% correct classification whereas colour features measured on a conventional colourimeter were not. Copyright © 2012 Elsevier Ltd. All rights reserved.
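    The calibration idea, regressing device-independent L*a*b* values on transformed RGB readings and scoring the result by the CIE 1976 colour difference (the Euclidean distance in L*a*b*), can be sketched as follows; the colour data, the feature expansion, and the number of PLS components are illustrative assumptions rather than the paper's transform.

```python
# Sketch: PLS regression from expanded RGB features to L*a*b*, scored by mean ΔE (CIE 1976).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(10)
rgb = rng.random((120, 3))
lab_true = rgb @ np.array([[90.0, 40.0, 10.0],
                           [5.0, -60.0, 30.0],
                           [3.0, 15.0, -70.0]]) + rng.normal(scale=1.0, size=(120, 3))

# simple polynomial/interaction expansion of RGB (an assumed stand-in for the colour space transform)
features = np.column_stack([rgb, rgb ** 2, rgb[:, [0]] * rgb[:, [1]], rgb[:, [1]] * rgb[:, [2]]])
pls = PLSRegression(n_components=6).fit(features[:80], lab_true[:80])
delta_e = np.linalg.norm(pls.predict(features[80:]) - lab_true[80:], axis=1)
print("mean ΔE on held-out patches:", round(delta_e.mean(), 2))
```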

  5. Use of multiple regression models in the study of sandhopper orientation under natural conditions

    NASA Astrophysics Data System (ADS)

    Marchetti, Giovanni M.; Scapini, Felicita

    2003-10-01

    In sandhoppers (Amphipoda; Talitridae), typical dwellers of the supralittoral zone of sandy beaches, orientation with respect to the sun and landscape vision is adapted to the local direction of the shoreline. Variation of this behavioural adaptation can be related to the characteristics of the beach. Measures of orientation with respect to the shoreline direction can thus be made as a tool to assess beach stability versus changeability, once the sources of variation are correctly interpreted. Orientation of animals can be studied by statistical analysis of directions taken after release in nature. In this paper some new tools for exploring directional data are reviewed, with special emphasis on non-parametric smoothers and regression models. Results from a large study concerning one species of sandhoppers, Talitrus saltator (Montagu), from an exposed sandy beach in northeastern Tunisia are presented. Seasonal differences in orientation behaviour were shown with a higher scatter in autumn with respect to spring. The higher scatter shown in autumn depended both on intrinsic (sex) and external (climatic conditions and landscape visibility) factors and was related to the tendency of this species to migrate towards the dune anticipating winter conditions.

  6. Human impact on sediment fluxes within the Blue Nile and Atbara River basins

    NASA Astrophysics Data System (ADS)

    Balthazar, Vincent; Vanacker, Veerle; Girma, Atkilt; Poesen, Jean; Golla, Semunesh

    2013-01-01

    A regional assessment of the spatial variability in sediment yields allows filling the gap between detailed, process-based understanding of erosion at field scale and empirical sediment flux models at global scale. In this paper, we focus on the intrabasin variability in sediment yield within the Blue Nile and Atbara basins as biophysical and anthropogenic factors are presumably acting together to accelerate soil erosion. The Blue Nile and Atbara River systems are characterized by an important spatial variability in sediment fluxes, with area-specific sediment yield (SSY) values ranging between 4 and 4935 t/km2/y. Statistical analyses show that 41% of the observed variation in SSY can be explained by remote sensing proxy data of surface vegetation cover, rainfall intensity, mean annual temperature, and human impact. The comparison of a locally adapted regression model with global predictive sediment flux models indicates that global flux models such as the ART and BQART models are less suited to capture the spatial variability in area-specific sediment yields (SSY), but they are very efficient to predict absolute sediment yields (SY). We developed a modified version of the BQART model that estimates the human influence on sediment yield based on a high resolution composite measure of local human impact (human footprint index) instead of countrywide estimates of GNP/capita. Our modified version of the BQART is able to explain 80% of the observed variation in SY for the Blue Nile and Atbara basins and thereby performs only slightly less than locally adapted regression models.

  7. Exploring the climate change concerns of striped catfish producers in the Mekong Delta, Vietnam.

    PubMed

    Nguyen, Anh Lam; Truong, Minh Hoang; Verreth, Johan Aj; Leemans, Rik; Bosma, Roel H; De Silva, Sena S

    2015-01-01

    This study investigated the perceptions of and adaptations to climate change impacts among 235 pangasius farmers in the Mekong Delta, Vietnam. Data were collected using semi-structured household surveys in six provinces, from three regions along the Mekong river branches. A Chi-Square test was used to determine the association between variables, and a logit regression model was employed to identify factors correlated with farmers' perceptions and adaptations. Less than half of the respondents were concerned about climate change and sought suitable adaptation measures to alleviate its impacts. Improving information on climate change and introducing early warning systems could improve the adaptive capacity of pangasius farmers, in particular for those farmers who were not yet concerned. Farmers relied strongly on technical support from government agencies, but farmers in the coastal provinces did not express the need for training by these institutions. This contrasting result requires further assessment of the effectiveness of adaptation measures such as breeding salinity-tolerant pangasius.

  8. Multilevel regression analyses to investigate the relationship between two variables over time: examining the longitudinal association between intrusion and avoidance.

    PubMed

    Suvak, Michael K; Walling, Sherry M; Iverson, Katherine M; Taft, Casey T; Resick, Patricia A

    2009-12-01

    Multilevel modeling is a powerful and flexible framework for analyzing nested data structures (e.g., repeated measures or longitudinal designs). The authors illustrate a series of multilevel regression procedures that can be used to elucidate the nature of the relationship between two variables across time. The goal is to help trauma researchers become more aware of the utility of multilevel modeling as a tool for increasing the field's understanding of posttraumatic adaptation. These procedures are demonstrated by examining the relationship between two posttraumatic symptoms, intrusion and avoidance, across five assessment points in a sample of rape and robbery survivors (n = 286). Results revealed that changes in intrusion were highly correlated with changes in avoidance over the 18-month posttrauma period.

  9. Factor regression for interpreting genotype-environment interaction in bread-wheat trials.

    PubMed

    Baril, C P

    1992-05-01

    The French INRA wheat (Triticum aestivum L. em Thell.) breeding program is based on multilocation trials to produce high-yielding, adapted lines for a wide range of environments. Differential genotypic responses to variable environmental conditions limit the accuracy of yield estimations. Factor regression was used to partition the genotype-environment (GE) interaction into four biologically interpretable terms. Yield data were analyzed from 34 wheat genotypes grown in four environments using 12 auxiliary agronomic traits as genotypic and environmental covariates. Most of the GE interaction (91%) was explained by the combination of only three traits: 1,000-kernel weight, lodging susceptibility and spike length. These traits are easily measured in breeding programs; therefore the factor regression model can provide a convenient and useful method for predicting yield.

  10. Evaluating Internal Model Strength and Performance of Myoelectric Prosthesis Control Strategies.

    PubMed

    Shehata, Ahmed W; Scheme, Erik J; Sensinger, Jonathon W

    2018-05-01

    On-going developments in myoelectric prosthesis control have provided prosthesis users with an assortment of control strategies that vary in reliability and performance. Many studies have focused on improving performance by providing feedback to the user but have overlooked the effect of this feedback on internal model development, which is key to improving long-term performance. In this paper, the strength of the internal models developed for two commonly used myoelectric control strategies, raw control with raw feedback (using a regression-based approach) and filtered control with filtered feedback (using a classifier-based approach), was evaluated using two psychometric measures: trial-by-trial adaptation and just-noticeable difference. The performance of both strategies was also evaluated using a Schmidt-style target acquisition task. Results obtained from 24 able-bodied subjects showed that although filtered control with filtered feedback had better short-term performance in path efficiency, raw control with raw feedback resulted in stronger internal model development, which may lead to better long-term performance. Despite inherent noise in the control signals of the regression controller, these findings suggest that rich feedback associated with regression control may be used to improve human understanding of the myoelectric control system.

  11. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform

    PubMed Central

    Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979

  12. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform.

    PubMed

    Wu, Hau-Tieng; Wu, Han-Kuei; Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features.

  13. Online adaptive decision trees: pattern classification and function approximation.

    PubMed

    Basak, Jayanta

    2006-09-01

    Recently we have shown that decision trees can be trained in the online adaptive (OADT) mode (Basak, 2004), leading to better generalization scores. OADTs were limited by the fact that they can handle only two-class classification tasks with a given structure. In this article, we provide an architecture based on OADT, ExOADT, which can handle multiclass classification tasks and is able to perform function approximation. ExOADT is structurally similar to OADT extended with a regression layer. We also show that ExOADT not only adapts the local decision hyperplanes in the nonterminal nodes but also has the potential to smoothly change the structure of the tree depending on the data samples. We provide the learning rules based on steepest gradient descent for the new model ExOADT. Experimentally, we demonstrate the effectiveness of ExOADT in pattern classification and function approximation tasks. Finally, we briefly discuss the relationship of ExOADT with other classification models.

  14. Bayesian Ensemble Trees (BET) for Clustering and Prediction in Heterogeneous Data

    PubMed Central

    Duan, Leo L.; Clancy, John P.; Szczesniak, Rhonda D.

    2016-01-01

    We propose a novel “tree-averaging” model that utilizes the ensemble of classification and regression trees (CART). Each constituent tree is estimated with a subset of similar data. We treat this grouping of subsets as Bayesian Ensemble Trees (BET) and model them as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with the other ensemble methods, BET requires much fewer trees and shows equivalent prediction accuracy using weighted averaging. Moreover, each tree in BET provides variable selection criterion and interpretation for each subset. We developed an efficient estimating procedure with improved estimation strategies in both CART and mixture models. We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplemental materials are available online. PMID:27524872

  15. Novel applications of multitask learning and multiple output regression to multiple genetic trait prediction.

    PubMed

    He, Dan; Kuhn, David; Parida, Laxmi

    2016-06-15

    Given a set of biallelic molecular markers, such as SNPs, with genotype values encoded numerically on a collection of plant, animal or human samples, the goal of genetic trait prediction is to predict the quantitative trait values by simultaneously modeling all marker effects. Genetic trait prediction is usually formulated as a linear regression model. In many cases, for the same set of samples and markers, multiple traits are observed. Some of these traits might be correlated with each other. Therefore, modeling all the multiple traits together may improve the prediction accuracy. In this work, we view the multitrait prediction problem from a machine learning angle: as either a multitask learning problem or a multiple output regression problem, depending on whether different traits share the same genotype matrix or not. We then adapted multitask learning algorithms and multiple output regression algorithms to solve the multitrait prediction problem. We proposed a few strategies to improve the least squares error of the prediction from these algorithms. Our experiments show that modeling multiple traits together could improve the prediction accuracy for correlated traits. The programs we used are either public or obtained directly from the cited authors, such as the MALSAR package (http://www.public.asu.edu/~jye02/Software/MALSAR/). The Avocado data set has not been published yet and is available upon request. dhe@us.ibm.com. © The Author 2016. Published by Oxford University Press.
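    When all traits share the same genotype matrix, the multiple-output view can be sketched directly with a multi-task penalized regression; the simulated markers and traits, and the choice of a multi-task elastic net as the joint model, are illustrative assumptions rather than the algorithms benchmarked in the paper.

```python
# Sketch: joint prediction of several correlated traits from one genotype matrix.
import numpy as np
from sklearn.linear_model import MultiTaskElasticNetCV

rng = np.random.default_rng(11)
n_samples, n_markers, n_traits = 200, 300, 3
X = rng.integers(0, 3, size=(n_samples, n_markers)).astype(float)   # SNPs coded 0/1/2
B = np.zeros((n_markers, n_traits))
B[:10] = rng.normal(size=(10, n_traits))                            # shared causal markers
Y = X @ B + rng.normal(scale=1.0, size=(n_samples, n_traits))       # correlated traits

model = MultiTaskElasticNetCV(cv=5).fit(X[:150], Y[:150])
print("held-out R^2:", round(model.score(X[150:], Y[150:]), 3))
```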

  16. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  17. Vegetation Monitoring with Gaussian Processes and Latent Force Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, Gustau; Svendsen, Daniel; Martino, Luca; Campos, Manuel; Luengo, David

    2017-04-01

    Monitoring vegetation by biophysical parameter retrieval from Earth observation data is a challenging problem, where machine learning is currently a key player. Neural networks, kernel methods, and Gaussian Process (GP) regression have excelled in parameter retrieval tasks at both local and global scales. GP regression is based on solid Bayesian statistics, yields efficient and accurate parameter estimates, and provides interesting advantages over competing machine learning approaches, such as confidence intervals. However, GP models are hampered by a lack of interpretability, which has prevented their widespread adoption by a larger community. In this presentation we will summarize some of our latest developments to address this issue. We will review the main characteristics of GPs and their advantages in standard vegetation monitoring applications. Then, three advanced GP models will be introduced. First, we will derive sensitivity maps for the GP predictive function that allow us to obtain feature rankings from the model and to assess the influence of examples on the solution. Second, we will introduce a Joint GP (JGP) model that combines in situ measurements and simulated radiative transfer data in a single GP model. The JGP regression provides more sensible confidence intervals for the predictions, respects the physics of the underlying processes, and allows for transferability across time and space. Finally, a latent force model (LFM) for GP modeling that encodes ordinary differential equations to blend data-driven modeling and physical models of the system is presented. The LFM performs multi-output regression, adapts to the signal characteristics, is able to cope with missing data in the time series, and provides explicit latent functions that allow system analysis and evaluation. Empirical evidence of the performance of these models will be presented through illustrative examples.
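    The baseline tool discussed here, GP regression with predictive uncertainty, can be sketched in a few lines; the synthetic band reflectances, the target variable, and the kernel choice are assumptions, and the JGP and latent force extensions are not reproduced.

```python
# Sketch: Gaussian Process regression for parameter retrieval with ~95% predictive intervals.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(12)
X = rng.random((80, 4))                                   # stand-in reflectances in four bands
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 2] + rng.normal(scale=0.05, size=80)

kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mean, std = gpr.predict(rng.random((5, 4)), return_std=True)
print(np.round(mean, 2), np.round(1.96 * std, 2))         # point estimates and interval half-widths
```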

  18. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.

  19. Ventricular Repolarization Adaptation to Abrupt Changes in Heart Rate after Microgravity Simulation by 5-Day Head-Down Bed Rest

    NASA Astrophysics Data System (ADS)

    Bolea, J.; Almeida, R.; Pueyo, E.; Laguna, P.; Caiani, E. G.

    2013-02-01

    Microgravity exposure for long periods of time leads to body deconditioning and increases the risk of experiencing life-threatening arrhythmias. The dependence of ventricular repolarization on heart rate (HR) has been used to stratify patients according to their arrhythmic risk. The QTp adaptation to HR changes is characterized by M90 (the number of beats needed to complete 90% of the adaptation). The QTp-HR relationship, after compensation for the adaptation lag, is modeled using a set of regression functions (with slope a quantifying the strength of the relationship). Subjects with lower orthostatic tolerance time showed a non-significant decrease in the adaptation lag (M90 from 148 to 108 beats), which may be due to extra deconditioning of the sympathovagal response. Nevertheless, an increase in the QTp-HR adaptation lag (M90 from 108 to 117 beats, p = 0.06) and a significant reduction in the slope (a from 0.53 to 0.35, p < 0.005), which in previous studies have been correlated with increased arrhythmic risk, were observed for subjects with higher orthostatic tolerance time.

  20. Transport modeling and multivariate adaptive regression splines for evaluating performance of ASR systems in freshwater aquifers

    NASA Astrophysics Data System (ADS)

    Forghani, Ali; Peralta, Richard C.

    2017-10-01

    The study presents a procedure using solute transport and statistical models to evaluate the performance of aquifer storage and recovery (ASR) systems designed to earn additional water rights in freshwater aquifers. The recovery effectiveness (REN) index quantifies the performance of these ASR systems. REN is the proportion of the injected water that the same ASR well can recapture during subsequent extraction periods. To estimate REN for individual ASR wells, the presented procedure uses finely discretized groundwater flow and contaminant transport modeling. Then, the procedure uses multivariate adaptive regression splines (MARS) analysis to identify the significant variables affecting REN, and to identify the most recovery-effective wells. Achieving REN values close to 100% is the desire of the studied 14-well ASR system operator. This recovery is feasible for most of the ASR wells by extracting three times the injectate volume during the same year as injection. Most of the wells would achieve RENs below 75% if extracting merely the same volume as they injected. In other words, recovering almost all the same water molecules that are injected requires having a pre-existing water right to extract groundwater annually. MARS shows that REN most significantly correlates with groundwater flow velocity, or hydraulic conductivity and hydraulic gradient. MARS results also demonstrate that maximizing REN requires utilizing the wells located in areas with background Darcian groundwater velocities less than 0.03 m/d. The study also highlights the superiority of MARS over regular multiple linear regressions to identify the wells that can provide the maximum REN. This is the first reported application of MARS for evaluating performance of an ASR system in fresh water aquifers.
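
    A minimal sketch of fitting a MARS model to well-level predictors of REN, assuming the py-earth package provides the MARS implementation; the predictors, their ranges, and the synthetic response are hypothetical and are not the study's dataset.

    ```python
    import numpy as np
    from pyearth import Earth  # assumes the py-earth package is installed

    # Hypothetical predictors for each ASR well: hydraulic conductivity (m/d),
    # hydraulic gradient (-), and injection volume (m3); response is REN (%).
    rng = np.random.default_rng(2)
    K = rng.uniform(1, 30, 100)
    grad = rng.uniform(1e-4, 1e-2, 100)
    vol = rng.uniform(1e4, 1e5, 100)
    velocity = K * grad                                   # Darcian velocity proxy
    ren = np.clip(100 - 900 * velocity + rng.normal(0, 3, 100), 0, 100)

    X = np.column_stack([K, grad, vol])
    mars = Earth(max_degree=2)        # allow simple interactions between predictors
    mars.fit(X, ren)

    print(mars.summary())             # hinge-function basis and which variables were kept
    print(mars.predict(X[:5]))
    ```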

  1. Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.

    PubMed

    Gijsberts, Arjan; Metta, Giorgio

    2013-05-01

    Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
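
    A minimal sketch of the core idea, assuming a random Fourier feature map that approximates an RBF kernel and incremental accumulation of the normal equations; this is not the authors' implementation (which uses rank-one Cholesky updates), and all names and sizes are assumptions.

    ```python
    import numpy as np

    class IncrementalRFFRegressor:
        """Sketch: random Fourier features approximating an RBF kernel, with the
        sufficient statistics of a ridge regression accumulated one sample at a time,
        so the cost of each update is constant in the number of samples seen."""
        def __init__(self, input_dim, n_features=100, lengthscale=1.0, reg=1e-3, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(scale=1.0 / lengthscale, size=(input_dim, n_features))
            self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
            self.A = reg * np.eye(n_features)   # accumulated phi phi^T
            self.c = np.zeros(n_features)       # accumulated phi * y

        def _phi(self, x):
            return np.sqrt(2.0 / self.W.shape[1]) * np.cos(x @ self.W + self.b)

        def update(self, x, y):
            phi = self._phi(x)
            self.A += np.outer(phi, phi)        # O(D^2) per update, independent of t
            self.c += phi * y

        def predict(self, x):
            w = np.linalg.solve(self.A, self.c)
            return self._phi(x) @ w

    # Toy online-learning loop
    model = IncrementalRFFRegressor(input_dim=2, n_features=200, lengthscale=0.5)
    rng = np.random.default_rng(3)
    for _ in range(500):
        x = rng.uniform(-1, 1, size=2)
        model.update(x, np.sin(3 * x[0]) + 0.5 * x[1] + rng.normal(scale=0.05))
    print(model.predict(np.array([0.2, -0.4])))
    ```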

  2. The evolution of high summit metabolism and cold tolerance in birds and its impact on present-day distributions.

    PubMed

    Swanson, David L; Garland, Theodore

    2009-01-01

    Summit metabolic rate (M(sum), maximum cold-induced metabolic rate) is positively correlated with cold tolerance in birds, suggesting that high M(sum) is important for residency in cold climates. However, the phylogenetic distribution of high M(sum) among birds and the impact of its evolution on current distributions are not well understood. Two potential adaptive hypotheses might explain the phylogenetic distribution of high M(sum) among birds. The cold adaptation hypothesis contends that species wintering in cold climates should have higher M(sum) than species wintering in warmer climates. The flight adaptation hypothesis suggests that volant birds might be capable of generating high M(sum) as a byproduct of their muscular capacity for flight; thus, variation in M(sum) should be associated with capacity for sustained flight, one indicator of which is migration. We collected M(sum) data from the literature for 44 bird species and conducted both conventional and phylogenetically informed statistical analyses to examine the predictors of M(sum) variation. Significant phylogenetic signal was present for log body mass, log mass-adjusted M(sum), and average temperature in the winter range. In multiple regression models, log body mass, winter temperature, and clade were significant predictors of log M(sum). These results are consistent with a role for climate in determining M(sum) in birds, but also indicate that phylogenetic signal remains even after accounting for associations indicative of adaptation to winter temperature. Migratory strategy was never a significant predictor of log M(sum) in multiple regressions, a result that is not consistent with the flight adaptation hypothesis.

  3. The Relationship of Item-Level Response Times with Test-Taker and Item Variables in an Operational CAT Environment. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    Swygert, Kimberly A.

    In this study, data from an operational computerized adaptive test (CAT) were examined in order to gather information concerning item response times in a CAT environment. The CAT under study included multiple-choice items measuring verbal, quantitative, and analytical reasoning. The analyses included the fitting of regression models describing the…

  4. Vulnerability of carbon storage in North American boreal forests to wildfires during the 21st century

    Treesearch

    M.S. Balshi; A.D. McGuire; P. Duffy; M. Flannigan; D.W. Kicklighter; J. Melillo

    2009-01-01

    We use a gridded data set developed with a multivariate adaptive regression spline approach to determine how area burned varies each year with changing climatic and fuel moisture conditions. We apply the process-based Terrestrial Ecosystem Model to evaluate the role of future fire on the carbon dynamics of boreal North America in the context of changing atmospheric...

  5. Decoding of finger trajectory from ECoG using deep learning.

    PubMed

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    The conventional decoding pipeline for brain-machine interfaces (BMIs) consists of chained stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short term memory (LSTM) to address these needs. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and an RNN that was able to process sequential data and recognize temporal dynamics in the neural data. The CNN was used as the feature extractor and the LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest states. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layers and used an LSTM layer to capture the state transitions. The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.

  6. Decoding of finger trajectory from ECoG using deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    Objective. The conventional decoding pipeline for brain-machine interfaces (BMIs) consists of chained stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short term memory (LSTM) to address these needs. Approach. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and an RNN that was able to process sequential data and recognize temporal dynamics in the neural data. The CNN was used as the feature extractor and the LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. Main results. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest states. Significance. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layers and used an LSTM layer to capture the state transitions. The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.
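
    A minimal PyTorch sketch of a CNN-LSTM trajectory decoder of the kind described above, trainable end-to-end with a single objective; the channel count, window length, and layer sizes are assumptions rather than the architecture used in the paper.

    ```python
    import torch
    import torch.nn as nn

    class CNNLSTMDecoder(nn.Module):
        """A 1-D CNN extracts features from multichannel ECoG, an LSTM models their
        temporal dynamics, and a linear layer regresses the finger trajectory."""
        def __init__(self, n_channels=64, n_filters=32, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv1d(n_channels, n_filters, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            self.lstm = nn.LSTM(input_size=n_filters, hidden_size=hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)   # one trajectory value per (pooled) time step

        def forward(self, x):                 # x: (batch, channels, time)
            feats = self.cnn(x)               # (batch, filters, time/2)
            feats = feats.transpose(1, 2)     # (batch, time/2, filters) for the LSTM
            seq, _ = self.lstm(feats)
            return self.out(seq).squeeze(-1)  # (batch, time/2)

    # The whole pipeline is optimized with a single objective via stochastic gradient descent.
    model = CNNLSTMDecoder()
    x = torch.randn(8, 64, 200)               # 8 windows of 64-channel ECoG, 200 samples each
    target = torch.randn(8, 100)               # downsampled trajectory (matches pooled length)
    loss = nn.MSELoss()(model(x), target)
    loss.backward()
    ```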

  7. Value of Information Analysis for Time-lapse Seismic Data by Simulation-Regression

    NASA Astrophysics Data System (ADS)

    Dutta, G.; Mukerji, T.; Eidsvik, J.

    2016-12-01

    A novel method to estimate the Value of Information (VOI) of time-lapse seismic data in the context of reservoir development is proposed. VOI is a decision analytic metric quantifying the incremental value that would be created by collecting information prior to making a decision under uncertainty. The VOI has to be computed before collecting the information and can be used to justify its collection. Previous work on estimating the VOI of geophysical data has involved explicit approximation of the posterior distribution of reservoir properties given the data and then evaluating the prospect values for that posterior distribution of reservoir properties. Here, we propose to directly estimate the prospect values given the data by building a statistical relationship between them using regression. Various regression techniques such as Partial Least Squares Regression (PLSR), Multivariate Adaptive Regression Splines (MARS) and k-Nearest Neighbors (k-NN) are used to estimate the VOI, and the results compared. For a univariate Gaussian case, the VOI obtained from simulation-regression has been shown to be close to the analytical solution. Estimating VOI by simulation-regression is much less computationally expensive since the posterior distribution of reservoir properties given each possible dataset need not be modeled and the prospect values need not be evaluated for each such posterior distribution of reservoir properties. This method is flexible, since it does not require rigid model specification of posterior but rather fits conditional expectations non-parametrically from samples of values and data.

  8. Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia

    NASA Astrophysics Data System (ADS)

    Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg

    2013-03-01

    Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring the processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, adaptive neuro-fuzzy inference system (ANFIS) and artificial neural network (ANN). The multiple linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level. The optimal input combination was found to comprise the current sea level together with the five previous hourly values. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely, Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance account statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models while an adaptive learning rate and Levenberg-Marquardt were most suitable for training the ANN models. Consequently, ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose for all the prediction intervals.

  9. Interrupted time series regression for the evaluation of public health interventions: a tutorial.

    PubMed

    Bernal, James Lopez; Cummins, Steven; Gasparrini, Antonio

    2017-02-01

    Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design.

  10. Interrupted time series regression for the evaluation of public health interventions: a tutorial

    PubMed Central

    Bernal, James Lopez; Cummins, Steven; Gasparrini, Antonio

    2017-01-01

    Abstract Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design. PMID:27283160
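
    A minimal sketch of a segmented (interrupted time series) Poisson regression with level-change and slope-change terms, using statsmodels on hypothetical monthly counts; the intervention point and coefficients are assumptions, and a real analysis would also address autocorrelation, seasonality and over-dispersion as the tutorial describes.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical monthly outcome counts, with an intervention at month 36 of 72.
    rng = np.random.default_rng(4)
    t = np.arange(72)
    post = (t >= 36).astype(int)
    mu = np.exp(3.0 + 0.004 * t - 0.25 * post - 0.01 * post * (t - 36))
    y = rng.poisson(mu)

    # Segmented regression terms: underlying trend, level change at the interruption,
    # and change in slope after it (the impact model should be specified a priori).
    X = pd.DataFrame({
        "time": t,
        "level_change": post,
        "slope_change": post * (t - 36),
    })
    X = sm.add_constant(X)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(fit.summary())   # in practice also check autocorrelation and over-dispersion
    ```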

  11. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.

    PubMed

    Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A

    2015-01-15

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets

    PubMed Central

    Gruber, Susan; Logan, Roger W.; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A.

    2014-01-01

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V -fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152
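
    A minimal sketch of combining several candidate learners to estimate stabilized inverse probability weights; it averages cross-validated treatment probabilities with equal weights, whereas a full super learner would choose the convex combination minimizing cross-validated loss. The data and learners are hypothetical, not those of the CoRIS analyses.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_predict

    def stabilized_weights(X, treatment, learners, cv=5):
        """Average cross-validated treatment probabilities from several candidate
        algorithms, then form stabilized weights P(A=a) / P(A=a | X)."""
        probs = [cross_val_predict(est, X, treatment, cv=cv, method="predict_proba")[:, 1]
                 for est in learners]
        p_treat = np.mean(probs, axis=0)                    # denominator model P(A=1 | X)
        p_marginal = treatment.mean()                       # numerator model P(A=1)
        return np.where(treatment == 1, p_marginal / p_treat,
                        (1 - p_marginal) / (1 - p_treat))

    # Toy usage with hypothetical covariates and a binary treatment indicator
    rng = np.random.default_rng(5)
    X = rng.normal(size=(500, 4))
    A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
    w = stabilized_weights(X, A, [LogisticRegression(max_iter=1000),
                                  GradientBoostingClassifier()])
    print(w[:5])
    ```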

  13. Macrocell path loss prediction using artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Usman, Abraham U.; Okereke, Okpo U.; Omizegba, Elijah E.

    2014-04-01

    The prediction of propagation loss is a practical non-linear function approximation problem which linear regression or auto-regression models are limited in their ability to handle. However, computational intelligence techniques such as artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) have been shown to have great ability to handle non-linear function approximation and prediction problems. In this study, the multiple layer perceptron neural network (MLP-NN), radial basis function neural network (RBF-NN) and an ANFIS network were trained using actual signal strength measurements taken in certain suburban areas of the Bauchi metropolis, Nigeria. The trained networks were then used to predict propagation losses in the stated areas under differing conditions. The predictions were compared with those of the popular Hata model. It was observed that the ANFIS model gave a better fit in all cases, having higher R2 values in each case, and on average was more robust than the MLP and RBF models as it generalises better to different data.

  14. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Treesearch

    Michael S. Balshi; A. David McGuire; Paul Duffy; Mike Flannigan; John Walsh; Jerry Melillo

    2009-01-01

    We developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was...

  15. Family functioning mediates adaptation in caregivers of individuals with Rett syndrome.

    PubMed

    Lamb, Amanda E; Biesecker, Barbara B; Umstead, Kendall L; Muratori, Michelle; Biesecker, Leslie G; Erby, Lori H

    2016-11-01

    The objective of this study was to investigate factors related to family functioning and adaptation in caregivers of individuals with Rett syndrome (RS). A cross-sectional quantitative survey explored the relationships between demographics, parental self-efficacy, coping methods, family functioning and adaptation. A forward-backward, step-wise model selection procedure was used to evaluate variables associated with both family functioning and adaptation. Analyses also explored family functioning as a mediator of the relationship between other variables and adaptation. Bivariate analyses (N=400) revealed that greater parental self-efficacy, a greater proportion of problem-focused coping, and a lesser proportion of emotion-focused coping were associated with more effective family functioning. In addition, these key variables were significantly associated with greater adaptation, as was family functioning, while controlling for confounders. Finally, regression analyses suggest family functioning is a mediator of the relationships between three variables (parental self-efficacy, problem-focused coping, and emotion-focused coping) and adaptation. This study demonstrates the potentially predictive roles of expectations and coping methods and the mediating role of family functioning in adaptation among caregivers of individuals with RS, a chronic developmental disorder. A potential target for intervention is strengthening of caregiver competence in the parenting role to enhance caregiver adaptation. Published by Elsevier Ireland Ltd.

  16. Modelling long-term fire occurrence factors in Spain by accounting for local variations with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; Chuvieco, E.; Koutsias, N.

    2013-02-01

    Humans are responsible for most forest fires in Europe, but anthropogenic factors behind these events are still poorly understood. We tried to identify the driving factors of human-caused fire occurrence in Spain by applying two different statistical approaches. Firstly, assuming stationary processes for the whole country, we created models based on multiple linear regression and binary logistic regression to find factors associated with fire density and fire presence, respectively. Secondly, we used geographically weighted regression (GWR) to better understand and explore the local and regional variations of those factors behind human-caused fire occurrence. The number of human-caused fires occurring within a 25-yr period (1983-2007) was computed for each of the 7638 Spanish mainland municipalities, creating a binary variable (fire/no fire) to develop logistic models, and a continuous variable (fire density) to build standard linear regression models. A total of 383 657 fires were registered in the study dataset. The binary logistic model, which estimates the probability of having/not having a fire, successfully classified 76.4% of the total observations, while the ordinary least squares (OLS) regression model explained 53% of the variation of the fire density patterns (adjusted R2 = 0.53). Both approaches confirmed, in addition to forest and climatic variables, the importance of variables related with agrarian activities, land abandonment, rural population exodus and developmental processes as underlying factors of fire occurrence. For the GWR approach, the explanatory power of the GW linear model for fire density using an adaptive bandwidth increased from 53% to 67%, while for the GW logistic model the correctly classified observations improved only slightly, from 76.4% to 78.4%, but significantly according to the corrected Akaike Information Criterion (AICc), from 3451.19 to 3321.19. The results from GWR indicated a significant spatial variation in the local parameter estimates for all the variables and an important reduction of the autocorrelation in the residuals of the GW linear model. Despite the fitting improvement of local models, GW regression, more than an alternative to "global" or traditional regression modelling, seems to be a valuable complement to explore the non-stationary relationships between the response variable and the explanatory variables. The synergy of global and local modelling provides insights into fire management and policy and helps further our understanding of the fire problem over large areas while at the same time recognizing its local character.

  17. Quantitative monitoring of sucrose, reducing sugar and total sugar dynamics for phenotyping of water-deficit stress tolerance in rice through spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Das, Bappa; Sahoo, Rabi N.; Pargal, Sourabh; Krishna, Gopal; Verma, Rakesh; Chinnusamy, Viswanathan; Sehgal, Vinay K.; Gupta, Vinod K.; Dash, Sushanta K.; Swain, Padmini

    2018-03-01

    In the present investigation, the changes in sucrose, reducing and total sugar content due to water-deficit stress in rice leaves were modeled using visible, near infrared (VNIR) and shortwave infrared (SWIR) spectroscopy. The objectives of the study were to identify the best vegetation indices and suitable multivariate technique based on precise analysis of hyperspectral data (350 to 2500 nm) and sucrose, reducing sugar and total sugar content measured at different stress levels from 16 different rice genotypes. Spectral data analysis was done to identify suitable spectral indices and models for sucrose estimation. Novel spectral indices in near infrared (NIR) range viz. ratio spectral index (RSI) and normalised difference spectral indices (NDSI) sensitive to sucrose, reducing sugar and total sugar content were identified which were subsequently calibrated and validated. The RSI and NDSI models had R2 values of 0.65, 0.71 and 0.67; RPD values of 1.68, 1.95 and 1.66 for sucrose, reducing sugar and total sugar, respectively for validation dataset. Different multivariate spectral models such as artificial neural network (ANN), multivariate adaptive regression splines (MARS), multiple linear regression (MLR), partial least square regression (PLSR), random forest regression (RFR) and support vector machine regression (SVMR) were also evaluated. The best performing multivariate models for sucrose, reducing sugars and total sugars were found to be, MARS, ANN and MARS, respectively with respect to RPD values of 2.08, 2.44, and 1.93. Results indicated that VNIR and SWIR spectroscopy combined with multivariate calibration can be used as a reliable alternative to conventional methods for measurement of sucrose, reducing sugars and total sugars of rice under water-deficit stress as this technique is fast, economic, and noninvasive.
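
    A minimal sketch of how ratio (RSI) and normalised difference (NDSI) spectral indices can be computed from a reflectance spectrum before being regressed against sugar content; the wavelength pair used here is purely illustrative, not the band combination identified in the study.

    ```python
    import numpy as np

    def band(reflectance, wavelengths, wl):
        """Reflectance at the sampled wavelength closest to wl (nm)."""
        return reflectance[np.argmin(np.abs(wavelengths - wl))]

    def rsi(reflectance, wavelengths, wl1, wl2):
        """Ratio spectral index: R(wl1) / R(wl2)."""
        return band(reflectance, wavelengths, wl1) / band(reflectance, wavelengths, wl2)

    def ndsi(reflectance, wavelengths, wl1, wl2):
        """Normalised difference spectral index: (R(wl1) - R(wl2)) / (R(wl1) + R(wl2))."""
        r1 = band(reflectance, wavelengths, wl1)
        r2 = band(reflectance, wavelengths, wl2)
        return (r1 - r2) / (r1 + r2)

    # Toy usage with a synthetic leaf spectrum sampled at 1-nm steps from 350 to 2500 nm;
    # the (1100, 1200 nm) pair is only a placeholder for an optimised NIR band pair.
    wl = np.arange(350, 2501)
    spectrum = 0.4 + 0.1 * np.sin(wl / 300.0)
    print(rsi(spectrum, wl, 1100, 1200), ndsi(spectrum, wl, 1100, 1200))
    ```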

  18. The role of goal management for successful adaptation to arthritis.

    PubMed

    Arends, Roos Y; Bode, Christina; Taal, Erik; Van de Laar, Mart A F J

    2013-10-01

    Persons with polyarthritis often experience difficulties in attaining personal goals due to disease symptoms such as pain, fatigue and reduced mobility. This study examines the relationship of goal management strategies - goal maintenance, goal adjustment, goal disengagement, goal reengagement - with indicators of adaptation to polyarthritis, namely, depression, anxiety, purpose in life, positive affect, participation, and work participation. 305 patients diagnosed with polyarthritis participated in a questionnaire study (62% female, 29% employed, mean age: 62 years). Hierarchical multiple-regression-analyses were conducted to examine the relative importance of the goal management strategies for adaptation. Self-efficacy in relation to goal management was also studied. For all adaptation indicators, the goal management strategies added substantial explained variance to the models (R(2): .07-.27). Goal maintenance and goal adjustment were significant predictors of adaptation to polyarthritis. Self-efficacy partly mediated the influence of goal management strategies. Goal management strategies were found to be important predictors of successful adaptation to polyarthritis. Overall, adjusting goals to personal ability and circumstances and striving for goals proved to be the most beneficial strategies. Designing interventions that focus on the effective management of goals may help people to adapt to polyarthritis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Marginal regression analysis of recurrent events with coarsened censoring times.

    PubMed

    Hu, X Joan; Rosychuk, Rhonda J

    2016-12-01

    Motivated by an ongoing pediatric mental health care (PMHC) study, this article presents weakly structured methods for analyzing doubly censored recurrent event data where only coarsened information on censoring is available. The study extracted administrative records of emergency department visits from provincial health administrative databases. The available information of each individual subject is limited to a subject-specific time window determined up to concealed data. To evaluate time-dependent effect of exposures, we adapt the local linear estimation with right censored survival times under the Cox regression model with time-varying coefficients (cf. Cai and Sun, Scandinavian Journal of Statistics 2003, 30, 93-111). We establish the pointwise consistency and asymptotic normality of the regression parameter estimator, and examine its performance by simulation. The PMHC study illustrates the proposed approach throughout the article. © 2016, The International Biometric Society.

  20. Broad-scale adaptive genetic variation in alpine plants is driven by temperature and precipitation

    PubMed Central

    MANEL, STÉPHANIE; GUGERLI, FELIX; THUILLER, WILFRIED; ALVAREZ, NADIR; LEGENDRE, PIERRE; HOLDEREGGER, ROLF; GIELLY, LUDOVIC; TABERLET, PIERRE

    2014-01-01

    Identifying adaptive genetic variation is a challenging task, in particular in non-model species for which genomic information is still limited or absent. Here, we studied distribution patterns of amplified fragment length polymorphisms (AFLPs) in response to environmental variation, in 13 alpine plant species consistently sampled across the entire European Alps. Multiple linear regressions were performed between AFLP allele frequencies per site as dependent variables and two categories of independent variables, namely Moran’s eigenvector map MEM variables (to account for spatial and unaccounted environmental variation, and historical demographic processes) and environmental variables. These associations allowed the identification of 153 loci of ecological relevance. Univariate regressions between allele frequency and each environmental factor further showed that loci of ecological relevance were mainly correlated with MEM variables. We found that precipitation and temperature were the best environmental predictors, whereas topographic factors were rarely involved in environmental associations. Climatic factors, subject to rapid variation as a result of the current global warming, are known to strongly influence the fate of alpine plants. Our study shows, for the first time for a large number of species, that the same environmental variables are drivers of plant adaptation at the scale of a whole biome, here the European Alps. PMID:22680783

  1. ERT to aid in WSN based early warning system for landslides

    NASA Astrophysics Data System (ADS)

    T, H.

    2017-12-01

    Amrita University's landslide monitoring and early warning system using Wireless Sensor Networks (WSN) consists of heterogeneous sensors like rain gauges, moisture sensors, piezometers, geophones, inclinometers, tilt meters etc. The information from the sensors is accurate but limited to the sensor location. In order to monitor a large area, ERT can be used in conjunction with WSN technology. To assess the feasibility of ERT in landslide early warning along with WSN technology, we have conducted experiments in Amrita's landslide laboratory setup. The experiment aimed to simulate a landslide and monitor the changes happening in the soil using a moisture sensor and ERT. Simulating moisture values from resistivity measurements to a greater accuracy can help in landslide monitoring over large areas. To accomplish this, we adopted two mathematical approaches: 1) regression analysis between resistivity measurements and actual moisture values from the moisture sensor, and 2) using the Waxman-Smits model to simulate moisture values from resistivity measurements. The moisture values simulated from the Waxman-Smits model were compared with the actual moisture values and the Mean Square Error (MSE) was found to be 46.33. A regression curve was drawn for resistivity versus the moisture values simulated from the Waxman-Smits model and compared with the regression curve of the actual measurements, as shown in figure-1. From figure-1, it is clear that the regression curve from actual moisture values and the regression curve from simulated moisture values follow a similar pattern with only a small difference between them. Moisture values can be simulated to a greater accuracy using the actual regression equation, but the limitation is that regression curves will differ for different sites and different soils. The regression equation from actual moisture values can be used if a laboratory experiment has been conducted for a particular soil sample; otherwise, with knowledge of the soil properties, the Waxman-Smits model can be used to simulate moisture values. The promising results assure that, when ERT measurements are used in conjunction with the WSN technique, vital parameters triggering landslides, such as moisture, can be simulated for a large area, which will help in providing early warning over large areas.
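
    A minimal sketch of one way to calibrate a site-specific resistivity-to-moisture regression with a power-law fit; the functional form and the paired resistivity/moisture readings are hypothetical, and a field application would use laboratory data of the kind described above or the Waxman-Smits relation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def moisture_from_resistivity(rho, a, b):
        """Empirical power-law calibration: theta = a * rho**b (site-specific)."""
        return a * np.power(rho, b)

    # Hypothetical co-located measurements: ERT resistivity (ohm-m) and sensor moisture (%)
    rho = np.array([120., 95., 80., 60., 45., 35., 25., 18.])
    theta = np.array([12., 15., 18., 22., 27., 31., 38., 44.])

    params, _ = curve_fit(moisture_from_resistivity, rho, theta, p0=(100.0, -0.5))
    pred = moisture_from_resistivity(rho, *params)
    mse = np.mean((pred - theta) ** 2)
    print(params, mse)   # compare against sensor readings, as done with the Waxman-Smits model
    ```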

  2. R package PRIMsrc: Bump Hunting by Patient Rule Induction Method for Survival, Regression and Classification

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326

  3. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    NASA Astrophysics Data System (ADS)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in recent reservoir characterization workflows ensures consistency between micro- and macro-scale information, represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were produced. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.

  4. Evolution of an adaptive behavior and its sensory receptors promotes eye regression in blind cavefish.

    PubMed

    Yoshizawa, Masato; Yamamoto, Yoshiyuki; O'Quin, Kelly E; Jeffery, William R

    2012-12-27

    How and why animals lose eyesight during adaptation to the dark and food-limited cave environment has puzzled biologists since the time of Darwin. More recently, several different adaptive hypotheses have been proposed to explain eye degeneration based on studies in the teleost Astyanax mexicanus, which consists of blind cave-dwelling (cavefish) and sighted surface-dwelling (surface fish) forms. One of these hypotheses is that eye regression is the result of indirect selection for constructive characters that are negatively linked to eye development through the pleiotropic effects of Sonic Hedgehog (SHH) signaling. However, subsequent genetic analyses suggested that other mechanisms also contribute to eye regression in Astyanax cavefish. Here, we introduce a new approach to this problem by investigating the phenotypic and genetic relationships between a suite of non-visual constructive traits and eye regression. Using quantitative genetic analysis of crosses between surface fish, the Pachón cavefish population and their hybrid progeny, we show that the adaptive vibration attraction behavior (VAB) and its sensory receptors, superficial neuromasts (SN) specifically found within the cavefish eye orbit (EO), are genetically correlated with reduced eye size. The quantitative trait loci (QTL) for these three traits form two clusters of congruent or overlapping QTL on Astyanax linkage groups (LG) 2 and 17, but not at the shh locus on LG 13. Ablation of EO SN in cavefish demonstrated a major role for these sensory receptors in VAB expression. Furthermore, experimental induction of eye regression in surface fish via shh overexpression showed that the absence of eyes was insufficient to promote the appearance of VAB or EO SN. We conclude that natural selection for the enhancement of VAB and EO SN indirectly promotes eye regression in the Pachón cavefish population through an antagonistic relationship involving genetic linkage or pleiotropy among the genetic factors underlying these traits. This study demonstrates a trade-off between the evolution of a non-visual sensory system and eye regression during the adaptive evolution of Astyanax to the cave environment.

  5. Evolution and development in cave animals: from fish to crustaceans.

    PubMed

    Protas, Meredith; Jeffery, William R

    2012-01-01

    Cave animals are excellent models to study the general principles of evolution as well as the mechanisms of adaptation to a novel environment: the perpetual darkness of caves. In this article, two of the major model systems used to study the evolution and development (evo-devo) of cave animals are described: the teleost fish Astyanax mexicanus and the isopod crustacean Asellus aquaticus. The ways in which these animals match the major attributes expected of an evo-devo cave animal model system are described. For both species, we enumerate the regressive and constructive troglomorphic traits that have evolved during their adaptation to cave life, the developmental and genetic basis of these traits, the possible evolutionary forces responsible for them, and potential new areas in which these model systems could be used for further exploration of the evolution of cave animals. Furthermore, we compare the two model cave animals to investigate the mechanisms of troglomorphic evolution. Finally, we propose a few other cave animal systems that would be suitable for development as additional models to obtain a more comprehensive understanding of the developmental and genetic mechanisms involved in troglomorphic evolution.

  6. A SIGNIFICANCE TEST FOR THE LASSO

    PubMed Central

    Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert

    2014-01-01

    In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ²₁ distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ²₁ under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ1 penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties—adaptivity and shrinkage—and its null distribution is tractable and asymptotically Exp(1). PMID:25574062
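
    A small simulation (not the covariance test itself) illustrating the motivating point above: under the global null, the drop in RSS for the first adaptively chosen predictor is stochastically much larger than a single chi-squared draw with 1 degree of freedom; all sizes and seeds are arbitrary.

    ```python
    import numpy as np
    from scipy import stats

    # With unit-norm columns, y ~ N(0, I) and sigma^2 = 1, each <x_j, y>^2 is chi-squared(1).
    # The RSS drop for the first variable to enter the path is max_j <x_j, y>^2, i.e. the
    # maximum of p such draws, which is why the naive chi-squared(1) test is anti-conservative
    # for adaptively chosen variables.
    rng = np.random.default_rng(6)
    n, p, reps = 100, 50, 2000
    drops = np.empty(reps)
    for r in range(reps):
        X = rng.standard_normal((n, p))
        X /= np.linalg.norm(X, axis=0)              # unit-norm columns
        y = rng.standard_normal(n)                  # global null: no true signal
        drops[r] = np.max(np.abs(X.T @ y)) ** 2     # RSS drop for the first entered variable

    print(np.mean(drops > stats.chi2.ppf(0.95, df=1)))   # far above the nominal 0.05
    ```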

  7. Short-term electric power demand forecasting based on economic-electricity transmission model

    NASA Astrophysics Data System (ADS)

    Li, Wenfeng; Bai, Hongkun; Liu, Wei; Liu, Yongmin; Wang, Yubin Mao; Wang, Jiangbo; He, Dandan

    2018-04-01

    Short-term electricity demand forecasting is basic work for ensuring safe operation of the power system. In this paper, a practical economic-electricity transmission model (EETM) is built. Using the intelligent adaptive modeling capabilities of Prognoz Platform 7.2, an econometric model consisting of the added value of three industries and income levels is first built, followed by the electricity demand transmission model. Through multiple regression, moving averages and seasonal decomposition, the problem of multicollinearity between variables is effectively overcome in EETM. The validity of EETM is proved by comparison with actual values for Henan Province. Finally, the EETM model is used to forecast electricity consumption for the first to fourth quarters of 2018.

  8. New model for estimating the relationship between surface area and volume in the human body using skeletal remains.

    PubMed

    Kasabova, Boryana E; Holliday, Trenton W

    2015-04-01

    A new model for estimating human body surface area and body volume/mass from standard skeletal metrics is presented. This model is then tested against both 1) "independently estimated" body surface areas and "independently estimated" body volume/mass (both derived from anthropometric data) and 2) the cylindrical model of Ruff. The model is found to be more accurate in estimating both body surface area and body volume/mass than the cylindrical model, but it is more accurate in estimating body surface area than it is for estimating body volume/mass (as reflected by the standard error of the estimate when "independently estimated" surface area or volume/mass is regressed on estimates derived from the present model). Two practical applications of the model are tested. In the first test, the relative contribution of the limbs versus the trunk to the body's volume and surface area is compared between "heat-adapted" and "cold-adapted" populations. As expected, the "cold-adapted" group has significantly more of its body surface area and volume in its trunk than does the "heat-adapted" group. In the second test, we evaluate the effect of variation in bi-iliac breadth, elongated or foreshortened limbs, and differences in crural index on the body's surface area to volume ratio (SA:V). Results indicate that the effects of bi-iliac breadth on SA:V are substantial, while those of limb lengths and (especially) the crural index are minor, which suggests that factors other than surface area relative to volume are driving morphological variation and ecogeographical patterning in limb proportions. © 2014 Wiley Periodicals, Inc.

  9. Method selection and adaptation for distributed monitoring of infectious diseases for syndromic surveillance.

    PubMed

    Xing, Jian; Burkom, Howard; Tokars, Jerome

    2011-12-01

    Automated surveillance systems require statistical methods to recognize increases in visit counts that might indicate an outbreak. In prior work we presented methods to enhance the sensitivity of C2, a commonly used time series method. In this study, we compared the enhanced C2 method with five regression models. We used emergency department chief complaint data from US CDC BioSense surveillance system, aggregated by city (total of 206 hospitals, 16 cities) during 5/2008-4/2009. Data for six syndromes (asthma, gastrointestinal, nausea and vomiting, rash, respiratory, and influenza-like illness) was used and was stratified by mean count (1-19, 20-49, ≥50 per day) into 14 syndrome-count categories. We compared the sensitivity for detecting single-day artificially-added increases in syndrome counts. Four modifications of the C2 time series method, and five regression models (two linear and three Poisson), were tested. A constant alert rate of 1% was used for all methods. Among the regression models tested, we found that a Poisson model controlling for the logarithm of total visits (i.e., visits both meeting and not meeting a syndrome definition), day of week, and 14-day time period was best. Among 14 syndrome-count categories, time series and regression methods produced approximately the same sensitivity (<5% difference) in 6; in six categories, the regression method had higher sensitivity (range 6-14% improvement), and in two categories the time series method had higher sensitivity. When automated data are aggregated to the city level, a Poisson regression model that controls for total visits produces the best overall sensitivity for detecting artificially added visit counts. This improvement was achieved without increasing the alert rate, which was held constant at 1% for all methods. These findings will improve our ability to detect outbreaks in automated surveillance system data. Published by Elsevier Inc.
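
    A minimal sketch of the best-performing model class reported above: a Poisson regression of daily syndrome counts on the logarithm of total visits, day of week, and a 14-day period indicator, with a simple upper-tail alert rule standing in for the fixed alert rate; the data are simulated and the thresholds are assumptions.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from scipy.stats import poisson

    # Hypothetical city-level daily data: syndrome counts and total ED visits over one year.
    rng = np.random.default_rng(7)
    days = pd.date_range("2008-05-01", periods=365, freq="D")
    total = rng.poisson(900, size=365)
    df = pd.DataFrame({
        "syndrome": rng.poisson(0.03 * total),              # counts meeting the syndrome definition
        "log_total": np.log(total),                         # log of all visits (meeting it or not)
        "dow": days.dayofweek.astype(str),                  # day-of-week term
        "period": (np.arange(365) // 14).astype(str),       # 14-day time-period term
    })

    # Poisson regression controlling for total visit volume, day of week and time period.
    fit = smf.glm("syndrome ~ log_total + C(dow) + C(period)",
                  data=df, family=sm.families.Poisson()).fit()

    # Flag a day when its count exceeds the upper tail implied by the fitted mean
    # (a stand-in for the constant 1% alert rate used in the comparison).
    mu = fit.fittedvalues
    alerts = df["syndrome"] > poisson.ppf(0.99, mu)
    print(int(alerts.sum()))
    ```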

  10. The PIT-trap-A "model-free" bootstrap procedure for inference about regression models with discrete, multivariate responses.

    PubMed

    Warton, David I; Thibaut, Loïc; Wang, Yi Alice

    2017-01-01

    Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)-common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of "model-free bootstrap", adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods.

  11. The PIT-trap—A “model-free” bootstrap procedure for inference about regression models with discrete, multivariate responses

    PubMed Central

    Thibaut, Loïc; Wang, Yi Alice

    2017-01-01

    Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)—common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of “model-free bootstrap”, adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods. PMID:28738071
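
    A minimal univariate sketch of the key ingredient, randomized probability integral transform (PIT) residuals for a Poisson regression, with one resample pushed back through the fitted marginal distributions; the multivariate method bootstraps whole rows of PIT residuals to preserve correlation, and this toy example is only an interpretation of the procedure, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import poisson
    import statsmodels.api as sm

    rng = np.random.default_rng(8)

    # Hypothetical Poisson-regression data
    n = 200
    x = rng.normal(size=n)
    y = rng.poisson(np.exp(0.5 + 0.8 * x))
    X = sm.add_constant(x)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    mu = fit.fittedvalues

    # Randomized PIT residuals for discrete data: uniform draws on [F(y-1), F(y)] under the
    # fitted model; asymptotically these are pivotal (approximately Uniform(0,1)).
    u = rng.uniform(poisson.cdf(y - 1, mu), poisson.cdf(y, mu))

    # One PIT-trap-style resample: resample the PIT residuals, then invert them through the
    # fitted marginal distributions so the bootstrap response keeps the same marginals.
    u_star = rng.choice(u, size=n, replace=True)
    y_star = poisson.ppf(u_star, mu).astype(int)
    refit = sm.GLM(y_star, X, family=sm.families.Poisson()).fit()
    print(refit.params)
    ```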

  12. Distinct Patterns of Desynchronized Limb Regression in Malagasy Scincine Lizards (Squamata, Scincidae)

    PubMed Central

    Miralles, Aurélien; Hipsley, Christy A.; Erens, Jesse; Gehara, Marcelo; Rakotoarison, Andolalao; Glaw, Frank; Müller, Johannes; Vences, Miguel

    2015-01-01

    Scincine lizards in Madagascar form an endemic clade of about 60 species exhibiting a variety of ecomorphological adaptations. Several subclades have adapted to burrowing and convergently regressed their limbs and eyes, resulting in a variety of partial and completely limbless morphologies among extant taxa. However, patterns of limb regression in these taxa have not been studied in detail. Here we fill this gap in knowledge by providing a phylogenetic analysis of DNA sequences of three mitochondrial and four nuclear gene fragments in an extended sampling of Malagasy skinks, and microtomographic analyses of osteology of various burrowing taxa adapted to sand substrate. Based on our data we propose to (i) consider Sirenoscincus Sakata & Hikida, 2003, as junior synonym of Voeltzkowia Boettger, 1893; (ii) resurrect the genus name Grandidierina Mocquard, 1894, for four species previously included in Voeltzkowia; and (iii) consider Androngo Brygoo, 1982, as junior synonym of Pygomeles Grandidier, 1867. By supporting the clade consisting of the limbless Voeltzkowia mira and the forelimb-only taxa V. mobydick and V. yamagishii, our data indicate that full regression of limbs and eyes occurred in parallel twice in the genus Voeltzkowia (as hitherto defined) that we consider as a sand-swimming ecomorph: in the Voeltzkowia clade sensu stricto the regression first affected the hindlimbs and subsequently the forelimbs, whereas the Grandidierina clade first regressed the forelimbs and subsequently the hindlimbs following the pattern prevalent in squamates. Timetree reconstructions for the Malagasy Scincidae contain a substantial amount of uncertainty due to the absence of suitable primary fossil calibrations. However, our preliminary reconstructions suggest rapid limb regression in Malagasy scincids with an estimated maximal duration of 6 MYr for a complete regression in Paracontias, and 4 and 8 MYr respectively for complete regression of forelimbs in Grandidierina and hindlimbs in Voeltzkowia. PMID:26042667

  13. Distinct patterns of desynchronized limb regression in malagasy scincine lizards (squamata, scincidae).

    PubMed

    Miralles, Aurélien; Hipsley, Christy A; Erens, Jesse; Gehara, Marcelo; Rakotoarison, Andolalao; Glaw, Frank; Müller, Johannes; Vences, Miguel

    2015-01-01

    Scincine lizards in Madagascar form an endemic clade of about 60 species exhibiting a variety of ecomorphological adaptations. Several subclades have adapted to burrowing and convergently regressed their limbs and eyes, resulting in a variety of partial and completely limbless morphologies among extant taxa. However, patterns of limb regression in these taxa have not been studied in detail. Here we fill this gap in knowledge by providing a phylogenetic analysis of DNA sequences of three mitochondrial and four nuclear gene fragments in an extended sampling of Malagasy skinks, and microtomographic analyses of osteology of various burrowing taxa adapted to sand substrate. Based on our data we propose to (i) consider Sirenoscincus Sakata & Hikida, 2003, as junior synonym of Voeltzkowia Boettger, 1893; (ii) resurrect the genus name Grandidierina Mocquard, 1894, for four species previously included in Voeltzkowia; and (iii) consider Androngo Brygoo, 1982, as junior synonym of Pygomeles Grandidier, 1867. By supporting the clade consisting of the limbless Voeltzkowia mira and the forelimb-only taxa V. mobydick and V. yamagishii, our data indicate that full regression of limbs and eyes occurred in parallel twice in the genus Voeltzkowia (as hitherto defined) that we consider as a sand-swimming ecomorph: in the Voeltzkowia clade sensu stricto the regression first affected the hindlimbs and subsequently the forelimbs, whereas the Grandidierina clade first regressed the forelimbs and subsequently the hindlimbs following the pattern prevalent in squamates. Timetree reconstructions for the Malagasy Scincidae contain a substantial amount of uncertainty due to the absence of suitable primary fossil calibrations. However, our preliminary reconstructions suggest rapid limb regression in Malagasy scincids with an estimated maximal duration of 6 MYr for a complete regression in Paracontias, and 4 and 8 MYr respectively for complete regression of forelimbs in Grandidierina and hindlimbs in Voeltzkowia.

  14. Texture-preserved penalized weighted least-squares reconstruction of low-dose CT image via image segmentation and high-order MRF modeling

    NASA Astrophysics Data System (ADS)

    Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong

    2016-03-01

    In this paper, we proposed a low-dose computed tomography (LdCT) image reconstruction method with the help of prior knowledge learning from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least squares (PWLS) algorithm was adopted for image reconstruction, where the penalty term was formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was firstly segmented into different tissue types by a feature vector quantization (FVQ) approach. Then for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learnt from the NdCT image via multiple-linear regression analysis. We also proposed a scheme to adaptively select the order of gMRF model for coefficients prediction. The tissue-specific gMRF patterns learnt from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to be more effective to preserve image textures than the conventional PWLS image reconstruction algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.

  15. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
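
    The irreducible-error computation described above can be sketched as follows; gradient boosting is used here only as a stand-in for the MARS and neural-network estimators discussed, and the two-parameter test function is invented.

    ```python
    # Sketch of an optimal-estimator analysis: estimate the irreducible error of a model
    # with two input parameters by approximating the conditional mean E[y | inputs].
    # Gradient boosting is a stand-in for the MARS / neural-network estimators; illustrative.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    n = 5000
    params = rng.normal(size=(n, 2))                     # model input parameters
    exact = np.sin(params[:, 0]) * np.exp(-params[:, 1] ** 2)
    y = exact + 0.1 * rng.normal(size=n)                 # "exact" unclosed term + unresolved part

    # Optimal estimator: best possible prediction from the chosen inputs
    cond_mean = cross_val_predict(GradientBoostingRegressor(), params, y, cv=5)

    irreducible = np.mean((y - cond_mean) ** 2)          # error no model on these inputs can beat
    some_model = 0.8 * np.sin(params[:, 0])              # an ad hoc functional form
    total = np.mean((y - some_model) ** 2)
    print(f"irreducible error {irreducible:.4f}, total error {total:.4f}, "
          f"functional error {total - irreducible:.4f}")
    ```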

  16. State-space decoding of primary afferent neuron firing rates

    NASA Astrophysics Data System (ADS)

    Wagenaar, J. B.; Ventura, V.; Weber, D. J.

    2011-02-01

    Kinematic state feedback is important for neuroprostheses to generate stable and adaptive movements of an extremity. State information, represented in the firing rates of populations of primary afferent (PA) neurons, can be recorded at the level of the dorsal root ganglia (DRG). Previous work in cats showed the feasibility of using DRG recordings to predict the kinematic state of the hind limb using reverse regression. Although accurate decoding results were attained, reverse regression does not make efficient use of the information embedded in the firing rates of the neural population. In this paper, we present decoding results based on state-space modeling, and show that it is a more principled and more efficient method for decoding the firing rates in an ensemble of PA neurons. In particular, we show that we can extract confounded information from neurons that respond to multiple kinematic parameters, and that including velocity components in the firing rate models significantly increases the accuracy of the decoded trajectory. We show that, on average, state-space decoding is twice as efficient as reverse regression for decoding joint and endpoint kinematics.
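
    A minimal sketch of the state-space idea, with a linear-Gaussian (Kalman filter) decoder applied to simulated firing rates, is shown below; the transition and tuning matrices are assumed known for brevity, unlike in the paper where they would be fitted to training data.

    ```python
    # Minimal Kalman-filter decoder sketch: kinematic state (position, velocity) decoded
    # from a population of firing rates through a linear observation model.
    # Simulated data and hand-picked matrices; illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    T, n_neurons = 500, 20
    A = np.array([[1.0, 0.1], [0.0, 0.95]])             # state transition (pos, vel)
    H = rng.normal(size=(n_neurons, 2))                  # tuning of each neuron to the state

    x = np.zeros((T, 2))
    for t in range(1, T):
        x[t] = A @ x[t - 1] + rng.normal(scale=[0.01, 0.1])
    rates = x @ H.T + rng.normal(scale=0.5, size=(T, n_neurons))

    # Kalman filter with known A, H and simple noise covariances
    Q = np.diag([0.01 ** 2, 0.1 ** 2])
    R = 0.25 * np.eye(n_neurons)
    xhat, P = np.zeros(2), np.eye(2)
    decoded = []
    for z in rates:
        xhat, P = A @ xhat, A @ P @ A.T + Q              # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
        xhat = xhat + K @ (z - H @ xhat)                 # update with observed firing rates
        P = (np.eye(2) - K @ H) @ P
        decoded.append(xhat.copy())

    decoded = np.array(decoded)
    print("velocity decoding r:", np.corrcoef(decoded[:, 1], x[:, 1])[0, 1])
    ```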

  17. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    PubMed

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

    Collecting and analyzing high frequency emission measurements has become increasingly common during the past decade because it yields significantly more information about formation conditions than regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements with engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application which uses data collected from a diesel bus under real-world driving conditions.

  18. VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA

    PubMed Central

    Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu

    2009-01-01

    We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. Particularly, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial is presented to illustrate the proposed methodology. PMID:20336190
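
    A minimal complete-data sketch of the adaptive LASSO step is given below; the missing-data machinery and the ICQ criterion described in the abstract are not reproduced, and the simulated design is invented.

    ```python
    # Adaptive-lasso sketch on complete data (the missing-data / ICQ machinery of the
    # abstract is not reproduced); adaptive weights come from an initial OLS fit.
    import numpy as np
    from sklearn.linear_model import LassoCV, LinearRegression

    rng = np.random.default_rng(3)
    n, p = 200, 10
    X = rng.normal(size=(n, p))
    beta = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])
    y = X @ beta + rng.normal(size=n)

    # Step 1: initial consistent estimate gives adaptive weights w_j = 1 / |beta_init_j|
    beta_init = LinearRegression().fit(X, y).coef_
    w = 1.0 / np.abs(beta_init)

    # Step 2: lasso on rescaled covariates X_j / w_j, then rescale coefficients back
    lasso = LassoCV(cv=5).fit(X / w, y)
    beta_adaptive = lasso.coef_ / w
    print(np.round(beta_adaptive, 2))
    ```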

  19. Kernel Regression Estimation of Fiber Orientation Mixtures in Diffusion MRI

    PubMed Central

    Cabeen, Ryan P.; Bastin, Mark E.; Laidlaw, David H.

    2016-01-01

    We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524

  20. Adaptable history biases in human perceptual decisions.

    PubMed

    Abrahamyan, Arman; Silva, Laura Luz; Dakin, Steven C; Carandini, Matteo; Gardner, Justin L

    2016-06-21

    When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject's default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.
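
    The logistic regression model with choice-history regressors can be sketched roughly as below; the simulated observer, its lose-switch bias, and the regressor coding are illustrative assumptions rather than the authors' exact model.

    ```python
    # Sketch of a history-aware psychometric model: logistic regression of the current
    # choice on stimulus strength plus previous choice split by previous success/failure.
    # Simulated observer with a "switch after failure" bias; illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 5000
    stim = rng.uniform(-1, 1, size=n)                   # signed stimulus strength
    choice = np.zeros(n, dtype=int)                     # 0/1 choices
    correct = np.zeros(n, dtype=int)
    for t in range(n):
        p_right = 1 / (1 + np.exp(-4 * stim[t]))
        if t > 0 and correct[t - 1] == 0:               # lose-switch bias
            p_right = 0.7 * p_right + 0.3 * (1 - choice[t - 1])
        choice[t] = rng.random() < p_right
        correct[t] = choice[t] == (stim[t] > 0)

    prev_choice = np.roll(2 * choice - 1, 1)            # +1 / -1 coding of previous choice
    prev_win = prev_choice * np.roll(correct, 1)        # previous choice after a success
    prev_fail = prev_choice * np.roll(1 - correct, 1)   # previous choice after a failure
    X = np.column_stack([stim, prev_win, prev_fail])[1:]

    model = LogisticRegression().fit(X, choice[1:])
    print(dict(zip(["stimulus", "after-win", "after-fail"], np.round(model.coef_[0], 2))))
    ```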

  1. Relationships between Adaptive Behaviours, Personal Factors, and Participation of Young Children.

    PubMed

    Killeen, Hazel; Shiel, Agnes; Law, Mary; O'Donovan, Donough J; Segurado, Ricardo; Anaby, Dana

    2017-12-19

    To examine the extent to which personal factors (age, socioeconomic grouping, and preterm birth) and adaptive behaviour explain the participation patterns of young children. Sixty-five children aged 2-5 years with and without a history of preterm birth and no physical or intellectual disability were selected by convenience sampling from Galway University Hospital, Ireland. Interviews with parents were conducted using the Adaptive Behaviour Assessment System, Second Edition (ABAS-II) and the Assessment of Preschool Children's Participation (APCP). Linear regression models were used to identify associations between the ABAS-II scores, personal factors, and APCP scores for intensity and diversity of participation. Adaptive behaviour explained 21% of variance in intensity of play, 18% in intensity of Skill Development, 7% in intensity of Active Physical Recreation, and 6% in intensity of Social Activities controlling for age, preterm birth, and socioeconomic grouping. Age explained between 1% and 11% of variance in intensity of participation scores. Adaptive behaviour (13%), age (17%), and socioeconomic grouping (5%) explained a significant percentage of variance in diversity of participation controlling for the other variables. Adaptive behaviour had a unique contribution to children's intensity and diversity of participation, suggesting its importance.

  2. Adaptive adjustment of interval predictive control based on combined model and application in shell brand petroleum distillation tower

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

    Constraints on the optimization objective often cannot be met when predictive control is applied to an industrial production process, and the online predictive controller will then fail to find a feasible or globally optimal solution. To solve this problem, a nonlinear programming method based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model is used to analyse the feasibility of constrained predictive control: a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given when the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatic adjustment of the infeasible interval range, expanding the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.

  3. SU-D-BRB-05: Quantum Learning for Knowledge-Based Response-Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Naqa, I; Ten, R

    Purpose: There is tremendous excitement in radiotherapy about applying data-driven methods to develop personalized clinical decisions for real-time response-based adaptation. However, classical statistical learning methods fall short in efficiency and in their ability to predict outcomes under conditions of uncertainty and incomplete information. Therefore, we are investigating physics-inspired machine learning approaches by utilizing quantum principles for developing a robust framework to dynamically adapt treatments to individual patient’s characteristics and optimize outcomes. Methods: We studied 88 liver SBRT patients with 35 on non-adaptive and 53 on adaptive protocols. Adaptation was based on liver function using a split-course of 3+2 fractions with a month break. The radiotherapy environment was modeled as a Markov decision process (MDP) of baseline and one month into treatment states. The patient environment was modeled by a 5-variable state represented by the patient’s clinical and dosimetric covariates. For comparison of classical and quantum learning methods, decision-making to adapt at one month was considered. The MDP objective was defined by the complication-free tumor control (P+ = TCP × (1 − NTCP)). A simple regression model represented the state-action mapping. A single bit in the classical MDP and a qubit of two superimposed states in the quantum MDP represented the decision actions. Classical decision selection was done using reinforcement Q-learning, and quantum searching was performed using Grover’s algorithm, which applies uniform superposition over possible states and yields quadratic speed-up. Results: Classical/quantum MDPs suggested adaptation (probability amplitude ≥0.5) 79% of the time for split-courses and 100% for continuous-courses. However, the classical MDP had an average adaptation probability of 0.5±0.22 while the quantum algorithm reached 0.76±0.28. In cases where adaptation failed, classical MDP yielded 0.31±0.26 average amplitude while the quantum approach averaged a more optimistic 0.57±0.4, but with high phase fluctuations. Conclusion: Our results demonstrate that quantum machine learning approaches provide a feasible and promising framework for real-time and sequential clinical decision-making in adaptive radiotherapy.

  4. Teacher consultation and coaching within mental health practice: classroom and child effects in urban elementary schools.

    PubMed

    Cappella, Elise; Hamre, Bridget K; Kim, Ha Yeon; Henry, David B; Frazier, Stacy L; Atkins, Marc S; Schoenwald, Sonja K

    2012-08-01

    To examine effects of a teacher consultation and coaching program delivered by school and community mental health professionals on change in observed classroom interactions and child functioning across one school year. Thirty-six classrooms within 5 urban elementary schools (87% Latino, 11% Black) were randomly assigned to intervention (training + consultation/coaching) and control (training only) conditions. Classroom and child outcomes (n = 364; 43% girls) were assessed in the fall and spring. Random effects regression models showed main effects of intervention on teacher-student relationship closeness, academic self-concept, and peer victimization. Results of multiple regression models showed levels of observed teacher emotional support in the fall moderated intervention impact on emotional support at the end of the school year. Results suggest teacher consultation and coaching can be integrated within existing mental health activities in urban schools and impact classroom effectiveness and child adaptation across multiple domains. © 2012 American Psychological Association

  5. Discrete wavelength selection for the optical readout of a metamaterial biosensing system for glucose concentration estimation via a support vector regression model.

    PubMed

    Teutsch, T; Mesch, M; Giessen, H; Tarin, C

    2015-01-01

    In this contribution, a method to select discrete wavelengths that allow an accurate estimation of the glucose concentration in a biosensing system based on metamaterials is presented. The sensing concept is adapted to the particular application of ophthalmic glucose sensing by covering the metamaterial with a glucose-sensitive hydrogel, and the sensor readout is performed optically. Because a spectrometer is not suitable in a mobile context, a few discrete wavelengths must be selected to estimate the glucose concentration. The developed selection methods are based on nonlinear support vector regression (SVR) models. Two selection methods are compared, and it is shown that wavelengths selected by a sequential forward feature selection algorithm improve the estimation. The presented method can be easily applied to different metamaterial layouts and hydrogel configurations.
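
    A rough sketch of wavelength selection by sequential forward selection wrapped around an SVR is shown below; scikit-learn's SequentialFeatureSelector stands in for the authors' algorithm, and the synthetic spectra and glucose values are invented.

    ```python
    # Sketch: pick a few discrete wavelengths for glucose estimation by wrapping
    # sequential forward selection around a support vector regression (SVR).
    # Synthetic spectra; illustrative only.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n_samples, n_wavelengths = 300, 50
    glucose = rng.uniform(0, 20, size=n_samples)                  # mmol/L, synthetic
    wl_response = np.exp(-0.5 * ((np.arange(n_wavelengths) - 30) / 4) ** 2)
    spectra = glucose[:, None] * wl_response + rng.normal(scale=0.3,
                                                          size=(n_samples, n_wavelengths))

    svr = SVR(kernel="rbf", C=10.0)
    selector = SequentialFeatureSelector(svr, n_features_to_select=4,
                                         direction="forward", cv=5).fit(spectra, glucose)
    picked = np.flatnonzero(selector.get_support())
    score = cross_val_score(svr, spectra[:, picked], glucose, cv=5, scoring="r2").mean()
    print("selected wavelength indices:", picked, " cross-validated R^2:", round(score, 3))
    ```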

  6. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can be increasing exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish the model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.

  7. Use of Two-Part Regression Calibration Model to Correct for Measurement Error in Episodically Consumed Foods in a Single-Replicate Study Design: EPIC Case Study

    PubMed Central

    Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and the empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation on Cancer and Nutrition (EPIC) study. In the EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjusting for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to response distribution, nonlinearity, and covariate inclusion in specifying the calibration model. PMID:25402487
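
    The two-part idea can be sketched roughly as follows, with a logistic model for any consumption and a linear model for the log amount among consumers; the simulated intake data and covariates are invented, and the later survival-model stage of the calibration is omitted.

    ```python
    # Two-part calibration sketch for an episodically consumed food: a logistic model for
    # the probability of any intake and a linear model for the (log) amount when positive,
    # combined into a calibrated "usual intake" prediction. Simulated data; illustrative.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 2000
    ffq = rng.gamma(2.0, 50.0, size=n)                      # error-prone questionnaire intake
    age = rng.uniform(30, 70, size=n)
    Z = sm.add_constant(np.column_stack([ffq, age]))        # covariates used in both parts

    # 24-hour recall reference measurement with excess zeros
    p_consume = 1 / (1 + np.exp(-(-1.0 + 0.01 * ffq)))
    consumed = rng.random(n) < p_consume
    recall = np.where(consumed, np.exp(3.0 + 0.004 * ffq + rng.normal(0, 0.4, n)), 0.0)

    # Part 1: probability of any consumption; Part 2: log amount among consumers
    part1 = sm.Logit(consumed.astype(float), Z).fit(disp=0)
    part2 = sm.OLS(np.log(recall[consumed]), Z[consumed]).fit()

    # Calibrated exposure = P(consume | Z) * E[amount | consume, Z] (lognormal mean)
    usual = part1.predict(Z) * np.exp(part2.predict(Z) + part2.mse_resid / 2)
    print("calibrated intake, first 5 subjects:", np.round(usual[:5], 1))
    ```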

  8. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    PubMed

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

    There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on mpMRI data. Final models were fitted using three regression algorithms including multivariate adaptive regression spline (MARS), polynomial regression (PR) and generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved an RMSE of 1.06 (± 0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
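
    One of the simpler model classes mentioned above, polynomial regression evaluated with leave-one-out cross-validation and RMSE, can be sketched as below; the mpMRI-like features and cell densities are synthetic stand-ins.

    ```python
    # Sketch of voxel-wise cell-density prediction from mpMRI-like features: a polynomial
    # regression evaluated with leave-one-out cross-validation and RMSE. Synthetic data.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(7)
    n_voxels = 300
    adc = rng.uniform(0.5, 2.0, size=n_voxels)               # apparent diffusion coefficient
    t2 = rng.uniform(50, 150, size=n_voxels)                  # T2-weighted intensity
    density = 4.0 - 1.5 * adc + 0.01 * t2 + rng.normal(scale=0.3, size=n_voxels)  # 10^3 cells/mm^2
    X = np.column_stack([adc, t2])

    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    pred = cross_val_predict(model, X, density, cv=LeaveOneOut())
    rmse = np.sqrt(np.mean((pred - density) ** 2))
    print(f"leave-one-out RMSE: {rmse:.2f} x 10^3 cells/mm^2")
    ```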

  9. An Efficient Recommendation Filter Model on Smart Home Big Data Analytics for Enhanced Living Environments.

    PubMed

    Chen, Hao; Xie, Xiaoyun; Shu, Wanneng; Xiong, Naixue

    2016-10-15

    With the rapid growth of wireless sensor applications, the user interfaces and configurations of smart homes have become so complicated and inflexible that users usually have to spend a great amount of time studying them and adapting to their expected operation. In order to improve user experience, a weighted hybrid recommender system based on a Kalman Filter model is proposed to predict what users might want to do next, especially when users are located in a smart home with an enhanced living environment. Specifically, a weight hybridization method is introduced which combines contextual collaborative filtering and contextual content-based recommendations. This method inherits the advantages of the optimum regression and the stability features of the proposed adaptive Kalman Filter model, and it can predict and revise the weight of each system component dynamically. Experimental results show that the hybrid recommender system can optimize the distribution of weights of each component, and achieve more reasonable recall and precision rates.

  10. An Efficient Recommendation Filter Model on Smart Home Big Data Analytics for Enhanced Living Environments

    PubMed Central

    Chen, Hao; Xie, Xiaoyun; Shu, Wanneng; Xiong, Naixue

    2016-01-01

    With the rapid growth of wireless sensor applications, the user interfaces and configurations of smart homes have become so complicated and inflexible that users usually have to spend a great amount of time studying them and adapting to their expected operation. In order to improve user experience, a weighted hybrid recommender system based on a Kalman Filter model is proposed to predict what users might want to do next, especially when users are located in a smart home with an enhanced living environment. Specifically, a weight hybridization method is introduced which combines contextual collaborative filtering and contextual content-based recommendations. This method inherits the advantages of the optimum regression and the stability features of the proposed adaptive Kalman Filter model, and it can predict and revise the weight of each system component dynamically. Experimental results show that the hybrid recommender system can optimize the distribution of weights of each component, and achieve more reasonable recall and precision rates. PMID:27754456

  11. Acute adaptation of the vestibuloocular reflex: signal processing by floccular and ventral parafloccular Purkinje cells.

    PubMed

    Hirata, Y; Highstein, S M

    2001-05-01

    The gain of the vertical vestibuloocular reflex (VVOR), defined as eye velocity/head velocity was adapted in squirrel monkeys by employing visual-vestibular mismatch stimuli. VVOR gain, measured in the dark, could be trained to values between 0.4 and 1.5. Single-unit activity of vertical zone Purkinje cells was recorded from the flocculus and ventral paraflocculus in alert squirrel monkeys before and during the gain change training. Our goal was to evaluate the site(s) of learning of the gain change. To aid in the evaluation, a model of the vertical optokinetic reflex (VOKR) and VVOR was constructed consisting of floccular and nonfloccular systems divided into subsystems based on the known anatomy and input and output parameters. Three kinds of input to floccular Purkinje cells via mossy fibers were explicitly described, namely vestibular, visual (retinal slip), and efference copy of eye movement. The characteristics of each subsystem (gain and phase) were identified at different VOR gains by reconstructing single-unit activity of Purkinje cells during VOKR and VVOR with multiple linear regression models consisting of sensory input and motor output signals. Model adequacy was checked by evaluating the residual following the regressions and by predicting Purkinje cells' activity during visual-vestibular mismatch paradigms. As a result, parallel changes in identified characteristics with VVOR adaptation were found in the prefloccular/floccular subsystem that conveys vestibular signals and in the nonfloccular subsystem that conveys vestibular signals, while no change was found in other subsystems, namely prefloccular/floccular subsystems conveying efference copy or visual signals, nonfloccular subsystem conveying visual signals, and postfloccular subsystem transforming Purkinje cell activity to eye movements. The result suggests multiple sites for VVOR motor learning including both flocculus and nonflocculus pathways. The gain change in the nonfloccular vestibular subsystem was in the correct direction to cause VOR gain adaptation while the change in the prefloccular/floccular vestibular subsystem was incorrect (anti-compensatory). This apparent incorrect directional change might serve to prevent instability of the VOR caused by positive feedback via the efference copy pathway.

  12. Multiple Imputation For Combined-Survey Estimation With Incomplete Regressors In One But Not Both Surveys

    PubMed Central

    Rendall, Michael S.; Ghosh-Dastidar, Bonnie; Weden, Margaret M.; Baker, Elizabeth H.; Nazarov, Zafar

    2013-01-01

    Within-survey multiple imputation (MI) methods are adapted to pooled-survey regression estimation where one survey has more regressors, but typically fewer observations, than the other. This adaptation is achieved through: (1) larger numbers of imputations to compensate for the higher fraction of missing values; (2) model-fit statistics to check the assumption that the two surveys sample from a common universe; and (3) specifying the analysis model completely from variables present in the survey with the larger set of regressors, thereby excluding variables never jointly observed. In contrast to the typical within-survey MI context, cross-survey missingness is monotonic and easily satisfies the Missing At Random (MAR) assumption needed for unbiased MI. Large efficiency gains and substantial reduction in omitted variable bias are demonstrated in an application to sociodemographic differences in the risk of child obesity estimated from two nationally-representative cohort surveys. PMID:24223447

  13. The physical, behavioral, and psychosocial consequences of Internet use in college students.

    PubMed

    Clark, Deborah J; Frith, Karen H; Demi, Alice S

    2004-01-01

    The purposes of this study were to identify the physical, behavioral, and psychosocial consequences of Internet use in undergraduate college students; and to evaluate whether time, social norms, and adopter category predict the consequences of Internet use. Rogers' model for studying consequences of innovation was adapted for this study. A descriptive, correlational design was used. Convenience sampling yielded 293 undergraduate students who answered the online survey. Consequences of Internet use were assessed with the researcher-developed instrument, the Internet Consequences Scale (ICONS). Mean scores on the behavioral and psychosocial subscales of the ICONS indicated positive consequences of Internet use, while the physical consequences subscale revealed negative consequences. Multiple regression analyses revealed a small, but significant, amount of variance in consequences of Internet use that could be explained by time, social norms, and adopter category, thus supporting the adapted model for the study of consequences of Internet use in college students.

  14. Conflict in a changing climate

    NASA Astrophysics Data System (ADS)

    Carleton, T.; Hsiang, S. M.; Burke, M.

    2016-05-01

    A growing body of research illuminates the role that changes in climate have played in violent conflict and social instability in the recent past. Across a diversity of contexts, high temperatures and irregular rainfall have been causally linked to a range of conflict outcomes. These findings can be paired with climate model output to generate projections of the impact future climate change may have on conflicts such as crime and civil war. However, there are large degrees of uncertainty in such projections, arising from (i) the statistical uncertainty involved in regression analysis, (ii) divergent climate model predictions, and (iii) the unknown ability of human societies to adapt to future climate change. In this article, we review the empirical evidence of the climate-conflict relationship, provide insight into the likely extent and feasibility of adaptation to climate change as it pertains to human conflict, and discuss new methods that can be used to provide projections that capture these three sources of uncertainty.

  15. Nonlinear dynamics of team performance and adaptability in emergency response.

    PubMed

    Guastello, Stephen J

    2010-04-01

    The impact of team size and performance feedback on adaptation levels and performance of emergency response (ER) teams was examined to introduce a metric for quantifying adaptation levels based on nonlinear dynamical systems (NDS) theory. NDS principles appear in reports surrounding Hurricane Katrina, earthquakes, floods, a disease epidemic, and the Southeast Asian tsunami. They are also intrinsic to coordination within teams, adaptation levels, and performance in dynamic decision processes. Performance was measured in a dynamic decision task in which ER teams of different sizes worked against an attacker who was trying to destroy a city (total N = 225 undergraduates). The complexity of teams' and attackers' adaptation strategies and the role of the opponents' performance were assessed by nonlinear regression analysis. An optimal group size for team performance was identified. Teams were more readily influenced by the attackers' performance than vice versa. The adaptive capabilities of attackers and teams were impaired by their opponents in some conditions. ER teams should be large enough to contribute a critical mass of ideas but not so large that coordination would be compromised. ER teams used self-organized strategies that could have been more adaptive, whereas attackers used chaotic strategies. The model and results are applicable to ER processes or training maneuvers involving dynamic decisions but could be limited to nonhierarchical groups.

  16. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on a support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
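
    One of the four approaches named above, standardized regression coefficients, can be sketched as follows; the sampled hydrologic parameters and the runoff response are synthetic stand-ins for Community Land Model output.

    ```python
    # Sketch of a standardized-regression-coefficient sensitivity analysis: fit a linear
    # model to standardized input parameters and a standardized model response, then rank
    # parameters by the magnitude of their coefficients. Synthetic response; illustrative.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(8)
    n_samples, n_params = 500, 10
    params = rng.uniform(0, 1, size=(n_samples, n_params))        # sampled hydrologic parameters
    runoff = (3.0 * params[:, 0] - 2.0 * params[:, 3]
              + 0.5 * params[:, 7] + rng.normal(scale=0.2, size=n_samples))

    Xs = StandardScaler().fit_transform(params)
    ys = (runoff - runoff.mean()) / runoff.std()
    src = LinearRegression().fit(Xs, ys).coef_                    # standardized regression coefficients
    ranking = np.argsort(-np.abs(src))
    print("parameter ranking by |SRC|:", ranking)
    print("SRC values:", np.round(src[ranking], 2))
    ```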

  17. Machine Learning Intermolecular Potentials for 1,3,5-Triamino-2,4,6-trinitrobenzene (TATB) Using Symmetry-Adapted Perturbation Theory

    DTIC Science & Technology

    2018-04-25

    In this report, intermolecular potentials for 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) are developed using machine learning techniques. Three potentials, based on support vector regression, kernel ridge regression, and a neural network, are fit using symmetry-adapted perturbation theory.

  18. Development of Ensemble Model Based Water Demand Forecasting Model

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop

    2014-05-01

    The Smart Water Grid (SWG) concept has emerged globally over the last decade and has gained significant recognition in South Korea. In particular, growing interest in water demand forecasting and optimal pump operation has led to various studies on energy saving and improvement of water supply reliability. Existing water demand forecasting models fall into two groups according to how they model and predict time-series behavior: one considers embedded patterns such as seasonality, periodicity, and trends, while the other is an autoregressive model using short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of these models is that, because the system is nonlinear, predictability of water demand at sub-daily scales is limited. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting which allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, the bagging approach introduced by Breiman (1996), used to derive weighting factors for the individual models. The individual forecasting models used in this study are linear regression, polynomial regression, multivariate adaptive regression splines (MARS), and support vector machines (SVM). The concepts are demonstrated through application to observations from water plants at several locations in South Korea. Keywords: water demand, nonlinear model, ensemble forecasting model, uncertainty. Acknowledgements: This work is supported by the Korea Ministry of Environment under "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)".
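
    A rough sketch of an inverse-error-weighted ensemble in the spirit described above is given below; gradient boosting stands in for MARS, the bootstrap-based weighting is a simplified reading of the bagging step, and the hourly demand series is synthetic.

    ```python
    # Sketch of an ensemble water-demand forecast: several individual regressors are
    # combined with weights derived from bootstrap (bagging-style) validation error.
    # Synthetic hourly demand data; illustrative only.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(9)
    hours = np.arange(24 * 60)
    demand = (100 + 30 * np.sin(2 * np.pi * hours / 24)           # daily cycle
              + 10 * np.sin(2 * np.pi * hours / (24 * 7))         # weekly cycle
              + rng.normal(scale=5, size=hours.size))
    X = np.column_stack([np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24),
                         np.roll(demand, 1)])[1:]                  # harmonics + lag-1 demand
    y = demand[1:]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.25)

    models = [LinearRegression(), SVR(C=100.0), GradientBoostingRegressor()]
    errors = []
    for m in models:                                               # bagging-style error estimate
        idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)
        m.fit(X_tr[idx], y_tr[idx])
        oob = np.setdiff1d(np.arange(len(X_tr)), idx)              # out-of-bag samples
        errors.append(np.mean((m.predict(X_tr[oob]) - y_tr[oob]) ** 2))

    weights = 1 / np.array(errors)                                 # inverse-error weighting
    weights /= weights.sum()
    ensemble = sum(w * m.predict(X_te) for w, m in zip(weights, models))
    print("ensemble RMSE:", round(float(np.sqrt(np.mean((ensemble - y_te) ** 2))), 2))
    ```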

  19. Molecular proxies for climate maladaptation in a long-lived tree (Pinus pinaster Aiton, Pinaceae).

    PubMed

    Jaramillo-Correa, Juan-Pablo; Rodríguez-Quilón, Isabel; Grivet, Delphine; Lepoittevin, Camille; Sebastiani, Federico; Heuertz, Myriam; Garnier-Géré, Pauline H; Alía, Ricardo; Plomion, Christophe; Vendramin, Giovanni G; González-Martínez, Santiago C

    2015-03-01

    Understanding adaptive genetic responses to climate change is a main challenge for preserving biological diversity. Successful predictive models for climate-driven range shifts of species depend on the integration of information on adaptation, including that derived from genomic studies. Long-lived forest trees can experience substantial environmental change across generations, which results in a much more prominent adaptation lag than in annual species. Here, we show that candidate-gene SNPs (single nucleotide polymorphisms) can be used as predictors of maladaptation to climate in maritime pine (Pinus pinaster Aiton), an outcrossing long-lived keystone tree. A set of 18 SNPs potentially associated with climate, 5 of them involving amino acid-changing variants, were retained after performing logistic regression, latent factor mixed models, and Bayesian analyses of SNP-climate correlations. These relationships identified temperature as an important adaptive driver in maritime pine and highlighted that selective forces are operating differentially in geographically discrete gene pools. The frequency of the locally advantageous alleles at these selected loci was strongly correlated with survival in a common garden under extreme (hot and dry) climate conditions, which suggests that candidate-gene SNPs can be used to forecast the likely destiny of natural forest ecosystems under climate change scenarios. Differential levels of forest decline are anticipated for distinct maritime pine gene pools. Geographically defined molecular proxies for climate adaptation will thus critically enhance the predictive power of range-shift models and help establish mitigation measures for long-lived keystone forest trees in the face of impending climate change. Copyright © 2015 by the Genetics Society of America.

  20. Molecular Proxies for Climate Maladaptation in a Long-Lived Tree (Pinus pinaster Aiton, Pinaceae)

    PubMed Central

    Jaramillo-Correa, Juan-Pablo; Rodríguez-Quilón, Isabel; Grivet, Delphine; Lepoittevin, Camille; Sebastiani, Federico; Heuertz, Myriam; Garnier-Géré, Pauline H.; Alía, Ricardo; Plomion, Christophe; Vendramin, Giovanni G.; González-Martínez, Santiago C.

    2015-01-01

    Understanding adaptive genetic responses to climate change is a main challenge for preserving biological diversity. Successful predictive models for climate-driven range shifts of species depend on the integration of information on adaptation, including that derived from genomic studies. Long-lived forest trees can experience substantial environmental change across generations, which results in a much more prominent adaptation lag than in annual species. Here, we show that candidate-gene SNPs (single nucleotide polymorphisms) can be used as predictors of maladaptation to climate in maritime pine (Pinus pinaster Aiton), an outcrossing long-lived keystone tree. A set of 18 SNPs potentially associated with climate, 5 of them involving amino acid-changing variants, were retained after performing logistic regression, latent factor mixed models, and Bayesian analyses of SNP–climate correlations. These relationships identified temperature as an important adaptive driver in maritime pine and highlighted that selective forces are operating differentially in geographically discrete gene pools. The frequency of the locally advantageous alleles at these selected loci was strongly correlated with survival in a common garden under extreme (hot and dry) climate conditions, which suggests that candidate-gene SNPs can be used to forecast the likely destiny of natural forest ecosystems under climate change scenarios. Differential levels of forest decline are anticipated for distinct maritime pine gene pools. Geographically defined molecular proxies for climate adaptation will thus critically enhance the predictive power of range-shift models and help establish mitigation measures for long-lived keystone forest trees in the face of impending climate change. PMID:25549630

  1. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

    Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques are regression methods used to build prediction models; however, the accuracy of the results is affected by many factors. In the present paper, the influence of sample surface roughness on the mathematical model for NIR quantitative analysis of wood density was studied. Experiments showed that predictions were good when the roughness of the predicted samples was consistent with that of the calibration samples; otherwise the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.

  2. Neurocognitive, Social-Behavioral, and Adaptive Functioning in Preschool Children with Mild to Moderate Kidney Disease

    PubMed Central

    Hooper, Stephen R.; Gerson, Arlene C.; Johnson, Rebecca J.; Mendley, Susan R.; Shinnar, Shlomo; Lande, Marc B.; Matheson, Matthew B.; Gipson, Debbie S.; Morgenstern, Bruce; Warady, Bradley A.; Furth, Susan L.

    2016-01-01

    Objective The negative impact of End Stage Kidney Disease on cognitive function in children is well established, but no studies have examined the neurocognitive, social-behavioral, and adaptive behavior skills of preschool children with mild to moderate chronic kidney disease (CKD). Methods Participants included 124 preschool children with mild to moderate CKD, ages 12-68 months (median = 3.7 years), and an associated mean glomerular filtration rate (GFR) of 50.0 ml/min per 1.73 m². In addition to level of function and percent of participants scoring ≥ 1 SD below the test mean, regression models examined the associations between biomarkers of CKD (GFR, anemia, hypertension, seizures, abnormal birth history) and Developmental Level/IQ, attention regulation, and parent ratings of executive functions, social-behavior, and adaptive behaviors. Results Median scores for all measures were in the average range; however, 27% were deemed at-risk for a Developmental Level/IQ < 85, 20% were at-risk for attention variability, and parent ratings indicated 30% and 37% to be at-risk for executive dysfunction and adaptive behavior problems, respectively. Approximately 43% were deemed at-risk on two or more measures. None of the disease-related variables were significantly associated with these outcomes, although the presence of hypertension approached significance for attention variability (p < .09). Abnormal birth history and lower maternal education were significantly related to lower Developmental Level/IQ; seizures were related to lower parental ratings of executive function and adaptive behavior; and abnormal birth history was significantly related to lower ratings of adaptive behavior. When predicting risk status, the logistic regression did evidence both higher GFR and the lack of anemia to be associated with more intact Developmental Level/IQ. Conclusions These findings suggest relatively intact functioning for preschool children with mild to moderate CKD, but the need for ongoing developmental surveillance in this population remains warranted, particularly for those with abnormal birth histories, seizures, and heightened disease severity. PMID:26890559

  3. Adaptive data-driven models for estimating carbon fluxes in the Northern Great Plains

    USGS Publications Warehouse

    Wylie, B.K.; Fosnight, E.A.; Gilmanov, T.G.; Frank, A.B.; Morgan, J.A.; Haferkamp, Marshall R.; Meyers, T.P.

    2007-01-01

    Rangeland carbon fluxes are highly variable in both space and time. Given the expansive areas of rangelands, how rangelands respond to climatic variation, management, and soil potential is important to understanding carbon dynamics. Rangeland carbon fluxes associated with Net Ecosystem Exchange (NEE) were measured from multiple-year data sets at five flux tower locations in the Northern Great Plains. These flux tower measurements were combined with 1-km² spatial data sets of Photosynthetically Active Radiation (PAR), Normalized Difference Vegetation Index (NDVI), temperature, precipitation, seasonal NDVI metrics, and soil characteristics. Flux tower measurements were used to train and select variables for a rule-based piece-wise regression model. The accuracy and stability of the model were assessed through random cross-validation and cross-validation by site and year. Estimates of NEE were produced for each 10-day period during each growing season from 1998 to 2001. Growing season carbon flux estimates were combined with winter flux estimates to derive and map annual estimates of NEE. The rule-based piece-wise regression model is a dynamic, adaptive model that captures the relationships of the spatial data to NEE as conditions evolve throughout the growing season. The carbon dynamics in the Northern Great Plains proved to be in near equilibrium, serving as a small carbon sink in 1999 and as a small carbon source in 1998, 2000, and 2001. Patterns of carbon sinks and sources are very complex, with the carbon dynamics tilting toward sources in the drier west and toward sinks in the east and near the mountains in the extreme west. Significant local variability exists, which initial investigations suggest is likely related to local climate variability, soil properties, and management.

  4. Evolutionary grinding model for nanometric control of surface roughness for aspheric optical surfaces.

    PubMed

    Han, Jeong-Yeol; Kim, Sug-Whan; Han, Inwoo; Kim, Geon-Hee

    2008-03-17

    A new evolutionary grinding process model has been developed for nanometric control of material removal from an aspheric surface of Zerodur substrate. The model incorporates novel control features such as i) a growing database; ii) an evolving, multi-variable regression equation; and iii) an adaptive correction factor for target surface roughness (Ra) for the next machine run. This process model demonstrated a unique evolutionary controllability of machining performance, resulting in a final grinding accuracy (i.e., average difference between target and measured surface roughness) of -0.2 ± 2.3 (sigma) nm Ra over seven trial machine runs for target surface roughness values ranging from 115 nm to 64 nm Ra.
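
    The abstract describes a control loop rather than a specific algorithm, so the sketch below is only an illustration of the three stated ingredients: a growing database of runs, a regression equation refitted after every run, and an adaptive correction factor applied to the next target Ra. The machine parameters ("feed", "depth"), the simulated grinder, and all numeric constants are hypothetical stand-ins, not the authors' process.

```python
"""Illustrative sketch of an evolving grinding-process model (assumptions:
hypothetical machine parameters 'feed' and 'depth'; a simulated machine
stands in for the real grinder)."""
import numpy as np

rng = np.random.default_rng(42)

def machine(feed, depth):
    """Stand-in for the real grinder: returns a measured roughness Ra (nm)."""
    return 30.0 + 0.8 * feed + 1.5 * depth + rng.normal(0, 2)

X, y = [], []                # growing database of (settings, measured Ra)
correction = 0.0             # adaptive correction factor (nm)

# seed the database with a few exploratory runs
for feed, depth in [(40, 10), (60, 20), (80, 30)]:
    X.append([feed, depth]); y.append(machine(feed, depth))

for target_ra in [115, 100, 90, 80, 72, 66, 64]:      # nm Ra targets per run
    # refit the multi-variable regression on all data collected so far
    A = np.column_stack([np.ones(len(X)), np.array(X)])
    coef, *_ = np.linalg.lstsq(A, np.array(y), rcond=None)

    # choose settings whose predicted Ra matches the corrected target
    goal = target_ra - correction
    pred = lambda f, d: coef[0] + coef[1] * f + coef[2] * d
    grid = [(f, d) for f in range(20, 101, 5) for d in range(5, 41, 5)]
    feed, depth = min(grid, key=lambda s: abs(pred(*s) - goal))

    measured = machine(feed, depth)
    X.append([feed, depth]); y.append(measured)        # grow the database
    correction += 0.5 * (measured - target_ra)          # update the correction factor
    print(f"target {target_ra:5.1f}  measured {measured:6.1f}  correction {correction:+5.2f}")
```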

  5. Energetic benefits and adaptations in mammalian limbs: Scale effects and selective pressures.

    PubMed

    Kilbourne, Brandon M; Hoffman, Louwrens C

    2015-06-01

    Differences in limb size and shape are fundamental to mammalian morphological diversity; however, their relevance to locomotor costs has long been subject to debate. In particular, it remains unknown if scale effects in whole limb morphology could partially underlie decreasing mass-specific locomotor costs with increasing limb length. Whole fore- and hindlimb inertial properties reflecting limb size and shape (moment of inertia (MOI), mass, mass distribution, and natural frequency) were regressed against limb length for 44 species of quadrupedal mammals. Limb mass, MOI, and center of mass position are negatively allometric, having a strong potential for lowering mass-specific locomotor costs in large terrestrial mammals. Negative allometry of limb MOI results in a 40% reduction in MOI relative to isometry's prediction for our largest sampled taxa. However, fitting regression residuals to adaptive diversification models reveals that codiversification of limb mass, limb length, and body mass likely results from selection for differing locomotor modes of running, climbing, digging, and swimming. The observed allometric scaling does not result from selection for energetically beneficial whole limb morphology with increasing size. Instead, our data suggest that it is a consequence of differing morphological adaptations and body size distributions among quadrupedal mammals, highlighting the role of differing limb functions in mammalian evolution. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.

  6. Molecular and Neuroendocrine Mechanisms of Avian Seasonal Reproduction.

    PubMed

    Tamai, T Katherine; Yoshimura, Takashi

    2017-01-01

    Animals living outside tropical zones experience seasonal changes in the environment and, accordingly, adapt their physiology and behavior in reproduction, molting, and migration. Subtropical birds are excellent models for the study of seasonal reproduction because of their rapid and dramatic response to changes in photoperiod. For example, testicular weight typically changes more than 100-fold. In birds, the eyes are not necessary for seasonal reproduction, and light is instead perceived by deep brain photoreceptors. Functional genomic analysis has revealed that long day (LD)-induced thyrotropin from the pars tuberalis of the pituitary gland causes local thyroid hormone (TH) activation within the mediobasal hypothalamus. This local bioactive TH, triiodothyronine (T3), appears to regulate seasonal gonadotropin-releasing hormone (GnRH) secretion through morphological changes in neuro-glial interactions. GnRH, in turn, stimulates gonadotropin secretion and hence gonadal development under LD conditions. In marked contrast, low temperatures accelerate short day (SD)-induced testicular regression in winter. Interestingly, low temperatures increase circulating levels of T3 to support adaptive thermogenesis, but this induction of T3 also triggers the apoptosis of germ cells by activating genes involved in metamorphosis. This apparent contradiction in the role of TH has recently been clarified. Central activation of TH during spring results in testicular growth, while peripheral activation of TH during winter regulates adaptive thermogenesis and testicular regression.

  7. Preliminary Survey on TRY Forest Traits and Growth Index Relations - New Challenges

    NASA Astrophysics Data System (ADS)

    Lyubenova, Mariyana; Kattge, Jens; van Bodegom, Peter; Chikalanov, Alexandre; Popova, Silvia; Zlateva, Plamena; Peteva, Simona

    2016-04-01

    Forest ecosystems provide critical ecosystem goods and services, including food, fodder, water, shelter, nutrient cycling, and cultural and recreational value. Forests also store carbon, provide habitat for a wide range of species, and help alleviate land degradation and desertification. Thus they have a potentially significant role to play in climate change adaptation planning through maintaining ecosystem services and providing livelihood options. Therefore the study of forest traits is an important issue not just for individual countries but for the planet as a whole. We need to know which functional relations between forest traits the TRY database can express and how significant they will be for global modeling and IPBES. The study of biodiversity characteristics at all levels and of the functional links between them is extremely important for the selection of key indicators for assessing biodiversity and ecosystem services for sustainable natural capital control. By comparing the available information in the tree databases TRY, ITR (International Tree Ring), and SP-PAM, 42 tree species were selected for the trait analyses. The dependence between location characteristics (latitude, longitude, altitude, annual precipitation, annual temperature, and soil type) and forest traits (specific leaf area, leaf weight ratio, wood density, and growth index) is studied by multivariate regression analysis (RDA) using the statistical software package Canoco 4.5. The Pearson correlation coefficient (a measure of linear correlation), the Kendall rank correlation coefficient (a nonparametric measure of statistical dependence), and the Spearman correlation coefficient (a measure of the monotonic relationship between two variables) are calculated for each pair of variables (indexes) and species. After analysis of the above-mentioned correlation coefficients, one-dimensional linear regression models, multidimensional linear and nonlinear regression models, and multidimensional neural network models are built. The strongest dependence was obtained between It and WD. The research will support the work on: Strategic Plan for Biodiversity 2011-2020, modelling and implementation of ecosystem-based approaches to climate change adaptation and disaster risk reduction. Key words: Specific leaf area (SLA), Leaf weight ratio (LWR), Wood density (WD), Growth index (It)
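
    The correlation-screening step described above (Pearson, Kendall, and Spearman coefficients per pair of variables, followed by one-dimensional regression for the strongest pair) can be illustrated with a small sketch. The trait values below are synthetic placeholders, not TRY records, and the relation between It and WD is simulated only to show the workflow.

```python
# A minimal sketch of the correlation screening described above,
# using synthetic trait data (SLA, WD, It values are illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 42                                         # one record per selected species
wd = rng.normal(0.55, 0.1, n)                  # wood density
it = 2.0 - 1.5 * wd + rng.normal(0, 0.1, n)    # growth index, tied to WD here
sla = rng.normal(15, 3, n)                     # specific leaf area

for name, x in [("WD", wd), ("SLA", sla)]:
    r, _ = stats.pearsonr(x, it)
    tau, _ = stats.kendalltau(x, it)
    rho, _ = stats.spearmanr(x, it)
    print(f"It vs {name}: Pearson r={r:+.2f}  Kendall tau={tau:+.2f}  Spearman rho={rho:+.2f}")

# one-dimensional linear regression for the strongest pair (It ~ WD)
slope, intercept, r_value, p_value, _ = stats.linregress(wd, it)
print(f"It = {intercept:.2f} + {slope:.2f}*WD  (R^2 = {r_value**2:.2f}, p = {p_value:.1e})")
```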

  8. Assessing the accuracy of ANFIS, EEMD-GRNN, PCR, and MLR models in predicting PM2.5

    NASA Astrophysics Data System (ADS)

    Ausati, Shadi; Amanollahi, Jamil

    2016-10-01

    Since Sanandaj is considered one of the polluted cities of Iran, prediction of any type of pollution, especially of suspended PM2.5 particles, which cause many diseases, could contribute to public health by enabling timely announcements before PM2.5 levels rise. To predict PM2.5 concentration in the Sanandaj air, hybrid models consisting of an ensemble empirical mode decomposition and general regression neural network (EEMD-GRNN), an Adaptive Neuro-Fuzzy Inference System (ANFIS), principal component regression (PCR), and a linear model, multiple linear regression (MLR), were used. In these models the suspended PM2.5 particle data were the dependent variable, and air quality data including PM2.5, PM10, SO2, NO2, CO, and O3 and meteorological data including average minimum temperature (Min T), average maximum temperature (Max T), average atmospheric pressure (AP), daily total precipitation (TP), daily relative humidity (RH), and daily wind speed (WS) for the year 2014 in Sanandaj were the independent variables. Among the models used, the EEMD-GRNN model, with R2 = 0.90, root mean square error (RMSE) = 4.9218, and mean absolute error (MAE) = 3.4644 in the training phase and R2 = 0.79, RMSE = 5.0324, and MAE = 3.2565 in the testing phase, performed best in predicting this phenomenon. It can be concluded that hybrid models predict PM2.5 concentration more accurately than the linear model.
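
    A small sketch of the MLR baseline against which the hybrid models are compared is shown below. The feature columns mirror some of the predictors listed in the abstract, but the data are synthetic placeholders rather than the Sanandaj measurements, and the train/test split and error metrics are illustrative.

```python
# A hedged sketch of the MLR baseline described above (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
n = 365
X = np.column_stack([
    rng.normal(40, 15, n),    # PM10
    rng.normal(10, 3, n),     # SO2
    rng.normal(20, 5, n),     # NO2
    rng.normal(25, 8, n),     # Max T
    rng.normal(60, 15, n),    # RH
    rng.normal(2.5, 1, n),    # WS
])
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 5] + rng.normal(0, 5, n)  # PM2.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
pred = mlr.predict(X_te)
print("R2   =", round(r2_score(y_te, pred), 2))
print("RMSE =", round(mean_squared_error(y_te, pred) ** 0.5, 2))
print("MAE  =", round(mean_absolute_error(y_te, pred), 2))
```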

  9. Secure Logistic Regression Based on Homomorphic Encryption: Design and Evaluation

    PubMed Central

    Song, Yongsoo; Wang, Shuang; Xia, Yuhou; Jiang, Xiaoqian

    2018-01-01

    Background Learning a model without accessing raw data has been an intriguing idea to security and machine learning researchers for years. In an ideal setting, we want to encrypt sensitive data to store them on a commercial cloud and run certain analyses without ever decrypting the data to preserve privacy. Homomorphic encryption is a promising technique for secure data outsourcing, but supporting real-world machine learning tasks with it remains very challenging. Existing frameworks can only handle simplified cases with low-degree polynomials such as the linear means classifier and linear discriminant analysis. Objective The goal of this study is to provide practical support for mainstream learning models (eg, logistic regression). Methods We adapted a novel homomorphic encryption scheme optimized for real-number computation. We devised (1) the least squares approximation of the logistic function for accuracy and efficiency (ie, reduced computation cost) and (2) new packing and parallelization techniques. Results Using real-world datasets, we evaluated the performance of our model and demonstrated its feasibility in speed and memory consumption. For example, it took approximately 116 minutes to obtain the training model from the homomorphically encrypted Edinburgh dataset. In addition, it gives fairly accurate predictions on the testing dataset. Conclusions We present the first homomorphically encrypted logistic regression outsourcing model based on the critical observation that the precision loss of classification models is sufficiently small so that the decision plane remains essentially unchanged. PMID:29666041
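
    The core numerical idea, replacing the logistic (sigmoid) function with a low-degree polynomial fitted by least squares so that an encrypted evaluation only needs additions and multiplications, can be illustrated directly. The degree and fitting interval below are arbitrary choices for illustration, not the values used in the paper.

```python
# A minimal sketch of a least-squares polynomial approximation of the
# logistic function (degree and interval are illustrative choices).
import numpy as np

x = np.linspace(-8, 8, 1000)
sigmoid = 1.0 / (1.0 + np.exp(-x))

# fit a low-degree polynomial, the kind of expression a homomorphic
# scheme can evaluate, by ordinary least squares
coeffs = np.polynomial.polynomial.polyfit(x, sigmoid, deg=3)
approx = np.polynomial.polynomial.polyval(x, coeffs)

print("coefficients (c0..c3):", np.round(coeffs, 4))
print("max abs error on [-8, 8]:", round(float(np.max(np.abs(approx - sigmoid))), 4))
```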

  10. Lead bioaccumulation in Texas Harvester Ants (Pogonomyrmex barbatus) and toxicological implications for Texas Horned Lizard (Phrynosoma cornutum) populations of Bexar County, Texas.

    PubMed

    Burgess, Robert; Davis, Robert; Edwards, Deborah

    2018-03-01

    Uptake of lead from soil was examined in order to establish a site-specific ecological protective concentration level for the Texas Horned Lizard (Phrynosoma cornutum) at the Former Humble Refinery in San Antonio, Texas. Soils, harvester ants, and rinse water from the ants were analyzed at 11 Texas Harvester Ant (Pogonomyrmex barbatus) mounds. Soil concentrations at the harvester ant mounds ranged from 13 to 7474 mg/kg of lead dry weight. Ant tissue sample concentrations ranged from < 0.82 to 21.17 mg/kg dry weight. Rinse water concentrations were below the reporting limit in the majority of samples. Two uptake models were developed for the ants. A bioaccumulation factor model did not fit the data, as there was a strong decay in the calculated value with rising soil concentrations. A univariate natural log-transformed regression model produced a significant regression (p < .0001) with a high coefficient of determination (0.82), indicating a good fit to the data. Other diagnostic regression statistics indicated that the regression model could be reliably used to predict concentrations of lead in harvester ants from soil concentrations. Estimates of protective levels for P. cornutum were developed using published sub-chronic toxicological findings for the Western Fence Lizard that were allometrically adapted and compared with the oral dose equation, which estimated lead consumed through ants plus incidental soil ingestion. The no observed adverse effect level toxicological limit for P. cornutum was estimated to be 5500 mg/kg.
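
    The univariate natural-log-transformed uptake regression can be sketched as below. The soil and tissue concentrations are simulated over the same range reported in the abstract but are not the site data, and the back-transformed prediction is only an illustration of how such a model would be applied.

```python
# Illustrative sketch of a natural-log-transformed uptake regression
# (synthetic concentrations, not the site measurements).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
soil = np.exp(rng.uniform(np.log(13), np.log(7474), 11))           # mg/kg dry weight
tissue = np.exp(-1.5 + 0.6 * np.log(soil) + rng.normal(0, 0.3, 11))

res = stats.linregress(np.log(soil), np.log(tissue))
print(f"ln(tissue) = {res.intercept:.2f} + {res.slope:.2f} ln(soil)")
print(f"R^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.2e}")

# back-transformed prediction for a soil concentration of interest
soil_new = 500.0
print("predicted tissue Pb (mg/kg):",
      round(float(np.exp(res.intercept + res.slope * np.log(soil_new))), 2))
```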

  11. Modeling urban coastal flood severity from crowd-sourced flood reports using Poisson regression and Random Forest

    NASA Astrophysics Data System (ADS)

    Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.

    2018-04-01

    Sea level rise has already caused more frequent and severe coastal flooding, and this trend will likely continue. Flood prediction is an essential part of a coastal city's capacity to adapt to and mitigate this growing problem. Complex coastal urban hydrological systems, however, do not always lend themselves easily to physically-based flood prediction approaches. This paper presents a method for using a data-driven approach to estimate flood severity in an urban coastal setting using crowd-sourced data, a non-traditional but growing data source, along with environmental observation data. Two data-driven models, Poisson regression and Random Forest regression, are trained to predict the number of flood reports per storm event as a proxy for flood severity, given extensive environmental data (i.e., rainfall, tide, groundwater table level, and wind conditions) as input. The method is demonstrated using data from Norfolk, Virginia USA from September 2010 to October 2016. Quality-controlled, crowd-sourced street flooding reports ranging from 1 to 159 per storm event for 45 storm events are used to train and evaluate the models. Random Forest performed better than Poisson regression at predicting the number of flood reports and had a lower false negative rate. From the Random Forest model, total cumulative rainfall was by far the most dominant input variable in predicting flood severity, followed by low tide and lower low tide. These methods serve as a first step toward using data-driven methods for spatially and temporally detailed coastal urban flood prediction.
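
    A short sketch of the model comparison described above is given below. The storm-event features and report counts are simulated, the two models are generic implementations (a statsmodels Poisson GLM and a scikit-learn Random Forest regressor), and the error metric is only an example of how the comparison could be scored.

```python
# A hedged sketch comparing Poisson regression and Random Forest regression
# on synthetic storm-event data (feature names mirror those listed above).
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 45                                             # storm events
rain = rng.gamma(2.0, 20.0, n)                     # cumulative rainfall (mm)
tide = rng.normal(1.0, 0.3, n)                     # low tide level (m)
wind = rng.normal(5.0, 2.0, n)                     # wind speed (m/s)
X = np.column_stack([rain, tide, wind])
lam = np.exp(0.02 * rain + 1.0 * tide - 1.0)       # expected report count
y = rng.poisson(lam)                               # flood reports per event

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

poisson = sm.GLM(y_tr, sm.add_constant(X_tr), family=sm.families.Poisson()).fit()
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

mae = lambda p: float(np.mean(np.abs(p - y_te)))
print("Poisson regression MAE:", round(mae(poisson.predict(sm.add_constant(X_te))), 2))
print("Random Forest MAE:     ", round(mae(rf.predict(X_te)), 2))
print("RF importances (rain, tide, wind):", np.round(rf.feature_importances_, 2))
```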

  12. Ozone levels in the Empty Quarter of Saudi Arabia--application of adaptive neuro-fuzzy model.

    PubMed

    Rahman, Syed Masiur; Khondaker, A N; Khan, Rouf Ahmad

    2013-05-01

    In arid regions, primary pollutants may contribute to the increase of ozone levels and cause negative effects on biotic health. This study investigates the use of an adaptive neuro-fuzzy inference system (ANFIS) for ozone prediction. The initial fuzzy inference system is developed using fuzzy C-means (FCM) and subtractive clustering (SC) algorithms, which determine the important rules, increase the generalization capability of the fuzzy inference system, reduce computational needs, and ensure speedy model development. The study area is located in the Empty Quarter of Saudi Arabia, which is considered an area of huge potential for oil and gas field development. The developed clustering algorithm-based ANFIS model used meteorological data and derived meteorological data, along with NO and NO₂ concentrations and their transformations, as inputs. The root mean square error and Willmott's index of agreement of the FCM- and SC-based ANFIS models are 3.5 ppbv and 0.99, and 8.9 ppbv and 0.95, respectively. Based on the analysis of the performance measures and regression error characteristic curves, it is concluded that the FCM-based ANFIS model outperforms the SC-based ANFIS model.

  13. Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines

    NASA Astrophysics Data System (ADS)

    Kuter, S.; Akyürek, Z.; Weber, G.-W.

    2016-10-01

    Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely-sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between the temporal and spatial resolution of satellite imagery. Soft or subpixel classification of low or moderate resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied on Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine-learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on the state-of-the-art nonparametric spline regression method, namely, Multivariate Adaptive Regression Splines (MARS). MARS models were trained by using MODIS top-of-atmosphere reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than the ANN model.
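
    A compact sketch of a MARS regression of snow-cover fraction on band reflectances is shown below. It assumes the third-party py-earth package is available, and the seven "band" columns and the target fraction are synthetic stand-ins for the MODIS reflectances and the Landsat-derived reference fractions, not the study's data.

```python
# A minimal sketch of MARS-based fraction estimation using the third-party
# py-earth package (availability of the package is an assumption).
import numpy as np
from pyearth import Earth

rng = np.random.default_rng(5)
n = 2000
bands = rng.uniform(0.0, 1.0, size=(n, 7))            # stand-in reflectances, bands 1-7
ndsi = (bands[:, 3] - bands[:, 5]) / (bands[:, 3] + bands[:, 5] + 1e-6)
snow_fraction = np.clip(0.5 + 0.8 * ndsi + rng.normal(0, 0.05, n), 0.0, 1.0)

mars = Earth(max_degree=2)                             # allow pairwise interactions
mars.fit(bands, snow_fraction)
pred = np.clip(mars.predict(bands), 0.0, 1.0)
print("RMSE:", round(float(np.sqrt(np.mean((pred - snow_fraction) ** 2))), 3))
```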

  14. Weighted functional linear regression models for gene-based association analysis.

    PubMed

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10^-6), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.

  15. The use of machine learning for the identification of peripheral artery disease and future mortality risk.

    PubMed

    Ross, Elsie Gyang; Shah, Nigam H; Dalman, Ronald L; Nead, Kevin T; Cooke, John P; Leeper, Nicholas J

    2016-11-01

    A key aspect of the precision medicine effort is the development of informatics tools that can analyze and interpret "big data" sets in an automated and adaptive fashion while providing accurate and actionable clinical information. The aims of this study were to develop machine learning algorithms for the identification of disease and the prognostication of mortality risk and to determine whether such models perform better than classical statistical analyses. Focusing on peripheral artery disease (PAD), patient data were derived from a prospective, observational study of 1755 patients who presented for elective coronary angiography. We employed multiple supervised machine learning algorithms and used diverse clinical, demographic, imaging, and genomic information in a hypothesis-free manner to build models that could identify patients with PAD and predict future mortality. Comparison was made to standard stepwise linear regression models. Our machine-learned models outperformed stepwise logistic regression models both for the identification of patients with PAD (area under the curve, 0.87 vs 0.76, respectively; P = .03) and for the prediction of future mortality (area under the curve, 0.76 vs 0.65, respectively; P = .10). Both machine-learned models were markedly better calibrated than the stepwise logistic regression models, thus providing more accurate disease and mortality risk estimates. Machine learning approaches can produce more accurate disease classification and prediction models. These tools may prove clinically useful for the automated identification of patients with highly morbid diseases for which aggressive risk factor management can improve outcomes. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  16. Bayesian nonparametric regression with varying residual density

    PubMed Central

    Pati, Debdeep; Dunson, David B.

    2013-01-01

    We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053

  17. Hazard Regression Models of Early Mortality in Trauma Centers

    PubMed Central

    Clark, David E; Qian, Jing; Winchell, Robert J; Betensky, Rebecca A

    2013-01-01

    Background Factors affecting early hospital deaths after trauma may be different from factors affecting later hospital deaths, and the distribution of short and long prehospital times may vary among hospitals. Hazard regression (HR) models may therefore be more useful than logistic regression (LR) models for analysis of trauma mortality, especially when treatment effects at different time points are of interest. Study Design We obtained data for trauma center patients from the 2008–9 National Trauma Data Bank (NTDB). Cases were included if they had complete data for prehospital times, hospital times, survival outcome, age, vital signs, and severity scores. Cases were excluded if pulseless on admission, transferred in or out, or ISS<9. Using covariates proposed for the Trauma Quality Improvement Program and an indicator for each hospital, we compared LR models predicting survival at 8 hours after injury to HR models with survival censored at 8 hours. HR models were then modified to allow time-varying hospital effects. Results 85,327 patients in 161 hospitals met inclusion criteria. Crude hazards peaked initially, then steadily declined. When hazard ratios were assumed constant in HR models, they were similar to odds ratios in LR models associating increased mortality with increased age, firearm mechanism, increased severity, more deranged physiology, and estimated hospital-specific effects. However, when hospital effects were allowed to vary by time, HR models demonstrated that hospital outliers were not the same at different times after injury. Conclusions HR models with time-varying hazard ratios reveal inconsistencies in treatment effects, data quality, and/or timing of early death among trauma centers. HR models are generally more flexible than LR models, can be adapted for censored data, and potentially offer a better tool for analysis of factors affecting early death after injury. PMID:23036828
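
    The contrast the abstract draws, a logistic model of survival status at a fixed 8-hour horizon versus a hazard model that uses the timing of death censored at 8 hours, can be sketched as below. The covariates, the synthetic data, and the use of the third-party lifelines package for the Cox model are illustrative assumptions, not the NTDB analysis.

```python
# A hedged sketch contrasting logistic regression at a fixed horizon with a
# Cox hazard regression censored at that horizon (synthetic trauma-like data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 2000
age = rng.uniform(16, 90, n)
severity = rng.uniform(9, 75, n)                       # e.g., an injury severity score
rate = np.exp(-6 + 0.02 * age + 0.04 * severity)       # hazard of death per hour
time_to_death = rng.exponential(1.0 / rate)
death = (time_to_death <= 8.0).astype(int)             # death within 8 hours
time = np.minimum(time_to_death, 8.0)                  # censor survivors at 8 hours

df = pd.DataFrame({"age": age, "severity": severity, "time": time, "death": death})

# logistic regression: survival status at a fixed 8-hour horizon
logit = LogisticRegression().fit(df[["age", "severity"]], df["death"])
print("logistic coefficients (age, severity):", np.round(logit.coef_[0], 3))

# hazard (Cox) regression: uses the timing of death, censored at 8 hours
cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.summary[["coef", "exp(coef)"]].round(3))
```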

  18. Monthly streamflow forecasting in the Rhine basin

    NASA Astrophysics Data System (ADS)

    Schick, Simon; Rössler, Ole; Weingartner, Rolf

    2017-04-01

    Forecasting seasonal streamflow of the Rhine river is of societal relevance as the Rhine is an important waterway and water resource in Western Europe. The present study investigates the predictability of monthly mean streamflow at lead times of zero, one, and two months, with a focus on the potential benefits of integrating seasonal climate predictions. Specifically, we use seasonal predictions of precipitation and surface air temperature released by the European Centre for Medium-Range Weather Forecasts (ECMWF) for a regression analysis. In order to disentangle forecast uncertainty, the 'Reverse Ensemble Streamflow Prediction' framework is adapted here to the context of regression: by using appropriate subsets of predictors the regression model is constrained to either the initial conditions, the meteorological forcing, or both. An operational application is mimicked by equipping the model with the seasonal climate predictions provided by ECMWF. Finally, to mitigate the spatial aggregation of the meteorological fields the model is also applied at the subcatchment scale, and the resulting predictions are combined afterwards. The hindcast experiment is carried out for the period 1982-2011 in cross-validation mode at two gauging stations, namely the Rhine at Lobith and Basel. The results show that monthly forecasts are skillful with respect to climatology only at zero lead time. In addition, at zero lead time the integration of seasonal climate predictions decreases the mean absolute error by 5 to 10 percent compared to forecasts that are solely based on initial conditions. This reduction is most likely induced by the seasonal prediction of precipitation and not air temperature. The study is completed by benchmarking the regression model with runoff simulations from ECMWF's seasonal forecast system. By simply using basin averages followed by a linear bias correction, these runoff simulations translate well to monthly streamflow. Though the regression model is only slightly outperformed, we argue that runoff out of the land surface component of seasonal climate forecasting systems is an interesting option when it comes to seasonal streamflow forecasting in large river basins.

  19. Investigating the adaptive model of thermal comfort for naturally ventilated school buildings in Taiwan.

    PubMed

    Hwang, Ruey-Lung; Lin, Tzu-Ping; Chen, Chen-Peng; Kuo, Nai-Jung

    2009-03-01

    Divergence in the acceptability to people in different regions of naturally ventilated thermal environments raises a concern over the extent to which the ASHRAE Standard 55 may be applied as a universal criterion of thermal comfort. In this study, the ASHRAE 55 adaptive model of thermal comfort was investigated for its applicability to a hot and humid climate through a long-term field survey performed in central Taiwan among local students attending 14 elementary and high schools during September to January. Adaptive behaviors, thermal neutrality, and thermal comfort zones are explored. A probit analysis of thermal acceptability responses from students was performed in place of the conventional linear regression of thermal sensation votes against operative temperature to investigate the limits of comfort zones for 90% and 80% acceptability; the corresponding comfort zones were found to occur at 20.1-28.4 degrees C and 17.6-30.0 degrees C, respectively. In comparison with the yearly comfort zones recommended by the adaptive model for naturally ventilated spaces in the ASHRAE Standard 55, those observed in this study differ in the lower limit for 80% acceptability, with the observed level being 1.7 degrees C lower than the ASHRAE-recommended value. These findings can be generalized to the population of school children, thus providing information that can supplement ASHRAE Standard 55 in evaluating the thermal performance of naturally ventilated school buildings, particularly in hot-humid areas such as Taiwan.

  20. Investigating the adaptive model of thermal comfort for naturally ventilated school buildings in Taiwan

    NASA Astrophysics Data System (ADS)

    Hwang, Ruey-Lung; Lin, Tzu-Ping; Chen, Chen-Peng; Kuo, Nai-Jung

    2009-03-01

    Divergence in the acceptability to people in different regions of naturally ventilated thermal environments raises a concern over the extent to which the ASHRAE Standard 55 may be applied as a universal criterion of thermal comfort. In this study, the ASHRAE 55 adaptive model of thermal comfort was investigated for its applicability to a hot and humid climate through a long-term field survey performed in central Taiwan among local students attending 14 elementary and high schools during September to January. Adaptive behaviors, thermal neutrality, and thermal comfort zones are explored. A probit analysis of thermal acceptability responses from students was performed in place of the conventional linear regression of thermal sensation votes against operative temperature to investigate the limits of comfort zones for 90% and 80% acceptability; the corresponding comfort zones were found to occur at 20.1-28.4°C and 17.6-30.0°C, respectively. In comparison with the yearly comfort zones recommended by the adaptive model for naturally ventilated spaces in the ASHRAE Standard 55, those observed in this study differ in the lower limit for 80% acceptability, with the observed level being 1.7°C lower than the ASHRAE-recommended value. These findings can be generalized to the population of school children, thus providing information that can supplement ASHRAE Standard 55 in evaluating the thermal performance of naturally ventilated school buildings, particularly in hot-humid areas such as Taiwan.

  1. Regression rate study of porous axial-injection, endburning hybrid fuel grains

    NASA Astrophysics Data System (ADS)

    Hitt, Matthew A.

    This experimental and theoretical work examines the effects of gaseous oxidizer flow rates and pressure on the regression rates of porous fuels for hybrid rocket applications. Testing was conducted using polyethylene as the porous fuel and both gaseous oxygen and nitrous oxide as the oxidizer. Nominal test articles were tested using 200, 100, 50, and 15 micron fuel pore sizes. Pressures tested ranged from atmospheric to 1160 kPa for the gaseous oxygen tests and from 207 kPa to 1054 kPa for the nitrous oxide tests, and oxidizer injection velocities ranged from 35 m/s to 80 m/s for the gaseous oxygen tests and from 7.5 m/s to 16.8 m/s for the nitrous oxide tests. Regression rates were determined using pretest and posttest length measurements of the solid fuel. Experimental results demonstrated that the regression rate of the porous axial-injection, end-burning hybrid was a function of the chamber pressure, as opposed to the oxidizer mass flux typical in conventional hybrids. Regression rates ranged from approximately 0.75 mm/s at atmospheric pressure to 8.89 mm/s at 1160 kPa for the gaseous oxygen tests and 0.21 mm/s at 207 kPa to 1.44 mm/s at 1054 kPa for the nitrous oxide tests. The analytical model was developed based on a standard ablative model modified to include oxidizer flow through the grain. The heat transfer from the flame was primarily modeled using an empirically determined flame coefficient that included all heat transfer mechanisms in one term. An exploratory flame model based on the Granular Diffusion Flame model used for solid rocket motors was also adapted for comparison with the empirical flame coefficient. This model was then evaluated quantitatively using the experimental results of the gaseous oxygen tests as well as qualitatively using the experimental results of the nitrous oxide tests. The model showed agreement with the experimental results indicating it has potential for giving insight into the flame structure in this motor configuration. Results from the model suggested that both kinetic and diffusion processes could be relevant to the combustion depending on the chamber pressure.

  2. Wind adaptive modeling of transmission lines using minimum description length

    NASA Astrophysics Data System (ADS)

    Jaw, Yoonseok; Sohn, Gunho

    2017-03-01

    Transmission lines are moving objects whose positions are dynamically affected by wind-induced conductor motion while they are acquired by airborne laser scanners. This wind effect results in a noisy distribution of laser points, which often hinders accurate representation of transmission lines and thus leads to various types of modeling errors. This paper presents a new method for complete 3D transmission line model reconstruction in the framework of inner- and across-span analysis. A highlight is that the proposed method can indirectly estimate, through a linear regression analysis, the noise scales that corrupt the quality of laser observations under different wind speeds. In the inner-span analysis, individual transmission line models of each span are evaluated based on Minimum Description Length theory, and erroneous transmission line segments are subsequently replaced by precise transmission line models with a wind-adaptive noise scale estimated. In the subsequent across-span analysis, detecting the precise start and end positions of the transmission line models, known as the Points of Attachment, is the key issue for correcting partial modeling errors as well as refining transmission line models. Finally, geometric and topological completion of the transmission line models is achieved over the entire network. A performance evaluation was conducted over 138.5 km of corridor data. In a modest wind condition, the results demonstrate that the proposed method can raise the success rate of producing complete transmission line models from an average of 48% for non-wind-adaptive initial models to between 85% and 99.5%, with root-mean-square positional accuracies of 9.55 cm for transmission line models and 28 cm for Points of Attachment.

  3. Adaptive local linear regression with application to printer color management.

    PubMed

    Gupta, Maya R; Garcia, Eric K; Chin, Erika

    2008-06-01

    Local learning methods, such as local linear regression and nearest neighbor classifiers, base estimates on nearby training samples, or neighbors. Usually, the number of neighbors used in estimation is fixed to be a global "optimal" value, chosen by cross validation. This paper proposes adapting the number of neighbors used for estimation to the local geometry of the data, without the need for cross validation. The term enclosing neighborhood is introduced to describe a set of neighbors whose convex hull contains the test point when possible. It is proven that enclosing neighborhoods yield bounded estimation variance under some assumptions. Three such enclosing neighborhood definitions are presented: natural neighbors, natural neighbors inclusive, and enclosing k-NN. The effectiveness of these neighborhood definitions with local linear regression is tested for estimating lookup tables for color management. Significant improvements in error metrics are shown, indicating that enclosing neighborhoods may be a promising adaptive neighborhood definition for other local learning tasks as well, depending on the density of training samples.
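
    The underlying estimator, a linear fit to the training samples nearest the query point, can be sketched as follows. The sketch uses a plain k-nearest-neighbor neighborhood rather than the enclosing neighborhoods defined in the paper, and the data and the fixed k are illustrative.

```python
# A minimal sketch of local linear regression with a k-NN neighborhood
# (plain k-NN, not the paper's enclosing neighborhoods); data are synthetic.
import numpy as np

def local_linear_predict(X, y, x_query, k=15):
    """Fit a least-squares plane to the k nearest neighbors of x_query."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]                        # the k nearest training samples
    A = np.column_stack([np.ones(k), X[idx]])      # local design matrix
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef[0] + coef[1:] @ x_query

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(500, 3))               # e.g., device RGB values
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

x_new = np.array([0.4, 0.6, 0.2])
print("local linear estimate:  ", round(float(local_linear_predict(X, y, x_new)), 3))
print("true value (noise-free):", round(float(np.sin(3 * 0.4) + 0.6 ** 2), 3))
```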

  4. Tree-ring growth of Scots pine, Common beech and Pedunculate oak under future climate in northeastern Germany

    NASA Astrophysics Data System (ADS)

    Jurasinski, Gerald; Scharnweber, Tobias; Schröder, Christian; Lennartz, Bernd; Bauwe, Andreas

    2017-04-01

    Tree growth depends, among other factors, largely on the prevailing climatic conditions. Therefore, changes in tree growth patterns are to be expected under climate change. Here, we analyze the tree-ring growth response of three major European tree species to projected future climate across a climatic (mostly precipitation) gradient in northeastern Germany. We used monthly data for temperature, precipitation, and the standardized precipitation evapotranspiration index (SPEI) over multiple time scales (1, 3, 6, 12, and 24 months) to construct models of tree-ring growth for Scots pine (Pinus sylvestris L.) at three pure stands, and for Common beech (Fagus sylvatica L.) and Pedunculate oak (Quercus robur L.) at three mature mixed stands. The regression models were derived using a two-step approach based on partial least squares regression (PLSR) to extract potentially well-explaining variables, followed by ordinary least squares regression (OLSR) to consolidate the models to the least number of variables while retaining high explanatory power. The stability of the models was tested with a comprehensive calibration-verification scheme. All models were successfully verified, with R2s ranging from 0.21 for the western pine stand to 0.62 for the beech stand in the east. For growth prediction, climate data forecasted until 2100 by the regional climate model WETTREG2010 based on the A1B Intergovernmental Panel on Climate Change (IPCC) emission scenario was used. For beech and oak, growth rates will likely decrease until the end of the 21st century. For pine, modeled growth trends vary and range from a slight growth increase to a weak decrease in growth rates depending on the position along the climatic gradient. The climatic gradient across the study area will possibly affect the future growth of oak, with larger growth reductions towards the drier east. For beech, site-specific adaptations seem to override the influence of the climatic gradient. We conclude that in northeastern Germany Scots pine has great potential to remain resilient to projected climate change without any greater impairment, whereas Common beech and Pedunculate oak will likely show reduced growth under the expected warmer and drier climate conditions. The results call for an adaptation of forest management to mitigate the negative effects of climate change for beech and oak in the region.
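
    The two-step PLSR-then-OLSR idea, use PLSR weights to screen a large set of candidate climate variables, then fit a small OLS model on the retained ones, is sketched below. The predictor matrix, the number of retained variables, and the selection rule are illustrative assumptions, not the study's procedure in detail.

```python
# A hedged sketch of a two-step PLSR-then-OLS workflow (synthetic predictors).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
import statsmodels.api as sm

rng = np.random.default_rng(9)
n, p = 60, 20                                   # years x candidate climate variables
X = rng.normal(size=(n, p))
ring_width = 0.6 * X[:, 2] - 0.4 * X[:, 7] + rng.normal(0, 0.5, n)

# step 1: PLSR to rank candidate predictors by their weight on component 1
pls = PLSRegression(n_components=2).fit(X, ring_width)
weights = np.abs(pls.x_weights_[:, 0])
keep = np.argsort(weights)[::-1][:3]            # retain only a few predictors
print("retained predictor indices:", keep)

# step 2: consolidate into a small OLS model on the retained predictors
ols = sm.OLS(ring_width, sm.add_constant(X[:, keep])).fit()
print("OLS R^2:", round(ols.rsquared, 2))
```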

  5. A Novel approach to monitor chlorophyll-a concentration using an adaptive model from MODIS data at 250 metres spatial resolution

    NASA Astrophysics Data System (ADS)

    El Alem, A.; Chokmani, K.; Laurion, I.; El Adlouni, S.

    2013-12-01

    The occurrence and extent of Harmful Algal Blooms (HABs) have increased in inland water bodies around the world. The appearance of these blooms reflects the advanced state of eutrophication of several aquatic systems caused by urban, agricultural, and industrial development. Algal blooms, especially those of cyanobacterial origin, are capable of producing and releasing toxins, threatening human and animal health, drinking water quality, and recreational water bodies. Conventional monitoring networks, based on infrequent sampling at a few fixed monitoring stations, cannot provide the information needed because HABs are spatially and temporally heterogeneous. Remote sensing represents an interesting alternative for providing the required spatial and temporal coverage. The usefulness of airborne and satellite remote sensing data for detecting HABs was demonstrated three decades ago, and since then several empirical and semi-empirical models using satellite imagery have been developed to estimate chlorophyll-a concentration [Chl-a] as a proxy for detecting bloom proliferation. However, most of those models present weaknesses that are generally linked to the range of [Chl-a] to be estimated. Indeed, models originally calibrated for high [Chl-a] fail to estimate low concentrations and vice versa. In this study, an adaptive model to estimate [Chl-a] over a wide range of concentrations is developed for optically complex inland water bodies, based on a combination of water spectral response classification and three semi-empirical algorithms developed using multivariate regression. Three distinct water types (low, medium, and high [Chl-a]) are first identified using the Classification and Regression Tree (CART) method applied to remote sensing reflectance over a dataset of 44 [Chl-a] samples collected from lakes across the province of Quebec. Based on the water classification, a specific multivariate model for each water type is developed using the same dataset and MODIS data at 250-m spatial resolution. With this pre-clustering of inland water bodies, the cross-validation determination coefficients were 0.99, 0.98, and 0.95 and the relative RMSE values were 0.5%, 8%, and 17% for high, medium, and low [Chl-a], respectively. In addition, the adaptive model reached a global success rate of 92% on an independent, semi-qualitative set of [Chl-a] samples collected over more than twenty inland water bodies across the province of Quebec in 2009 and 2010.
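
    The adaptive scheme, classify the water type first, then apply a type-specific regression, can be sketched as below. The reflectance bands, the way water types are defined, and the per-type linear models are synthetic simplifications of the study's CART classification and semi-empirical algorithms.

```python
# A minimal sketch of the adaptive scheme: a CART classifier assigns a water
# type, and a type-specific regression then estimates [Chl-a] (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 300
red, nir = rng.uniform(0.01, 0.2, n), rng.uniform(0.01, 0.2, n)   # MODIS-like bands
chla = np.clip(10 * (nir / (red + 1e-6)) + rng.normal(0, 2, n), 0.5, None)  # ug/L
X = np.column_stack([red, nir, nir / (red + 1e-6)])

# water types from [Chl-a] terciles stand in for the CART-derived classes
water_type = np.digitize(chla, np.quantile(chla, [1 / 3, 2 / 3]))
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, water_type)

# one regression model per water type
models = {t: LinearRegression().fit(X[water_type == t], chla[water_type == t])
          for t in np.unique(water_type)}

def estimate_chla(x):
    t = int(clf.predict(x.reshape(1, -1))[0])      # classify the water type first
    return float(models[t].predict(x.reshape(1, -1))[0])

print("estimated [Chl-a]:", round(estimate_chla(X[0]), 1), "vs actual", round(chla[0], 1))
```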

  6. Adaptive Control of Exoskeleton Robots for Periodic Assistive Behaviours Based on EMG Feedback Minimisation.

    PubMed

    Peternel, Luka; Noda, Tomoyuki; Petrič, Tadej; Ude, Aleš; Morimoto, Jun; Babič, Jan

    2016-01-01

    In this paper we propose an exoskeleton control method for adaptive learning of assistive joint torque profiles in periodic tasks. We use human muscle activity as feedback to adapt the assistive joint torque behaviour in a way that the muscle activity is minimised. The user can then relax while the exoskeleton takes over the task execution. If the task is altered and the existing assistive behaviour becomes inadequate, the exoskeleton gradually adapts to the new task execution so that the increased muscle activity caused by the new desired task can be reduced. The advantage of the proposed method is that it does not require biomechanical or dynamical models. Our proposed learning system uses Dynamical Movement Primitives (DMPs) as a trajectory generator and parameters of DMPs are modulated using Locally Weighted Regression. Then, the learning system is combined with adaptive oscillators that determine the phase and frequency of motion according to measured Electromyography (EMG) signals. We tested the method with real robot experiments where subjects wearing an elbow exoskeleton had to move an object of an unknown mass according to a predefined reference motion. We further evaluated the proposed approach on a whole-arm exoskeleton to show that it is able to adaptively derive assistive torques even for multiple-joint motion.

  7. Adaptive Control of Exoskeleton Robots for Periodic Assistive Behaviours Based on EMG Feedback Minimisation

    PubMed Central

    Peternel, Luka; Noda, Tomoyuki; Petrič, Tadej; Ude, Aleš; Morimoto, Jun; Babič, Jan

    2016-01-01

    In this paper we propose an exoskeleton control method for adaptive learning of assistive joint torque profiles in periodic tasks. We use human muscle activity as feedback to adapt the assistive joint torque behaviour in a way that the muscle activity is minimised. The user can then relax while the exoskeleton takes over the task execution. If the task is altered and the existing assistive behaviour becomes inadequate, the exoskeleton gradually adapts to the new task execution so that the increased muscle activity caused by the new desired task can be reduced. The advantage of the proposed method is that it does not require biomechanical or dynamical models. Our proposed learning system uses Dynamical Movement Primitives (DMPs) as a trajectory generator and parameters of DMPs are modulated using Locally Weighted Regression. Then, the learning system is combined with adaptive oscillators that determine the phase and frequency of motion according to measured Electromyography (EMG) signals. We tested the method with real robot experiments where subjects wearing an elbow exoskeleton had to move an object of an unknown mass according to a predefined reference motion. We further evaluated the proposed approach on a whole-arm exoskeleton to show that it is able to adaptively derive assistive torques even for multiple-joint motion. PMID:26881743

  8. Integrative eQTL analysis of tumor and host omics data in individuals with bladder cancer.

    PubMed

    Pineda, Silvia; Van Steen, Kristel; Malats, Núria

    2017-09-01

    Integrative analyses of several omics data are emerging. The data are usually generated from the same source material (i.e., tumor sample) representing one level of regulation. However, integrating different regulatory levels (i.e., blood) with those from tumor may also reveal important knowledge about the human genetic architecture. To model this multilevel structure, an integrative expression quantitative trait loci (eQTL) analysis applying two-stage regression (2SR) was proposed. This approach first regressed tumor gene expression levels on tumor markers, and the adjusted residuals from the previous model were then regressed on the germline genotypes measured in blood. Previously, we demonstrated that penalized regression in combination with a permutation-based MaxT method (Global-LASSO) is a promising tool to fix some of the challenges that high-throughput omics data analysis imposes. Here, we assessed whether Global-LASSO can also be applied when tumor and blood omics data are integrated. We further compared our strategy with two 2SR approaches, one using multiple linear regression (2SR-MLR) and the other using LASSO (2SR-LASSO). We applied the three models to integrate genomic, epigenomic, and transcriptomic data from tumor tissue with blood germline genotypes from 181 individuals with bladder cancer included in the TCGA Consortium. Global-LASSO provided a larger list of eQTLs than the 2SR methods, identified a previously reported eQTL in prostate stem cell antigen (PSCA), and provided further clues on the complexity of the APOBEC3B locus, with a minimal false-positive rate not achieved by 2SR-MLR. It also represents an important contribution for omics integrative analysis because it is easy to apply and adaptable to any type of data. © 2017 WILEY PERIODICALS, INC.
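
    The two-stage regression (2SR) idea described above lends itself to a short sketch: expression is first regressed on tumor-level markers, and the residuals are then regressed on the blood germline genotype. All data below are synthetic, and the single-SNP test is only an illustration of the second stage.

```python
# A hedged sketch of two-stage regression (2SR) for eQTL analysis (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy import stats

rng = np.random.default_rng(8)
n = 181
tumor_markers = rng.normal(size=(n, 3))          # e.g., methylation, copy number
genotype = rng.integers(0, 3, n)                 # germline SNP coded 0/1/2
expression = (tumor_markers @ np.array([0.5, -0.3, 0.2])
              + 0.4 * genotype + rng.normal(0, 1, n))

# stage 1: remove the tumor-level signal from expression
stage1 = LinearRegression().fit(tumor_markers, expression)
residuals = expression - stage1.predict(tumor_markers)

# stage 2: test the germline genotype against the stage-1 residuals
res = stats.linregress(genotype, residuals)
print(f"eQTL effect = {res.slope:.2f}, p = {res.pvalue:.2e}")
```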

  9. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.

  10. The use of modelling to evaluate and adapt strategies for animal disease control.

    PubMed

    Saegerman, C; Porter, S R; Humblet, M F

    2011-08-01

    Disease is often associated with debilitating clinical signs, disorders or production losses in animals and/or humans, leading to severe socio-economic repercussions. This explains the high priority that national health authorities and international organisations give to selecting control strategies for and the eradication of specific diseases. When a control strategy is selected and implemented, an effective method of evaluating its efficacy is through modelling. To illustrate the usefulness of models in evaluating control strategies, the authors describe several examples in detail, including three examples of classification and regression tree modelling to evaluate and improve the early detection of disease: West Nile fever in equids, bovine spongiform encephalopathy (BSE) and multifactorial diseases, such as colony collapse disorder (CCD) in the United States. Also examined are regression modelling to evaluate skin test practices and the efficacy of an awareness campaign for bovine tuberculosis (bTB); mechanistic modelling to monitor the progress of a control strategy for BSE; and statistical nationwide modelling to analyse the spatio-temporal dynamics of bTB and search for potential risk factors that could be used to target surveillance measures more effectively. In the accurate application of models, an interdisciplinary rather than a multidisciplinary approach is required, with the fewest assumptions possible.

  11. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers' Adaptations.

    PubMed

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers' technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers' technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008-2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4-66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers' practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations.
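
    The mixed-model decomposition described above, a global trend plus farm-specific deviations in level and slope, can be sketched with a random-intercept, random-slope model. The farm data below are simulated and the single vulnerability variable ("efficiency") stands in for the several indicators used in the study.

```python
# A minimal sketch of the mixed-model decomposition: common trend plus
# farm-specific deviations in level and slope (synthetic farm data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
farms, years = 19, np.arange(2008, 2014)
rows = []
for f in range(farms):
    level = rng.normal(50, 8)                    # farm-specific mean efficiency (%)
    slope = rng.normal(0.0, 0.6)                 # farm-specific trend
    for t, yr in enumerate(years):
        rows.append({"farm": f, "year": t,
                     "efficiency": level + slope * t + rng.normal(0, 1.5)})
df = pd.DataFrame(rows)

# random intercept and random slope on year, per farm
model = smf.mixedlm("efficiency ~ year", df, groups=df["farm"], re_formula="~year")
fit = model.fit()
print(fit.summary())                             # global intercept and trend
farm0 = fit.random_effects[0]                    # farm 0's deviation from the trend
print("farm 0 deviations:", farm0.round(2).to_dict())
```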

  12. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers’ Adaptations

    PubMed Central

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers’ technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers’ technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008–2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4–66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers’ practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations. PMID:28900435

  13. The Cross-Validation of the United States Air Force Submaximal Cycle Ergometer Test to Estimate Aerobic Capacity

    DTIC Science & Technology

    1994-06-01

    the University of Florida. When body composition variables were included in the regression model, such as % body fat and fat-free mass, as well as the...maximal oxygen intake. JAMA 203:201-210, 1968. 2. Sharp, J.R. The new Air Force fitness test: A field trial assessing effectiveness and safety...more muscle mass and less fat than the female counterpart. However, males and females appear to adapt equally to training (53,55). Also men have a larger

  14. Studying Gene and Gene-Environment Effects of Uncommon and Common Variants on Continuous Traits: A Marker-Set Approach Using Gene-Trait Similarity Regression

    PubMed Central

    Tzeng, Jung-Ying; Zhang, Daowen; Pongpanich, Monnat; Smith, Chris; McCarthy, Mark I.; Sale, Michèle M.; Worrall, Bradford B.; Hsu, Fang-Chi; Thomas, Duncan C.; Sullivan, Patrick F.

    2011-01-01

    Genomic association analyses of complex traits demand statistical tools that are capable of detecting small effects of common and rare variants and modeling complex interaction effects and yet are computationally feasible. In this work, we introduce a similarity-based regression method for assessing the main genetic and interaction effects of a group of markers on quantitative traits. The method uses genetic similarity to aggregate information from multiple polymorphic sites and integrates adaptive weights that depend on allele frequencies to accommodate common and uncommon variants. Collapsing information at the similarity level instead of the genotype level avoids canceling signals that have the opposite etiological effects and is applicable to any class of genetic variants without the need for dichotomizing the allele types. To assess gene-trait associations, we regress trait similarities for pairs of unrelated individuals on their genetic similarities and assess association by using a score test whose limiting distribution is derived in this work. The proposed regression framework allows for covariates, has the capacity to model both main and interaction effects, can be applied to a mixture of different polymorphism types, and is computationally efficient. These features make it an ideal tool for evaluating associations between phenotype and marker sets defined by linkage disequilibrium (LD) blocks, genes, or pathways in whole-genome analysis. PMID:21835306

  15. Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk

    PubMed Central

    Czarnota, Jenna; Gennings, Chris; Wheeler, David C

    2015-01-01

    In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
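
    A minimal sketch of the weighted quantile sum idea, under simplifying assumptions (continuous outcome, quartile scoring, a single estimation step with no bootstrap or training/validation split, distinct quartile cut points); all column and file names are hypothetical.

        # Sketch of weighted quantile sum (WQS) regression for a continuous outcome.
        # Chemicals are scored into quartiles; non-negative weights summing to one
        # are estimated jointly with the regression coefficients.
        import numpy as np
        import pandas as pd
        from scipy.optimize import minimize

        df = pd.read_csv("chemicals.csv")                    # hypothetical input file
        chems = [c for c in df.columns if c.startswith("chem_")]
        q = df[chems].apply(lambda x: pd.qcut(x, 4, labels=False)).to_numpy(float)
        y = df["outcome"].to_numpy(float)
        k = len(chems)

        def sse(params):
            b0, b1 = params[:2]
            w = np.abs(params[2:])
            w = w / w.sum()                                  # non-negative, sum to 1
            index = q @ w                                    # weighted quantile sum
            return np.sum((y - (b0 + b1 * index)) ** 2)

        start = np.concatenate(([y.mean(), 0.0], np.full(k, 1.0 / k)))
        res = minimize(sse, start, method="Nelder-Mead",
                       options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
        weights = np.abs(res.x[2:]) / np.abs(res.x[2:]).sum()
        print(dict(zip(chems, weights.round(3))), "beta1 =", round(res.x[1], 3))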

  16. Assessment of weighted quantile sum regression for modeling chemical mixtures and cancer risk.

    PubMed

    Czarnota, Jenna; Gennings, Chris; Wheeler, David C

    2015-01-01

    In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case-control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome.

  17. Feed efficiency of tropically adapted cattle when fed in winter or spring in a temperate location.

    PubMed

    Coleman, S W; Chase, C C; Phillips, W A; Riley, D G

    2018-04-16

    Earlier work has shown that young, tropically adapted cattle do not gain as rapidly as temperately adapted cattle during the winter in Oklahoma. The objective for this study was to determine if efficiency of gains was also impacted in tropically adapted cattle and if efficiency was consistent over different seasons. Over 3 yrs, 240 straightbred and crossbred steers (F1 and three-way crosses) of Angus, Brahman or Romosinuano breeding, born in Brooksville, FL, were transported to El Reno, OK, in October and fed in two phases to determine performance, individual intake and efficiency. Phase 1 (WIN) began in November after a 28 d recovery from shipping stress and Phase 2 (SS) began in March, 28 d following completion of WIN each year. The diet for WIN was a grower diet (14% CP, 1.10 Mcal NEg/kg) and that for the SS was a feedlot diet (12.8% CP; 1.33 Mcal NEg/kg). After a 14 d adjustment to diet and facilities, intake trials were conducted over a period of 56 to 162 d for determination of intake and gain for efficiency. Body weights were recorded at approximately 14 d intervals, and initial BW, median BW, and ADG were determined from individual animal regressions of BW on days on feed (DOF). Individual daily DMI was then regressed by phase on median BW and ADG, and residuals of regression were recorded as residual feed intake (RFI). Similarly, daily gain was regressed by phase on median BW and DMI, and errors of regression were recorded as residual gain (RADG). Gain to feed (G:F) was also calculated. The statistical model to evaluate ADG, DMI, and efficiency included fixed effects of dam age (3 to 4, 5, 6 to 10, and > 10 yr), harvest group (3 per year), age on test, and a nested term DT(ST x XB), where DT = proportion tropical breeding of dam (0, 0.5, or 1), ST = proportion tropical breeding of sire (1 or 0), and XB = whether the calf was straightbred or crossbred. Year of record, sire(ST x XB) and pen were random effects. Pre-weaning ADG and BW increased (P < 0.05) with level of genetic tropical influence, but during the WIN, ADG and efficiency estimated by G:F and RADG declined (P < 0.05). Tropical influence had little effect on RFI during the WIN, or on most traits during SS. In general, during SS, crossbred steers gained faster and were more efficient by G:F and RADG (P < 0.05) than straightbred steers. Simple correlations, both Pearson and Spearman, between RFI in WIN and RFI in SS were 0.51 (P < 0.001), whereas those for RADG were 0.17 (P < 0.01).
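
    A minimal sketch of the residual-based efficiency measures described above, assuming one row per steer with hypothetical columns dmi, median_bw and adg, and ignoring the by-phase stratification: RFI and RADG are simply the residuals of the two regressions.

        # Sketch: residual feed intake (RFI) as the residual of DMI regressed on
        # median body weight and ADG; residual gain (RADG) defined analogously.
        # File and column names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("steer_intake.csv")          # hypothetical: one row per steer

        fit_rfi = smf.ols("dmi ~ median_bw + adg", data=df).fit()
        df["rfi"] = fit_rfi.resid                     # residual feed intake

        fit_radg = smf.ols("adg ~ median_bw + dmi", data=df).fit()
        df["radg"] = fit_radg.resid                   # residual gain

        print(df[["rfi", "radg"]].describe())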

  18. Empirical regression models for estimating nitrogen removal in a stormwater wetland during dry and wet days.

    PubMed

    Guerra, Heidi B; Park, Kisoo; Kim, Youngchul

    2013-01-01

    Because the hydrologic quantity and quality of stormwater runoff are highly variable, and therefore require more complex models for proper prediction of treatment, relatively few, mostly site-specific models for stormwater wetlands have been developed. In this study, regression models based on extensive operational data from wastewater wetlands were adapted to a stormwater wetland receiving both base flow and storm flow from an agricultural area. The models were calibrated in Excel Solver using 15 sets of operational data gathered from random sampling during dry days. The calibrated models were then applied to 20 sets of event mean concentration data from composite sampling during 20 independent rainfall events. For dry days, the models estimated effluent concentrations of nitrogen species that were close to the measured values. However, overestimations during wet days were made for NH(3)-N and total Kjeldahl nitrogen, which resulted from higher hydraulic loading rates and influent nitrogen concentrations during storm flows. The results showed that biological nitrification and denitrification were the major nitrogen removal mechanisms during dry days. Meanwhile, during wet days, the prevailing aerobic conditions decreased the denitrification capacity of the wetland, and sedimentation of particulate organic nitrogen and particle-associated forms of nitrogen was increased.

  19. Item Response Theory Modeling of the Philadelphia Naming Test.

    PubMed

    Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D

    2015-06-01

    In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015). Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity. The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty. Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.

  20. High-order fuzzy time-series based on multi-period adaptation model for forecasting stock markets

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Teoh, Hia-Jong

    2008-02-01

    Stock investors usually make their short-term investment decisions according to recent stock information such as the latest market news, technical analysis reports, and price fluctuations. To reflect these short-term factors which impact stock price, this paper proposes a comprehensive fuzzy time-series model, which factors both linear relationships between recent periods of stock prices and fuzzy logical relationships (nonlinear relationships) mined from the time-series into the forecasting process. In the empirical analysis, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and HSI (Hang Seng Index) are employed as experimental datasets, and four recent fuzzy time-series models, Chen’s (1996), Yu’s (2005), Cheng’s (2006) and Chen’s (2007), are used as comparison models. In addition, to compare with a conventional statistical method, least squares is used to estimate auto-regressive models for the testing periods within the datasets. The performance comparisons indicate that the multi-period adaptation model proposed in this paper can effectively improve the forecasting performance of conventional fuzzy time-series models, which factor only fuzzy logical relationships into the forecasting process. In the empirical study, the traditional statistical method and the proposed model both reveal that stock price patterns in the Taiwan and Hong Kong stock markets are short-term.

  1. A two-stage predictive model to simultaneous control of trihalomethanes in water treatment plants and distribution systems: adaptability to treatment processes.

    PubMed

    Domínguez-Tello, Antonio; Arias-Borrego, Ana; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2017-10-01

    Trihalomethanes (TTHMs) and other disinfection by-products (DBPs) are formed in drinking water by the reaction of chlorine with organic precursors contained in the source water, in two consecutive and linked stages: the first starts at the treatment plant, and the second continues along the distribution system (DS) through the reaction of residual chlorine with organic precursors not removed. Following this approach, this study aimed at developing a two-stage empirical model for predicting the formation of TTHMs in the water treatment plant and subsequently their evolution along the water distribution system (WDS). The aim of the two-stage model was to improve the predictive capability for a wide range of water treatment and distribution system scenarios. The two-stage model was developed using multiple regression analysis from a database (January 2007 to July 2012) covering three different treatment processes (conventional and advanced) in the water supply system of the Aljaraque area (southwest of Spain). Then, the new model was validated using a recent database from the same water supply system (January 2011 to May 2015). The validation results indicated no significant difference between the predicted and observed values of TTHM (R2 = 0.874, analytical variance < 17%). The new model was applied to three different supply systems with different treatment processes and different characteristics. Acceptable predictions were obtained in the three distribution systems studied, proving the adaptability of the new model to the boundary conditions. Finally, the predictive capability of the new model was compared with 17 other models selected from the literature, showing satisfactory prediction results and excellent adaptability to treatment processes.

  2. A hybrid PCA-CART-MARS-based prognostic approach of the remaining useful life for aircraft engines.

    PubMed

    Sánchez Lasheras, Fernando; García Nieto, Paulino José; de Cos Juez, Francisco Javier; Mayo Bayón, Ricardo; González Suárez, Victor Manuel

    2015-03-23

    Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines.

  3. A Hybrid PCA-CART-MARS-Based Prognostic Approach of the Remaining Useful Life for Aircraft Engines

    PubMed Central

    Lasheras, Fernando Sánchez; Nieto, Paulino José García; de Cos Juez, Francisco Javier; Bayón, Ricardo Mayo; Suárez, Victor Manuel González

    2015-01-01

    Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines. PMID:25806876

  4. Secure Logistic Regression Based on Homomorphic Encryption: Design and Evaluation.

    PubMed

    Kim, Miran; Song, Yongsoo; Wang, Shuang; Xia, Yuhou; Jiang, Xiaoqian

    2018-04-17

    Learning a model without accessing raw data has been an intriguing idea to security and machine learning researchers for years. In an ideal setting, we want to encrypt sensitive data to store them on a commercial cloud and run certain analyses without ever decrypting the data to preserve privacy. The homomorphic encryption technique is a promising candidate for secure data outsourcing, but it is a very challenging task to support real-world machine learning tasks. Existing frameworks can only handle simplified cases with low-degree polynomials such as linear means classifier and linear discriminative analysis. The goal of this study is to provide practical support for mainstream learning models (e.g., logistic regression). We adapted a novel homomorphic encryption scheme optimized for real-number computation. We devised (1) the least squares approximation of the logistic function for accuracy and efficiency (i.e., reduced computation cost) and (2) new packing and parallelization techniques. Using real-world datasets, we evaluated the performance of our model and demonstrated its feasibility in speed and memory consumption. For example, it took approximately 116 minutes to obtain the training model from the homomorphically encrypted Edinburgh dataset. In addition, it gives fairly accurate predictions on the testing dataset. We present the first homomorphically encrypted logistic regression outsourcing model based on the critical observation that the precision loss of classification models is sufficiently small so that the decision plan stays still. ©Miran Kim, Yongsoo Song, Shuang Wang, Yuhou Xia, Xiaoqian Jiang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 17.04.2018.
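
    A minimal sketch of the least-squares idea mentioned above, not the authors' encrypted implementation: a low-degree polynomial is fitted to the logistic function on a bounded interval so that it could be evaluated with only additions and multiplications; the interval and degree are assumptions for illustration.

        # Sketch: least-squares polynomial approximation of the logistic (sigmoid)
        # function on [-8, 8], the kind of low-degree surrogate that
        # homomorphic-encryption schemes need.
        import numpy as np

        x = np.linspace(-8.0, 8.0, 2000)
        sigmoid = 1.0 / (1.0 + np.exp(-x))

        deg = 7                                      # assumed degree, for illustration
        coeffs = np.polyfit(x, sigmoid, deg)         # least-squares fit
        approx = np.polyval(coeffs, x)

        max_err = np.max(np.abs(approx - sigmoid))
        print("degree", deg, "max abs error on [-8, 8]:", round(max_err, 4))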

  5. Imaging-based biomarkers of cognitive performance in older adults constructed via high-dimensional pattern regression applied to MRI and PET.

    PubMed

    Wang, Ying; Goh, Joshua O; Resnick, Susan M; Davatzikos, Christos

    2013-01-01

    In this study, we used high-dimensional pattern regression methods based on structural (gray and white matter; GM and WM) and functional (positron emission tomography of regional cerebral blood flow; PET) brain data to identify cross-sectional imaging biomarkers of cognitive performance in cognitively normal older adults from the Baltimore Longitudinal Study of Aging (BLSA). We focused on specific components of executive and memory domains known to decline with aging, including manipulation, semantic retrieval, long-term memory (LTM), and short-term memory (STM). For each imaging modality, brain regions associated with each cognitive domain were generated by adaptive regional clustering. A relevance vector machine was adopted to model the nonlinear continuous relationship between brain regions and cognitive performance, with cross-validation to select the most informative brain regions (using recursive feature elimination) as imaging biomarkers and optimize model parameters. Predicted cognitive scores using our regression algorithm based on the resulting brain regions correlated well with actual performance. Also, regression models obtained using combined GM, WM, and PET imaging modalities outperformed models based on single modalities. Imaging biomarkers related to memory performance included the orbito-frontal and medial temporal cortical regions, with LTM showing stronger correlation with the temporal lobe than STM. Brain regions predicting executive performance included orbito-frontal and occipito-temporal areas. The PET modality had higher contribution to most cognitive domains except manipulation, which had higher WM contribution from the superior longitudinal fasciculus and the genu of the corpus callosum. These findings based on machine-learning methods demonstrate the importance of combining structural and functional imaging data in understanding complex cognitive mechanisms and also their potential usage as biomarkers that predict cognitive status.

  6. The importance of trait emotional intelligence and feelings in the prediction of perceived and biological stress in adolescents: hierarchical regressions and fsQCA models.

    PubMed

    Villanueva, Lidón; Montoya-Castilla, Inmaculada; Prado-Gascó, Vicente

    2017-07-01

    The purpose of this study is to analyze the combined effects of trait emotional intelligence (EI) and feelings on healthy adolescents' stress. Identifying the extent to which adolescent stress varies with trait emotional differences and the feelings of adolescents is of considerable interest in the development of intervention programs for fostering youth well-being. To attain this goal, self-reported questionnaires (perceived stress, trait EI, and positive/negative feelings) and biological measures of stress (hair cortisol concentrations, HCC) were collected from 170 adolescents (12-14 years old). Two different methodologies were conducted, which included hierarchical regression models and a fuzzy-set qualitative comparative analysis (fsQCA). The results support trait EI as a protective factor against stress in healthy adolescents and suggest that feelings reinforce this relation. However, the debate continues regarding the possibility of optimal levels of trait EI for effective and adaptive emotional management, particularly in the emotional attention and clarity dimensions and for female adolescents.

  7. Teacher Consultation and Coaching within Mental Health Practice: Classroom and Child Effects in Urban Elementary Schools

    PubMed Central

    Cappella, Elise; Hamre, Bridget K.; Kim, Ha Yeon; Henry, David B.; Frazier, Stacy L.; Atkins, Marc S.; Schoenwald, Sonja K.

    2012-01-01

    Objective To examine effects of a teacher consultation and coaching program delivered by school and community mental health professionals on change in observed classroom interactions and child functioning across one school year. Method Thirty-six classrooms within five urban elementary schools (87% Latino, 11% Black) were randomly assigned to intervention (training + consultation/coaching) and control (training only) conditions. Classroom and child outcomes (n = 364; 43% girls) were assessed in the fall and spring. Results Random effects regression models showed main effects of intervention on teacher-student relationship closeness, academic self-concept, and peer victimization. Results of multiple regression models showed levels of observed teacher emotional support in the fall moderated intervention impact on emotional support at the end of the school year. Conclusions Results suggest teacher consultation and coaching can be integrated within existing mental health activities in urban schools and impact classroom effectiveness and child adaptation across multiple domains. PMID:22428941

  8. Intimate partner violence and anxiety disorders in pregnancy: the importance of vocational training of the nursing staff in facing them

    PubMed Central

    Fonseca-Machado, Mariana de Oliveira; Monteiro, Juliana Cristina dos Santos; Haas, Vanderlei José; Abrão, Ana Cristina Freitas de Vilhena; Gomes-Sponholz, Flávia

    2015-01-01

    Objective: to identify the relationship between posttraumatic stress disorder, trait and state anxiety, and intimate partner violence during pregnancy. Method: observational, cross-sectional study developed with 358 pregnant women. The Posttraumatic Stress Disorder Checklist - Civilian Version was used, as well as the State-Trait Anxiety Inventory and an adapted version of the instrument used in the World Health Organization Multi-country Study on Women's Health and Domestic Violence. Results: after adjusting to the multiple logistic regression model, intimate partner violence, occurred during pregnancy, was associated with the indication of posttraumatic stress disorder. The adjusted multiple linear regression models showed that the victims of violence, in the current pregnancy, had higher symptom scores of trait and state anxiety than non-victims. Conclusion: recognizing the intimate partner violence as a clinically relevant and identifiable risk factor for the occurrence of anxiety disorders during pregnancy can be a first step in the prevention thereof. PMID:26487135

  9. Improving the Spatial Prediction of Soil Organic Carbon Stocks in a Complex Tropical Mountain Landscape by Methodological Specifications in Machine Learning Approaches.

    PubMed

    Ließ, Mareike; Schmidt, Johannes; Glaser, Bruno

    2016-01-01

    Tropical forests are significant carbon sinks and their soils' carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: Different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms: random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms, including the model tuning and predictor selection, were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged between 0.2 to 17.7 kg m-2, displaying a huge variability with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models' predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward compared to backward selection procedures. Choosing predictors due to their individual performance was outperformed by the two procedures that accounted for predictor interaction.
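
    A minimal sketch of the comparison design described above (five repetitions of tenfold cross-validation across several learners) using scikit-learn and stand-in data; MARS is omitted because it is not part of scikit-learn, and the predictor matrix here is synthetic rather than the 236 terrain and satellite predictors.

        # Sketch: compare several regressors via 5 repetitions of 10-fold CV.
        # X and y are synthetic stand-ins for the predictors and SOC stocks.
        import numpy as np
        from sklearn.model_selection import RepeatedKFold, cross_val_score
        from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 20))
        y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)

        models = {
            "random forest": RandomForestRegressor(n_estimators=300, random_state=0),
            "boosted trees": GradientBoostingRegressor(random_state=0),
            "neural net": make_pipeline(StandardScaler(),
                                        MLPRegressor(max_iter=2000, random_state=0)),
            "svm": make_pipeline(StandardScaler(), SVR()),
        }
        cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
        for name, model in models.items():
            rmse = -cross_val_score(model, X, y, cv=cv,
                                    scoring="neg_root_mean_squared_error")
            print(f"{name:14s} RMSE = {rmse.mean():.3f} +/- {rmse.std():.3f}")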

  10. Genomic regression of claw keratin, taste receptor and light-associated genes provides insights into biology and evolutionary origins of snakes.

    PubMed

    Emerling, Christopher A

    2017-10-01

    Regressive evolution of anatomical traits often corresponds with the regression of genomic loci underlying such characters. As such, studying patterns of gene loss can be instrumental in addressing questions of gene function, resolving conflicting results from anatomical studies, and understanding the evolutionary history of clades. The evolutionary origins of snakes involved the regression of a number of anatomical traits, including limbs, taste buds and the visual system, and by analyzing serpent genomes, I was able to test three hypotheses associated with the regression of these features. The first concerns two keratins that are putatively specific to claws. Both genes that encode these keratins are pseudogenized/deleted in snake genomes, providing additional evidence of claw-specificity. The second hypothesis is that snakes lack taste buds, an issue complicated by conflicting results in the literature. I found evidence that different snakes have lost one or more taste receptors, but all snakes examined retained at least one gustatory channel. The final hypothesis addressed is that the earliest snakes were adapted to a dim light niche. I found evidence of deleted and pseudogenized genes with light-associated functions in snakes, demonstrating a pattern of gene loss similar to other dim light-adapted clades. Molecular dating estimates suggest that dim light adaptation preceded the loss of limbs, providing some bearing on interpretations of the ecological origins of snakes. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Adapting the transtheoretical model of change to the bereavement process.

    PubMed

    Calderwood, Kimberly A

    2011-04-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals experience. This theory is unique because it addresses attitudes, intentions, and behavioral processes at each stage; it allows for a focus on a broader range of emotions than just anger and depression; it allows for the recognition of two periods of regression during the bereavement process; and it adds a maintenance stage, which other theories lack. This theory can benefit bereaved individuals directly and through the increased awareness among counselors, family, friends, employers, and society at large. This theory may also be used as a tool for bereavement programs to consider whether they are meeting clients' needs throughout the transformation change bereavement process rather than only focusing on the initial stages characterized by intense emotion.

  12. The relationship of dangerous driving with traffic offenses: A study on an adapted measure of dangerous driving.

    PubMed

    Iliescu, Dragoş; Sârbescu, Paul

    2013-03-01

    Using data from three different samples and more than 1000 participants, the current study examines differences in dangerous driving in terms of age, gender, and professional driving, as well as the relationship of dangerous driving with behavioral indicators (mileage) and criteria (traffic offenses). The study uses an adapted (Romanian) version of the Dula Dangerous Driving Index (DDDI, Dula and Ballard, 2003) and also reports data on the psychometric characteristics of this measure. Findings suggest that the Romanian version of the DDDI has sound psychometric properties. Dangerous driving is higher in males and occasional drivers, is not correlated with mileage and is significantly related to speeding as a traffic offense, both self-reported and objectively measured. The utility of predictive models including dangerous driving is not very large: logistic regression models have a significant fit to the data, but their misclassification rate (especially in terms of sensitivity) is unacceptably high. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    NASA Astrophysics Data System (ADS)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    The state-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Unlike other studies in which SOH is commonly estimated from cycle life, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between selected features and SOH. On the other hand, some adjustments are made in the proposed GPR model. Covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional input. Several aging datasets from the NASA data repository are used for demonstrating the estimation effect of the proposed method. Results show that the proposed method has high SOH estimation accuracy. In addition, a battery with a dynamic discharging profile is used to verify the robustness and reliability of this method.
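
    A minimal sketch of Gaussian process regression from a few charging-curve features to SOH, assuming synthetic stand-in data and a plain RBF-plus-noise kernel rather than the modified covariance design and similarity measure of the paper.

        # Sketch: GPR mapping four charging-curve features to SOH.
        # Features and data are synthetic stand-ins; kernel is a plain RBF + noise.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

        rng = np.random.default_rng(1)
        X = rng.uniform(size=(80, 4))                 # e.g. charge time, voltage slope...
        soh = 1.0 - 0.3 * X[:, 0] + 0.05 * rng.normal(size=80)

        kernel = ConstantKernel() * RBF(length_scale=np.ones(4)) + WhiteKernel()
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
        gpr.fit(X, soh)

        X_new = rng.uniform(size=(5, 4))
        mean, std = gpr.predict(X_new, return_std=True)   # prediction with uncertainty
        print(np.round(mean, 3), np.round(std, 3))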

  14. Predicting Ascospore Release of Monilinia vaccinii-corymbosi of Blueberry with Machine Learning.

    PubMed

    Harteveld, Dalphy O C; Grant, Michael R; Pscheidt, Jay W; Peever, Tobin L

    2017-11-01

    Mummy berry, caused by Monilinia vaccinii-corymbosi, causes economic losses in highbush blueberry in the U.S. Pacific Northwest (PNW). Apothecia develop from mummified berries overwintering on soil surfaces and produce ascospores that infect tissue emerging from floral and vegetative buds. Disease control currently relies on fungicides applied on a calendar basis rather than on inoculum availability. To establish a prediction model for ascospore release, apothecial development was tracked in three fields, one in western Oregon and two in northwestern Washington, in 2015 and 2016. Air and soil temperature, precipitation, soil moisture, leaf wetness, relative humidity and solar radiation were monitored using in-field weather stations and Washington State University's AgWeatherNet stations. Four modeling approaches were compared: logistic regression, multivariate adaptive regression splines, artificial neural networks, and random forest. A supervised learning approach was used to train the models on two data sets: training (70%) and testing (30%). The importance of environmental factors was calculated for each model separately. Soil temperature, soil moisture, and solar radiation were identified as the most important factors influencing ascospore release. Random forest models, with 78% accuracy, showed the best performance compared with the other models. The results of this research help PNW blueberry growers optimize fungicide use and reduce production costs.
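
    A minimal sketch of the random forest approach with a 70%/30% supervised split and variable importances, treating ascospore release as a binary event and using hypothetical weather-feature names and synthetic data.

        # Sketch: random forest predicting ascospore release (treated here as binary)
        # from weather features, with a 70/30 split and feature importances.
        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(2)
        features = ["soil_temp", "soil_moisture", "solar_rad", "air_temp", "leaf_wetness"]
        X = pd.DataFrame(rng.normal(size=(300, 5)), columns=features)   # stand-in data
        y = (X["soil_temp"] + X["soil_moisture"] + rng.normal(size=300) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

        print("accuracy:", round(accuracy_score(y_te, rf.predict(X_te)), 2))
        print(pd.Series(rf.feature_importances_, index=features)
                .sort_values(ascending=False))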

  15. Dynamic modelling of n-of-1 data: powerful and flexible data analytics applied to individualised studies.

    PubMed

    Vieira, Rute; McDonald, Suzanne; Araújo-Soares, Vera; Sniehotta, Falko F; Henderson, Robin

    2017-09-01

    N-of-1 studies are based on repeated observations within an individual or unit over time and are acknowledged as an important research method for generating scientific evidence about the health or behaviour of an individual. Statistical analyses of n-of-1 data require accurate modelling of the outcome while accounting for its distribution, time-related trend and error structures (e.g., autocorrelation) as well as reporting readily usable contextualised effect sizes for decision-making. A number of statistical approaches have been documented but no consensus exists on which method is most appropriate for which type of n-of-1 design. We discuss the statistical considerations for analysing n-of-1 studies and briefly review some currently used methodologies. We describe dynamic regression modelling as a flexible and powerful approach, adaptable to different types of outcomes and capable of dealing with the different challenges inherent to n-of-1 statistical modelling. Dynamic modelling borrows ideas from longitudinal and event history methodologies which explicitly incorporate the role of time and the influence of past on future. We also present an illustrative example of the use of dynamic regression on monitoring physical activity during the retirement transition. Dynamic modelling has the potential to expand researchers' access to robust and user-friendly statistical methods for individualised studies.
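
    A minimal sketch of a dynamic regression for a single individual's series, assuming a hypothetical daily step-count outcome and a binary retirement indicator; the lag-1 autocorrelation is handled with AR(1) errors via a state-space regression.

        # Sketch: dynamic regression for an n-of-1 series with AR(1) errors.
        # Outcome, covariate and data are hypothetical stand-ins.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 120                                          # 120 daily observations
        retired = (np.arange(n) >= 60).astype(float)     # regime change at day 60
        steps = (8000 - 1500 * retired
                 + np.cumsum(rng.normal(scale=50, size=n))
                 + rng.normal(scale=300, size=n))

        exog = sm.add_constant(pd.DataFrame({"retired": retired}))
        # Regression with AR(1) errors: y_t = x_t'beta + u_t, u_t ~ AR(1).
        model = sm.tsa.statespace.SARIMAX(steps, exog=exog, order=(1, 0, 0))
        fit = model.fit(disp=False)
        print(fit.summary().tables[1])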

  16. Forecasting selected specific age mortality rate of Malaysia by using Lee-Carter model

    NASA Astrophysics Data System (ADS)

    Shukri Kamaruddin, Halim; Ismail, Noriszura

    2018-03-01

    Observing mortality patterns and trends is important for any country seeking to maintain sound socio-economic conditions over the coming projection years. A declining mortality trend gives a good impression of what a government has done for the citizens of a nation. Selecting a particular mortality model can be tricky, depending on the modelling approach adopted. The Lee-Carter model is adopted here because of its simplicity and the reliability of its results under a regression approach. Implementation of Lee-Carter in finding a fitted model, and hence its projection, has been used worldwide in most mortality research in developed countries. This paper studies the past mortality pattern of Malaysia using the original Lee-Carter (1992) model and its cross-sectional observation for a single age. The data are indexed by age at death and year of death from 1984 to 2012 and are supplied by the Department of Statistics Malaysia. The results are modelled using RStudio, and the analysis focuses on the trend and projection of the mortality rate and age-specific mortality rates in the future. This work can be extended to different variants and extensions of Lee-Carter, or to other stochastic mortality tools, using the Malaysian mortality experience as the central case.
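
    A minimal sketch of the classical Lee-Carter decomposition, log m(x,t) = a_x + b_x k_t, estimated with a singular value decomposition of the centred log rates; the rate matrix here is synthetic stand-in data with ages in rows and years (1984 to 2012) in columns.

        # Sketch: Lee-Carter fit, log m(x,t) = a_x + b_x * k_t, via SVD.
        # `rates` is a synthetic (ages x years) matrix of central death rates.
        import numpy as np

        rng = np.random.default_rng(4)
        ages, years = 20, 29                           # e.g. 1984-2012
        rates = np.exp(-5 + 0.08 * np.arange(ages)[:, None]
                       - 0.01 * np.arange(years)[None, :]
                       + 0.02 * rng.normal(size=(ages, years)))   # stand-in data

        log_m = np.log(rates)
        a_x = log_m.mean(axis=1)                       # age pattern: row means
        U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)

        b_x = U[:, 0] / U[:, 0].sum()                  # normalise so sum(b_x) = 1
        k_t = s[0] * Vt[0, :] * U[:, 0].sum()          # period index, sum(k_t) = 0
        # (the sign of k_t is arbitrary up to the SVD sign convention)
        print("b_x sums to", round(b_x.sum(), 3),
              "| k_t linear trend:", np.polyfit(np.arange(years), k_t, 1)[0])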

  17. Downscaling Land Surface Temperature in Complex Regions by Using Multiple Scale Factors with Adaptive Thresholds

    PubMed Central

    Yang, Yingbao; Li, Xiaolong; Pan, Xin; Zhang, Yong; Cao, Chen

    2017-01-01

    Many downscaling algorithms have been proposed to address the issue of coarse-resolution land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled by a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window. CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using the LST derived from four Landsat 8 thermal images of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with coefficient of determination and root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, ours shows similar accuracy and availability in all seasons. The best (worst) availability occurred in the region of vegetation (water). Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and the application of our model at middle and low spatial resolutions. PMID:28368301

  18. Correlates of Incident Cognitive Impairment in the REasons for Geographic and Racial Differences in Stroke (REGARDS) Study

    PubMed Central

    Gillett, Sarah R.; Thacker, Evan L.; Letter, Abraham J.; McClure, Leslie A.; Wadley, Virginia G.; Unverzagt, Frederick W.; Kissela, Brett M.; Kennedy, Richard E.; Glasser, Stephen P.; Levine, Deborah A.; Cushman, Mary

    2015-01-01

    Objective To identify approximately 500 cases of incident cognitive impairment (ICI) in a large, national sample adapting an existing cognitive test-based case definition and to examine relationships of vascular risk factors with ICI. Method Participants were from the REGARDS study, a national sample of 30,239 African-American and white Americans. Participants included in this analysis had normal cognitive screening and no history of stroke at baseline, and at least one follow-up cognitive assessment with a three-test battery (TTB). Regression-based norms were applied to TTB scores to identify cases of ICI. Logistic regression was used to model associations with baseline vascular risk factors. Results We identified 495 participants with ICI out of 17,630 eligible participants. In multivariable modeling, income (OR 1.83, CI 1.27, 2.62), stroke belt residence (OR 1.45, CI 1.18, 1.78), history of transient ischemic attack (OR 1.90, CI 1.29, 2.81), coronary artery disease (OR 1.32, CI 1.02, 1.70), diabetes (OR 1.48, CI 1.17, 1.87), obesity (OR 1.40, CI 1.05, 1.86), and incident stroke (OR 2.73, CI 1.52, 4.90) were associated with ICI. Conclusions We adapted a previously validated cognitive test-based case definition to identify cases of ICI. Many previously identified risk factors were associated with ICI, supporting the criterion-related validity of our definition. PMID:25978342

  19. A generalized multivariate regression model for modelling ocean wave heights

    NASA Astrophysics Data System (ADS)

    Wang, X. L.; Feng, Y.; Swail, V. R.

    2012-04-01

    In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and the ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of Pierce skill score, frequency bias index, and correlation skill score. Because they are not normally distributed, wave heights are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation must be, and is, accounted for. The models with and without Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subject to a trend analysis that allows for non-linear (polynomial) trends.
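
    A minimal sketch of the transform-then-regress step, assuming synthetic stand-in arrays for Hs and two MSLP-derived predictors: the Box-Cox parameter is estimated from the data, and the residual lag-1 autocorrelation is checked afterwards.

        # Sketch: Box-Cox transform wave heights, regress on MSLP-derived predictors,
        # then check lag-1 autocorrelation of the residuals. Data are stand-ins.
        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        rng = np.random.default_rng(5)
        n = 500
        mslp_grad = rng.normal(size=n)                # hypothetical MSLP predictors
        mslp_anom = rng.normal(size=n)
        hs = np.exp(0.5 + 0.4 * mslp_grad + 0.2 * mslp_anom
                    + rng.normal(scale=0.3, size=n))  # positive wave heights

        hs_bc, lam = stats.boxcox(hs)                 # data-adaptive Box-Cox
        X = sm.add_constant(np.column_stack([mslp_grad, mslp_anom]))
        fit = sm.OLS(hs_bc, X).fit()

        resid = fit.resid
        lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
        print("Box-Cox lambda:", round(lam, 2), "| R2:", round(fit.rsquared, 3),
              "| lag-1 residual autocorrelation:", round(lag1, 2))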

  20. Improving the Spatial Prediction of Soil Organic Carbon Stocks in a Complex Tropical Mountain Landscape by Methodological Specifications in Machine Learning Approaches

    PubMed Central

    Schmidt, Johannes; Glaser, Bruno

    2016-01-01

    Tropical forests are significant carbon sinks and their soils’ carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: Different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms: random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms—including the model tuning and predictor selection—were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged between 0.2 to 17.7 kg m-2, displaying a huge variability with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models’ predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward compared to backward selection procedures. Choosing predictors due to their individual performance was outperformed by the two procedures which accounted for predictor interaction. PMID:27128736

  1. Breaking the solid ground of common sense: undoing "structure" with Michael Balint.

    PubMed

    Bonomi, Carlo

    2003-09-01

    Balint's great merit was to question what, in the classical perspective, was assumed as a prerequisite for analysis and thus located beyond analysis: the maturity of the ego. A fundamental premise of his work was Ferenczi's distrust for the structural model, which praised the maturity of the ego and its verbal, social, and adaptive abilities. Ferenczi's view of ego maturation as a trauma derivative was strikingly different from the theories of all other psychoanalytic schools and seems to be responsible for Balint's understanding of regression as a sort of inverted process that enables the undoing of the sheltering structures of the mature mind. Balint's understanding of the relation between mature ego and regression diverged not only from the ego psychologists, who emphasized the idea of therapeutic alliance, but also from most of the authors who embraced the object-relational view, like Klein (who considered regression a manifestation of the patient's craving for oral gratification), Fairbairn (who gave up the notion of regression), and Guntrip (who viewed regression as a schizoid phenomenon related to the ego weakness). According to Balint, the clinical appearance of a regression would "depend also on the way the regression is recognized, is accepted, and is responded to by the analyst." In this respect, his position was close to Winnicott's reformulation of the therapeutic action. Yet, the work of Balint reflects the persuasion that the progressive fluidification of the solid structure could be enabled only by the analyst's capacity for becoming himself or herself [unsolid].

  2. The use of regression tree analysis for predicting the functional outcome following traumatic spinal cord injury.

    PubMed

    Facchinello, Yann; Beauséjour, Marie; Richard-Denis, Andreane; Thompson, Cynthia; Mac-Thiong, Jean-Marc

    2017-10-25

    Predicting the long-term functional outcome following traumatic spinal cord injury is needed to adapt medical strategies and to plan an optimized rehabilitation. This study investigates the use of regression trees for the development of predictive models based on acute clinical and demographic predictors. This prospective study was performed on 172 patients hospitalized following traumatic spinal cord injury. Functional outcome was quantified using the Spinal Cord Independence Measure collected within the first year post injury. Age, delay prior to surgery and Injury Severity Score were considered as continuous predictors, while energy of injury, trauma mechanisms, neurological level of injury, injury severity, occurrence of early spasticity, urinary tract infection, pressure ulcer and pneumonia were coded as categorical inputs. A simplified model was built using only injury severity, neurological level, energy and age as predictors and was compared to a more complex model considering all 11 predictors mentioned above. The models built using 4 and 11 predictors were found to explain 51.4% and 62.3% of the variance of the Spinal Cord Independence Measure total score after validation, respectively. The severity of the neurological deficit at admission was found to be the most important predictor. Other important predictors were the Injury Severity Score, age, neurological level and delay prior to surgery. Regression trees offer promising performance for predicting the functional outcome after a traumatic spinal cord injury. They could help to determine the number and type of predictors leading to a prediction model of the functional outcome that can be used clinically in the future.

  3. Revisiting Regression in Autism: Heller's "Dementia Infantilis"

    ERIC Educational Resources Information Center

    Westphal, Alexander; Schelinski, Stefanie; Volkmar, Fred; Pelphrey, Kevin

    2013-01-01

    Theodor Heller first described a severe regression of adaptive function in normally developing children, something he termed dementia infantilis, over 100 years ago. Dementia infantilis is most closely related to the modern diagnosis, childhood disintegrative disorder. We translate Heller's paper, Uber Dementia Infantilis, and discuss…

  4. On neural networks in identification and control of dynamic systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Hyland, David C.

    1993-01-01

    This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on the understanding of how the neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feed forward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.

  5. Aerial robot intelligent control method based on back-stepping

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Xue, Qian

    2018-05-01

    The aerial robot is characterized by strong nonlinearity, high coupling and parameter uncertainty, so a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellum Model Articulation Controller (CMAC) neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and fix parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law has the desired attitude tracking performance and good robustness in case of uncertainties and large errors in the model parameters.

  6. Stress, Adaptive Coping, and Life Satisfaction

    ERIC Educational Resources Information Center

    Buser, Juleen K.; Kearney, Anne

    2017-01-01

    The authors examined the relationship between stress, adaptive coping, and life satisfaction among college students who reported having a friend or family member with eating disorder symptomatology. A hierarchical regression confirmed the study's hypotheses. Higher stress was linked with less life satisfaction. After stress was controlled, plan…

  7. Study relationship between inorganic and organic coal analysis with gross calorific value by multiple regression and ANFIS

    USGS Publications Warehouse

    Chelgani, S.C.; Hart, B.; Grady, W.C.; Hower, J.C.

    2011-01-01

    The relationship between maceral content plus mineral matter and gross calorific value (GCV) for a wide range of West Virginia coal samples (from 6518 to 15330 BTU/lb; 15.16 to 35.66 MJ/kg) has been investigated by multivariable regression and an adaptive neuro-fuzzy inference system (ANFIS). A stepwise least-squares comparison between liptinite, vitrinite, plus mineral matter as input data sets and measured GCV gave a nonlinear correlation coefficient (R2) of 0.83. Using the same data set, the correlation between the predicted GCV from the ANFIS model and the actual GCV reported an R2 value of 0.96. It was determined that the GCV-based prediction methods, as used in this article, can provide a reasonable estimation of GCV. Copyright © Taylor & Francis Group, LLC.

  8. Comprehensive modeling of monthly mean soil temperature using multivariate adaptive regression splines and support vector machine

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2017-07-01

    Soil temperature (Ts) and its thermal regime are the most important factors in plant growth, biological activities, and water movement in soil. Due to scarcity of Ts data, estimation of soil temperature is an important issue in different fields of science. The main objective of the present study is to investigate the accuracy of multivariate adaptive regression splines (MARS) and support vector machine (SVM) methods for estimating Ts. For this aim, the monthly mean data of Ts (at depths of 5, 10, 50, and 100 cm) and meteorological parameters of 30 synoptic stations in Iran were utilized. To develop the MARS and SVM models, various combinations of minimum, maximum, and mean air temperatures (Tmin, Tmax, T); actual and maximum possible sunshine duration and sunshine duration ratio (n, N, n/N); actual, net, and extraterrestrial solar radiation data (Rs, Rn, Ra); precipitation (P); relative humidity (RH); wind speed at 2 m height (u2); and water vapor pressure (Vp) were used as input variables. Three error statistics, including root-mean-square error (RMSE), mean absolute error (MAE), and determination coefficient (R2), were used to check the performance of the MARS and SVM models. The results indicated that MARS was superior to SVM at different depths. In the test and validation phases, the most accurate estimations for MARS were obtained at the depth of 10 cm for the Tmax, Tmin, T inputs (RMSE = 0.71 °C, MAE = 0.54 °C, and R2 = 0.995) and for the RH, Vp, P, and u2 inputs (RMSE = 0.80 °C, MAE = 0.61 °C, and R2 = 0.996), respectively.

  9. Adaptation of clinical prediction models for application in local settings.

    PubMed

    Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M

    2012-01-01

    When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. To demonstrate how clinical information can direct updating a prediction model and development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) reestimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy to deal with missing predictor values at the time of risk calculation. Extensive knowledge of local, clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
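
    A minimal sketch of the updating steps on synthetic stand-in data: the original model's linear predictor is recalibrated on local data (calibration intercept and slope) and then extended with one new predictor; the original coefficients and variable names are hypothetical.

        # Sketch: updating an existing logistic prediction model on local data.
        # Step 1: recalibrate the original linear predictor (intercept + slope).
        # Step 2: extend the recalibrated model with a new local predictor.
        # Coefficients, outcome and column names are hypothetical stand-ins.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(6)
        n = 1847
        df = pd.DataFrame({
            "female": rng.integers(0, 2, n),
            "smoker": rng.integers(0, 2, n),
            "volatile_agent": rng.integers(0, 2, n),   # new predictor for the update
        })
        # Hypothetical original model: lp = -1.0 + 1.2*female + 0.6*smoker
        df["lp_original"] = -1.0 + 1.2 * df["female"] + 0.6 * df["smoker"]
        p_true = 1 / (1 + np.exp(-(-0.5 + 0.9 * df["lp_original"]
                                   + 0.8 * df["volatile_agent"])))
        df["ponv"] = rng.binomial(1, p_true)

        recal = smf.logit("ponv ~ lp_original", data=df).fit(disp=False)
        updated = smf.logit("ponv ~ lp_original + volatile_agent", data=df).fit(disp=False)
        print("calibration intercept/slope:", recal.params.round(2).to_dict())
        print("updated model:", updated.params.round(2).to_dict())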

  10. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate

    PubMed Central

    Motulsky, Harvey J; Brown, Ronald E

    2006-01-01

    Background: Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation, and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results: We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion: Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
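
    The sketch below illustrates the general idea (robust fit, flag large residuals, refit by ordinary least squares) in a loose form: SciPy's Cauchy loss stands in for the Lorentzian assumption, and a fixed robust z-score threshold stands in for the paper's false discovery rate step. It is not the ROUT algorithm itself; the decay model and data are simulated.

    ```python
    # Sketch of the ROUT idea: robust nonlinear fit (Cauchy ~ Lorentzian loss),
    # flag points with large residuals as outliers, then refit by ordinary least
    # squares on the remaining points. The FDR-based cutoff is simplified here to
    # a fixed robust z-score threshold.
    import numpy as np
    from scipy.optimize import least_squares, curve_fit

    def model(x, a, b):
        return a * np.exp(-b * x)      # example one-phase decay

    rng = np.random.default_rng(2)
    x = np.linspace(0, 10, 60)
    y = model(x, 5.0, 0.4) + rng.normal(0, 0.15, x.size)
    y[[5, 30]] += 2.5                  # inject two outliers

    res = least_squares(lambda p: model(x, *p) - y, x0=[1.0, 0.1], loss="cauchy")
    resid = model(x, *res.x) - y
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust SD (MAD)
    keep = np.abs(resid) < 3 * scale   # stand-in for the FDR decision

    popt, _ = curve_fit(model, x[keep], y[keep], p0=res.x)
    print("robust fit:", res.x, "| refit without outliers:", popt)
    ```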

  11. Information Behavior and HIV Testing Intentions Among Young Men at Risk for HIV/AIDS

    PubMed Central

    Meadowbrooke, Chrysta C.; Veinot, Tiffany C.; Loveluck, Jimena; Hickok, Andrew; Bauermeister, José A.

    2014-01-01

    Health research shows that knowing about health risks may not translate into behavior change. However, such research typically operationalizes health information acquisition with knowledge tests. Information scientists who investigate socially embedded information behaviors could help improve understanding of potential associations between information behavior—as opposed to knowledge—and health behavior formation, thus providing new opportunities to investigate the effects of health information. We examine the associations between information behavior and HIV testing intentions among young men who have sex with men (YMSM), a group with high rates of unrecognized HIV infection. We used the theory of planned behavior (TPB) to predict intentions to seek HIV testing in an online sample of 163 YMSM. Multiple regression and recursive path analysis were used to test two models: (a) the basic TPB model and (b) an adapted model that added the direct effects of three information behaviors (information exposure, use of information to make HIV-testing decisions, prior experience obtaining an HIV test) plus self-rated HIV knowledge. As hypothesized, our adapted model improved predictions, explaining more than twice as much variance as the original TPB model. The results suggest that information behaviors may be more important predictors of health behavior intentions than previously acknowledged. PMID:25346934

  12. Practical application of cure mixture model for long-term censored survivor data from a withdrawal clinical trial of patients with major depressive disorder.

    PubMed

    Arano, Ichiro; Sugimoto, Tomoyuki; Hamasaki, Toshimitsu; Ohno, Yuko

    2010-04-23

    Survival analysis methods such as the Kaplan-Meier method, log-rank test, and Cox proportional hazards regression (Cox regression) are commonly used to analyze data from randomized withdrawal studies in patients with major depressive disorder. However, such common methods may be inappropriate when long-term censored relapse-free times appear in the data, as these methods assume that, if complete follow-up were possible for all individuals, each would eventually experience the event of interest. In this paper, to analyze data including such long-term censored relapse-free times, we discuss a semi-parametric cure regression (Cox cure regression), which combines a logistic formulation for the probability of occurrence of an event with a Cox proportional hazards specification for the time of occurrence of the event. In specifying the treatment's effect on disease-free survival, we consider the fraction of long-term survivors and the risks associated with a relapse of the disease. In addition, we develop a tree-based method for time-to-event data to identify groups of patients with differing prognoses (cure survival CART). Although analysis methods typically adapt the log-rank statistic for recursive partitioning procedures, the method applied here uses a likelihood ratio (LR) test statistic from a fit of the cure survival regression assuming exponential and Weibull distributions for the latency time of relapse. The method is illustrated using data from a sertraline randomized withdrawal study in patients with major depressive disorder. We conclude that Cox cure regression reveals who may be cured and how the treatment and other factors affect both the cure incidence and the relapse time of uncured patients, and that the cure survival CART output provides easily understandable and interpretable information, useful both in identifying groups of patients with differing prognoses and in building Cox cure regression models that lead to meaningful interpretations.
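
    As a simplified illustration of the mixture cure idea, the sketch below fits a fully parametric cure model by maximum likelihood: a logistic model for the probability of being uncured and an exponential distribution for the latency time, with censoring handled in the likelihood. It is a stand-in for, not a reproduction of, the semi-parametric Cox cure regression and CART extension described in the paper; all data are simulated.

    ```python
    # Sketch of a parametric mixture cure model: logistic incidence (probability of
    # being "uncured") and an exponential latency time, fit by maximum likelihood.
    # Simplified stand-in for the semi-parametric Cox cure regression in the paper.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(3)
    n = 500
    treat = rng.integers(0, 2, n)
    p_uncured = expit(0.5 - 1.0 * treat)           # treatment lowers relapse risk
    uncured = rng.binomial(1, p_uncured)
    t_event = rng.exponential(1 / 0.3, n)          # latency for uncured subjects
    t_cens = rng.uniform(1, 10, n)
    time = np.where(uncured == 1, np.minimum(t_event, t_cens), t_cens)
    event = ((uncured == 1) & (t_event <= t_cens)).astype(int)

    def negloglik(params):
        b0, b1, log_lam = params
        p = expit(b0 + b1 * treat)                 # P(uncured | x)
        lam = np.exp(log_lam)
        ll_event = np.log(p) + np.log(lam) - lam * time
        ll_cens = np.log((1 - p) + p * np.exp(-lam * time))
        return -np.sum(event * ll_event + (1 - event) * ll_cens)

    fit = minimize(negloglik, x0=[0.0, 0.0, np.log(0.1)], method="Nelder-Mead")
    print("incidence coefficients and rate:", fit.x[:2], np.exp(fit.x[2]))
    ```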

  13. The Impact of Life Events on Job Satisfaction

    ERIC Educational Resources Information Center

    Georgellis, Yannis; Lange, Thomas; Tabvuma, Vurain

    2012-01-01

    Employing fixed effects regression techniques on longitudinal data, we investigate how life events affect employees' job satisfaction. Unlike previous work-life research, exploring mostly contemporaneous correlations, we look for evidence of adaptation in the years following major life events. We find evidence of adaptation following the first…

  14. Data mining: Potential applications in research on nutrition and health.

    PubMed

    Batterham, Marijka; Neale, Elizabeth; Martin, Allison; Tapsell, Linda

    2017-02-01

    Data mining enables further insights from nutrition-related research, but caution is required. The aim of this analysis was to demonstrate and compare the utility of data mining methods in classifying a categorical outcome derived from a nutrition-related intervention. Baseline data (23 variables, 8 categorical) on participants (n = 295) in an intervention trial were used to classify participants in terms of meeting the criterion of achieving 10 000 steps per day. Results from classification and regression trees (CARTs), random forests, adaptive boosting, logistic regression, support vector machines and neural networks were compared using area under the curve (AUC) and error assessments. The CART produced the best model when considering the AUC (0.703), overall error (18%) and within-class error (28%). Logistic regression also performed reasonably well compared to the other models (AUC 0.675, overall error 23%, within-class error 36%). All the methods gave different rankings of variables' importance. CART found that body fat, quality of life using the SF-12 Physical Component Summary (PCS) and the cholesterol:HDL ratio were the most important predictors of meeting the 10 000 steps criterion, while logistic regression showed the SF-12 PCS, glucose levels and level of education to be the most significant predictors (P ≤ 0.01). Differing outcomes suggest caution is required with a single data mining method, particularly in a dataset with nonlinear relationships and outliers and when exploring relationships that were not the primary outcomes of the research. © 2017 Dietitians Association of Australia.
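
    A minimal sketch of this kind of head-to-head comparison is given below, scoring a classification tree, a random forest, and logistic regression by cross-validated AUC. The synthetic features are placeholders for the trial's 23 baseline variables.

    ```python
    # Sketch: compare a classification tree, a random forest, and logistic
    # regression by cross-validated AUC for a binary step-goal outcome.
    # The synthetic data stand in for the trial's baseline variables.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=295, n_features=23, n_informative=6,
                               weights=[0.6, 0.4], random_state=0)
    models = {
        "CART": DecisionTreeClassifier(max_depth=3, random_state=0),
        "Random forest": RandomForestClassifier(n_estimators=500, random_state=0),
        "Logistic regression": make_pipeline(StandardScaler(),
                                             LogisticRegression(max_iter=1000)),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
    ```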

  15. Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN

    DOE PAGES

    Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl

    2017-06-09

    SNAP is a proxy application which simulates the computational motion of a neutral particle transport code, PARTISN. Here in this work, we have adapted parts of SNAP separately; we have re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement to the original schedule, and we have created multiple Kokkos implementations of the computational kernel of SNAP, displaying similar performance to the native Fortran. We then translate our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.

  16. Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl

    SNAP is a proxy application which simulates the computational motion of a neutral particle transport code, PARTISN. Here in this work, we have adapted parts of SNAP separately; we have re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement to the original schedule, and we have created multiple Kokkos implementations of the computational kernel of SNAP, displaying similar performance to the native Fortran. We then translate our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.

  17. EM Adaptive LASSO—A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes

    PubMed Central

    Mallick, Himel; Tiwari, Hemant K.

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon for the phenotypes to contain an enormous number of zeros due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062
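
    The sketch below illustrates only the data-adaptive weighting at the heart of the adaptive LASSO, on a plain Gaussian linear model rather than the paper's zero-inflated Poisson EM machinery: initial estimates define weights 1/|beta|^gamma, which are applied by rescaling columns before an ordinary LASSO fit.

    ```python
    # Sketch of the adaptive LASSO weighting idea on a plain linear model (the
    # paper embeds the same idea in a zero-inflated Poisson EM algorithm, which is
    # not reproduced here). Weights w_j = 1/|beta_init_j|**gamma are applied by
    # rescaling the columns before an ordinary LASSO fit.
    import numpy as np
    from sklearn.linear_model import Ridge, Lasso

    rng = np.random.default_rng(4)
    n, p = 200, 20
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:3] = [1.5, -1.0, 0.8]                           # three true signals
    y = X @ beta + rng.normal(0, 1, n)

    gamma = 1.0
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_          # initial estimates
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)         # data-adaptive weights
    X_scaled = X / w                                      # column-wise rescaling

    lasso = Lasso(alpha=0.05).fit(X_scaled, y)
    beta_adaptive = lasso.coef_ / w                       # undo the rescaling
    print("selected predictors:", np.nonzero(beta_adaptive)[0])
    ```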

  18. EM Adaptive LASSO-A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes.

    PubMed

    Mallick, Himel; Tiwari, Hemant K

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon for the phenotypes to contain an enormous number of zeros due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice.

  19. The role of social capital and community belongingness for exercise adherence: An exploratory study of the CrossFit gym model.

    PubMed

    Whiteman-Sandland, Jessica; Hawkins, Jemma; Clayton, Debbie

    2016-08-01

    This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.

  20. Artificial intelligence models for predicting iron deficiency anemia and iron serum level based on accessible laboratory data.

    PubMed

    Azarkhish, Iman; Raoufy, Mohammad Reza; Gharibzadeh, Shahriar

    2012-06-01

    Iron deficiency anemia (IDA) is the most common nutritional deficiency worldwide. Measuring serum iron is time-consuming, expensive, and not available in most hospitals. In this study, based on four accessible laboratory parameters (MCV, MCH, MCHC, Hb/RBC), we developed an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) to diagnose IDA and to predict serum iron level. Our results show that the neural network analysis is superior to the ANFIS and logistic regression models in diagnosing IDA. Moreover, the results show that the ANN is likely to provide a test for predicting serum iron levels with high accuracy and acceptable precision.
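
    A minimal sketch of the classification part is shown below: a small neural network and a logistic regression are trained on the same four blood-count features and compared by cross-validated AUC. The ANFIS component is not reproduced; the simulated data and effect sizes are purely illustrative.

    ```python
    # Sketch: small neural network vs. logistic regression for classifying iron
    # deficiency anemia from four routine blood-count features (MCV, MCH, MCHC,
    # Hb/RBC). Simulated data; effect sizes are illustrative only.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n = 400
    ida = rng.binomial(1, 0.35, n)
    mcv = rng.normal(90 - 12 * ida, 5)        # microcytosis in IDA
    mch = rng.normal(30 - 4 * ida, 2)
    mchc = rng.normal(34 - 2 * ida, 1.5)
    hb_rbc = rng.normal(3.0 - 0.5 * ida, 0.3)
    X = np.column_stack([mcv, mch, mchc, hb_rbc])

    for name, clf in [("ANN", MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                             random_state=0)),
                      ("Logistic", LogisticRegression(max_iter=1000))]:
        model = make_pipeline(StandardScaler(), clf)
        auc = cross_val_score(model, X, ida, cv=5, scoring="roc_auc").mean()
        print(f"{name}: cross-validated AUC = {auc:.3f}")
    ```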

  1. The Role of Spirituality in Coping with Visual Impairment

    ERIC Educational Resources Information Center

    Yampolsky, Maya A.; Wittich, Walter; Webb, Gail; Overbury, Olga

    2008-01-01

    Spirituality and coping behaviors were measured in 85 individuals with visual impairments aged 23 to 97. A regression analysis indicated that the religious well-being subscale of the Spiritual Well-Being Scale is a significant predictor of adaptive coping behaviors, indicating that higher religious well-being facilitates adaptive coping. (Contains…

  2. Family Influences on Racial Identity among African American Youth

    ERIC Educational Resources Information Center

    Townsend, Tiffany; Lanphier, Erin

    2007-01-01

    The purpose of this study was to examine the influence of parental efficacy, family coping, and adaptive family functioning on the development of racial identity among African American youth. Fifty-two African American parent-child dyads were participants. Results of a hierarchical regression revealed family adaptability and family cognitive…

  3. Predicting Adaptive Functioning of Mentally Retarded Persons in Community Settings.

    ERIC Educational Resources Information Center

    Hull, John T.; Thompson, Joy C.

    1980-01-01

    The impact of a variety of individual, residential, and community variables on adaptive functioning of 369 retarded persons (18 to 73 years old) was examined using a multiple regression analysis. Individual characteristics (especially IQ) accounted for 21 percent of the variance, while environmental variables, primarily those related to…

  4. Psychological Adaptation, Marital Satisfaction, and Academic Self-Efficacy of International Students

    ERIC Educational Resources Information Center

    Bulgan, Gökçe; Çiftçi, Ayse

    2017-01-01

    The authors investigated marital satisfaction and academic self-efficacy in relation to psychological adaptation (i.e., psychological well-being, life satisfaction) in a sample of 198 married international students. Results of multiple regression analyses indicated that marital satisfaction and academic self-efficacy accounted for 45.9% of…

  5. Statistical-learning strategies generate only modestly performing predictive models for urinary symptoms following external beam radiotherapy of the prostate: A comparison of conventional and machine-learning methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yahya, Noorazrul, E-mail: noorazrul.yahya@research.uwa.edu.au; Ebert, Martin A.; Bulsara, Max

    Purpose: Given the paucity of available data concerning radiotherapy-induced urinary toxicity, it is important to ensure derivation of the most robust models with superior predictive performance. This work explores multiple statistical-learning strategies for prediction of urinary symptoms following external beam radiotherapy of the prostate. Methods: The performance of logistic regression, elastic-net, support-vector machine, random forest, neural network, and multivariate adaptive regression splines (MARS) to predict urinary symptoms was analyzed using data from 754 participants accrued by TROG03.04-RADAR. Predictive features included dose-surface data, comorbidities, and medication intake. Four symptoms were analyzed: dysuria, haematuria, incontinence, and frequency, each with three definitions (grade ≥ 1, grade ≥ 2, and longitudinal) with event rates between 2.3% and 76.1%. Repeated cross-validations producing matched models were implemented. A synthetic minority oversampling technique was utilized in endpoints with rare events. Parameter optimization was performed on the training data. Area under the receiver operating characteristic curve (AUROC) was used to compare performance, with the sample size sufficient to detect differences of ≥0.05 at the 95% confidence level. Results: Logistic regression, elastic-net, random forest, MARS, and support-vector machine were the highest-performing statistical-learning strategies in 3, 3, 3, 2, and 1 endpoints, respectively. Logistic regression, MARS, elastic-net, random forest, neural network, and support-vector machine were the best, or were not significantly worse than the best, in 7, 7, 5, 5, 3, and 1 endpoints. The best-performing statistical model was for dysuria grade ≥ 1 with AUROC ± standard deviation of 0.649 ± 0.074 using MARS. For longitudinal frequency and dysuria grade ≥ 1, all strategies produced AUROC>0.6 while all haematuria endpoints and longitudinal incontinence models produced AUROC<0.6. Conclusions: Logistic regression and MARS were most likely to be the best-performing strategy for the prediction of urinary symptoms with elastic-net and random forest producing competitive results. The predictive power of the models was modest and endpoint-dependent. New features, including spatial dose maps, may be necessary to achieve better models.
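
    The sketch below illustrates the evaluation protocol in miniature: repeated stratified cross-validation with synthetic minority oversampling applied only inside the training folds (via an imbalanced-learn pipeline), scoring each learner by AUROC. It assumes the imbalanced-learn package is installed; the data are synthetic, not the TROG 03.04-RADAR cohort.

    ```python
    # Sketch: repeated cross-validation with SMOTE applied only inside training
    # folds (imbalanced-learn pipeline), scoring models by AUROC on a rare-event
    # endpoint. Assumes imbalanced-learn; data are synthetic.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
    from imblearn.over_sampling import SMOTE
    from imblearn.pipeline import Pipeline

    X, y = make_classification(n_samples=754, n_features=30, weights=[0.95, 0.05],
                               random_state=0)   # rare-event endpoint
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
    for name, clf in [("Logistic regression", LogisticRegression(max_iter=1000)),
                      ("Random forest", RandomForestClassifier(random_state=0))]:
        pipe = Pipeline([("smote", SMOTE(random_state=0)), ("clf", clf)])
        auroc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
        print(f"{name}: AUROC = {auroc.mean():.3f}")
    ```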

  6. A theoretical model to describe progressions and regressions for exercise rehabilitation.

    PubMed

    Blanchard, Sam; Glasgow, Phil

    2014-08-01

    This article aims to describe a new theoretical model to simplify and aid visualisation of the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists but is an area that is lacking in theoretical models to assist clinicians when designing exercise programs to aid rehabilitation from injury. Historical models of periodization and motor learning theories lack any visual aids to assist clinicians. The concept of the proposed model is that new stimuli can be added or exchanged with other stimuli, either intrinsic or extrinsic to the participant, in order to gradually progress an exercise whilst remaining safe and effective. The proposed model maintains the core skills of physiotherapists by assisting clinical reasoning skills, exercise prescription and goal setting. It is not limited to any one pathology or rehabilitation setting and can be adapted by any level of skilled clinician. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. What actually confers adaptive capacity? Insights from agro-climatic vulnerability of Australian wheat.

    PubMed

    Bryan, Brett A; Huai, Jianjun; Connor, Jeff; Gao, Lei; King, Darran; Kandulu, John; Zhao, Gang

    2015-01-01

    Vulnerability assessments have often invoked sustainable livelihoods theory to support the quantification of adaptive capacity based on the availability of capital--social, human, physical, natural, and financial. However, the assumption that increased availability of these capitals confers greater adaptive capacity remains largely untested. We quantified the relationship between commonly used capital indicators and an empirical index of adaptive capacity (ACI) in the context of vulnerability of Australian wheat production to climate variability and change. We calculated ACI by comparing actual yields from farm survey data to climate-driven expected yields estimated by a crop model for 12 regions in Australia's wheat-sheep zone from 1991-2010. We then compiled data for 24 typical indicators used in vulnerability analyses, spanning the five capitals. We analyzed the ACI and used regression techniques to identify related capital indicators. Between regions, mean ACI was not significantly different but variance over time was. ACI was higher in dry years and lower in wet years suggesting that farm adaptive strategies are geared towards mitigating losses rather than capitalizing on opportunity. Only six of the 24 capital indicators were significantly related to adaptive capacity in a way predicted by theory. Another four indicators were significantly related to adaptive capacity but of the opposite sign, countering our theory-driven expectation. We conclude that the deductive, theory-based use of capitals to define adaptive capacity and vulnerability should be more circumspect. Assessments need to be more evidence-based, first testing the relevance and influence of capital metrics on adaptive capacity for the specific system of interest. This will more effectively direct policy and targeting of investment to mitigate agro-climatic vulnerability.

  8. What Actually Confers Adaptive Capacity? Insights from Agro-Climatic Vulnerability of Australian Wheat

    PubMed Central

    Bryan, Brett A.; Huai, Jianjun; Connor, Jeff; Gao, Lei; King, Darran; Kandulu, John; Zhao, Gang

    2015-01-01

    Vulnerability assessments have often invoked sustainable livelihoods theory to support the quantification of adaptive capacity based on the availability of capital—social, human, physical, natural, and financial. However, the assumption that increased availability of these capitals confers greater adaptive capacity remains largely untested. We quantified the relationship between commonly used capital indicators and an empirical index of adaptive capacity (ACI) in the context of vulnerability of Australian wheat production to climate variability and change. We calculated ACI by comparing actual yields from farm survey data to climate-driven expected yields estimated by a crop model for 12 regions in Australia’s wheat-sheep zone from 1991–2010. We then compiled data for 24 typical indicators used in vulnerability analyses, spanning the five capitals. We analyzed the ACI and used regression techniques to identify related capital indicators. Between regions, mean ACI was not significantly different but variance over time was. ACI was higher in dry years and lower in wet years suggesting that farm adaptive strategies are geared towards mitigating losses rather than capitalizing on opportunity. Only six of the 24 capital indicators were significantly related to adaptive capacity in a way predicted by theory. Another four indicators were significantly related to adaptive capacity but of the opposite sign, countering our theory-driven expectation. We conclude that the deductive, theory-based use of capitals to define adaptive capacity and vulnerability should be more circumspect. Assessments need to be more evidence-based, first testing the relevance and influence of capital metrics on adaptive capacity for the specific system of interest. This will more effectively direct policy and targeting of investment to mitigate agro-climatic vulnerability. PMID:25668192

  9. [Factors associated with physical activity among Chinese immigrant women].

    PubMed

    Cho, Sung-Hye; Lee, Hyeonkyeong

    2013-12-01

    This study was done to assess the level of physical activity among Chinese immigrant women and to determine the relationships of physical activity with individual characteristics and behavior-specific cognition. A cross-sectional descriptive study was conducted with 161 Chinese immigrant women living in Busan. A health promotion model of physical activity adapted from Pender's Health Promotion Model was used. Self-administered questionnaires were used to collect data during the period from September 25 to November 20, 2012. Using SPSS 18.0 program, descriptive statistics, t-test, analysis of variance, correlation analysis, and multiple regression analysis were done. The average level of physical activity of the Chinese immigrant women was 1,050.06 ± 686.47 MET-min/week and the minimum activity among types of physical activity was most dominant (59.6%). As a result of multiple regression analysis, it was confirmed that self-efficacy and acculturation were statistically significant variables in the model (p<.001), with an explanatory power of 23.7%. The results indicate that the development and application of intervention strategies to increase acculturation and self-efficacy for immigrant women will aid in increasing the physical activity in Chinese immigrant women.

  10. Evaluation of genotype x environment interactions in cotton using the method proposed by Eberhart and Russell and reaction norm models.

    PubMed

    Alves, R S; Teodoro, P E; Farias, F C; Farias, F J C; Carvalho, L P; Rodrigues, J I S; Bhering, L L; Resende, M D V

    2017-08-17

    Cotton produces one of the most important textile fibers of the world and has great relevance in the world economy. It is an economically important crop in Brazil, which is the world's fifth largest producer. However, studies evaluating the genotype x environment (G x E) interactions in cotton are scarce in this country. Therefore, the goal of this study was to evaluate the G x E interactions in two important traits in cotton (fiber yield and fiber length) using the method proposed by Eberhart and Russell (simple linear regression) and reaction norm models (random regression). Eight trials with sixteen upland cotton genotypes, conducted in a randomized block design, were used. It was possible to identify a genotype with wide adaptability and stability for both traits. Reaction norm models have excellent theoretical and practical properties and led to more informative and accurate results than the method proposed by Eberhart and Russell and should, therefore, be preferred. Curves of genotypic values as a function of the environmental gradient, which predict the behavior of the genotypes along the environmental gradient, were generated. These curves make possible the recommendation to untested environmental levels.
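
    A minimal sketch of the Eberhart and Russell procedure is given below: each genotype's environment means are regressed on the environmental index (environment mean minus grand mean), so that the slope measures adaptability and the deviation mean square measures stability. The yields are simulated, not the cotton trial data, and the reaction norm (random regression) models are not reproduced here.

    ```python
    # Sketch of the Eberhart and Russell stability analysis: for each genotype,
    # regress its environment means on the environmental index. Slopes near 1
    # with small deviations indicate wide adaptability and stability.
    import numpy as np

    rng = np.random.default_rng(6)
    n_gen, n_env = 16, 8
    base = rng.normal(3000, 200, n_gen)[:, None]           # genotype means (kg/ha)
    env_effect = rng.normal(0, 400, n_env)[None, :]
    slopes_true = rng.normal(1.0, 0.2, n_gen)[:, None]
    yield_ = base + slopes_true * env_effect + rng.normal(0, 100, (n_gen, n_env))

    env_index = yield_.mean(axis=0) - yield_.mean()        # environmental index I_j
    for g in range(n_gen):
        b, a = np.polyfit(env_index, yield_[g], 1)         # slope b_i, intercept
        resid = yield_[g] - (a + b * env_index)
        s2d = resid @ resid / (n_env - 2)                  # deviation mean square
        print(f"genotype {g+1:2d}: b = {b:.2f}, S2d = {s2d:8.1f}")
    ```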

  11. Longitudinal social cognitive influences on physical activity and sedentary time in Hispanic breast cancer survivors

    PubMed Central

    Mama, Scherezade K.; Song, Jaejoon; Ortiz, Alexis; Tirado-Gomez, Maribel; Palacios, Cristina; Hughes, Daniel C.; Basen-Engquist, Karen

    2016-01-01

    Objective This study evaluated the effect of two home-based exercise interventions (one culturally-adapted and one standard) on changes in Social Cognitive Theory (SCT) variables, physical activity (PA) and sedentary time (ST), and to determine the association between changes in SCT variables and changes in PA and ST in Hispanic breast cancer survivors. Method Project VIVA! was a 16-week randomized controlled pilot study to test the effectiveness and feasibility of a culturally-adapted exercise intervention for Mexican American and Puerto Rican breast cancer survivors in Houston, Texas and San Juan, Puerto Rico, respectively. Women (N=89) completed questionnaires on SCT variables, PA and ST and were then randomized to a 16-week culturally-adapted exercise program, a non-culturally adapted standard exercise intervention or a wait-list control group. Multiple regression models were used to determine associations between changes in SCT variables and changes in PA and ST. Results Participants were in their late 50s (58.5 ± 9.2 years) and obese (31.0 ± 6.5 kg/m2). Women reported doing roughly 34.5 minutes/day of PA and spending over 11 hours/day in sedentary activities. Across groups, women reported significant increases in exercise self-efficacy and moderate-intensity, vigorous-intensity, and total physical activity from baseline to follow-up (p<.05). Increased social support from family was associated with increases in vigorous-intensity PA. Increases in social modeling were associated with increases in moderate-intensity and total PA and decreases in ST from baseline to follow-up (p<.05). Conclusions Hispanic cancer survivors benefit from PA interventions that focus on increasing social support from family and friends and social modeling. PMID:26602701

  12. Evaluating Machine Learning Regression Algorithms for Operational Retrieval of Biophysical Parameters: Opportunities for Sentinel

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Rivera, J. P.; Alonso, L.; Guanter, L.; Moreno, J.

    2012-04-01

    ESA’s upcoming satellites Sentinel-2 (S2) and Sentinel-3 (S3) aim to ensure continuity for Landsat 5/7, SPOT-5, SPOT-Vegetation and Envisat MERIS observations by providing superspectral images of high spatial and temporal resolution. S2 and S3 will deliver near real-time operational products with a high accuracy for land monitoring. This unprecedented data availability leads to an urgent need for developing robust and accurate retrieval methods. Machine learning regression algorithms could be powerful candidates for the estimation of biophysical parameters from satellite reflectance measurements because of their ability to perform adaptive, nonlinear data fitting. By using data from the ESA-led field campaign SPARC (Barrax, Spain), it was recently found [1] that Gaussian processes regression (GPR) outperformed competitive machine learning algorithms such as neural networks, support vector regression, and kernel ridge regression, both in terms of accuracy and computational speed. For various Sentinel configurations (S2-10m, S2-20m, S2-60m and S3-300m) three important biophysical parameters were estimated: leaf chlorophyll content (Chl), leaf area index (LAI) and fractional vegetation cover (FVC). GPR was the only method that reached the 10% precision required by end users in the estimation of Chl. In view of implementing the regressor into operational monitoring applications, here the portability of locally trained GPR models to other images was evaluated. The associated confidence maps proved to be a good indicator for evaluating the robustness of the trained models. Consistent retrievals were obtained across the different images, particularly over agricultural sites. To make the method suitable for operational use, however, the poorer confidences over bare soil areas suggest that the training dataset should be expanded with inputs from various land cover types.
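
    A minimal sketch of the retrieval idea is shown below: a Gaussian process regressor is trained on reflectance-parameter pairs and returns both an estimate and a per-sample standard deviation, which is what makes the confidence maps possible. The kernel choice, band set, and simulated relationship are placeholders for the SPARC training data.

    ```python
    # Sketch: Gaussian process regression for a biophysical parameter (e.g. LAI)
    # from reflectance, returning a prediction plus its standard deviation, which
    # can be mapped as a per-pixel confidence layer. Data are simulated.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

    rng = np.random.default_rng(7)
    n_samples, n_bands = 120, 10                            # e.g. an S2-like band set
    X = rng.uniform(0.0, 0.6, (n_samples, n_bands))         # surface reflectance
    lai = 6.0 * X[:, 7] / (X[:, 7] + X[:, 3] + 1e-6)        # toy NIR/red relation
    lai += rng.normal(0, 0.2, n_samples)

    kernel = ConstantKernel() * RBF(length_scale=np.ones(n_bands)) + WhiteKernel()
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, lai)

    X_new = rng.uniform(0.0, 0.6, (5, n_bands))             # "new image" pixels
    mean, std = gpr.predict(X_new, return_std=True)
    for m, s in zip(mean, std):
        print(f"LAI = {m:.2f} +/- {s:.2f}")
    ```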

  13. Estimating future burned areas under changing climate in the EU-Mediterranean countries.

    PubMed

    Amatulli, Giuseppe; Camia, Andrea; San-Miguel-Ayanz, Jesús

    2013-04-15

    The impacts of climate change on forest fires have received increased attention in recent years at both continental and local scales. It is widely recognized that weather plays a key role in extreme fire situations. It is therefore of great interest to analyze projected changes in fire danger under climate change scenarios and to assess the consequent impacts of forest fires. In this study we estimated burned areas in the European Mediterranean (EU-Med) countries under past and future climate conditions. Historical (1985-2004) monthly burned areas in EU-Med countries were modeled by using the Canadian Fire Weather Index (CFWI). Monthly averages of the CFWI sub-indices were used as explanatory variables to estimate the monthly burned areas in each of the five most affected countries in Europe using three different modeling approaches (Multiple Linear Regression - MLR, Random Forest - RF, Multivariate Adaptive Regression Splines - MARS). MARS outperformed the other methods. Regression equations and significant coefficients of determination were obtained, although there were noticeable differences from country to country. Climatic conditions at the end of the 21st Century were simulated using results from the runs of the regional climate model HIRHAM in the European project PRUDENCE, considering two IPCC SRES scenarios (A2-B2). The MARS models were applied to both scenarios resulting in projected burned areas in each country and in the EU-Med region. Results showed that significant increases, 66% and 140% of the total burned area, can be expected in the EU-Med region under the A2 and B2 scenarios, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Drought Patterns Forecasting using an Auto-Regressive Logistic Model

    NASA Astrophysics Data System (ADS)

    del Jesus, M.; Sheffield, J.; Méndez Incera, F. J.; Losada, I. J.; Espejo, A.

    2014-12-01

    Drought is characterized by a water deficit that may manifest across a large range of spatial and temporal scales. Drought can have serious socio-economic consequences, at times of catastrophic dimensions. A quantifiable definition of drought is elusive because, depending on its impacts, consequences, and generation mechanism, different water-deficit periods may be identified as a drought under some definitions but not others. Droughts are linked to the water cycle and, although a climate change signal may not have emerged yet, they are also intimately linked to climate. In this work we develop an auto-regressive logistic model for drought prediction at different temporal scales that makes use of a spatially explicit framework. Our model allows covariates, continuous or categorical, to be included to improve the performance of the auto-regressive component. Our approach makes use of dimensionality reduction (principal component analysis) and classification techniques (K-Means and maximum dissimilarity) to simplify the representation of complex climatic patterns, such as sea surface temperature (SST) and sea level pressure (SLP), while including information on their spatial structure, i.e. considering their spatial patterns. This procedure allows us to include in the analysis a multivariate representation of complex climatic phenomena, such as the El Niño-Southern Oscillation. We also explore the impact of other climate-related variables such as sun spots. The model quantifies the uncertainty of the forecasts and can be easily adapted to make predictions under future climatic scenarios. The framework presented here may be extended to other applications such as flash flood analysis or risk assessment of natural hazards.
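
    A minimal sketch of the auto-regressive logistic idea is shown below: the drought indicator at month t is regressed on its own lag plus leading principal components of a gridded climate field, standing in for the SST/SLP patterns. All data are simulated, and the K-means/maximum dissimilarity classification step is omitted.

    ```python
    # Sketch of an auto-regressive logistic drought model: the drought indicator
    # at month t is regressed on its lag plus leading principal components of a
    # climate field (stand-in for SST/SLP patterns). Simulated data only.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(8)
    n_months, n_grid = 600, 200
    sst = rng.normal(size=(n_months, n_grid))             # gridded climate field
    pcs = PCA(n_components=3).fit_transform(sst)          # dimensionality reduction

    drought = np.zeros(n_months, dtype=int)               # simulate persistence
    for t in range(1, n_months):
        logit = -1.5 + 2.0 * drought[t - 1] + 0.5 * pcs[t, 0]
        drought[t] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = np.column_stack([drought[:-1], pcs[1:]])          # lagged state + covariates
    model = LogisticRegression().fit(X, drought[1:])
    print("coefficients (lag, PC1..PC3):", model.coef_.round(2))
    ```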

  15. Home Environment and Self-Efficacy Beliefs among Native American, African American, and Latino Adolescents.

    PubMed

    Bradley, Robert H

    2018-05-07

    Context helps determine what individuals experience in the settings they inhabit. Context also helps determine the likelihood that those experiences will promote adaptive development. Theory suggests likely interplay between various aspects of home context and development of ideas about self that influence patterns of development for children. This study addressed relations between two aspects of home life (companionship and investment, modeling and encouragement) and three types of self-efficacy beliefs (enlisting social resources, independent learning, self-regulatory behavior) considered important for long-term adaptive functioning. The study focused on three groups of minority adolescents (Native American, African American, Latino). Relations were examined using regression models that also included four aspects of household risk that often hinder the development of self-efficacy. Although findings varied somewhat across the three groups, significant relations emerged between the two domains of home life examined and self-efficacy beliefs in all three groups, even controlling for overall household risk. Companionship and investment appeared particularly relevant for African American adolescents, while modeling and encouragement appeared particularly relevant for Native American adolescents. Both were relevant for Latino adolescents. © 2018 Family Process Institute.

  16. Prognoses of diameter and height of trees of eucalyptus using artificial intelligence.

    PubMed

    Vieira, Giovanni Correia; de Mendonça, Adriano Ribeiro; da Silva, Gilson Fernandes; Zanetti, Sidney Sára; da Silva, Mayra Marques; Dos Santos, Alexandre Rosa

    2018-04-01

    Models of individual trees are composed of sub-models that generally estimate competition, mortality, and growth in height and diameter of each tree. They are usually adopted when we want more detailed information to estimate forest multiproduct. In these models, estimates of growth in diameter at 1.30m above the ground (DBH) and total height (H) are obtained by regression analysis. Recently, artificial intelligence techniques (AIT) have been used with satisfactory performance in forest measurement. Therefore, the objective of this study was to evaluate the performance of two AIT, artificial neural networks and adaptive neuro-fuzzy inference system, to estimate the growth in DBH and H of eucalyptus trees. We used data of continuous forest inventories of eucalyptus, with annual measurements of DBH, H, and the dominant height of trees of 398 plots, plus two qualitative variables: genetic material and site index. It was observed that the two AIT showed accuracy in growth estimation of DBH and H. Therefore, the two techniques discussed can be used for the prognosis of DBH and H in even-aged eucalyptus stands. The techniques used could also be adapted to other areas and forest species. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. The microbial temperature sensitivity to warming is controlled by thermal adaptation and is independent of C-quality across a pan-continental survey

    NASA Astrophysics Data System (ADS)

    Berglund, Eva; Rousk, Johannes

    2017-04-01

    Climate models predict that warming will result in an increased loss of soil organic matter (SOM). However, field experiments suggest that although warming results in an immediate increase in SOM turnover, the effect diminishes over time. Although the use and subsequent turnover of SOM is dominated by the soil microbial community, the physiology underpinning warming responses is not considered in current climate models. It has been suggested that a reduction in the perceived quality of SOM to the microbial community, and changes in microbial thermal adaptation, could be important feedbacks to soil warming. Thus, studies distinguishing between temperature relationships and how substrate quality influences microbial decomposition are a priority. We examined microbial communities and temperature sensitivities along a natural climate gradient including 56 independent samples from across Europe. The gradient included mean annual temperatures (MAT) from ca. -4 to 18 °C, along with wide spans of environmental factors known to influence microbial communities, such as pH (4.0 to 8.8), nutrients (C/N from 7 to 50), SOM (from 4 to 94%), and plant communities. The extensive ranges of environmental conditions resulted in wide ranges of substrate quality, indexed as microbial respiration per unit SOM, from 5-150 μg CO2 g-1 SOM h-1. We hypothesised microbial communities to (1) be adapted to the temperature of their climate, leading to warm-adapted bacterial communities that were more temperature sensitive (higher Q10s) at higher MAT; and (2) have temperature sensitivities affected by the quality of SOM, with higher Q10s for lower quality SOM. To determine the microbial use of SOM and its dependence on temperature, we characterized the microbial temperature dependences of bacterial growth (leucine incorporation), fungal growth (acetate-in-ergosterol incorporation) and soil respiration in all 56 sites. Temperature dependences were determined using brief (ca. 1-2 h at 25 °C) laboratory incubation experiments including temperatures from 0 to 35 °C. Temperature relationships were modelled using the Ratkowsky model, and cardinal points including the minimum temperature (Tmin) for growth and respiration, along with temperature sensitivity (Q10) values, were used as indices to compare sites. Microbial communities were cold-adapted in cold sites and warm-adapted in warm sites, as shown by Tmin values ranging from ca. -20 °C to 0 °C. For every 1 °C rise in MAT, Tmin increased by 0.22 °C and 0.28 °C for bacteria and fungi, respectively. Soil respiration was less dependent on MAT, increasing by 0.16 °C per 1 °C. Temperature-dependence relationships were stronger when regressed against summer temperatures, and weaker when regressed against winter temperatures. Hence, microbial communities adjusted their temperature dependence for growth more than for respiration, and higher temperatures had more impact than low temperatures did. The correlation between Tmin and MAT resulted in Q10s increasing with MAT, showing that microorganisms from cold regions were less temperature sensitive than those from warmer regions. For every 1 °C increase in MAT, Q10 increased by 0.04 and 0.03 units for bacterial and fungal growth, respectively, and by 0.08 units for soil respiration. In contrast to previous studies, we found no relationship between temperature sensitivity and substrate quality. We demonstrate that the strongest driver of variation in microbial temperature sensitivities (Q10s) is the microbial adaptation to its thermal environment.
    Surprisingly, the quality of SOM had no influence on the temperature sensitivity. This calls for a revision of our understanding of how microbial decomposers feed back to climate warming. Specifically, the thermal adaptation of microbial communities needs to be incorporated into climate models to capture responses to warming, while the quality of SOM can be ignored.
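
    The sketch below illustrates how the square-root (Ratkowsky) model can be fitted to growth-rate versus temperature data and how Tmin and a Q10 index follow from the fitted parameters. The rates are simulated and the incubation design is simplified; it is not the study's dataset.

    ```python
    # Sketch: fit the square-root (Ratkowsky) model sqrt(rate) = b * (T - Tmin) to
    # sub-optimal temperature data and derive Tmin and a Q10 index for comparing
    # communities across sites. Rates are simulated, not field data.
    import numpy as np
    from scipy.optimize import curve_fit

    def ratkowsky(T, b, Tmin):
        return b * (T - Tmin)                # models sqrt(growth rate)

    rng = np.random.default_rng(9)
    T = np.arange(0, 30, 2.5)                # incubation temperatures (degC)
    true_b, true_Tmin = 0.05, -8.0
    rate = (true_b * (T - true_Tmin)) ** 2 + rng.normal(0, 0.005, T.size)

    (b, Tmin), _ = curve_fit(ratkowsky, T, np.sqrt(np.clip(rate, 0, None)),
                             p0=[0.1, -5.0])

    def q10(T_ref):                          # Q10 implied by the square-root model
        return ((T_ref + 10 - Tmin) / (T_ref - Tmin)) ** 2

    print(f"Tmin = {Tmin:.1f} degC, Q10(0-10 degC) = {q10(0):.2f}")
    ```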

  18. Reciprocal feedback regulation of PI3K and androgen receptor signaling in PTEN-deficient prostate cancer

    PubMed Central

    Carver, Brett S; Chapinski, Caren; Wongvipat, John; Hieronymus, Haley; Chen, Yu; Chandarlapaty, Sarat; Arora, Vivek K; Le, Carl; Koutcher, Jason; Scher, Howard; Scardino, Peter T; Rosen, Neal; Sawyers, Charles L

    2011-01-01

    Summary Prostate cancer is characterized by its dependence on androgen receptor and frequent activation of PI3K signaling. We find that AR transcriptional output is decreased in human and murine tumors with PTEN deletion and that PI3K pathway inhibition activates AR signaling by relieving feedback inhibition of HER kinases. Similarly, AR inhibition activates AKT signaling by reducing levels of the AKT phosphatase PHLPP. Thus, these two oncogenic pathways cross-regulate each other by reciprocal feedback. Inhibition of one activates the other, thereby maintaining tumor cell survival. However, combined pharmacologic inhibition of PI3K and AR signaling caused near complete prostate cancer regressions in a Pten-deficient murine prostate cancer model and in human prostate cancer xenografts, indicating that both pathways coordinately support survival. Significance The two most frequently activated signaling pathways in prostate cancer are driven by AR and PI3K. Inhibitors of the PI3K pathway are in early clinical trials and AR inhibitors confer clinical responses in most patients. However, these inhibitors rarely induce tumor regression in preclinical models. Here we show that these pathways regulate each other by reciprocal negative feedback, such that inhibition of one activates the other. Therefore, tumor cells can adapt and survive when either single pathway is inhibited pharmacologically. Our demonstration of profound tumor regressions with combined pathway inhibition in preclinical prostate tumor models provides rationale for combination therapy in patients. PMID:21575859

  19. [A competency model of rural general practitioners: theory construction and empirical study].

    PubMed

    Yang, Xiu-Mu; Qi, Yu-Long; Shne, Zheng-Fu; Han, Bu-Xin; Meng, Bei

    2015-04-01

    To construct and empirically test a competency model of rural general practitioners. Through literature study, job analysis, interviews, and expert team discussion, a questionnaire on rural general practitioners' competency was constructed. A total of 1458 rural general practitioners in 6 central provinces were surveyed with the questionnaire. The common factors were constructed using the principal component method of exploratory factor analysis and confirmatory factor analysis. The influence of the competency characteristics on work performance was analyzed using regression analysis. The Cronbach's alpha coefficient of the questionnaire was 0.974. The model consisted of 9 dimensions and 59 items. The 9 competency dimensions included basic public health service ability, basic clinical skills, system analysis capability, information management capability, communication and cooperation ability, occupational moral ability, non-medical professional knowledge, personal traits and psychological adaptability. The cumulative explained variance was 76.855%. The model fit indices were χ²/df = 1.88, GFI = 0.94, NFI = 0.96, NNFI = 0.98, PNFI = 0.91, RMSEA = 0.068, CFI = 0.97, IFI = 0.97, RFI = 0.96, indicating good model fit. Regression analysis showed that the competency characteristics had a significant effect on job performance. The rural general practitioner competency model provides a reference for rural doctor training, order-oriented (targeted) training of medical students for rural areas, and competency-based performance management of rural general practitioners.

  20. Heat Waves and Climate Change: Applying the Health Belief Model to Identify Predictors of Risk Perception and Adaptive Behaviours in Adelaide, Australia

    PubMed Central

    Akompab, Derick A.; Bi, Peng; Williams, Susan; Grant, Janet; Walker, Iain A.; Augoustinos, Martha

    2013-01-01

    Heat waves are considered a health risk and they are likely to increase in frequency, intensity and duration as a consequence of climate change. The effects of heat waves on human health could be reduced if individuals recognise the risks and adopt healthy behaviours during a heat wave. The purpose of this study was to determine the predictors of risk perception using a heat wave scenario and identify the constructs of the health belief model that could predict adaptive behaviours during a heat wave. A cross-sectional study was conducted during the summer of 2012 among a sample of persons aged between 30 to 69 years in Adelaide. Participants’ perceptions were assessed using the health belief model as a conceptual frame. Their knowledge about heat waves and adaptive behaviours during heat waves was also assessed. Logistic regression analyses were performed to determine the predictors of risk perception to a heat wave scenario and adaptive behaviours during a heat wave. Of the 267 participants, about half (50.9%) had a high risk perception to heat waves while 82.8% had good adaptive behaviours during a heat wave. Multivariate models found that age was a significant predictor of risk perception. In addition, participants who were married (OR = 0.21; 95% CI, 0.07–0.62), who earned a gross annual household income of ≥$60,000 (OR = 0.41; 95% CI, 0.17–0.94) and without a fan (OR = 0.29; 95% CI, 0.11–0.79) were less likely to have a high risk perception to heat waves. Those who were living with others (OR = 2.87; 95% CI, 1.19–6.90) were more likely to have a high risk perception to heat waves. On the other hand, participants with a high perceived benefit (OR = 2.14; 95% CI, 1.00–4.58), a high “cues to action” (OR = 3.71; 95% CI, 1.63–8.43), who had additional training or education after high school (OR = 2.65; 95% CI, 1.25–5.58) and who earned a gross annual household income of ≥$60,000 (OR = 2.66; 95% CI, 1.07–6.56) were more likely to have good adaptive behaviours during a heat wave. The health belief model could be useful to guide the design and implementation of interventions to promote adaptive behaviours during heat waves. PMID:23759952

  1. Modeling new production in upwelling centers - A case study of modeling new production from remotely sensed temperature and color

    NASA Technical Reports Server (NTRS)

    Dugdale, Richard C.; Wilkerson, Frances P.; Morel, Andre; Bricaud, Annick

    1989-01-01

    A method has been developed for estimating new production in upwelling systems from remotely sensed surface temperatures. A shift-up model predicts the rate of adaptation of nitrate uptake. The time base for the production cycle is obtained from a knowledge of surface heating rates and differences in temperature between the point of upwelling and each pixel. Nitrate concentrations are obtained from temperature-nitrate regression equations. The model was developed for the northwest Africa upwelling region, where shipboard measurements of new production were available. It can be employed in two modes, the first using only surface temperatures, and the second in which CZCS color data are incorporated. The major advance offered by this method is the capability to estimate new production on spatial and time scales inaccessible with shipboard approaches.

  2. Doing better to do good: the impact of strategic adaptation on nursing home performance.

    PubMed

    Zinn, Jacqueline S; Mor, Vincent; Feng, Zhanlian; Intrator, Orna

    2007-06-01

    To test the hypothesis that a greater commitment to strategic adaptation, as exhibited by more extensive implementation of a subacute/rehabilitation care strategy in nursing homes, will be associated with superior performance. Online Survey, Certification, and Reporting (OSCAR) data from 1997 to 2004, and the area resource file (ARF). The extent of strategic adaptation was measured by an aggregate weighted implementation score. Nursing home performance was measured by occupancy rate and two measures of payer mix. We conducted multivariate regression analyses using a cross-sectional time series generalized estimating equation (GEE) model to examine the effect of nursing home strategic implementation on each of the three performance measures, controlling for market and organizational characteristics that could influence nursing home performance. DATA COLLECTION/ABSTRACTION METHODS: OSCAR data was merged with relevant ARF data. The results of our analysis provide strong support for the hypothesis. From a theoretical perspective, our findings confirm that organizations that adjust strategies and structures to better fit environmental demands achieve superior performance. From a managerial perspective, these results support the importance of proactive strategic leadership in the nursing home industry.

  3. Mosaic evolution and adaptive brain component alteration under domestication seen on the background of evolutionary theory.

    PubMed

    Rehkämper, Gerd; Frahm, Heiko D; Cnotka, Julia

    2008-01-01

    Brain sizes and brain component sizes of five domesticated pigeon breeds including homing (racing) pigeons are compared with rock doves (Columba livia) based on an allometric approach to test the influence of domestication on brain and brain component size. Net brain volume, the volumes of cerebellum and telencephalon as a whole are significantly smaller in almost all domestic pigeons. Inside the telencephalon, mesopallium, nidopallium (+ entopallium + arcopallium) and septum are smaller as well. The hippocampus is significantly larger, particularly in homing pigeons. This finding is in contrast to the predictions of the 'regression hypothesis' of brain alteration under domestication. Among the domestic pigeons homing pigeons have significantly larger olfactory bulbs. These data are interpreted as representing a functional adaptation to homing that is based on spatial cognition and sensory integration. We argue that domestication as seen in domestic pigeons is not principally different from evolution in the wild, but represents a heuristic model to understand the evolutionary process in terms of adaptation and optimization. Copyright 2007 S. Karger AG, Basel.

  4. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method

    NASA Astrophysics Data System (ADS)

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted to compare with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods-support vector regression (SVR) and Kriging (KRG)-were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.
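
    The sketch below illustrates only the surrogate idea used to make the optimization tractable: a fast regression surrogate is trained on a handful of "expensive simulator" runs and then optimized in its place. A single scalarized objective, an SVR surrogate, and SciPy's differential evolution stand in for the paper's MGGP surrogate and the multi-objective AMALGAM/NSGA-II search; the simulator function is a made-up placeholder.

    ```python
    # Sketch of surrogate-assisted optimization: train an SVR surrogate on a few
    # expensive simulator runs, then optimize the surrogate instead of the
    # simulator. Stand-in for the paper's MGGP surrogate and AMALGAM/NSGA-II.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from scipy.optimize import differential_evolution

    def expensive_simulator(x):
        # placeholder for a remediation simulation: returns cost plus weighted time
        return (x[0] - 3.0) ** 2 + 2.0 * (x[1] - 1.5) ** 2 + 0.1 * x[0] * x[1]

    rng = np.random.default_rng(10)
    X_design = rng.uniform([0, 0], [5, 5], size=(60, 2))     # design of experiments
    y_design = np.array([expensive_simulator(x) for x in X_design])

    surrogate = make_pipeline(StandardScaler(), SVR(C=100.0)).fit(X_design, y_design)

    res = differential_evolution(lambda x: surrogate.predict(x.reshape(1, -1))[0],
                                 bounds=[(0, 5), (0, 5)], seed=0)
    print("surrogate optimum:", res.x, "| true value there:", expensive_simulator(res.x))
    ```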

  5. Moisture Damage Modeling in Lime and Chemically Modified Asphalt at Nanolevel Using Ensemble Computational Intelligence

    PubMed Central

    2018-01-01

    This paper measures the adhesion/cohesion force among asphalt molecules at the nanoscale using Atomic Force Microscopy (AFM) and models the moisture damage by applying state-of-the-art Computational Intelligence (CI) techniques (e.g., artificial neural network (ANN), support vector regression (SVR), and an Adaptive Neuro Fuzzy Inference System (ANFIS)). Various combinations of lime and chemicals as well as dry and wet environments are used to produce different asphalt samples. The parameters that were varied to generate the different asphalt samples and measure the corresponding adhesion/cohesion forces are the percentage of antistripping agents (e.g., Lime and Unichem), AFM tip K values, and AFM tip types. The CI methods are trained to model the adhesion/cohesion forces given the variation in values of the above parameters. To achieve enhanced performance, statistical methods such as the average, weighted average, and regression of the outputs generated by the CI techniques are used. The experimental results show that, of the three individual CI methods, ANN can model moisture damage to lime- and chemically modified asphalt better than the other two CI techniques for both wet and dry conditions. Moreover, the ensemble of CI along with statistical measurement provides better accuracy than any of the individual CI techniques. PMID:29849551
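
    The ensemble idea, combining individual regressors by simple or weighted averaging, can be sketched as below. The data are synthetic stand-ins for the AFM measurements, and ANFIS is omitted because it has no scikit-learn implementation; the weighting scheme is one plausible choice, not the paper's exact procedure.

        # Average and weighted-average ensembles of two CI regressors (ANN, SVR).
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR
        from sklearn.metrics import mean_absolute_error

        X, y = make_regression(n_samples=300, n_features=4, noise=5.0, random_state=1)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

        ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1).fit(X_tr, y_tr)
        svr = SVR(C=100.0).fit(X_tr, y_tr)
        pred_ann, pred_svr = ann.predict(X_te), svr.predict(X_te)

        simple_avg = (pred_ann + pred_svr) / 2.0
        # weight each model by its inverse error (in practice use a separate validation split)
        w = np.array([1 / mean_absolute_error(y_te, pred_ann), 1 / mean_absolute_error(y_te, pred_svr)])
        w = w / w.sum()
        weighted_avg = w[0] * pred_ann + w[1] * pred_svr

        for name, pred in [("ANN", pred_ann), ("SVR", pred_svr),
                           ("average", simple_avg), ("weighted", weighted_avg)]:
            print(name, round(mean_absolute_error(y_te, pred), 2))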

  6. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    PubMed

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method based on the auto-regressive integrated moving-average (ARIMA) model, using business data obtained at the Radiology Department. The model was built from the number of radiological examinations over the past 9 years, and we predicted the number of radiological examinations in the final year and compared the forecast values with the actual values. We established that the prediction method was simple and cost-effective, as it used free software, and that a simple model could be built after pre-processing the data to remove trend components. The difference between predicted and actual values was 10%; however, understanding the chronological change was more important than the individual time-series values. Furthermore, our method is highly versatile and adaptable to general time-series data. Therefore, different healthcare organizations can use our method for the analysis and forecasting of their business data.
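
    The workflow described above (fit an ARIMA model, hold out the final year, compare forecasts with actual counts) can be sketched with statsmodels as follows; the monthly series is simulated, not hospital data.

        # ARIMA fit on a simulated monthly examination-count series, last 12 months held out.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(42)
        months = pd.date_range("2003-01-01", periods=120, freq="MS")   # 10 years of months
        trend = np.linspace(800, 1100, 120)
        counts = trend + 50 * np.sin(2 * np.pi * np.arange(120) / 12) + rng.normal(0, 30, 120)
        series = pd.Series(counts, index=months)

        train, test = series[:-12], series[-12:]            # final year held out
        model = ARIMA(train, order=(1, 1, 1)).fit()         # differencing removes the trend
        forecast = model.forecast(steps=12)

        mape = (abs(forecast - test) / test).mean() * 100
        print(f"Mean absolute percentage error over the final year: {mape:.1f}%")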

  7. Using data mining to predict success in a weight loss trial.

    PubMed

    Batterham, M; Tapsell, L; Charlton, K; O'Shea, J; Thorne, R

    2017-08-01

    Traditional methods for predicting weight loss success use regression approaches, which make the assumption that the relationships between the independent and dependent (or logit of the dependent) variable are linear. The aim of the present study was to investigate the relationship between common demographic and early weight loss variables to predict weight loss success at 12 months without making this assumption. Data mining methods (decision trees, generalised additive models and multivariate adaptive regression splines), in addition to logistic regression, were employed to predict: (i) weight loss success (defined as ≥5%) at the end of a 12-month dietary intervention using demographic variables [body mass index (BMI), sex and age]; (ii) percentage weight loss at 1 month; and (iii) the difference between actual and predicted weight loss using an energy balance model. The methods were compared by assessing model parsimony and the area under the curve (AUC). The decision tree provided the most clinically useful model and had good accuracy (AUC 0.720; 95% confidence interval 0.600-0.840). Percentage weight loss at 1 month (≥0.75%) was the strongest predictor of successful weight loss. Within those individuals losing ≥0.75%, individuals with a BMI ≥27 kg m^-2 were more likely to be successful than those with a BMI between 25 and 27 kg m^-2. Data mining methods can provide a more accurate way of assessing relationships when conventional assumptions are not met. In the present study, a decision tree provided the most parsimonious model. Given that early weight loss cannot be predicted before randomisation, incorporating this information into a post-randomisation trial design may give better weight loss results. © 2017 The British Dietetic Association Ltd.
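
    An illustrative sketch of the decision-tree approach (synthetic data, hypothetical column names, not the trial data) predicting 12-month success from BMI, sex, age and 1-month percentage weight loss, evaluated by AUC as in the abstract:

        # Decision tree classifier with AUC evaluation on simulated trial-like data.
        import numpy as np
        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        n = 400
        df = pd.DataFrame({
            "bmi": rng.normal(31, 4, n),
            "age": rng.integers(25, 65, n),
            "sex": rng.integers(0, 2, n),
            "pct_loss_1mo": rng.normal(1.0, 0.8, n),
        })
        # success (>=5% loss at 12 months) made to depend mainly on early weight loss
        p = 1 / (1 + np.exp(-(1.8 * (df["pct_loss_1mo"] - 0.75) + 0.05 * (df["bmi"] - 27))))
        df["success"] = rng.random(n) < p

        X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="success"), df["success"], random_state=7)
        tree = DecisionTreeClassifier(max_depth=3, random_state=7).fit(X_tr, y_tr)
        print("AUC =", round(roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]), 3))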

  8. Regional modeling of large wildfires under current and potential future climates in Colorado and Wyoming, USA

    USGS Publications Warehouse

    West, Amanda; Kumar, Sunil; Jarnevich, Catherine S.

    2016-01-01

    Regional analysis of large wildfire potential given climate change scenarios is crucial to understanding the areas most at risk in the future, yet wildfire models are not often developed and tested at this spatial scale. We fit three historical climate suitability models for large wildfires (i.e. ≥ 400 ha) in Colorado and Wyoming using topography and decadal climate averages corresponding to wildfire occurrence at the same temporal scale. The historical models classified points of known large wildfire occurrence with high accuracies. Using a novel approach in wildfire modeling, we applied the historical models to independent climate and wildfire datasets, and the resulting sensitivities were 0.75, 0.81, and 0.83 for Maxent, Generalized Linear, and Multivariate Adaptive Regression Splines models, respectively. We projected the historical models into future climate space using data from 15 global circulation models and two representative concentration pathway scenarios. Maps from these geospatial analyses can be used to evaluate the changing spatial distribution of climate suitability of large wildfires in these states. April relative humidity was the most important covariate in all models, providing insight into the climate space of large wildfires in this region. These methods incorporate monthly and seasonal climate averages at a spatial resolution relevant to land management (i.e. 1 km^2) and provide a tool that can be modified for other regions of North America, or adapted for other parts of the world.
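
    One of the three model types named above, a Generalized Linear Model with a logistic link, can be sketched as below; the covariates and fire labels are illustrative only, not the Colorado/Wyoming data, and sensitivity is computed as the true-positive rate on known fire points.

        # Logistic GLM relating large-wildfire presence to two climate covariates.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 500
        april_rh = rng.uniform(10, 60, n)          # April relative humidity (%)
        summer_tmax = rng.uniform(20, 35, n)       # mean summer maximum temperature (deg C)
        X = sm.add_constant(np.column_stack([april_rh, summer_tmax]))
        logit_p = 2.0 - 0.12 * april_rh + 0.08 * summer_tmax
        fire = rng.random(n) < 1 / (1 + np.exp(-logit_p))

        glm = sm.GLM(fire, X, family=sm.families.Binomial()).fit()
        pred = glm.predict(X) > 0.5
        sensitivity = (pred & fire).sum() / fire.sum()   # true-positive rate for fire points
        print(glm.params, f"sensitivity = {sensitivity:.2f}")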

  9. Does conflict help or hurt cognitive control? Initial evidence for an inverted U-shape relationship between perceived task difficulty and conflict adaptation.

    PubMed

    van Steenbergen, Henk; Band, Guido P H; Hommel, Bernhard

    2015-01-01

    Sequential modulation of congruency effects in conflict tasks indicates that cognitive control quickly adapts to changing task demands. We investigated in four experiments how this behavioral congruency-sequence effect relates to different levels of perceived task difficulty in a flanker and a Stroop task. In addition, online measures of pupil diameter were used as a physiological index of effort mobilization. Consistent with motivational accounts predicting that increased levels of perceived task difficulty will increase effort mobilization only up to a certain limit, reliable dynamic conflict-driven adjustment in cognitive control was only observed when task difficulty was relatively low. Instead, tasks tentatively associated with high levels of difficulty showed no or reversed conflict adaptation. Although the effects could not be linked consistently to effects in self-reported task difficulty in all experiments, regression analyses showed associations between perceived task difficulty and conflict adaptation in some of the experiments, which provides some initial evidence for an inverted U-shape relationship between perceived difficulty and adaptations in cognitive control. Furthermore, high levels of task difficulty were associated with a conflict-driven reduction in pupil dilation, suggesting that pupil dilation can be used as a physiological marker of mental overload. Our findings underscore the importance of developing models that are grounded in motivational accounts of cognitive control.

  10. Adaptation and Evaluation of the Clinical Impairment Assessment to Assess Disordered Eating Related Distress in an Adolescent Female Ethnic Fijian Population

    PubMed Central

    Becker, Anne E; Thomas, Jennifer J; Bainivualiku, Asenaca; Richards, Lauren; Navara, Kesaia; Roberts, Andrea L; Gilman, Stephen E; Striegel-Moore, Ruth H

    2010-01-01

    Objective: Measurement of disease-related impairment and distress is central to diagnostic, therapeutic, and health policy considerations for eating disorders across diverse populations. This study evaluates psychometric properties of a translated and adapted version of the Clinical Impairment Assessment (CIA) in an ethnic Fijian population. Method: The adapted CIA was administered to ethnic Fijian adolescent schoolgirls (N = 215). We calculated Cronbach's α to assess the internal consistency, examined the association between indicators of eating disorder symptom severity and the CIA to assess construct and criterion validity, and compared the strength of the relation between the CIA and measures of disordered eating versus measures of generalized distress. Results: The Fijian version of the CIA is feasible to administer as an investigator-based interview. It has excellent internal consistency (α = 0.93). Both construct and criterion validity were supported by the data, and regression models indicated that the CIA predicts eating disorder severity, even when controlling for generalized distress and psychopathology. Discussion: The adapted CIA has excellent psychometric properties in this Fijian study population. Findings suggest that the CIA can be successfully adapted for use in a non-Western study population and that at least some associated distress and impairment transcends cultural differences. © 2009 by Wiley Periodicals, Inc. Int J Eat Disord, 2010 PMID:19308992

  11. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
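
    The curve-fitting step described above can be sketched in a few lines: regress observed query selectivities on an attribute value using a least-squares polynomial and a smoothing spline. The feedback observations below are synthetic, not actual query-feedback data.

        # Least-squares polynomial and smoothing-spline fits to selectivity feedback.
        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(0)
        attr_value = np.sort(rng.uniform(0, 100, 50))
        observed_selectivity = 0.002 * attr_value + 0.1 * np.sin(attr_value / 15) + rng.normal(0, 0.02, 50)

        # least-squares polynomial fit (degree 2) to the feedback points
        coeffs = np.polyfit(attr_value, observed_selectivity, deg=2)
        poly_estimate = np.polyval(coeffs, attr_value)

        # smoothing spline; s trades off fidelity against smoothness
        spline = UnivariateSpline(attr_value, observed_selectivity, s=0.05)
        spline_estimate = spline(attr_value)

        print("max |poly - spline| =", np.abs(poly_estimate - spline_estimate).max())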

  12. Does conflict help or hurt cognitive control? Initial evidence for an inverted U-shape relationship between perceived task difficulty and conflict adaptation

    PubMed Central

    van Steenbergen, Henk; Band, Guido P. H.; Hommel, Bernhard

    2015-01-01

    Sequential modulation of congruency effects in conflict tasks indicates that cognitive control quickly adapts to changing task demands. We investigated in four experiments how this behavioral congruency-sequence effect relates to different levels of perceived task difficulty in a flanker and a Stroop task. In addition, online measures of pupil diameter were used as a physiological index of effort mobilization. Consistent with motivational accounts predicting that increased levels of perceived task difficulty will increase effort mobilization only up to a certain limit, reliable dynamic conflict-driven adjustment in cognitive control was only observed when task difficulty was relatively low. Instead, tasks tentatively associated with high levels of difficulty showed no or reversed conflict adaptation. Although the effects could not be linked consistently to effects in self-reported task difficulty in all experiments, regression analyses showed associations between perceived task difficulty and conflict adaptation in some of the experiments, which provides some initial evidence for an inverted U-shape relationship between perceived difficulty and adaptations in cognitive control. Furthermore, high levels of task difficulty were associated with a conflict-driven reduction in pupil dilation, suggesting that pupil dilation can be used as a physiological marker of mental overload. Our findings underscore the importance of developing models that are grounded in motivational accounts of cognitive control. PMID:26217287

  13. Understanding the Acculturation Experience of Chinese Adolescent Students: Sociocultural Adaptation Strategies and a Positive Bicultural and Bilingual Identity

    ERIC Educational Resources Information Center

    Tong, Virginia M.

    2014-01-01

    The acculturation of Chinese immigrant high school students was examined as it relates to students' level of interaction with teachers and peers and participation in American school activities. Findings from a regression analysis revealed five variables (sociocultural adaptation strategies) that facilitate students' adjustment process:…

  14. Rationalizing Regression in Racial Equity: James Traub Reports on Proposition 209 in California. Article Review.

    ERIC Educational Resources Information Center

    Dentler, Robert A.

    1999-01-01

    James Traub, in his report for the "New York Times Magazine," suggests that some "adaptive" techniques, such as cascading applicants to other colleges and particular administrative approaches, will provide the benefits of affirmative action without its drawbacks. This is a misleading message. Adaptive techniques will not offset…

  15. Predictors of Career Adaptability Skill among Higher Education Students in Nigeria

    ERIC Educational Resources Information Center

    Ebenehi, Amos Shaibu; Rashid, Abdullah Mat; Bakar, Ab Rahim

    2016-01-01

    This paper examined predictors of career adaptability skill among higher education students in Nigeria. A sample of 603 higher education students randomly selected from six colleges of education in Nigeria participated in this study. A set of self-reported questionnaires was used for data collection, and multiple linear regression analysis was used…

  16. Intelligent control and adaptive systems; Proceedings of the Meeting, Philadelphia, PA, Nov. 7, 8, 1989

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor)

    1990-01-01

    Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.

  17. Neuronal Regression of Internal Leg Vibroreceptor Organs in a Cave-Dwelling Insect (Orthoptera: Rhaphidophoridae: Dolichopoda araneiformis).

    PubMed

    Strauß, Johannes; Stritih, Nataša

    2017-01-01

    Animals' adaptations to cave habitats generally include elaboration of extraoptic senses, and in insects the receptor structures located on the legs are supposed to become more prominent in response to constant darkness. The receptors for detecting substrate vibrations are often highly sensitive scolopidial sensilla localized within the legs or the body. For troglobitic insects the evolutionary changes in vibroreceptor organs have not been studied. Since rock is an extremely unfavorable medium for vibration transmission, selection on vibration receptors may be weakened in caves, and these sensory organs may undergo regressive evolution. We investigated the anatomy of the most elaborate internal vibration detection system in orthopteroid insects, the scolopidial subgenual organ complex in the cave cricket Dolichopoda araneiformis (Orthoptera: Ensifera: Rhaphidophoridae). This is a suitable model species which shows high levels of adaptation to cave life in terms of both phenotypic and life cycle characteristics. We compared our data with data on the anatomy and physiology of the subgenual organ complex from the related troglophilic species Troglophilus neglectus. In D. araneiformis, the subgenual organ complex contains three scolopidial organs: the subgenual organ, the intermediate organ, and the accessory organ. The presence of individual organs and their innervation pattern are identical to those found in T. neglectus, while the subgenual organ and the accessory organ of D. araneiformis contain about 50% fewer scolopidial sensilla than in T. neglectus. This suggests neuronal regression of these organs in D. araneiformis, which may reflect a relaxed selection pressure for vibration detection in caves. At the same time, a high level of overall neuroanatomical conservation of the intermediate organ in this species suggests persistence of the selection pressure maintaining this particular organ. While regressive evolution of chordotonal organs has been documented for insect auditory organs, this study shows for the first time that internal vibroreceptors can also be affected. © 2017 S. Karger AG, Basel.

  18. A gap-filling model for eddy covariance latent heat flux: Estimating evapotranspiration of a subtropical seasonal evergreen broad-leaved forest as an example

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ying; Chu, Chia-Ren; Li, Ming-Hsu

    2012-10-01

    In this paper we present a semi-parametric multivariate gap-filling model for tower-based measurement of latent heat flux (LE). Two statistical techniques, principal component analysis (PCA) and a nonlinear interpolation approach, were integrated into this LE gap-filling model. The PCA was first used to resolve the multicollinearity relationships among various environmental variables, including radiation, soil moisture deficit, leaf area index, wind speed, etc. Two nonlinear interpolation methods, multiple regressions (MRS) and the K-nearest neighbors (KNN), were examined with randomly selected flux gaps for both clear-sky and nighttime/cloudy data for incorporation into this LE gap-filling model. Experimental results indicated that the KNN interpolation approach is able to provide consistent LE estimations, while MRS produces overestimations during nighttime/cloudy conditions. Rather than using empirical regression parameters, the KNN approach resolves the nonlinear relationship between the gap-filled LE flux and the principal components with adaptive K values under different atmospheric states. The developed LE gap-filling model (PCA with KNN) works with an RMSE of 2.4 W m^-2 (~0.09 mm day^-1) at a weekly time scale when 40% artificial flux gaps are added to the original dataset. Annual evapotranspiration at this study site was estimated at 736 mm (1803 MJ) and 728 mm (1785 MJ) for the years 2008 and 2009, respectively.
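
    The gap-filling idea, compressing correlated environmental drivers with PCA and then interpolating missing flux values with a K-nearest-neighbour regressor trained on the non-gap records, can be sketched as follows; the drivers and flux series are synthetic, not the tower data.

        # PCA + KNN gap filling of a latent heat flux (LE) series with 40% artificial gaps.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(1)
        n = 1000
        radiation = rng.uniform(0, 800, n)
        soil_moist = rng.uniform(0.1, 0.4, n)
        wind = rng.uniform(0.5, 6, n)
        lai = rng.uniform(1, 6, n)
        drivers = np.column_stack([radiation, soil_moist, wind, lai])
        le = 0.4 * radiation + 200 * soil_moist + 10 * wind + rng.normal(0, 20, n)

        gap = rng.random(n) < 0.4                      # 40% artificial gaps
        pcs = PCA(n_components=3).fit_transform((drivers - drivers.mean(0)) / drivers.std(0))

        knn = KNeighborsRegressor(n_neighbors=10).fit(pcs[~gap], le[~gap])
        le_filled = le.copy()
        le_filled[gap] = knn.predict(pcs[gap])
        rmse = np.sqrt(np.mean((le_filled[gap] - le[gap]) ** 2))
        print(f"gap-filling RMSE = {rmse:.1f} W m^-2")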

  19. Estimation of Mangrove Forest Aboveground Biomass Using Multispectral Bands, Vegetation Indices and Biophysical Variables Derived from Optical Satellite Imageries: Rapideye, Planetscope and SENTINEL-2

    NASA Astrophysics Data System (ADS)

    Balidoy Baloloy, Alvin; Conferido Blanco, Ariel; Gumbao Candido, Christian; Labadisos Argamosa, Reginal Jay; Lovern Caboboy Dumalag, John Bart; Carandang Dimapilis, Lady Lee; Camero Paringit, Enrico

    2018-04-01

    Aboveground biomass (AGB) estimation is essential in determining the environmental and economic values of mangrove forests. Biomass prediction models can be developed through the integration of remote sensing, field data and statistical models. This study aims to assess and compare the biomass predictor potential of multispectral bands, vegetation indices and biophysical variables that can be derived from three optical satellite systems: Sentinel-2 with 10 m, 20 m and 60 m resolution; RapidEye with 5 m resolution; and PlanetScope with 3 m ground resolution. Field data for biomass were collected from a Rhizophoraceae-dominated mangrove forest in Masinloc, Zambales, Philippines, where 30 test plots (1.2 ha) and 5 validation plots (0.2 ha) were established. Prior to the generation of indices, images from the three satellite systems were pre-processed using atmospheric correction tools in SNAP (Sentinel-2), ENVI (RapidEye) and python (PlanetScope). The major predictor bands tested are Blue, Green and Red, which are present in the three systems, and the Red-edge band from Sentinel-2 and RapidEye. The tested vegetation index predictors are the Normalized Difference Vegetation Index (NDVI), Soil-adjusted Vegetation Index (SAVI), Green-NDVI (GNDVI), Simple Ratio (SR), and Red-edge Simple Ratio (SRre). The study generated prediction models through conventional linear regression and multivariate regression. Higher coefficient of determination (r2) values were obtained using multispectral band predictors for Sentinel-2 (r2 = 0.89) and PlanetScope (r2 = 0.80), and vegetation indices for RapidEye (r2 = 0.92). Multivariate Adaptive Regression Spline (MARS) models performed better than the linear regression models, with r2 ranging from 0.62 to 0.92. Based on the r2 and root-mean-square errors (RMSEs), the best biomass prediction model per satellite was chosen and maps were generated. The accuracy of the predicted biomass maps was high for both Sentinel-2 (r2 = 0.92) and RapidEye data (r2 = 0.91).
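
    The index-based regression step can be illustrated as below: compute NDVI and SAVI from red and near-infrared reflectance and regress plot biomass on them. Reflectances and biomass values are simulated, not the Masinloc field data, and the simple linear model stands in for the MARS models used in the study.

        # Vegetation indices as predictors in a plot-level biomass regression.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(5)
        n_plots = 35
        red = rng.uniform(0.02, 0.10, n_plots)
        nir = rng.uniform(0.25, 0.45, n_plots)

        ndvi = (nir - red) / (nir + red)
        savi = 1.5 * (nir - red) / (nir + red + 0.5)          # L = 0.5 soil-adjustment factor
        biomass = 250 * ndvi + rng.normal(0, 10, n_plots)      # t/ha, synthetic relationship

        X = np.column_stack([ndvi, savi])
        model = LinearRegression().fit(X, biomass)
        print("r^2 =", round(model.score(X, biomass), 3))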

  20. Fish habitat regression under water scarcity scenarios in the Douro River basin

    NASA Astrophysics Data System (ADS)

    Segurado, Pedro; Jauch, Eduardo; Neves, Ramiro; Ferreira, Teresa

    2015-04-01

    Climate change will predictably alter hydrological patterns and processes at the catchment scale, with impacts on habitat conditions for fish. The main goals of this study are to identify the stream reaches that will undergo more pronounced flow reduction under different climate change scenarios and to assess which fish species will be more affected by the consequent regression of suitable habitats. The interplay between changes in flow and temperature and the presence of transversal artificial obstacles (dams and weirs) is analysed. The results will contribute to river management and impact mitigation actions under climate change. This study was carried out in the Tâmega catchment of the Douro basin. A set of 29 hydrological, climatic, and hydrogeomorphological variables was modelled using a water modelling system (MOHID), based on meteorological data recorded monthly between 2008 and 2014. The same variables were modelled considering future climate change scenarios. The resulting variables were used in empirical habitat models of a set of key species (brown trout Salmo trutta fario, barbel Barbus bocagei, and nase Pseudochondrostoma duriense) using boosted regression trees. The stream segments between tributaries were used as spatial sampling units. Models were developed for the whole Douro basin using 401 fish sampling sites, although the modelled probabilities of species occurrence for each stream segment were predicted only for the Tâmega catchment. These probabilities of occurrence were used to classify stream segments into suitable and unsuitable habitat for each fish species, considering the future climate change scenario. The stream reaches that were predicted to undergo longer flow interruptions were identified and crossed with the resulting predictive maps of habitat suitability to compute the total area of habitat loss per species. Among the target species, the brown trout was predicted to be the most sensitive to habitat regression due to the interplay of flow reduction, temperature increase and transversal barriers. This species is therefore a good indicator of climate change impacts in rivers, and we recommend using it as a target of monitoring programs implemented in the context of climate change adaptation strategies.

  1. Dynamic Filtering Improves Attentional State Prediction with fNIRS

    NASA Technical Reports Server (NTRS)

    Harrivel, Angela R.; Weissman, Daniel H.; Noll, Douglas C.; Huppert, Theodore; Peltier, Scott J.

    2016-01-01

    Brain activity can predict a person's level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise - thereby increasing such state prediction accuracy - remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% +/- 6% versus 72% +/- 15%).
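
    The contrast between a static regression filter and a non-stationary (adaptive) filter can be illustrated with a simple recursive least squares update; the signals below are simulated with a drifting coupling coefficient, not actual fNIRS [Hb] recordings, and the single-regressor RLS stands in for whichever adaptive model the study used.

        # Static regression versus recursive least squares (RLS) removal of a drifting nuisance signal.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 2000
        systemic = np.sin(np.linspace(0, 40, n))                 # physiological nuisance regressor
        gain = np.linspace(0.5, 2.0, n)                          # non-stationary coupling
        neural = rng.normal(0, 0.2, n)                           # signal of interest
        measured = gain * systemic + neural

        # static regression: one global coefficient for the nuisance regressor
        beta = np.dot(systemic, measured) / np.dot(systemic, systemic)
        static_residual = measured - beta * systemic

        # adaptive filter: scalar RLS with forgetting factor lam
        lam, w, P = 0.995, 0.0, 1.0
        adaptive_residual = np.empty(n)
        for t in range(n):
            x = systemic[t]
            k = P * x / (lam + x * P * x)
            w = w + k * (measured[t] - w * x)
            P = (P - k * x * P) / lam
            adaptive_residual[t] = measured[t] - w * x

        print("static RMSE  :", np.sqrt(np.mean((static_residual - neural) ** 2)))
        print("adaptive RMSE:", np.sqrt(np.mean((adaptive_residual - neural) ** 2)))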

  2. Modeling rainfall-runoff process using soft computing techniques

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa

    2013-02-01

    The rainfall-runoff process was modeled for a small catchment in Turkey, using 4 years (1987-1991) of measured rainfall and runoff values. The models used in the study were Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP), which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. The goodness of fit of the models was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and a traditional Multiple Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling the rainfall-runoff process and is a viable alternative to the other applied artificial intelligence and MLR time-series methods.
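
    The fit statistics named above (RMSE, MAE, the Nash-Sutcliffe coefficient of efficiency CE, and R2) are easy to write out explicitly; the observed and simulated runoff values below are placeholders, not the Turkish catchment data.

        # Goodness-of-fit statistics commonly used for rainfall-runoff models.
        import numpy as np

        def fit_statistics(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            rmse = np.sqrt(np.mean((sim - obs) ** 2))
            mae = np.mean(np.abs(sim - obs))
            ce = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe
            r = np.corrcoef(obs, sim)[0, 1]
            return {"RMSE": rmse, "MAE": mae, "CE": ce, "R2": r ** 2}

        observed_runoff = [12.0, 30.5, 18.2, 44.1, 25.3]   # l/s, illustrative values
        simulated_runoff = [14.1, 28.0, 20.5, 40.2, 27.0]
        print(fit_statistics(observed_runoff, simulated_runoff))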

  3. Dissociating Conflict Adaptation from Feature Integration: A Multiple Regression Approach

    ERIC Educational Resources Information Center

    Notebaert, Wim; Verguts, Tom

    2007-01-01

    Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on…

  4. Aggressive Behavior in Response to Violence Exposure: Is It Adaptive for Middle-School Children?

    ERIC Educational Resources Information Center

    Salzinger, Suzanne; Rosario, Margaret; Feldman, Richard S.; Ng-Mak, Daisy S.

    2008-01-01

    The role of aggression in adaptation to family and community violence was examined in a sample of 667 inner-city schoolchildren studied annually over three years in middle school. Regression analyses indicated that the association between Year 1 exposure to family and community violence and Year 2 aggression was mediated by aggression occurring…

  5. Study on the social adaptation of Chinese children with down syndrome.

    PubMed

    Wang, Yan-Xia; Mao, Shan-Shan; Xie, Chun-Hong; Qin, Yu-Feng; Zhu, Zhi-Wei; Zhan, Jian-Ying; Shao, Jie; Li, Rong; Zhao, Zheng-Yan

    2007-06-30

    To evaluate social adjustment and related factors among Chinese children with Down syndrome (DS). A structured interview and Peabody Picture Vocabulary Test (PPVT) were conducted with a group of 36 DS children with a mean age of 106.28 months, a group of 30 normally-developing children matched for mental age (MA) and a group of 40 normally-developing children matched for chronological age (CA). Mean scores of social adjustment were compared among the three groups, and partial correlations and stepwise multiple regression models were used to further explore related factors. There was no difference between the DS group and the MA group in terms of communication skills. However, the DS group scored much better than the MA group in self-dependence, locomotion, work skills, socialization and self-management. Children in the CA group achieved significantly higher scores in all aspects of social adjustment than the DS children. Partial correlations indicated a relationship between social adjustment and the PPVT raw score and also between social adjustment and age (significant r values ranging between 0.24 and 0.92). A stepwise linear regression analysis showed that family structure was the main predictor of social adjustment. Newborn history was also a predictor of work skills, communication, socialization and self-management. Parental education was found to account for 8% of self-dependence. Maternal education explained 6% of the variation in locomotion. Although limited by the small sample size, these results indicate that Chinese DS children have better social adjustment skills when compared to their mental-age-matched normally-developing peers, but that the Chinese DS children showed aspects of adaptive development that differed from Western DS children. Analyses of factors related to social adjustment suggest that effective early intervention may improve social adaptability.

  6. The slow light and dark oscillation of the clinical electro-oculogram.

    PubMed

    Constable, Paul A; Ngo, David

    2018-05-20

    The standing potential of the eye exhibits a slow damped oscillation under light and dark conditions that continues for at least 80 minutes. However, the relationship between the slow dark and light oscillations has not previously been studied. The aim of this study was to explore, through regression analysis, a model of these oscillations in order to establish whether they may have the same underlying cellular generators. Healthy participants undertook recordings of the standing potential using the electro-oculogram for 100 minutes. To explore the light oscillation, participants (n = 8) were dilated and performed an extended electro-oculogram protocol consisting of 15 minutes of dark adaptation and 85 minutes of white light adaptation at 100 cd/m^2. For the dark oscillation, participants (n = 11) undertook the electro-oculogram for 100 minutes in complete darkness. Both sessions began with pre-adaptation to 30 cd/m^2 of white light for five minutes. Non-parametric statistics were used to evaluate all data. Ratios of the dark and light oscillations showed a significantly greater dampening of the dark oscillation compared to the light oscillation (p < 0.001). Regression analysis using a five-factor damped sine function revealed significant differences in the parameters governing the dampening (p = 0.005) and period (p = 0.009) of the functions (R^2 > 0.874). There were no significant differences in the dark trough amplitude. The results support a different underlying physiological mechanism for the light and dark oscillations of the clinical electro-oculogram. Future work will need to establish how the dark oscillation and dark trough of the clinical electro-oculogram arise. © 2018 Optometry Australia.
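
    A five-parameter damped sine function of the kind mentioned above can be fitted by nonlinear least squares with scipy; the amplitude, decay, period, phase and offset parameters below mirror that structure, but the trace is simulated rather than an actual electro-oculogram recording.

        # Nonlinear least-squares fit of a damped sine to a simulated slow oscillation.
        import numpy as np
        from scipy.optimize import curve_fit

        def damped_sine(t, amp, tau, period, phase, offset):
            return amp * np.exp(-t / tau) * np.sin(2 * np.pi * t / period + phase) + offset

        rng = np.random.default_rng(0)
        t = np.linspace(0, 100, 400)                                  # minutes
        signal = damped_sine(t, 1.0, 60.0, 25.0, 0.3, 5.0) + rng.normal(0, 0.05, t.size)

        p0 = [1.0, 50.0, 20.0, 0.0, 5.0]                              # rough starting values
        params, _ = curve_fit(damped_sine, t, signal, p0=p0)
        print(dict(zip(["amp", "tau", "period", "phase", "offset"], np.round(params, 2))))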

  7. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    PubMed

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature T_s and chamber pressure P_c. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front T_i and the sublimation rate ṁ_sub. T_s was identified as the most influential parameter on both T_i and ṁ_sub, followed by P_c and the dried product mass transfer resistance α_Rp for T_i and ṁ_sub, respectively. The GSA findings were experimentally validated for ṁ_sub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for the evaluation of the impact of different process variables on the model outcome, leading to essential process knowledge, without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
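
    A regression-based GSA can be sketched with standardized regression coefficients: sample the inputs, run the model, and regress the standardized output on the standardized inputs. The linear "model" below is only a placeholder for the mechanistic primary-drying model, and the input ranges are invented.

        # Regression-based global sensitivity analysis via standardized regression coefficients (SRC).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000
        Ts = rng.uniform(-30, 10, n)           # shelf temperature, deg C
        Pc = rng.uniform(5, 30, n)             # chamber pressure, Pa
        Rp = rng.uniform(1e4, 1e5, n)          # dried-product resistance (arbitrary units)

        sublimation_rate = 0.04 * Ts - 0.02 * Pc - 2e-6 * Rp + rng.normal(0, 0.05, n)

        X = np.column_stack([Ts, Pc, Rp])
        Xs = (X - X.mean(0)) / X.std(0)                       # standardize inputs
        ys = (sublimation_rate - sublimation_rate.mean()) / sublimation_rate.std()
        src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)         # standardized regression coefficients
        for name, s in zip(["T_s", "P_c", "R_p"], src):
            print(f"{name}: SRC = {s:+.2f}")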

  8. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference for Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a covariate taking the values zero and one to express the binormal distributions in a single formula, and it also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method carried out by Bayesian analysis Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
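
    The core of the model structure, a probit link with a zero-one covariate separating noise from signal cases, can be sketched without the Bayesian machinery as a maximum-likelihood probit fit; the ratings below are simulated, and this is only an approximation of the paper's MCMC/BUGS analysis.

        # Probit regression with a 0/1 signal covariate as a non-Bayesian stand-in
        # for the binormal ROC model described above.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(11)
        n = 400
        is_signal = rng.integers(0, 2, n)                       # 0 = noise case, 1 = signal case
        latent = rng.normal(0, 1, n) + 1.2 * is_signal          # binormal latent decision variable
        rated_positive = (latent > 0.6).astype(int)             # dichotomised rating

        X = sm.add_constant(is_signal)
        probit = sm.Probit(rated_positive, X).fit(disp=False)
        print(probit.params)   # intercept and signal effect on the probit scale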

  9. Statistical Approaches for Spatiotemporal Prediction of Low Flows

    NASA Astrophysics Data System (ADS)

    Fangmann, A.; Haberlandt, U.

    2017-12-01

    An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion in model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables, averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them is multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. In the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three is subject to a spatiotemporal prediction of an index value, method four to estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows high performance in the near-future period, but since it relies on a stationary distribution, its application for prediction of far-future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments, resulting in unrealistic future low flow values. All in all, the results promote an inclusion of simple statistical methods in climate change impact assessment.
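
    The regression backbone shared by these approaches can be sketched as an ordinary least squares fit of an annual low flow index on a meteorological drought index plus a catchment descriptor; the variable names and data below are illustrative, not the German gauge records or the actual panel specification.

        # Low flow index regressed on a drought index and a catchment descriptor.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n = 200                                        # station-years
        spi = rng.normal(0, 1, n)                      # standardised precipitation index
        catchment_area = rng.uniform(50, 1500, n)      # km^2, a physiographic descriptor
        low_flow = 0.8 + 0.3 * spi + 0.001 * catchment_area + rng.normal(0, 0.1, n)

        X = sm.add_constant(np.column_stack([spi, catchment_area]))
        ols = sm.OLS(low_flow, X).fit()
        print(ols.params, ols.rsquared)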

  10. A Driving Behaviour Model of Electrical Wheelchair Users

    PubMed Central

    Hamam, Y.; Djouani, K.; Daachi, B.; Steyn, N.

    2016-01-01

    In spite of the presence of powered wheelchairs, some users still experience steering challenges and manoeuvring difficulties that limit their capacity to navigate effectively. For such users, steering support and assistive systems may be very necessary. For the assistance to be appreciated, the assistive control needs to be adaptable to the user's steering behaviour. This paper contributes to wheelchair steering improvement by modelling the steering behaviour of powered wheelchair users, for integration into the control system. More precisely, the modelling is based on the improved Directed Potential Field (DPF) method for trajectory planning. The method has facilitated the formulation of a simple behaviour model that is also linear in parameters. To obtain the steering data for parameter identification, seven individuals participated in driving the wheelchair in different virtual worlds on the augmented platform. The obtained data facilitated the estimation of user parameters, using the ordinary least squares method, with satisfactory regression analysis results. PMID:27148362

  11. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for the analyzed traits separately via adaptively penalizing the likelihood statistical criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of three orders for body weight (BWE) and body length (BL) and of two orders for body depth (BD). Using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time-points exceeded 0.5, whether for single or pairwise time-points. Moreover, correlations between early and late growth time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.
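
    The Legendre-polynomial basis that underlies such random regression models can be illustrated by fitting a third-order Legendre curve to body weights recorded at several ages; the ages and weights below are invented, and a full MRRM with random genetic effects would of course require dedicated mixed-model software.

        # Third-order Legendre polynomial fit of a growth trajectory.
        import numpy as np
        from numpy.polynomial import legendre

        age_days = np.array([60, 80, 100, 120, 140, 160])
        body_weight = np.array([55.0, 92.0, 140.0, 190.0, 235.0, 268.0])   # grams

        # rescale age to [-1, 1], the interval on which Legendre polynomials are defined
        t = 2 * (age_days - age_days.min()) / (age_days.max() - age_days.min()) - 1
        coeffs = legendre.legfit(t, body_weight, deg=3)
        fitted = legendre.legval(t, coeffs)
        print(np.round(fitted, 1))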

  12. Comparison of different modelling approaches of drive train temperature for the purposes of wind turbine failure detection

    NASA Astrophysics Data System (ADS)

    Tautz-Weinert, J.; Watson, S. J.

    2016-09-01

    Effective condition monitoring techniques for wind turbines are needed to improve maintenance processes and reduce operational costs. Normal behaviour modelling of temperatures with information from other sensors can help to detect wear processes in drive trains. In a case study, modelling of bearing and generator temperatures is investigated with operational data from the SCADA systems of more than 100 turbines. The focus is here on automated training and testing on a farm level to enable an on-line system, which will detect failures without human interpretation. Modelling based on linear combinations, artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines and Gaussian process regression is compared. The selection of suitable modelling inputs is discussed with cross-correlation analyses and a sensitivity study, which reveals that the investigated modelling techniques react in different ways to an increased number of inputs. The case study highlights advantages of modelling with linear combinations and artificial neural networks in a feedforward configuration.
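
    The normal-behaviour-modelling idea can be sketched as follows: train a simple regression of a drive train temperature on operational inputs using healthy data, then flag periods whose residuals drift upward. The values are simulated SCADA-like data, not turbine measurements, and the linear model stands in for the neural network, ANFIS, SVM and Gaussian process alternatives compared in the study.

        # Normal behaviour model with residual-based wear detection.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(4)
        n = 3000
        power = rng.uniform(0, 2000, n)                # kW
        ambient = rng.uniform(-5, 30, n)               # deg C
        temp = 30 + 0.01 * power + 0.8 * ambient + rng.normal(0, 1.0, n)
        temp[2500:] += np.linspace(0, 5, 500)          # simulated wear-related warming

        X = np.column_stack([power, ambient])
        model = LinearRegression().fit(X[:2000], temp[:2000])   # train on the healthy period
        residual = temp - model.predict(X)
        alarm = np.convolve(residual, np.ones(100) / 100, mode="same") > 2.0   # smoothed threshold
        print("first alarm index:", int(np.argmax(alarm)))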

  13. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950

  14. Prediction on carbon dioxide emissions based on fuzzy rules

    NASA Astrophysics Data System (ADS)

    Pauzi, Herrini; Abdullah, Lazim

    2014-06-01

    There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most of the conventional methods are not sufficiently able to provide good forecasting performance due to problems with non-linearity, uncertainty and complexity of the data. Artificial intelligence techniques are successfully used in modeling air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare the prediction performance. Data on five variables (energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity) are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.

  15. Prediction of adaptive self-regulatory responses to arthritis pain anxiety in exercising adults: does pain acceptance matter?

    PubMed

    Cary, Miranda Ashley; Gyurcsik, Nancy C; Brawley, Lawrence R

    2015-01-01

    Exercising for ≥ 150 min/week is a recommended strategy for self-managing arthritis. However, exercise nonadherence is a problem. Arthritis pain anxiety may interfere with regular exercise. According to the fear-avoidance model, individuals may confront their pain anxiety by using adaptive self-regulatory responses (eg, changing exercise type or duration). Furthermore, the anxiety-self-regulatory responses relationship may vary as a function of individuals' pain acceptance levels. To investigate pain acceptance as a moderator of the pain anxiety-adaptive self-regulatory responses relationship. The secondary objective was to examine whether groups of patients who differed in meeting exercise recommendations also differed in pain-related and self-regulatory responses. Adults (mean [± SD] age 49.75 ± 13.88 years) with medically diagnosed arthritis completed online measures of arthritis pain-related variables and self-regulatory responses at baseline, and exercise participation two weeks later. Individuals meeting (n=87) and not meeting (n=49) exercise recommendations were identified. Hierarchical multiple regression analysis revealed that pain acceptance moderated the anxiety-adaptive self-regulatory responses relationship. When pain anxiety was lower, greater pain acceptance was associated with less frequent use of adaptive responses. When anxiety was higher, adaptive responses were used regardless of pain acceptance level. MANOVA findings revealed that participants meeting the recommended exercise dose reported significantly lower pain and pain anxiety, and greater pain acceptance (P<0.05) than those not meeting the dose. Greater pain acceptance may help individuals to focus their efforts to adapt to their pain anxiety only when it is higher, leaving self-regulatory capacity to cope with additional challenges to exercise adherence (eg, busy schedule).

  16. Prediction of adaptive self-regulatory responses to arthritis pain anxiety in exercising adults: Does pain acceptance matter?

    PubMed Central

    Cary, Miranda A; Gyurcsik, Nancy C; Brawley, Lawrence R

    2015-01-01

    BACKGROUND: Exercising for ≥150 min/week is a recommended strategy for self-managing arthritis. However, exercise nonadherence is a problem. Arthritis pain anxiety may interfere with regular exercise. According to the fear-avoidance model, individuals may confront their pain anxiety by using adaptive self-regulatory responses (eg, changing exercise type or duration). Furthermore, the anxiety-self-regulatory responses relationship may vary as a function of individuals’ pain acceptance levels. OBJECTIVES: To investigate pain acceptance as a moderator of the pain anxiety-adaptive self-regulatory responses relationship. The secondary objective was to examine whether groups of patients who differed in meeting exercise recommendations also differed in pain-related and self-regulatory responses. METHODS: Adults (mean [± SD] age 49.75±13.88 years) with medically diagnosed arthritis completed online measures of arthritis pain-related variables and self-regulatory responses at baseline, and exercise participation two weeks later. Individuals meeting (n=87) and not meeting (n=49) exercise recommendations were identified. RESULTS: Hierarchical multiple regression analysis revealed that pain acceptance moderated the anxiety-adaptive self-regulatory responses relationship. When pain anxiety was lower, greater pain acceptance was associated with less frequent use of adaptive responses. When anxiety was higher, adaptive responses were used regardless of pain acceptance level. MANOVA findings revealed that participants meeting the recommended exercise dose reported significantly lower pain and pain anxiety, and greater pain acceptance (P<0.05) than those not meeting the dose. CONCLUSIONS: Greater pain acceptance may help individuals to focus their efforts to adapt to their pain anxiety only when it is higher, leaving self-regulatory capacity to cope with additional challenges to exercise adherence (eg, busy schedule). PMID:25621990

  17. Rapid detection of talcum powder in tea using FT-IR spectroscopy coupled with chemometrics

    PubMed Central

    Li, Xiaoli; Zhang, Yuying; He, Yong

    2016-01-01

    This paper investigated the feasibility of Fourier transform infrared transmission (FT-IR) spectroscopy for detecting talcum powder illegally added to tea, based on chemometric methods. Firstly, 210 samples of tea powder with 13 dose levels of talcum powder were prepared for FT-IR spectrum acquisition. In order to highlight the slight variations in the FT-IR spectra, smoothing, normalization and standard normal variate (SNV) transformation were employed to preprocess the raw spectra. Among them, SNV preprocessing had the best performance, with a high correlation of prediction (RP = 0.948) and a low root mean square error of prediction (RMSEP = 0.108) for the partial least squares (PLS) model. Then 18 characteristic wavenumbers were selected based on a hybrid of backward interval partial least squares (biPLS) regression, the competitive adaptive reweighted sampling (CARS) algorithm and the successive projections algorithm (SPA). These characteristic wavenumbers accounted for only 0.64% of the full set of wavenumbers. Following that, the 18 characteristic wavenumbers were used to build linear and nonlinear determination models by PLS regression and extreme learning machine (ELM), respectively. The optimal model, with RP = 0.963 and RMSEP = 0.137, was achieved by the ELM algorithm. These results demonstrated that FT-IR spectroscopy with chemometrics can be used successfully to detect talcum powder in tea. PMID:27468701
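
    The preprocessing and regression steps named above can be sketched with an SNV transform followed by a PLS model; the spectra and doses below are simulated, not the FT-IR data set, and the ELM and wavenumber-selection steps are omitted.

        # Standard normal variate (SNV) preprocessing and a PLS regression of dose on spectra.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(6)
        n_samples, n_wavenumbers = 210, 600
        dose = rng.uniform(0, 10, n_samples)                             # % talcum powder
        peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 300) / 20) ** 2)
        spectra = np.outer(dose, peak) + rng.normal(0, 0.05, (n_samples, n_wavenumbers))
        spectra *= rng.uniform(0.8, 1.2, (n_samples, 1))                 # multiplicative scatter

        def snv(X):
            """Standard normal variate: centre and scale each spectrum individually."""
            return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

        pls = PLSRegression(n_components=5).fit(snv(spectra), dose)
        pred = pls.predict(snv(spectra)).ravel()
        print("RMSE =", round(mean_squared_error(dose, pred) ** 0.5, 3))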

  18. Simultaneous Estimation of Electromechanical Modes and Forced Oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follum, Jim; Pierre, John W.; Martin, Russell

    Over the past several years, great strides have been made in the effort to monitor the small-signal stability of power systems. These efforts focus on estimating electromechanical modes, which are a property of the system that dictate how generators in different parts of the system exchange energy. Though the algorithms designed for this task are powerful and important for reliable operation of the power system, they are susceptible to severe bias when forced oscillations are present in the system. Forced oscillations are fundamentally different from electromechanical oscillations in that they are the result of a rogue input to the system, rather than a property of the system itself. To address the presence of forced oscillations, the frequently used AutoRegressive Moving Average (ARMA) model is adapted to include sinusoidal inputs, resulting in the AutoRegressive Moving Average plus Sinusoid (ARMA+S) model. From this model, a new Two-Stage Least Squares algorithm is derived to incorporate the forced oscillations, thereby enabling the simultaneous estimation of the electromechanical modes and the amplitude and phase of the forced oscillations. The method is validated using simulated power system data as well as data obtained from the western North American power system (wNAPS) and Eastern Interconnection (EI).
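
    A simplified two-stage sketch of the ARMA+S idea: estimate the amplitude and phase of a forced oscillation at an assumed-known frequency by linear least squares, then fit an ARMA model to the residual to recover the ambient dynamics. This is not the paper's estimator; the signal is simulated rather than synchrophasor data.

        # Two-stage estimation: sinusoid by least squares, then ARMA on the residual.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(9)
        fs, n = 30.0, 3000                      # samples/s, sample count
        t = np.arange(n) / fs
        f_forced = 0.86                         # Hz, assumed-known forced-oscillation frequency
        ambient = np.convolve(rng.normal(0, 1, n), [1, 0.6, 0.3], mode="same")
        signal = 0.5 * np.sin(2 * np.pi * f_forced * t + 0.7) + ambient

        # stage 1: sine/cosine regressors at the forced frequency
        D = np.column_stack([np.sin(2 * np.pi * f_forced * t), np.cos(2 * np.pi * f_forced * t)])
        coef, *_ = np.linalg.lstsq(D, signal, rcond=None)
        amplitude = np.hypot(coef[0], coef[1])
        residual = signal - D @ coef

        # stage 2: ARMA(2,1) on the residual for the ambient (modal) dynamics
        arma = ARIMA(residual, order=(2, 0, 1)).fit()
        print("estimated forced amplitude:", round(amplitude, 3))
        print("AR coefficients:", np.round(arma.arparams, 3))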

  19. Groundwater potential mapping using C5.0, random forest, and multivariate adaptive regression spline models in GIS.

    PubMed

    Golkarian, Ali; Naghibi, Seyed Amir; Kalantar, Bahareh; Pradhan, Biswajeet

    2018-02-17

    Ever-increasing demand for water resources for different purposes makes it essential to have a better understanding of and knowledge about water resources. Groundwater is one of the main water resources, especially in countries with an arid climate. Thus, this study seeks to provide groundwater potential maps (GPMs) employing new algorithms. Accordingly, this study aims to validate the performance of the C5.0, random forest (RF), and multivariate adaptive regression splines (MARS) algorithms for generating GPMs in the eastern part of the Mashhad Plain, Iran. For this purpose, a dataset was produced consisting of spring locations as the indicator and groundwater-conditioning factors (GCFs) as input. In this research, 13 GCFs were selected, including altitude, slope aspect, slope angle, plan curvature, profile curvature, topographic wetness index (TWI), slope length, distance from rivers and faults, river and fault density, land use, and lithology. The dataset was divided into training and validation classes with 70 and 30% of the springs, respectively. Then, the C5.0, RF, and MARS algorithms were applied using the R statistical software, and the final values were transformed into GPMs. Finally, two evaluation criteria, Kappa and the area under the receiver operating characteristics curve (AUC-ROC), were calculated. According to the findings of this research, MARS had the best performance, with an AUC-ROC of 84.2%, followed by the RF and C5.0 algorithms with AUC-ROC values of 79.7 and 77.3%, respectively. The results indicated that the AUC-ROC values for the employed models are more than 70%, which shows their acceptable performance. In conclusion, the produced methodology could be used in other geographical areas. GPMs could be used by water resource managers and related organizations to accelerate and facilitate water resource exploitation.
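
    The spring/non-spring classification workflow can be sketched as below with a random forest, a 70/30 split, and the two evaluation criteria named in the abstract; the conditioning factors and labels are synthetic stand-ins for the GIS-derived layers, and the RF stands in for the C5.0 and MARS alternatives.

        # Random forest groundwater-potential classifier evaluated with AUC-ROC and kappa.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score, cohen_kappa_score

        rng = np.random.default_rng(10)
        n = 600
        altitude = rng.uniform(900, 2500, n)
        slope = rng.uniform(0, 40, n)
        twi = rng.uniform(2, 15, n)
        dist_river = rng.uniform(0, 5000, n)
        X = np.column_stack([altitude, slope, twi, dist_river])
        p = 1 / (1 + np.exp(-(0.4 * twi - 0.05 * slope - 0.0005 * dist_river)))
        spring = (rng.random(n) < p).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, spring, test_size=0.3, random_state=10)
        rf = RandomForestClassifier(n_estimators=300, random_state=10).fit(X_tr, y_tr)
        print("AUC-ROC:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 3))
        print("kappa  :", round(cohen_kappa_score(y_te, rf.predict(X_te)), 3))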

  20. Empirical extensions of the lasso penalty to reduce the false discovery rate in high-dimensional Cox regression models.

    PubMed

    Ternès, Nils; Rotolo, Federico; Michiels, Stefan

    2016-07-10

    Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using maximum cross-validated log-likelihood (max-cvl). However, this method often has a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to the reduction of the FDR without a large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited rise in the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd.
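
    A hedged sketch of lasso-penalized Cox selection is shown below using the lifelines package (its penalizer and l1_ratio arguments, available in recent releases, give an L1 penalty when l1_ratio=1); the candidate biomarkers and survival times are simulated, not the breast-cancer cohort, and the paper's penalty-tuning extension is not reproduced here.

        # L1-penalized Cox regression with lifelines; coefficients shrunk to ~0 are dropped.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(12)
        n, p = 300, 20
        X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"gene_{j}" for j in range(p)])
        risk = 0.8 * X["gene_0"] - 0.6 * X["gene_1"]               # only two true prognostic markers
        time = rng.exponential(np.exp(-risk))
        event = rng.random(n) < 0.7                                 # roughly 30% censoring
        df = X.assign(time=time, event=event.astype(int))

        cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)              # l1_ratio=1 -> lasso-type penalty
        cph.fit(df, duration_col="time", event_col="event")
        selected = cph.params_[cph.params_.abs() > 1e-3].index.tolist()
        print("selected biomarkers:", selected)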

  1. Zn-metalloprotease sequences in extremophiles

    NASA Astrophysics Data System (ADS)

    Holden, T.; Dehipawala, S.; Golebiewska, U.; Cheung, E.; Tremberger, G., Jr.; Williams, E.; Schneider, P.; Gadura, N.; Lieberman, D.; Cheung, T.

    2010-09-01

    The Zn-metalloprotease family contains conserved amino acid structures such that the nucleotide fluctuation at the DNA level would exhibit correlated randomness as described by fractal dimension. A nucleotide sequence fractal dimension can be calculated from a numerical series consisting of the atomic numbers of each nucleotide. The structure's vibration modes can also be studied using a Gaussian Network Model. The vibration measure and fractal dimension values form a two-dimensional plot with a standard vector metric that can be used for comparison of structures. The preference for amino acid usage in extremophiles may suppress nucleotide fluctuations, which can be analyzed in terms of fractal dimension and Shannon entropy. A protein-level cold adaptation study of the thermolysin Zn-metalloprotease family using molecular dynamics simulation was reported recently, and our results show that the associated nucleotide fluctuation suppression is consistent with a regression pattern generated from the sequences' fractal dimension and entropy values (R-square = 0.98, N = 5). It was observed that cold adaptation selected for high entropy and low fractal dimension values. Extension to the Archaemetzincin M54 family in extremophiles reveals a similar regression pattern (R-square = 0.98, N = 6). It was observed that the metalloprotease sequences of extremely halophilic organisms possess high fractal dimension and low entropy values as compared with non-halophiles. The zinc atom is usually bonded to the histidine residue, which shows limited levels of vibration in the Gaussian Network Model. The variability of the fractal dimension and entropy for a given protein structure suggests that extremophiles would have evolved after mesophiles, consistent with the biased usage of non-prebiotic amino acids by extremophiles. It may be argued that extremophiles have the capacity to offer extinction protection during drastic changes in astrobiological environments.

  2. The influence of ethical values and food choice motivations on intentions to purchase sustainably sourced foods.

    PubMed

    Dowd, Kylie; Burke, Karena J

    2013-10-01

    This study examined a three-step adaptation of the Theory of Planned Behaviour (TPB) applied to the intention of consumers to purchase sustainably sourced food. The sample consisted of 137 participants, of which 109 were female, who were recruited through a farmers market and an organic produce outlet in an Australian capital city. Participants completed an online questionnaire containing the TPB scales of attitude, subjective norms, perceived behavioural control and intention; measures of positive moral attitude and ethical self identity; and food choice motives. Hierarchical multiple regression was used to examine the predictive utility of the TPB in isolation (step 1) and the TPB expanded to include the constructs of moral attitude and ethical self-identity (step 2). The results indicated the expansion of the TPB to include these constructs added significantly to the predictive model measuring intention to purchase sustainably sourced food. The third step in the adaptation utilised this expanded TPB model and added a measure of retail channel (where consumers reported buying fresh produce) and 9 food choice motives, in order to assess the predictive utility of the inclusion of choice motivations in this context. Of the 8 food choice motives examined, only health and ethical values significantly predicted intention to purchase sustainably sourced food. However, with the addition of food choice motives, ethical self-identity was no longer a significant predictor of intention to purchase sustainably sourced food. Overall the adapted TPB model explained 76% of the variance in intention to purchase sustainably sourced food. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Assessing the weighted multi-objective adaptive surrogate model optimization to derive large-scale reservoir operating rules with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao

    2017-01-01

    The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to address this is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensionality reduction, and (3) reducing computational cost and speeding up the search process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO is conducted in the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance, and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is shown to be more efficient and can provide a better Pareto frontier.

  4. Met expectations and the wellbeing of diaspora immigrants: a longitudinal study.

    PubMed

    Mähönen, Tuuli Anna; Leinonen, Elina; Jasinskaja-Lahti, Inga

    2013-01-01

    Previous research has pointed to the importance of expectations for the adaptation of immigrants. However, most studies have been methodologically retrospective with only limited possibilities to show the optimal relationship between migrants' expectations and actual acculturation experiences for their wellbeing and other aspects of psychological adaptation. Moreover, previous research has been conducted mostly among sojourners and students. This longitudinal study focused on the relationship between premigration expectations and postmigration experiences of diaspora immigrants from Russia to Finland (N = 153). We examined how the fulfillment of premigration expectations in social (i.e., family relations, friendships, and free time) and economic (i.e., occupational position, working conditions, and economic and career situation) domains affects immigrants' wellbeing (i.e., satisfaction with life and general mood) after migration. Three alternative models of expectation confirmation (i.e., disconfirmation model, ideal point model, and the importance of experiences only) derived from previous organizational psychological research were tested with polynomial regression and response surface analysis. In the economic domain, immigrants' expectations, experiences, and their interrelationship did not affect wellbeing in the postmigration stage. However, in the social domain, the more expectations were exceeded by actual experiences, the better were life satisfaction and the general mood of immigrants. The results underline the importance of social relationships and the context-dependent nature of immigrants' wellbeing. Interventions in the preacculturation stage should create positive but realistic expectations for diaspora immigrants and other groups of voluntary (re)migrants. Furthermore, policies concerning the postmigration stage should facilitate the fulfillment of these expectations and support the social adaptation of immigrants.
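
    The analysis above rests on polynomial regression of wellbeing on expectations and experiences (the basis of response surface analysis). The sketch below shows that regression step with statsmodels on simulated data; the variable names and the toy outcome are illustrative assumptions, not the study's diaspora sample.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 150
    df = pd.DataFrame({
        "expectations": rng.normal(size=n),
        "experiences": rng.normal(size=n),
    })
    # Toy outcome: wellbeing improves when experiences exceed expectations.
    df["wellbeing"] = (df["experiences"] - df["expectations"]) + rng.normal(scale=0.5, size=n)

    # Quadratic surface: main effects, squares, and the interaction term.
    model = smf.ols(
        "wellbeing ~ expectations + experiences"
        " + I(expectations**2) + expectations:experiences + I(experiences**2)",
        data=df,
    ).fit()
    print(model.params)   # the five surface coefficients examined in response surface analysis
    ```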

  5. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method.

    PubMed

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted to compare with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods-support vector regression (SVR) and Kriging (KRG)-were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability. Copyright © 2017 Elsevier B.V. All rights reserved.
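
    Two of the surrogate modelling methods compared above, support vector regression (SVR) and Kriging, can be sketched with scikit-learn as below. The `expensive_simulation` function is a hypothetical stand-in for the surfactant-enhanced aquifer remediation simulator; the design points and kernels are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_simulation(x):
        # Placeholder for a time-consuming remediation simulation run.
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    rng = np.random.default_rng(1)
    X_train = rng.uniform(-1, 1, size=(60, 2))      # sampled design points (simulator inputs)
    y_train = expensive_simulation(X_train)         # corresponding simulator outputs

    svr = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)
    krg = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    krg.fit(X_train, y_train)                       # Kriging = Gaussian-process regression

    X_new = rng.uniform(-1, 1, size=(5, 2))
    print("SVR surrogate     :", svr.predict(X_new))
    print("Kriging surrogate :", krg.predict(X_new))
    print("True (expensive)  :", expensive_simulation(X_new))
    ```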

  6. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    USGS Publications Warehouse

    Balshi, M. S.; McGuire, A.D.; Duffy, P.; Flannigan, M.; Walsh, J.; Melillo, J.

    2009-01-01

    Fire is a common disturbance in the North American boreal forest that influences ecosystem structure and function. The temporal and spatial dynamics of fire are likely to be altered as climate continues to change. In this study, we ask the question: how will area burned in boreal North America by wildfire respond to future changes in climate? To evaluate this question, we developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was substantially more predictable in the western portion of boreal North America than in eastern Canada. Burned area was also not very predictable in areas of substantial topographic relief and in areas along the transition between boreal forest and tundra. At the scale of Alaska and western Canada, the empirical fire models explain on the order of 82% of the variation in annual area burned for the period 1960-2002. July temperature was the most frequently occurring predictor across all models, but the fuel moisture codes for the months June through August (as a group) entered the models as the most important predictors of annual area burned. To predict changes in the temporal and spatial dynamics of fire under future climate, the empirical fire models used output from the Canadian Climate Center CGCM2 global climate model to predict annual area burned through the year 2100 across Alaska and western Canada. Relative to 1991-2000, the results suggest that average area burned per decade will double by 2041-2050 and will increase on the order of 3.5-5.5 times by the last decade of the 21st century. To improve the ability to better predict wildfire across Alaska and Canada, future research should focus on incorporating additional effects of long-term and successional vegetation changes on area burned to account more fully for interactions among fire, climate, and vegetation dynamics. © 2009 The Authors. Journal compilation © 2009 Blackwell Publishing Ltd.
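
    A MARS fit of the kind used above (annual area burned regressed on temperature and fuel moisture predictors) can be sketched as follows, assuming the third-party py-earth package, which provides an `Earth` estimator, is available. The predictors and response are simulated placeholders, not the Alaska/Canada fire dataset.

    ```python
    import numpy as np
    from pyearth import Earth   # assumption: the py-earth package is installed

    rng = np.random.default_rng(2)
    n = 300
    july_temp = rng.normal(15, 4, n)                    # illustrative July temperature (°C)
    fuel_moisture_code = rng.normal(80, 15, n)          # illustrative fuel moisture code
    X = np.column_stack([july_temp, fuel_moisture_code])
    # Toy response with a hinge-like dependence on July temperature.
    log_area_burned = (np.maximum(july_temp - 14, 0) * 0.3
                       + 0.01 * fuel_moisture_code
                       + rng.normal(scale=0.2, size=n))

    mars = Earth(max_degree=2)      # allow hinge functions and pairwise interactions
    mars.fit(X, log_area_burned)
    print(mars.summary())           # basis functions (hinges) selected by MARS
    ```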

  7. Interest level in 2-year-olds with autism spectrum disorder predicts rate of verbal, nonverbal, and adaptive skill acquisition.

    PubMed

    Klintwall, Lars; Macari, Suzanne; Eikeseth, Svein; Chawarska, Katarzyna

    2015-11-01

    Recent studies have suggested that skill acquisition rates for children with autism spectrum disorders receiving early interventions can be predicted by child motivation. We examined whether level of interest during an Autism Diagnostic Observation Schedule assessment at 2 years predicts subsequent rates of verbal, nonverbal, and adaptive skill acquisition to the age of 3 years. A total of 70 toddlers with autism spectrum disorder, mean age of 21.9 months, were scored using Interest Level Scoring for Autism, quantifying toddlers' interest in toys, social routines, and activities that could serve as reinforcers in an intervention. Adaptive level and mental age were measured concurrently (Time 1) and again after a mean of 16.3 months of treatment (Time 2). Interest Level Scoring for Autism score, Autism Diagnostic Observation Schedule score, adaptive age equivalent, verbal and nonverbal mental age, and intensity of intervention were entered into regression models to predict rates of skill acquisition. Interest level at Time 1 predicted subsequent acquisition rate of adaptive skills (R(2) = 0.36) and verbal mental age (R(2) = 0.30), above and beyond the effects of Time 1 verbal and nonverbal mental ages and Autism Diagnostic Observation Schedule scores. Interest level at Time 1 also contributed (R(2) = 0.30), with treatment intensity, to variance in development of nonverbal mental age. © The Author(s) 2014.

  8. Artificial Neural Networks: A Novel Approach to Analysing the Nutritional Ecology of a Blowfly Species, Chrysomya megacephala

    PubMed Central

    Bianconi, André; Zuben, Cláudio J. Von; Serapião, Adriane B. de S.; Govone, José S.

    2010-01-01

    Bionomic features of blowflies may be clarified and detailed by the deployment of appropriate modelling techniques such as artificial neural networks, which are mathematical tools widely applied to the resolution of complex biological problems. The principal aim of this work was to use three well-known neural networks, namely Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Adaptive Neural Network-Based Fuzzy Inference System (ANFIS), to ascertain whether these tools would be able to outperform a classical statistical method (multiple linear regression) in the prediction of the number of resultant adults (survivors) of experimental populations of Chrysomya megacephala (F.) (Diptera: Calliphoridae), based on initial larval density (number of larvae), amount of available food, and duration of immature stages. The coefficient of determination (R2) derived from the RBF was the lowest in the testing subset in relation to the other neural networks, even though its R2 in the training subset exhibited virtually a maximum value. The ANFIS model permitted the achievement of the best testing performance. Hence this model was deemed to be more effective in relation to MLP and RBF for predicting the number of survivors. All three networks outperformed the multiple linear regression, indicating that neural models could be taken as feasible techniques for predicting bionomic variables concerning the nutritional dynamics of blowflies. PMID:20569135
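
    The core comparison above, a neural network versus multiple linear regression judged by R² on held-out data, is sketched below with scikit-learn. The three predictors mirror the study's inputs (larval density, food amount, duration of immature stages), but the data are simulated and an MLP stands in for the networks used in the paper.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(3)
    n = 400
    larval_density = rng.uniform(100, 1000, n)
    food_amount = rng.uniform(10, 100, n)
    immature_duration = rng.uniform(10, 20, n)
    X = np.column_stack([larval_density, food_amount, immature_duration])
    # Toy nonlinear response: survivors drop off as larval density grows.
    survivors = (food_amount * 5 / (1 + np.exp((larval_density - 600) / 150))
                 + rng.normal(scale=10, size=n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, survivors, random_state=0)
    mlp = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0))
    mlp.fit(X_tr, y_tr)
    ols = LinearRegression().fit(X_tr, y_tr)

    print("MLP R2:", r2_score(y_te, mlp.predict(X_te)))
    print("OLS R2:", r2_score(y_te, ols.predict(X_te)))
    ```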

  9. Mathematical and computational model for the analysis of micro hybrid rocket motor

    NASA Astrophysics Data System (ADS)

    Stoia-Djeska, Marius; Mingireanu, Florin

    2012-11-01

    Hybrid rockets use a two-phase propellant system. In the present work we first develop a simplified model of the coupling of the hybrid combustion process with the complete unsteady flow, starting from the combustion port and ending with the nozzle. The physical and mathematical models are adapted to simulations of micro hybrid rocket motors. The flow model is based on the one-dimensional Euler equations with source terms. The flow equations and the fuel regression rate law are solved in a coupled manner. The numerical simulations use an implicit, fourth-order Runge-Kutta time integration with a second-order cell-centred finite volume method. The numerical results obtained with this model show good agreement with published experimental and numerical results. The computational model developed in this work is simple and computationally efficient, and offers the advantage of taking into account a large number of functional and constructive parameters that are used by engineers.

  10. Predictive Models for Regional Hepatic Function Based on 99mTc-IDA SPECT and Local Radiation Dose for Physiologic Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hesheng, E-mail: hesheng@umich.edu; Feng, Mary; Frey, Kirk A.

    2013-08-01

    Purpose: High-dose radiation therapy (RT) for intrahepatic cancer is limited by the development of liver injury. This study investigated whether regional hepatic function assessed before and during the course of RT using 99mTc-labeled iminodiacetic acid (IDA) single photon emission computed tomography (SPECT) could predict regional liver function reserve after RT. Methods and Materials: Fourteen patients treated with RT for intrahepatic cancers underwent dynamic 99mTc-IDA SPECT scans before RT, during, and 1 month after completion of RT. Indocyanine green (ICG) tests, a measure of overall liver function, were performed within 1 day of each scan. Three-dimensional volumetric hepatic extraction fraction (HEF) images of the liver were estimated by deconvolution analysis. After coregistration of the CT/SPECT and the treatment planning CT, HEF dose–response functions during and after RT were generated. The volumetric mean of the HEFs in the whole liver was correlated with ICG clearance time. Three models, dose, priori, and adaptive models, were developed using multivariate linear regression to assess whether the regional HEFs measured before and during RT helped predict regional hepatic function after RT. Results: The mean of the volumetric liver HEFs was significantly correlated with ICG clearance half-life time (r=−0.80, P<.0001), for all time points. Linear correlations between local doses and regional HEFs 1 month after RT were significant in 12 patients. In the priori model, regional HEF after RT was predicted by the planned dose and regional HEF assessed before RT (R=0.71, P<.0001). In the adaptive model, regional HEF after RT was predicted by regional HEF reassessed during RT and the remaining planned local dose (R=0.83, P<.0001). Conclusions: 99mTc-IDA SPECT obtained during RT could be used to assess regional hepatic function and helped predict post-RT regional liver function reserve. This could support individualized adaptive radiation treatment strategies to maximize tumor control and minimize the risk of liver damage.

  11. Predictive models for regional hepatic function based on 99mTc-IDA SPECT and local radiation dose for physiologic adaptive radiation therapy.

    PubMed

    Wang, Hesheng; Feng, Mary; Frey, Kirk A; Ten Haken, Randall K; Lawrence, Theodore S; Cao, Yue

    2013-08-01

    High-dose radiation therapy (RT) for intrahepatic cancer is limited by the development of liver injury. This study investigated whether regional hepatic function assessed before and during the course of RT using 99mTc-labeled iminodiacetic acid (IDA) single photon emission computed tomography (SPECT) could predict regional liver function reserve after RT. Fourteen patients treated with RT for intrahepatic cancers underwent dynamic 99mTc-IDA SPECT scans before RT, during, and 1 month after completion of RT. Indocyanine green (ICG) tests, a measure of overall liver function, were performed within 1 day of each scan. Three-dimensional volumetric hepatic extraction fraction (HEF) images of the liver were estimated by deconvolution analysis. After coregistration of the CT/SPECT and the treatment planning CT, HEF dose-response functions during and after RT were generated. The volumetric mean of the HEFs in the whole liver was correlated with ICG clearance time. Three models, dose, priori, and adaptive models, were developed using multivariate linear regression to assess whether the regional HEFs measured before and during RT helped predict regional hepatic function after RT. The mean of the volumetric liver HEFs was significantly correlated with ICG clearance half-life time (r=-0.80, P<.0001), for all time points. Linear correlations between local doses and regional HEFs 1 month after RT were significant in 12 patients. In the priori model, regional HEF after RT was predicted by the planned dose and regional HEF assessed before RT (R=0.71, P<.0001). In the adaptive model, regional HEF after RT was predicted by regional HEF reassessed during RT and the remaining planned local dose (R=0.83, P<.0001). 99mTc-IDA SPECT obtained during RT could be used to assess regional hepatic function and helped predict post-RT regional liver function reserve. This could support individualized adaptive radiation treatment strategies to maximize tumor control and minimize the risk of liver damage. Published by Elsevier Inc.

  12. An adaptation of the theory of interpersonal behaviour to the study of telemedicine adoption by physicians.

    PubMed

    Gagnon, Marie-Pierre; Godin, Gaston; Gagné, Camille; Fortin, Jean-Paul; Lamothe, Lise; Reinharz, Daniel; Cloutier, Alain

    2003-09-01

    Physicians' acceptance of telemedicine constitutes a prerequisite for its diffusion on a national scale. Based upon the Theory of Interpersonal Behavior, this study was aimed at assessing the predictors of physicians' intention to use telemedicine in their clinical practice. All of the physicians involved in the RQTE, the extended provincial telemedicine network of Quebec (Canada) were mailed a questionnaire to identify the psychosocial determinants of their intention to adopt telemedicine. Confirmatory factor analysis (CFA) was performed to assess the measurement model and structural equation modelling (SEM) was applied to test the theoretical model. The adapted theoretical model explained 81% (P<0.001) of variance in physicians' intention to use telehealth. The main predictors of intentions were a composite normative factor, comprising personal as well as social norms (beta=1.08; P<0.001) and self identity (beta=-0.33; P<0.001). Thus, physicians who perceived professional and social responsibilities regarding adoption of telehealth in their clinical practice had stronger intention to use this technology. However, it is likely that personal identity had a suppression effect in the regression equation, indicating that physicians' intention to use telemedicine was better predicted if their self-perception as telemedicine users was considered. These results have several implications at the theoretical and practical levels that are discussed in this paper.

  13. Adolescent Attachment Security, Family Functioning, and Suicide Attempts

    PubMed Central

    Sheftall, Arielle H.; Mathias, Charles W.; Furr, R. Michael; Dougherty, Donald M.

    2013-01-01

    Theories of suicidal behavior suggest that the desire to die can arise from disruption of interpersonal relationships. Suicide research has typically studied this from the individual's perspective of the quality/frequency of their social interactions; however, the field of attachment may offer another perspective on understanding an individual’s social patterns and suicide risk. This study examined attachment along with broader family functioning (family adaptability and cohesion) among 236 adolescent psychiatric inpatients with (n = 111) and without (n = 125) histories of suicide attempts. On average, adolescents were 14 years of age and Hispanic (69%). Compared to those without suicide attempts, adolescent attempters had lower self-reported maternal and paternal attachment and lower familial adaptability and cohesion. When comparing all 3 types of attachment simultaneously in the logistic regression model predicting suicide attempt status, paternal attachment was the only significant predictor. Suicide attempt group was also significantly predicted by self-rated Cohesion and Adaptability; neither of the parent ratings of family functioning were significant predictors. These findings are consistent with the predictions of the Interpersonal Theory of Suicide about social functioning and support the efforts to develop attachment-based interventions as a novel route towards suicide prevention. PMID:23560608

  14. Adolescent attachment security, family functioning, and suicide attempts.

    PubMed

    Sheftall, Arielle H; Mathias, Charles W; Furr, R Michael; Dougherty, Donald M

    2013-01-01

    Theories of suicidal behavior suggest that the desire to die can arise from disruption of interpersonal relationships. Suicide research has typically studied this from the individual's perspective of the quality/frequency of their social interactions; however, the field of attachment may offer another perspective on understanding an individual's social patterns and suicide risk. This study examined attachment along with broader family functioning (family adaptability and cohesion) among 236 adolescent psychiatric inpatients with (n = 111) and without (n = 125) histories of suicide attempts. On average, adolescents were 14 years of age and Hispanic (69%). Compared to those without suicide attempts, adolescent attempters had lower self-reported maternal and paternal attachment and lower familial adaptability and cohesion. When comparing all three types of attachment simultaneously in the logistic regression model predicting suicide attempt status, paternal attachment was the only significant predictor. Suicide attempt group was also significantly predicted by self-rated Cohesion and Adaptability; neither of the parent ratings of family functioning were significant predictors. These findings are consistent with the predictions of the Interpersonal Theory of Suicide about social functioning and support the efforts to develop attachment-based interventions as a novel route towards suicide prevention.

  15. Tumor dose-volume response in image-guided adaptive brachytherapy for cervical cancer: A meta-regression analysis.

    PubMed

    Mazeron, Renaud; Castelnau-Marchand, Pauline; Escande, Alexandre; Rivin Del Campo, Eleonor; Maroun, Pierre; Lefkopoulos, Dimitri; Chargari, Cyrus; Haie-Meder, Christine

    2016-01-01

    Image-guided adaptive brachytherapy is a high-precision technique that allows dose escalation and adaptation to tumor response. Two monocentric studies reported continuous dose-volume response relationships, which were, however, burdened by large confidence intervals. The aim was to refine these estimations by performing a meta-regression analysis based on published series. Eligibility was limited to series reporting dosimetric parameters according to the Groupe Européen de Curiethérapie-European SocieTy for Radiation Oncology recommendations. The local control rates reported at 2-3 years were compared with the mean D90 of the clinical target volume (CTV) in 2-Gy equivalents using the probit model. The impact of each series on the relationships was weighted according to the number of patients reported. An exhaustive literature search retrieved 13 series reporting on 1299 patients. D90 high-risk CTV ranged from 70.9 to 93.1 Gy. The probit model showed a significant correlation between the D90 and the probability of achieving local control (p < 0.0001). The D90 associated with a 90% probability of achieving local control was 81.4 Gy (78.3-83.8 Gy). The planning aim of 90 Gy corresponded to a 95.0% probability (92.8-96.3%). For the intermediate-risk CTV, less data were available, with 873 patients from eight institutions. Reported mean D90 intermediate-risk CTV ranged from 61.7 to 69.1 Gy. A significant dose-volume effect was observed (p = 0.009). A D90 of 60 Gy was associated with a 79.4% (60.2-86.0%) local control probability. Based on published data from a high number of patients, significant dose-volume effect relationships were confirmed and refined between the D90 of both CTVs and the probability of achieving local control. Further studies based on individual data are required to develop nomograms including nondosimetric prognostic criteria. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
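
    A probit dose-response fit of the type described above, local control counts modelled as a function of the mean D90, can be sketched with a binomial GLM and probit link in statsmodels. The series-level numbers below are illustrative placeholders, not the pooled published data.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Illustrative series: mean D90 (Gy, 2-Gy equivalents), patients, and local controls.
    d90 = np.array([72.0, 75.0, 78.0, 81.0, 85.0, 90.0])
    n_patients = np.array([80, 120, 150, 200, 180, 160])
    n_controlled = np.array([60, 98, 128, 178, 165, 152])

    endog = np.column_stack([n_controlled, n_patients - n_controlled])   # successes, failures
    exog = sm.add_constant(d90)

    probit_fit = sm.GLM(
        endog, exog,
        family=sm.families.Binomial(link=sm.families.links.probit())).fit()

    # Predicted probability of local control for a planning aim of 90 Gy.
    aim = sm.add_constant(np.array([90.0]), has_constant="add")
    print(probit_fit.predict(aim))
    ```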

  16. Flood Damages- savings potential for Austrian municipalities and evidence of adaptation

    NASA Astrophysics Data System (ADS)

    Unterberger, C.

    2016-12-01

    Recent studies show that the number of extreme precipitation events has increased globally and will continue to do so in the future. These observations apply particularly to central, northern and north-eastern Europe. These changes in the patterns of extreme events have direct repercussions for policy makers. Rojas et al. (2013) find that by 2080, annual damages could increase by a factor of 17 (from €5.5 bn/year today to €98 bn/year in 2080) in the event that no adaptation measures are taken. Steininger et al. (2015) find that climate- and weather-induced extreme events account for an annual current welfare loss of about €1 billion in Austria. As a result, policy makers will need to understand the interaction between hazard, exposure and vulnerability, with the goal of achieving flood risk reduction. A better understanding is needed of where exposure, vulnerability and ultimately flood risk are highest, i.e. where to reduce risk first, and of which factors drive existing flood risk. This article analyzes direct flood losses as reported by 1153 Austrian municipalities between 2005 and 2013. To achieve comparability between flood damages and municipalities' ordinary spending, a "vulnerability threshold" is introduced, suggesting that flood damages should be lower than 5% of municipalities' average annual ordinary spending. It is found that the probability that flood damages exceed this vulnerability threshold is 12%. To provide a reliable estimate for that exceedance probability, the joint distribution of damages and spending is modelled by means of a copula approach. Based on the joint distribution, a Monte Carlo simulation is conducted to derive uncertainty ranges for the exceedance probability. To analyze the drivers of flood damages and the effect they have on municipalities' spending, two linear regression models are estimated. The results obtained suggest that damages increase significantly for municipalities located along the shores of the river Danube and decrease significantly for municipalities that experienced floods in the past, indicating successful adaptation. As for the relationship between flood damages and municipalities' spending, the regression results indicate that flood damages have a significant positive impact.

  17. Integrating remote sensing with species distribution models; Mapping tamarisk invasions using the Software for Assisted Habitat Modeling (SAHM)

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-01-01

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated the Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled cap transformations. All five models successfully identified current tamarisk distribution on the landscape based on threshold-independent and threshold-dependent evaluation metrics with independent location data. To account for model-specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.

  18. Integrating Remote Sensing with Species Distribution Models; Mapping Tamarisk Invasions Using the Software for Assisted Habitat Modeling (SAHM).

    PubMed

    West, Amanda M; Evangelista, Paul H; Jarnevich, Catherine S; Young, Nicholas E; Stohlgren, Thomas J; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-10-11

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated the Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled cap transformations. All five models successfully identified current tamarisk distribution on the landscape based on threshold-independent and threshold-dependent evaluation metrics with independent location data. To account for model-specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.

  19. Predictions of heading date in bread wheat (Triticum aestivum L.) using QTL-based parameters of an ecophysiological model

    PubMed Central

    Bogard, Matthieu; Ravel, Catherine; Paux, Etienne; Bordes, Jacques; Balfourier, François; Chapman, Scott C.; Le Gouis, Jacques; Allard, Vincent

    2014-01-01

    Prediction of wheat phenology facilitates the selection of cultivars with specific adaptations to a particular environment. However, while QTL analysis for heading date can identify major genes controlling phenology, the results are limited to the environments and genotypes tested. Moreover, while ecophysiological models allow accurate predictions in new environments, they may require substantial phenotypic data to parameterize each genotype. Also, the model parameters are rarely related to all underlying genes, and all the possible allelic combinations that could be obtained by breeding cannot be tested with models. In this study, a QTL-based model is proposed to predict heading date in bread wheat (Triticum aestivum L.). Two parameters of an ecophysiological model (Vsat and Pbase, representing genotype vernalization requirements and photoperiod sensitivity, respectively) were optimized for 210 genotypes grown in 10 contrasting location × sowing date combinations. Multiple linear regression models predicting Vsat and Pbase with 11 and 12 associated genetic markers accounted for 71 and 68% of the variance of these parameters, respectively. QTL-based Vsat and Pbase estimates were able to predict heading date of an independent validation data set (88 genotypes in six location × sowing date combinations) with a root mean square error of prediction of 5 to 8.6 days, explaining 48 to 63% of the variation for heading date. The QTL-based model proposed in this study may be used for agronomic purposes and to assist breeders in suggesting locally adapted ideotypes for wheat phenology. PMID:25148833

  20. Modelling landscape change in paddy fields using logistic regression and GIS

    NASA Astrophysics Data System (ADS)

    Franjaya, E. E.; Syartinilia; Setiawan, Y.

    2018-05-01

    Paddy fields in Karawang District, an important agricultural area in West Java, have been decreasing since 1994. A previous study showed that paddy fields were predominantly converted to built-up area. The changes occurred mostly in the middle part of the district, where roadways, industries, settlements, and commercial buildings are located; these were assumed to be the driving forces, but this remained to be verified. This study aimed to construct a paddy field change probability model and thereby identify the driving forces. GIS combined with logistic regression using environmental variables was the main method used in this study. The ten environmental variables were elevation 0–500 m, elevation >500 m, slope <8%, slope >8%, CBD, built-up area, river, irrigation, toll and national roadway, and collector and local roadway. The result indicated that four variables acted as significant driving forces (slope >8%, CBD area, built-up area, and collector and local roadway). Paddy fields have high, medium, and low probabilities of change, covering about 27.8%, 7.8%, and 64.4% of the area of Karawang, respectively. Based on landscape ecology, adaptive management is recommended as the approach best suited to this landscape change.
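
    The logistic-regression step described above, regressing a change/no-change indicator for each cell on environmental variables and reading significant coefficients as driving forces, is sketched below with statsmodels. The raster values are simulated stand-ins for the Karawang layers, and all variable names are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 2000  # sampled grid cells
    df = pd.DataFrame({
        "dist_cbd": rng.uniform(0, 30, n),          # km to central business district
        "dist_builtup": rng.uniform(0, 10, n),      # km to existing built-up area
        "dist_local_road": rng.uniform(0, 5, n),    # km to collector/local roadway
        "slope_gt8": rng.integers(0, 2, n),         # 1 if slope > 8%
    })
    # Toy change process: conversion is more likely near the CBD, built-up areas and roads.
    logit_p = 1.0 - 0.08 * df["dist_cbd"] - 0.2 * df["dist_builtup"] - 0.4 * df["dist_local_road"]
    df["changed"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

    X = sm.add_constant(df[["dist_cbd", "dist_builtup", "dist_local_road", "slope_gt8"]])
    fit = sm.Logit(df["changed"], X).fit(disp=False)
    print(fit.summary())   # p-values indicate which variables act as driving forces
    ```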

  1. Bayesian quantitative precipitation forecasts in terms of quantiles

    NASA Astrophysics Data System (ADS)

    Bentzien, Sabrina; Friederichs, Petra

    2014-05-01

    Ensemble prediction systems (EPS) for numerical weather prediction on the mesoscale are developed particularly to obtain probabilistic guidance for high-impact weather. An EPS issues not only a deterministic future state of the atmosphere but a sample of possible future states. Ensemble postprocessing then translates such a sample of forecasts into probabilistic measures. This study focuses on probabilistic quantitative precipitation forecasts in terms of quantiles. Quantiles are particularly suitable for describing precipitation at various locations, since no assumption is required on the distribution of precipitation. The focus is on prediction during high-impact events, and the work is related to the Volkswagen Stiftung funded project WEX-MOP (Mesoscale Weather Extremes - Theory, Spatial Modeling and Prediction). Quantile forecasts are derived from the raw ensemble and via quantile regression. Neighborhood methods and time-lagging are effective tools to inexpensively increase the ensemble spread, which results in more reliable forecasts, especially for extreme precipitation events. Since an EPS provides a large number of potentially informative predictors, variable selection is required in order to obtain a stable statistical model. A Bayesian formulation of quantile regression allows for inference about the selection of predictive covariates by the use of appropriate prior distributions. Moreover, the implementation of an additional process layer for the regression parameters accounts for spatial variations of the parameters. Bayesian quantile regression and its spatially adaptive extension are illustrated for the German-focused mesoscale weather prediction ensemble COSMO-DE-EPS, which has run (pre)operationally since December 2010 at the German Meteorological Service (DWD). Objective out-of-sample verification uses the quantile score (QS), a weighted absolute error between quantile forecasts and observations. The QS is a proper scoring function and can be decomposed into reliability, resolution and uncertainty parts. A quantile reliability plot gives detailed insight into the predictive performance of the quantile forecasts.
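
    The quantile-regression and quantile-score ideas above can be sketched as follows: a 0.9-quantile model is fitted with statsmodels and verified with the pinball (quantile) score. The data are simulated and the single ensemble-mean predictor is an illustrative assumption; this is not the COSMO-DE-EPS postprocessing itself.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 500
    ens_mean = rng.gamma(shape=2.0, scale=2.0, size=n)              # ensemble-mean forecast (mm)
    obs = ens_mean * rng.lognormal(mean=0.0, sigma=0.5, size=n)     # skewed "observed" precipitation
    df = pd.DataFrame({"obs": obs, "ens_mean": ens_mean})

    q = 0.9
    qreg = smf.quantreg("obs ~ ens_mean", df).fit(q=q)
    pred = qreg.predict(df)

    # Quantile (pinball) score: weighted absolute error between quantile forecast and observation.
    err = df["obs"] - pred
    qs = np.mean(np.where(err >= 0, q * err, (q - 1) * err))
    print("fitted 0.9-quantile model:", qreg.params.to_dict())
    print("mean quantile score:", qs)
    ```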

  2. Application of Multiregressive Linear Models, Dynamic Kriging Models and Neural Network Models to Predictive Maintenance of Hydroelectric Power Systems

    NASA Astrophysics Data System (ADS)

    Lucifredi, A.; Mazzieri, C.; Rossi, M.

    2000-05-01

    Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by variations of the operating conditions and those due to the onset and progression of failures and misoperations. The paper aims to identify the best technique to be adopted for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, linear multiple regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified multiple linear regression technique that represents the monitored variable as a linear combination of the process variables in such a way as to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by some problems common to the other two models, e.g. the requirement for a large amount of data for tuning (both for training the neural network and for defining the optimum plane for the multiple regression), not only in the system start-up phase but also after a trivial maintenance operation involving the substitution of machinery components that have a direct impact on the observed variable, or the need for different models to describe satisfactorily the different operating ranges of the plant. The monitoring system based on the kriging statistical technique overcomes these difficulties: it does not require a large amount of data to be tuned and is immediately operational (given two points, the third can be immediately estimated); in addition, the model follows the system without adapting itself to it. The results of the experiments performed seem to indicate that a model based on a neural network or on linear multiple regression is not optimal, and that a different approach is necessary to reduce the amount of work during the learning phase, using, when available, all the information stored during the initial phase of the plant to build the reference baseline and elaborating, if necessary, the raw information available. A mixed approach using the kriging statistical technique and neural network techniques could optimise the result.

  3. A comparative study on generating simulated Landsat NDVI images using data fusion and regression method-the case of the Korean Peninsula.

    PubMed

    Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee

    2017-07-01

    Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor quite often degrade the image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and multilinear regression analysis have been tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date. Careful regional and seasonal consideration is required in selecting input images. During the summer season, due to clouds, it is very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R2 values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.

  4. An information theoretic approach of designing sparse kernel adaptive filters.

    PubMed

    Liu, Weifeng; Park, Il; Principe, José C

    2009-12-01

    This paper discusses an information theoretic approach of designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains which is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme, which can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short term chaotic time-series prediction, and long term time-series forecasting examples are presented.
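
    A simplified sketch of a sparse kernel adaptive filter is given below: kernel least-mean-squares (KLMS) with a basic novelty criterion that only admits a new centre when the prediction error is large. This illustrates the sparsification idea; it is not the authors' surprise measure, and all thresholds are illustrative assumptions.

    ```python
    import numpy as np

    def gaussian_kernel(a, b, sigma=1.0):
        return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

    class SparseKLMS:
        """Kernel LMS with a simple error-based criterion for adding centres."""
        def __init__(self, step_size=0.5, error_threshold=0.1, sigma=1.0):
            self.eta = step_size
            self.delta = error_threshold
            self.sigma = sigma
            self.centres = []   # stored input vectors
            self.coeffs = []    # corresponding expansion coefficients

        def predict(self, x):
            return sum(a * gaussian_kernel(c, x, self.sigma)
                       for a, c in zip(self.coeffs, self.centres))

        def update(self, x, y):
            e = y - self.predict(x)
            if abs(e) > self.delta:    # admit only "informative" samples into the dictionary
                self.centres.append(x)
                self.coeffs.append(self.eta * e)
            return e

    # Online nonlinear regression example: learn y = sin(3x) from streaming samples.
    rng = np.random.default_rng(6)
    f = SparseKLMS()
    for _ in range(500):
        x = rng.uniform(-1, 1, size=1)
        f.update(x, np.sin(3 * x[0]) + rng.normal(scale=0.05))
    print("dictionary size:", len(f.centres))
    print("prediction at x=0.5:", f.predict(np.array([0.5])), "vs true:", np.sin(1.5))
    ```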

  5. Oracle estimation of parametric models under boundary constraints.

    PubMed

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
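
    The adaptive lasso used above is a two-step procedure: an initial estimate supplies data-driven weights, and a weighted lasso then applies stronger shrinkage to small coefficients. The sketch below shows the generic two-step recipe with scikit-learn on simulated data (shrinking toward zero rather than a boundary, which is a simplification of the paper's setting).

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression, LassoCV

    X, y, true_coef = make_regression(n_samples=200, n_features=20, n_informative=4,
                                      noise=2.0, coef=True, random_state=0)

    # Step 1: initial (unpenalized) estimates define adaptive weights w_j = 1 / |beta_j|.
    beta_init = LinearRegression().fit(X, y).coef_
    weights = 1.0 / (np.abs(beta_init) + 1e-8)

    # Step 2: ordinary lasso on rescaled predictors X_j / w_j, then map coefficients back.
    X_scaled = X / weights
    lasso = LassoCV(cv=5, random_state=0).fit(X_scaled, y)
    beta_adaptive = lasso.coef_ / weights

    print("selected (nonzero) coefficients:", np.flatnonzero(beta_adaptive))
    print("truly informative features     :", np.flatnonzero(true_coef))
    ```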

  6. Sharpening method of satellite thermal image based on the geographical statistical model

    NASA Astrophysics Data System (ADS)

    Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng

    2016-04-01

    To improve the effectiveness of thermal sharpening in mountainous regions while paying closer attention to the laws of land surface energy balance, a thermal sharpening method based on a geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of the land surface energy budget and thermal infrared electromagnetic radiation transmission, and high spatial resolution (57 m) raster layers were generated for these variables through spatial simulation or by using other raster data as proxies. Based on this, the locally adaptive statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced a sharpening result with low error and good visual quality. The method avoids a blind choice of explanatory variables and removes the dependence on synchronous imagery in the visible and near-infrared bands. The influences of the explanatory variable combination, the sampling method, and the residual error correction on the sharpening results were analyzed in detail, and their influence mechanisms are reported herein.

  7. Study of Personnel Attrition and Revocation within U.S. Marine Corps Air Traffic Control Specialties

    DTIC Science & Technology

    2012-03-01

    Entrance Processing Stations (MEPS) and recruit depots, to include non-cognitive testing, such as the Navy Computer Adaptive Personality Scales (NCAPS) … Keywords: Revocation, Selection, MOS, Regression, Probit, dProbit, STATA, Statistics, Marginal Effects, ASVAB, AFQT, Composite Scores, Screening, NCAPS … Navy Computer Adaptive Personality Scales (NCAPS), during recruitment. It is also recommended that an economic analysis be conducted comparing the …

  8. Modelling exploration of non-stationary hydrological system

    NASA Astrophysics Data System (ADS)

    Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei

    2015-04-01

    Traditional hydrological modelling assumes that the catchment does not change with time (i.e., stationary conditions), which means that a model calibrated for the historical period is valid for the future period. However, in reality, due to changes of climate and catchment conditions, this stationarity assumption may not hold in the future. It is a challenge to make the hydrological model adaptive to future climate and catchment conditions that are not observable at the present time. In this study a lumped conceptual rainfall-runoff model called IHACRES was applied to a catchment in southwest England. A long observation record from 1961 to 2008 was used, and seasonal calibration was carried out because there are significant seasonal rainfall patterns (in this study only the summer period is explored further, because it is more sensitive to climate and land-cover change than the other three seasons). We expect that the model performance can be improved by calibrating the model on individual seasons. The data are split into calibration and validation periods, with the intention of using the validation period to represent future unobserved situations. The success of the non-stationary model therefore depends not only on good performance during the calibration period but also during the validation period. Initially, the calibration is based on changing the model parameters with time: a methodology is proposed to adapt the parameters using forward and backward stepwise selection schemes. However, in validation both the forward and backward multiple-parameter-changing models failed. One problem is that regression against time is not reliable, since the trend may not be in a monotonic linear relationship with time. The second issue is that changing multiple parameters makes the selection process very complex, which is time-consuming and not effective in the validation period. As a result, two new concepts are explored. First, only one parameter is selected for adjustment while the other parameters are kept constant. Secondly, regression is made against climate conditions instead of against time. It has been found that this new approach is very effective, and the non-stationary model worked very well in both the calibration and validation periods. Although the catchment is in southwest England and the data cover only the summer period, the methodology proposed in this study is general and applicable to other catchments. We hope this study will stimulate the hydrological community to explore a variety of sites so that valuable experience and knowledge can be gained to improve our understanding of such a complex modelling issue in climate change impact assessment.

  9. Dynamic filtering improves attentional state prediction with fNIRS

    PubMed Central

    Harrivel, Angela R.; Weissman, Daniel H.; Noll, Douglas C.; Huppert, Theodore; Peltier, Scott J.

    2016-01-01

    Brain activity can predict a person’s level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise – thereby increasing such state prediction accuracy – remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% ± 6% versus 72% ± 15%). PMID:27231602

  10. Different Adjuvants Induce Common Innate Pathways That Are Associated with Enhanced Adaptive Responses against a Model Antigen in Humans

    PubMed Central

    Burny, Wivine; Callegaro, Andrea; Bechtold, Viviane; Clement, Frédéric; Delhaye, Sophie; Fissette, Laurence; Janssens, Michel; Leroux-Roels, Geert; Marchant, Arnaud; van den Berg, Robert A.; Garçon, Nathalie; van der Most, Robbert; Didierlaurent, Arnaud M.

    2017-01-01

    To elucidate the role of innate responses in vaccine immunogenicity, we compared early responses to hepatitis B virus (HBV) surface antigen (HBsAg) combined with different Adjuvant Systems (AS) in healthy HBV-naïve adults, and included these parameters in multi-parametric models of adaptive responses. A total of 291 participants aged 18–45 years were randomized 1:1:1:1:1 to receive HBsAg with AS01B, AS01E, AS03, AS04, or Alum/Al(OH)3 at days 0 and 30 (ClinicalTrials.gov: NCT00805389). Blood protein, cellular, and mRNA innate responses were assessed at early time-points and up to 7 days after vaccination, and used with reactogenicity symptoms in linear regression analyses evaluating their correlation with HBs-specific CD4+ T-cell and antibody responses at day 44. All AS induced transient innate responses, including interleukin (IL)-6 and C-reactive protein (CRP), mostly peaking at 24 h post-vaccination and subsiding to baseline within 1–3 days. After the second but not the first injection, median interferon (IFN)-γ levels were increased in the AS01B group, and IFN-γ-inducible protein-10 levels and IFN-inducible genes upregulated in the AS01 and AS03 groups. No distinct marker or signature was specific to one particular AS. Innate profiles were comparable between AS01B, AS01E, and AS03 groups, and between AS04 and Alum groups. AS group rankings within adaptive and innate response levels and reactogenicity prevalence were similar (AS01B ≥ AS01E > AS03 > AS04 > Alum), suggesting an association between magnitudes of inflammatory and vaccine responses. Modeling revealed associations between adaptive responses and specific traits of the innate response post-dose 2 (activation of the IFN-signaling pathway, CRP and IL-6 responses). In conclusion, the ability of AS01 and AS03 to enhance adaptive responses to co-administered HBsAg is likely linked to their capacity to activate innate immunity, particularly the IFN-signaling pathway. PMID:28855902

  11. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. A single correction equation is retrieved and applied to all members; however, the parameters of this regression equation are estimated by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
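
    A minimal sketch of the simplest comparator described above: a running bias correction of the ensemble mean, updated with a Kalman-style gain and then applied to every member. This is not the authors' second-order scheme, and the data, gain, and noise variances below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      days, members = 200, 20
      truth = 20 + 5 * np.sin(np.linspace(0, 6, days))              # observed 2 m temperature
      bias = 2.0 + 0.01 * np.arange(days)                           # slowly drifting model bias
      ensemble = truth[:, None] + bias[:, None] + rng.normal(0, 1.5, (days, members))

      # Adaptive correction: a scalar bias estimate updated each day with a simple
      # Kalman-style gain, then subtracted from every ensemble member.
      b_hat, var_b, var_obs = 0.0, 1.0, 1.0
      corrected = np.empty_like(ensemble)
      for t in range(days):
          corrected[t] = ensemble[t] - b_hat                # apply the current correction
          innov = ensemble[t].mean() - truth[t] - b_hat     # new evidence about the bias
          gain = var_b / (var_b + var_obs)
          b_hat += gain * innov                             # sequential update
          var_b = (1 - gain) * var_b + 0.05                 # process noise keeps the filter responsive

      mae_raw = np.abs(ensemble.mean(axis=1) - truth).mean()
      mae_cor = np.abs(corrected.mean(axis=1) - truth).mean()
      print(f"MAE raw: {mae_raw:.2f}  MAE corrected: {mae_cor:.2f}")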

  12. Impacts of ozone air pollution and temperature extremes on crop yields: Spatial variability, adaptation and implications for future food security

    NASA Astrophysics Data System (ADS)

    Tai, Amos P. K.; Val Martin, Maria

    2017-11-01

    Ozone air pollution and climate change pose major threats to global crop production, with ramifications for future food security. Previous studies of ozone and warming impacts on crops typically do not account for the strong ozone-temperature correlation when interpreting crop-ozone or crop-temperature relationships, or the spatial variability of crop-to-ozone sensitivity arising from varietal and environmental differences, leading to potential biases in their estimated crop losses. Here we develop an empirical model, called the partial derivative-linear regression (PDLR) model, to estimate the spatial variations in the sensitivities of wheat, maize and soybean yields to ozone exposures and temperature extremes in the US and Europe using a composite of multidecadal datasets, fully correcting for ozone-temperature covariation. We find generally larger and more spatially varying sensitivities of all three crops to ozone exposures than are implied by experimentally derived concentration-response functions used in most previous studies. Stronger ozone tolerance is found in regions with high ozone levels and high consumptive crop water use, reflecting the existence of spatial adaptation and effect of water constraints. The spatially varying sensitivities to temperature extremes also indicate stronger heat tolerance in crops grown in warmer regions. The spatial adaptation of crops to ozone and temperature we find can serve as a surrogate for future adaptation. Using the PDLR-derived sensitivities and 2000-2050 ozone and temperature projections by the Community Earth System Model, we estimate that future warming and unmitigated ozone pollution can combine to cause an average decline in US wheat, maize and soybean production by 13%, 43% and 28%, respectively, and a smaller decline for European crops. Aggressive ozone regulation is shown to offset such decline to various extents, especially for wheat. Our findings demonstrate the importance of considering ozone regulation as well as ozone and climate change adaptation (e.g., selecting heat- and ozone-tolerant cultivars, irrigation) as possible strategies to enhance future food security in response to imminent environmental threats.
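
    The PDLR model itself is specific to this paper, but the underlying point about ozone-temperature covariation can be illustrated with a small hypothetical regression: fitting yield anomalies on a temperature-extreme metric alone misattributes correlated ozone damage to heat, while a joint fit separates the two stressors. Variable names, effect sizes, and data below are all invented.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 300
      temp_extreme = rng.gamma(3.0, 2.0, n)                  # e.g. extreme-degree-days (hypothetical)
      ozone = 40 + 2.0 * temp_extreme + rng.normal(0, 5, n)  # ozone exposure correlated with heat
      yield_anom = -0.8 * temp_extreme - 0.3 * ozone + rng.normal(0, 5, n)

      # Regressing on temperature alone misattributes part of the ozone damage to heat.
      X_t = np.column_stack([np.ones(n), temp_extreme])
      print("temperature-only slope:", np.linalg.lstsq(X_t, yield_anom, rcond=None)[0][1])

      # Joint regression separates the two correlated stressors.
      X_joint = np.column_stack([np.ones(n), temp_extreme, ozone])
      coef = np.linalg.lstsq(X_joint, yield_anom, rcond=None)[0]
      print("joint slopes (temperature, ozone):", coef[1], coef[2])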

  13. Coping and mental health outcomes among Sierra Leonean war-affected youth: Results from a longitudinal study.

    PubMed

    Sharma, Manasi; Fine, Shoshanna L; Brennan, Robert T; Betancourt, Theresa S

    2017-02-01

    This study explored how coping with war-related traumatic events in Sierra Leone impacted mental health outcomes among 529 youth (aged 10-17 at baseline; 25% female) using longitudinal data from three time points (Time 1 in 2002, Time 2 in 2004, and Time 3 in 2008). We examined two types of coping items (approach and avoidance); used multiple regression models to test their relations with long-term mental health outcomes (internalizing behaviors, externalizing behaviors, adaptive/prosocial behaviors, and posttraumatic stress symptoms); and used mediation analyses to test whether coping explained the relation between previous war exposures (being raped, death of parent(s), or killing/injuring someone during the war) and those outcomes. We found that avoidance coping items were associated with lower internalizing and posttraumatic stress behaviors at Time 3, and provided some evidence of mediating the relation between death of parent(s) during the war and the two outcomes mentioned above. Approach coping was associated with higher Time 3 adaptive/prosocial behaviors, whereas avoidance coping was associated with lower Time 3 adaptive/prosocial behaviors. Avoidance coping may be a protective factor against mental illness, whereas approach coping may be a promotive factor for adaptive/prosocial behaviors in war-affected societies. This study has important implications for designing and implementing mental health interventions for youth in postconflict settings.

  14. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer

    NASA Astrophysics Data System (ADS)

    Luiza Bondar, M.; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben

    2013-08-01

    For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.

  15. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer.

    PubMed

    Bondar, M Luiza; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben

    2013-08-07

    For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.

  16. Genome-environment association study suggests local adaptation to climate at the regional scale in Fagus sylvatica.

    PubMed

    Pluess, Andrea R; Frank, Aline; Heiri, Caroline; Lalagüe, Hadrien; Vendramin, Giovanni G; Oddou-Muratorio, Sylvie

    2016-04-01

    The evolutionary potential of long-lived species, such as forest trees, is fundamental for their local persistence under climate change (CC). Genome-environment association (GEA) analyses reveal whether species in heterogeneous environments at the regional scale are under differential selection resulting in populations with potential preadaptation to CC within this area. In 79 natural Fagus sylvatica populations, neutral genetic patterns were characterized using 12 simple sequence repeat (SSR) markers, and genomic variation (144 single nucleotide polymorphisms (SNPs) out of 52 candidate genes) was related to 87 environmental predictors in the latent factor mixed model, logistic regressions and isolation by distance/environment (IBD/IBE) tests. SSR diversity revealed relatedness at up to 150 m intertree distance but an absence of large-scale spatial genetic structure and IBE. In the GEA analyses, 16 SNPs in 10 genes responded to one or several environmental predictors, and IBE, corrected for IBD, was confirmed. The GEA often reflected the proposed gene functions, including indications for adaptation to water availability and temperature. Genomic divergence and the lack of large-scale neutral genetic patterns suggest that gene flow allows the spread of advantageous alleles in adaptive genes. Thereby, adaptation processes are likely to take place in species occurring in heterogeneous environments, which might reduce their regional extinction risk under CC. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  17. Longitudinal social cognitive influences on physical activity and sedentary time in Hispanic breast cancer survivors.

    PubMed

    Mama, Scherezade K; Song, Jaejoon; Ortiz, Alexis; Tirado-Gomez, Maribel; Palacios, Cristina; Hughes, Daniel C; Basen-Engquist, Karen

    2017-02-01

    This study evaluated the effect of two home-based exercise interventions (one culturally adapted and one standard) on changes in social cognitive theory (SCT) variables, physical activity (PA), and sedentary time (ST), and determined the association between changes in SCT variables and changes in PA and ST in Hispanic breast cancer survivors. Project VIVA! was a 16-week randomized controlled pilot study to test the effectiveness and feasibility of a culturally adapted exercise intervention for Mexican American and Puerto Rican breast cancer survivors in Houston, Texas and San Juan, Puerto Rico, respectively. Women (N = 89) completed questionnaires on SCT variables, PA, and ST and were then randomized to a 16-week culturally adapted exercise program, a non-culturally adapted standard exercise intervention or a wait-list control group. Multiple regression models were used to determine associations between changes in SCT variables and changes in PA and ST. Participants were in their late 50s (58.5 ± 9.2 years) and obese (31.0 ± 6.5 kg/m²). Women reported doing roughly 34.5 min/day of PA and spending over 11 h/day in sedentary activities. Across groups, women reported significant increases in exercise self-efficacy and moderate-intensity, vigorous-intensity, and total PA from baseline to follow-up (p < 0.05). Increased social support from family was associated with increases in vigorous-intensity PA. Increases in social modeling were associated with increases in moderate-intensity and total PA and with decreases in ST from baseline to follow-up (p < 0.05). Hispanic cancer survivors benefit from PA interventions that focus on increasing social support from family and friends and social modeling. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Meteorological influence on predicting surface SO2 concentration from satellite remote sensing in Shanghai, China.

    PubMed

    Xue, Dan; Yin, Jingyuan

    2014-05-01

    In this study, we explored the potential applications of the Ozone Monitoring Instrument (OMI) satellite sensor in air pollution research. The OMI planetary boundary layer sulfur dioxide (SO2_PBL) column density and the daily average surface SO2 concentration of Shanghai from 2004 to 2012 were analyzed. After several consecutive years of increase, the surface SO2 concentration finally declined in 2007. It was higher in winter than in other seasons. The correlation coefficient between the daily average surface SO2 concentration and SO2_PBL was only 0.316, but SO2_PBL was found to be a highly significant predictor of the surface SO2 concentration in a simple regression model. Five meteorological factors were considered in this study; among them, temperature, dew point, relative humidity, and wind speed were negatively correlated with the surface SO2 concentration, while pressure was positively correlated. Furthermore, dew point was found to be a more effective predictor than temperature. When these meteorological factors were used in multiple regression, the coefficient of determination reached 0.379. The relationship between the surface SO2 concentration and the meteorological factors was seasonally dependent: in summer and autumn, the regression model performed better than in spring and winter. The surface SO2 prediction method proposed in this study can easily be adapted to other regions and is most useful for those with no operational air pollution forecasting services or with sparse ground monitoring networks.
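
    A small, purely illustrative sketch of the simple-versus-multiple regression comparison described above, using synthetic data in place of OMI columns and station measurements; the predictors, coefficients, and units are invented.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 500
      so2_col = rng.gamma(2.0, 1.0, n)            # satellite SO2 column (hypothetical units)
      dew_point = rng.normal(10, 8, n)
      wind = rng.gamma(2.0, 1.5, n)
      pressure = rng.normal(1015, 6, n)
      surface = (20 + 6 * so2_col - 0.4 * dew_point - 2.0 * wind
                 + 0.3 * (pressure - 1015) + rng.normal(0, 8, n))

      def r2(X, y):
          """Coefficient of determination for an OLS fit with intercept."""
          X = np.column_stack([np.ones(len(y)), X])
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          resid = y - X @ beta
          return 1 - resid.var() / y.var()

      print("simple regression (column only) R^2:", round(r2(so2_col, surface), 3))
      print("multiple regression (+ meteorology) R^2:",
            round(r2(np.column_stack([so2_col, dew_point, wind, pressure]), surface), 3))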

  19. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    NASA Astrophysics Data System (ADS)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). In contrast, the univariate CWT failed to determine the quaternary mixture components simultaneously and was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines, whereas for the development of the CWT-PLS model a calibration set was prepared by means of an orthogonal experimental design and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet-coefficient and concentration matrices, and validation was performed with both cross-validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations.

  20. On Bayesian methods of exploring qualitative interactions for targeted treatment.

    PubMed

    Chen, Wei; Ghosh, Debashis; Raghunathan, Trivellore E; Norkin, Maxim; Sargent, Daniel J; Bepler, Gerold

    2012-12-10

    Providing personalized treatments designed to maximize benefits and minimize harms is of tremendous current medical interest. One problem in this area is the evaluation of the interaction between the treatment and other predictor variables. Treatment effects in subgroups having the same direction but different magnitudes are called quantitative interactions, whereas those having opposite directions in subgroups are called qualitative interactions (QIs). Identifying QIs is challenging because they are rare and usually unknown among many potential biomarkers. Meanwhile, subgroup analysis reduces the power of hypothesis testing and multiple subgroup analyses inflate the type I error rate. We propose a new Bayesian approach to search for QIs in a multiple regression setting with adaptive decision rules. We consider various regression models for the outcome. We illustrate this method in two examples of phase III clinical trials. The algorithm is straightforward and easy to implement using existing software packages. We provide a sample code in Appendix A. Copyright © 2012 John Wiley & Sons, Ltd.

  1. Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods

    NASA Technical Reports Server (NTRS)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; et al.

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  2. Similar estimates of temperature impacts on global wheat yield by three independent methods

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan

    2016-12-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  3. Predicting dimensions of personality disorder from domains and facets of the Five-Factor Model.

    PubMed

    Reynolds, S K; Clark, L A

    2001-04-01

    We compared the utility of several trait models for describing personality disorder in a heterogeneous clinical sample (N = 94). Participants completed the Schedule for Nonadaptive and Adaptive Personality (SNAP; Clark, 1993b), a self-report measure that assesses traits relevant to personality disorder, and two measures of the Five-Factor Model: the Revised NEO Personality Inventory (NEO-PI-R; Costa and McCrae, 1992) and the Big Five Inventory (BFI; John, Donahue, & Kentle, 1991). Regression analyses indicated substantial overlap between the SNAP scales and the NEO-PI-R facets. In addition, use of the NEO-PI-R facets afforded substantial improvement over the Five-Factor Model domains in predicting interview-based ratings of DSM-IV personality disorder (American Psychiatric Association, 1994), such that the NEO facets and the SNAP scales demonstrated roughly equivalent levels of predictive power. Results support assessment of the full range of NEO-PI-R facets over the Five-Factor Model domains for both research and clinical use.

  4. COINSTAC: Decentralizing the future of brain imaging analysis

    PubMed Central

    Ming, Jing; Verner, Eric; Sarwate, Anand; Kelly, Ross; Reed, Cory; Kahleck, Torran; Silva, Rogers; Panta, Sandeep; Turner, Jessica; Plis, Sergey; Calhoun, Vince

    2017-01-01

    In the era of Big Data, sharing neuroimaging data across multiple sites has become increasingly important. However, researchers who want to engage in centralized, large-scale data sharing and analysis must often contend with problems such as high database cost, long data transfer time, extensive manual effort, and privacy issues for sensitive data. To remove these barriers to enable easier data sharing and analysis, we introduced a new, decentralized, privacy-enabled infrastructure model for brain imaging data called COINSTAC in 2016. We have continued development of COINSTAC since this model was first introduced. One of the challenges with such a model is adapting the required algorithms to function within a decentralized framework. In this paper, we report on how we are solving this problem, along with our progress on several fronts, including additional decentralized algorithms implementation, user interface enhancement, decentralized regression statistic calculation, and complete pipeline specifications. PMID:29123643

  5. Robust fitting for neuroreceptor mapping.

    PubMed

    Chang, Chung; Ogden, R Todd

    2009-03-15

    Among many other uses, positron emission tomography (PET) can be used in studies to estimate the density of a neuroreceptor at each location throughout the brain by measuring the concentration of a radiotracer over time and modeling its kinetics. There are a variety of kinetic models in common usage and these typically rely on nonlinear least-squares (LS) algorithms for parameter estimation. However, PET data often contain artifacts (such as uncorrected head motion) and so the assumptions on which the LS methods are based may be violated. Quantile regression (QR) provides a robust alternative to LS methods and has been used successfully in many applications. We consider fitting various kinetic models to PET data using QR and study the relative performance of the methods via simulation. A data adaptive method for choosing between LS and QR is proposed and the performance of this method is also studied.
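
    As a hedged illustration of the LS-versus-robust comparison (not the kinetic models or PET data used in the paper), the sketch below fits a simple washout rate to a toy time-activity curve contaminated by an artifact, once with ordinary least squares and once with median (quantile) regression from statsmodels; all values are synthetic.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      t = np.linspace(0, 90, 120)                                   # minutes (hypothetical frame times)
      tac = 5.0 * np.exp(-0.05 * t) + rng.normal(0, 0.1, len(t))    # toy time-activity curve
      tac[60:66] += 2.0                                             # artifact, e.g. uncorrected head motion

      # Linearised fit: log(activity) ~ a + b * t, so b estimates the washout rate.
      y, X = np.log(np.clip(tac, 1e-3, None)), sm.add_constant(t)

      ols_b = sm.OLS(y, X).fit().params[1]
      qr_b = sm.QuantReg(y, X).fit(q=0.5).params[1]                 # median (L1) regression
      print(f"true rate -0.050  OLS {ols_b:.3f}  median regression {qr_b:.3f}")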

  6. Social modernization and the increase in the divorce rate.

    PubMed

    Esser, H

    1993-03-01

    The author develops a micro-model of marital interactions that is used to analyze factors affecting the divorce rate in modern industrialized societies. The core of the model is the concept of production of marital gain and mutual control of this production. "The increase of divorce rates, then, is explained by a steady decrease of institutional and social embeddedness, which helps to solve this kind of an 'assurance game.' The shape of the individual risk is explained by the typical form of change of the 'production functions' of marriages within the first period of adaptation. The inconsistent results concerning women's labor market participation in linear regression models are explained as a consequence of the (theoretical and statistical) 'interaction' of decreases in embeddedness and increases in external alternatives for women." Comments are included by Karl-Dieter Opp (pp. 278-82) and Ulrich Witt (pp. 283-5).

  7. [Economic Evaluation of Integrated Care Systems - Scientific Standard Specifications, Challenges, Best Practice Model].

    PubMed

    Pimperl, A; Schreyögg, J; Rothgang, H; Busse, R; Glaeske, G; Hildebrandt, H

    2015-12-01

    Transparency of the economic performance of integrated care systems (IV) is a basic requirement for the acceptance and further development of integrated care. Diverse evaluation methods are used but are seldom openly discussed because of the proprietary nature of the different business models. The aim of this article is to develop a generic model for measuring the economic performance of IV interventions. A catalogue of five quality criteria is used to discuss different evaluation methods (uncontrolled before-after studies, control-group-based approaches, regression models). On this basis a best practice model is proposed. A regression model based on the German morbidity-based risk structure equalisation scheme (MorbiRSA) has some benefits in comparison to the other methods mentioned. In particular, it requires fewer resources to implement and offers advantages concerning the reliability and the transparency of the method (important for acceptance). Its validity is also sound. Although RCTs and, to a lesser extent, complex difference-in-difference matching approaches can lead to higher validity of the results, their feasibility in real-life settings is limited for economic and practical reasons. Central criticisms of a MorbiRSA-based model were therefore addressed, and adaptations were proposed and incorporated into a best practice model: the population-oriented, morbidity-adjusted margin improvement model (P-DBV(MRSA)). The P-DBV(MRSA) approach may be used as a standardised best practice model for the economic evaluation of IV. In parallel to the proposed approach for measuring economic performance, a balanced, quality-oriented performance measurement system should be introduced. This should prevent incentivising IV players to undertake short-term cost cutting at the expense of quality. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Improved animal models for testing gene therapy for atherosclerosis.

    PubMed

    Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A

    2014-04-01

    Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also support the promise of HDAd to deliver long-term therapy from vascular endothelium without accelerating atherosclerotic disease.

  9. A Baseline Patient Model to Support Testing of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Perkusich, Mirko; Almeida, Hyggo O; Perkusich, Angelo; Lima, Mateus A M; Gorgônio, Kyller C

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are currently a trending topic of research. The main challenges are related to the integration and interoperability of connected medical devices, patient safety, physiologic closed-loop control, and the verification and validation of these systems. In this paper, we focus on patient safety and MCPS validation. We present a formal patient model to be used in health care systems validation without jeopardizing the patient's health. To determine the basic patient conditions, our model considers the four main vital signs: heart rate, respiratory rate, blood pressure and body temperature. To generate the vital signs we used regression models based on statistical analysis of a clinical database. Our solution should be used as a starting point for a behavioral patient model and adapted to specific clinical scenarios. We present the modeling process of the baseline patient model and show its evaluation. The conception process may be used to build different patient models. The results show the feasibility of the proposed model as an alternative to the immediate need for clinical trials to test these medical systems.

  10. Computer adaptive test approach to the assessment of children and youth with brachial plexus birth palsy.

    PubMed

    Mulcahey, M J; Merenda, Lisa; Tian, Feng; Kozin, Scott; James, Michelle; Gogola, Gloria; Ni, Pengsheng

    2013-01-01

    This study examined the psychometric properties of item pools relevant to upper-extremity function and activity performance and evaluated simulated 5-, 10-, and 15-item computer adaptive tests (CATs). In a multicenter, cross-sectional study of 200 children and youth with brachial plexus birth palsy (BPBP), parents responded to upper-extremity (n = 52) and activity (n = 34) items using a 5-point response scale. We used confirmatory and exploratory factor analysis, ordinal logistic regression, item maps, and standard errors to evaluate the psychometric properties of the item banks. Validity was evaluated using analysis of variance and Pearson correlation coefficients. Results show that the two item pools have acceptable model fit, scaled well for children and youth with BPBP, and had good validity, content range, and precision. Simulated CATs performed comparably to the full item banks, suggesting that a reduced number of items provide similar information to the entire set of items. Copyright © 2013 by the American Occupational Therapy Association, Inc.

  11. A practical guide to environmental association analysis in landscape genomics.

    PubMed

    Rellstab, Christian; Gugerli, Felix; Eckert, Andrew J; Hancock, Angela M; Holderegger, Rolf

    2015-09-01

    Landscape genomics is an emerging research field that aims to identify the environmental factors that shape adaptive genetic variation and the gene variants that drive local adaptation. Its development has been facilitated by next-generation sequencing, which allows for screening thousands to millions of single nucleotide polymorphisms in many individuals and populations at reasonable costs. In parallel, data sets describing environmental factors have greatly improved and increasingly become publicly accessible. Accordingly, numerous analytical methods for environmental association studies have been developed. Environmental association analysis identifies genetic variants associated with particular environmental factors and has the potential to uncover adaptive patterns that are not discovered by traditional tests for the detection of outlier loci based on population genetic differentiation. We review methods for conducting environmental association analysis including categorical tests, logistic regressions, matrix correlations, general linear models and mixed effects models. We discuss the advantages and disadvantages of different approaches, provide a list of dedicated software packages and their specific properties, and stress the importance of incorporating neutral genetic structure in the analysis. We also touch on additional important aspects such as sampling design, environmental data preparation, pooled and reduced-representation sequencing, candidate-gene approaches, linearity of allele-environment associations and the combination of environmental association analyses with traditional outlier detection tests. We conclude by summarizing expected future directions in the field, such as the extension of statistical approaches, environmental association analysis for ecological gene annotation, and the need for replication and post hoc validation studies. © 2015 John Wiley & Sons Ltd.

  12. The role of climate and out-of-Africa migration in the frequencies of risk alleles for 21 human diseases.

    PubMed

    Blair, Lily M; Feldman, Marcus W

    2015-07-14

    Demography and environmental adaptation can affect the global distribution of genetic variants and possibly the distribution of disease. Population heterozygosity of single nucleotide polymorphisms has been shown to decrease strongly with distance from Africa and this has been attributed to the effect of serial founding events during the migration of humans out of Africa. Additionally, population allele frequencies have been shown to change due to environmental adaptation. Here, we investigate the relationship of Out-of-Africa migration and climatic variables to the distribution of risk alleles for 21 diseases. For each disease, we computed the regression of average heterozygosity and average allele frequency of the risk alleles with distance from Africa and 9 environmental variables. We compared these regressions to a null distribution created by regressing statistics for SNPs not associated with disease on distance from Africa and these environmental variables. Additionally, we used Bayenv 2.0 to assess the signal of environmental adaptation associated with individual risk SNPs. For those SNPs in HGDP and HapMap that are risk alleles for type 2 diabetes, we cannot reject that their distribution is as expected from Out-of-Africa migration. However, the allelic statistics for many other diseases correlate more closely with environmental variables than would be expected from the serial founder effect and show signals of environmental adaptation. We report strong environmental interactions with several autoimmune diseases, and note a particularly strong interaction between asthma and summer humidity. Additionally, we identified several risk genes with strong environmental associations. For most diseases, migration does not explain the distribution of risk alleles and the worldwide pattern of allele frequencies for some diseases may be better explained by environmental associations, which suggests that some selection has acted on these diseases.

  13. Skeletal adaptation to intramedullary pressure-induced interstitial fluid flow is enhanced in mice subjected to targeted osteocyte ablation.

    PubMed

    Kwon, Ronald Y; Meays, Diana R; Meilan, Alexander S; Jones, Jeremiah; Miramontes, Rosa; Kardos, Natalie; Yeh, Jiunn-Chern; Frangos, John A

    2012-01-01

    Interstitial fluid flow (IFF) is a potent regulatory signal in bone. During mechanical loading, IFF is generated through two distinct mechanisms that result in spatially distinct flow profiles: poroelastic interactions within the lacunar-canalicular system, and intramedullary pressurization. While the former generates IFF primarily within the lacunar-canalicular network, the latter generates significant flow at the endosteal surface as well as within the tissue. This gives rise to the intriguing possibility that loading-induced IFF may differentially activate osteocytes or surface-residing cells depending on the generating mechanism, and that sensation of IFF generated via intramedullary pressurization may be mediated by a non-osteocytic bone cell population. To begin to explore this possibility, we used the Dmp1-HBEGF inducible osteocyte ablation mouse model and a microfluidic system for modulating intramedullary pressure (ImP) to assess whether structural adaptation to ImP-driven IFF is altered by partial osteocyte depletion. Canalicular convective velocities during pressurization were estimated through the use of fluorescence recovery after photobleaching and computational modeling. Following osteocyte ablation, transgenic mice exhibited severe losses in bone structure and altered responses to hindlimb suspension in a compartment-specific manner. In pressure-loaded limbs, transgenic mice displayed similar or significantly enhanced structural adaptation to ImP-driven IFF, particularly in the trabecular compartment, despite up to ∼50% of trabecular lacunae being uninhabited following ablation. Interestingly, regression analysis revealed relative gains in bone structure in pressure-loaded limbs were correlated with reductions in bone structure in unpressurized control limbs, suggesting that adaptation to ImP-driven IFF was potentiated by increases in osteoclastic activity and/or reductions in osteoblastic activity incurred independently of pressure loading. Collectively, these studies indicate that structural adaptation to ImP-driven IFF can proceed unimpeded following a significant depletion in osteocytes, consistent with the potential existence of a non-osteocytic bone cell population that senses ImP-driven IFF independently and potentially parallel to osteocytic sensation of poroelasticity-derived IFF.

  14. Logic regression and its extensions.

    PubMed

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
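
    As a toy, hypothetical illustration of the idea (a brute-force miniature rather than the simulated-annealing tree search used by actual logic regression software), the sketch below scores simple two-SNP AND/OR terms inside a logistic regression and keeps the best one; the SNP data and effect sizes are simulated.

      import numpy as np
      from itertools import combinations
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(6)
      n, p = 400, 8
      snps = rng.integers(0, 2, (n, p))                      # binary SNP indicators
      logit = -0.5 + 2.0 * (snps[:, 1] & snps[:, 4])         # outcome driven by an AND interaction
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      best = None
      for i, j in combinations(range(p), 2):
          for name, feat in (("AND", snps[:, i] & snps[:, j]),
                             ("OR",  snps[:, i] | snps[:, j])):
              # Each Boolean term is scored inside a logistic regression by cross-validation.
              score = cross_val_score(LogisticRegression(), feat.reshape(-1, 1), y, cv=5).mean()
              if best is None or score > best[0]:
                  best = (score, f"SNP{i} {name} SNP{j}")

      print("best logic term:", best[1], "cv accuracy:", round(best[0], 3))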

  15. A FORTRAN technique for correlating a circular environmental variable with a linear physiological variable in the sugar maple.

    PubMed

    Pease, J M; Morselli, M F

    1987-01-01

    This paper deals with a computer program adapted to a statistical method for analyzing an unlimited quantity of binary recorded data of an independent circular variable (e.g. wind direction), and a linear variable (e.g. maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods, unless they have been transformed. The program calculates a critical quantity, the acrophase angle (PHI, phi o). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis can be performed following the program or regression which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
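
    A hedged Python analogue of the periodic-regression idea (not the FORTRAN 77 program described above): the circular predictor is expanded into cosine and sine terms, a linear model is fitted, and the acrophase angle is recovered with arctan2; the wind-direction and sap-flow data are simulated.

      import numpy as np

      rng = np.random.default_rng(7)
      theta = rng.uniform(0, 2 * np.pi, 300)          # wind direction in radians
      phi_true = np.deg2rad(40)                       # acrophase: direction of maximum response
      flow = 5 + 3 * np.cos(theta - phi_true) + rng.normal(0, 1, theta.size)  # sap flow volume

      # Periodic regression: flow ~ m + a*cos(theta) + b*sin(theta),
      # since A*cos(theta - phi) = A*cos(phi)*cos(theta) + A*sin(phi)*sin(theta).
      X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
      m, a, b = np.linalg.lstsq(X, flow, rcond=None)[0]

      amplitude = np.hypot(a, b)
      acrophase = np.rad2deg(np.arctan2(b, a)) % 360  # angle of peak response
      print(f"amplitude {amplitude:.2f}, acrophase {acrophase:.1f} degrees (true 40)")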

  16. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
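
    As a rough sketch of one simple kernel PLS variant (not the locally-based method proposed in the paper), the example below builds an RBF Gram matrix from the inputs and uses ordinary PLS regression on it as the smoother; the curve, kernel width, and number of components are arbitrary choices.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(8)
      x = np.linspace(0, 1, 200)
      y = np.sin(6 * np.pi * x) * np.exp(-2 * x) + rng.normal(0, 0.15, x.size)  # noisy curve

      # RBF Gram matrix used as the feature matrix; PLS then extracts a handful of
      # latent components, which acts as the smoother.
      width = 0.05
      K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * width ** 2))

      pls = PLSRegression(n_components=6, scale=False).fit(K, y)
      y_smooth = pls.predict(K).ravel()
      print("residual std:", round(np.std(y - y_smooth), 3))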

  17. Effects of risperidone and parent training on adaptive functioning in children with pervasive developmental disorders and serious behavioral problems

    PubMed Central

    Scahill, Lawrence; McDougle, Christopher J.; Aman, Michael G.; Johnson, Cynthia; Handen, Benjamin; Bearss, Karen; Dziura, James; Butter, Eric; Swiezy, Naomi B.; Arnold, L. Eugene; Stigler, Kimberly A.; Sukhodolsky, Denis D.; Lecavalier, Luc; Pozdol, Stacie L.; Nikolov, Roumen; Ritz, Louise; Hollway, Jill A.; Korzekwa, Patricia; Gavaletz, Allison; Kohn, Arlene E.; Koenig, Kathleen; Grinnon, Stacie; Mulick, James A.; Yu, Sunkyung; Vitiello, Benedetto

    2012-01-01

    Objective Children with Pervasive Developmental Disorders (PDDs) have deficits in social interaction, delayed communication and repetitive behavior as well as impairments in adaptive functioning. Many children actually show decline in adaptive skills compared to age mates over time. Method This 24-week, three-site, controlled clinical trial randomized 124 children (4 through 13 years of age) with PDDs and serious behavior problems to medication alone (MED; N=49; risperidone 0.5 to 3.5 mg/day; if ineffective, a switch to aripiprazole was permitted) or medication plus parent training (PT) (COMB; N=75). Parents of children in COMB received an average of 11.4 PT sessions. Standard scores and Age Equivalent scores on the Vineland Adaptive Behavior Scales were the outcome measures of primary interest. Results Seventeen subjects did not have a post-randomization Vineland. Thus, we used a mixed model with outcome conditioned on the baseline Vineland scores. Both groups showed improvement over the 24-week trial on all Vineland domains. Compared to MED, Vineland Socialization and Adaptive Composite Standard scores showed greater improvement in the COMB group (p = 0.01 and 0.05; effect sizes = 0.35 and 0.22, respectively). On Age Equivalent scores, Socialization and Communication domains showed greater improvement in COMB versus MED (p = 0.03 and 0.05; effect sizes = 0.33 and 0.14, respectively). Using logistic regression, children in the COMB group were twice as likely to make at least 6 months' gain (equal to the passage of time) in the Vineland Communication Age Equivalent score compared to MED (p = 0.02). After controlling for IQ, this difference was no longer significant. Conclusion Reduction of serious maladaptive behavior promotes improvement in adaptive behavior. Medication plus PT shows modest additional benefit over medication alone. PMID:22265360

  18. Importance of stability of early living arrangements on behavior outcomes of children with and without prenatal drug exposure.

    PubMed

    Bada, Henrietta S; Langer, John; Twomey, Jean; Bursi, Charlotte; Lagasse, Linda; Bauer, Charles R; Shankaran, Seetha; Lester, Barry M; Higgins, Rosemary; Maza, Penelope L

    2008-06-01

    We evaluated whether living arrangements of children with or without prenatal drug exposure would be associated with their behavior outcomes and adaptive functioning. A total of 1388 children with or without prenatal cocaine or opiate exposure were enrolled in a longitudinal cohort study at 1 month of age, were seen at intervals, tracked over time for their living situation, and evaluated for behavior problems and adaptive functioning at 3 years of age. The Child Behavior Checklist and Vineland Adaptive Behavior Scales were administered. Using multiple regression models, we determined the factors that would predict behavior problems and adaptive functioning. Of the children enrolled, 1092 children were evaluated. Total and externalizing behavior problems T scores of children in relative care were lower (better) than those in parental care; externalizing behavior scores were lower than those in nonrelative care (p < .05). Total behavior problem scores increased 2.3 and 1.3 points, respectively, with each move per year and each year of Child Protective Services involvement. Compared to children in nonrelative care, those in parental or relative care had higher (better) scores in the Vineland Adaptive Behavior Scales total composite (p < .023), communication (p < .045), and daily living (p < .001). Each caretaker change was associated with a decrease of 2.65 and 2.19 points, respectively, in communication and daily living scores. Children's living arrangements were significantly associated with childhood behavior problems and adaptive functioning. The instability of living situation was also a significant predictor of these outcomes. While family preservation continues to be the goal of the child welfare system, expediting decision toward permanency remains paramount once children are placed in foster care.

  19. Nasal variation in relation to high-altitude adaptations among Tibetans and Andeans.

    PubMed

    Butaric, Lauren N; Klocke, Ross P

    2018-05-01

    High-altitude (>2500 m) populations face several pressures, including hypoxia and cold-dry air, resulting in greater respiratory demand to obtain more oxygen and condition inspired air. While cardiovascular and pulmonary adaptations to high-altitude hypoxia have been extensively studied, adaptations of upper-respiratory structures, e.g., nasal cavity, remain untested. This study investigates whether nasal morphology presents adaptations to hypoxic (larger noses) and/or cold-dry (tall/narrow noses) conditions among high-altitude samples. CT scans of two high- and four low-altitude samples from diverse climates were collected (n = 130): high-altitude Tibetans and Peruvians; low-altitude Peruvians, Southern Chinese (temperate), Mongolian-Buriats (cold-dry), and Southeast Asians (hot-wet). Facial and nasal distances were calculated from 3D landmarks placed on digitally-modeled crania. Temperature, precipitation, and barometric pressure data were also obtained. Principal components analysis and analyses of variance primarily indicate size-related differences among the cold-dry (Mongolian-Buriats) and hot-wet (Southeast Asians) adapted groups. Two-block partial least squares (PLS) analysis show weak relationships between size-standardized nasal dimensions and environmental variables. However, among PLS1 (85.90% of covariance), Tibetans display relatively larger nasal cavities related to lower temperatures and barometric pressure; regression analyses also indicate high-altitude Tibetans possess relatively larger internal nasal breadths and heights for their facial size. Overall, nasal differences relate to climate among the cold-dry and hot-wet groups. Specific nasal adaptations were not identified among either Peruvian group, perhaps due to their relatively recent migration history and population structure. However, high-altitude Tibetans seem to exhibit a compromise in nasal morphology, serving in increased oxygen uptake, and air-conditioning processes. © 2018 Wiley Periodicals, Inc.

  20. A regression tree for identifying combinations of fall risk factors associated to recurrent falling: a cross-sectional elderly population-based study.

    PubMed

    Kabeshova, A; Annweiler, C; Fantino, B; Philip, T; Gromov, V A; Launay, C P; Beauchet, O

    2014-06-01

    Regression tree (RT) analyses are particularly adapted to explore the risk of recurrent falling according to various combinations of fall risk factors compared to logistic regression models. The aims of this study were (1) to determine which combinations of fall risk factors were associated with the occurrence of recurrent falls in older community-dwellers, and (2) to compare the efficacy of RT and multiple logistic regression model for the identification of recurrent falls. A total of 1,760 community-dwelling volunteers (mean age ± standard deviation, 71.0 ± 5.1 years; 49.4 % female) were recruited prospectively in this cross-sectional study. Age, gender, polypharmacy, use of psychoactive drugs, fear of falling (FOF), cognitive disorders and sad mood were recorded. In addition, the history of falls within the past year was recorded using a standardized questionnaire. Among 1,760 participants, 19.7 % (n = 346) were recurrent fallers. The RT identified 14 nodes groups and 8 end nodes with FOF as the first major split. Among participants with FOF, those who had sad mood and polypharmacy formed the end node with the greatest OR for recurrent falls (OR = 6.06 with p < 0.001). Among participants without FOF, those who were male and not sad had the lowest OR for recurrent falls (OR = 0.25 with p < 0.001). The RT correctly classified 1,356 from 1,414 non-recurrent fallers (specificity = 95.6 %), and 65 from 346 recurrent fallers (sensitivity = 18.8 %). The overall classification accuracy was 81.0 %. The multiple logistic regression correctly classified 1,372 from 1,414 non-recurrent fallers (specificity = 97.0 %), and 61 from 346 recurrent fallers (sensitivity = 17.6 %). The overall classification accuracy was 81.4 %. Our results show that RT may identify specific combinations of risk factors for recurrent falls, the combination most associated with recurrent falls involving FOF, sad mood and polypharmacy. The FOF emerged as the risk factor strongly associated with recurrent falls. In addition, RT and multiple logistic regression were not sensitive enough to identify the majority of recurrent fallers but appeared efficient in detecting individuals not at risk of recurrent falls.
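
    A small, hypothetical illustration of the RT-versus-logistic-regression comparison using simulated binary risk factors (fear of falling, sad mood, polypharmacy) rather than the study data; the printed tree shows how combinations of factors define end nodes.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(9)
      n = 1760
      fof = rng.binomial(1, 0.4, n)             # fear of falling
      sad = rng.binomial(1, 0.3, n)             # sad mood
      poly = rng.binomial(1, 0.35, n)           # polypharmacy
      # Recurrent-fall risk is highest when the three factors combine (hypothetical effect sizes).
      p = 1 / (1 + np.exp(-(-2.2 + 1.2 * fof + 0.8 * sad + 0.6 * poly + 1.0 * fof * sad * poly)))
      y = rng.binomial(1, p)
      X = np.column_stack([fof, sad, poly])

      tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
      logit = LogisticRegression().fit(X, y)

      print(export_text(tree, feature_names=["FOF", "sad_mood", "polypharmacy"]))
      print("tree accuracy     :", round(tree.score(X, y), 3))
      print("logistic accuracy :", round(logit.score(X, y), 3))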

  1. Seasonally-Dynamic Presence-Only Species Distribution Models for a Cryptic Migratory Bat Impacted by Wind Energy Development

    PubMed Central

    Hayes, Mark A.; Cryan, Paul M.; Wunder, Michael B.

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn—the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as ‘risk from turbines is highest in habitats between hoary bat summering and wintering grounds’. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution. PMID:26208098
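
    As a loose sketch of the ensemble idea (using only methods available in scikit-learn, so MARS and maximum entropy are not represented), the example below averages suitability predictions from logistic regression, random forest, and boosted trees fitted to simulated presence/background points; the covariates and effects are invented.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

      rng = np.random.default_rng(10)
      n = 1000
      # Hypothetical environmental covariates at presence (1) and background (0) points.
      growing_season = rng.normal(180, 40, n)
      dist_to_coast = rng.gamma(2.0, 100.0, n)
      p_presence = 1 / (1 + np.exp(-(0.02 * (growing_season - 180)
                                     - 0.004 * (dist_to_coast - 200))))
      y = rng.binomial(1, p_presence)
      X = np.column_stack([growing_season, dist_to_coast])

      models = [LogisticRegression(),
                RandomForestClassifier(n_estimators=200, random_state=0),
                GradientBoostingClassifier(random_state=0)]
      # Ensemble suitability: the mean of the per-model presence probabilities.
      suitability = np.mean([m.fit(X, y).predict_proba(X)[:, 1] for m in models], axis=0)
      print("ensemble suitability for first 5 sites:", np.round(suitability[:5], 2))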

  2. Seasonally-dynamic presence-only species distribution models for a cryptic migratory bat impacted by wind energy development

    USGS Publications Warehouse

    Hayes, Mark A.; Cryan, Paul M.; Wunder, Michael B.

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn—the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as ‘risk from turbines is highest in habitats between hoary bat summering and wintering grounds’. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution.

  3. Geographic dimensions of heat-related mortality in seven U.S. cities.

    PubMed

    Hondula, David M; Davis, Robert E; Saha, Michael V; Wegner, Carleigh R; Veazey, Lindsay M

    2015-04-01

    Spatially targeted interventions may help protect the public when extreme heat occurs. Health outcome data are increasingly being used to map intra-urban variability in heat-health risks, but there has been little effort to compare patterns and risk factors between cities. We sought to identify places within large metropolitan areas where the mortality rate is highest on hot summer days and determine if characteristics of high-risk areas are consistent from one city to another. A Poisson regression model was adapted to quantify temperature-mortality relationships at the postal code scale based on 2.1 million records of daily all-cause mortality counts from seven U.S. cities. Multivariate spatial regression models were then used to determine the demographic and environmental variables most closely associated with intra-city variability in risk. Significant mortality increases on extreme heat days were confined to 12-44% of postal codes comprising each city. Places with greater risk had more developed land; more young, elderly, and minority residents; and lower income and educational attainment, but the key explanatory variables varied from one city to another. Regression models accounted for 14-34% of the spatial variability in heat-related mortality. The results emphasize the need for public health plans for heat to be locally tailored and not assume that pre-identified vulnerability indicators are universally applicable. As known risk factors accounted for no more than one third of the spatial variability in heat-health outcomes, consideration of health outcome data is important in efforts to identify and protect residents of the places where the heat-related health risks are the highest. Copyright © 2015 Elsevier Inc. All rights reserved.
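
    A minimal sketch of the kind of postal-code-level Poisson regression described above, relating daily mortality counts to an extreme-heat indicator with population as an exposure offset; the data frame, column names, and effect sizes are hypothetical, not the study's data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical daily records for one postal code: deaths, maximum temperature, population.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "deaths": rng.poisson(2, 1000),
        "tmax": rng.normal(28, 6, 1000),
        "pop": rng.integers(5_000, 50_000, 1000),
    })
    df["heat_day"] = (df["tmax"] >= df["tmax"].quantile(0.95)).astype(int)

    # Poisson regression of mortality counts on extreme-heat days, with population
    # entering as an exposure offset on the log scale.
    model = smf.glm("deaths ~ heat_day", data=df, family=sm.families.Poisson(),
                    offset=np.log(df["pop"])).fit()
    rate_ratio = np.exp(model.params["heat_day"])   # relative mortality on hot days
    print(model.summary())
    ```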

  4. Seasonally-Dynamic Presence-Only Species Distribution Models for a Cryptic Migratory Bat Impacted by Wind Energy Development.

    PubMed

    Hayes, Mark A; Cryan, Paul M; Wunder, Michael B

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn-the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as 'risk from turbines is highest in habitats between hoary bat summering and wintering grounds'. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution.

  5. Age and Adaptive Functioning in Children and Adolescents with ASD: The Effects of Intellectual Functioning and ASD Symptom Severity.

    PubMed

    Hill, Trenesha L; Gray, Sarah A O; Kamps, Jodi L; Enrique Varela, R

    2015-12-01

    The present study examined the moderating effects of intellectual functioning and ASD symptom severity on the relation between age and adaptive functioning in 220 youth with autism spectrum disorder (ASD). Regression analysis indicated that intellectual functioning and ASD symptom severity moderated the relation between age and adaptive functioning. For younger children with lower intellectual functioning, higher ASD symptom severity was associated with better adaptive functioning than that of those with lower ASD symptom severity. Similarly, for older children with higher intellectual functioning, higher ASD symptom severity was associated with better adaptive functioning than that of those with lower ASD symptom severity. Analyses by subscales suggest that this pattern is driven by the Conceptual subscale. Clinical and research implications are discussed.
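
    The moderation analysis described above corresponds to a regression with interaction terms; a minimal sketch with hypothetical variable names and simulated data (not the study's instruments or sample) is given below.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: adaptive-behaviour composite, age, IQ, and ASD severity score.
    rng = np.random.default_rng(0)
    n = 220
    df = pd.DataFrame({"age": rng.uniform(3, 17, n),
                       "iq": rng.normal(85, 20, n),
                       "severity": rng.uniform(4, 10, n)})
    df["adaptive"] = 60 + 0.3*df.iq - 1.0*df.age + 0.05*df.age*df.iq + rng.normal(0, 5, n)

    # Moderation is expressed through age-by-IQ and age-by-severity interaction terms.
    model = smf.ols("adaptive ~ age * iq + age * severity", data=df).fit()
    print(model.summary().tables[1])
    ```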

  6. Age and Adaptive Functioning in Children and Adolescents with ASD: The Effects of Intellectual Functioning and ASD Symptom Severity

    PubMed Central

    Hill, Trenesha L.; Gray, Sarah A. O.; Kamps, Jodi L.; Varela, R. Enrique

    2016-01-01

    The present study examined the moderating effects of intellectual functioning and ASD symptom severity on the relation between age and adaptive functioning in 220 youth with autism spectrum disorder (ASD). Regression analysis indicated that intellectual functioning and ASD symptom severity moderated the relation between age and adaptive functioning. For younger children with lower intellectual functioning, higher ASD symptom severity was associated with better adaptive functioning than that of those with lower ASD symptom severity. Similarly, for older children with higher intellectual functioning, higher ASD symptom severity was associated with better adaptive functioning than that of those with lower ASD symptom severity. Analyses by subscales suggest that this pattern is driven by the Conceptual subscale. Clinical and research implications are discussed. PMID:26174048

  7. Adaptive Online Sequential ELM for Concept Drift Tackling

    PubMed Central

    Basaruddin, Chan

    2016-01-01

    A machine learning method needs to adapt to changes in the environment that occur over time. Such changes are known as concept drift. In this paper, we propose a concept drift tackling method as an enhancement of Online Sequential Extreme Learning Machine (OS-ELM) and Constructive Enhancement OS-ELM (CEOS-ELM) by adding adaptive capability for classification and regression problems. The scheme is named adaptive OS-ELM (AOS-ELM). It is a single-classifier scheme that works well to handle real drift, virtual drift, and hybrid drift. The AOS-ELM also works well for sudden drift and recurrent context change. The scheme is a simple unified method implemented in a few lines of code. We evaluated AOS-ELM on regression and classification problems by using concept drift public data sets (SEA and STAGGER) and other public data sets such as MNIST, USPS, and IDS. Experiments show that our method gives a higher kappa value compared to the multiclassifier ELM ensemble. Even though AOS-ELM in practice does not need an increase in hidden nodes, we address some issues related to increasing the hidden nodes, such as the error condition and rank values. We propose taking the rank of the pseudoinverse matrix as an indicator parameter to detect “underfitting” condition. PMID:27594879
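
    For orientation, the sketch below implements a plain OS-ELM regressor (a random sigmoid hidden layer with recursive least-squares updates of the output weights); it illustrates the base learner that AOS-ELM extends, not the adaptive scheme itself, and the network size and data are placeholders.

    ```python
    import numpy as np

    class OSELM:
        """Minimal Online Sequential ELM sketch for regression."""
        def __init__(self, n_hidden, n_inputs, seed=None):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(size=(n_inputs, n_hidden))   # random input weights
            self.b = rng.normal(size=n_hidden)                # random hidden biases
            self.beta = None                                  # output weights
            self.P = None                                     # inverse correlation matrix

        def _hidden(self, X):
            return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

        def fit_initial(self, X, y):
            H = self._hidden(X)
            self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
            self.beta = self.P @ H.T @ y

        def fit_sequential(self, X, y):
            H = self._hidden(X)
            # Recursive least-squares update of P and beta for a new data chunk.
            K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
            self.P -= self.P @ H.T @ K @ H @ self.P
            self.beta += self.P @ H.T @ (y - H @ self.beta)

        def predict(self, X):
            return self._hidden(X) @ self.beta

    # Usage: an initial batch followed by sequential chunks (e.g., under drifting data).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2)); y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
    elm = OSELM(n_hidden=40, n_inputs=2, seed=0)
    elm.fit_initial(X[:100], y[:100])
    elm.fit_sequential(X[100:200], y[100:200])
    print(np.mean((elm.predict(X[200:]) - y[200:]) ** 2))
    ```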

  8. Eye regression in blind Astyanax cavefish may facilitate the evolution of an adaptive behavior and its sensory receptors.

    PubMed

    Borowsky, Richard

    2013-07-11

    The forces driving the evolutionary loss or simplification of traits such as vision and pigmentation in cave animals are still debated. Three alternative hypotheses are direct selection against the trait, genetic drift, and indirect selection due to antagonistic pleiotropy. Recent work establishes that Astyanax cavefish exhibit vibration attraction behavior (VAB), a presumed behavioral adaptation to finding food in the dark not exhibited by surface fish. Genetic analysis revealed two regions in the genome with quantitative trait loci (QTL) for both VAB and eye size. These observations were interpreted as genetic evidence that selection for VAB indirectly drove eye regression through antagonistic pleiotropy and, further, that this is a general mechanism to account for regressive evolution. These conclusions are unsupported by the data; the analysis fails to establish pleiotropy and ignores the numerous other QTL that map to, and potentially interact, in the same regions. It is likely that all three forces drive evolutionary change. We will be able to distinguish among them in individual cases only when we have identified the causative alleles and characterized their effects.

  9. Regression techniques for oceanographic parameter retrieval using space-borne microwave radiometry

    NASA Technical Reports Server (NTRS)

    Hofer, R.; Njoku, E. G.

    1981-01-01

    Variations of conventional multiple regression techniques are applied to the problem of remote sensing of oceanographic parameters from space. The techniques are specifically adapted to the scanning multichannel microwave radiometer (SMRR) launched on the Seasat and Nimbus 7 satellites to determine ocean surface temperature, wind speed, and atmospheric water content. The retrievals are studied primarily from a theoretical viewpoint, to illustrate the retrieval error structure, the relative importances of different radiometer channels, and the tradeoffs between spatial resolution and retrieval accuracy. Comparisons between regressions using simulated and actual SMMR data are discussed; they show similar behavior.

  10. Drought forecasting in eastern Australia using multivariate adaptive regression spline, least square support vector machine and M5Tree model

    NASA Astrophysics Data System (ADS)

    Deo, Ravinesh C.; Kisi, Ozgur; Singh, Vijay P.

    2017-02-01

    Drought forecasting using standardized metrics of rainfall is a core task in hydrology and water resources management. The Standardized Precipitation Index (SPI) is a rainfall-based metric that caters for the different time-scales at which drought occurs and, due to its standardization, is well-suited for forecasting drought at different periods in climatically diverse regions. This study advances drought modelling using multivariate adaptive regression splines (MARS), least square support vector machine (LSSVM), and M5Tree models by forecasting SPI in eastern Australia. The MARS model incorporated rainfall as a mandatory predictor, with month (periodicity), Southern Oscillation Index, Pacific Decadal Oscillation Index, Indian Ocean Dipole, ENSO Modoki and Niño 3.0, 3.4 and 4.0 data added gradually. The performance was evaluated with root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (r²). The best MARS model required different input combinations, where rainfall, sea surface temperature and periodicity were used for all stations, but the ENSO Modoki and Pacific Decadal Oscillation indices were not required for Bathurst, Collarenebri and Yamba, and the Southern Oscillation Index was not required for Collarenebri. Inclusion of periodicity increased the r² value by 0.5-8.1% and reduced RMSE by 3.0-178.5%. Comparisons showed that MARS surpassed the other counterparts for three out of five stations, with lower MAE by 15.0-73.9% and 7.3-42.2%, respectively. For the other stations, M5Tree was better than MARS/LSSVM, with lower MAE by 13.8-13.4% and 25.7-52.2%, respectively, and for Bathurst, LSSVM yielded the more accurate result. For droughts identified by SPI ≤ -0.5, accurate forecasts were attained by MARS/M5Tree for Bathurst, Yamba and Peak Hill, whereas for Collarenebri and Barraba, M5Tree was better than LSSVM/MARS. Seasonal analysis revealed disparate results where MARS/M5Tree was better than LSSVM. The results highlight the importance of periodicity in drought forecasting and also ascertain that model accuracy varies with geographic/seasonal factors due to the complexity of drought and its relationship with the inputs and data attributes that can affect the evolution of drought events.
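
    As background on the forecast target, the sketch below computes a simplified SPI from a rainfall series by fitting a gamma distribution to accumulated totals and mapping the cumulative probabilities to standard normal scores; a full implementation fits the distribution separately for each calendar month, and the example data are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    def spi(rainfall, timescale=3):
        """Simplified SPI: gamma fit of accumulated rainfall, then transformation of the
        cumulative probabilities to standard normal scores."""
        r = np.asarray(rainfall, dtype=float)
        acc = np.convolve(r, np.ones(timescale), mode="valid")   # e.g., 3-month sums
        shape, loc, scale = stats.gamma.fit(acc[acc > 0], floc=0)
        q = np.mean(acc == 0)                                    # probability of zero rainfall
        cdf = q + (1 - q) * stats.gamma.cdf(acc, shape, loc=loc, scale=scale)
        return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

    # Example: SPI-3 for a synthetic monthly rainfall record.
    rain = np.random.default_rng(1).gamma(2.0, 30.0, size=240)
    print(spi(rain, timescale=3)[:12])
    ```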

  11. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess the oracle property, which would mean performing asymptotically as though the true underlying model had been given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), to take into account the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
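
    A minimal sketch of the classic adaptive lasso in a linear model, obtained by rescaling features with weights taken from an initial unpenalised fit; the AALasso proposed above would instead use the ratio of the standard error to the ML coefficient as the initial weight, and the simulated data are placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, LassoCV

    rng = np.random.default_rng(0)
    n, p = 200, 8
    X = rng.normal(size=(n, p))
    beta_true = np.array([1.5, 0.0, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0])
    y = X @ beta_true + rng.normal(scale=0.5, size=n)

    # Step 1: an initial unpenalised fit supplies the adaptive weights.
    ols = LinearRegression().fit(X, y)
    w = 1.0 / np.abs(ols.coef_)      # classic ALasso weights; AALasso would use the
                                     # SE-to-coefficient ratio here instead.

    # Step 2: adaptive lasso = ordinary lasso on features rescaled by the weights.
    lasso = LassoCV(cv=5).fit(X / w, y)
    beta_alasso = lasso.coef_ / w    # map coefficients back to the original scale
    print(beta_alasso)
    ```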

  12. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model.

  13. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611

  14. Regionalisation of statistical model outputs creating gridded data sets for Germany

    NASA Astrophysics Data System (ADS)

    Höpp, Simona Andrea; Rauthe, Monika; Deutschländer, Thomas

    2016-04-01

    The goal of the German research program ReKliEs-De (regional climate projection ensembles for Germany, http://.reklies.hlug.de) is to distribute robust information about the range and the extremes of future climate for Germany and its neighbouring river catchment areas. This joint research project is supported by the German Federal Ministry of Education and Research (BMBF) and was initiated by the German Federal States. The project results are meant to support the development of adaptation strategies to mitigate the impacts of future climate change. The aim of our part of the project is to adapt and transfer the regionalisation methods of the gridded hydrological data set (HYRAS) from daily station data to the station-based statistical regional climate model output of WETTREG (regionalisation method based on weather patterns). The WETTREG model output covers the period of 1951 to 2100 with a daily temporal resolution. For this, we generate a gridded data set of the WETTREG output for precipitation, air temperature and relative humidity with a spatial resolution of 12.5 km x 12.5 km, which is common for regional climate models. Thus, this regionalisation allows comparing statistical to dynamical climate model outputs. The HYRAS data set was developed by the German Meteorological Service within the German research program KLIWAS (www.kliwas.de) and consists of daily gridded data for Germany and its neighbouring river catchment areas. It has a spatial resolution of 5 km x 5 km for the entire domain for the hydro-meteorological elements precipitation, air temperature and relative humidity and covers the period of 1951 to 2006. After conservative remapping, the HYRAS data set is also suitable for the validation of climate models. The presentation will consist of two parts to present the current state of the adaptation of the HYRAS regionalisation methods to the statistical regional climate model WETTREG: First, an overview of the HYRAS data set and the regionalisation methods for precipitation (REGNIE method based on a combination of multiple linear regression with 5 predictors and inverse distance weighting), air temperature and relative humidity (optimal interpolation) will be given. Finally, results of the regionalisation of WETTREG model output will be shown.
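
    The REGNIE-style regionalisation mentioned above combines multiple linear regression on predictors with inverse distance weighting; the sketch below shows only a generic IDW interpolation of station values onto a regular grid, with hypothetical coordinates and values.

    ```python
    import numpy as np

    def idw_grid(xs, ys, values, grid_x, grid_y, power=2.0):
        """Inverse-distance-weighted interpolation of station values onto a regular grid.
        xs, ys, values: 1-D station coordinates and observations;
        grid_x, grid_y: 1-D coordinates of the target grid axes."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        dist = np.hypot(gx[..., None] - xs, gy[..., None] - ys)
        dist = np.maximum(dist, 1e-10)           # avoid division by zero at station points
        w = 1.0 / dist**power
        return (w * values).sum(axis=-1) / w.sum(axis=-1)

    # Example: interpolate station values onto a 12.5 km x 12.5 km grid (arbitrary units).
    rng = np.random.default_rng(0)
    xs, ys, vals = rng.uniform(0, 100, 10), rng.uniform(0, 100, 10), rng.normal(size=10)
    field = idw_grid(xs, ys, vals, np.arange(0, 100, 12.5), np.arange(0, 100, 12.5))
    print(field.shape)
    ```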

  15. Adaptive and maladaptive perfectionism, and professional burnout among medical laboratory scientists.

    PubMed

    Robakowska, Marlena; Tyrańska-Fobke, Anna; Walkiewicz, Maciej; Tartas, Małgorzata

    2018-05-22

    The goal of this paper is to verify the correlations between adaptive and maladaptive perfectionism and the selected demographic and job characteristics vs. professional burnout among medical laboratory scientists in Poland. The study group consisted of 166 laboratory scientists. The Polish Adaptive and Maladaptive Perfectionism Questionnaire (Szczucka) was used for testing perfectionism. The Oldenburg Burnout Inventory was used for examining burnout syndrome. Adaptive perfectionism was positively and maladaptive perfectionism was negatively correlated with both aspects of professional burnout: disengagement from work and exhaustion. What is more, maladaptive perfectionism was correlated negatively with age and work experience. People in relationships have a higher level of disengagement and a higher level of exhaustion than single ones. The results of hierarchical regression analyses revealed, after controlling for selected demographic and job factors, that a significant predictor of disengagement is a high level of adaptive perfectionism and a low level of maladaptive perfectionism. In addition, a significant predictor of a high level of exhaustion is a low level of maladaptive perfectionism. Professional burnout among medical laboratory scientists is of a specific nature. The "healthier" the perfectionism they reveal, the higher the level of burnout they present. In this profession, lower risk of burnout is found among those who are characterized by a lack of confidence in the quality of their actions and a negative reaction to their own imperfections associated with the imposed social obligation to be perfect. The individuals pursuing their internal high standards experience burnout faster. Med Pr 2018;69(3):253-260. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  16. The Lateral Tracking Control for the Intelligent Vehicle Based on Adaptive PID Neural Network.

    PubMed

    Han, Gaining; Fu, Weiping; Wang, Wen; Wu, Zongsheng

    2017-05-30

    The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper mainly designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built. Using forgetting factor recursive least square estimation (FFRLS), the system parameters are identified. Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm have high real-time performance and robustness in path tracking control. This provides a certain theoretical basis for intelligent vehicle autonomous navigation tracking control, and lays the foundation for the vertical and lateral coupling control.
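
    A minimal sketch of forgetting-factor recursive least squares (FFRLS) identifying a second-order ARX-type model, in the spirit of the identification step described above; the simulated system and all parameter values are placeholders, not the vehicle or steering model.

    ```python
    import numpy as np

    def ffrls(u, y, lam=0.98):
        """Forgetting-factor recursive least squares for a second-order ARX model:
        y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2)."""
        theta = np.zeros(4)                       # [a1, a2, b1, b2]
        P = 1e6 * np.eye(4)
        for k in range(2, len(y)):
            phi = np.array([-y[k-1], -y[k-2], u[k-1], u[k-2]])
            K = P @ phi / (lam + phi @ P @ phi)
            theta = theta + K * (y[k] - phi @ theta)
            P = (P - np.outer(K, phi) @ P) / lam
        return theta

    # Identify a known discrete-time system from simulated input/output data.
    rng = np.random.default_rng(0)
    u = rng.normal(size=500)
    y = np.zeros(500)
    for k in range(2, 500):
        y[k] = 1.2*y[k-1] - 0.5*y[k-2] + 0.3*u[k-1] + 0.1*u[k-2] + 0.01*rng.normal()
    print(ffrls(u, y))   # should approach [-1.2, 0.5, 0.3, 0.1]
    ```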

  17. The Lateral Tracking Control for the Intelligent Vehicle Based on Adaptive PID Neural Network

    PubMed Central

    Han, Gaining; Fu, Weiping; Wang, Wen; Wu, Zongsheng

    2017-01-01

    The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper mainly designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built. Using forgetting factor recursive least square estimation (FFRLS), the system parameters are identified. Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm have high real-time performance and robustness in path tracking control. This provides a certain theoretical basis for intelligent vehicle autonomous navigation tracking control, and lays the foundation for the vertical and lateral coupling control. PMID:28556817

  18. An Interactive Tool For Semi-automated Statistical Prediction Using Earth Observations and Models

    NASA Astrophysics Data System (ADS)

    Zaitchik, B. F.; Berhane, F.; Tadesse, T.

    2015-12-01

    We developed a semi-automated statistical prediction tool applicable to concurrent analysis or seasonal prediction of any time series variable in any geographic location. The tool was developed using Shiny, JavaScript, HTML and CSS. A user can extract a predictand by drawing a polygon over a region of interest on the provided user interface (global map). The user can select the Climatic Research Unit (CRU) precipitation or Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) as predictand. They can also upload their own predictand time series. Predictors can be extracted from sea surface temperature, sea level pressure, winds at different pressure levels, air temperature at various pressure levels, and geopotential height at different pressure levels. By default, reanalysis fields are applied as predictors, but the user can also upload their own predictors, including a wide range of compatible satellite-derived datasets. The package generates correlations of the variables selected with the predictand. The user also has the option to generate composites of the variables based on the predictand. Next, the user can extract predictors by drawing polygons over the regions that show strong correlations (composites). Then, the user can select some or all of the statistical prediction models provided. Provided models include Linear Regression models (GLM, SGLM), Tree-based models (bagging, random forest, boosting), Artificial Neural Network, and other non-linear models such as Generalized Additive Model (GAM) and Multivariate Adaptive Regression Splines (MARS). Finally, the user can download the analysis steps they used, such as the region they selected, the time period they specified, the predictand and predictors they chose and preprocessing options they used, and the model results in PDF or HTML format. Key words: Semi-automated prediction, Shiny, R, GLM, ANN, RF, GAM, MARS

  19. Survival analysis with error-prone time-varying covariates: a risk set calibration approach

    PubMed Central

    Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna

    2010-01-01

    Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, which demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard’s Health Professionals Follow-up Study (HPFS). PMID:20486928

  20. Temperature-based estimation of global solar radiation using soft computing methodologies

    NASA Astrophysics Data System (ADS)

    Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak

    2016-07-01

    Precise knowledge of solar radiation is indeed essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation would be appealing owing to broad availability of measured air temperatures. In this study, the potentials of soft computing techniques are evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation between three methodologies of adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly) is performed. Five combinations of Tmax, Tmin, and Tavg serve as inputs to develop ANFIS, SVR-rbf, and SVR-poly models. The attained results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. Based upon all techniques, the higher accuracies are achieved by models (5) using Tmax − Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m², 2.0716 MJ/m², and 0.9380, respectively. The survey results approve that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
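
    A minimal sketch of an RBF-kernel SVR trained on temperature-based inputs (here Tmax − Tmin and Tmax, mirroring input combination (5) above); the data are synthetic placeholders rather than the measured Iranian station records.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Synthetic daily records: Tmax, Tmin and a Hargreaves-style radiation target (MJ/m^2).
    rng = np.random.default_rng(0)
    tmax = rng.uniform(10, 40, 730)
    tmin = tmax - rng.uniform(5, 15, 730)
    dhgsr = 0.16 * np.sqrt(tmax - tmin) * 30 + rng.normal(0, 1.5, 730)

    X = np.column_stack([tmax - tmin, tmax])    # input combination (5) from the abstract
    X_tr, X_te, y_tr, y_te = train_test_split(X, dhgsr, test_size=0.3, random_state=0)

    svr_rbf = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    svr_rbf.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, svr_rbf.predict(X_te)))
    print(rmse)
    ```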

  1. Long-range forecast of all India summer monsoon rainfall using adaptive neuro-fuzzy inference system: skill comparison with CFSv2 model simulation and real-time forecast for the year 2015

    NASA Astrophysics Data System (ADS)

    Chaudhuri, S.; Das, D.; Goswami, S.; Das, S. K.

    2016-11-01

    All India summer monsoon rainfall (AISMR) characteristics play a vital role in the policy planning and national economy of the country. In view of the significant impact of the monsoon system on regional as well as global climate systems, accurate prediction of summer monsoon rainfall has become a challenge. The objective of this study is to develop an adaptive neuro-fuzzy inference system (ANFIS) for long range forecast of AISMR. The NCEP/NCAR reanalysis data of temperature, zonal and meridional wind at different pressure levels have been taken to construct the input matrix of ANFIS. The membership of the input parameters for AISMR as high, medium or low is estimated with a trapezoidal membership function. The fuzzified standardized input parameters and the de-fuzzified target output are trained with artificial neural network models. The forecast of AISMR with ANFIS is compared with non-hybrid multi-layer perceptron (MLP), radial basis function network (RBFN) and multiple linear regression (MLR) models. The forecast error analyses of the models reveal that ANFIS provides the best forecast of AISMR, with a minimum prediction error of 0.076, whereas the errors with the MLP, RBFN and MLR models are 0.22, 0.18 and 0.73, respectively. During validation with observations, ANFIS shows its advantage over the comparative models. Performance of the ANFIS model is verified through different statistical skill scores, which also confirm the aptitude of ANFIS in forecasting AISMR. The forecast skill of ANFIS is also observed to be better than that of Climate Forecast System version 2. The real-time forecast with ANFIS shows the possibility of a deficit (65-75 cm) of AISMR in the year 2015.

  2. Adaptive Anchoring Model: How Static and Dynamic Presentations of Time Series Influence Judgments and Predictions.

    PubMed

    Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick

    2018-01-01

    When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  3. Closed-form solution for static pull-in voltage of electrostatically actuated clamped-clamped micro/nano beams under the effect of fringing field and van der Waals force

    NASA Astrophysics Data System (ADS)

    Bhojawala, V. M.; Vakharia, D. P.

    2017-12-01

    This investigation provides an accurate prediction of the static pull-in voltage for clamped-clamped micro/nano beams based on a distributed model. The Euler-Bernoulli beam theory is used, incorporating the geometric non-linearity of the beam, internal (residual) stress, van der Waals force, distributed electrostatic force and fringing field effects, to derive the governing differential equation. The Galerkin discretisation method is used to build a reduced-order model of the governing differential equation. A regime plot is presented for determining the number of modes required in the reduced-order model to obtain a fully converged pull-in voltage for micro/nano beams. A closed-form relation is developed from curve fitting of the pull-in instability plots and subsequent non-linear regression on the proposed relation. The regression analysis yields a chi-square (χ²) tolerance value of 1 × 10⁻⁹, an adjusted R-square value of 0.99929 and a P-value of zero; these statistical parameters indicate the convergence of the non-linear fit, the accuracy of the fitted data and the significance of the proposed model, respectively. The closed-form equation is validated against available experimental and numerical results. A relative maximum error of 4.08% in comparison with several available experimental and numerical data sets confirms the reliability of the proposed closed-form equation.

  4. Human motion tracking by temporal-spatial local gaussian process experts.

    PubMed

    Zhao, Xu; Fu, Yun; Liu, Yuncai

    2011-04-01

    Human pose estimation via motion tracking systems can be considered as a regression problem within a discriminative framework. It is always a challenging task to model the mapping from observation space to state space because of the high-dimensional characteristic in the multimodal conditional distribution. In order to build the mapping, existing techniques usually involve a large set of training samples in the learning process but are limited in their capability to deal with multimodality. We propose, in this work, a novel online sparse Gaussian Process (GP) regression model to recover 3-D human motion in monocular videos. Particularly, we investigate the fact that for a given test input, its output is mainly determined by the training samples potentially residing in its local neighborhood and defined in the unified input-output space. This leads to a local mixture GP experts system composed of different local GP experts, each of which dominates a mapping behavior with a specific covariance function adapting to a local region. To handle the multimodality, we combine both temporal and spatial information and therefore obtain two categories of local experts. The temporal and spatial experts are integrated into a seamless hybrid system, which is automatically self-initialized and robust for visual tracking of nonlinear human motion. Learning and inference are extremely efficient as all the local experts are defined online within very small neighborhoods. Extensive experiments on two real-world databases, HumanEva and PEAR, demonstrate the effectiveness of our proposed model, which significantly improves the performance of existing models.
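
    A rough sketch of the local-expert idea: for each query, a small Gaussian process is fitted online to its k nearest training pairs and used for prediction; the feature and pose dimensions are placeholders, and the temporal experts and mixture step of the paper are omitted.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.neighbors import NearestNeighbors

    def local_gp_predict(X_train, Y_train, x_query, k=25):
        """Fit a small GP expert on the k nearest training samples of the query
        (defined online in the input space) and return its prediction."""
        nn = NearestNeighbors(n_neighbors=k).fit(X_train)
        idx = nn.kneighbors(x_query.reshape(1, -1), return_distance=False)[0]
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
        gp.fit(X_train[idx], Y_train[idx])
        return gp.predict(x_query.reshape(1, -1))[0]

    # Synthetic observation-to-pose pairs (image descriptors -> 3-D pose parameters).
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 10))
    Y_train = X_train @ rng.normal(size=(10, 3))
    print(local_gp_predict(X_train, Y_train, rng.normal(size=10)))
    ```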

  5. Parametric regression model for survival data: Weibull regression model as an example

    PubMed Central

    2016-01-01

    Weibull regression model is one of the most popular forms of parametric regression model that it provides estimate of baseline hazard function, as well as coefficients for covariates. Because of technical difficulties, Weibull regression model is seldom used in medical literature as compared to the semi-parametric proportional hazard model. To make clinical investigators familiar with Weibull regression model, this article introduces some basic knowledge on Weibull regression model and then illustrates how to fit the model with R software. The SurvRegCensCov package is useful in converting estimated coefficients to clinical relevant statistics such as hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variable. The eha package provides an alternative method to model Weibull regression model. The check.dist() function helps to assess goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualization of Weibull regression model after model development is interesting that it provides another way to report your findings. PMID:28149846
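
    The abstract describes an R workflow; as a rough Python analogue (assuming the lifelines package and its bundled example data), the sketch below fits a Weibull accelerated-failure-time model whose exponentiated coefficients are event time ratios.

    ```python
    from lifelines import WeibullAFTFitter
    from lifelines.datasets import load_rossi

    # Example data bundled with lifelines: recidivism times ("week") with censoring ("arrest").
    df = load_rossi()

    aft = WeibullAFTFitter()
    aft.fit(df, duration_col="week", event_col="arrest")
    aft.print_summary()   # coefficients on the log(time) scale

    # Exponentiated coefficients are event time ratios (ETR), the AFT analogue of the
    # clinically oriented quantities produced by the R SurvRegCensCov workflow above.
    ```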

  6. Effects of diurnal temperature range and drought on wheat yield in Spain

    NASA Astrophysics Data System (ADS)

    Hernandez-Barrera, S.; Rodriguez-Puebla, C.; Challinor, A. J.

    2017-07-01

    This study aims to provide new insight on the wheat yield historical response to climate processes throughout Spain by using statistical methods. Our data includes observed wheat yield, pseudo-observations E-OBS for the period 1979 to 2014, and outputs of general circulation models in phase 5 of the Coupled Models Inter-comparison Project (CMIP5) for the period 1901 to 2099. In investigating the relationship between climate and wheat variability, we have applied the approach known as the partial least-square regression, which captures the relevant climate drivers accounting for variations in wheat yield. We found that drought occurring in autumn and spring and the diurnal range of temperature experienced during the winter are major processes to characterize the wheat yield variability in Spain. These observable climate processes are used for an empirical model that is utilized in assessing the wheat yield trends in Spain under different climate conditions. To isolate the trend within the wheat time series, we implemented the adaptive approach known as Ensemble Empirical Mode Decomposition. Wheat yields in the twenty-first century are experiencing a downward trend that we claim is a consequence of widespread drought over the Iberian Peninsula and an increase in the diurnal range of temperature. These results are important to inform about the wheat vulnerability in this region to coming changes and to develop adaptation strategies.
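
    A minimal sketch of partial least-squares regression relating a yield series to a few climate drivers, in the spirit of the approach described above; the predictor names and data are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical predictors: autumn drought index, spring drought index, winter DTR.
    rng = np.random.default_rng(0)
    n_years = 36
    X = rng.normal(size=(n_years, 3))
    yield_anom = 0.6*X[:, 0] + 0.4*X[:, 1] - 0.5*X[:, 2] + rng.normal(0, 0.3, n_years)

    pls = PLSRegression(n_components=2)
    r2_cv = cross_val_score(pls, X, yield_anom, cv=5, scoring="r2").mean()
    pls.fit(X, yield_anom)
    print(pls.coef_.ravel(), r2_cv)   # direction and strength of each climate driver
    ```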

  7. Introduction to the use of regression models in epidemiology.

    PubMed

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.

  8. Subpixel Snow Cover Mapping from MODIS Data by Nonparametric Regression Splines

    NASA Astrophysics Data System (ADS)

    Akyurek, Z.; Kuter, S.; Weber, G. W.

    2016-12-01

    Spatial extent of snow cover is often considered one of the key parameters in climatological, hydrological and ecological modeling due to its energy storage, high reflectance in the visible and NIR regions of the electromagnetic spectrum, significant heat capacity and insulating properties. A significant challenge in snow mapping by remote sensing (RS) is the trade-off between the temporal and spatial resolution of satellite imageries. In order to tackle this issue, machine learning-based subpixel snow mapping methods, like Artificial Neural Networks (ANNs), from low or moderate resolution images have been proposed. Multivariate Adaptive Regression Splines (MARS) is a nonparametric regression tool that can build flexible models for high dimensional and complex nonlinear data. Although MARS is not often employed in RS, it has various successful implementations such as estimation of vertical total electron content in the ionosphere, atmospheric correction and classification of satellite images. This study is the first attempt in RS to evaluate the applicability of MARS for subpixel snow cover mapping from MODIS data. A total of 16 MODIS-Landsat ETM+ image pairs taken over the European Alps between March 2000 and April 2003 were used in the study. MODIS top-of-atmosphere reflectance, NDSI, NDVI and land cover classes were used as predictor variables. Cloud-covered, cloud shadow, water and bad-quality pixels were excluded from further analysis by a spatial mask. MARS models were trained and validated by using reference fractional snow cover (FSC) maps generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also developed. The obtained MARS and ANN models were compared on independent test areas. The MARS model performed better than the ANN model, with an average RMSE of 0.1288 over the independent test areas, whereas the average RMSE of the ANN model was 0.1500. MARS estimates for low FSC values (i.e., FSC < 0.3) were better than those of the ANN. Both ANN and MARS tended to overestimate medium FSC values (i.e., 0.3 < FSC < 0.7).

  9. mPLR-Loc: an adaptive decision multi-label classifier based on penalized logistic regression for protein subcellular localization prediction.

    PubMed

    Wan, Shibiao; Mak, Man-Wai; Kung, Sun-Yuan

    2015-03-15

    Proteins located in appropriate cellular compartments are of paramount importance to exert their biological functions. Prediction of protein subcellular localization by computational methods is required in the post-genomic era. Recent studies have been focusing on predicting not only single-location proteins but also multi-location proteins. However, most of the existing predictors are far from effective for tackling the challenges of multi-label proteins. This article proposes an efficient multi-label predictor, namely mPLR-Loc, based on penalized logistic regression and adaptive decisions for predicting both single- and multi-location proteins. Specifically, for each query protein, mPLR-Loc exploits the information from the Gene Ontology (GO) database by using its accession number (AC) or the ACs of its homologs obtained via BLAST. The frequencies of GO occurrences are used to construct feature vectors, which are then classified by an adaptive decision-based multi-label penalized logistic regression classifier. Experimental results based on two recent stringent benchmark datasets (virus and plant) show that mPLR-Loc remarkably outperforms existing state-of-the-art multi-label predictors. In addition to being able to rapidly and accurately predict subcellular localization of single- and multi-label proteins, mPLR-Loc can also provide probabilistic confidence scores for the prediction decisions. For readers' convenience, the mPLR-Loc server is available online (http://bioinfo.eie.polyu.edu.hk/mPLRLocServer). Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Regional flow duration curves: Geostatistical techniques versus multivariate regression

    USGS Publications Warehouse

    Pugliese, Alessio; Farmer, William H.; Castellarin, Attilio; Archfield, Stacey A.; Vogel, Richard M.

    2016-01-01

    A period-of-record flow duration curve (FDC) represents the relationship between the magnitude and frequency of daily streamflows. Prediction of FDCs is of great importance for locations characterized by sparse or missing streamflow observations. We present a detailed comparison of two methods which are capable of predicting an FDC at ungauged basins: (1) an adaptation of the geostatistical method, Top-kriging, employing a linear weighted average of dimensionless empirical FDCs, standardised with a reference streamflow value; and (2) regional multiple linear regression of streamflow quantiles, perhaps the most common method for the prediction of FDCs at ungauged sites. In particular, Top-kriging relies on a metric for expressing the similarity between catchments computed as the negative deviation of the FDC from a reference streamflow value, which we termed total negative deviation (TND). Comparisons of these two methods are made in 182 largely unregulated river catchments in the southeastern U.S. using a three-fold cross-validation algorithm. Our results reveal that the two methods perform similarly throughout flow-regimes, with average Nash-Sutcliffe Efficiencies of 0.566 and 0.662 (0.883 and 0.829 on log-transformed quantiles) for the geostatistical and the linear regression models, respectively. Differences in the reproduction of FDCs occurred mostly for low flows with exceedance probability (i.e. duration) above 0.98.
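
    The Nash-Sutcliffe efficiency used in the comparison above can be computed directly; a minimal sketch with made-up quantile values is given below.

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Compare predicted and empirical flow-duration-curve quantiles on the natural
    # and the log scale, as in the cross-validation summarised above.
    obs = np.array([120.0, 45.0, 20.0, 8.0, 2.5, 0.9])
    sim = np.array([110.0, 50.0, 18.0, 9.0, 2.0, 1.2])
    print(nash_sutcliffe(obs, sim), nash_sutcliffe(np.log(obs), np.log(sim)))
    ```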

  11. Interpretation of commonly used statistical regression models.

    PubMed

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
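
    A minimal sketch of the logistic-regression/SVM classification step described above, flagging time points whose 3D tracking error exceeds a 2.5 mm threshold; the three features and the simulated data are placeholders for the real tracking records.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import recall_score

    # Hypothetical per-time-point features: previous tracking error, prediction quality,
    # and cosine of the angle between the motion trajectory and the treatment beam.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 3))
    err3d = 1.0 + 0.8*X[:, 0] - 0.6*X[:, 1] + 0.4*X[:, 2] + rng.normal(0, 0.5, 5000)
    y = (err3d > 2.5).astype(int)          # 1 = 3D error exceeds the 2.5 mm threshold

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    svm = SVC(kernel="rbf").fit(X_tr, y_tr)
    # Fraction of out-of-threshold errors that each classifier detects.
    print(recall_score(y_te, logit.predict(X_te)), recall_score(y_te, svm.predict(X_te)))
    ```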

  13. Adaptive and accelerated tracking-learning-detection

    NASA Astrophysics Data System (ADS)

    Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu

    2013-08-01

    An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD) and based on the novel Tracking-Learning-Detection (TLD) framework, is introduced in this paper. The improvement focuses on two aspects. The first is adaptation: the algorithm no longer depends on pre-defined scanning grids because it generates the scale space online. The second is efficiency: it uses algorithm-level acceleration, such as scale prediction with an auto-regression and moving average (ARMA) model that learns the object motion to narrow the detector's search range, and a fixed number of positive and negative samples that ensures a constant retrieval time, as well as CPU and GPU parallelism for hardware acceleration. In addition, to obtain a better effect, some details of TLD are redesigned: results are integrated using a weight that combines the normalized correlation coefficient and the scale size, and distance metric thresholds are adjusted online. A comparative experiment on success rate, center location error and execution time shows a performance and efficiency upgrade over the state-of-the-art TLD on partial TLD datasets and Shenzhou IX return capsule image sequences. The algorithm can be used in the field of video surveillance to meet the need for real-time video tracking.

  14. Relating adaptive genetic traits to climate for Sandberg bluegrass from the intermountain western United States.

    PubMed

    Johnson, Richard C; Horning, Matthew E; Espeland, Erin K; Vance-Borland, Ken

    2015-02-01

    Genetic variation for potentially adaptive traits of the key restoration species Sandberg bluegrass (Poa secunda J. Presl) was assessed over the intermountain western United States in relation to source population climate. Common gardens were established at two intermountain west sites with progeny from two maternal parents from each of 130 wild populations. Data were collected over 2 years at each site on fifteen plant traits associated with production, phenology, and morphology. Analyses of variance revealed strong population differences for all plant traits (P < 0.0001), indicating genetic variation. Both canonical correlation and linear correlation established associations between source populations and climate variability. Populations from warmer, more arid climates had generally lower dry weight, earlier phenology, and smaller, narrower leaves than those from cooler, moister climates. The first three canonical variates were regressed with climate variables, resulting in significant models (P < 0.0001) used to map 12 seed zones. Of the 700 981 km² mapped, four seed zones represented 92% of the area in typically semi-arid and arid regions. The association of genetic variation with source climates in the intermountain west suggested climate-driven natural selection and evolution. We recommend seed transfer zones and population movement guidelines to enhance adaptation and diversity for large-scale restoration projects.

  15. Climate change: believing and seeing implies adapting.

    PubMed

    Blennow, Kristina; Persson, Johannes; Tomé, Margarida; Hanewinkel, Marc

    2012-01-01

    Knowledge of factors that trigger human response to climate change is crucial for effective climate change policy communication. Climate change has been claimed to have low salience as a risk issue because it cannot be directly experienced. Still, personal factors such as strength of belief in local effects of climate change have been shown to correlate strongly with responses to climate change, and there is a growing literature on the hypothesis that personal experience of climate change (and/or its effects) explains responses to climate change. Here we provide, using survey data from 845 private forest owners operating in a wide range of bio-climatic as well as economic-social-political structures in a latitudinal gradient across Europe, the first evidence that the personal strength of belief and perception of local effects of climate change highly significantly explain human responses to climate change. A logistic regression model was fitted to the two variables, estimating expected probabilities ranging from 0.07 (SD ± 0.01) to 0.81 (SD ± 0.03) for self-reported adaptive measures taken. Adding socio-demographic variables improved the fit, estimating expected probabilities ranging from 0.022 (SD ± 0.008) to 0.91 (SD ± 0.02). We conclude that to explain and predict adaptation to climate change, the combination of personal experience and belief must be considered.

  16. A general practice-based study of the relationship between indicators of mental illness and challenging behaviour among adults with intellectual disabilities.

    PubMed

    Felce, D; Kerr, M; Hastings, R P

    2009-03-01

    Existing studies tend to show a positive association between mental illness and challenging behaviour among adults with intellectual disabilities (ID). However, whether the association is direct or artefactual is less clear. The purpose was to explore the association between psychiatric status and level of challenging behaviour, while controlling for adaptive behaviour and occurrence of autistic spectrum disorders. Data were collected on the age, gender, adaptive and challenging behaviour, social impairment and psychiatric status of 312 adults with ID. Participants were divided according to psychiatric status, group equivalence in adaptive behaviour and the presence of autistic spectrum disorders was achieved, and differences in challenging behaviour were explored. In addition, multiple regression was used to examine the association between psychiatric status and challenging behaviour after controlling for other participant characteristics and to test whether the interaction between psychiatric status and adaptive behaviour added significantly to the explanation. Challenging behaviour was higher among participants meeting threshold levels on the psychiatric screen. The regression analysis confirmed the association and demonstrated an interaction between total score on the psychiatric screen and level of adaptive behaviour. This moderated effect showed the relationship between psychiatric status and challenging behaviour to be stronger at lower levels of adaptive behaviour. This study reinforces previous findings that psychiatric morbidity among people with ID is associated with higher levels of challenging behaviour and supports predictions that this association is more pronounced for people with severe ID. The precise nature and causal direction of the association require further clarification. However, the understanding of how psychiatric problems might contribute to challenging behaviour needs to be part of the clinical appreciation of such behaviour.

  17. A comparison between ten advanced and soft computing models for groundwater qanat potential assessment in Iran using R and GIS

    NASA Astrophysics Data System (ADS)

    Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Abbaspour, Karim

    2018-02-01

    Considering the unstable condition of water resources in Iran and many other countries in arid and semi-arid regions, groundwater studies are very important. Therefore, the aim of this study is to model groundwater potential by qanat locations as indicators and ten advanced and soft computing models applied to the Beheshtabad Watershed, Iran. Qanat is a man-made underground construction which gathers groundwater from higher altitudes and transmits it to low land areas where it can be used for different purposes. For this purpose, at first, the location of the qanats was detected using extensive field surveys. These qanats were classified into two datasets including training (70%) and validation (30%). Then, 14 influence factors depicting the region's physical, morphological, lithological, and hydrological features were identified to model groundwater potential. Linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), flexible discriminant analysis (FDA), penalized discriminant analysis (PDA), boosted regression tree (BRT), random forest (RF), artificial neural network (ANN), K-nearest neighbor (KNN), multivariate adaptive regression splines (MARS), and support vector machine (SVM) models were applied in R scripts to produce groundwater potential maps. For evaluation of the performance accuracies of the developed models, ROC curve and kappa index were implemented. According to the results, RF had the best performance, followed by SVM and BRT models. Our results showed that qanat locations could be used as a good indicator for groundwater potential. Furthermore, altitude, slope, plan curvature, and profile curvature were found to be the most important influence factors. On the other hand, lithology, land use, and slope aspect were the least significant factors. The methodology in the current study could be used by land use and terrestrial planners and water resource managers to reduce the costs of groundwater resource discovery.

  18. Users manual for flight control design programs

    NASA Technical Reports Server (NTRS)

    Nalbandian, J. Y.

    1975-01-01

    Computer programs for the design of analog and digital flight control systems are documented. The program DIGADAPT uses linear-quadratic-gaussian synthesis algorithms in the design of command response controllers and state estimators, and it applies covariance propagation analysis to the selection of sampling intervals for digital systems. Program SCHED executes correlation and regression analyses for the development of gain and trim schedules to be used in open-loop explicit-adaptive control laws. A linear-time-varying simulation of aircraft motions is provided by the program TVHIS, which includes guidance and control logic, as well as models for control actuator dynamics. The programs are coded in FORTRAN and are compiled and executed on both IBM and CDC computers.

  19. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Qi, X.; Sheng, K.

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may be caused by the biomechanical deformation process. Accuracy and stability of the model response were validated using ground-truth simulations representing soft tissue behavior under local and global deformations. Numerical accuracy of the HN deformations was analyzed by applying nonrigid skeletal transformations acquired from interfraction kVCT images to the model's skeletal structures and comparing the subsequent soft tissue deformations of the model with the clinical anatomy. Results: The GPU based framework enabled the model deformation to be performed at 60 frames/s, facilitating simulations of posture changes and physiological regressions at interactive speeds. The soft tissue response was accurate with an R² value of >0.98 when compared to ground-truth global and local force deformation analysis. The deformation of the HN anatomy by the model agreed with the clinically observed deformations with an average correlation coefficient of 0.956. For a clinically relevant range of posture and physiological changes, the model deformations stabilized with an uncertainty of less than 0.01 mm. Conclusions: Documenting dose delivery for HN radiotherapy is essential, accounting for posture and physiological changes. The biomechanical model discussed in this paper was able to deform in real-time, allowing interactive simulations and visualization of such changes. The model would allow patient-specific validations of the DIR method and has the potential to be a significant aid in adaptive radiotherapy techniques.
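
    To make the soft-tissue update concrete, here is a minimal sketch of one implicit-Euler substep for a linear mass-spring-damper system, the building block named in this record, reduced to a 1D chain anchored at one end. The chain layout, stiffness, damping, and masses are illustrative assumptions, not the authors' GPU implementation.

```python
# Minimal sketch: implicit Euler for a linear mass-spring-damper chain in 1D.
# Solves (M + dt*C + dt^2*K) v_new = M v + dt*(f_ext - K x), then x_new = x + dt*v_new.
import numpy as np

n, dt = 5, 1e-2
m = np.full(n, 0.1)                    # normalized voxel masses (illustrative)
k, c = 50.0, 0.5                       # spring stiffness and damping (illustrative)

# Assemble stiffness for springs between consecutive nodes, plus a ground spring at node 0.
K = np.zeros((n, n))
for i in range(1, n):
    K[i, i] += k
    K[i - 1, i - 1] += k
    K[i, i - 1] -= k
    K[i - 1, i] -= k
K[0, 0] += k                           # anchor node 0 to ground
C = (c / k) * K                        # proportional damping
M = np.diag(m)

x = np.zeros(n)                        # displacements from rest
v = np.zeros(n)
f_ext = np.zeros(n); f_ext[-1] = 1.0   # "skeletal actuation" pulls the last node

A = M + dt * C + dt**2 * K
for _ in range(100):                   # 1 second of simulated time
    b = M @ v + dt * (f_ext - K @ x)
    v = np.linalg.solve(A, b)
    x = x + dt * v

print("displacements after 1 s:", np.round(x, 4))
```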

  20. Estimation of soil clay and organic matter using two quantitative methods (PLSR and MARS) based on reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Nawar, Said; Buddenbaum, Henning; Hill, Joachim

    2014-05-01

    A rapid and inexpensive soil analytical technique is needed for soil quality assessment and accurate mapping. This study investigated a method for improved estimation of soil clay (SC) and organic matter (OM) using reflectance spectroscopy. Seventy soil samples were collected from Sinai peninsula in Egypt to estimate the soil clay and organic matter relative to the soil spectra. Soil samples were scanned with an Analytical Spectral Devices (ASD) spectrometer (350-2500 nm). Three spectral formats were used in the calibration models derived from the spectra and the soil properties: (1) original reflectance spectra (OR), (2) first-derivative spectra smoothened using the Savitzky-Golay technique (FD-SG) and (3) continuum-removed reflectance (CR). Partial least-squares regression (PLSR) models using the CR of the 400-2500 nm spectral region resulted in R2 = 0.76 and 0.57, and RPD = 2.1 and 1.5 for estimating SC and OM, respectively, indicating better performance than that obtained using OR and SG. The multivariate adaptive regression splines (MARS) calibration model with the CR spectra resulted in an improved performance (R2 = 0.89 and 0.83, RPD = 3.1 and 2.4) for estimating SC and OM, respectively. The results show that the MARS models have a great potential for estimating SC and OM compared with PLSR models. The results obtained in this study have potential value in the field of soil spectroscopy because they can be applied directly to the mapping of soil properties using remote sensing imagery in arid environment conditions. Key Words: soil clay, organic matter, PLSR, MARS, reflectance spectroscopy.
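
    A hedged sketch of the PLSR calibration step this record describes, relating reflectance spectra to a soil property and reporting R², RMSE, and RPD by cross-validation. The spectra and clay contents below are synthetic placeholders, not the Sinai dataset, and the absorption feature is an assumption made only so the example runs.

```python
# Illustrative PLSR calibration between reflectance spectra and soil clay content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_bands = 70, 2151           # ~350-2500 nm at 1 nm resolution
clay = rng.uniform(5, 60, n_samples)    # % soil clay (synthetic)
bands = np.linspace(350, 2500, n_bands)
# Fake spectra with a clay-dependent absorption feature near 2200 nm plus noise.
X = 0.4 - 0.002 * clay[:, None] * np.exp(-((bands - 2200) / 60) ** 2) \
    + rng.normal(0, 0.002, (n_samples, n_bands))

pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, X, clay, cv=10).ravel()

rmse = np.sqrt(np.mean((clay - pred) ** 2))
r2 = 1 - np.sum((clay - pred) ** 2) / np.sum((clay - clay.mean()) ** 2)
rpd = clay.std(ddof=1) / rmse           # ratio of performance to deviation
print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f}, RPD = {rpd:.2f}")
```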

  1. Early origins of inflammation: An examination of prenatal and childhood social adversity in a prospective cohort study.

    PubMed

    Slopen, Natalie; Loucks, Eric B; Appleton, Allison A; Kawachi, Ichiro; Kubzansky, Laura D; Non, Amy L; Buka, Stephen; Gilman, Stephen E

    2015-01-01

    Children exposed to social adversity carry a greater risk of poor physical and mental health into adulthood. This increased risk is thought to be due, in part, to inflammatory processes associated with early adversity that contribute to the etiology of many adult illnesses. The current study asks whether aspects of the prenatal social environment are associated with levels of inflammation in adulthood, and whether prenatal and childhood adversity both contribute to adult inflammation. We examined associations of prenatal and childhood adversity assessed through direct interviews of participants in the Collaborative Perinatal Project between 1959 and 1974 with blood levels of C-reactive protein in 355 offspring interviewed in adulthood (mean age=42.2 years). Linear and quantile regression models were used to estimate the effects of prenatal adversity and childhood adversity on adult inflammation, adjusting for age, sex, and race and other potential confounders. In separate linear regression models, high levels of prenatal and childhood adversity were associated with higher CRP in adulthood. When prenatal and childhood adversity were analyzed together, our results support the presence of an effect of prenatal adversity on (log) CRP level in adulthood (β=0.73, 95% CI: 0.26, 1.20) that is independent of childhood adversity and potential confounding factors including maternal health conditions reported during pregnancy. Supplemental analyses revealed similar findings using quantile regression models and logistic regression models that used a clinically-relevant CRP threshold (>3mg/L). In a fully-adjusted model that included childhood adversity, high prenatal adversity was associated with a 3-fold elevated odds (95% CI: 1.15, 8.02) of having a CRP level in adulthood that indicates high risk of cardiovascular disease. Social adversity during the prenatal period is a risk factor for elevated inflammation in adulthood independent of adversities during childhood. This evidence is consistent with studies demonstrating that adverse exposures in the maternal environment during gestation have lasting effects on development of the immune system. If these results reflect causal associations, they suggest that interventions to improve the social and environmental conditions of pregnancy would promote health over the life course. It remains necessary to identify the mechanisms that link maternal conditions during pregnancy to the development of fetal immune and other systems involved in adaptation to environmental stressors. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Influence of age on adaptability of human mastication.

    PubMed

    Peyron, Marie-Agnès; Blanc, Olivier; Lund, James P; Woda, Alain

    2004-08-01

    The objective of this work was to study the influence of age on the ability of subjects to adapt mastication to changes in the hardness of foods. The study was carried out on 67 volunteers aged from 25 to 75 yr (29 males, 38 females) who had complete healthy dentitions. Surface electromyograms of the left and right masseter and temporalis muscles were recorded simultaneously with jaw movements using an electromagnetic transducer. Each volunteer was asked to chew and swallow four visco-elastic model foods of different hardness, each presented three times in random order. The number of masticatory cycles, their frequency, and the sum of all electromyographic (EMG) activity in all four muscles were calculated for each masticatory sequence. Multiple linear regression analyses were used to assess the effects of hardness, age, and gender. Hardness was associated to an increase in the mean number of cycles and mean summed EMG activity per sequence. It also increased mean vertical amplitude. Mean vertical amplitude and mean summed EMG activity per sequence were higher in males. These adaptations were present at all ages. Age was associated with an increase of 0.3 cycles per sequence per year of life and with a progressive increase in mean summed EMG activity per sequence. Cycle and opening duration early in the sequence also fell with age. We concluded that although the number of cycles needed to chew a standard piece of food increases progressively with age, the capacity to adapt to changes in the hardness of food is maintained.

  3. Development and validation of a shared decision-making instrument for health-related quality of life one year after total hip replacement based on quality registries data.

    PubMed

    Nemes, Szilard; Rolfson, Ola; Garellick, Göran

    2018-02-01

    Clinicians considering improvements in health-related quality of life (HRQoL) after total hip replacement (THR) must account for multiple pieces of information. Evidence-based decisions are important to best assess the effect of THR on HRQoL. This work aims at constructing a shared decision-making tool that helps clinicians assessing the future benefits of THR by offering predictions of 1-year postoperative HRQoL of THR patients. We used data from the Swedish Hip Arthroplasty Register. Data from 2008 were used as training set and data from 2009 to 2012 as validation set. We adopted two approaches. First, we assumed a continuous distribution for the EQ-5D index and modelled the postoperative EQ-5D index with regression models. Second, we modelled the five dimensions of the EQ-5D and weighted together the predictions using the UK Time Trade-Off value set. As predictors, we used preoperative EQ-5D dimensions and the EQ-5D index, EQ visual analogue scale, visual analogue scale pain, Charnley classification, age, gender, body mass index, American Society of Anesthesiologists, surgical approach and prosthesis type. Additionally, the tested algorithms were combined in a single predictive tool by stacking. Best predictive power was obtained by the multivariate adaptive regression splines (R² = 0.158). However, this was not significantly better than the predictive power of linear regressions (R² = 0.157). The stacked model had a predictive power of 17%. Successful implementation of a shared decision-making tool that can aid clinicians and patients in understanding expected improvement in HRQoL following THR would require higher predictive power than we achieved. For a shared decision-making tool to succeed, further variables, such as socioeconomics, need to be considered. © 2016 John Wiley & Sons, Ltd.
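
    A hedged sketch of the stacking idea mentioned in this record: two base learners combined by a meta-learner to predict a postoperative outcome. A gradient-boosted tree stands in for MARS (no MARS implementation is assumed to be installed), and all predictors and the outcome are synthetic placeholders rather than registry data.

```python
# Illustrative stacking of a linear model and a nonlinear learner for outcome prediction.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),      # preoperative EQ-5D index (synthetic)
    rng.uniform(0, 100, n),    # EQ VAS
    rng.uniform(0, 100, n),    # VAS pain
    rng.normal(68, 10, n),     # age
    rng.integers(0, 2, n),     # gender
])
y = 0.55 + 0.3 * X[:, 0] - 0.001 * X[:, 2] + rng.normal(0, 0.15, n)  # synthetic outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingRegressor(
    estimators=[("ols", LinearRegression()),
                ("gbt", GradientBoostingRegressor(random_state=0))],
    final_estimator=LinearRegression(),
)
stack.fit(X_tr, y_tr)
print("stacked R2 on held-out data:", round(r2_score(y_te, stack.predict(X_te)), 3))
```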

  4. A Study on the Potential Applications of Satellite Data in Air Quality Monitoring and Forecasting

    NASA Technical Reports Server (NTRS)

    Li, Can; Hsu, N. Christina; Tsay, Si-Chee

    2011-01-01

    In this study we explore the potential applications of MODIS (Moderate Resolution Imaging Spectroradiometer)-like satellite sensors in air quality research for some Asian regions. The MODIS aerosol optical thickness (AOT), NCEP global reanalysis meteorological data, and daily surface PM10 concentrations over China and Thailand from 2001 to 2009 were analyzed using simple and multiple regression models. The AOT-PM10 correlation demonstrates substantial seasonal and regional differences, likely reflecting variations in aerosol composition and atmospheric conditions. Meteorological factors, particularly relative humidity, were found to influence the AOT-PM10 relationship. Their inclusion in regression models leads to more accurate assessment of PM10 from spaceborne observations. We further introduced a simple method for employing the satellite data to empirically forecast surface particulate pollution. In general, AOT from the previous day (day 0) is used as a predictor variable, along with the forecasted meteorology for the following day (day 1), to predict the PM10 level for day 1. The contribution of regional transport is represented by backward trajectories combined with AOT. This method was evaluated through PM10 hindcasts for 2008-2009, using observations from 2005 to 2007 as a training data set to obtain model coefficients. For five big Chinese cities, over 50% of the hindcasts have percentage errors less than or equal to 30%. Similar performance was achieved for cities in northern Thailand. The MODIS AOT data are responsible for at least part of the demonstrated forecasting skill. This method can be easily adapted for other regions, but is probably most useful for those having sparse ground monitoring networks or no access to sophisticated deterministic models. We also highlight several existing issues, including some inherent to a regression-based approach as exemplified by a case study for Beijing. Further studies will be necessary before satellite data can see more extensive applications in operational air quality monitoring and forecasting.
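
    A hedged sketch of the regression hindcast scheme outlined here: previous-day AOT plus forecast meteorology as predictors of next-day PM10, trained on earlier years and scored by the share of hindcasts within 30% error. Column names, the log-linear form, and the data are hypothetical stand-ins, not the study's dataset or exact model.

```python
# Illustrative AOT + meteorology regression hindcast of next-day PM10.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "aot_day0": rng.gamma(2.0, 0.3, n),      # MODIS AOT, previous day (synthetic)
    "rh_day1": rng.uniform(20, 95, n),       # forecast relative humidity (%)
    "wind_day1": rng.gamma(2.0, 1.5, n),     # forecast wind speed (m/s)
    "blh_day1": rng.uniform(200, 2000, n),   # forecast boundary-layer height (m)
})
df["pm10_day1"] = np.exp(3.5 + 0.8 * df["aot_day0"] - 0.0004 * df["blh_day1"]
                         + rng.normal(0, 0.3, n))

# Train on the earlier portion, hindcast the rest (mimicking 2005-2007 -> 2008-2009).
train, test = df.iloc[:700], df.iloc[700:]
model = smf.ols("np.log(pm10_day1) ~ aot_day0 + rh_day1 + wind_day1 + blh_day1",
                data=train).fit()
pred = np.exp(model.predict(test))
pct_err = np.abs(pred - test["pm10_day1"]) / test["pm10_day1"] * 100
print("share of hindcasts with error <= 30%%: %.1f%%" % (pct_err.le(30).mean() * 100))
```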

  5. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures.

    PubMed

    Hegazy, Maha A; Lotfy, Hayam M; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-05

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study on the efficiency of continuous wavelet transform (CWT) as a signal processing tool in univariate regression and a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS) was conducted. These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). In contrast, the univariate CWT failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the calculations of CWT, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied for determination of the studied drugs in pharmaceutical formulations. Copyright © 2016 Elsevier B.V. All rights reserved.
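
    A minimal sketch of the CWT-PLS idea: transform each spectrum with a continuous wavelet transform and regress concentrations on the resulting coefficients. The signal shapes, wavelet family ("mexh"), scale, and number of latent variables are illustrative assumptions, not the paper's choices.

```python
# Illustrative CWT pre-processing (PyWavelets) followed by PLS regression.
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_samples, n_points = 40, 401                 # spectra sampled at 401 wavelengths
conc = rng.uniform(1, 10, (n_samples, 2))     # concentrations of two analytes (synthetic)
x = np.linspace(0, 1, n_points)
# Synthetic overlapped absorption bands for the two analytes plus noise.
band1 = np.exp(-((x - 0.45) / 0.05) ** 2)
band2 = np.exp(-((x - 0.55) / 0.05) ** 2)
spectra = conc[:, [0]] * band1 + conc[:, [1]] * band2 \
          + rng.normal(0, 0.01, (n_samples, n_points))

# CWT of each spectrum at a single scale; the coefficients replace raw absorbances.
scale = [16]
cwt_coefs = np.vstack([pywt.cwt(s, scale, "mexh")[0][0] for s in spectra])

pls = PLSRegression(n_components=3)
pls.fit(cwt_coefs[:30], conc[:30])            # calibration set
pred = pls.predict(cwt_coefs[30:])            # external validation set
print("validation RMSE per analyte:", np.sqrt(((pred - conc[30:]) ** 2).mean(axis=0)))
```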

  6. A new approach to correct the QT interval for changes in heart rate using a nonparametric regression model in beagle dogs.

    PubMed

    Watanabe, Hiroyuki; Miyazaki, Hiroyasu

    2006-01-01

    Over- and/or under-correction of QT intervals for changes in heart rate may lead to misleading conclusions and/or masking the potential of a drug to prolong the QT interval. This study examines a nonparametric regression model (Loess Smoother) to adjust the QT interval for differences in heart rate, with an improved fitness over a wide range of heart rates. 240 sets of (QT, RR) observations collected from each of 8 conscious and non-treated beagle dogs were used as the materials for investigation. The fitness of the nonparametric regression model to the QT-RR relationship was compared with that of four models (individual linear regression, common linear regression, and Bazett's and Fridericia's correction models) with reference to Akaike's Information Criterion (AIC). Residuals were visually assessed. The bias-corrected AIC of the nonparametric regression model was the best of the models examined in this study. Although the parametric models did not fit, the nonparametric regression model improved the fitting at both fast and slow heart rates. The nonparametric regression model is the more flexible method compared with the parametric method. The mathematical fit for linear regression models was unsatisfactory at both fast and slow heart rates, while the nonparametric regression model showed significant improvement at all heart rates in beagle dogs.
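
    A hedged sketch of the nonparametric approach in the same spirit as the Loess smoother described here: a lowess fit of QT on RR, used to remove the heart-rate trend and reference QT to RR = 1 s. The simulated (QT, RR) pairs and the smoothing fraction are illustrative assumptions.

```python
# Illustrative lowess fit of the QT-RR relationship and a trend-removal correction.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(5)
rr = rng.uniform(0.35, 1.1, 240)                       # RR interval (s), wide heart-rate range
qt = 0.25 * rr ** (1 / 3) + rng.normal(0, 0.005, 240)  # QT (s), synthetic dependence on RR

# Lowess estimate of E[QT | RR]; frac controls the smoothing window.
fit = lowess(qt, rr, frac=0.5)                         # returns sorted (rr, qt_hat) pairs
qt_hat = np.interp(rr, fit[:, 0], fit[:, 1])

# "Corrected" QT: residual from the smooth trend plus QT at a reference RR of 1 s.
qt_ref = np.interp(1.0, fit[:, 0], fit[:, 1])
qtc = qt - qt_hat + qt_ref
print("corr(QTc, RR) after correction: %.3f" % np.corrcoef(qtc, rr)[0, 1])
```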

  7. Design of a fuzzy differential evolution algorithm to predict non-deposition sediment transport

    NASA Astrophysics Data System (ADS)

    Ebtehaj, Isa; Bonakdari, Hossein

    2017-12-01

    Since the flow entering a sewer contains solid matter, deposition at the bottom of the channel is inevitable. It is difficult to understand the complex, three-dimensional mechanism of sediment transport in sewer pipelines. Therefore, a method to estimate the limiting velocity is necessary for optimal designs. Due to the inability of gradient-based algorithms to train Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for non-deposition sediment transport prediction, a new hybrid ANFIS method based on a differential evolutionary algorithm (ANFIS-DE) is developed. The training and testing performance of ANFIS-DE is evaluated using a wide range of dimensionless parameters gathered from the literature. The input combination used to estimate the densimetric Froude number (Fr) parameters includes the volumetric sediment concentration (CV), ratio of median particle diameter to hydraulic radius (d/R), ratio of median particle diameter to pipe diameter (d/D) and overall friction factor of sediment (λs). The testing results are compared with the ANFIS model and regression-based equation results. The ANFIS-DE technique predicted sediment transport at limit of deposition with lower root mean square error (RMSE = 0.323) and mean absolute percentage of error (MAPE = 0.065) and higher accuracy (R² = 0.965) than the ANFIS model and regression-based equations.
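
    A hedged sketch of the optimization idea behind ANFIS-DE: differential evolution used in place of gradient-based training, here tuning the parameters of a simple power-law formula for the densimetric Froude number by minimizing RMSE. The data and functional form are synthetic stand-ins, not the actual ANFIS-DE model.

```python
# Illustrative use of differential evolution to fit sediment-transport parameters.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)
n = 200
Cv = rng.uniform(1e-5, 1e-3, n)          # volumetric sediment concentration (synthetic)
d_R = rng.uniform(1e-4, 1e-2, n)         # median particle diameter / hydraulic radius
lam = rng.uniform(0.01, 0.05, n)         # overall friction factor of sediment
Fr_obs = 4.5 * Cv**0.21 * d_R**-0.54 * lam**0.1 * rng.lognormal(0, 0.05, n)

def rmse(theta):
    a, b, c, d = theta
    Fr_hat = a * Cv**b * d_R**c * lam**d
    return np.sqrt(np.mean((Fr_obs - Fr_hat) ** 2))

result = differential_evolution(rmse,
                                bounds=[(0.1, 10), (-1, 1), (-1, 1), (-1, 1)],
                                seed=0, tol=1e-8)
print("fitted parameters:", np.round(result.x, 3), " RMSE:", round(result.fun, 4))
```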

  8. Development of property-transfer models for estimating the hydraulic properties of deep sediments at the Idaho National Engineering and Environmental Laboratory, Idaho

    USGS Publications Warehouse

    Winfield, Kari A.

    2005-01-01

    Because characterizing the unsaturated hydraulic properties of sediments over large areas or depths is costly and time consuming, development of models that predict these properties from more easily measured bulk-physical properties is desirable. At the Idaho National Engineering and Environmental Laboratory, the unsaturated zone is composed of thick basalt flow sequences interbedded with thinner sedimentary layers. Determining the unsaturated hydraulic properties of sedimentary layers is one step in understanding water flow and solute transport processes through this complex unsaturated system. Multiple linear regression was used to construct simple property-transfer models for estimating the water-retention curve and saturated hydraulic conductivity of deep sediments at the Idaho National Engineering and Environmental Laboratory. The regression models were developed from 109 core sample subsets with laboratory measurements of hydraulic and bulk-physical properties. The core samples were collected at depths of 9 to 175 meters at two facilities within the southwestern portion of the Idaho National Engineering and Environmental Laboratory: the Radioactive Waste Management Complex and the Vadose Zone Research Park southwest of the Idaho Nuclear Technology and Engineering Center. Four regression models were developed using bulk-physical property measurements (bulk density, particle density, and particle size) as the potential explanatory variables. Three representations of the particle-size distribution were compared: (1) textural-class percentages (gravel, sand, silt, and clay), (2) geometric statistics (mean and standard deviation), and (3) graphical statistics (median and uniformity coefficient). The four response variables, estimated from linear combinations of the bulk-physical properties, included saturated hydraulic conductivity and three parameters that define the water-retention curve. For each core sample, values of each water-retention parameter were estimated from the appropriate regression equation and used to calculate an estimated water-retention curve. The degree to which the estimated curve approximated the measured curve was quantified using a goodness-of-fit indicator, the root-mean-square error. Comparison of the root-mean-square-error distributions for each alternative particle-size model showed that the estimated water-retention curves were insensitive to the way the particle-size distribution was represented. Bulk density, the median particle diameter, and the uniformity coefficient were chosen as input parameters for the final models. The property-transfer models developed in this study allow easy determination of hydraulic properties without need for their direct measurement. Additionally, the models provide the basis for development of theoretical models that rely on physical relationships between the pore-size distribution and the bulk-physical properties of the media. With this adaptation, the property-transfer models should have greater application throughout the Idaho National Engineering and Environmental Laboratory and other geographic locations.
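
    A hedged sketch of a property-transfer regression of the kind described here: hydraulic responses estimated from bulk density, median particle diameter, and the uniformity coefficient by multiple linear regression, with a root-mean-square-error check. The responses, coefficients, and data are synthetic placeholders, not the report's measurements or final models.

```python
# Illustrative property-transfer models via multiple linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
n = 109
bulk_density = rng.uniform(1.2, 1.8, n)      # g/cm^3 (synthetic)
d50 = rng.lognormal(np.log(0.1), 0.8, n)     # median particle diameter (mm)
cu = rng.uniform(2, 20, n)                   # uniformity coefficient
X = np.column_stack([bulk_density, np.log10(d50), np.log10(cu)])

# Synthetic responses standing in for measured hydraulic properties.
log_ksat = -5.0 + 2.0 * np.log10(d50) - 1.5 * bulk_density + rng.normal(0, 0.3, n)
retention_param = 0.5 + 0.3 * np.log10(d50) - 0.1 * bulk_density + rng.normal(0, 0.05, n)

for name, y in [("log10 Ksat", log_ksat), ("retention parameter", retention_param)]:
    model = LinearRegression().fit(X, y)
    rmse = np.sqrt(mean_squared_error(y, model.predict(X)))
    print(f"{name}: coefficients = {np.round(model.coef_, 3)}, RMSE = {rmse:.3f}")
```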

  9. Associations between risk perception, spontaneous adaptation behavior to heat waves and heatstroke in Guangdong province, China

    PubMed Central

    2013-01-01

    Background In many parts of the world, including in China, extreme heat events or heat waves are likely to increase in intensity, frequency, and duration in light of climate change in the next decades. Risk perception and adaptation behaviors are two important components in reducing the health impacts of heat waves, but little is known about their relationships in China. This study aimed to examine the associations between risk perception to heat waves, adaptation behaviors, and heatstroke among the public in Guangdong province, China. Methods A total of 2,183 adult participants were selected using a four-stage sampling method in Guangdong province. From September to November of 2010 each subject was interviewed at home by a well-trained investigator using a structured questionnaire. The information collected included socio-demographic characteristics, risk perception and spontaneous adaptation behaviors during heat wave periods, and heatstroke experience in the last year. Chi-square tests and unconditional logistic regression models were employed to analyze the data. Results This study found that 14.8%, 65.3% and 19.9% of participants perceived heat waves as a low, moderate or high health risk, respectively. About 99.1% participants employed at least one spontaneous adaptation behavior, and 26.2%, 51.2% and 22.6% respondents employed <4, 4–7, and >7 adaptation behaviors during heat waves, respectively. Individuals with moderate (OR=2.93, 95% CI: 1.38-6.22) or high (OR=10.58, 95% CI: 4.74-23.63) risk perception experienced more heatstroke in the past year than others. Drinking more water and wearing light clothes in urban areas, while decreasing activity as well as wearing light clothes in rural areas were negatively associated with heatstroke. Individuals with high risk perception and employing <4 adaptation behaviors during heat waves had the highest risks of heatstroke (OR=47.46, 95% CI: 12.82-175.73). Conclusions There is a large room for improving health risk perception and adaptation capacity to heat waves among the public of Guangdong province. People with higher risk perception and fewer adaptation behaviors during heat waves may be more vulnerable to heat waves. PMID:24088302

  10. Associations between risk perception, spontaneous adaptation behavior to heat waves and heatstroke in Guangdong province, China.

    PubMed

    Liu, Tao; Xu, Yan Jun; Zhang, Yong Hui; Yan, Qing Hua; Song, Xiu Ling; Xie, Hui Yan; Luo, Yuan; Rutherford, Shannon; Chu, Cordia; Lin, Hua Liang; Ma, Wen Jun

    2013-10-02

    In many parts of the world, including in China, extreme heat events or heat waves are likely to increase in intensity, frequency, and duration in light of climate change in the next decades. Risk perception and adaptation behaviors are two important components in reducing the health impacts of heat waves, but little is known about their relationships in China. This study aimed to examine the associations between risk perception to heat waves, adaptation behaviors, and heatstroke among the public in Guangdong province, China. A total of 2,183 adult participants were selected using a four-stage sampling method in Guangdong province. From September to November of 2010 each subject was interviewed at home by a well-trained investigator using a structured questionnaire. The information collected included socio-demographic characteristics, risk perception and spontaneous adaptation behaviors during heat wave periods, and heatstroke experience in the last year. Chi-square tests and unconditional logistic regression models were employed to analyze the data. This study found that 14.8%, 65.3% and 19.9% of participants perceived heat waves as a low, moderate or high health risk, respectively. About 99.1% participants employed at least one spontaneous adaptation behavior, and 26.2%, 51.2% and 22.6% respondents employed <4, 4-7, and >7 adaptation behaviors during heat waves, respectively. Individuals with moderate (OR=2.93, 95% CI: 1.38-6.22) or high (OR=10.58, 95% CI: 4.74-23.63) risk perception experienced more heatstroke in the past year than others. Drinking more water and wearing light clothes in urban areas, while decreasing activity as well as wearing light clothes in rural areas were negatively associated with heatstroke. Individuals with high risk perception and employing <4 adaptation behaviors during heat waves had the highest risks of heatstroke (OR=47.46, 95% CI: 12.82-175.73). There is a large room for improving health risk perception and adaptation capacity to heat waves among the public of Guangdong province. People with higher risk perception and fewer adaptation behaviors during heat waves may be more vulnerable to heat waves.

  11. Mars approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta model based on the concept of the variance of conditional expectation (VCE). We suggest to evaluate the VCE analytically using the MARS model structure of univariate tensor-product functions which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
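
    A hedged sketch of the variance-of-conditional-expectation (VCE) idea applied to an ODE model. A simple target-cell-limited influenza model and a binning estimator of Var(E[Y|X_i]) stand in for the paper's MARS meta-model and analytical VCE evaluation; the parameter ranges and output definition are illustrative assumptions.

```python
# Illustrative VCE-based first-order sensitivity indices for an ODE model output.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(8)

def viral_dynamics(t, y, beta, delta, p, c=3.0):
    T, I, V = y                       # target cells, infected cells, free virus
    return [-beta * T * V, beta * T * V - delta * I, p * I - c * V]

def peak_viral_load(beta, delta, p):
    sol = solve_ivp(viral_dynamics, (0, 10), [1e4, 0.0, 1.0],
                    args=(beta, delta, p), max_step=0.05)
    return np.log10(sol.y[2].max())

# Sample the uncertain parameters and evaluate the model output.
n = 300
params = {
    "beta": rng.uniform(1e-5, 1e-4, n),
    "delta": rng.uniform(1.0, 4.0, n),
    "p": rng.uniform(1.0, 20.0, n),
}
y = np.array([peak_viral_load(b, d, pp)
              for b, d, pp in zip(params["beta"], params["delta"], params["p"])])

# First-order index: Var(E[Y | X_i]) / Var(Y), with E[Y | X_i] estimated by bin means.
def first_order_index(x, y, bins=15):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    vce = np.sum(weights * (cond_mean - y.mean()) ** 2)
    return vce / y.var()

for name, x in params.items():
    print(f"S_{name} ~ {first_order_index(x, y):.2f}")
```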

  12. Adaptive and Personalized Plasma Insulin Concentration Estimation for Artificial Pancreas Systems.

    PubMed

    Hajizadeh, Iman; Rashid, Mudassir; Samadi, Sediqeh; Feng, Jianyuan; Sevil, Mert; Hobbs, Nicole; Lazaro, Caterina; Maloney, Zacharie; Brandt, Rachel; Yu, Xia; Turksoy, Kamuran; Littlejohn, Elizabeth; Cengiz, Eda; Cinar, Ali

    2018-05-01

    The artificial pancreas (AP) system, a technology that automatically administers exogenous insulin in people with type 1 diabetes mellitus (T1DM) to regulate their blood glucose concentrations, necessitates the estimation of the amount of active insulin already present in the body to avoid overdosing. An adaptive and personalized plasma insulin concentration (PIC) estimator is designed in this work to accurately quantify the insulin present in the bloodstream. The proposed PIC estimation approach incorporates Hovorka's glucose-insulin model with the unscented Kalman filtering algorithm. Methods for the personalized initialization of the time-varying model parameters to individual patients for improved estimator convergence are developed. Data from 20 three-days-long closed-loop clinical experiments conducted involving subjects with T1DM are used to evaluate the proposed PIC estimation approach. The proposed methods are applied to the clinical data containing significant disturbances, such as unannounced meals and exercise, and the results demonstrate the accurate real-time estimation of the PIC with the root mean square error of 7.15 and 9.25 mU/L for the optimization-based fitted parameters and partial least squares regression-based testing parameters, respectively. The accurate real-time estimation of PIC will benefit the AP systems by preventing overdelivery of insulin when significant insulin is present in the bloodstream.
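
    A minimal sketch of the unscented Kalman filtering step named in this record, using the filterpy package and a toy two-compartment subcutaneous-to-plasma insulin model. Purely for illustration, plasma insulin is "measured" directly with noise; the clinical estimator instead infers PIC from glucose through Hovorka's model, and all rate constants below are assumptions.

```python
# Illustrative UKF tracking of plasma insulin in a toy two-state model.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt, ka, ke = 5.0, 0.02, 0.09           # minutes; absorption/elimination rates (illustrative)

def fx(x, dt, u=0.0):
    sc, plasma = x                     # subcutaneous depot and plasma insulin
    sc_new = sc + dt * (-ka * sc + u)
    plasma_new = plasma + dt * (ka * sc - ke * plasma)
    return np.array([sc_new, plasma_new])

def hx(x):
    return np.array([x[1]])            # observe plasma insulin (toy simplification)

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 5.0])
ukf.P *= 10.0
ukf.Q = np.diag([0.5, 0.5])
ukf.R = np.array([[4.0]])

rng = np.random.default_rng(9)
true = np.array([20.0, 8.0])           # a bolus sitting in the subcutaneous depot
estimates = []
for _ in range(60):                    # 5 hours of 5-minute steps
    true = fx(true, dt)
    z = true[1] + rng.normal(0, 2.0)
    ukf.predict()
    ukf.update(np.array([z]))
    estimates.append(ukf.x[1])
print("final PIC estimate vs. truth: %.2f vs %.2f mU/L" % (estimates[-1], true[1]))
```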

  13. Trends in study design and the statistical methods employed in a leading general medicine journal.

    PubMed

    Gosho, M; Sato, Y; Nagashima, K; Takahashi, S

    2018-02-01

    Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. The study of the comprehensive details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate study design and statistical methods employed in recent medical literature. This was an extension study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Due to the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (eg Kaplan-Meier estimator and Cox regression model) were most frequently applied, the Gray test and Fine-Gray proportional hazard model for considering competing risks were sometimes used for a more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. These methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel design, such as adaptive dose selection and sample size re-estimation, was sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in the light of the information found in some publications. Use of adaptive design with interim analyses is increasing after the presentation of the FDA guidance for adaptive design. © 2017 John Wiley & Sons Ltd.

  14. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables, E(Y|X), for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient based on bias and the root mean square error (RMSE).
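
    A hedged sketch of the quantity under study: fit a Poisson GLM and compute the traditional regression correlation coefficient as the correlation between Y and the fitted E(Y|X), on simulated data with two deliberately collinear predictors. This illustrates the baseline measure, not the authors' proposed modification.

```python
# Illustrative Poisson regression and the corr(Y, E[Y|X]) predictive-power measure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 500
x1 = rng.normal(0, 1, n)
x2 = 0.8 * x1 + 0.6 * rng.normal(0, 1, n)     # deliberately collinear with x1
X = sm.add_constant(np.column_stack([x1, x2]))
mu = np.exp(0.3 + 0.4 * x1 + 0.2 * x2)
y = rng.poisson(mu)

model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
fitted = model.predict(X)                      # estimate of E[Y | X]
r = np.corrcoef(y, fitted)[0, 1]
print("regression correlation coefficient: %.3f" % r)
```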

  15. Distributed parameter system coupled ARMA expansion identification and adaptive parallel IIR filtering - A unified problem statement. [Auto Regressive Moving-Average

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Balas, M. J.

    1980-01-01

    A novel interconnection of distributed parameter system (DPS) identification and adaptive filtering is presented, which culminates in a common statement of coupled autoregressive, moving-average expansion or parallel infinite impulse response configuration adaptive parameterization. The common restricted complexity filter objectives are seen as similar to the reduced-order requirements of the DPS expansion description. The interconnection presents the possibility of an exchange of problem formulations and solution approaches not yet easily addressed in the common finite dimensional lumped-parameter system context. It is concluded that the shared problems raised are nevertheless many and difficult.

  16. Assessing Wheat Traits by Spectral Reflectance: Do We Really Need to Focus on Predicted Trait-Values or Directly Identify the Elite Genotypes Group?

    PubMed Central

    Garriga, Miguel; Romero-Bravo, Sebastián; Estrada, Félix; Escobar, Alejandro; Matus, Iván A.; del Pozo, Alejandro; Astudillo, Cesar A.; Lobos, Gustavo A.

    2017-01-01

    Phenotyping, via remote and proximal sensing techniques, of the agronomic and physiological traits associated with yield potential and drought adaptation could contribute to improvements in breeding programs. In the present study, 384 genotypes of wheat (Triticum aestivum L.) were tested under fully irrigated (FI) and water stress (WS) conditions. The following traits were evaluated and assessed via spectral reflectance: Grain yield (GY), spikes per square meter (SM2), kernels per spike (KPS), thousand-kernel weight (TKW), chlorophyll content (SPAD), stem water soluble carbohydrate concentration and content (WSC and WSCC, respectively), carbon isotope discrimination (Δ13C), and leaf area index (LAI). The performances of spectral reflectance indices (SRIs), four regression algorithms (PCR, PLSR, ridge regression RR, and SVR), and three classification methods (PCA-LDA, PLS-DA, and kNN) were evaluated for the prediction of each trait. For the classification approaches, two classes were established for each trait: The lower 80% of the trait variability range (Class 1) and the remaining 20% (Class 2 or elite genotypes). Both the SRIs and regression methods performed better when data from FI and WS were combined. The traits that were best estimated by SRIs and regression methods were GY and Δ13C. For most traits and conditions, the estimations provided by RR and SVR were the same, or better than, those provided by the SRIs. PLS-DA showed the best performance among the categorical methods and, unlike the SRI and regression models, most traits were relatively well-classified within a specific hydric condition (FI or WS), proving that classification approach is an effective tool to be explored in future studies related to genotype selection. PMID:28337210

  17. Tumor regression induced by intratumor therapy with a disabled infectious single cycle (DISC) herpes simplex virus (HSV) vector, DISC/HSV/murine granulocyte-macrophage colony-stimulating factor, correlates with antigen-specific adaptive immunity.

    PubMed

    Ali, Selman A; Lynam, June; McLean, Cornelia S; Entwisle, Claire; Loudon, Peter; Rojas, José M; McArdle, Stephanie E B; Li, Geng; Mian, Shahid; Rees, Robert C

    2002-04-01

    Direct intratumor injection of a disabled infectious single cycle HSV-2 virus encoding the murine GM-CSF gene (DISC/mGM-CSF) into established murine colon carcinoma CT26 tumors induced a significant delay in tumor growth and complete tumor regression in up to 70% of animals. Pre-existing immunity to HSV did not reduce the therapeutic efficacy of DISC/mGM-CSF, and, when administered in combination with syngeneic dendritic cells, further decreased tumor growth and increased the incidence of complete tumor regression. Direct intratumor injection of DISC/mGM-CSF also inhibited the growth of CT26 tumor cells implanted on the contralateral flank or seeded into the lungs following i.v. injection of tumor cells (experimental lung metastasis). Proliferation of splenocytes in response to Con A was impaired in progressor and tumor-bearer, but not regressor, mice. A potent tumor-specific CTL response was generated from splenocytes of all mice with regressing, but not progressing tumors following in vitro peptide stimulation; this response was specific for the gp70 AH-1 peptide SPSYVYHQF and correlated with IFN-gamma, but not IL-4 cytokine production. Depletion of CD8(+) T cells from regressor splenocytes before in vitro stimulation with the relevant peptide abolished their cytolytic activity, while depletion of CD4(+) T cells only partially inhibited CTL generation. Tumor regression induced by DISC/mGM-CSF virus immunotherapy provides a unique model for evaluating the immune mechanism(s) involved in tumor rejection, upon which tumor immunotherapy regimes may be based.

  18. Vocal mechanics in Darwin's finches: correlation of beak gape and song frequency.

    PubMed

    Podos, Jeffrey; Southall, Joel A; Rossi-Santos, Marcos R

    2004-02-01

    Recent studies of vocal mechanics in songbirds have identified a functional role for the beak in sound production. The vocal tract (trachea and beak) filters harmonic overtones from sounds produced by the syrinx, and birds can fine-tune vocal tract resonance properties through changes in beak gape. In this study, we examine patterns of beak gape during song production in seven species of Darwin's finches of the Galápagos Islands. Our principal goals were to characterize the relationship between beak gape and vocal frequency during song production and to explore the possible influence therein of diversity in beak morphology and body size. Birds were audio and video recorded (at 30 frames s⁻¹) as they sang in the field, and 164 song sequences were analyzed. We found that song frequency regressed significantly and positively on beak gape for 38 of 56 individuals and for all seven species examined. This finding provides broad support for a resonance model of vocal tract function in Darwin's finches. Comparison among species revealed significant variation in regression y-intercept values. Body size correlated negatively with y-intercept values, although not at a statistically significant level. We failed to detect variation in regression slopes among finch species, although the regression slopes of Darwin's finch and two North American sparrow species were found to differ. Analysis within one species (Geospiza fortis) revealed significant inter-individual variation in regression parameters; these parameters did not correlate with song frequency features or plumage scores. Our results suggest that patterns of beak use during song production were conserved during the Darwin's finch adaptive radiation, despite the evolution of substantial variation in beak morphology and body size.

  19. Assessing Wheat Traits by Spectral Reflectance: Do We Really Need to Focus on Predicted Trait-Values or Directly Identify the Elite Genotypes Group?

    PubMed

    Garriga, Miguel; Romero-Bravo, Sebastián; Estrada, Félix; Escobar, Alejandro; Matus, Iván A; Del Pozo, Alejandro; Astudillo, Cesar A; Lobos, Gustavo A

    2017-01-01

    Phenotyping, via remote and proximal sensing techniques, of the agronomic and physiological traits associated with yield potential and drought adaptation could contribute to improvements in breeding programs. In the present study, 384 genotypes of wheat (Triticum aestivum L.) were tested under fully irrigated (FI) and water stress (WS) conditions. The following traits were evaluated and assessed via spectral reflectance: Grain yield (GY), spikes per square meter (SM2), kernels per spike (KPS), thousand-kernel weight (TKW), chlorophyll content (SPAD), stem water soluble carbohydrate concentration and content (WSC and WSCC, respectively), carbon isotope discrimination (Δ13C), and leaf area index (LAI). The performances of spectral reflectance indices (SRIs), four regression algorithms (PCR, PLSR, ridge regression (RR), and SVR), and three classification methods (PCA-LDA, PLS-DA, and kNN) were evaluated for the prediction of each trait. For the classification approaches, two classes were established for each trait: The lower 80% of the trait variability range (Class 1) and the remaining 20% (Class 2 or elite genotypes). Both the SRIs and regression methods performed better when data from FI and WS were combined. The traits that were best estimated by SRIs and regression methods were GY and Δ13C. For most traits and conditions, the estimations provided by RR and SVR were the same, or better than, those provided by the SRIs. PLS-DA showed the best performance among the categorical methods and, unlike the SRI and regression models, most traits were relatively well-classified within a specific hydric condition (FI or WS), proving that the classification approach is an effective tool to be explored in future studies related to genotype selection.
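
    A hedged sketch of the two-class setup described in this record (elite = top 20% of the trait range), with PLS-DA implemented as PLS regression on a 0/1 class indicator followed by thresholding, a common recipe. The reflectance data, trait, bands, and number of latent variables are simulated stand-ins, not the study's measurements.

```python
# Illustrative PLS-DA: classify "elite" genotypes from reflectance spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(11)
n_geno, n_bands = 384, 300
trait = rng.normal(6.0, 1.5, n_geno)                      # e.g. grain yield (synthetic)
bands = np.linspace(400, 900, n_bands)
# Reflectance loosely tied to the trait around a red-edge-like region, plus noise.
X = 0.3 + 0.02 * trait[:, None] * np.exp(-((bands - 720) / 40) ** 2) \
    + rng.normal(0, 0.01, (n_geno, n_bands))

# Class 2 ("elite") = top 20% of the trait distribution, Class 1 = the rest.
elite = (trait >= np.quantile(trait, 0.8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, elite, test_size=0.3,
                                          stratify=elite, random_state=0)
plsda = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = (plsda.predict(X_te).ravel() >= 0.5).astype(int)
print("balanced accuracy:", round(balanced_accuracy_score(y_te, pred), 3))
```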

  20. A model-based exploration of the role of pattern generating circuits during locomotor adaptation.

    PubMed

    Marjaninejad, Ali; Finley, James M

    2016-08-01

    In this study, we used a model-based approach to explore the potential contributions of central pattern generating circuits (CPGs) during adaptation to external perturbations during locomotion. We constructed a neuromechanical model of locomotion using a reduced-phase CPG controller and an inverted pendulum mechanical model. Two different forms of locomotor adaptation were examined in this study: split-belt treadmill adaptation and adaptation to a unilateral, elastic force field. For each simulation, we first examined the effects of phase resetting and varying the model's initial conditions on the resulting adaptation. After evaluating the effect of phase resetting on the adaptation of step length symmetry, we examined the extent to which the results from these simple models could explain previous experimental observations. We found that adaptation of step length symmetry during split-belt treadmill walking could be reproduced using our model, but this model failed to replicate patterns of adaptation observed in response to force field perturbations. Given that spinal animal models can adapt to both of these types of perturbations, our findings suggest that there may be distinct features of pattern generating circuits that mediate each form of adaptation.

  1. Estimation of dew point temperature using neuro-fuzzy and neural network techniques

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Kim, Sungwon; Shiri, Jalal

    2013-11-01

    This study investigates the ability of two different artificial neural network (ANN) models, generalized regression neural networks model (GRNNM) and Kohonen self-organizing feature maps neural networks model (KSOFM), and two different adaptive neural fuzzy inference system (ANFIS) models, ANFIS model with sub-clustering identification (ANFIS-SC) and ANFIS model with grid partitioning identification (ANFIS-GP), for estimating daily dew point temperature. The climatic data that consisted of 8 years of daily records of air temperature, sunshine hours, wind speed, saturation vapor pressure, relative humidity, and dew point temperature from three weather stations, Daego, Pohang, and Ulsan, in South Korea were used in the study. The estimates of ANN and ANFIS models were compared according to the three different statistics, root mean square errors, mean absolute errors, and determination coefficient. Comparison results revealed that the ANFIS-SC, ANFIS-GP, and GRNNM models showed almost the same accuracy and they performed better than the KSOFM model. Results also indicated that the sunshine hours, wind speed, and saturation vapor pressure have little effect on dew point temperature. It was found that the dew point temperature could be successfully estimated by using Tmean and RH variables.

  2. TU-AB-202-07: A Novel Method for Registration of Mid-Treatment PET/CT Images Under Conditions of Tumor Regression for Patients with Locally Advanced Lung Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharifi, Hoda; Department of Physics, Oakland University, Rochester, MI; Zhang, Hong

    Purpose: In PET-guided adaptive radiotherapy (RT), changes in the metabolic activity at individual voxels cannot be derived until the during-treatment CT images are appropriately registered to pre-treatment CT images. However, deformable image registration (DIR) usually does not preserve tumor volume. This may induce errors when comparing to the target. The aim of this study was to develop a DIR-integrated mechanical modeling technique to track radiation-induced metabolic changes on PET images. Methods: Three patients with non-small cell lung cancer (NSCLC) were treated with adaptive radiotherapy under RTOG 1106. Two PET/CT image sets were acquired 2 weeks before RT and 18 fractions after the start of treatment. DIR was performed to register the during-RT CT to the pre-RT CT using a B-spline algorithm and the resultant displacements in the region of tumor were remodeled using a hybrid finite element method (FEM). Gross tumor volume (GTV) was delineated on the during-RT PET/CT image sets and deformed using the 3D deformation vector fields generated by the CT-based registrations. Metabolic tumor volume (MTV) was calculated using the pre- and during-RT image sets. The quality of the PET mapping was evaluated based on the constancy of the mapped MTV and landmark comparison. Results: The B-spline-based registrations changed MTVs by 7.3%, 4.6% and −5.9% for the 3 patients, and the corresponding changes for the hybrid FEM method were −2.9%, 1% and 6.3%, respectively. Landmark comparisons were used to evaluate the Rigid, B-Spline, and hybrid FEM registrations, with mean errors of 10.1 ± 1.6 mm, 4.4 ± 0.4 mm, and 3.6 ± 0.4 mm for the three patients. The hybrid FEM method outperforms the B-Spline-only registration for patients with tumor regression. Conclusion: The hybrid FEM modeling technique improves the B-Spline registrations in tumor regions. This technique may help compare metabolic activities between two PET/CT images with regressing tumors. The author gratefully acknowledges the financial support from the National Institutes of Health Grant.

  3. The influence of family adaptability and cohesion on anxiety and depression of terminally ill cancer patients.

    PubMed

    Park, Young-Yoon; Jeong, Young-Jin; Lee, Junyong; Moon, Nayun; Bang, Inho; Kim, Hyunju; Yun, Kyung-Sook; Kim, Yong-I; Jeon, Tae-Hee

    2018-01-01

    This study investigated the effect of family members on terminally ill cancer patients by measuring the relationship of the presence of the family caregivers, visiting time by family and friends, and family adaptability and cohesion with patient's anxiety and depression. From June, 2016 to March, 2017, 100 terminally ill cancer patients who were admitted to a palliative care unit in Seoul, South Korea, were surveyed, and their medical records were reviewed. The Korean version of the Family Adaptability and Cohesion Evaluation Scales III and Hospital Anxiety-Depression Scale was used. Chi-square and multiple logistic regression analyses were used. The results of the chi-square analysis showed that the presence of family caregivers and family visit times did not have statistically significant effects on anxiety and depression in terminally ill cancer patients. In multiple logistic regression, when adjusted for age, sex, ECOG PS, and the monthly average income, the odds ratios (ORs) of the low family adaptability to anxiety and depression were 2.4 (1.03-5.83) and 5.4 (1.10-26.87), respectively. The OR of low family cohesion for depression was 5.4 (1.10-27.20) when adjusted for age, sex, ECOG PS, and monthly average household income. A higher family adaptability resulted in a lower degree of anxiety and depression in terminally ill cancer patients. The higher the family cohesion, the lower the degree of depression in the patient. The presence of the family caregiver and the visiting time by family and friends did not affect the patient's anxiety and depression.

  4. Regression modeling of ground-water flow

    USGS Publications Warehouse

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  5. Prediction of wastewater quality indicators at the inflow to the wastewater treatment plant using data mining methods

    NASA Astrophysics Data System (ADS)

    Szeląg, Bartosz; Barbusiński, Krzysztof; Studziński, Jan; Bartkiewicz, Lidia

    2017-11-01

    In the study, models developed using data mining methods are proposed for predicting wastewater quality indicators: biochemical and chemical oxygen demand, total suspended solids, total nitrogen and total phosphorus at the inflow to wastewater treatment plant (WWTP). The models are based on values measured in previous time steps and daily wastewater inflows. Also, independent prediction systems that can be used in case of monitoring devices malfunction are provided. Models of wastewater quality indicators were developed using MARS (multivariate adaptive regression spline) method, artificial neural networks (ANN) of the multilayer perceptron type combined with the classification model (SOM) and cascade neural networks (CNN). The lowest values of absolute and relative errors were obtained using ANN+SOM, whereas the MARS method produced the highest error values. It was shown that for the analysed WWTP it is possible to obtain continuous prediction of selected wastewater quality indicators using the two developed independent prediction systems. Such models can ensure reliable WWTP work when wastewater quality monitoring systems become inoperable, or are under maintenance.

  6. A time series model: First-order integer-valued autoregressive (INAR(1))

    NASA Astrophysics Data System (ADS)

    Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.

    2017-07-01

    Nonnegative integer-valued time series arise in many applications. The first-order integer-valued autoregressive model, INAR(1), is constructed with the binomial thinning operator to model nonnegative integer-valued time series; the process depends on its value one period before. The model parameter can be estimated by conditional least squares (CLS). The specification of INAR(1) parallels that of AR(1). Forecasting in INAR(1) uses a median or a Bayesian forecasting methodology. The median forecast is the smallest integer s whose cumulative distribution function (CDF) value is at least 0.5. The Bayesian methodology produces h-step-ahead forecasts by generating the model parameter and the innovation parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the smallest integer s for which the CDF is at least u, where u is drawn from the Uniform(0,1) distribution. The INAR(1) model is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 to April 2016.
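
    As a rough illustration of the model structure described above, the sketch below simulates an INAR(1) series via binomial thinning, estimates the parameters by conditional least squares, and forms a median one-step forecast. The Poisson innovation distribution, the parameter values, and the Monte Carlo approximation of the forecast CDF are assumptions made for the example, not details taken from the study.

```python
# INAR(1) sketch: X_t = alpha o X_{t-1} + eps_t with binomial thinning and
# Poisson(lam) innovations (the Poisson choice is an assumption). Estimate
# (alpha, lam) by conditional least squares and take the median of a simulated
# one-step-ahead distribution as the forecast.
import numpy as np

rng = np.random.default_rng(42)
alpha_true, lam_true, n = 0.6, 2.0, 300

x = np.zeros(n, dtype=int)
x[0] = rng.poisson(lam_true / (1 - alpha_true))
for t in range(1, n):
    survivors = rng.binomial(x[t - 1], alpha_true)   # binomial thinning
    x[t] = survivors + rng.poisson(lam_true)         # new arrivals

# CLS: E[X_t | X_{t-1}] = alpha * X_{t-1} + lam, so regress X_t on X_{t-1}.
A = np.column_stack([x[:-1], np.ones(n - 1)])
alpha_hat, lam_hat = np.linalg.lstsq(A, x[1:], rcond=None)[0]
print("alpha_hat, lam_hat:", round(alpha_hat, 3), round(lam_hat, 3))

# Median one-step forecast: smallest integer s with CDF(s) >= 0.5, here
# approximated from Monte Carlo draws of alpha_hat o x[-1] + eps.
draws = rng.binomial(x[-1], min(max(alpha_hat, 0.0), 1.0), 20000) \
        + rng.poisson(max(lam_hat, 0.0), 20000)
print("median one-step forecast:", int(np.median(draws)))
```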

  7. Development of probabilistic regional climate scenario in East Asia

    NASA Astrophysics Data System (ADS)

    Dairaku, K.; Ueno, G.; Ishizaki, N. N.

    2015-12-01

    Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. In order to develop probabilistic regional climate information that represents the uncertainty in climate scenario experiments in East Asia (CORDEX-EA and Japan), the probability distribution of 2 m air temperature was estimated using a regression model developed for this purpose. The method is easily applicable to other regions and other physical quantities, and can also be used to downscale to finer scales, depending on the availability of observation datasets. Probabilistic climate information for the present (1969-1998) and future (2069-2098) climate was developed using 21 CMIP3 models under the SRES A1B scenario and observational data (CRU_TS3.22 and University of Delaware data in CORDEX-EA, NIAES AMeDAS mesh data in Japan). The prototype probabilistic information for CORDEX-EA and Japan represents the quantified structural uncertainties of the multi-model ensemble experiments of climate change scenarios. Appropriate combinations of statistical methods and the optimization of climate ensemble experiments using multiple general circulation models (GCMs) and multiple regional climate models (RCMs) in ensemble downscaling experiments are investigated.
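
    As a schematic of how a multi-model ensemble can be turned into a probabilistic statement, the sketch below fits a normal distribution to a handful of projected temperature changes. The numbers are invented and the fit is far simpler than the regression-based estimation used in the study; it only illustrates the idea of quantifying ensemble spread as a probability.

```python
# Schematic only: fit a normal distribution to a small, invented multi-model
# ensemble of projected 2 m air temperature changes and report a probability.
import numpy as np
from scipy import stats

ensemble_dT = np.array([2.1, 2.8, 3.3, 2.5, 3.0, 2.2, 3.6, 2.9])  # K, hypothetical
mu, sigma = stats.norm.fit(ensemble_dT)

print(f"mean warming: {mu:.2f} K, spread: {sigma:.2f} K")
# Probability that warming exceeds 3 K under the fitted distribution.
print("P(dT > 3 K) =", round(1 - stats.norm.cdf(3.0, mu, sigma), 2))
```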

  8. Application of Avco data analysis and prediction techniques (ADAPT) to prediction of sunspot activity

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.; Amato, R. A.

    1972-01-01

    The results of applying Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for predicting future sunspot activity are presented. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reducing the errors of estimated sunspot activity and to recovering earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for predicting an entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle from any available portion of the cycle.
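
    A toy version of the "predict a cycle from the preceding cycles" idea is sketched below as an ordinary lag regression on a synthetic quasi-periodic series. The sinusoidal stand-in for sunspot activity, the lag length, and the least-squares fit are assumptions for illustration; the actual ADAPT algorithms are not reproduced.

```python
# Toy lag-regression sketch: predict the activity level from values roughly one
# and two cycles earlier, on a synthetic series standing in for sunspot data.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 600)
activity = 80 + 60 * np.sin(2 * np.pi * t / 132) + rng.normal(0, 5, t.size)

lag = 132  # roughly one 11-year cycle, in months
X = np.column_stack([activity[:-2 * lag], activity[lag:-lag],
                     np.ones(t.size - 2 * lag)])
y = activity[2 * lag:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - y) ** 2))
print("coefficients:", np.round(coef, 2), "RMSE:", round(float(rmse), 1))
```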

  9. Vibration-based structural health monitoring using adaptive statistical method under varying environmental condition

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2014-03-01

    It is well known that the dynamic properties of a structure, such as its natural frequencies, depend not only on damage but also on environmental conditions (e.g., temperature). The variation in the dynamic characteristics of a structure due to environmental conditions may mask damage to the structure. Without taking changes in environmental conditions into account, false-positive or false-negative damage diagnoses may occur, making structural health monitoring unreliable. To address this problem, many researchers have constructed regression models of the structural responses that include environmental factors. The key to the success of this approach is formulating the relationship between the input and output variables of the regression model so as to account for the environmental variations. However, it is quite challenging to determine the proper environmental variables and measurement locations in advance so as to fully represent the relationship between the structural responses and the environmental variations. An alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method depends strongly on the accuracy of the description of the normal condition. Generally, there is no prior information on the normal condition during data acquisition, so the normal condition is determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for structural damage detection under environmental change. One advantage of this method is the ability of generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise levels under environmental variation. A comparative study with a conventional method (i.e., a fixed-reference scheme) demonstrates the superior performance of the proposed method for structural damage detection.
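
    The sketch below illustrates the conventional fixed-reference PCA baseline that the paper improves upon, not the proposed adaptive method: PCA is fitted to natural frequencies measured over a range of temperatures, and new measurements are flagged when their reconstruction error (the SPE/Q statistic) exceeds a threshold set from the baseline data. The frequencies, temperature dependence, and damage pattern are synthetic assumptions.

```python
# Fixed-reference PCA novelty-detection baseline on synthetic data: the leading
# principal component absorbs the temperature-driven variation, and damage is
# flagged via the squared reconstruction error (SPE / Q statistic).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
temp = rng.uniform(-10, 30, 500)
# Two natural frequencies that drift with temperature plus measurement noise.
baseline = np.column_stack([5.0 - 0.002 * temp, 12.0 - 0.004 * temp])
baseline += rng.normal(0, 0.01, baseline.shape)

pca = PCA(n_components=1).fit(baseline)          # environmental component

def spe(x):
    recon = pca.inverse_transform(pca.transform(x))
    return np.sum((x - recon) ** 2, axis=1)

threshold = np.quantile(spe(baseline), 0.99)

# A "damaged" observation: one frequency drops by 2 %, independent of temperature.
damaged = np.array([[5.0 * 0.98, 12.0]])
print("SPE threshold:", round(float(threshold), 6))
print("damaged SPE:", round(float(spe(damaged)[0]), 6),
      "-> alarm" if spe(damaged)[0] > threshold else "-> normal")
```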

  10. Is long-term particulate matter and nitrogen dioxide air pollution associated with incident monoclonal gammopathy of undetermined significance (MGUS)? An analysis of the Heinz Nixdorf Recall study.

    PubMed

    Orban, Ester; Arendt, Marina; Hennig, Frauke; Lucht, Sarah; Eisele, Lewin; Jakobs, Hermann; Dürig, Jan; Hoffmann, Barbara; Jöckel, Karl-Heinz; Moebus, Susanne

    2017-11-01

    Exposure to air pollution activates the innate immune system and influences the adaptive immune system in experimental settings. We investigated the association of residential long-term exposure to particulate matter (PM) and NO2 air pollution with monoclonal gammopathy of undetermined significance (MGUS) as a marker of adaptive immune system activation. We used data from the baseline (2000-2003), 5-year (2006-2008) and 10-year (2011-2015) follow-up examinations of the German Heinz Nixdorf Recall cohort study of 4814 participants (45-75 years). Residential exposure to PM size fractions and NO2 was estimated by land-use regression (ESCAPE-LUR, annual mean 2008/2009) and dispersion chemistry transport models (EURAD-CTM, 3-year mean at baseline). We used logistic regression to estimate the effects of air pollutants on incident MGUS, adjusting for age, sex, education, smoking status, physical activity, and BMI. As a non-linear approach, we looked at quartiles (2-4) of the air pollutants in comparison to quartile 1. Of the 3949 participants with complete data, 100 developed MGUS during the 10-year follow-up. In the main model, only coarse PM was associated with incident MGUS (OR per IQR (1.9 μg/m3): 1.32, 95% CI 1.04-1.67). We further found positive associations between PM size fractions estimated by ESCAPE-LUR and incident MGUS by quartiles of exposure (OR Q4 vs Q1: PM2.5 2.03 (1.08-3.80); PM10 1.97 (1.05-3.67); coarse PM 1.98 (1.09-3.60)). Our results indicate that an association between long-term exposure to PM and MGUS may exist. Further epidemiologic studies are needed to corroborate this possible link. Copyright © 2017 Elsevier Ltd. All rights reserved.
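
    The two exposure codings described above (a continuous exposure expressed per IQR, and quartile indicators compared with quartile 1) can be sketched with statsmodels logistic regression as below. The data, variable names, and the reduced covariate set are placeholders, not the cohort's actual data or full adjustment model.

```python
# Sketch of the two exposure codings on synthetic data: (1) OR per IQR of a
# continuous exposure, and (2) quartiles 2-4 versus quartile 1.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 4000
pm_coarse = rng.gamma(4.0, 2.0, n)                       # hypothetical exposure
age = rng.uniform(45, 75, n)
logit = -5.5 + 0.08 * pm_coarse + 0.02 * (age - 60)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"y": y, "pm": pm_coarse, "age": age})

# (1) OR per IQR of the continuous exposure.
iqr = df["pm"].quantile(0.75) - df["pm"].quantile(0.25)
X1 = sm.add_constant(pd.DataFrame({"pm_iqr": df["pm"] / iqr, "age": df["age"]}))
fit1 = sm.Logit(df["y"], X1).fit(disp=0)
print("OR per IQR:", np.exp(fit1.params["pm_iqr"]).round(2))

# (2) Quartiles 2-4 versus quartile 1 as a crude non-linear check.
df["q"] = pd.qcut(df["pm"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
X2 = sm.add_constant(pd.concat([pd.get_dummies(df["q"], drop_first=True).astype(float),
                                df[["age"]]], axis=1))
fit2 = sm.Logit(df["y"], X2).fit(disp=0)
print("ORs Q2-Q4 vs Q1:", np.exp(fit2.params[["Q2", "Q3", "Q4"]]).round(2))
```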

  11. Genetic Adaptation to Climate in White Spruce Involves Small to Moderate Allele Frequency Shifts in Functionally Diverse Genes.

    PubMed

    Hornoy, Benjamin; Pavy, Nathalie; Gérardi, Sébastien; Beaulieu, Jean; Bousquet, Jean

    2015-11-11

    Understanding the genetic basis of adaptation to climate is of paramount importance for preserving and managing genetic diversity in plants in a context of climate change. Yet, this objective has been addressed mainly in short-lived model species. Thus, expanding knowledge to nonmodel species with contrasting life histories, such as forest trees, appears necessary. To uncover the genetic basis of adaptation to climate in the widely distributed boreal conifer white spruce (Picea glauca), an environmental association study was conducted using 11,085 single nucleotide polymorphisms representing 7,819 genes, that is, approximately a quarter of the transcriptome. Linear and quadratic regressions controlling for isolation-by-distance, and the Random Forest algorithm, identified several dozen genes putatively under selection, among which 43 showed strongest signals along temperature and precipitation gradients. Most of them were related to temperature. Small to moderate shifts in allele frequencies were observed. Genes involved encompassed a wide variety of functions and processes, some of them being likely important for plant survival under biotic and abiotic environmental stresses according to expression data. Literature mining and sequence comparison also highlighted conserved sequences and functions with angiosperm homologs. Our results are consistent with theoretical predictions that local adaptation involves genes with small frequency shifts when selection is recent and gene flow among populations is high. Accordingly, genetic adaptation to climate in P. glauca appears to be complex, involving many independent and interacting gene functions, biochemical pathways, and processes. From an applied perspective, these results shall lead to specific functional/association studies in conifers and to the development of markers useful for the conservation of genetic resources. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
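
    A heavily simplified sketch of the two approaches named above is given below: a linear-plus-quadratic regression of a population allele frequency on temperature with a geographic-distance term as a crude isolation-by-distance control, and a Random Forest importance ranking over several climate variables. All data and the single-locus setup are synthetic assumptions; the study's actual association pipeline is far richer.

```python
# Simplified environmental-association sketch on synthetic population data.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n_pops = 120
temperature = rng.uniform(-5, 15, n_pops)
precipitation = rng.uniform(400, 1200, n_pops)
dist_from_origin = rng.uniform(0, 2000, n_pops)          # km, crude IBD proxy
allele_freq = 0.4 + 0.01 * temperature + rng.normal(0, 0.03, n_pops)

# Linear + quadratic climate terms, controlling for distance.
X = sm.add_constant(np.column_stack([temperature, temperature ** 2,
                                     dist_from_origin]))
fit = sm.OLS(allele_freq, X).fit()
print("OLS params:", fit.params.round(4), "R2:", round(fit.rsquared, 3))

# Random Forest importance ranking across candidate climate variables.
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(np.column_stack([temperature, precipitation, dist_from_origin]),
       allele_freq)
print("RF importances (temp, precip, distance):", rf.feature_importances_.round(2))
```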

  12. Genetic Adaptation to Climate in White Spruce Involves Small to Moderate Allele Frequency Shifts in Functionally Diverse Genes

    PubMed Central

    Hornoy, Benjamin; Pavy, Nathalie; Gérardi, Sébastien; Beaulieu, Jean; Bousquet, Jean

    2015-01-01

    Understanding the genetic basis of adaptation to climate is of paramount importance for preserving and managing genetic diversity in plants in a context of climate change. Yet, this objective has been addressed mainly in short-lived model species. Thus, expanding knowledge to nonmodel species with contrasting life histories, such as forest trees, appears necessary. To uncover the genetic basis of adaptation to climate in the widely distributed boreal conifer white spruce (Picea glauca), an environmental association study was conducted using 11,085 single nucleotide polymorphisms representing 7,819 genes, that is, approximately a quarter of the transcriptome. Linear and quadratic regressions controlling for isolation-by-distance, and the Random Forest algorithm, identified several dozen genes putatively under selection, among which 43 showed strongest signals along temperature and precipitation gradients. Most of them were related to temperature. Small to moderate shifts in allele frequencies were observed. Genes involved encompassed a wide variety of functions and processes, some of them being likely important for plant survival under biotic and abiotic environmental stresses according to expression data. Literature mining and sequence comparison also highlighted conserved sequences and functions with angiosperm homologs. Our results are consistent with theoretical predictions that local adaptation involves genes with small frequency shifts when selection is recent and gene flow among populations is high. Accordingly, genetic adaptation to climate in P. glauca appears to be complex, involving many independent and interacting gene functions, biochemical pathways, and processes. From an applied perspective, these results shall lead to specific functional/association studies in conifers and to the development of markers useful for the conservation of genetic resources. PMID:26560341

  13. Test selection, adaptation, and evaluation: a systematic approach to assess nutritional influences on child development in developing countries.

    PubMed

    Prado, Elizabeth L; Hartini, Sri; Rahmawati, Atik; Ismayani, Elfa; Hidayati, Astri; Hikmah, Nurul; Muadz, Husni; Apriatni, Mandri S; Ullman, Michael T; Shankar, Anuraj H; Alcock, Katherine J

    2010-03-01

    Evaluating the impact of nutrition interventions on developmental outcomes in developing countries can be challenging since most assessment tests have been produced in and for developed country settings. Such tests may not be valid measures of children's abilities when used in a new context. We present several principles for the selection, adaptation, and evaluation of tests assessing the developmental outcomes of nutrition interventions in developing countries where standard assessment tests do not exist. We then report the application of these principles for a nutrition trial on the Indonesian island of Lombok. Three hundred children age 22-55 months in Lombok participated in a series of pilot tests for the purpose of test adaptation and evaluation. Four hundred and eighty-seven 42-month-old children in Lombok were tested on the finalized test battery. The developmental assessment tests were adapted to the local context and evaluated for a number of psychometric properties, including convergent and discriminant validity, which were measured based on multiple regression models with maternal education, depression, and age predicting each test score. The adapted tests demonstrated satisfactory psychometric properties and the expected pattern of relationships with the three maternal variables. Maternal education significantly predicted all scores but one, maternal depression predicted socio-emotional competence, socio-emotional problems, and vocabulary, while maternal age predicted socio-emotional competence only. Following the methodological principles we present resulted in tests that were appropriate for children in Lombok and informative for evaluating the developmental outcomes of nutritional supplementation in the research context. Following this approach in future studies will help to determine which interventions most effectively improve child development in developing countries.

  14. The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a…
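
    One way to fit a cumulative logit model of the kind discussed above is statsmodels' OrderedModel (available from statsmodels 0.12 onward), sketched below on synthetic data. The two "essay features", the way ordinal scores are generated, and the prediction example are assumptions for illustration, not an actual scoring engine.

```python
# Cumulative logit (proportional-odds) sketch for ordinal essay scores,
# assuming statsmodels >= 0.12 for OrderedModel.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 1000
length_feat = rng.normal(0, 1, n)        # e.g. standardized essay length
vocab_feat = rng.normal(0, 1, n)         # e.g. vocabulary sophistication
latent = 1.0 * length_feat + 0.7 * vocab_feat + rng.logistic(0, 1, n)
score = pd.Series(pd.cut(latent, bins=[-np.inf, -1.5, 0, 1.5, np.inf],
                         labels=[1, 2, 3, 4], ordered=True), name="score")

X = pd.DataFrame({"length": length_feat, "vocab": vocab_feat})
model = OrderedModel(score, X, distr="logit")
res = model.fit(method="bfgs", disp=0)
print(res.params.round(2))               # feature effects plus 3 thresholds

# Predicted probabilities of each score category for a new essay.
new_essay = pd.DataFrame({"length": [0.5], "vocab": [1.0]})
probs = res.model.predict(res.params, exog=new_essay)
print("P(score = 1..4):", np.round(probs, 2))
```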

  15. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. Blending direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on Lyapunov theory, and the second is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
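
    The sketch below illustrates the recursive least-squares idea behind the indirect adaptive law: unknown plant parameters are estimated online from regressor/output pairs with a forgetting factor, and those estimates could then be fed to a model inversion controller. The scalar discrete-time plant and parameter values are toy assumptions, not the paper's flight-control implementation.

```python
# Recursive least-squares (RLS) parameter estimation sketch for a toy
# discrete-time plant x[k+1] = a*x[k] + b*u[k] with unknown (a, b).
import numpy as np

rng = np.random.default_rng(8)
a_true, b_true = 0.9, 0.5
theta = np.zeros(2)                  # parameter estimate [a_hat, b_hat]
P = np.eye(2) * 100.0                # covariance of the estimate
lam = 0.995                          # forgetting factor

x = 0.0
for k in range(500):
    u = rng.normal()                                  # excitation input
    x_next = a_true * x + b_true * u + rng.normal(0, 0.01)
    phi = np.array([x, u])                            # regressor
    # RLS update: gain, parameter correction, covariance update.
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (x_next - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    x = x_next

print("estimated [a, b]:", theta.round(3))
```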

  16. Personality and risk perception in transport.

    PubMed

    Fyhri, Aslak; Backer-Grøndahl, Agathe

    2012-11-01

    Within research on individual variation in risk perception, personality has been suggested as one important factor. In the present study, personality traits (44 items from the Big Five inventory) were investigated in relation to risk perception in transport and transport behavioural adaptations. In a sample of 312 participants, we found that the personality trait 'emotional stability versus neuroticism' was negatively correlated with risk perception (operationalised as "thinking about the possibility") of an accident (-0.38) and of an unpleasant incident, such as crime, violence, or robbery (-0.25). 'Agreeableness' was also negatively related to risk perception, though primarily in relation to the perceived risk of unpleasant incidents on transport modes in which one interacts with other people (0.25). Moreover, regression analyses showed that 'emotional stability' was a significant predictor of behavioural adaptations on the bus. The regression analyses explained between 17 and 26 percent of the variance in behavioural adaptations. The results show that different groups of people vary systematically in their perception of risk in transport. Furthermore, these differences are manifest as differences in risk-preventive behaviour at a strategic level, i.e. as decisions about avoiding risky situations. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Dreams Fulfilled and Shattered: Determinants of Segmented Assimilation in the Second Generation*

    PubMed Central

    Haller, William; Portes, Alejandro; Lynch, Scott M.

    2013-01-01

    We summarize prior theories on the adaptation process of the contemporary immigrant second generation as a prelude to presenting additive and interactive models showing the impact of family variables, school contexts and academic outcomes on the process. For this purpose, we regress indicators of educational and occupational achievement in early adulthood on predictors measured three and six years earlier. The Children of Immigrants Longitudinal Study (CILS), used for the analysis, allows us to establish a clear temporal order among exogenous predictors and the two dependent variables. We also construct a Downward Assimilation Index (DAI), based on six indicators and regress it on the same set of predictors. Results confirm a pattern of segmented assimilation in the second generation, with a significant proportion of the sample experiencing downward assimilation. Predictors of the latter are the obverse of those of educational and occupational achievement. Significant interaction effects emerge between these predictors and early school contexts, defined by different class and racial compositions. Implications of these results for theory and policy are examined. PMID:24223437

  18. Estimation of the daily global solar radiation based on the Gaussian process regression methodology in the Saharan climate

    NASA Astrophysics Data System (ADS)

    Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said

    2018-06-01

    Accurate estimation of solar radiation is a major concern in renewable energy applications. Over the past few years, many machine learning paradigms have been proposed to improve estimation performance, mostly based on artificial neural networks, fuzzy logic, support vector machines and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation received on a horizontal surface using the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) was used to validate the methodology. Several input combinations were tested; it was found that the GPR model based on sunshine duration, minimum air temperature, and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m2, 1.15 MJ/m2, 5.2%, and 98.42%, respectively.
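
    A minimal GPR sketch using the same three inputs identified as best above (sunshine duration, minimum air temperature, relative humidity) is shown below with scikit-learn. The data are synthetic and the RBF-plus-white-noise kernel is an assumption; the study's actual kernel choice and data are not reproduced.

```python
# Gaussian process regression sketch for daily global solar radiation from
# sunshine duration, minimum temperature and relative humidity (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(9)
n = 365
sunshine = rng.uniform(2, 12, n)          # h/day
t_min = rng.uniform(5, 30, n)             # deg C
rel_hum = rng.uniform(10, 70, n)          # %
ghi = 5 + 1.6 * sunshine + 0.05 * t_min - 0.02 * rel_hum + rng.normal(0, 0.8, n)

X = np.column_stack([sunshine, t_min, rel_hum])
kernel = 1.0 * RBF(length_scale=[5.0, 10.0, 30.0]) + WhiteKernel(1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:300], ghi[:300])

pred, std = gpr.predict(X[300:], return_std=True)
rmse = np.sqrt(np.mean((pred - ghi[300:]) ** 2))
print("test RMSE [MJ/m2]:", round(float(rmse), 2))
print("correlation r:", round(float(np.corrcoef(pred, ghi[300:])[0, 1]), 3))
```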

  19. Peritraumatic tonic immobility is associated with posttraumatic stress symptoms in undergraduate Brazilian students.

    PubMed

    Portugal, Liana Catarina L; Pereira, Mirtes Garcia; Alves, Rita de Cássia S; Tavares, Gisella; Lobo, Isabela; Rocha-Rego, Vanessa; Marques-Portella, Carla; Mendlowicz, Mauro V; Coutinho, Evandro S; Fiszman, Adriana; Volchan, Eliane; Figueira, Ivan; Oliveira, Letícia de

    2012-03-01

    Tonic immobility is a defensive reaction occurring under extreme life threat. Patients with posttraumatic stress disorder (PTSD) reporting peritraumatic tonic immobility show the most severe symptoms and a poorer response to treatment. This study investigated the predictive value of tonic immobility for posttraumatic stress symptoms in a non-clinical sample. One hundred and ninety-eight college students exposed to various life-threatening events were selected to participate. The Posttraumatic Stress Disorder Checklist - Civilian Version (PCL-C) and tonic immobility questions were used. Linear regression models were fitted to investigate the association between peritraumatic tonic immobility and PCL-C scores. Peritraumatic dissociation, peritraumatic panic reactions, negative affect, gender, type of trauma, and time since trauma were considered as confounding variables. We found a significant association between peritraumatic tonic immobility and PTSD symptoms in a non-clinical sample exposed to various traumas, even after controlling for the confounding variables in the regression (β = 1.99, p = 0.017). This automatic reaction under extreme life-threatening stress, although adaptive for defense, may have pathological consequences, as implied by its association with PTSD symptoms.

  20. Moderation analysis using a two-level regression model.

    PubMed

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
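
    For orientation, the sketch below fits the baseline LS-estimated MMR model that the two-level approach is compared against: the criterion is regressed on a predictor, a moderator, and their product term. The synthetic prestige/education/race variables are stand-ins for the GSS example, and heteroscedastic errors are built in to mimic the situation where NML is claimed to help; the paper's two-level NML estimator itself is not implemented here.

```python
# Baseline moderated multiple regression (MMR) with a product term, estimated
# by ordinary least squares on synthetic, heteroscedastic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
n = 1500
education = rng.normal(13, 3, n)
race = rng.binomial(1, 0.2, n)                  # hypothetical 0/1 moderator
# The education slope differs across groups and the error variance is
# heteroscedastic, which is where the two-level NML approach is said to help.
prestige = 20 + 2.0 * education + 3.0 * race - 0.8 * education * race \
           + rng.normal(0, 5 + 3 * race, n)

df = pd.DataFrame({"prestige": prestige, "education": education, "race": race})
mmr = smf.ols("prestige ~ education * race", data=df).fit()
print(mmr.params.round(2))        # 'education:race' is the moderation effect
print("moderation p-value:", round(mmr.pvalues["education:race"], 4))
```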
