Zhu, Yu; Xia, Jie-lai; Wang, Jing
2009-09-01
Application of the single autoregressive integrated moving average (ARIMA) model and the combined ARIMA-generalized regression neural network (GRNN) model to research on the incidence of scarlet fever. An ARIMA model was established based on monthly scarlet fever incidence data of one city from 2000 to 2006. The fitted values of the ARIMA model were used as the input of the GRNN, and the actual values were used as its output. After training the GRNN, the performance of the single ARIMA model and the ARIMA-GRNN combination model was compared. The mean error rates (MER) of the single ARIMA model and the ARIMA-GRNN combination model were 31.6% and 28.7%, respectively, and the coefficients of determination (R²) of the two models were 0.801 and 0.872, respectively. The fitting efficacy of the ARIMA-GRNN combination model was better than that of the single ARIMA model, which gives it practical value in research on time-series data such as the incidence of scarlet fever.
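The GRNN in this hybrid is essentially a Gaussian-kernel regression over the ARIMA fitted values. A minimal sketch of the combination step, with toy numbers and an arbitrary smoothing factor sigma standing in for the study's settings:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """General regression neural network (Nadaraya-Watson kernel regression):
    each prediction is a Gaussian-kernel weighted average of training targets."""
    x_train = np.asarray(x_train, float).reshape(-1, 1)
    x_query = np.asarray(x_query, float).reshape(-1, 1)
    d2 = (x_query - x_train.T) ** 2              # (n_query, n_train) squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # kernel weights
    return (w @ np.asarray(y_train, float)) / w.sum(axis=1)

# Combination scheme from the abstract: ARIMA fitted values as GRNN input,
# observed incidence as GRNN output (toy numbers, not the study's data).
arima_fitted = [1.2, 1.8, 2.4, 3.1, 2.9]
observed     = [1.0, 2.0, 2.2, 3.4, 3.0]
corrected = grnn_predict(arima_fitted, observed, arima_fitted, sigma=0.4)
```

In the study's setup the GRNN thus learns a correction to the ARIMA output; the smoothing factor would normally be tuned by cross-validation rather than fixed.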
Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan
2013-01-01
The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…
Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc
2015-08-01
Third molar development (TMD) has been widely utilized as one of the radiographic methods for dental age estimation. Using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated into the TMD regression model. This study aims to evaluate the performance of dental age estimation for the individual models and the combined model (TMD and TME), based on classic multiple linear and principal component regressions. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and by Olze were employed to stage TMD and TME, respectively. The data were divided to develop three respective models based on the two regression approaches, multiple linear and principal component analysis. The trained models were then validated on the test sample, and the accuracy of age prediction was compared between models. The coefficient of determination (R²) and root mean square error (RMSE) were calculated. In both genders, adjusted R² increased in the linear regression of the combined model compared with the individual models. An overall decrease in RMSE was detected for the combined model compared with TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, the combined model exhibited low adjusted R² and high RMSE, except in males. Dental age is therefore better predicted using the combined model in multiple linear regression. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
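The gain reported for the combined model can be illustrated by comparing the adjusted R² of single-predictor and two-predictor linear models. A sketch on synthetic stand-in data (the coefficients and noise levels are invented, not the study's):

```python
import numpy as np

def adj_r2(X, y):
    """Adjusted coefficient of determination for an OLS fit with intercept."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

rng = np.random.default_rng(1)
tmd = rng.uniform(1, 10, 300)                  # toy development stage scores
tme = 0.5 * tmd + rng.normal(0, 1, 300)        # toy eruption stage scores
age = 14 + 0.8 * tmd + 0.5 * tme + rng.normal(0, 0.7, 300)

scores = {"TMD": adj_r2(tmd[:, None], age),
          "TME": adj_r2(tme[:, None], age),
          "TMD+TME": adj_r2(np.column_stack([tmd, tme]), age)}
```

Because adjusted R² penalizes the extra predictor, the combined model only wins when TME carries information beyond TMD, which mirrors the study's comparison.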
Regression rate behaviors of HTPB-based propellant combinations for hybrid rocket motor
NASA Astrophysics Data System (ADS)
Sun, Xingliang; Tian, Hui; Li, Yuelong; Yu, Nanjia; Cai, Guobiao
2016-02-01
The purpose of this paper is to characterize the regression rate behavior of hybrid rocket motor propellant combinations, using hydrogen peroxide (HP), gaseous oxygen (GOX), and nitrous oxide (N2O) as the oxidizer and hydroxyl-terminated polybutadiene (HTPB) as the base fuel. To complete this research by experiment and simulation, a hybrid rocket motor test system and a numerical simulation model are established. A series of hybrid rocket motor firing tests is conducted with different propellant combinations, and several of them are used as references for numerical simulations. The numerical simulation model is developed by combining the Navier-Stokes equations with a turbulence model, a one-step global reaction model, and a solid-gas coupling model. The distribution of regression rate along the axis is determined by applying the simulation model to predict the combustion process and heat transfer inside the hybrid rocket motor. The time-space averaged regression rate shows good agreement between the numerical results and the experimental data. The results indicate that the N2O/HTPB and GOX/HTPB propellant combinations have higher regression rates, the enhancement effect of the latter being significant due to its higher flame temperature. Furthermore, the inclusion of aluminum (Al) and/or ammonium perchlorate (AP) in the grain does enhance the regression rate, mainly because more energy is released inside the chamber and heat is fed back to the grain surface by aluminum combustion.
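Hybrid regression-rate data of this kind are conventionally summarized by a power law r = a·Gox^n, which reduces to a straight-line fit in log-log space. A sketch with invented coefficients (not the paper's measurements):

```python
import numpy as np

# Empirical hybrid-fuel regression-rate law r = a * Gox^n, recovered by a
# log-log least-squares fit; the numbers below are synthetic placeholders.
Gox = np.array([50.0, 100.0, 200.0, 400.0])   # oxidizer mass flux, kg/(m^2 s)
r = 0.03 * Gox ** 0.62                        # regression rate, mm/s (synthetic)

n_exp, log_a = np.polyfit(np.log(Gox), np.log(r), 1)   # slope = n, intercept = ln(a)
a = np.exp(log_a)
```

With real firing-test data the scatter around the fitted line indicates how well a single power law captures the propellant combination's behavior.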
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 topsoil variables, analyzed prior to planting, and paddy yields at standard fertilizer rates. The data came from multi-location rice trials carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model alone and in combination with the fuzzy c-means method. Analyses of normality and multicollinearity indicate that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yields into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of multiple linear regression and fuzzy c-means outperforms multiple linear regression alone, with a lower mean square error.
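A minimal sketch of the hybrid: a small fuzzy c-means implementation partitions the samples, then a separate least-squares model is fitted per cluster. The settings (fuzzifier m = 2, hard assignment by maximum membership) are common defaults, not necessarily the paper's choices:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns (c, d) centers and (n, c) memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def clusterwise_ols(X, y, U):
    """Fit one least-squares model per cluster (hard-assigned by max membership)."""
    labels = U.argmax(axis=1)
    Xd = np.column_stack([np.ones(len(X)), X])
    models = {k: np.linalg.lstsq(Xd[labels == k], y[labels == k], rcond=None)[0]
              for k in range(U.shape[1])}
    return labels, models
```

Per-cluster regression lowers mean square error whenever the yield-covariate relationship differs between the clusters, which is the effect the abstract reports.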
NASA Technical Reports Server (NTRS)
MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.
2005-01-01
Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of the regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield best performance and avoid model discontinuity over day/night data boundaries.
Feng, Yongjiu; Tong, Xiaohua
2017-09-22
Defining transition rules is an important issue in cellular automaton (CA)-based land use modeling because these models incorporate highly correlated driving factors. Multicollinearity among correlated driving factors may produce negative effects that must be eliminated from the modeling. Using exploratory regression under pre-defined criteria, we identified all possible combinations of factors from the candidate factors affecting land use change. Three combinations that incorporate five driving factors meeting the pre-defined criteria were assessed. With the selected combinations of factors, three logistic regression-based CA models were built to simulate dynamic land use change in Shanghai, China, from 2000 to 2015. For comparative purposes, a CA model with all candidate factors was also applied to simulate the land use change. Simulations using the three CA models with multicollinearity eliminated performed better (with accuracy improvements of about 3.6%) than the model incorporating all candidate factors. Our results showed that not all candidate factors are necessary for accurate CA modeling and that the simulations were not sensitive to changes in statistically non-significant driving factors. We conclude that exploratory regression is an effective method to search for the optimal combinations of driving factors, leading to better land use change models that are devoid of multicollinearity. We suggest identifying dominant factors and eliminating multicollinearity before building land change models, making it possible to simulate more realistic outcomes.
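Exploratory regression of this kind can be sketched as an exhaustive search over factor combinations, discarding any combination whose variance inflation factor (VIF) signals multicollinearity. The VIF cutoff of 7.5 and the adjusted-R² ranking below are illustrative simplifications of the paper's criteria:

```python
import numpy as np
from itertools import combinations

def ols_r2(X, y):
    """Coefficient of determination of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def max_vif(X):
    """Largest variance inflation factor among the columns of X."""
    vifs = []
    for j in range(X.shape[1]):
        r2 = ols_r2(np.delete(X, j, axis=1), X[:, j])
        vifs.append(1.0 / max(1 - r2, 1e-12))
    return max(vifs)

def exploratory_regression(X, y, names, max_k=3, vif_cut=7.5):
    """Screen every factor combination; keep those passing the VIF cut,
    ranked by adjusted R² (a stand-in for the paper's full criteria)."""
    n = len(y); results = []
    for k in range(1, max_k + 1):
        for combo in combinations(range(X.shape[1]), k):
            Xs = X[:, combo]
            if k > 1 and max_vif(Xs) > vif_cut:
                continue                        # multicollinear combination: drop
            r2 = ols_r2(Xs, y)
            adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
            results.append((adj, [names[i] for i in combo]))
    return sorted(results, reverse=True)
```

The surviving combinations would then feed the logistic-regression CA transition rules, as in the study.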
Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C
2018-06-29
A new multivariate regression model, named Error Covariance Penalized Regression (ECPR), is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid- and near-infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.
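One plausible reading of the penalization idea, sketched as a generalized ridge estimator in which a positive-definite matrix (standing in for the error covariance matrix) shapes the penalty; the actual ECPR formulation may differ from this simplification:

```python
import numpy as np

def ecm_penalized_regression(X, y, P, lam=1.0):
    """Generalized ridge sketch of the ECPR idea: solve (X'X + lam*P) b = X'y,
    where P is a positive-definite penalty matrix. P = I with lam > 0
    recovers ordinary ridge regression; lam = 0 recovers OLS."""
    return np.linalg.solve(X.T @ X + lam * P, X.T @ y)
```

Penalizing through the error covariance shrinks the coefficient directions most contaminated by measurement noise, which is the intuition behind the reported gains under non-iid noise.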
Cumulative Risk and Impact Modeling on Environmental Chemical and Social Stressors.
Huang, Hongtai; Wang, Aolin; Morello-Frosch, Rachel; Lam, Juleen; Sirota, Marina; Padula, Amy; Woodruff, Tracey J
2018-03-01
The goal of this review is to identify cumulative modeling methods used to evaluate combined effects of exposures to environmental chemicals and social stressors. The specific review question is: What are the existing quantitative methods used to examine the cumulative impacts of exposures to environmental chemical and social stressors on health? There has been an increase in literature that evaluates combined effects of exposures to environmental chemicals and social stressors on health using regression models; very few studies applied other data mining and machine learning techniques to this problem. The majority of studies we identified used regression models to evaluate combined effects of multiple environmental and social stressors. With proper study design and appropriate modeling assumptions, additional data mining methods may be useful to examine combined effects of environmental and social stressors.
Efficient Regressions via Optimally Combining Quantile Information*
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
Shi, J Q; Wang, B; Will, E J; West, R M
2012-11-20
We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.
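The two components can be sketched as a two-stage fit: least squares for the parametric mean, then a Gaussian-process posterior mean over the residuals. The kernel and noise settings below are illustrative, and the real model estimates both structures jointly rather than sequentially:

```python
import numpy as np

def rbf(a, b, ell=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two 1-D input sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ell ** 2)

def gpfr_fit_predict(x, y, x_new, ell=1.0, noise=1e-2):
    """Two-stage sketch of mixed-effects GP functional regression:
    a linear (parametric) mean plus a GP over the residuals."""
    # Parametric component: simple linear dose-response trend
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    # Nonparametric component: GP posterior mean on the residuals
    K = rbf(x, x, ell) + noise * np.eye(len(x))
    ks = rbf(x_new, x, ell)
    mean_new = np.column_stack([np.ones_like(x_new), x_new]) @ beta
    return mean_new + ks @ np.linalg.solve(K, resid)
```

As more observations from a subject arrive, the GP term personalizes the curve around the shared parametric trend, mirroring the patient-specific updating described in the abstract.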
Evaluation of trends in wheat yield models
NASA Technical Reports Server (NTRS)
Ferguson, M. C.
1982-01-01
Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R² is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piecewise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trends: four combinations for the dependent model and seven for the independent model.
Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...
Zhong-xiang, Feng; Shi-sheng, Lu; Wei-hua, Zhang; Nan-nan, Zhang
2014-01-01
In order to build a combined model that can capture the variation pattern of road traffic accident death tolls, reflect the influence of multiple factors on traffic accidents, and improve prediction accuracy, a Verhulst model was built based on the number of road traffic accident deaths in China from 2002 to 2011, and car ownership, population, GDP, highway freight volume, highway passenger transport volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a weighted combination prediction model, with the weight coefficients calculated by the Shapley value method according to each model's contribution. Finally, the combined model was used to recalculate the death tolls from 2002 to 2011 and was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data but also quantify the degree of influence of each factor, with high accuracy and strong practicability. PMID:25610454
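The combination step can be sketched as a convex weighting of the two forecast series. Here inverse mean absolute error supplies the weights, as a simplified stand-in for the Shapley-value contribution calculation used in the paper:

```python
import numpy as np

def combine_forecasts(actual, pred_a, pred_b):
    """Weight two forecast series by inverse mean absolute error -- a simplified
    stand-in for Shapley-value contribution weighting. Returns (weights, combined)."""
    actual, pred_a, pred_b = map(np.asarray, (actual, pred_a, pred_b))
    err = np.array([np.abs(actual - pred_a).mean(),
                    np.abs(actual - pred_b).mean()])
    w = (1.0 / err) / (1.0 / err).sum()          # normalize to sum to 1
    return w, w[0] * pred_a + w[1] * pred_b
```

Because the weights are convex, the combined series' mean absolute error can never exceed that of the worse individual model, which is one reason weighted combination forecasts are robust.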
Ensemble habitat mapping of invasive plant species
Stohlgren, T.J.; Ma, P.; Kumar, S.; Rocca, M.; Morisette, J.T.; Jarnevich, C.S.; Benson, N.
2010-01-01
Ensemble species distribution models combine the strengths of several species-environment matching models, while minimizing the weakness of any one model. Ensemble models may be particularly useful in risk analysis of recently arrived, harmful invasive species because species may not yet have spread to all suitable habitats, leaving species-environment relationships difficult to determine. We tested five individual models (logistic regression, boosted regression trees, random forest, multivariate adaptive regression splines (MARS), and the maximum entropy model, Maxent) and ensemble modeling for selected nonnative plant species in Yellowstone and Grand Teton National Parks, Wyoming; Sequoia and Kings Canyon National Parks, California; and areas of interior Alaska. The models are based on field data provided by the park staffs, combined with topographic, climatic, and vegetation predictors derived from satellite data. For the four invasive plant species tested, ensemble models were the only models that ranked in the top three for both field validation and test data. Ensemble models may be more robust than individual species-environment matching models for risk analysis. © 2010 Society for Risk Analysis.
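A minimal ensemble of the kind described: fit several presence/absence classifiers and average their predicted suitability surfaces. The toy data and equal-weight averaging are assumptions; ensemble SDMs often weight members by validation skill instead:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Toy presence/absence data standing in for species occurrences vs. predictors
X, y = make_classification(n_samples=300, n_features=6, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=100, random_state=0),
          GradientBoostingClassifier(random_state=0)]

# Ensemble habitat suitability: average the per-model suitability surfaces
probs = np.mean([m.fit(X, y).predict_proba(X)[:, 1] for m in models], axis=0)
```

Averaging probabilities, rather than hard class votes, keeps the output interpretable as a continuous habitat suitability map.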
2015-07-15
Long-term effects on cancer survivors' quality of life of physical training versus physical training combined with cognitive-behavioral therapy. Comparison of Neural Network and Linear Regression Models in Statistically Predicting Mental and Physical Health Status of Breast Cancer Survivors.
Plant, soil, and shadow reflectance components of row crops
NASA Technical Reports Server (NTRS)
Richardson, A. J.; Wiegand, C. L.; Gausman, H. W.; Cuellar, J. A.; Gerbermann, A. H.
1975-01-01
Data from the first Earth Resource Technology Satellite (LANDSAT-1) multispectral scanner (MSS) were used to develop three plant canopy models (Kubelka-Munk (K-M), regression, and combined K-M and regression models) for extracting plant, soil, and shadow reflectance components of cropped fields. The combined model gave the best correlation between MSS data and ground truth, by accounting for essentially all of the reflectance of plants, soil, and shadow between crop rows. The principles presented can be used to better forecast crop yield and to estimate acreage.
Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N
2017-07-01
Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine the various methods of each aspect to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR).
The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration while wavelength selection plays a marginal role and the combination of certain pre-processing, wavelength selection, and nonlinear regression methods can achieve superior performance over traditional linear regression-based calibration.
Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K
2011-10-01
To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In the second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategy, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparison of the three techniques on 6 clinical and 3 high-dimensional datasets, discussing the relevance of these techniques over classical approaches for survival data. We compare svm-based survival models based on ranking constraints, based on regression constraints, and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints over models based only on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints.
Our experiments show a comparable performance for methods including only regression or both regression and ranking constraints on clinical data. On high dimensional data, the former model performs better. However, this approach does not have a theoretical link with standard statistical models for survival data. This link can be made by means of transformation models when ranking constraints are included. Copyright © 2011 Elsevier B.V. All rights reserved.
Deep ensemble learning of sparse regression models for brain disease diagnosis.
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2017-04-01
Recent studies on brain imaging analysis witnessed the core roles of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data but with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have been making great successes by outperforming the state-of-the-art performances in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with different values of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which thus we call 'Deep Ensemble Sparse Regression Network.' To our best knowledge, this is the first work that combines sparse regression models with deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared with the previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
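The two-stage architecture can be sketched as follows: sparse (Lasso) models over a grid of regularization strengths produce target-level representations, which a small network then combines. The MLP below is a toy stand-in for the paper's convolutional network, on invented data:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)

# Stage 1: sparse regression models over a grid of regularization strengths;
# each Lasso potentially selects a different feature subset.
alphas = [0.01, 0.05, 0.1, 0.5]
stage1 = np.column_stack([Lasso(alpha=a).fit(X, y).predict(X) for a in alphas])

# Stage 2: a network over the stacked target-level representations
# (a small MLP stand-in for the paper's deep convolutional network)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(stage1, y)
```

In practice the two stages would be evaluated with held-out data and nested cross-validation rather than fit and scored on the same samples as in this sketch.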
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that has already been tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
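A sketch of the metric itself, assuming an ordinary least-squares model: PRESS residuals come cheaply from the hat-matrix leverages, and the metric is the larger of the two standard deviations:

```python
import numpy as np

def search_metric(X_fit, y_fit, X_conf, y_conf):
    """Search metric from the abstract: the larger of (a) the standard deviation
    of the PRESS residuals on the fitting points and (b) the standard deviation
    of the response residuals at the confirmation points."""
    beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
    H = X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T     # hat matrix
    press = (y_fit - X_fit @ beta) / (1.0 - np.diag(H))       # PRESS residuals
    conf_resid = y_conf - X_conf @ beta
    return max(press.std(ddof=1), conf_resid.std(ddof=1))
```

Taking the maximum makes the metric conservative: a term combination only scores well if it both cross-validates well on the fitting points and generalizes to the confirmation points.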
Regional regression models of watershed suspended-sediment discharge for the eastern United States
NASA Astrophysics Data System (ADS)
Roman, David C.; Vogel, Richard M.; Schwarz, Gregory E.
2012-11-01
Estimates of mean annual watershed sediment discharge, derived from long-term measurements of suspended-sediment concentration and streamflow, often are not available at locations of interest. The goal of this study was to develop multivariate regression models to enable prediction of mean annual suspended-sediment discharge from basin characteristics available for most ungaged river locations in the eastern United States. The models are based on long-term mean sediment discharge estimates and explanatory variables obtained from a combined dataset of 1201 US Geological Survey (USGS) stations derived from a SPAtially Referenced Regression on Watershed attributes (SPARROW) study and the Geospatial Attributes of Gages for Evaluating Streamflow (GAGES) database. The resulting regional regression models, summarized for major US water resources regions 1-8, exhibited prediction R2 values ranging from 76.9% to 92.7% and corresponding average model prediction errors ranging from 56.5% to 124.3%. Results from cross-validation experiments suggest that a majority of the models will perform similarly to calibration runs. The 36-parameter regional regression models also outperformed a 16-parameter national SPARROW model of suspended-sediment discharge and indicate that mean annual sediment loads in the eastern United States generally correlate with a combination of basin area, land use patterns, seasonal precipitation, soil composition, hydrologic modification, and to a lesser extent, topography.
An Update on Statistical Boosting in Biomedicine.
Mayr, Andreas; Hofner, Benjamin; Waldmann, Elisabeth; Hepp, Tobias; Meyer, Sebastian; Gefeller, Olaf
2017-01-01
Statistical boosting algorithms have triggered a lot of research during the last decade. They combine a powerful machine learning approach with classical statistical modelling, offering various practical advantages like automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (target function to be optimized, defining the type of regression setting). In this review article, we highlight the most recent methodological developments on statistical boosting regarding variable selection, functional regression, and advanced time-to-event modelling. Additionally, we provide a short overview on relevant applications of statistical boosting in biomedicine.
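As a hedged illustration of the componentwise idea described above (a sketch, not any particular package's implementation), the following runs L2 boosting with simple per-predictor least-squares base-learners: at each step only the best-fitting predictor is updated, which produces the implicit variable selection and shrinkage the review mentions. It assumes no intercept is needed (centered data, or a response passing through the origin).

```python
def l2_boost(X, y, steps=100, nu=0.1):
    """Componentwise L2 boosting: at each step, fit every single predictor to
    the current residuals by simple least squares, update only the predictor
    that reduces the residual sum of squares most, and shrink the update by
    the learning rate nu. Assumes each predictor column is not all zeros."""
    p = len(X[0])
    coef = [0.0] * p
    resid = list(y)
    for _ in range(steps):
        best_j, best_b, best_loss = 0, 0.0, float("inf")
        for j in range(p):
            xj = [row[j] for row in X]
            sxx = sum(v * v for v in xj)
            b = sum(v * r for v, r in zip(xj, resid)) / sxx
            loss = sum((r - b * v) ** 2 for r, v in zip(resid, xj))
            if loss < best_loss:
                best_j, best_b, best_loss = j, b, loss
        coef[best_j] += nu * best_b            # shrunken update of the winner
        resid = [r - nu * best_b * row[best_j] for r, row in zip(resid, X)]
    return coef

# toy data: y depends only on the first predictor
coef = l2_boost([[1, 0.1], [2, -0.2], [3, 0.3], [4, -0.1]], [2, 4, 6, 8])
```

Predictors that never win a step keep a coefficient of exactly zero, which is the "automated variable selection" property.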
Mapping soil textural fractions across a large watershed in north-east Florida.
Lamsal, S; Mishra, U
2010-08-01
Assessment of regional-scale soil spatial variation and mapping of its distribution are constrained by sparse data, collected through field surveys that are labor intensive and cost prohibitive. We explored geostatistical (ordinary kriging, OK), regression (regression tree, RT), and hybrid methods (RT plus residual sequential Gaussian simulation, SGS) to map soil textural fractions across the Santa Fe River Watershed (3585 km(2)) in north-east Florida. Soil samples collected from four depths (L1: 0-30 cm, L2: 30-60 cm, L3: 60-120 cm, and L4: 120-180 cm) at 141 locations were analyzed for soil textural fractions (sand, silt, and clay contents) and combined with textural data (15 profiles) assembled under the Florida Soil Characterization program. Textural fractions in L1 and L2 were autocorrelated and were spatially mapped across the watershed. OK performance was poor, which may be attributed to the sparse sampling. RT model structure varied among textural fractions, and the variation explained by the models ranged from 25% for L1 silt to 61% for L2 clay content. Regression residuals were simulated using SGS, and the average of the simulated residuals was used to approximate the regression residual distribution map, which was added to the regression trend maps. Independent validation of the prediction maps showed that the regression models performed slightly better than OK, and that regression combined with the average of the simulated regression residuals improved predictions beyond the regression model alone. Sand content >90% in both the 0-30 and 30-60 cm layers covered 80.6% of the watershed area. Copyright 2010 Elsevier Ltd. All rights reserved.
An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.
ERIC Educational Resources Information Center
Bockenholt, Ulf
1999-01-01
Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining INAR(1) representation with a random effects approach.…
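A minimal simulation sketch may clarify the INAR(1) construction named above (parameter values are illustrative, not from the panel study): each count X_t is a binomial thinning of X_{t-1}, in which each previous count survives with probability alpha, plus a Poisson innovation, which induces the time-dependent correlation between counts.

```python
import random

def simulate_inar1(n, alpha, lam, seed=0):
    """Simulate an INAR(1) count series X_t = alpha o X_{t-1} + e_t, where
    'o' denotes binomial thinning (each of the X_{t-1} counts survives with
    probability alpha) and the innovations e_t are Poisson(lam)."""
    rng = random.Random(seed)

    def poisson(rate):
        # Knuth's product-of-uniforms Poisson sampler (fine for small rates)
        limit, k, p = 2.718281828459045 ** (-rate), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    x = poisson(lam)  # start the chain from an innovation draw
    series = [x]
    for _ in range(n - 1):
        survivors = sum(rng.random() < alpha for _ in range(x))  # thinning
        x = survivors + poisson(lam)
        series.append(x)
    return series

# stationary mean is lam / (1 - alpha) = 4 for these settings
counts = simulate_inar1(500, alpha=0.5, lam=2.0)
```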
Gurnani, Ashita S; John, Samantha E; Gavett, Brandon E
2015-05-01
The current study developed regression-based normative adjustments for a bi-factor model of the Brief Test of Adult Cognition by Telephone (BTACT). Archival data from the Midlife Development in the United States-II Cognitive Project were used to develop eight separate linear regression models that predicted bi-factor BTACT scores, accounting for age, education, gender, and occupation, alone and in various combinations. All regression models provided statistically significant fit to the data. A three-predictor regression model fit best and accounted for 32.8% of the variance in the global bi-factor BTACT score. The fit of the regression models was not improved by gender. Eight different regression models are presented to allow the user flexibility in applying demographic corrections to the bi-factor BTACT scores. Occupation corrections, while not widely used, may provide useful demographic adjustments for adult populations or for those individuals who have attained an occupational status not commensurate with expected educational attainment. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
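A hypothetical sketch of how regression-based normative adjustments of this kind are typically applied (the coefficient values below are invented, not the published BTACT norms): predict the expected score from the demographic predictors, then express the observed score as a standardized deviation from that expectation.

```python
def demographically_adjusted_z(observed, age, education, beta, rmse):
    """Regression-based norming: predict the expected test score from
    demographics via fitted coefficients beta = (intercept, b_age, b_edu),
    then return the observed score as a z deviation scaled by the model's
    residual error (rmse). All numbers here are illustrative."""
    intercept, b_age, b_edu = beta
    predicted = intercept + b_age * age + b_edu * education
    return (observed - predicted) / rmse

# a 60-year-old with 16 years of education scoring 59 against made-up norms
z = demographically_adjusted_z(59, age=60, education=16,
                               beta=(50.0, -0.2, 1.0), rmse=5.0)
```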
Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
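One simple screen for the near-linear dependencies between regression model terms mentioned above (a sketch only, not the specific linear-algebra metrics the paper recommends) is the largest absolute pairwise correlation among candidate term columns; a value near 1 flags a near-dependency between two terms.

```python
import math

def max_abs_correlation(columns):
    """Screen candidate regression terms for near-linear dependencies by
    computing the largest absolute pairwise correlation. This catches the
    common two-term case; multi-term dependencies need a stronger check
    such as variance inflation factors or singular value decomposition."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

    worst = 0.0
    for i in range(len(columns)):
        for j in range(i + 1, len(columns)):
            worst = max(worst, abs(corr(columns[i], columns[j])))
    return worst

# second column is an exact multiple of the first: perfect dependency
r = max_abs_correlation([[1, 2, 3], [2, 4, 6], [1, 0, 1]])
```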
Modeling Complex Phenomena Using Multiscale Time Sequences
2009-08-24
...different scales and how these scales relate to each other. This can be done by combining a set of statistical fractal measures based on Hurst and Holder exponents, auto-regressive methods, and Fourier and wavelet decomposition methods. The applications for this technology...
Kempe, P T; van Oppen, P; de Haan, E; Twisk, J W R; Sluis, A; Smit, J H; van Dyck, R; van Balkom, A J L M
2007-09-01
Two methods for predicting remissions in obsessive-compulsive disorder (OCD) treatment are evaluated. Y-BOCS measurements of 88 patients with a primary OCD (DSM-III-R) diagnosis were performed over a 16-week treatment period and during three follow-ups. Remission at any measurement was defined as a Y-BOCS score lower than thirteen combined with a reduction of seven points when compared with baseline. Logistic regression models were compared with a Cox regression for recurrent events model. Logistic regression yielded different models at different evaluation times. The recurrent events model remained stable when fewer measurements were used. Higher baseline levels of neuroticism and more severe OCD symptoms were associated with a lower chance of remission; early age of onset and more depressive symptoms with a higher chance. Choice of outcome time affects logistic regression prediction models. Recurrent events analysis uses all information on remissions and relapses. Short- and long-term predictors for OCD remission show overlap.
Hoos, Anne B.; Patel, Anant R.
1996-01-01
Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
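A minimal sketch of the model-adjustment idea in its simplest single-factor form (the USGS procedures are more elaborate and include regression-based adjustments): scale the regional model's prediction by the mean ratio of local observations to regional predictions at the calibration sites.

```python
def adjust_regional_prediction(regional_preds, local_obs, new_pred):
    """Single-factor model adjustment: estimate a multiplicative bias factor
    as the mean ratio of local observations to regional-model predictions,
    then apply that factor to a new regional prediction."""
    bias = sum(o / p for o, p in zip(local_obs, regional_preds)) / len(local_obs)
    return bias * new_pred

# regional model underpredicts local loads by a factor of two
adjusted = adjust_regional_prediction([10.0, 20.0], [20.0, 40.0], 5.0)
```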
Likhvantseva, V G; Sokolov, V A; Levanova, O N; Kovelenova, I V
2018-01-01
Prediction of the clinical course of primary open-angle glaucoma (POAG) is one of the main directions in solving the problem of vision loss prevention and stabilization of the pathological process. Simple statistical methods of correlation analysis show the extent of each risk factor's impact, but do not indicate the total impact of these factors in personalized combinations. The relationships between the risk factors are subject to correlation and regression analysis. The regression equation represents the dependence of the mathematical expectation of the resulting sign on the combination of factor signs. The aim was to develop a technique for predicting the probability of development and progression of primary open-angle glaucoma based on a personalized combination of risk factors using linear multivariate regression analysis. The study included 66 patients (23 female and 43 male; 132 eyes) with newly diagnosed primary open-angle glaucoma. The control group consisted of 14 patients (8 male and 6 female). Standard ophthalmic examination was supplemented with a biochemical study of the lacrimal fluid. Concentrations of matrix metalloproteinases MMP-2 and MMP-9 in the tear fluid of both eyes were determined using a 'sandwich' enzyme-linked immunosorbent assay (ELISA). The study resulted in the development of regression equations and step-by-step multivariate logistic models that can help calculate the risk of development and progression of POAG. Those models are based on expert evaluation of clinical and instrumental indicators of hydrodynamic disturbances (coefficient of outflow ease - C, volume of intraocular fluid secretion - F, fluctuation of intraocular pressure), as well as personalized morphometric parameters of the retina (central retinal thickness in the macular area) and the concentrations of MMP-2 and MMP-9 in the tear film.
The newly developed regression equations are highly informative and can be a reliable tool for studying the direction of influence and assessing the pathogenic potential of the independent risk factors in specific personalized combinations.
Forecasting Container Throughput at the Doraleh Port in Djibouti through Time Series Analysis
NASA Astrophysics Data System (ADS)
Mohamed Ismael, Hawa; Vandyck, George Kobina
The Doraleh Container Terminal (DCT) located in Djibouti has been noted as the most technologically advanced container terminal on the African continent. DCT's strategic location at the crossroads of the main shipping lanes connecting Asia, Africa, and Europe puts it in a unique position to provide important shipping services to vessels plying that route. This paper aims to forecast container throughput through the Doraleh Container Port in Djibouti by time series analysis. A selection of univariate forecasting models was used, namely the Triple Exponential Smoothing Model, the Grey Model, and the Linear Regression Model. By applying these three models and their combination, forecasts of container throughput through the Doraleh port were produced. A comparison of the forecasting results of the three models and of the combination forecast was then undertaken, based on the commonly used evaluation criteria Mean Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE). The study found that the Linear Regression Model was the best method for forecasting container throughput, since its forecast error was smallest. Based on the regression model, a ten (10) year forecast of container throughput at DCT was made.
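The two evaluation criteria used above are straightforward to compute; a minimal sketch (MAPE assumes all actual values are nonzero):

```python
def mad(actual, forecast):
    """Mean Absolute Deviation: average absolute forecast error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent; requires actual != 0."""
    return 100.0 * sum(abs(a - f) / a
                       for a, f in zip(actual, forecast)) / len(actual)

# two-period toy comparison of observed vs forecast throughput
err_mad = mad([100, 200], [110, 190])
err_mape = mape([100, 200], [110, 190])
```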
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2013 CFR
2013-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.
2003-01-01
Logistic regression was used to predict the probability of debris flows occurring in areas recently burned by wildland fires. Multiple logistic regression is conceptually similar to multiple linear regression because statistical relations between one dependent variable and several independent variables are evaluated. In logistic regression, however, the dependent variable is transformed to a binary variable (debris flow did or did not occur), and the actual probability of the debris flow occurring is statistically modeled. Data from 399 basins located within 15 wildland fires that burned during 2000-2002 in Colorado, Idaho, Montana, and New Mexico were evaluated. More than 35 independent variables describing the burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows were delineated from National Elevation Data using a Geographic Information System (GIS). (2) Data describing the burn severity, geology, land surface gradient, rainfall, and soil properties were determined for each basin. These data were then downloaded to a statistics software package for analysis using logistic regression. (3) Relations between the occurrence/non-occurrence of debris flows and burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated and several preliminary multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combination produced the most effective model. The multivariate model that best predicted the occurrence of debris flows was selected. (4) The multivariate logistic regression model was entered into a GIS, and a map showing the probability of debris flows was constructed. 
The most effective model incorporates the percentage of each basin with slope greater than 30 percent, percentage of land burned at medium and high burn severity in each basin, particle size sorting, average storm intensity (millimeters per hour), soil organic matter content, soil permeability, and soil drainage. The results of this study demonstrate that logistic regression is a valuable tool for predicting the probability of debris flows occurring in recently-burned landscapes.
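A sketch of how a fitted logistic model of this kind converts basin attributes into a debris-flow probability; the coefficients below are invented for illustration and are not the study's estimates.

```python
import math

def debris_flow_probability(slope_pct, burn_pct, intensity, coef):
    """Logistic-regression prediction: the probability is the inverse logit
    of a linear combination of basin attributes. coef holds hypothetical
    values (intercept, slope term, burn-severity term, rainfall term)."""
    b0, b_slope, b_burn, b_rain = coef
    eta = b0 + b_slope * slope_pct + b_burn * burn_pct + b_rain * intensity
    return 1.0 / (1.0 + math.exp(-eta))

# invented coefficients, for illustration only
coef = (-4.0, 0.03, 0.04, 0.1)
p = debris_flow_probability(slope_pct=50, burn_pct=60, intensity=10, coef=coef)
```

With a positive burn-severity coefficient, the predicted probability rises monotonically with the percentage of area burned at medium and high severity, matching the study's finding that burn severity was significant in all models.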
Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T
2013-12-11
The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to evaluate Alpine goats genetically and to estimate the parameters for test-day milk yield. We analyzed 20,710 test-day milk yield records of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models combined distinct fitting orders for the fixed (2-5), random additive genetic (1-7), and permanent environmental (1-7) curves with different numbers of residual variance classes (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. The best random regression model on Legendre orthogonal polynomials for genetic evaluation of test-day milk yield of Alpine goats considered a fixed curve of order 4, a curve of additive genetic effects of order 2, a curve of permanent environmental effects of order 7, and a minimum of 5 classes of residual variance, because it was the most economical model among those equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that late lactation has a stronger genetic component than the production peak and persistence. It is very important that the evaluation utilize the best combination of fixed, additive genetic, and permanent environmental regressions and number of classes of heterogeneous residual variance for genetic evaluation with random regression models, thereby enhancing the precision and accuracy of the parameter estimates and the prediction of genetic values.
Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert
2013-01-01
Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.
Bias and uncertainty of δ13CO2 isotopic mixing models
Zachary E. Kayler; Lisa Ganio; Mark Hauck; Thomas G. Pypker; Elizabeth W. Sulzman; Alan C. Mix; Barbara J. Bond
2009-01-01
The goal of this study was to evaluate how factorial combinations of two mixing models and two regression approaches (Keeling-OLS, Miller-Tans-OLS, Keeling-GMR, Miller-Tans-GMR) compare in small [CO2] range versus large [CO2] range regimes, with different combinations of...
OPC modeling by genetic algorithm
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.
2005-05-01
Optical proximity correction (OPC) is usually used to pre-distort mask layouts to make the printed patterns as close to the desired shapes as possible. For model-based OPC, a lithographic model to predict critical dimensions after lithographic processing is needed. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of the continuous (optical and resist models) and the discrete (kernel numbers) sets, the traditional numerical optimization method may have difficulty handling model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. Similarly, the numbers of optical kernels were treated as regression parameters. This way, good regression results were obtained with different sets of optical proximity effect data.
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G
2009-09-01
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
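The Legendre covariates used by such random regression test-day models can be built with the Bonnet three-term recurrence after rescaling days in milk to [-1, 1]; the sketch below assumes an illustrative 5-305 day lactation window and uses unnormalized polynomials (some implementations apply an additional normalization factor).

```python
def legendre_covariates(dim, order, dmin=5, dmax=305):
    """Build the Legendre-polynomial covariate vector for one test-day
    record: rescale days in milk (dim) to t in [-1, 1], then evaluate
    P_0..P_order via the Bonnet recurrence
    (k+1) P_{k+1}(t) = (2k+1) t P_k(t) - k P_{k-1}(t).
    Assumes order >= 1 and dmin < dim range <= dmax."""
    t = 2.0 * (dim - dmin) / (dmax - dmin) - 1.0
    polys = [1.0, t]                      # P_0 and P_1
    for k in range(1, order):
        polys.append(((2 * k + 1) * t * polys[k] - k * polys[k - 1]) / (k + 1))
    return polys[: order + 1]

# covariates at mid-lactation (t = 0): P_0 = 1, P_1 = 0, P_2 = -1/2
cov = legendre_covariates(155, order=2)
```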
NASA Astrophysics Data System (ADS)
Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.
2013-06-01
This study encompasses columnar ozone modelling in Peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the entire period (2003-2008), was employed to develop models to predict the value of columnar ozone (O3) in the study area. The combined method, based on multiple regression together with principal component analysis (PCA), was used to predict columnar ozone and to improve the prediction accuracy. Separate analyses were carried out for the north-east monsoon (NEM) and south-west monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, whereas it was positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM seasons. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings on varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model. It was found that an increase in columnar O3 is associated with an increase in AST, SSKT, AT, and CO and with a drop in CH4, H2Ovapour, RH, and MSP. Fitting the best models for columnar O3 using eight of the independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables appearing in both regression equations were SSKT, CH4, and RH, and the principal precursor of columnar O3 in both the NEM and SWM seasons was SSKT.
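A toy sketch of the principal-component-plus-regression idea in the two-predictor case (a simplification, not the study's eight-parameter model): for two standardized, positively correlated variables the first principal component is proportional to their sum, and the response is regressed on that component score.

```python
import math

def pcr_two_predictors(x1, x2, y):
    """Toy principal-component regression with two predictors: standardize
    both, form the first principal component score (proportional to the sum
    of the standardized variables when they are positively correlated), and
    fit a simple OLS regression of y on that score."""
    n = len(y)

    def standardize(v):
        m = sum(v) / n
        s = math.sqrt(sum((vi - m) ** 2 for vi in v) / n)
        return [(vi - m) / s for vi in v]

    z1, z2 = standardize(x1), standardize(x2)
    pc = [(a + b) / math.sqrt(2) for a, b in zip(z1, z2)]  # first PC score
    pbar = sum(pc) / n
    ybar = sum(y) / n
    spp = sum((p - pbar) ** 2 for p in pc)
    slope = sum((p - pbar) * (yi - ybar) for p, yi in zip(pc, y)) / spp
    intercept = ybar - slope * pbar
    return intercept, slope, pc

# perfectly collinear predictors collapse onto a single component
a, b, pc = pcr_two_predictors([1, 2, 3, 4], [1, 2, 3, 4], [2, 4, 6, 8])
```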
Combining Relevance Vector Machines and exponential regression for bearing residual life estimation
NASA Astrophysics Data System (ADS)
Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico
2012-08-01
In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. Respectively, we resort to (i) Relevance Vector Machines (RVMs) for selecting a small number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimates. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results via the Prognostic Horizon (PH) metric.
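A sketch of the exponential-regression half of such a procedure (the RVM selection step is omitted, and all numbers are illustrative): fit log(amplitude) linearly in time by least squares and extrapolate the fitted curve to a failure threshold.

```python
import math

def rul_exponential(times, amplitudes, threshold):
    """Fit log(amplitude) = a + b*t by ordinary least squares, solve for the
    time at which the fitted exponential crosses the failure threshold, and
    return that crossing time minus the last observation time. Assumes a
    growing degradation signal (fitted slope b > 0)."""
    n = len(times)
    logs = [math.log(v) for v in amplitudes]
    tbar = sum(times) / n
    lbar = sum(logs) / n
    stt = sum((t - tbar) ** 2 for t in times)
    b = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) / stt
    a = lbar - b * tbar
    t_fail = (math.log(threshold) - a) / b   # threshold crossing time
    return t_fail - times[-1]                # residual useful life

# degradation amplitude growing as exp(0.1 * t), failure threshold at e^1
times = [0, 1, 2, 3, 4, 5]
amps = [math.exp(0.1 * t) for t in times]
rul = rul_exponential(times, amps, math.exp(1.0))
```

Re-running the fit each time a new vibration measurement arrives gives the continuously updated residual life estimate the abstract describes.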
Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.
This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single pixel regression estimator--a regression model that estimates gas concentration on an individual pixel basis.
Categorical Variables in Multiple Regression: Some Cautions.
ERIC Educational Resources Information Center
O'Grady, Kevin E.; Medoff, Deborah R.
1988-01-01
Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)
Machine learning in updating predictive models of planning and scheduling transportation projects
DOT National Transportation Integrated Search
1997-01-01
A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...
NASA Astrophysics Data System (ADS)
Li, Tao
2018-06-01
The complexity of the aluminum electrolysis process makes the temperature of aluminum reduction cells hard to measure directly, yet temperature is the key control variable of aluminum production. To solve this problem, drawing on practice data from an aluminum plant, this paper presents a soft-sensing model of temperature for the aluminum electrolysis process based on Improved Twin Support Vector Regression (ITSVR). ITSVR eliminates the slow learning speed of Support Vector Regression (SVR) and the over-fitting risk of Twin Support Vector Regression (TSVR) by introducing a regularization term into the objective function of TSVR, which ensures the structural risk minimization principle and lower computational complexity. Finally, the model, using several other process parameters as auxiliary variables, predicts the temperature by ITSVR. The simulation results show that the ITSVR-based soft-sensing model is less time-consuming and generalizes better.
Forecasting space weather: Can new econometric methods improve accuracy?
NASA Astrophysics Data System (ADS)
Reikard, Gordon
2011-06-01
Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and net yield similar results. The neural net does best when it includes measures of the long-term component in the data.
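A minimal sketch of one common way to combine forecasts from several models, weighting each by the inverse of its historical error (the paper's combined models are more elaborate, blending frequency domain and other techniques):

```python
def combine_forecasts(forecasts, errors):
    """Combine point forecasts from several models by weighting each one
    inversely to its historical mean squared error; models with larger past
    errors contribute less to the combined forecast."""
    weights = [1.0 / e for e in errors]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# two models with equal track records average evenly; unequal records tilt
# the combination toward the historically more accurate model
combined = combine_forecasts([10.0, 20.0], [1.0, 3.0])
```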
Regression Model Optimization for the Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2009-01-01
A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
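The PRESS-based search metric can be illustrated on synthetic data: for linear models the PRESS residual of observation i equals e_i / (1 - h_ii), where h_ii is the leverage, so leave-one-out errors are available without refitting. The candidate models and data below are invented for illustration, not the actual NASA algorithm.

```python
# Sketch of a PRESS-based model search on hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 40)
y = 2.0 + 1.5 * x + 0.3 * x**2 + rng.normal(0, 0.05, 40)

def press_std(X, y):
    """Standard deviation of PRESS residuals for the linear model X @ beta."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix; h_ii are leverages
    press = e / (1.0 - np.diag(H))           # leave-one-out residuals
    return press.std(ddof=1)

# Candidate math models: different regression term combinations.
candidates = {
    "linear":    np.column_stack([np.ones_like(x), x]),
    "quadratic": np.column_stack([np.ones_like(x), x, x**2]),
}
scores = {name: press_std(X, y) for name, X in candidates.items()}
best = min(scores, key=scores.get)           # minimize the search metric
```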
Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.; Michael, John A.; Helsel, Dennis R.
2008-01-01
Logistic regression was used to develop statistical models that can be used to predict the probability of debris flows in areas recently burned by wildfires by using data from 14 wildfires that burned in southern California during 2003-2006. Twenty-eight independent variables describing the basin morphology, burn severity, rainfall, and soil properties of 306 drainage basins located within those burned areas were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows soon after the 2003 to 2006 fires were delineated from data in the National Elevation Dataset using a geographic information system; (2) Data describing the basin morphology, burn severity, rainfall, and soil properties were compiled for each basin. These data were then input to a statistics software package for analysis using logistic regression; and (3) Relations between the occurrence or absence of debris flows and the basin morphology, burn severity, rainfall, and soil properties were evaluated, and five multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combinations produced the most effective models, and the multivariate models that best predicted the occurrence of debris flows were identified. Percentage of high burn severity and 3-hour peak rainfall intensity were significant variables in all models. Soil organic matter content and soil clay content were significant variables in all models except Model 5. Soil slope was a significant variable in all models except Model 4. The most suitable model can be selected from these five models on the basis of the availability of independent variables in the particular area of interest and field checking of probability maps. 
The multivariate logistic regression models can be entered into a geographic information system, and maps showing the probability of debris flows can be constructed in recently burned areas of southern California. This study demonstrates that logistic regression is a valuable tool for developing models that predict the probability of debris flows occurring in recently burned landscapes.
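A hedged sketch of a multivariate logistic model of the kind described above, fit on synthetic basin data; the variable names and coefficients are illustrative stand-ins, not the study's dataset or fitted models.

```python
# Toy debris-flow probability model: logistic regression on simulated basins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
burn_severity = rng.uniform(0, 100, n)   # % of basin burned at high severity (assumed)
rain_3h = rng.uniform(0, 40, n)          # 3-hour peak rainfall intensity, mm/h (assumed)
clay = rng.uniform(5, 40, n)             # soil clay content, % (assumed)
logit = -6 + 0.05 * burn_severity + 0.12 * rain_3h + 0.02 * clay
debris_flow = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([burn_severity, rain_3h, clay])
model = LogisticRegression(max_iter=1000).fit(X, debris_flow)

# Probability-map idea: evaluate P(debris flow) for different basin conditions.
p_low  = model.predict_proba([[10, 5, 20]])[0, 1]
p_high = model.predict_proba([[90, 35, 20]])[0, 1]
```

In a GIS workflow, the fitted coefficients would be applied cell-by-cell to raster layers of the same predictors to draw the probability maps the abstract mentions.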
Prediction of dynamical systems by symbolic regression
NASA Astrophysics Data System (ADS)
Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.
2016-07-01
We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
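A minimal, hypothetical illustration of the "fast function extraction" idea named above: generalized linear regression over a library of candidate basis functions, here recovering the motion of a harmonic oscillator from noisy samples. The basis library is assumed for illustration, not the paper's.

```python
# Symbolic-regression-style fit: linear regression over candidate functions.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)
y = np.cos(2.0 * t) + rng.normal(0, 0.01, t.size)   # oscillator position samples

# Candidate function library (assumed).
library = {
    "1": np.ones_like(t),
    "t": t,
    "sin(2t)": np.sin(2.0 * t),
    "cos(2t)": np.cos(2.0 * t),
}
X = np.column_stack(list(library.values()))
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
model = dict(zip(library.keys(), coef))
dominant = max(model, key=lambda k: abs(model[k]))  # analytically tractable result
```

Genetic programming differs in that the library itself is evolved rather than fixed in advance.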
Sills, Deborah L; Gossett, James M
2012-04-01
Fourier transform infrared, attenuated total reflectance (FTIR-ATR) spectroscopy, combined with partial least squares (PLS) regression, accurately predicted solubilization of plant cell wall constituents and NaOH consumption through pretreatment, and overall sugar productions from combined pretreatment and enzymatic hydrolysis. PLS regression models were constructed by correlating FTIR spectra of six raw biomasses (two switchgrass cultivars, big bluestem grass, a low-impact, high-diversity mixture of prairie biomasses, mixed hardwood, and corn stover), plus alkali loading in pretreatment, to nine dependent variables: glucose, xylose, lignin, and total solids solubilized in pretreatment; NaOH consumed in pretreatment; and overall glucose and xylose conversions and yields from combined pretreatment and enzymatic hydrolysis. PLS models predicted the dependent variables with the following values of coefficient of determination for cross-validation (Q²): 0.86 for glucose, 0.90 for xylose, 0.79 for lignin, and 0.85 for total solids solubilized in pretreatment; 0.83 for alkali consumption; 0.93 for glucose conversion, 0.94 for xylose conversion, and 0.88 for glucose and xylose yields. The sugar yield models are noteworthy for their ability to predict overall saccharification through combined pretreatment and enzymatic hydrolysis per mass dry untreated solids without a priori knowledge of the composition of solids. All wavenumbers with significant variable-important-for-projection (VIP) scores have been attributed to chemical features of lignocellulose, demonstrating the models were based on real chemical information. These models suggest that PLS regression can be applied to FTIR-ATR spectra of raw biomasses to rapidly predict effects of pretreatment on solids and on subsequent enzymatic hydrolysis. Copyright © 2011 Wiley Periodicals, Inc.
Hemmila, April; McGill, Jim; Ritter, David
2008-03-01
To determine whether changes in fingerprint infrared spectra that are linear with age can be found, a partial least squares (PLS1) regression of 155 fingerprint infrared spectra against subject age was constructed. The regression produced a linear model of age as a function of spectrum with a root mean square error of calibration of less than 4 years, showing an inflection at about 25 years of age. The spectral ranges emphasized by the regression do not correspond to the highest-concentration constituents of the fingerprints. Separate linear regression models for old and young people can be constructed with even more statistical rigor. The success of the regression demonstrates that a combination of constituents can be found that changes linearly with age, with a significant shift around puberty.
Guenole, Nigel; Brown, Anna
2014-01-01
We report a Monte Carlo study examining the effects of two strategies for handling measurement non-invariance – modeling and ignoring non-invariant items – on structural regression coefficients between latent variables measured with item response theory models for categorical indicators. These strategies were examined across four levels and three types of non-invariance – non-invariant loadings, non-invariant thresholds, and combined non-invariance on loadings and thresholds – in simple, partial, mediated and moderated regression models where the non-invariant latent variable occupied predictor, mediator, and criterion positions in the structural regression models. When non-invariance is ignored in the latent predictor, the focal group regression parameters are biased in the opposite direction to the difference in loadings and thresholds relative to the referent group (i.e., lower loadings and thresholds for the focal group lead to overestimated regression parameters). With criterion non-invariance, the focal group regression parameters are biased in the same direction as the difference in loadings and thresholds relative to the referent group. While unacceptable levels of parameter bias were confined to the focal group, bias occurred at considerably lower levels of ignored non-invariance than was previously recognized in referent and focal groups. PMID:25278911
How Robust Is Linear Regression with Dummy Variables?
ERIC Educational Resources Information Center
Blankmeyer, Eric
2006-01-01
Researchers in education and the social sciences make extensive use of linear regression models in which the dependent variable is continuous-valued while the explanatory variables are a combination of continuous-valued regressors and dummy variables. The dummies partition the sample into groups, some of which may contain only a few observations.…
Detection of epistatic effects with logic regression and a classical linear regression model.
Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata
2014-02-01
To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes that cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, Cockerham's approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though the logic regression approach requires a larger number of models to be considered (and hence more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase in power to detect such interactions compared to Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.
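The core idea, scoring Boolean combinations of binary genotype indicators against a phenotype, can be shown with a toy exhaustive search; real logic regression grows and prunes logic trees rather than enumerating, and the loci and data below are simulated.

```python
# Toy logic-regression-style search over AND/OR pairs of binary loci.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n = 500
G = rng.integers(0, 2, size=(n, 4))          # binary indicators for 4 loci (simulated)
# True model: phenotype risk is raised when (L0 AND L2) is present.
p = np.where(G[:, 0] & G[:, 2], 0.8, 0.2)
pheno = rng.random(n) < p

def score(mask):
    """Association score: |difference in phenotype rate| between groups."""
    return abs(pheno[mask].mean() - pheno[~mask].mean())

best_term, best_score = None, -1.0
for i, j in itertools.combinations(range(4), 2):
    for name, mask in [(f"L{i} AND L{j}", (G[:, i] & G[:, j]).astype(bool)),
                       (f"L{i} OR L{j}",  (G[:, i] | G[:, j]).astype(bool))]:
        s = score(mask)
        if s > best_score:
            best_term, best_score = name, s
```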
NASA Astrophysics Data System (ADS)
Shao, G.; Gallion, J.; Fei, S.
2016-12-01
Sound forest aboveground biomass estimation is required to monitor diverse forest ecosystems and their impacts on the changing climate. Lidar-based regression models have provided promising biomass estimates in most forest ecosystems. However, considerable uncertainty in biomass estimates has been reported in temperate hardwood and hardwood-dominated mixed forests. Variation in site productivity in temperate hardwood forests diversifies height and diameter growth rates, which significantly reduces the correlation between tree height and diameter at breast height (DBH) in mature and complex forests. It is, therefore, difficult to use height-based lidar metrics to predict DBH-based field-measured biomass through a simple regression model regardless of the variation in site productivity. In this study, we established a multi-dimensional nonlinear regression model incorporating lidar metrics and site productivity classes derived from soil features. In the regression model, lidar metrics provided horizontal and vertical structural information, and productivity classes differentiated good and poor forest sites. The selection and combination of lidar metrics were discussed. Multiple regression models were employed and compared. Uncertainty analysis was applied to the best-fit model. The effects of site productivity on the lidar-based biomass model were addressed.
Nishii, Takashi; Genkawa, Takuma; Watari, Masahiro; Ozaki, Yukihiro
2012-01-01
A new selection procedure of an informative near-infrared (NIR) region for regression model building is proposed that uses an online NIR/mid-infrared (mid-IR) dual-region spectrometer in conjunction with two-dimensional (2D) NIR/mid-IR heterospectral correlation spectroscopy. In this procedure, both NIR and mid-IR spectra of a liquid sample are acquired sequentially during a reaction process using the NIR/mid-IR dual-region spectrometer; the 2D NIR/mid-IR heterospectral correlation spectrum is subsequently calculated from the obtained spectral data set. From the calculated 2D spectrum, a NIR region is selected that includes bands of high positive correlation intensity with mid-IR bands assigned to the analyte, and used for the construction of a regression model. To evaluate the performance of this procedure, a partial least-squares (PLS) regression model of the ethanol concentration in a fermentation process was constructed. During fermentation, NIR/mid-IR spectra in the 10000 - 1200 cm(-1) region were acquired every 3 min, and a 2D NIR/mid-IR heterospectral correlation spectrum was calculated to investigate the correlation intensity between the NIR and mid-IR bands. NIR regions that include bands at 4343, 4416, 5778, 5904, and 5955 cm(-1), which result from the combinations and overtones of the C-H group of ethanol, were selected for use in the PLS regression models, by taking the correlation intensity of a mid-IR band at 2985 cm(-1) arising from the CH(3) asymmetric stretching vibration mode of ethanol as a reference. The predicted results indicate that the ethanol concentrations calculated from the PLS regression models fit well to those obtained by high-performance liquid chromatography. Thus, it can be concluded that the selection procedure using the NIR/mid-IR dual-region spectrometer combined with 2D NIR/mid-IR heterospectral correlation spectroscopy is a powerful method for the construction of a reliable regression model.
Ahmadi, Mehdi; Shahlaei, Mohsen
2015-01-01
P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure-activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model suggested is robust and satisfactory.
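The principal component regression step mentioned above can be sketched with a pipeline; the descriptors and activities below are simulated with a latent-factor structure for illustration, not the paper's 49 purine derivatives, and the GA/ANN stages are omitted.

```python
# Minimal principal component regression (PCR) sketch on simulated descriptors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n, p = 49, 20
latent = rng.normal(size=(n, 2))               # two latent structural factors
loadings = rng.normal(size=(2, p))
X = latent @ loadings + 0.05 * rng.normal(size=(n, p))   # molecular descriptors
activity = latent[:, 0] - 0.5 * latent[:, 1] + rng.normal(0, 0.05, n)

# PCA reduces the descriptors; regression is fit on the leading components.
pcr = make_pipeline(PCA(n_components=7), LinearRegression())
pcr.fit(X, activity)
r2 = pcr.score(X, activity)
```

In the paper's workflow a GA selects among the components and an ANN replaces the final linear stage to capture non-linearity.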
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal parameter estimates range from about 140 times greater than to about 0.01 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters, and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
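The minimization of measured-minus-simulated residuals can be sketched with `scipy.optimize.least_squares` on a toy head-versus-distance model; the exponential form, data, and parameters are invented for illustration and are not the Floridan aquifer model.

```python
# Nonlinear least-squares calibration of a toy water-level model.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
distance = np.linspace(1, 50, 30)
true = (5.0, 0.08)                        # (head offset, decay rate), assumed
measured = true[0] * np.exp(-true[1] * distance) + rng.normal(0, 0.02, 30)

def residuals(params):
    """Difference between measured and simulated water levels."""
    a, k = params
    return measured - a * np.exp(-k * distance)

fit = least_squares(residuals, x0=[1.0, 0.01])   # minimize sum of squared residuals
a_hat, k_hat = fit.x
```

Parameter insensitivity and correlation, the problems flagged in the abstract, would show up here as a near-singular Jacobian (`fit.jac`) at the optimum.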
Kabeshova, A; Annweiler, C; Fantino, B; Philip, T; Gromov, V A; Launay, C P; Beauchet, O
2014-06-01
Regression tree (RT) analyses are particularly adapted to explore the risk of recurrent falling according to various combinations of fall risk factors compared to logistic regression models. The aims of this study were (1) to determine which combinations of fall risk factors were associated with the occurrence of recurrent falls in older community-dwellers, and (2) to compare the efficacy of RT and multiple logistic regression model for the identification of recurrent falls. A total of 1,760 community-dwelling volunteers (mean age ± standard deviation, 71.0 ± 5.1 years; 49.4 % female) were recruited prospectively in this cross-sectional study. Age, gender, polypharmacy, use of psychoactive drugs, fear of falling (FOF), cognitive disorders and sad mood were recorded. In addition, the history of falls within the past year was recorded using a standardized questionnaire. Among 1,760 participants, 19.7 % (n = 346) were recurrent fallers. The RT identified 14 node groups and 8 end nodes with FOF as the first major split. Among participants with FOF, those who had sad mood and polypharmacy formed the end node with the greatest OR for recurrent falls (OR = 6.06 with p < 0.001). Among participants without FOF, those who were male and not sad had the lowest OR for recurrent falls (OR = 0.25 with p < 0.001). The RT correctly classified 1,356 from 1,414 non-recurrent fallers (specificity = 95.6 %), and 65 from 346 recurrent fallers (sensitivity = 18.8 %). The overall classification accuracy was 81.0 %. The multiple logistic regression correctly classified 1,372 from 1,414 non-recurrent fallers (specificity = 97.0 %), and 61 from 346 recurrent fallers (sensitivity = 17.6 %). The overall classification accuracy was 81.4 %. Our results show that RT may identify specific combinations of risk factors for recurrent falls, the combination most associated with recurrent falls involving FOF, sad mood and polypharmacy.
The FOF emerged as the risk factor strongly associated with recurrent falls. In addition, RT and multiple logistic regression were not sensitive enough to identify the majority of recurrent fallers but appeared efficient in detecting individuals not at risk of recurrent falls.
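An illustrative tree of this kind can be fit with scikit-learn; the cohort below is simulated so that fear of falling (FOF) dominates the risk, which is an assumption made to mirror the reported first split, not the study's data.

```python
# Regression-tree sketch on a simulated fall-risk cohort.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 1760
fof = rng.integers(0, 2, n)              # fear of falling (simulated)
sad_mood = rng.integers(0, 2, n)
polypharmacy = rng.integers(0, 2, n)
# FOF carries the main effect; sad mood x polypharmacy adds an interaction.
risk = 0.10 + 0.35 * fof + 0.10 * sad_mood * polypharmacy
recurrent_faller = rng.random(n) < risk

X = np.column_stack([fof, sad_mood, polypharmacy])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, recurrent_faller)
first_split = ["FOF", "sad mood", "polypharmacy"][tree.tree_.feature[0]]
```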
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
Hao, Yong; Sun, Xu-Dong; Yang, Qiang
2012-12-01
A variable selection strategy combined with locally linear embedding (LLE) was introduced for the analysis of complex samples by near-infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE coupled with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used to model the complex samples. The results showed that MCUVE can both extract effective informative variables and improve model precision. Compared with PLSR models, LLE-PLSR models achieve more accurate results. MCUVE combined with LLE-PLSR is an effective modeling method for quantitative NIRS analysis.
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina
2014-03-01
Accurate assessment of a woman's risk to develop specific subtypes of breast cancer is critical for appropriate utilization of chemopreventative measures, such as with tamoxifen in preventing estrogen-receptor positive breast cancer. In this context, we investigate quantitative measures of breast density and parenchymal texture, measures of glandular tissue content and tissue structure, as risk factors for estrogen-receptor positive (ER+) breast cancer. Mediolateral oblique (MLO) view digital mammograms of the contralateral breast from 106 women with unilateral invasive breast cancer were retrospectively analyzed. Breast density and parenchymal texture were analyzed via fully-automated software. Logistic regression with feature selection was performed to predict ER+ versus ER- cancer status. A combined model considering all imaging measures extracted was compared to baseline models consisting of density-alone and texture-alone features. Area under the curve (AUC) of the receiver operating characteristic (ROC) and DeLong's test were used to compare the models' discriminatory capacity for receptor status. The density-alone model had a discriminatory capacity of 0.62 AUC (p=0.05). The texture-alone model had a higher discriminatory capacity of 0.70 AUC (p=0.001), which was not significantly different compared to the density-alone model (p=0.37). In contrast, the combined density-texture logistic regression model had a discriminatory capacity of 0.82 AUC (p<0.001), which was statistically significantly higher than both the density-alone (p<0.001) and texture-alone regression models (p=0.04). The combination of breast density and texture measures may have the potential to identify women specifically at risk for estrogen-receptor positive breast cancer and could be useful in triaging women into appropriate risk-reduction strategies.
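The single-feature-versus-combined-model AUC comparison can be sketched as follows; the "density" and "texture" features are synthetic stand-ins generated so that the combined model has more signal, not the study's mammography measures, and no DeLong test is performed here.

```python
# AUC comparison: single imaging feature vs. combined logistic model (simulated).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n = 106
density = rng.normal(size=n)     # synthetic breast-density feature
texture = rng.normal(size=n)     # synthetic parenchymal-texture feature
er_positive = rng.random(n) < 1 / (1 + np.exp(-(0.5 * density + 1.5 * texture)))

auc_density = roc_auc_score(er_positive, density)          # density-alone model
X = np.column_stack([density, texture])
combined = LogisticRegression().fit(X, er_positive)        # combined model
auc_combined = roc_auc_score(er_positive, combined.predict_proba(X)[:, 1])
```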
Aylward, Lesa L; Brunet, Robert C; Starr, Thomas B; Carrier, Gaétan; Delzell, Elizabeth; Cheng, Hong; Beall, Colleen
2005-08-01
Recent studies demonstrating a concentration dependence of elimination of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) suggest that previous estimates of exposure for occupationally exposed cohorts may have underestimated actual exposure, resulting in a potential overestimate of the carcinogenic potency of TCDD in humans based on the mortality data for these cohorts. Using a database on U.S. chemical manufacturing workers potentially exposed to TCDD compiled by the National Institute for Occupational Safety and Health (NIOSH), we evaluated the impact of using a concentration- and age-dependent elimination model (CADM) (Aylward et al., 2005) on estimates of serum lipid area under the curve (AUC) for the NIOSH cohort. These data were used previously by Steenland et al. (2001) in combination with a first-order elimination model with an 8.7-year half-life to estimate cumulative serum lipid concentration (equivalent to AUC) for these workers for use in cancer dose-response assessment. Serum lipid TCDD measurements taken in 1988 for a subset of the cohort were combined with the NIOSH job exposure matrix and work histories to estimate dose rates per unit of exposure score. We evaluated the effect of choices in regression model (regression on untransformed vs. ln-transformed data and inclusion of a nonzero regression intercept) as well as the impact of choices of elimination models and parameters on estimated AUCs for the cohort. Central estimates for dose rate parameters derived from the serum-sampled subcohort were applied with the elimination models to time-specific exposure scores for the entire cohort to generate AUC estimates for all cohort members. Use of the CADM resulted in improved model fits to the serum sampling data compared to the first-order models. Dose rates varied by a factor of 50 among different combinations of elimination model, parameter sets, and regression models. 
Use of a CADM results in increases of up to five-fold in AUC estimates for the more highly exposed members of the cohort compared to estimates obtained using the first-order model with 8.7-year half-life. This degree of variation in the AUC estimates for this cohort would affect substantially the cancer potency estimates derived from the mortality data from this cohort. Such variability and uncertainty in the reconstructed serum lipid AUC estimates for this cohort, depending on elimination model, parameter set, and regression model, have not been described previously and are critical components in evaluating the dose-response data from the occupationally exposed populations.
Adjusted variable plots for Cox's proportional hazards regression model.
Hall, C B; Zeger, S L; Bandeen-Roche, K J
1996-01-01
Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
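For ordinary linear regression the adjusted-variable construction is exact (the Frisch-Waugh-Lovell theorem): a through-the-origin fit of residual-on-residual coordinates recovers the multiple-regression coefficient. A minimal numeric check on simulated data, as a backdrop to the Cox extension above:

```python
# Added-variable (adjusted variable) construction in linear regression.
import numpy as np

rng = np.random.default_rng(10)
n = 120
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)           # correlated adjustment covariate
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(0, 0.1, n)

Z = np.column_stack([np.ones(n), x2])        # covariates adjusted for

def resid(v):
    """Residual of v after regression on the adjustment covariates Z."""
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ beta

ry, rx = resid(y), resid(x1)
slope = (rx @ ry) / (rx @ rx)                # through-the-origin fit of the plot

# Full multiple regression for comparison; slope should equal beta_full[1].
beta_full, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x1, x2]), y, rcond=None)
```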
Factor complexity of crash occurrence: An empirical demonstration using boosted regression trees.
Chung, Yi-Shih
2013-12-01
Factor complexity is a characteristic of traffic crashes. This paper proposes a novel method, namely boosted regression trees (BRT), to investigate the complex and nonlinear relationships in high-variance traffic crash data. The Taiwanese 2004-2005 single-vehicle motorcycle crash data are used to demonstrate the utility of BRT. Traditional logistic regression and classification and regression tree (CART) models are also used to compare their estimation results and external validities. Both the in-sample cross-validation and out-of-sample validation results show that an increase in tree complexity provides improved, although declining, classification performance, indicating a limited factor complexity of single-vehicle motorcycle crashes. The effects of crucial variables including geographical, time, and sociodemographic factors explain some fatal crashes. Relatively unique fatal crashes are better approximated by interactive terms, especially combinations of behavioral factors. BRT models generally provide better transferability than conventional logistic regression and CART models. This study also discusses the implications of the results for devising safety policies. Copyright © 2012 Elsevier Ltd. All rights reserved.
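The advantage of BRT on interactive terms can be shown on a deliberately extreme toy case: an XOR-style combination of two binary factors, which trees capture and a main-effects logistic regression cannot. The data are simulated, not the Taiwanese crash records.

```python
# Boosted regression trees vs. logistic regression on a pure interaction.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 2000
a = rng.integers(0, 2, n)          # binary behavioral factor A (simulated)
b = rng.integers(0, 2, n)          # binary behavioral factor B (simulated)
fatal = (a ^ b).astype(int)        # outcome driven purely by the interaction

X = np.column_stack([a, b])
brt = GradientBoostingClassifier(random_state=0).fit(X, fatal)
logit = LogisticRegression().fit(X, fatal)
acc_brt = brt.score(X, fatal)      # trees recover the XOR pattern
acc_logit = logit.score(X, fatal)  # main-effects model cannot
```

Out-of-sample validation, as used in the paper, would score a held-out split rather than the training data.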
NASA Astrophysics Data System (ADS)
Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine
2016-04-01
Scenarios of surface weather required for impact studies have to be unbiased and adapted to the space and time scales of the hydro-systems considered. Hence, surface weather scenarios obtained directly from global climate models and/or numerical weather prediction models are not really appropriate. Outputs of these models have to be post-processed, which is often carried out with Statistical Downscaling Methods (SDMs). Among these SDMs, approaches based on regression are often applied. For a given station, a regression link can be established between a set of large-scale atmospheric predictors and the surface weather variable. These links are then used for the prediction of the latter. However, the physical processes generating surface weather vary in time. This is well known for precipitation, for instance. The most relevant predictors and the regression link are also likely to vary in time. Better prediction skill is thus classically obtained with a seasonal stratification of the data. Another strategy is to identify the most relevant predictor set and establish the regression link from dates that are similar - or analog - to the target date. In practice, these dates can be selected with an analog model. In this study, we explore the possibility of improving the local performance of an analog model - where the analogy is applied to the geopotential heights at 1000 and 500 hPa - using additional local-scale predictors for the probabilistic prediction of Safran precipitation over France. For each prediction day, the prediction is obtained from two GLM regression models - for the occurrence and the quantity of precipitation, respectively - whose predictors and parameters are estimated from the analog dates. Firstly, the resulting combined model noticeably increases prediction performance by adapting the downscaling link for each prediction day.
Secondly, the selected predictors for a given prediction depend on the large scale situation and on the considered region. Finally, even with such an adaptive predictor identification, the downscaling link appears to be robust: for a same prediction day, predictors selected for different locations of a given region are similar and the regression parameters are consistent within the region of interest.
Linear models for calculating digestible energy for sheep diets.
Fonnesbeck, P V; Christiansen, M L; Harris, L E
1981-05-01
Equations for estimating the digestible energy (DE) content of sheep diets were generated from the chemical contents and a factorial description of diets fed to lambs in digestion trials. The diet factors were two forages (alfalfa and grass hay), harvested at three stages of maturity (late vegetative, early bloom and full bloom), fed in two ingredient combinations (all hay or a 50:50 hay and corn grain mixture) and prepared by two forage texture processes (coarsely chopped or finely chopped and pelleted). The 2 x 3 x 2 x 2 factorial arrangement produced 24 diet treatments. These were replicated twice, for a total of 48 lamb digestion trials. In model 1 regression equations, DE was calculated directly from the chemical composition of the diet. In model 2, regression equations predicted the percentage of each digested nutrient from the chemical contents of the diet, and DE of the diet was then calculated as the sum of the gross energy of the digested organic components. Expanded forms of model 1 and model 2 were also developed that included diet factors as qualitative indicator variables to adjust the regression constant and regression coefficients for the diet description. The expanded forms of the equations accounted for significantly more variation in DE than did the simple models and more accurately estimated DE of the diet. Information provided by the diet description proved as useful as chemical analyses for the prediction of digestibility of nutrients. The statistics indicate that, with model 1, neutral detergent fiber and plant cell wall analyses provided as much information for the estimation of DE as did model 2 with the combined information from crude protein, available carbohydrate, total lipid, cellulose and hemicellulose. Regression equations are presented for estimating DE with the most commonly analyzed organic components, including linear and curvilinear variables and diet factors that significantly reduce the standard error of the estimate. To estimate DE of a diet, the user applies whichever equation makes the most effective use of the available chemical analyses and diet description.
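The "model 2" logic above (predict the digested fraction of each component, then sum the gross energy of the digested amounts) can be illustrated with a short sketch. The gross-energy constants and digestibility values below are rough, hypothetical placeholders for illustration, not the paper's fitted coefficients.

```python
# Sketch of the model 2 accounting: DE = sum over components of
# (content x digested fraction x gross energy). All numbers are illustrative.

GROSS_ENERGY = {"crude_protein": 5.6,   # Mcal/kg, approximate textbook-style values
                "total_lipid": 9.4,
                "carbohydrate": 4.2}

def digestible_energy(content, digestibility):
    """content: kg of each component per kg diet; digestibility: digested fraction."""
    return sum(content[c] * digestibility[c] * GROSS_ENERGY[c] for c in content)

diet = {"crude_protein": 0.16, "total_lipid": 0.03, "carbohydrate": 0.65}
digested = {"crude_protein": 0.70, "total_lipid": 0.60, "carbohydrate": 0.75}
de = digestible_energy(diet, digested)   # Mcal DE per kg diet
```

In the paper's model 2 the digested fractions themselves come from regression equations on the diet's chemical contents; here they are fixed to keep the arithmetic visible.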
NASA Astrophysics Data System (ADS)
Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd; Baharum, Adam
2015-10-01
Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variable is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significance test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratios. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles, and diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also the interaction between breakfast intake in a week and sleep duration, and the interaction between gender and protein intake.
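The q-1 logit structure described above can be made concrete: with q = 3 response categories (say, normal / overweight / obese), two logit equations are fitted against a reference category, and category probabilities come from normalizing the exponentiated scores. The coefficients below are invented for illustration.

```python
import math

# Sketch of the q-1 logit structure of multinomial logistic regression.

def multinomial_probs(x, coefs):
    """coefs: one (intercept, slopes) pair per non-reference category."""
    scores = [0.0]                       # reference category has score 0
    scores += [b0 + sum(b * xi for b, xi in zip(bs, x)) for b0, bs in coefs]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]     # probabilities over all q categories

# two logit equations for a 3-category response, one predictor (hypothetical)
coefs = [(-1.0, [0.8]), (-2.0, [1.1])]
probs = multinomial_probs([1.5], coefs)
```

Each non-reference coefficient pair is one of the q-1 simultaneously fitted logit equations; the probabilities always sum to one by construction.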
NASA Astrophysics Data System (ADS)
Kamaruddin, Ainur Amira; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Ahmad, Wan Muhamad Amir W.
2014-07-01
Logistic regression analysis examines the influence of various factors on a dichotomous outcome by estimating the probability of the event's occurrence. Logistic regression, also called a logit model, is a statistical procedure used to model dichotomous outcomes. In the logit model, the log odds of the dichotomous outcome is modeled as a linear combination of the predictor variables. The log odds ratio in logistic regression provides a description of the probabilistic relationship between the variables and the outcome. In conducting logistic regression, selection procedures are used to select important predictor variables; diagnostics are used to check that assumptions are valid (independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers); and a test statistic is calculated to determine the aptness of the model. This study used the binary logistic regression model to investigate overweight and obesity among rural secondary school students on the basis of their demographic profile, medical history, diet and lifestyle. The results indicate that overweight and obesity of students are influenced by obesity in the family and the interaction between a student's ethnicity and routine meal intake. The odds of a student being overweight or obese are higher for a student having a family history of obesity and for a non-Malay student who frequently takes routine meals as compared to a Malay student.
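The log-odds-as-linear-combination idea above is compact enough to sketch: exponentiating a coefficient gives the odds ratio for a one-unit change in that predictor. The intercept and coefficient below are hypothetical, not the study's estimates.

```python
import math

# Sketch of the logit model: log odds = intercept + b1*x1 + ...,
# so exp(b) is the odds ratio for a one-unit change in x.

def logit_prob(intercept, coefs, x):
    """P(outcome = 1) under a logistic regression model."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

b_family = 0.9                                  # hypothetical coefficient
odds_ratio = math.exp(b_family)                 # odds ratio per unit change

p1 = logit_prob(-2.0, [b_family], [1])          # e.g. family history of obesity
p0 = logit_prob(-2.0, [b_family], [0])          # no family history
```

The odds ratio recovered from the two probabilities, (p1/(1-p1)) / (p0/(1-p0)), equals exp(b_family) exactly, which is why logistic coefficients are reported on the odds-ratio scale.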
Yu, Lijing; Zhou, Lingling; Tan, Li; Jiang, Hongbo; Wang, Ying; Wei, Sheng; Nie, Shaofa
2014-01-01
Outbreaks of hand-foot-mouth disease (HFMD) have been reported many times in Asia during the last decades, and this emerging disease has drawn worldwide attention and vigilance. Nowadays, the prevention and control of HFMD has become an imperative issue in China, and modern information technology makes early detection and response possible before outbreaks occur. In this paper, a hybrid model combining a seasonal auto-regressive integrated moving average (ARIMA) model and a nonlinear auto-regressive neural network (NARNN) is proposed to predict the expected incidence cases from December 2012 to May 2013, using retrospective observations obtained from the China Information System for Disease Control and Prevention from January 2008 to November 2012. The best-fitted hybrid model combined a seasonal ARIMA [Formula: see text] model with a NARNN with 15 hidden units and 5 delays. The hybrid model achieves good forecasting performance; the estimated expected incidence cases from December 2012 to May 2013 are -965.03, -1879.58, 4138.26, 1858.17, 4061.86 and 6163.16, respectively, with an obviously increasing trend. The model proposed in this paper can predict the incidence trend of HFMD effectively, which could be helpful to policy makers. Expected HFMD case counts are useful not only for detecting outbreaks and providing probability statements, but also for providing decision makers with a probable trend of the variability of future observations that incorporates both historical and recent information.
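The hybrid structure above decomposes a series into a linear part plus a nonlinear correction fitted to the residuals, with the forecast being the sum of both. As a deliberately tiny stand-in, the sketch below uses AR(1) in place of seasonal ARIMA and a 1-nearest-neighbour residual map in place of the NARNN; the series is toy data, not HFMD counts.

```python
# Structural sketch of a linear + nonlinear-residual hybrid forecaster.

def fit_ar1(series):
    """Least-squares AR(1) coefficient (no intercept)."""
    num = sum(a * b for a, b in zip(series[:-1], series[1:]))
    den = sum(a * a for a in series[:-1])
    return num / den

def hybrid_forecast(series):
    phi = fit_ar1(series)
    # residuals of the linear model: what the nonlinear part must explain
    residuals = [y - phi * x for x, y in zip(series[:-1], series[1:])]
    # nonlinear part: successor of the past residual closest to the last one
    last = residuals[-1]
    i = min(range(len(residuals) - 1), key=lambda k: abs(residuals[k] - last))
    return phi * series[-1] + residuals[i + 1]

series = [10, 12, 11, 13, 12, 14, 13, 15]   # toy monthly counts
forecast = hybrid_forecast(series)
```

The point of the decomposition is that the linear model need not capture everything: whatever systematic structure it misses shows up in the residuals, where the nonlinear component can pick it up.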
NASA Astrophysics Data System (ADS)
Fernández-Manso, O.; Fernández-Manso, A.; Quintano, C.
2014-09-01
Aboveground biomass (AGB) estimation from optical satellite data is usually based on regression models of original or synthetic bands. To overcome the poor relation between AGB and spectral bands due to mixed pixels when a medium spatial resolution sensor is considered, we propose to base the AGB estimation on fraction images from Linear Spectral Mixture Analysis (LSMA). Our study area is a managed Mediterranean pine woodland (Pinus pinaster Ait.) in central Spain. A total of 1033 circular field plots were used to estimate AGB from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) optical data. We applied Pearson correlation statistics and stepwise multiple regression to identify suitable predictors from the set of variables of original bands, fraction imagery, Normalized Difference Vegetation Index and Tasselled Cap components. Four linear models and one nonlinear model were tested. A linear combination of ASTER band 2 (red, 0.630-0.690 μm), band 8 (short wave infrared 5, 2.295-2.365 μm) and the green vegetation fraction (from LSMA) was the best AGB predictor (Radj2 = 0.632; cross-validated root-mean-square error of estimated AGB of 13.3 Mg ha-1, or 37.7%), outperforming other combinations of the above-cited independent variables. Results indicated that using ASTER fraction images in regression models improves AGB estimation in Mediterranean pine forests. The spatial distribution of the estimated AGB, based on a multiple linear regression model, may be used as baseline information for forest managers in future studies, such as quantifying the regional carbon budget, fuel accumulation or monitoring of management practices.
NASA Astrophysics Data System (ADS)
Hadley, Brian Christopher
This dissertation assessed remotely sensed data and geospatial modeling techniques to map the spatial distribution of total above-ground biomass present on the surface of the Savannah River National Laboratory's (SRNL) Mixed Waste Management Facility (MWMF) hazardous waste landfill. Ordinary least squares (OLS) regression, regression kriging, and tree-structured regression were employed to model the empirical relationship between in-situ measured Bahia (Paspalum notatum Flugge) and Centipede [Eremochloa ophiuroides (Munro) Hack.] grass biomass and an assortment of explanatory variables extracted from fine spatial resolution passive optical and LIDAR remotely sensed data. Explanatory variables included: (1) discrete channels of visible, near-infrared (NIR), and short-wave infrared (SWIR) reflectance, (2) spectral vegetation indices (SVI), (3) spectral mixture analysis (SMA) modeled fractions, (4) narrow-band derivative-based vegetation indices, and (5) LIDAR-derived topographic variables (i.e., elevation, slope, and aspect). Results showed that a linear combination of the first- (1DZ_DGVI), second- (2DZ_DGVI), and third-derivative green vegetation indices (3DZ_DGVI) calculated from hyperspectral data recorded over the 400-960 nm wavelengths of the electromagnetic spectrum explained the largest percentage of statistical variation (R2 = 0.5184) in the total above-ground biomass measurements. In general, the topographic variables did not correlate well with the MWMF biomass data, accounting for less than five percent of the statistical variation. It was concluded that tree-structured regression represented the optimum geospatial modeling technique due to a combination of model performance and efficiency/flexibility factors.
Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A
2015-01-15
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
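The single-split ensemble learner (EL) described above can be sketched generically: fit candidate models on a training partition, then choose how to combine them by their loss on a held-out validation partition. The two "candidate algorithms" below are trivial stand-ins (a mean model and a no-intercept slope model) on invented data; a real EL for stabilized weights would use a library of diverse learners and a log-likelihood loss.

```python
# Sketch of a single train/validation-split ensemble: convex combination of
# candidate models chosen by validation squared error.

def mean_model(train_y):
    m = sum(train_y) / len(train_y)
    return lambda x: m

def slope_model(train_x, train_y):
    b = sum(x * y for x, y in zip(train_x, train_y)) / sum(x * x for x in train_x)
    return lambda x: b * x

def ensemble(train, valid, grid=21):
    tx, ty = zip(*train)
    vx, vy = zip(*valid)
    models = [mean_model(ty), slope_model(tx, ty)]
    best_w, best_loss = 0.0, None
    for i in range(grid):                      # search convex weights w, 1-w
        w = i / (grid - 1)
        loss = sum((y - (w * models[0](x) + (1 - w) * models[1](x))) ** 2
                   for x, y in zip(vx, vy))
        if best_loss is None or loss < best_loss:
            best_w, best_loss = w, loss
    return lambda x: best_w * models[0](x) + (1 - best_w) * models[1](x)

train = [(1, 2.1), (2, 3.9), (3, 6.2)]
valid = [(4, 8.1), (5, 9.8)]
f = ensemble(train, valid)
```

Because the combination weight is chosen on data the candidates never saw, a badly misspecified candidate is simply down-weighted, which is the mechanism behind the bias reduction reported above.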
Code of Federal Regulations, 2010 CFR
2010-01-01
... systems, building load simulation models, statistical regression analysis, or some combination of these..., excluding any cogeneration process for other than a federally owned building or buildings or other federally...
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model, and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.
Bayesian Analysis of Nonlinear Structural Equation Models with Nonignorable Missing Data
ERIC Educational Resources Information Center
Lee, Sik-Yum
2006-01-01
A Bayesian approach is developed for analyzing nonlinear structural equation models with nonignorable missing data. The nonignorable missingness mechanism is specified by a logistic regression model. A hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm is used to produce the joint Bayesian estimates of…
Evaluation and application of regional turbidity-sediment regression models in Virginia
Hyer, Kenneth; Jastram, John D.; Moyer, Douglas; Webber, James S.; Chanat, Jeffrey G.
2015-01-01
Conventional thinking has long held that turbidity-sediment surrogate-regression equations are site specific and that regression equations developed at a single monitoring station should not be applied to another station; however, few studies have evaluated this issue in a rigorous manner. If robust regional turbidity-sediment models can be developed successfully, their applications could greatly expand the usage of these methods. Suspended sediment load estimation could occur as soon as flow and turbidity monitoring commence at a site, suspended sediment sampling frequencies for various projects potentially could be reduced, and special-project applications (sediment monitoring following dam removal, for example) could be significantly enhanced. The objective of this effort was to investigate the turbidity-suspended sediment concentration (SSC) relations at all available USGS monitoring sites within Virginia to determine whether meaningful turbidity-sediment regression models can be developed by combining the data from multiple monitoring stations into a single model, known as a “regional” model. Following the development of the regional model, additional objectives included a comparison of predicted SSCs between the regional model and commonly used site-specific models, as well as an evaluation of why specific monitoring stations did not fit the regional model.
Conditional Density Estimation with HMM Based Support Vector Machines
NASA Astrophysics Data System (ADS)
Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang
Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry an implicit assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but to classification as well. We applied this model to denoising ECG data. The proposed method has the potential to be applied to other time series, such as stock market return predictions.
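The state-mixture idea above is easy to make concrete: each HMM state contributes a Gaussian component whose mean would come from that state's regression model, and the conditional density is their weighted sum. The weights, means, and spreads below are invented placeholders (in the paper the means come from per-state SVM regressions and the weights from the HMM state posteriors).

```python
import math

# Sketch: conditional density as a state-weighted mixture of Gaussians.

def gaussian_pdf(y, mu, sigma):
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def conditional_density(y, state_probs, state_means, state_sigmas):
    """p(y|x) = sum over states of P(state|x) * N(y; mean(state, x), sigma(state))."""
    return sum(w * gaussian_pdf(y, m, s)
               for w, m, s in zip(state_probs, state_means, state_sigmas))

weights = [0.7, 0.3]      # hypothetical state posteriors given the input
means = [1.0, 4.0]        # hypothetical per-state regression predictions
sigmas = [0.5, 1.0]
density = conditional_density(2.0, weights, means, sigmas)
```

Unlike a single Gaussian, this mixture can be bimodal, which is exactly the flexibility the abstract argues is missing from standard regression models.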
Improved model of the retardance in citric acid coated ferrofluids using stepwise regression
NASA Astrophysics Data System (ADS)
Lin, J. F.; Qiu, X. R.
2017-06-01
Citric acid (CA) coated Fe3O4 ferrofluids (FFs) have been developed for biomedical applications. The magneto-optical retardance of CA coated FFs was measured by a Stokes polarimeter. Optimization and multiple regression of retardance in FFs were previously executed by the Taguchi method and Microsoft Excel, and the F value of the regression model was large enough; however, the model executed in Excel was not systematic. Instead, we adopted stepwise regression to model the retardance of CA coated FFs. From the results of stepwise regression in MATLAB, the developed model had high predictive ability, with an F value of 2.55897e+7 and a correlation coefficient of one. The average absolute error of predicted retardances relative to measured retardances was just 0.0044%. Using the genetic algorithm (GA) in MATLAB, the optimized parametric combination was determined as [4.709 0.12 39.998 70.006], corresponding to the pH of suspension, the molar ratio of CA to Fe3O4, the CA volume, and the coating temperature. The maximum retardance was found to be 31.712°, close to that obtained by the evolutionary solver in Excel, with a relative error of -0.013%. In summary, the stepwise regression method was successfully used to model the retardance of CA coated FFs, and the maximum global retardance was determined by use of the GA.
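The core of a stepwise procedure is a greedy loop over candidate predictors. The sketch below is a forward stagewise variant on synthetic data: at each step it regresses the current residual on each unused predictor and keeps the one that reduces squared error most. Real stepwise routines (e.g. in MATLAB) add entry/exit F-tests and refit all selected terms jointly; none of the data relate to the ferrofluid study.

```python
# Forward stepwise (stagewise) selection sketch on synthetic data.

def simple_fit(x, r):
    """No-intercept least-squares slope of r on x."""
    return sum(xi * ri for xi, ri in zip(x, r)) / sum(xi * xi for xi in x)

def stepwise(X, y, n_steps=2):
    residual = list(y)
    chosen = []
    for _ in range(n_steps):
        best = None
        for j in range(len(X[0])):
            if j in chosen:
                continue
            col = [row[j] for row in X]
            b = simple_fit(col, residual)
            sse = sum((ri - b * xi) ** 2 for xi, ri in zip(col, residual))
            if best is None or sse < best[0]:
                best = (sse, j, b)
        _, j, b = best
        chosen.append(j)
        col = [row[j] for row in X]
        residual = [ri - b * xi for xi, ri in zip(col, residual)]
    return chosen, residual

# y depends on columns 0 and 2; column 1 is pure noise
X = [[1, 5, 2], [2, 3, 1], [3, 8, 4], [4, 1, 3], [5, 6, 5]]
y = [2 * a + 3 * c for a, _, c in X]
chosen, residual = stepwise(X, y)
```

The greedy ordering is why stepwise models are compact: predictors enter only while they still buy a meaningful error reduction.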
Probabilistic Forecasting of Surface Ozone with a Novel Statistical Approach
NASA Technical Reports Server (NTRS)
Balashov, Nikolay V.; Thompson, Anne M.; Young, George S.
2017-01-01
The recent change in the Environmental Protection Agency's surface ozone regulation, lowering the surface ozone daily maximum 8-h average (MDA8) exceedance threshold from 75 to 70 ppbv, poses significant challenges to U.S. air quality (AQ) forecasters responsible for ozone MDA8 forecasts. The forecasters, supplied with only a few AQ model products, end up relying heavily on self-developed tools. To help U.S. AQ forecasters, this study explores a surface ozone MDA8 forecasting tool based solely on statistical methods and standard meteorological variables from numerical weather prediction (NWP) models. The model combines the self-organizing map (SOM), a clustering technique, with a stepwise weighted quadratic regression using meteorological variables as predictors for ozone MDA8. The SOM method identifies different weather regimes, to distinguish between various modes of ozone variability, and groups them according to similarity. In this way, when a regression is developed for a specific regime, data from the other regimes are also used, with weights based on their similarity to this specific regime. This approach, regression in SOM (REGiS), yields a distinct model for each regime, taking into account both the training cases for that regime and other similar training cases. To produce probabilistic MDA8 ozone forecasts, REGiS weighs and combines all of the developed regression models on the basis of the weather patterns predicted by an NWP model. REGiS is evaluated over the San Joaquin Valley in California and the northeastern plains of Colorado. The results suggest that the model performs best when trained and adjusted separately for an individual AQ station and its corresponding meteorological site.
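The regime-weighting idea in REGiS reduces, at its core, to weighted least squares: training cases from similar regimes enter the fit with larger weights. A one-predictor sketch with made-up similarity weights and toy data (not the study's ozone observations):

```python
# Weighted least squares with regime-similarity weights (one predictor).

def weighted_least_squares(x, y, w):
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    a = my - b * mx
    return a, b

temps = [20, 25, 30, 22, 28]          # predictor, e.g. daily max temperature
ozone = [45, 60, 78, 50, 70]          # hypothetical MDA8 values
weights = [1.0, 1.0, 1.0, 0.4, 0.4]   # last two cases are from a less similar regime
a, b = weighted_least_squares(temps, ozone, weights)
```

Down-weighting rather than excluding dissimilar regimes is the design choice that lets each regime's regression borrow strength from its neighbours without being dominated by them.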
Lim, Jongguk; Kim, Giyoung; Mo, Changyeun; Kim, Moon S; Chao, Kuanglin; Qin, Jianwei; Fu, Xiaping; Baek, Insuck; Cho, Byoung-Kwan
2016-05-01
Illegal use of nitrogen-rich melamine (C3H6N6) to boost the perceived protein content of food products such as milk, infant formula, frozen yogurt, pet food, biscuits, and coffee drinks has caused serious food safety problems. Conventional methods to detect melamine in foods, such as enzyme-linked immunosorbent assay (ELISA), high-performance liquid chromatography (HPLC), and gas chromatography-mass spectrometry (GC-MS), are sensitive but time-consuming, expensive, and labor-intensive. In this research, a near-infrared (NIR) hyperspectral imaging technique combined with the regression coefficients of a partial least squares regression (PLSR) model was used to detect melamine particles in milk powders easily and quickly. NIR hyperspectral reflectance imaging data in the spectral range of 990-1700 nm were acquired from melamine-milk powder mixture samples prepared at various concentrations ranging from 0.02% to 1%. PLSR models were developed to correlate the spectral data (independent variables) with melamine concentration (dependent variable) in the mixture samples. PLSR models applying various pretreatment methods were used to reconstruct two-dimensional PLS images. The PLS images were converted to binary images to detect suspected melamine pixels in milk powder. As the melamine concentration increased, the number of suspected melamine pixels in the binary images also increased. These results suggest that the NIR hyperspectral imaging technique and the PLSR model can be regarded as an effective tool to detect melamine particles in milk powders. Copyright © 2016 Elsevier B.V. All rights reserved.
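The last step of the pipeline above, turning a predicted-concentration image into a binary map and counting suspected pixels, is simple to sketch. The 4x4 "PLS prediction image" below is fabricated for illustration; real PLS images would be full hyperspectral-resolution arrays.

```python
# Thresholding a PLS prediction image into a binary melamine-pixel map.

def binarize(image, threshold):
    return [[1 if v > threshold else 0 for v in row] for row in image]

def count_pixels(binary):
    return sum(sum(row) for row in binary)

pls_image = [[0.01, 0.02, 0.40, 0.03],     # hypothetical predicted concentrations
             [0.02, 0.55, 0.62, 0.02],
             [0.01, 0.48, 0.03, 0.01],
             [0.02, 0.01, 0.02, 0.03]]
suspected = count_pixels(binarize(pls_image, 0.30))
```

The count of suspected pixels, rising with true melamine concentration, is the quantity the abstract reports as the detection signal.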
Flexible Meta-Regression to Assess the Shape of the Benzene–Leukemia Exposure–Response Curve
Vlaanderen, Jelle; Portengen, Lützen; Rothman, Nathaniel; Lan, Qing; Kromhout, Hans; Vermeulen, Roel
2010-01-01
Background Previous evaluations of the shape of the benzene–leukemia exposure–response curve (ERC) were based on a single set or on small sets of human occupational studies. Integrating evidence from all available studies that are of sufficient quality combined with flexible meta-regression models is likely to provide better insight into the functional relation between benzene exposure and risk of leukemia. Objectives We used natural splines in a flexible meta-regression method to assess the shape of the benzene–leukemia ERC. Methods We fitted meta-regression models to 30 aggregated risk estimates extracted from nine human observational studies and performed sensitivity analyses to assess the impact of a priori assessed study characteristics on the predicted ERC. Results The natural spline showed a supralinear shape at cumulative exposures less than 100 ppm-years, although this model fitted the data only marginally better than a linear model (p = 0.06). Stratification based on study design and jackknifing indicated that the cohort studies had a considerable impact on the shape of the ERC at high exposure levels (> 100 ppm-years) but that predicted risks for the low exposure range (< 50 ppm-years) were robust. Conclusions Although limited by the small number of studies and the large heterogeneity between studies, the inclusion of all studies of sufficient quality combined with a flexible meta-regression method provides the most comprehensive evaluation of the benzene–leukemia ERC to date. The natural spline based on all data indicates a significantly increased risk of leukemia [relative risk (RR) = 1.14; 95% confidence interval (CI), 1.04–1.26] at an exposure level as low as 10 ppm-years. PMID:20064779
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were also tested for sensitivity to the size of the calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data set shrank, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
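The simplest MAP described above, single-factor regression against the regional prediction, amounts to fitting a line through (regional prediction, local observation) calibration pairs and using that line at unmonitored sites. The calibration numbers below are invented for illustration.

```python
# Sketch of a single-factor model-adjustment procedure: regress local
# observations on the regional-model prediction P, then reuse the fitted line.

def ols(x, y):
    """Ordinary least squares intercept and slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

regional_pred = [12.0, 20.0, 35.0, 50.0]   # P from the regional model (hypothetical)
local_obs = [10.0, 17.5, 30.0, 44.0]       # observed local storm loads (hypothetical)
a, b = ols(regional_pred, local_obs)

def adjusted(p):
    """Adjusted local prediction for a new regional prediction p."""
    return a + b * p
```

A slope below one, as in this toy calibration, means the regional model systematically overpredicts at this locality, and the MAP corrects for exactly that.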
Zhang, Tao; Yang, Xiaojun
2013-01-01
Watershed-wide land-cover proportions can be used to predict in-stream non-point source pollutant loadings through regression modeling. However, model performance can vary greatly across study sites and among watersheds. Existing literature has shown that this type of regression modeling tends to perform better for large watersheds than for small ones, and that this performance variation has been largely linked to differing levels of inter-watershed landscape heterogeneity. The purpose of this study is to further examine this empirical observation, based on a set of watersheds in the northern part of Georgia (USA), and to explore the underlying causes of the variation in model performance. Through the combined use of the neutral landscape modeling approach and a spatially explicit nutrient loading model, we tested whether the variation in regression model performance over watershed groups ranging in size is due to different levels of watershed landscape heterogeneity. We adopted three neutral landscape modeling criteria tied to different similarity levels in watershed landscape properties and used the nutrient loading model to estimate nitrogen loads for these neutral watersheds. We then compared the regression model performance for the real and neutral landscape scenarios. We found that watershed size can affect regression model performance both directly and indirectly: along with the indirect effect through inter-watershed heterogeneity, watershed size can directly affect model performance. We also found that model performance can be more strongly affected by other physiographic properties shaping nitrogen delivery effectiveness than by watershed land-cover heterogeneity. This study contrasts with many existing studies because it goes beyond hypothesis formulation based on empirical observations into hypothesis testing that explores the underlying mechanism.
A bioavailable strontium isoscape for Western Europe: A machine learning approach
von Holstein, Isabella C. C.; Laffoon, Jason E.; Willmes, Malte; Liu, Xiao-Ming; Davies, Gareth R.
2018-01-01
Strontium isotope ratios (87Sr/86Sr) are gaining considerable interest as a geolocation tool and are now widely applied in archaeology, ecology, and forensic research. However, their application for provenance requires the development of baseline models predicting surficial 87Sr/86Sr variations (“isoscapes”). A variety of empirically-based and process-based models have been proposed to build terrestrial 87Sr/86Sr isoscapes but, in their current forms, those models are not mature enough to be integrated with continuous-probability surface models used in geographic assignment. In this study, we aim to overcome those limitations and to predict 87Sr/86Sr variations across Western Europe by combining process-based models and a series of remote-sensing geospatial products into a regression framework. We find that random forest regression significantly outperforms other commonly used regression and interpolation methods, and efficiently predicts the multi-scale patterning of 87Sr/86Sr variations by accounting for geological, geomorphological and atmospheric controls. Random forest regression also provides an easily interpretable and flexible framework to integrate different types of environmental auxiliary variables required to model the multi-scale patterning of 87Sr/86Sr variability. The method is transferable to different scales and resolutions and can be applied to the large collection of geospatial data available at local and global levels. The isoscape generated in this study provides the most accurate 87Sr/86Sr predictions in bioavailable strontium for Western Europe (R2 = 0.58 and RMSE = 0.0023) to date, as well as a conservative estimate of spatial uncertainty by applying quantile regression forest. We anticipate that the method presented in this study combined with the growing numbers of bioavailable 87Sr/86Sr data and satellite geospatial products will extend the applicability of the 87Sr/86Sr geo-profiling tool in provenance applications. PMID:29847595
Robust Semi-Active Ride Control under Stochastic Excitation
2014-01-01
broad classes of time-series models which are of practical importance: the Auto-Regressive (AR) models, the Integrated (I) models, and the Moving Average (MA) models [12]. Combinations of these models result in autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models… 3) Down Up 4) Down Down. These four cases can be written in compact form as Eq. (20), involving the Heaviside step function.
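The snippet above names the AR, I, and MA building blocks that combine into ARMA and ARIMA models. As a minimal illustration, and not anything from the cited report, the simplest member of the family, an AR(1) process, can be simulated and its coefficient recovered by lag-1 least squares in stdlib Python (all names here are hypothetical):

```python
import random

def simulate_ar1(phi, n, seed=0, sigma=1.0):
    """Simulate an AR(1) process: x[t] = phi * x[t-1] + Gaussian noise."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, sigma))
    return x

def fit_ar1(x):
    """Least-squares estimate of phi: regress x[t] on x[t-1] (no intercept)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

series = simulate_ar1(0.7, 5000)
phi_hat = fit_ar1(series)  # close to the true value 0.7
```

Differencing such a series once before fitting is the "I" step that turns an ARMA fit into an ARIMA fit.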
Forecasting Techniques and Library Circulation Operations: Implications for Management.
ERIC Educational Resources Information Center
Ahiakwo, Okechukwu N.
1988-01-01
Causal regression and time series models were developed using six years of data for home borrowing, average readership, and books consulted at a university library. The models were tested for efficacy in producing short-term planning and control data. Combined models were tested in establishing evaluation measures. (10 references) (Author/MES)
Rapid Detection of Volatile Oil in Mentha haplocalyx by Near-Infrared Spectroscopy and Chemometrics.
Yan, Hui; Guo, Cheng; Shao, Yang; Ouyang, Zhen
2017-01-01
Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) regression was applied for the rapid determination of volatile oil content in Mentha haplocalyx. The quality of a medicine links directly to its clinical efficacy, so controlling the quality of Mentha haplocalyx is important. The effects of data pre-processing methods on the accuracy of the PLSR calibration models were investigated. The performance of the final model was evaluated according to the correlation coefficient (R) and root mean square error of prediction (RMSEP). For the PLSR model, the best pre-processing combination was first-order derivative, standard normal variate transformation (SNV), and mean centering, which gave calibration and prediction correlation coefficients of 0.8805 and 0.8719, with RMSEC of 0.091 and RMSEP of 0.097, respectively. The wavenumber variables linked to volatile oil lie between 5500 and 4000 cm-1, as shown by analyzing the loading weights and variable importance in projection (VIP) scores. For the SVM model, six LVs (fewer than the seven LVs in the PLSR model) were adopted, and the result was better than the PLSR model: calibration and prediction correlation coefficients of 0.9232 and 0.9202, with RMSEC and RMSEP of 0.084 and 0.082, respectively, which indicated that the predicted values were accurate and reliable. This work demonstrated that near-infrared reflectance spectroscopy with chemometrics can be used to rapidly detect the main volatile oil content in M. haplocalyx.
Abbreviations used: 1st der: First-order derivative; 2nd der: Second-order derivative; LOO: Leave-one-out; LVs: Latent variables; MC: Mean centering; NIR: Near-infrared; NIRS: Near-infrared spectroscopy; PCR: Principal component regression; PLSR: Partial least squares regression; RBF: Radial basis function; RMSECV: Root mean square error of cross-validation; RMSEC: Root mean square error of calibration; RMSEP: Root mean square error of prediction; SNV: Standard normal variate transformation; SVM: Support vector machine; VIP: Variable importance in projection.
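Of the pre-processing steps named above, the standard normal variate transformation is compact enough to sketch: each spectrum is centred and scaled by its own standard deviation to suppress multiplicative scatter. This is a generic sketch with made-up absorbance values, not the study's code:

```python
import math

def snv(spectrum):
    """Standard normal variate: centre a single spectrum and scale it to
    unit variance, suppressing multiplicative scatter effects."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in spectrum) / (n - 1))
    return [(v - mean) / sd for v in spectrum]

raw = [0.12, 0.15, 0.11, 0.18, 0.14]  # hypothetical absorbance values
corrected = snv(raw)  # zero mean, unit variance
```

In practice SNV is applied per spectrum before derivative and mean-centering steps, and the transformed matrix is then passed to PLSR or SVM fitting.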
USDA-ARS's Scientific Manuscript database
Illegal use of nitrogen-rich melamine (C3H6N6) to boost perceived protein content of food products such as milk, infant formula, frozen yogurt, pet food, biscuits, and coffee drinks has caused serious food safety problems. Conventional methods to detect melamine in foods, such as Enzyme-linked immun...
Hoffman, Haydn; Lee, Sunghoon I; Garst, Jordan H; Lu, Derek S; Li, Charles H; Nagasawa, Daniel T; Ghalehsari, Nima; Jahanforouz, Nima; Razaghy, Mehrdad; Espinal, Marie; Ghavamrezaii, Amir; Paak, Brian H; Wu, Irene; Sarrafzadeh, Majid; Lu, Daniel C
2015-09-01
This study introduces the use of multivariate linear regression (MLR) and support vector regression (SVR) models to predict postoperative outcomes in a cohort of patients who underwent surgery for cervical spondylotic myelopathy (CSM). Currently, predicting outcomes after surgery for CSM remains a challenge. We recruited patients who had a diagnosis of CSM and required decompressive surgery with or without fusion. Fine motor function was tested preoperatively and postoperatively with a handgrip-based tracking device that has been previously validated, yielding mean absolute accuracy (MAA) results for two tracking tasks (sinusoidal and step). All patients completed Oswestry disability index (ODI) and modified Japanese Orthopaedic Association questionnaires preoperatively and postoperatively. Preoperative data were utilized in MLR and SVR models to predict postoperative ODI. Predictions were compared to the actual ODI scores with the coefficient of determination (R(2)) and mean absolute difference (MAD). In total, 20 patients met the inclusion criteria and completed follow-up at least 3 months after surgery. With the MLR model, a combination of the preoperative ODI score, preoperative MAA (step function), and symptom duration yielded the best prediction of postoperative ODI (R(2)=0.452; MAD=0.0887; p=1.17 × 10^-3). With the SVR model, a combination of preoperative ODI score, preoperative MAA (sinusoidal function), and symptom duration yielded the best prediction of postoperative ODI (R(2)=0.932; MAD=0.0283; p=5.73 × 10^-12). The SVR model was more accurate than the MLR model. The SVR can be used preoperatively in risk/benefit analysis and the decision to operate. Copyright © 2015 Elsevier Ltd. All rights reserved.
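The two comparison metrics used above are easy to state precisely; a stdlib sketch with invented ODI-like scores (not patient data):

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1.0 - ss_res / ss_tot

def mad(actual, predicted):
    """Mean absolute difference between observed and predicted scores."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Invented ODI-like scores on a 0-1 scale (illustration only).
y_true = [0.30, 0.45, 0.20, 0.60]
y_pred = [0.32, 0.40, 0.25, 0.55]
r2 = r_squared(y_true, y_pred)
m = mad(y_true, y_pred)
```

A higher R² together with a lower MAD, as reported for the SVR model, means the predictions track both the variance and the absolute level of the observed scores more closely.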
Optimizing methods for linking cinematic features to fMRI data.
Kauttonen, Janne; Hlushchuk, Yevhen; Tikka, Pia
2015-04-15
One of the challenges of naturalistic neurosciences using movie-viewing experiments is how to interpret observed brain activations in relation to the multiplicity of time-locked stimulus features. As previous studies have shown less inter-subject synchronization across viewers of random video footage than of story-driven films, new methods need to be developed for the analysis of less story-driven contents. To optimize the linkage between our fMRI data collected during viewing of a deliberately non-narrative silent film 'At Land' by Maya Deren (1944) and its annotated content, we combined the method of elastic-net regularization with model-driven linear regression and the well-established data-driven independent component analysis (ICA) and inter-subject correlation (ISC) methods. In the linear regression analysis, both IC and region-of-interest (ROI) time-series were fitted with the time-series of a total of 36 binary-valued annotations and one real-valued tactile annotation of film features. The elastic-net regularization and cross-validation were applied in the ordinary least-squares linear regression in order to avoid over-fitting due to the multicollinearity of regressors; the results were compared against both partial least-squares (PLS) regression and un-regularized full-model regression. A non-parametric permutation testing scheme was applied to evaluate the statistical significance of the regression. We found statistically significant correlation between the annotation model and 9 of the 40 ICs. Regression analysis was also repeated for a large set of cubic ROIs covering the grey matter. Both IC- and ROI-based regression analyses revealed activations in parietal and occipital regions, with additional smaller clusters in the frontal lobe. Furthermore, we found elastic-net based regression more sensitive than PLS and un-regularized regression, since it detected a larger number of significant ICs and ROIs.
Along with the ISC ranking methods, our regression analysis proved to be a feasible method for ordering the ICs based on their functional relevance to the annotated cinematic features. The novelty of our method, in comparison to hypothesis-driven manual pre-selection and observation of individual regressors biased by choice, lies in applying a data-driven approach to all content features simultaneously. We found the combination of regularized regression and ICA especially useful when analyzing fMRI data obtained using a non-narrative movie stimulus with a large set of complex and correlated features. Copyright © 2015. Published by Elsevier Inc.
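The elastic net used above combines L1 and L2 penalties on a least-squares fit. A toy coordinate-descent sketch in plain Python, following the common alpha/l1_ratio parameterization; this illustrates the technique only and is not the study's pipeline:

```python
def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xw||^2
    + alpha*(l1_ratio*||w||_1 + 0.5*(1 - l1_ratio)*||w||^2)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's contribution removed.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p)) + w[j] * X[i][j]
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            denom = sum(X[i][j] ** 2 for i in range(n)) / n + alpha * (1 - l1_ratio)
            w[j] = soft_threshold(rho, alpha * l1_ratio) / denom
    return w

# Two highly correlated regressors; y depends only on the first.
X = [[1.0, 1.0], [2.0, 2.1], [3.0, 2.9], [4.0, 4.2]]
y = [2.0, 4.0, 6.0, 8.0]
w_light = elastic_net(X, y, alpha=0.01)  # near-unpenalized fit
w_heavy = elastic_net(X, y, alpha=1.0)   # strongly shrunk coefficients
```

With correlated regressors the L2 part spreads weight across the group while the L1 part still shrinks, which is exactly the behaviour that makes the penalty attractive for multicollinear annotation regressors.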
Simulation of land use change in the three gorges reservoir area based on CART-CA
NASA Astrophysics Data System (ADS)
Yuan, Min
2018-05-01
This study proposes a new method to simulate spatiotemporally complex multiple land uses using a classification and regression tree (CART)-based cellular automata (CA) model. In this model, the CART algorithm calculates the land-class conversion probability, which is combined with a neighborhood factor and a random factor to extract cellular transformation rules. In the land-use dynamic simulation of the Three Gorges reservoir area from 2000 to 2010, the overall Kappa coefficient is 0.8014 and the overall accuracy is 0.8821, and the simulation results are satisfactory.
Gao, Yongnian; Gao, Junfeng; Yin, Hongbin; Liu, Chuansheng; Xia, Ting; Wang, Jing; Huang, Qi
2015-03-15
Remote sensing has been widely used for water quality monitoring, but most of these monitoring studies have only focused on a few water quality variables, such as chlorophyll-a, turbidity, and total suspended solids, which have typically been considered optically active variables. Remote sensing presents a challenge in estimating the phosphorus concentration in water. The total phosphorus (TP) in lakes has been estimated from remotely sensed observations, primarily using simple individual band ratios or their natural logarithms and the statistical regression method based on the field TP data and the spectral reflectance. In this study, we investigated the possibility of establishing a spatial modeling scheme to estimate the TP concentration of a large lake from multi-spectral satellite imagery using band combinations and regional multivariate statistical modeling techniques, and we tested the applicability of the spatial modeling scheme. The results showed that HJ-1A CCD multi-spectral satellite imagery can be used to estimate the TP concentration in a lake. The correlation and regression analysis showed a highly significant positive relationship between the TP concentration and certain remotely sensed combination variables. The proposed modeling scheme had a higher accuracy for the TP concentration estimation in the large lake compared with the traditional individual band ratio method and the whole-lake scale regression-modeling scheme. The TP concentration values showed a clear spatial variability and were high in western Lake Chaohu and relatively low in eastern Lake Chaohu. The northernmost portion, the northeastern coastal zone and the southeastern portion of western Lake Chaohu had the highest TP concentrations, and the other regions had the lowest TP concentration values, except for the coastal zone of eastern Lake Chaohu.
These results strongly suggested that the proposed modeling scheme, i.e., the band combinations and the regional multivariate statistical modeling techniques, demonstrated advantages for estimating the TP concentration in a large lake and had a strong potential for universal application for the TP concentration estimation in large lake waters worldwide. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Holburn, E. R.; Bledsoe, B. P.; Poff, N. L.; Cuhaciyan, C. O.
2005-05-01
Using over 300 R/EMAP sites in OR and WA, we examine the relative explanatory power of watershed, valley, and reach scale descriptors in modeling variation in benthic macroinvertebrate indices. Innovative metrics describing flow regime, geomorphic processes, and hydrologic-distance weighted watershed and valley characteristics are used in multiple regression and regression tree modeling to predict EPT richness, % EPT, EPT/C, and % Plecoptera. A nested design using seven ecoregions is employed to evaluate the influence of geographic scale and environmental heterogeneity on the explanatory power of individual and combined scales. Regression tree models are constructed to explain variability while identifying threshold responses and interactions. Cross-validated models demonstrate differences in the explanatory power associated with single-scale and multi-scale models as environmental heterogeneity is varied. Models explaining the greatest variability in biological indices result from multi-scale combinations of physical descriptors. Results also indicate that substantial variation in benthic macroinvertebrate response can be explained with process-based watershed and valley scale metrics derived exclusively from common geospatial data. This study outlines a general framework for identifying key processes driving macroinvertebrate assemblages across a range of scales and establishing the geographic extent at which various levels of physical description best explain biological variability. Such information can guide process-based stratification to avoid spurious comparison of dissimilar stream types in bioassessments and ensure that key environmental gradients are adequately represented in sampling designs.
NASA Astrophysics Data System (ADS)
Dalkilic, Turkan Erbay; Apaydin, Aysen
2009-11-01
In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When estimating a regression model for fuzzy inputs that have been derived from different distributions, the model is termed a 'switching regression model', in which each class has its own functional relationship; here li indicates the class number of each independent variable and p indicates the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. There are methods that suggest the class numbers of independent variables heuristically; alternatively, this study aims to define the optimal class number of independent variables using a suggested validity criterion for fuzzy clustering. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after obtaining an optimal membership function suitable for the exponential distribution.
NASA Astrophysics Data System (ADS)
Mfumu Kihumba, Antoine; Ndembo Longo, Jean; Vanclooster, Marnik
2016-03-01
A multivariate statistical modelling approach was applied to explain the anthropogenic pressure of nitrate pollution on the Kinshasa groundwater body (Democratic Republic of Congo). Multiple regression and regression tree models were compared and used to identify major environmental factors that control the groundwater nitrate concentration in this region. The analyses were made in terms of physical attributes related to the topography, land use, geology and hydrogeology in the capture zone of different groundwater sampling stations. For the nitrate data, groundwater datasets from two different surveys were used. The statistical models identified the topography, the residential area, the service land (cemetery), and the surface-water land-use classes as major factors explaining nitrate occurrence in the groundwater. Also, groundwater nitrate pollution depends not on one single factor but on the combined influence of factors representing nitrogen loading sources and aquifer susceptibility characteristics. The groundwater nitrate pressure was better predicted with the regression tree model than with the multiple regression model. Furthermore, the results elucidated the sensitivity of the model performance towards the method of delineation of the capture zones. For pollution modelling at the monitoring points, therefore, it is better to identify capture-zone shapes based on a conceptual hydrogeological model rather than to adopt arbitrary circular capture zones.
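The regression-tree advantage reported above comes from capturing threshold-type responses. A single-split regression tree, a "stump", shows the mechanism; the predictor and nitrate values below are invented for illustration:

```python
def fit_stump(x, y):
    """Single-split regression tree: pick the threshold on x that minimises
    the summed squared error of the two leaf means."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    xs = [x[i] for i in order]
    ys = [y[i] for i in order]
    best = None
    for s in range(1, len(xs)):
        left, right = ys[:s], ys[s:]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, (xs[s - 1] + xs[s]) / 2, ml, mr)
    _, thr, ml, mr = best
    return thr, ml, mr

def predict_stump(model, xi):
    thr, ml, mr = model
    return ml if xi < thr else mr

# Hypothetical data: nitrate vs. share of residential land in the capture zone.
land = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]
nitrate = [5.0, 6.0, 5.0, 20.0, 22.0, 21.0]
model = fit_stump(land, nitrate)  # splits near 0.45: low vs. high nitrate
```

A full regression tree applies this split search recursively across all predictors, which is how it captures the combined influence of loading sources and aquifer susceptibility without a pre-specified functional form.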
Statistical modeling of landslide hazard using GIS
Peter V. Gorsevski; Randy B. Foltz; Paul E. Gessler; Terrance W. Cundy
2001-01-01
A model for spatial prediction of landslide hazard was applied to a watershed affected by landslide events that occurred during the winter of 1995-96, following heavy rains, and snowmelt. Digital elevation data with 22.86 m x 22.86 m resolution was used for deriving topographic attributes used for modeling. The model is based on the combination of logistic regression...
Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng
2018-05-03
Concentrations of particulate matter with aerodynamic diameter <2.5 μm (PM2.5) are relatively high in China. Estimation of PM2.5 exposure is complex because PM2.5 exhibits complex spatiotemporal patterns. To improve the validity of exposure predictions, several methods have been developed and applied worldwide. A hybrid approach combining a land use regression (LUR) model with Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals was developed to estimate the PM2.5 concentrations on a national scale in China. This hybrid model could potentially provide more valid predictions than a commonly-used LUR model. The LUR/BME model had good performance characteristics, with R² = 0.82 and root mean square error (RMSE) of 4.6 μg/m³. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R² increasing by 6%. The performance of LUR/BME is better than that of ordinary kriging (OK)/BME. The LUR/BME model is the most accurate fine spatial scale PM2.5 model developed to date for China. Copyright © 2018. Published by Elsevier Ltd.
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2016-07-01
This study aims to provide a scientific basis for unifying the reference-value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared: the optimal weighted combination model reduces the prediction risk of any single model and improves prediction precision. The geographical distribution of the reference value of QT dispersion in Chinese adults was mapped precisely using kriging methods. When the geographical factors of a particular area are known, the reference value of QT dispersion for Chinese adults in that area can be estimated with the optimal weighted combination model, and the reference value anywhere in China can be read from the geographical distribution map.
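One standard way to build an "optimal weighted combination" is to choose weights summing to one that minimize the squared error of the combined forecast on historical data. The abstract does not spell out its weighting scheme, so the closed-form least-squares sketch below, with invented forecasts, is only illustrative:

```python
def optimal_weights(y, p1, p2, p3):
    """Least-squares weights for three forecasts, constrained to sum to 1.
    Substituting w3 = 1 - w1 - w2 reduces this to a 2x2 normal system."""
    a = [u - v for u, v in zip(p1, p3)]
    b = [u - v for u, v in zip(p2, p3)]
    t = [u - v for u, v in zip(y, p3)]
    aa = sum(v * v for v in a)
    bb = sum(v * v for v in b)
    ab = sum(u * v for u, v in zip(a, b))
    at = sum(u * v for u, v in zip(a, t))
    bt = sum(u * v for u, v in zip(b, t))
    det = aa * bb - ab * ab  # assumed nonzero (forecasts not collinear)
    w1 = (at * bb - bt * ab) / det
    w2 = (bt * aa - at * ab) / det
    return w1, w2, 1.0 - w1 - w2

# Invented forecasts from three models against observed values.
y  = [1.0, 2.0, 3.0, 4.0]
p1 = [1.1, 1.9, 3.2, 3.8]
p2 = [0.9, 2.2, 2.8, 4.1]
p3 = [2.0, 3.0, 4.0, 5.0]  # a systematically biased model
w1, w2, w3 = optimal_weights(y, p1, p2, p3)
combined = [w1 * a + w2 * b + w3 * c for a, b, c in zip(p1, p2, p3)]
```

Because the feasible set contains the three single-model weightings, the combined forecast's squared error on the fitting data can never exceed that of the best single model, which is the sense in which combination reduces the risk of relying on one model.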
Carver, Brett S; Chapinski, Caren; Wongvipat, John; Hieronymus, Haley; Chen, Yu; Chandarlapaty, Sarat; Arora, Vivek K; Le, Carl; Koutcher, Jason; Scher, Howard; Scardino, Peter T; Rosen, Neal; Sawyers, Charles L
2011-01-01
Summary: Prostate cancer is characterized by its dependence on androgen receptor and frequent activation of PI3K signaling. We find that AR transcriptional output is decreased in human and murine tumors with PTEN deletion and that PI3K pathway inhibition activates AR signaling by relieving feedback inhibition of HER kinases. Similarly, AR inhibition activates AKT signaling by reducing levels of the AKT phosphatase PHLPP. Thus, these two oncogenic pathways cross-regulate each other by reciprocal feedback. Inhibition of one activates the other, thereby maintaining tumor cell survival. However, combined pharmacologic inhibition of PI3K and AR signaling caused near complete prostate cancer regressions in a Pten-deficient murine prostate cancer model and in human prostate cancer xenografts, indicating that both pathways coordinately support survival. Significance: The two most frequently activated signaling pathways in prostate cancer are driven by AR and PI3K. Inhibitors of the PI3K pathway are in early clinical trials and AR inhibitors confer clinical responses in most patients. However, these inhibitors rarely induce tumor regression in preclinical models. Here we show that these pathways regulate each other by reciprocal negative feedback, such that inhibition of one activates the other. Therefore, tumor cells can adapt and survive when either single pathway is inhibited pharmacologically. Our demonstration of profound tumor regressions with combined pathway inhibition in preclinical prostate tumor models provides rationale for combination therapy in patients. PMID:21575859
Chakraborty, Somsubhra; Weindorf, David C; Li, Bin; Ali Aldabaa, Abdalsamad Abdalsatar; Ghosh, Rakesh Kumar; Paul, Sathi; Nasim Ali, Md
2015-05-01
Using 108 petroleum contaminated soil samples, this pilot study proposed a new analytical approach combining visible near-infrared diffuse reflectance spectroscopy (VisNIR DRS) and portable X-ray fluorescence spectrometry (PXRF) for rapid and improved quantification of soil petroleum contamination. Results indicated that an advanced fused model, in which VisNIR DRS spectra-based penalized spline regression (PSR) was used to predict total petroleum hydrocarbon and PXRF elemental data-based random forest regression was then used to model the PSR residuals, outperformed (R(2)=0.78, residual prediction deviation (RPD)=2.19) all other models tested, even producing better generalization than VisNIR DRS alone (RPDs of 1.64, 1.86, and 1.96 for random forest, penalized spline regression, and partial least squares regression, respectively). Additionally, unsupervised principal component analysis using the PXRF+VisNIR DRS system qualitatively separated contaminated soils from control samples. Fusion of PXRF elemental data and VisNIR derivative spectra produced an optimized model for total petroleum hydrocarbon quantification in soils. Copyright © 2015 Elsevier B.V. All rights reserved.
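The fusion idea above, one model predicts the target and a second model corrects its residuals from other features, can be sketched generically. Below, ordinary least squares stands in for penalized spline regression and a nearest-neighbour lookup on an auxiliary feature stands in for the random forest; the data and names are invented:

```python
def fit_linear(x, y):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def two_stage_predict(x_train, y_train, aux_train, x_new, aux_new):
    """Stage 1: linear model on the primary predictor x.
    Stage 2: add the stage-1 residual of the training sample nearest
    in the auxiliary feature (a crude stand-in for a residual model)."""
    slope, icept = fit_linear(x_train, y_train)
    resid = [yi - (slope * xi + icept) for xi, yi in zip(x_train, y_train)]
    nearest = min(range(len(aux_train)), key=lambda i: abs(aux_train[i] - aux_new))
    return slope * x_new + icept + resid[nearest]

x_tr   = [1.0, 2.0, 3.0, 4.0]   # stand-in for a spectral score
aux_tr = [0.0, 1.0, 0.0, 1.0]   # stand-in for an elemental (PXRF-like) feature
y_tr   = [2.0, 9.0, 6.0, 13.0]  # generated as 2*x + 5*aux
prediction = two_stage_predict(x_tr, y_tr, aux_tr, 2.5, 1.0)
```

The stage-1 model alone ignores the auxiliary effect, so its residuals carry a signal that the second stage can exploit; that is the same division of labour as PSR plus residual random forest in the study.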
Li, Guowei; Thabane, Lehana; Delate, Thomas; Witt, Daniel M.; Levine, Mitchell A. H.; Cheng, Ji; Holbrook, Anne
2016-01-01
Objectives: To construct and validate a prediction model for individual combined benefit and harm outcomes (stroke with no major bleeding, major bleeding with no stroke, neither event, or both) in patients with atrial fibrillation (AF) with and without warfarin therapy. Methods: Using the Kaiser Permanente Colorado databases, we included patients newly diagnosed with AF between January 1, 2005 and December 31, 2012 for model construction and validation. The primary outcome was a prediction model for the composite of stroke or major bleeding using polytomous logistic regression (PLR) modelling. The secondary outcome was a prediction model of all-cause mortality using Cox regression modelling. Results: We included 9074 patients: 4537 warfarin users and 4537 non-users. In the derivation cohort (n = 4632), 136 strokes (2.94%), 280 major bleedings (6.04%) and 1194 deaths (25.78%) occurred. In the prediction models, warfarin use was not significantly associated with risk of stroke, but increased the risk of major bleeding and decreased the risk of death. Both the PLR and Cox models were robust, internally and externally validated, and had acceptable model performance. Conclusions: In this study, we introduce a new methodology for predicting individual combined benefit and harm outcomes associated with warfarin therapy for patients with AF. Should this approach be validated in other patient populations, it has potential advantages over existing risk stratification approaches as a patient-physician aid for shared decision-making. PMID:27513986
I show that a conditional probability analysis using a stressor-response model based on a logistic regression provides a useful approach for developing candidate water quality criteria from empirical data, such as the Maryland Biological Streams Survey (MBSS) data.
Organizational Reward Systems: Implications for Climate.
1982-09-01
Further, a comparison of hierarchical regression models incorporating the combined sets of perceived climate, attributions and… Moreover, the data question organizational models that over-emphasize a directional flow from organizational policy to perceived climate to job attitudes…
Lange, Rogier; ter Heine, Rob; van Wieringen, Wessel N; Tromp, Adrienne M; Paap, Mayke; Bloemendal, Haiko J; de Klerk, John M H; Hendrikse, N Harry; Geldof, Albert A
2017-02-01
Rhenium-188-HEDP is an effective radiopharmaceutical for the treatment of painful bone metastases from prostate cancer. The effectiveness of the β-radiation emitted by 188Re might be enhanced by combination with chemotherapy, using the radiosensitization concept. Therefore, the authors investigated the combined treatment of the taxanes, docetaxel and cabazitaxel, with 188Re in prostate carcinoma cell lines. The cytotoxic effects of single and combined treatment with taxanes and 188Re were investigated in three human prostate carcinoma cell lines (PC-3, DU 145, and LNCaP), using the colony-forming assay. The half maximal effective concentration (EC50) of all individual agents was determined. The combined treatment was studied at 0.25, 0.5, 1, 2, and 4 times the EC50 of each agent. The interaction was investigated with a regression model. The survival curves showed dose-dependent cell growth inhibition for both the taxanes and 188Re. The regression model showed a good capability of explaining the data. It proved additivity in all combination experiments and confirmed a general trend toward a slight subadditive effect. This proof-of-mechanism study exploring radiosensitization by combining 188Re and taxanes showed no synergism, but significant additivity. This encourages the design of in vivo studies. Future research should explore the potential added value of concomitant treatment of bone metastases with chemotherapy and 188Re-HEDP.
NASA Astrophysics Data System (ADS)
Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin
2018-03-01
The paper discusses the prediction of the Jakarta Composite Index (JCI) on the Indonesia Stock Exchange. The study is based on JCI historical data for 1286 days to predict the value of JCI one day ahead. This paper proposes predictions done in two stages: the first stage uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second stage uses Support Vector Regression (SVR) to predict the value of JCI one day ahead, resulting in a hybrid FTS-SVR prediction model. The performance of this combined prediction model is compared with the performance of the single-stage prediction model using SVR only. Ten technical indicators are used as input for each model.
Cox Regression Models with Functional Covariates for Survival Data.
Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M
2015-06-01
We extend the Cox proportional hazards model to cases when the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time varying coefficients, and missing or unequally-spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge and daily measures of disease severity collected in the intensive care unit, among survivors of acute respiratory distress syndrome.
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best-model criteria, as they all affect the accuracy and efficiency of the produced predictive models and therefore raise model reproducibility and comparison issues. Cheminformatics and bioinformatics use predictive modelling extensively and need standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields.
Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxide descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance, for both training and test sets, than those reported in the original publications. Its good performance, as well as its adaptability in terms of parameter optimization, could make RRegrs a popular framework for assisting the initial exploration of predictive models and, with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
A Solution to Separation and Multicollinearity in Multiple Logistic Regression
Shen, Jianzhao; Gao, Sujuan
2010-01-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce the bias and non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach alone resolves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
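A minimal sketch of the double-penalized objective, assuming a logistic model: the log-likelihood plus Firth's Jeffreys-prior term, 0.5·log|I(β)|, minus a ridge term λ‖β‖². The data and the λ value below are synthetic and illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -1.0, 0.5, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

lam = 0.1  # ridge parameter (assumed value; chosen by other means in practice)

def neg_double_penalized_loglik(beta):
    eta = X @ beta
    mu = 1 / (1 + np.exp(-eta))
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))   # Bernoulli log-likelihood
    W = mu * (1 - mu)                                  # logistic variance weights
    info = X.T @ (W[:, None] * X)                      # Fisher information I(beta)
    firth = 0.5 * np.linalg.slogdet(info)[1]           # Firth / Jeffreys penalty
    ridge = lam * beta @ beta                          # ridge penalty
    return -(loglik + firth - ridge)

beta_hat = minimize(neg_double_penalized_loglik, np.zeros(p)).x
print(np.round(beta_hat, 2))
```

The Firth term pulls estimates away from infinity under separation, while the ridge term stabilizes them under multicollinearity.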
Above-ground biomass of mangrove species. I. Analysis of models
NASA Astrophysics Data System (ADS)
Soares, Mário Luiz Gomes; Schaeffer-Novelli, Yara
2005-10-01
This study analyzes the above-ground biomass of Rhizophora mangle and Laguncularia racemosa located in the mangroves of Bertioga (SP) and Guaratiba (RJ), Southeast Brazil. Its purpose is to determine the best regression model to indirectly estimate total above-ground biomass and compartment (leaves, reproductive parts, twigs, branches, trunk and prop roots) biomass. To do this, we used structural measurements such as height, diameter at breast height (DBH), and crown area. A combination of regression types with several compositions of independent variables generated 2,272 models that were then tested. Subsequent analysis of the models indicated that the biomass of reproductive parts, branches, and prop roots showed great variability, probably because of environmental factors and seasonality (in the case of reproductive parts). It also indicated the superiority of multiple regression for estimating above-ground biomass, as it allows researchers to consider several aspects that affect above-ground biomass, especially the influence of environmental factors. This was confirmed by the models that estimated the biomass of crown compartments.
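The indirect-estimation idea — a log-log multiple regression of biomass on structural measurements such as DBH and height — can be sketched with NumPy. All values below are synthetic stand-ins, not the study's measurements or its fitted models:

```python
import numpy as np

# Synthetic tree data (illustrative values only).
rng = np.random.default_rng(1)
dbh = rng.uniform(2, 30, 50)                                   # cm
height = 1.3 + 0.6 * dbh ** 0.8 + rng.normal(0, 0.5, 50)       # m
biomass = 0.05 * (dbh ** 2 * height) ** 0.9 * rng.lognormal(0, 0.1, 50)  # kg

# Fit ln(B) = a + b*ln(DBH) + c*ln(H) by ordinary least squares.
A = np.column_stack([np.ones_like(dbh), np.log(dbh), np.log(height)])
coef, *_ = np.linalg.lstsq(A, np.log(biomass), rcond=None)
pred = np.exp(A @ coef)  # back-transformed biomass estimates
r2 = 1 - np.sum((np.log(biomass) - A @ coef) ** 2) / np.sum(
    (np.log(biomass) - np.log(biomass).mean()) ** 2)
print(np.round(coef, 2), round(r2, 3))
```

Allometric studies typically compare many such variable combinations and regression forms, as the 2,272 candidate models here illustrate.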
Weighted functional linear regression models for gene-based association analysis.
Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I
2018-01-01
Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10⁻⁶), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah
2018-07-01
In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models were artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy; the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic curve (AUROC) belonged to boosted regression trees (0.975) and the lowest was recorded for the generalized linear model (0.642). The proposed EMmedian, however, resulted in the highest accuracy (0.976) among all models. Despite the outstanding performance of some individual models, the variability among their predictions was considerable. Therefore, to reduce uncertainty and to create more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
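Once the individual models' predictions are stacked, the ensemble combiners reduce to simple column-wise statistics. A sketch of four of the seven combiners, with made-up susceptibility values for three models at four locations:

```python
import numpy as np

# Stacked susceptibility predictions (rows: models, columns: locations);
# the values are illustrative, not from the Haraz data set.
preds = np.array([
    [0.91, 0.12, 0.55, 0.80],
    [0.85, 0.20, 0.40, 0.95],
    [0.97, 0.05, 0.60, 0.70],
])

em_mean = preds.mean(axis=0)                     # EMmean
em_median = np.median(preds, axis=0)             # EMmedian (best in the study)
em_ca = (preds >= 0.5).mean(axis=0)              # committee averaging (EMca)
em_cv = preds.std(axis=0) / preds.mean(axis=0)   # EMcv, a disagreement measure
print(em_median)
```

The median's robustness to a single outlying model is one plausible reason EMmedian outperformed the mean-based combiners.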
Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula
2011-01-01
Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.
Estimation of Chinese surface NO2 concentrations combining satellite data and Land Use Regression
NASA Astrophysics Data System (ADS)
Anand, J.; Monks, P.
2016-12-01
Monitoring surface-level air quality is often limited by in-situ instrument placement and by issues arising from harmonisation over long timescales. Satellite instruments can offer a synoptic view of regional pollution sources, but in many cases only a total or tropospheric column can be measured. In this work a new technique for estimating surface NO2 that combines both satellite and in-situ data is presented, in which a Land Use Regression (LUR) model is used to create high-resolution pollution maps based on known predictor variables such as population density, road networks, and land cover. By employing a mixed-effects approach, it is possible to take advantage of the spatiotemporal variability in the satellite-derived column densities to account for daily and regional variations in surface NO2 caused by factors such as temperature, elevation, and wind advection. Surface NO2 maps are modelled over the North China Plain and Pearl River Delta during high-pollution episodes by combining in-situ measurements and tropospheric columns from the Ozone Monitoring Instrument (OMI). The modelled concentrations show good agreement with in-situ data and with surface NO2 concentrations derived from the MACC-II global reanalysis.
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression for estimating adjusted likelihood ratios that allow for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes approach when it is important to adjust for dependencies between test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model, to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
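For contrast, the independence Bayes calculation that these adjusted-LR methods improve upon is a one-liner: posttest odds equal pretest odds multiplied by each test's likelihood ratio, assuming the tests are independent. The prevalence and LR values below are illustrative:

```python
def posttest_prob(pretest_prob, *likelihood_ratios):
    """Independence Bayes: multiply pretest odds by each test's LR.

    The adjusted-LR regression methods above replace this naive product
    when the tests are correlated."""
    odds = pretest_prob / (1 - pretest_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Two positive findings with LR+ of 3.0 and 2.0, prevalence 20%.
print(round(posttest_prob(0.20, 3.0, 2.0), 3))  # → 0.6
```

When test errors are dependent, the naive product overstates (or understates) the evidence, which is exactly what the shrinkage and offset terms in the reviewed models correct.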
Pragmatic estimation of a spatio-temporal air quality model with irregular monitoring data
NASA Astrophysics Data System (ADS)
Sampson, Paul D.; Szpiro, Adam A.; Sheppard, Lianne; Lindström, Johan; Kaufman, Joel D.
2011-11-01
Statistical analyses of health effects of air pollution have increasingly used GIS-based covariates for prediction of ambient air quality in "land use" regression models. More recently these spatial regression models have accounted for spatial correlation structure in combining monitoring data with land use covariates. We present a flexible spatio-temporal modeling framework and pragmatic, multi-step estimation procedure that accommodates essentially arbitrary patterns of missing data with respect to an ideally complete space by time matrix of observations on a network of monitoring sites. The methodology incorporates a model for smooth temporal trends with coefficients varying in space according to Partial Least Squares regressions on a large set of geographic covariates and nonstationary modeling of spatio-temporal residuals from these regressions. This work was developed to provide spatial point predictions of PM 2.5 concentrations for the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) using irregular monitoring data derived from the AQS regulatory monitoring network and supplemental short-time scale monitoring campaigns conducted to better predict intra-urban variation in air quality. We demonstrate the interpretation and accuracy of this methodology in modeling data from 2000 through 2006 in six U.S. metropolitan areas and establish a basis for likelihood-based estimation.
Song, Seung Yeob; Lee, Young Koung; Kim, In-Jung
2016-01-01
A high-throughput system for screening Citrus lines with higher sugar and acid contents was established using Fourier transform infrared (FT-IR) spectroscopy in combination with multivariate analysis. FT-IR spectra confirmed typical spectral differences between the frequency regions of 950-1100 cm(-1), 1300-1500 cm(-1), and 1500-1700 cm(-1). Principal component analysis (PCA) and subsequent partial least squares discriminant analysis (PLS-DA) were able to discriminate five Citrus lines into three separate clusters corresponding to their taxonomic relationships. Quantitative predictive modeling of sugar and acid contents in Citrus fruits was established using partial least squares regression algorithms on the FT-IR spectra. The coefficients of determination (R(2)) between predicted and estimated sugar and acid content values were 0.99. These results demonstrate that, by using FT-IR spectra and applying quantitative prediction modeling to Citrus sugar and acid contents, excellent Citrus lines can be detected early and with greater accuracy. Copyright © 2015 Elsevier Ltd. All rights reserved.
Developing a dengue forecast model using machine learning: A case study in China.
Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun
2017-10-01
In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. The findings can help the government and community respond early to dengue epidemics.
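The SVR setup — covariates in, incidence out, with the final weeks held out for validation — can be sketched with scikit-learn. The series below are toy stand-ins for the dengue counts, climate factors and Baidu search index, not the Guangdong data:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy weekly series: cases driven by temperature and a search index.
rng = np.random.default_rng(0)
weeks = 156
temp = 20 + 8 * np.sin(2 * np.pi * np.arange(weeks) / 52)   # seasonal temp
search = rng.poisson(50, weeks) + 2 * temp                  # "search index"
cases = 5 + 0.8 * temp + 0.3 * search + rng.normal(0, 2, weeks)

X = np.column_stack([temp, search])
train, test = slice(0, 144), slice(144, weeks)   # hold out the last 12 weeks
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100))
model.fit(X[train], cases[train])
rmse = np.sqrt(np.mean((model.predict(X[test]) - cases[test]) ** 2))
print(round(rmse, 2))
```

The study additionally included year and week terms for trend and seasonality, and selected the SVR hyperparameters by cross-validation.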
Li, Yankun; Shao, Xueguang; Cai, Wensheng
2007-04-15
Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on this principle, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; then, the consensus LS-SVR technique was used to build the calibration model. With optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods.
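The consensus idea — train several sub-models on resampled data and average their predictions — can be sketched as follows. Epsilon-SVR is used as a readily available stand-in for LS-SVR, and the data are synthetic, so this illustrates only the combination scheme, not the paper's method:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic 1-D calibration problem.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 80)

# Train member models on bootstrap resamples of the calibration set.
members = []
for seed in range(10):
    idx = np.random.default_rng(seed).integers(0, 80, 80)  # bootstrap sample
    members.append(SVR(kernel="rbf").fit(X[idx], y[idx]))

# Consensus prediction: average the member predictions.
X_new = np.linspace(-3, 3, 50).reshape(-1, 1)
consensus = np.mean([m.predict(X_new) for m in members], axis=0)
print(consensus.shape)
```

Averaging over resampled sub-models reduces the variance contributed by any single unstable fit, which is the robustness gain the abstract reports.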
Newman, J; Egan, T; Harbourne, N; O'Riordan, D; Jacquier, J C; O'Sullivan, M
2014-08-01
Sensory evaluation can be problematic for ingredients with a bitter taste during the research and development phase of new food products. In this study, 19 dairy protein hydrolysates (DPH) were analysed with an electronic tongue and characterized physicochemically, and the data obtained from these methods were correlated with bitterness intensity as scored by a trained sensory panel; each model was also assessed for its predictive capability. The physicochemical characteristics investigated were the degree of hydrolysis (DH%) and data relating to peptide size and relative hydrophobicity from size exclusion chromatography (SEC) and reverse-phase (RP) HPLC. Partial least squares (PLS) regression was used to construct the prediction models. All PLS regressions showed good correlations (0.78 to 0.93), the strongest being for the combination of data from SEC and RP HPLC. However, the PLS model with the strongest predictive power was based on the e-tongue, which gave the lowest root mean predicted residual error sum of squares (PRESS) in the study. The results show that the PLS models constructed with the e-tongue and with the combination of SEC and RP-HPLC have the potential to be used for bitterness prediction, thus reducing the reliance on sensory analysis of DPHs in future food research. Copyright © 2014 Elsevier B.V. All rights reserved.
Reconstruction of missing daily streamflow data using dynamic regression models
NASA Astrophysics Data System (ADS)
Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault
2015-12-01
River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short gaps in these records can lead to markedly different analysis outputs, so reconstructing missing data in incomplete data sets is an important step for the performance of environmental models, engineering, and research applications, and presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access only to daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship between neighbouring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series for the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
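A simplified two-step version of regression-with-ARIMA-errors can be sketched in NumPy: regress the target station on a correlated neighbour, then fit the residuals' autocorrelation (here just an AR(1) term rather than a full ARIMA structure, and on synthetic stations):

```python
import numpy as np

# Two synthetic, correlated daily-flow stations with autocorrelated errors.
rng = np.random.default_rng(0)
n = 300
neighbor = 50 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 1, n)
err = np.zeros(n)
for t in range(1, n):
    err[t] = 0.7 * err[t - 1] + rng.normal(0, 1)
target = 5 + 1.2 * neighbor + err

# Step 1: OLS of the target station on the neighbouring station.
A = np.column_stack([np.ones(n), neighbor])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
resid = target - A @ coef

# Step 2: fit an AR(1) to the OLS residuals.
phi = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])

# Fill a 12-day gap: regression part plus the decaying AR residual forecast
# from the last pre-gap residual.
gap = slice(100, 112)
recon = A[gap] @ coef + phi ** np.arange(1, 13) * resid[99]
print(round(float(phi), 2))
```

The full dynamic regression model generalizes step 2 to an ARIMA structure and estimates both parts jointly.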
Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin
2016-07-26
Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. As a chronic and highly infectious disease, TB is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle-income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive power of a seasonal autoregressive integrated moving average (SARIMA) model and a combined SARIMA-neural network auto-regression (SARIMA-NNAR) model for TB incidence, and to analyse its seasonality in South Africa. TB incidence case data from January 2010 to December 2015 were extracted from the Eastern Cape health facility reports of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA-NNAR model were used in analysing and predicting the TB data from 2010 to 2015. Performance measures of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to compare the predictive performance of the models. Though both models could predict TB incidence in practice, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) were 288.56, 308.31 and 299.09 respectively, lower than the corresponding SARIMA values of 329.02, 327.20 and 341.99. The seasonal TB incidence trend forecast by the SARIMA-NNAR model was slightly higher than that of the single model. With its lower AICc, the combined model gave the better TB incidence forecast.
The model also indicates the need for resolute intervention to reduce infectious disease transmission with co-infection with HIV and other concomitant diseases, and also at festival peak periods.
Analysis of a Rocket Based Combined Cycle Engine during Rocket Only Operation
NASA Technical Reports Server (NTRS)
Smith, T. D.; Steffen, C. J., Jr.; Yungster, S.; Keller, D. J.
1998-01-01
The all-rocket mode of operation is a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. However, short of performing experiments or a full three-dimensional analysis, there are no first-order parametric models to estimate performance. As a result, an axisymmetric RBCC engine was used to analytically determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design-of-experiments methodology was used to construct a test matrix, and statistical regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, percent of injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inject diameter ratio. A perfect-gas computational fluid dynamics analysis was performed to obtain values of vacuum specific impulse. Statistical regression analysis was performed based on both full flow and gas generator engine cycles. Results were also found to be dependent upon the assumptions made for the entire cycle. The statistical regression analysis determined that there were five significant linear effects, six interactions, and one second-order effect. Two parametric models were created to provide performance assessments of an RBCC engine in the all-rocket mode of operation.
Classical Testing in Functional Linear Models.
Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab
2016-01-01
We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155
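The re-expression step can be sketched in a few lines: compute FPCA scores from an SVD of the centred curves, then run an ordinary F test on the resulting standard linear model. The curves below are synthetic, densely observed on a common grid:

```python
import numpy as np

# Synthetic functional covariates: 100 curves on a 50-point grid, built
# from two basis functions; the response depends on the basis coefficients.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coefs = rng.normal(size=(100, 2))
curves = coefs @ basis + rng.normal(0, 0.05, (100, 50))
y = 2 * coefs[:, 0] - coefs[:, 1] + rng.normal(0, 0.1, 100)

# FPCA via SVD of the centred curves; keep the leading component scores.
C = curves - curves.mean(axis=0)
U, s, _ = np.linalg.svd(C, full_matrices=False)
scores = U[:, :2] * s[:2]

# Standard linear model on the scores; the classical F test applies.
A = np.column_stack([np.ones(100), scores])
beta, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
tss = np.sum((y - y.mean()) ** 2)
F = ((tss - rss[0]) / 2) / (rss[0] / (100 - 3))
print(F > 10)
```

The paper's theory concerns what happens to such tests as the number of retained components diverges; this fixed-truncation sketch shows only the re-expression itself.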
NASA Astrophysics Data System (ADS)
Chen, Hua-cai; Chen, Xing-dan; Lu, Yong-jun; Cao, Zhi-qiang
2006-01-01
Near infrared (NIR) reflectance spectroscopy was used to develop a fast determination method for total ginsenosides in ginseng (Panax ginseng) powder. The spectra were analyzed with the multiplicative signal correction (MSC) correlation method. The spectral regions best correlated with total ginsenoside content were 1660-1880 nm and 2230-2380 nm. NIR calibration models for the ginsenosides were built with multiple linear regression (MLR), principal component regression (PCR) and partial least squares (PLS) regression, respectively. The results showed that the best calibration model was the one built with PLS combined with MSC over the optimal spectral region. The correlation coefficient and the root mean square error of calibration (RMSEC) of the best model were 0.98 and 0.15%, respectively. The optimal spectral region for calibration was 1204-2014 nm. The results suggest that using NIR to rapidly determine the total ginsenoside content in ginseng powder is feasible.
Forecast model applications of retrieved three dimensional liquid water fields
NASA Technical Reports Server (NTRS)
Raymond, William H.; Olson, William S.
1990-01-01
Forecasts are made for tropical storm Emily using heating rates derived from the SSM/I physical retrievals described in chapters 2 and 3. Average values of the latent heating rates from the convective and stratiform cloud simulations, used in the physical retrieval, are obtained for individual 1.1 km thick vertical layers. Then, the layer-mean latent heating rates are regressed against the slant path-integrated liquid and ice precipitation water contents to determine the best fit two parameter regression coefficients for each layer. The regression formulae and retrieved precipitation water contents are utilized to infer the vertical distribution of heating rates for forecast model applications. In the forecast model, diabatic temperature contributions are calculated and used in a diabatic initialization, or in a diabatic initialization combined with a diabatic forcing procedure. Our forecasts show that the time needed to spin-up precipitation processes in tropical storm Emily is greatly accelerated through the application of the data.
Schilling, K.E.; Wolter, C.F.
2005-01-01
Nineteen variables, including precipitation, soils and geology, land use, and basin morphologic characteristics, were evaluated to develop Iowa regression models to predict total streamflow (Q), base flow (Qb), storm flow (Qs) and base flow percentage (%Qb) in gauged and ungauged watersheds in the state. Discharge records from a set of 33 watersheds across the state for the 1980 to 2000 period were separated into Qb and Qs. Multiple linear regression found that 75.5 percent of long term average Q was explained by rainfall, sand content, and row crop percentage variables, whereas 88.5 percent of Qb was explained by these three variables plus permeability and floodplain area variables. Qs was explained by average rainfall and %Qb was a function of row crop percentage, permeability, and basin slope variables. Regional regression models developed for long term average Q and Qb were adapted to annual rainfall and showed good correlation between measured and predicted values. Combining the regression model for Q with an estimate of mean annual nitrate concentration, a map of potential nitrate loads in the state was produced. Results from this study have important implications for understanding geomorphic and land use controls on streamflow and base flow in Iowa watersheds and similar agriculture dominated watersheds in the glaciated Midwest. (JAWRA) (Copyright © 2005).
Simultaneous confidence bands for Cox regression from semiparametric random censorship.
Mondal, Shoubhik; Subramanian, Sundarraman
2016-01-01
Cox regression is combined with semiparametric random censorship models to construct simultaneous confidence bands (SCBs) for subject-specific survival curves. Simulation results are presented to compare the performance of the proposed SCBs with the SCBs that are based only on standard Cox. The new SCBs provide correct empirical coverage and are more informative. The proposed SCBs are illustrated with two real examples. An extension to handle missing censoring indicators is also outlined.
Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks
Richter, Philipp; Toledano-Ayala, Manuel
2015-01-01
Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automated. Gaussian process regression has been applied to overcome this issue, with promising results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero mean and squared exponential covariance function, without further scrutiny. This paper studies Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor, outdoor, and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated as well. Comparative experiments on the positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate. PMID:26370996
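The paper's central point, that a zero-mean GP with a squared-exponential kernel is a questionable default for signal-strength data, can be illustrated with a small numpy sketch. The log-distance path-loss model, hyperparameter values, and simulated data below are illustrative assumptions, not taken from the paper; the comparison is between the log marginal likelihood of a zero-mean GP and of a GP applied to residuals of a fitted path-loss mean.

```python
import numpy as np

def sq_exp_kernel(x1, x2, ell=1.0, sf=10.0):
    # Squared exponential (RBF) covariance, the "default" choice under scrutiny
    d = x1[:, None] - x2[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def log_marginal_likelihood(x, y, noise=2.0):
    # Standard GP log marginal likelihood via a Cholesky factorization
    K = sq_exp_kernel(x, x) + noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return float(-0.5 * y @ alpha - np.log(np.diag(L)).sum()
                 - 0.5 * len(y) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
dist = rng.uniform(1.0, 50.0, 60)                              # distances to an AP, m
rss_dbm = -40.0 - 20.0 * np.log10(dist) + rng.normal(0, 2, 60) # log-distance path loss

# Zero-mean GP: the large negative dBm offset must be absorbed by the covariance
lml_zero_mean = log_marginal_likelihood(dist, rss_dbm)

# GP on residuals of a fitted log-distance mean model
A = np.column_stack([np.ones(dist.size), np.log10(dist)])
coef, *_ = np.linalg.lstsq(A, rss_dbm, rcond=None)
lml_path_loss_mean = log_marginal_likelihood(dist, rss_dbm - A @ coef)
```

With these (assumed) hyperparameters, the path-loss mean model yields a far higher marginal likelihood, mirroring the paper's conclusion that the standard zero-mean model is not the most appropriate.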
We show that a conditional probability analysis that utilizes a stressor-response model based on a logistic regression provides a useful approach for developing candidate water quality criteria from empirical data. The critical step in this approach is transforming the response ...
Policy Building--An Extension to User Modeling
ERIC Educational Resources Information Center
Yudelson, Michael V.; Brunskill, Emma
2012-01-01
In this paper we combine a logistic regression student model with an exercise selection procedure. As opposed to the body of prior work on strategies for selecting practice opportunities, we are working on an assumption of a finite amount of opportunities to teach the student. Our goal is to prescribe activities that would maximize the amount…
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Lu, Huishan; Fu, Xiaping
2005-11-01
A new method is proposed to eliminate the varying background and noise simultaneously for multivariate calibration of Fourier transform near infrared (FT-NIR) spectral signals. An ideal spectrum signal prototype was constructed based on the FT-NIR spectrum of fruit sugar content measurement. The performances of wavelet-based threshold de-noising approaches via different combinations of wavelet base functions were compared. Three families of wavelet base functions (Daubechies, Symlets and Coiflets) were applied to assess the performance of those wavelet bases and threshold selection rules in a series of experiments. The experimental results show that the best de-noising performance is reached with the Daubechies 4 or Symlet 4 wavelet base functions. Based on the optimized parameters, wavelet regression models for the sugar content of pears were also developed and resulted in a smaller prediction error than a traditional partial least squares regression (PLSR) model.
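The wavelet threshold de-noising idea can be sketched without a wavelet library by using a one-level Haar transform; note the study used Daubechies-4/Symlet-4 bases, so the Haar filter, the test signal, and the threshold value here are simplifying assumptions chosen only to keep the example self-contained.

```python
import numpy as np

def haar_soft_denoise(signal, threshold):
    # One-level orthonormal Haar decomposition (a stand-in for db4/sym4)
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)   # detail coefficients
    # Soft thresholding shrinks small (noise-dominated) detail coefficients to zero
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    out = np.empty_like(signal)
    out[0::2] = (a + d) / np.sqrt(2)                 # inverse transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 5 * t)                    # idealized spectrum prototype
noisy = clean + rng.normal(0.0, 0.2, t.size)

denoised = haar_soft_denoise(noisy, threshold=0.3)
err_noisy = float(np.sqrt(np.mean((noisy - clean) ** 2)))
err_denoised = float(np.sqrt(np.mean((denoised - clean) ** 2)))
```

Because the smooth signal lives mostly in the approximation coefficients while white noise spreads evenly across both bands, thresholding the details reduces the reconstruction error.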
Flexible link functions in nonparametric binary regression with Gaussian process priors.
Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K
2016-09-01
In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.
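The flexibility argument above hinges on the shape parameter of the GEV link. A minimal numpy sketch of one common parameterization follows; the exact sign conventions of the paper's link may differ, so treat this as an illustrative GEV-cdf link (with location 0 and scale 1) whose shape parameter `xi` skews the response curve, recovering a Gumbel-type (fixed-skewness, log-log) link as `xi -> 0`.

```python
import numpy as np

def gev_link(eta, xi):
    """P(y = 1) as the GEV cdf of the linear predictor eta (mu=0, sigma=1).

    xi controls the skewness of the response curve; unlike probit/logit,
    the data can choose xi. The xi -> 0 limit is the Gumbel cdf."""
    eta = np.asarray(eta, dtype=float)
    if abs(xi) < 1e-8:
        return np.exp(-np.exp(-eta))            # Gumbel limit
    # Clip enforces the support constraint 1 + xi * eta > 0 on both tails
    z = np.maximum(1.0 + xi * eta, 1e-12)
    return np.exp(-z ** (-1.0 / xi))

eta = np.linspace(-6.0, 6.0, 241)
p_left = gev_link(eta, -0.3)    # one skew direction
p_gumbel = gev_link(eta, 1e-12) # fixed-skewness limiting case
p_right = gev_link(eta, 0.3)    # the opposite skew direction
```

Plotting the three curves shows response functions of different asymmetry through the same family, which is exactly the degree of freedom probit and logit lack.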
Smooth Scalar-on-Image Regression via Spatial Bayesian Variable Selection
Goldsmith, Jeff; Huang, Lei; Crainiceanu, Ciprian M.
2013-01-01
We develop scalar-on-image regression models when images are registered multidimensional manifolds. We propose a fast and scalable Bayes inferential procedure to estimate the image coefficient. The central idea is the combination of an Ising prior distribution, which controls a latent binary indicator map, and an intrinsic Gaussian Markov random field, which controls the smoothness of the nonzero coefficients. The model is fit using a single-site Gibbs sampler, which allows fitting within minutes for hundreds of subjects with predictor images containing thousands of locations. The code is simple and is provided in less than one page in the Appendix. We apply this method to a neuroimaging study where cognitive outcomes are regressed on measures of white matter microstructure at every voxel of the corpus callosum for hundreds of subjects. PMID:24729670
Roper, Jatin; Sinnamon, Mark J.; Coffee, Erin M.; Belmont, Peter; Keung, Lily; Georgeon-Richard, Larissa; Wang, Wei Vivian; Faber, Anthony C.; Yun, Jihye; Yilmaz, Omer H.; Bronson, Roderick T.; Martin, Eric S.; Tsichlis, Philip N.; Hung, Kenneth E.
2014-01-01
PI3K inhibition in combination with other agents has not been studied in the context of PIK3CA wild-type, KRAS mutant cancer. In a screen of phospho-kinases, PI3K inhibition of KRAS mutant colorectal cancer cells activated the MAPK pathway. Combination PI3K/MEK inhibition with NVP-BKM120 and PD-0325901 induced tumor regression in a mouse model of PIK3CA wild-type, KRAS mutant colorectal cancer, which was mediated by inhibition of mTORC1, inhibition of MCL-1, and activation of BIM. These findings implicate mitochondrial-dependent apoptotic mechanisms as determinants for the efficacy of PI3K/MEK inhibition in the treatment of PIK3CA wild-type, KRAS mutant cancer. PMID:24576621
Fathallah, F A; Marras, W S; Parnianpour, M
1999-09-01
Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moments variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moments variables (combined models). The results showed that peak 3D spinal loading during symmetric and asymmetric lifting were predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.
Meraya, Abdulkarim M; Raval, Amit D; Sambamoorthi, Usha
2015-01-29
Little is known about how combinations of chronic conditions in adults affect total health care expenditures. Our objective was to estimate the annual average total expenditures and out-of-pocket spending burden among US adults by combinations of conditions. We conducted a cross-sectional study using 2009 and 2011 data from the Medical Expenditure Panel Survey. The sample consisted of 9,296 adults aged 21 years or older with at least 2 of the following 4 highly prevalent chronic conditions: arthritis, diabetes mellitus, heart disease, and hypertension. Unadjusted and adjusted regression techniques were used to examine the association between chronic condition combinations and log-transformed total expenditures. Logistic regressions were used to analyze the relationship between chronic condition combinations and high out-of-pocket spending burden. Among adults with chronic conditions, adults with all 4 conditions had the highest average total expenditures ($20,016), whereas adults with diabetes/hypertension had the lowest annual total expenditures ($7,116). In adjusted models, adults with diabetes/hypertension and hypertension/arthritis had lower health care expenditures than adults with diabetes/heart disease (P < .001). In adjusted models, adults with all 4 conditions had higher expenditures compared with those with diabetes and heart disease. However, the difference was only marginally significant (P = .04). Among adults with arthritis, diabetes, heart disease, and hypertension, total health care expenditures differed by type of chronic condition combinations. For individuals with multiple chronic conditions, such as heart disease and diabetes, new models of care management are needed to reduce the cost burden on the payers.
NASA Astrophysics Data System (ADS)
Wang, Lunche; Kisi, Ozgur; Zounemat-Kermani, Mohammad; Li, Hui
2017-01-01
Pan evaporation (Ep) plays an important role in agricultural water resources management. One of the basic challenges is modeling Ep using limited climatic parameters because there are a number of factors affecting the evaporation rate. This study investigated the abilities of six different soft computing methods, multi-layer perceptron (MLP), generalized regression neural network (GRNN), fuzzy genetic (FG), least square support vector machine (LSSVM), multivariate adaptive regression spline (MARS), adaptive neuro-fuzzy inference systems with grid partition (ANFIS-GP), and two regression methods, multiple linear regression (MLR) and the Stephens and Stewart model (SS), in predicting monthly Ep. Long-term climatic data at various sites crossing a wide range of climates during 1961-2000 are used for model development and validation. The results showed that the models have different accuracies in different climates and the MLP model performed superior to the other models in predicting monthly Ep at most stations using local input combinations (for example, the MAE (mean absolute error), RMSE (root mean square error), and determination coefficient (R2) are 0.314 mm/day, 0.405 mm/day and 0.988, respectively, for the HEB station), while the GRNN model performed better on the Tibetan Plateau (MAE, RMSE and R2 are 0.459 mm/day, 0.592 mm/day and 0.932, respectively). The accuracies of the above models ranked as: MLP, GRNN, LSSVM, FG, ANFIS-GP, MARS and MLR. The overall results indicated that the soft computing techniques generally performed better than the regression methods, but the MLR and SS models may be preferred in some climatic zones instead of complex nonlinear models, for example, at the BJ (Beijing), CQ (Chongqing) and HK (Haikou) stations. Therefore, it can be concluded that Ep could be successfully predicted using the above models in hydrological modeling studies.
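The three evaluation metrics quoted in the abstract (MAE, RMSE, and the determination coefficient R2) are straightforward to compute; a small sketch with made-up pan-evaporation values follows.

```python
import numpy as np

def mae(obs, pred):
    # Mean absolute error
    return float(np.mean(np.abs(obs - pred)))

def rmse(obs, pred):
    # Root mean square error; always >= MAE for the same residuals
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def r_squared(obs, pred):
    # Determination coefficient: 1 minus residual over total sum of squares
    ss_res = float(np.sum((obs - pred) ** 2))
    ss_tot = float(np.sum((obs - np.mean(obs)) ** 2))
    return 1.0 - ss_res / ss_tot

# Illustrative monthly Ep values in mm/day (not the study's data)
obs = np.array([2.1, 3.4, 5.0, 6.2, 4.8, 3.0])
pred = np.array([2.3, 3.1, 5.4, 5.9, 4.9, 3.2])
```

Reporting all three together, as the study does, is useful because MAE is robust to outliers, RMSE penalizes large errors, and R2 is scale-free.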
Quantitative Analysis of Land Loss in Coastal Louisiana Using Remote Sensing
NASA Astrophysics Data System (ADS)
Wales, P. M.; Kuszmaul, J.; Roberts, C.
2005-12-01
For the past thirty-five years the land loss along the Louisiana coast has been recognized as a growing problem. One of the clearest indicators of this land loss is that in 2000 smooth cordgrass (Spartina alterniflora) was turning brown well before its normal dormancy period. Over 100,000 acres of marsh were affected by the 2000 browning. In 2001 data were collected using low-altitude helicopter-based transects of the coast, with 7,400 data points collected by researchers at the USGS National Wetlands Research Center and the Louisiana Department of Natural Resources. The surveys contained data describing the characteristics of the marsh, including latitude, longitude, marsh condition, marsh color, percent vegetated, and marsh die-back. The ultimate goal of the study is to create a model that combines remote sensing images, field data, and statistical analysis into a methodology for estimating the margin of error in measurements of coastal land loss (erosion). A model was successfully created using a series of band combinations as predictive variables. The most successful band combinations, or predictive variables, were the braud value [(Sum Visible TM Bands - Sum Infrared TM Bands)/(Sum Visible TM Bands + Sum Infrared TM Bands)], TM band 7/TM band 2, brightness, NDVI, wetness, vegetation index, and a 7x7 autocovariate nearest neighbor floating window. These values were used to generate the logistic regression model. A new image was created based on the logistic regression probability equation, where each pixel represents the probability of finding water or non-water at that location: pixels with a high probability of representing water have a value close to 1 and pixels with a low probability have a value close to 0. A logistic regression model is proposed that uses seven independent variables.
This model yields an accurate classification in 86.5% of the locations considered in the 1997 and 2001 survey locations. When the logistic regression was modeled to the satellite imagery of the entire Louisiana Coast study area a statewide loss was estimated to be 358 mi2 to 368 mi2, from 1997 to 2001, using two different methods for estimating land loss.
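The per-pixel probability image described above is just the logistic function applied to a linear combination of the seven band-derived predictors, followed by a threshold. In this sketch the coefficients and pixel values are randomly generated placeholders, since the study's fitted values are not reproduced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model: intercept plus seven band-derived predictors
intercept = -0.5
coefs = rng.normal(0.0, 1.0, 7)

# 1000 pixels x 7 predictor values (standardized, illustrative)
pixels = rng.normal(0.0, 1.0, (1000, 7))

eta = intercept + pixels @ coefs
p_water = 1.0 / (1.0 + np.exp(-eta))   # probability each pixel is water (0..1)
water_map = p_water > 0.5              # binary water / non-water classification
```

Comparing `water_map` against surveyed ground truth at the transect points is what yields an accuracy figure like the 86.5% reported above.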
NASA Astrophysics Data System (ADS)
Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.
2013-12-01
There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which increases uncertainty dependent on sampling frequency and how concentrations are assigned for the periods between samples. Gaps between samples can be estimated by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three different sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach, (2) a regression-model method, and (3) the composite method, which combines the regression-model method for estimating concentrations and the linear interpolation method for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be derived using the composite method at the highest sampling frequencies.
We also evaluated the importance of sampling frequency and estimation method on flux estimate uncertainty; flux uncertainty was dependent on the variability characteristics of each solute and varied for different reporting periods (e.g. 10-year, study period vs. annually vs. monthly). The usefulness of the two regression model based flux estimation approaches was dependent upon the amount of variance in concentrations the regression models could explain. Our results can guide the development of optimal sampling strategies by weighing sampling frequency with improvements in uncertainty in stream flux estimates for solutes with particular characteristics of variability. The appropriate flux estimation method is dependent on a combination of sampling frequency and the strength of concentration regression models. Sites: Biscuit Brook (Frost Valley, NY), Hubbard Brook Experimental Forest and LTER (West Thornton, NH), Luquillo Experimental Forest and LTER (Luquillo, Puerto Rico), Panola Mountain (Stockbridge, GA), Sleepers River Research Watershed (Danville, VT)
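The composite method described above (regression estimate plus interpolated residual correction) can be shown in a short numpy sketch. The discharge-concentration relation, seasonal term, and weekly sampling below are toy assumptions; the key property illustrated is that the composite estimate honors the observed concentrations exactly at the sample days.

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(365.0)
q = 10.0 + 5.0 * np.sin(2 * np.pi * days / 365)          # toy daily discharge
# "True" concentration: discharge-driven plus a seasonal term the
# discharge regression cannot fully capture, plus small noise
conc = (2.0 + 0.1 * q + 0.5 * np.sin(2 * np.pi * days / 365 + 2)
        + rng.normal(0.0, 0.05, 365))

sample_days = np.arange(0, 365, 7)                       # weekly grab samples
obs = conc[sample_days]

# (1) Regression-model method: concentration as a linear function of discharge
A = np.column_stack([np.ones(sample_days.size), q[sample_days]])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
conc_reg = coef[0] + coef[1] * q                         # model estimate, every day

# (2) Composite method: linearly interpolate the model residuals at sample
#     days and add them back, correcting the model to the observations
resid = obs - conc_reg[sample_days]
conc_comp = conc_reg + np.interp(days, sample_days, resid)

flux_reg = float(np.sum(conc_reg * q))                   # daily-summed fluxes
flux_comp = float(np.sum(conc_comp * q))
```

Between samples the composite estimate relaxes back toward the regression model, which is why its usefulness depends on both sampling frequency and the strength of the concentration regression, as the abstract notes.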
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244
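The regression step of an IDA-style procedure can be illustrated in miniature: enumerate candidate adjustment sets (which the real algorithm derives from the learned graph), run one linear regression per set, and collect the resulting multiset of possible causal effects. The data-generating model below is a deliberately simple confounded system, not an example from the paper, and the graph-search stage is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
z = rng.normal(0.0, 1.0, n)                      # confounder of x and y
x = 0.8 * z + rng.normal(0.0, 1.0, n)            # treatment
y = 1.5 * x + 1.0 * z + rng.normal(0.0, 1.0, n)  # outcome; true effect of x is 1.5
w = rng.normal(0.0, 1.0, n)                      # covariate irrelevant to the effect

def adjusted_effect(adjust_set):
    # OLS coefficient of x with the given covariates included in the regression
    X = np.column_stack([np.ones(n), x] + list(adjust_set))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[1])

# One regression per candidate adjustment set, mimicking IDA-style enumeration
effects = [adjusted_effect(s) for s in ([], [z], [w], [z, w])]
```

The set `effects` contains both the confounded estimate (no adjustment) and estimates near the true 1.5 (adjusting for `z`), which is exactly the kind of bounding multiset IDA-type methods report.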
Jacobs, J V; Horak, F B; Tran, V K; Nutt, J G
2006-01-01
Objectives Clinicians often base the implementation of therapies on the presence of postural instability in subjects with Parkinson's disease (PD). These decisions are frequently based on the pull test from the Unified Parkinson's Disease Rating Scale (UPDRS). We sought to determine whether combining the pull test, the one‐leg stance test, the functional reach test, and UPDRS items 27–29 (arise from chair, posture, and gait) predicts balance confidence and falling better than any test alone. Methods The study included 67 subjects with PD. Subjects performed the one‐leg stance test, the functional reach test, and the UPDRS motor exam. Subjects also responded to the Activities‐specific Balance Confidence (ABC) scale and reported how many times they fell during the previous year. Regression models determined the combination of tests that optimally predicted mean ABC scores or categorised fall frequency. Results When all tests were included in a stepwise linear regression, only gait (UPDRS item 29), the pull test (UPDRS item 30), and the one‐leg stance test, in combination, represented significant predictor variables for mean ABC scores (r2 = 0.51). A multinomial logistic regression model including the one‐leg stance test and gait represented the model with the fewest significant predictor variables that correctly identified the most subjects as fallers or non‐fallers (85% of subjects were correctly identified). Conclusions Multiple balance tests (including the one‐leg stance test, and the gait and pull test items of the UPDRS) that assess different types of postural stress provide an optimal assessment of postural stability in subjects with PD. PMID:16484639
Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.
2017-01-01
Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.
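Two of the background-generation methods compared above, uniform random draws and a kernel-density-style scheme concentrated near presences, can be sketched directly. The unit-square study area, cluster location, and bandwidth below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Clustered presence records inside a unit-square study area
presence = rng.normal([0.6, 0.4], 0.05, (50, 2))

# Method 1: random background -- uniform draws over the whole study area
bg_random = rng.uniform(0.0, 1.0, (500, 2))

# Method 2: KDE-style background -- draws concentrated near presence
# locations, here by jittering randomly chosen presences with a
# Gaussian "bandwidth" (a simple sample-based KDE draw)
idx = rng.integers(0, presence.shape[0], 500)
bg_kde = presence[idx] + rng.normal(0.0, 0.1, (500, 2))
```

The two background clouds imply very different contrasts for the model to learn, which is why the abstract finds that background choice can matter as much as algorithm choice.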
Wang, Ying; Goh, Joshua O; Resnick, Susan M; Davatzikos, Christos
2013-01-01
In this study, we used high-dimensional pattern regression methods based on structural (gray and white matter; GM and WM) and functional (positron emission tomography of regional cerebral blood flow; PET) brain data to identify cross-sectional imaging biomarkers of cognitive performance in cognitively normal older adults from the Baltimore Longitudinal Study of Aging (BLSA). We focused on specific components of executive and memory domains known to decline with aging, including manipulation, semantic retrieval, long-term memory (LTM), and short-term memory (STM). For each imaging modality, brain regions associated with each cognitive domain were generated by adaptive regional clustering. A relevance vector machine was adopted to model the nonlinear continuous relationship between brain regions and cognitive performance, with cross-validation to select the most informative brain regions (using recursive feature elimination) as imaging biomarkers and optimize model parameters. Predicted cognitive scores using our regression algorithm based on the resulting brain regions correlated well with actual performance. Also, regression models obtained using combined GM, WM, and PET imaging modalities outperformed models based on single modalities. Imaging biomarkers related to memory performance included the orbito-frontal and medial temporal cortical regions with LTM showing stronger correlation with the temporal lobe than STM. Brain regions predicting executive performance included orbito-frontal, and occipito-temporal areas. The PET modality had higher contribution to most cognitive domains except manipulation, which had higher WM contribution from the superior longitudinal fasciculus and the genu of the corpus callosum. These findings based on machine-learning methods demonstrate the importance of combining structural and functional imaging data in understanding complex cognitive mechanisms and also their potential usage as biomarkers that predict cognitive status.
NASA Astrophysics Data System (ADS)
Delbari, Masoomeh; Sharifazari, Salman; Mohammadi, Ehsan
2018-02-01
The knowledge of soil temperature at different depths is important for the agricultural industry and for understanding climate change. The aim of this study is to evaluate the performance of a support vector regression (SVR)-based model in estimating daily soil temperature at 10, 30 and 100 cm depth under different climate conditions over Iran. The obtained results were compared to those obtained from a more classical multiple linear regression (MLR) model. The correlation sensitivity for the input combinations and the periodicity effect were also investigated. Climatic data used as inputs to the models were minimum and maximum air temperature, solar radiation, relative humidity, dew point, and atmospheric pressure (reduced to sea level), collected from five synoptic stations (Kerman, Ahvaz, Tabriz, Saghez, and Rasht) located respectively in hyper-arid, arid, semi-arid, Mediterranean, and hyper-humid climate conditions. According to the results, the performance of both the MLR and SVR models was quite good at the surface layer, i.e., 10-cm depth. However, SVR performed better than MLR in estimating soil temperature at deeper layers, especially 100 cm depth. Moreover, both models performed better in humid climate conditions than in arid and hyper-arid areas. Further, adding a periodicity component into the modeling process considerably improved the models' performance, especially in the case of SVR.
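The "periodicity component" finding can be demonstrated with a linear model alone: add sine/cosine features of the day of year and compare fit quality. The synthetic air/soil temperature series below (soil damped and lagged relative to air) is an assumption made for the sketch, not the study's data, and a linear fit stands in for the SVR.

```python
import numpy as np

rng = np.random.default_rng(5)
doy = np.arange(730) % 365 + 1                   # day of year, two years of data
t_air = 15 + 12 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 2.0, 730)
# Toy soil temperature: damped and phase-lagged relative to air temperature
t_soil = 14 + 8 * np.sin(2 * np.pi * (doy - 130) / 365) + rng.normal(0, 0.5, 730)

def fit_rmse(X, y):
    # In-sample RMSE of an OLS fit with intercept
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return float(np.sqrt(np.mean((y - Xd @ beta) ** 2)))

rmse_plain = fit_rmse(t_air[:, None], t_soil)    # air temperature alone
X_periodic = np.column_stack([t_air,
                              np.sin(2 * np.pi * doy / 365),
                              np.cos(2 * np.pi * doy / 365)])
rmse_periodic = fit_rmse(X_periodic, t_soil)     # plus a periodicity component
```

The sine/cosine pair lets the model absorb the phase lag between air and soil temperature, so the periodic model's RMSE drops sharply, consistent with the improvement the abstract reports.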
A Semiparametric Change-Point Regression Model for Longitudinal Observations.
Xing, Haipeng; Ying, Zhiliang
2012-12-01
Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model in which the error process (stochastic component) is nonparametric, the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of the change-points are unknown and need to be estimated. We further develop an estimation procedure that combines recent advances in semiparametric analysis based on counting-process arguments with multiple change-point inference, and discuss its large-sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.
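The core idea, a regression coefficient that jumps at an unknown time, can be illustrated with the simplest possible estimator: scan candidate change-points and pick the split minimizing the combined residual sum of squares. This is a least-squares toy for a single change-point, far simpler than the paper's semiparametric procedure.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(200)
x = rng.normal(0.0, 1.0, 200)
# The covariate effect jumps from 1.0 to 3.0 at time 120 (unknown to the estimator)
beta_true = np.where(t < 120, 1.0, 3.0)
y = beta_true * x + rng.normal(0.0, 0.5, 200)

def rss(xs, ys):
    # Residual sum of squares of a no-intercept least-squares fit
    b = float(xs @ ys) / float(xs @ xs)
    r = ys - b * xs
    return float(r @ r)

# Scan interior candidate change-points; keep the split with minimal total RSS
best = min(range(20, 180), key=lambda k: rss(x[:k], y[:k]) + rss(x[k:], y[k:]))
```

With a pronounced jump the scan lands close to the true change-point; the paper's contribution is doing this with an unknown *number* of change-points and much weaker assumptions on the error process.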
Crane, Paul K; Gibbons, Laura E; Jolley, Lance; van Belle, Gerald
2006-11-01
We present an ordinal logistic regression model for identification of items with differential item functioning (DIF) and apply this model to a Mini-Mental State Examination (MMSE) dataset. We employ item response theory ability estimation in our models. Three nested ordinal logistic regression models are applied to each item. Model testing begins with examination of the statistical significance of the interaction term between ability and the group indicator, consistent with nonuniform DIF. Then we turn our attention to the coefficient of the ability term in models with and without the group term. If including the group term has a marked effect on that coefficient, we declare that it has uniform DIF. We examined DIF related to language of test administration in addition to self-reported race, Hispanic ethnicity, age, years of education, and sex. We used PARSCALE for IRT analyses and STATA for ordinal logistic regression approaches. We used an iterative technique for adjusting IRT ability estimates on the basis of DIF findings. Five items were found to have DIF related to language. These same items also had DIF related to other covariates. The ordinal logistic regression approach to DIF detection, when combined with IRT ability estimates, provides a reasonable alternative for DIF detection. There appear to be several items with significant DIF related to language of test administration in the MMSE. More attention needs to be paid to the specific criteria used to determine whether an item has DIF, not just the technique used to identify DIF.
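The uniform-DIF check described above, comparing the ability coefficient in models with and without the group term, can be sketched with a binary (rather than ordinal) logistic regression fitted by Newton-Raphson. The simulated item, group effect, and ability-group correlation are all assumptions for illustration; the relative-change criterion is one common operationalization, not necessarily the authors' exact cutoff.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    # Newton-Raphson maximum likelihood for logistic regression with intercept
    Xd = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ b))
        W = p * (1.0 - p)
        H = (Xd.T * W) @ Xd + 1e-8 * np.eye(Xd.shape[1])  # ridge for stability
        b = b + np.linalg.solve(H, Xd.T @ (y - p))
    return b

rng = np.random.default_rng(8)
n = 5000
group = rng.integers(0, 2, n).astype(float)       # e.g., language of administration
theta = 1.0 * group + rng.normal(0.0, 1.0, n)     # ability, correlated with group
eta = theta + 1.0 * group                         # uniform DIF: constant group shift
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

b_no_group = fit_logistic(theta[:, None], y)                    # ability only
b_group = fit_logistic(np.column_stack([theta, group]), y)      # ability + group
rel_change = abs(b_no_group[1] - b_group[1]) / abs(b_group[1])  # DIF criterion
```

A large group coefficient plus a marked shift in the ability coefficient when the group term is dropped is the signature of uniform DIF; nonuniform DIF would additionally require an ability-by-group interaction term.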
Peng, Ying; Li, Su-Ning; Pei, Xuexue; Hao, Kun
2018-03-01
A multivariate regression statistical strategy was developed to clarify the multi-component content-effect correlation of panax ginseng saponins extract and to predict the pharmacological effect from component content. In example 1, firstly, we compared pharmacological effects between panax ginseng saponins extract and individual saponin combinations. Secondly, we examined the anti-platelet aggregation effect in seven different saponin combinations of ginsenoside Rb1, Rg1, Rh, Rd, Ra3 and notoginsenoside R1. Finally, the correlation between anti-platelet aggregation and the content of multiple components was analyzed by a partial least squares algorithm. In example 2, firstly, 18 common peaks were identified in ten different batches of panax ginseng saponins extracts from different origins. Then, we investigated the anti-myocardial ischemia reperfusion injury effects of the ten different panax ginseng saponins extracts. Finally, the correlation between the fingerprints and the cardioprotective effects was analyzed by a partial least squares algorithm. In both examples, the relationship between component content and pharmacological effect was modeled well by the partial least squares regression equations. Importantly, the predicted effect curve was close to the observed data points marked on the partial least squares regression model. This study provides evidence that multi-component content is promising information for predicting the pharmacological effects of traditional Chinese medicine.
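A minimal PLS1 (NIPALS-style) implementation shows how a few latent components can relate many correlated peak contents to a single effect measure. The 30-sample, 18-peak data below are synthetic stand-ins (the study had ten batches and 18 fingerprint peaks), and the three "active" peaks are arbitrary.

```python
import numpy as np

def pls1_fit(X, y, n_comp=3):
    # Minimal PLS1 (NIPALS); returns coefficient vector plus centering terms
    xm, ym = X.mean(axis=0), y.mean()
    Xr, yr = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr                       # covariance-maximizing weight
        w = w / np.linalg.norm(w)
        t = Xr @ w                          # score vector
        tt = float(t @ t)
        p = Xr.T @ t / tt                   # X loading
        q = float(yr @ t) / tt              # y loading
        Xr = Xr - np.outer(t, p)            # deflate X and y
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(Q))   # overall coefficients
    return B, xm, ym

def pls1_predict(B, xm, ym, X):
    return ym + (X - xm) @ B

rng = np.random.default_rng(9)
n, p = 30, 18                               # e.g., 18 chromatographic peaks
X = rng.normal(0.0, 1.0, (n, p))            # peak contents (synthetic)
y = 2.0 * X[:, 0] - 1.0 * X[:, 4] + 0.5 * X[:, 9] + rng.normal(0.0, 0.2, n)

B, xm, ym = pls1_fit(X, y, n_comp=3)
pred = pls1_predict(B, xm, ym, X)
r2 = 1.0 - float(np.sum((y - pred) ** 2)) / float(np.sum((y - y.mean()) ** 2))
```

PLS is the natural choice here because the peak contents are many and collinear relative to the number of batches, a setting where ordinary least squares is ill-posed.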
Development and validation of prognostic models in metastatic breast cancer: a GOCS study.
Rabinovich, M; Vallejo, C; Bianco, A; Perez, J; Machiavelli, M; Leone, B; Romero, A; Rodriguez, R; Cuevas, M; Dansky, C
1992-01-01
The significance of several prognostic factors and the magnitude of their influence on response rate and survival were assessed by means of uni- and multivariate analyses in 362 patients with stage IV (UICC) breast carcinoma receiving combination chemotherapy as first systemic treatment over an 8-year period. Univariate analyses identified performance status and prior adjuvant radiotherapy as predictors of objective regression (OR), whereas performance status, prior adjuvant chemotherapy and radiotherapy, white blood cell count, SGOT and SGPT levels, and metastatic pattern were significantly correlated with survival. In multivariate analyses, favorable characteristics associated with OR were prior adjuvant radiotherapy, no prior chemotherapy and postmenopausal status. Regarding survival, performance status and visceral involvement were selected by the Cox model. The predictive accuracy of the logistic and proportional hazards models was tested retrospectively in the training sample and prospectively in a new population of 126 patients also receiving combination chemotherapy as first treatment for metastatic breast cancer. Some overfitting to the training sample was observed with the regression model for response. However, the discriminative ability of the Cox model for survival was clearly confirmed.
Wang, Yubo; Veluvolu, Kalyana C
2017-06-14
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the use of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyzing the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identify the sparsity imposed on the signal model in order to reformulate it as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
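The core idea, a band-limited Fourier dictionary with sparse coefficients estimated by convex optimization, can be sketched with an L1-penalized (Lasso) regression. This is a minimal stand-in for Sparse-BMFLC: the test signal, band limits, and penalty weight below are all assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

fs = 200.0
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)

# Band-limited Fourier dictionary over 1..30 Hz (sine and cosine columns)
freqs = np.arange(1, 31)
D = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
               np.cos(2 * np.pi * np.outer(t, freqs))])

# Sparse coefficient estimation via L1-penalized (convex) regression
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(D, signal)
recon = D @ lasso.coef_
rel_err = np.linalg.norm(signal - recon) / np.linalg.norm(signal)
print(f"large coefficients: {np.sum(np.abs(lasso.coef_) > 0.25)}, "
      f"relative error: {rel_err:.4f}")
```

The L1 penalty drives the coefficients of absent frequency components to zero, so only the two active tones survive with appreciable weight while the reconstruction error stays small.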
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J.; Moon, T.J.; Howell, J.R.
This paper presents an analysis of the heat transfer occurring during an in-situ curing process for which infrared energy is provided on the surface of the polymer composite during winding. The material system is Hercules prepreg AS4/3501-6. Thermoset composites undergo an exothermic chemical reaction during the curing process. An Eulerian thermochemical model is developed for the heat transfer analysis of helical winding. The model incorporates heat generation due to the chemical reaction. Several assumptions are made, leading to a two-dimensional thermochemical model. For simplicity, 360° heating around the mandrel is considered. In order to generate the appropriate process windows, the developed heat transfer model is combined with a simple winding time model. The process windows allow for a proper selection of process variables, such as infrared energy input and winding velocity, to give a desired end-product state. Steady-state temperatures are found for each combination of the process variables. A regression analysis is carried out to relate the process variables to the resulting steady-state temperatures. Using the regression equations, process windows for a wide range of cylinder diameters are found. A general procedure to find process windows for Hercules AS4/3501-6 prepreg tape is coded in a FORTRAN program.
NASA Astrophysics Data System (ADS)
Yang, Fan; Xue, Lianqing; Zhang, Luochen; Chen, Xinfang; Chi, Yixia
2017-12-01
This article aims to explore adaptive utilization strategies for the flow regime versus traditional practices in the context of climate change and human activities in an arid area. The study presents a quantitative analysis of climatic and anthropogenic contributions to streamflow alteration in the Tarim River Basin (TRB) using the Budyko method, and develops adaptive utilization strategies for the eco-hydrological regime by comparing the applicability of the autoregressive moving average (ARMA) model and a combined regression model. Our results suggest that human activities played a dominant role in streamflow reduction in the mainstream, with contributions of 120.7%-190.1%, while in the headstreams climatic variables were the primary determinant, accounting for 56.5%-152.6% of the streamflow increase. The comparison revealed that the combined regression model performed better than the ARMA model, with a qualified rate of 80.49%-90.24%. Based on the streamflow forecasts for different purposes, an adaptive utilization scheme for water flow is established from the perspectives of time and space. Our study presents an effective water resources scheduling scheme for the ecological environment and provides references for ecological protection and water allocation in arid areas.
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization. 
The applicability and accuracy of the regional regression equations depend on the basin characteristics measured for an ungaged location on a stream being within range of those used to develop the equations.
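The multicollinearity screen used above (variance inflation factor < 2.5) is easy to reproduce: each candidate basin characteristic is regressed on the others and VIF = 1/(1 - R²). The basin characteristics below are fabricated stand-ins for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def vif(X):
    """Variance inflation factor for each column: 1 / (1 - R^2) from
    regressing that column on all the other columns."""
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(5)
area = rng.lognormal(3, 1, 500)               # hypothetical basin characteristics
slope = rng.normal(10, 2, 500)
length = 0.8 * area + rng.normal(0, 5, 500)   # strongly related to area

X = np.column_stack([area, slope, length])
print(vif(X))  # screening rule from the study: keep only predictors with VIF < 2.5
```

Here `area` and `length` are nearly collinear, so both show inflated VIFs and one would be dropped before fitting the regional regression, while `slope` passes the VIF < 2.5 screen.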
Binary logistic regression-Instrument for assessing museum indoor air impact on exhibits.
Bucur, Elena; Danet, Andrei Florin; Lehr, Carol Blaziu; Lehr, Elena; Nita-Lazar, Mihai
2017-04-01
This paper presents a new way to assess the environmental impact on historical artifacts using binary logistic regression. The impact on the exhibits under certain pollution scenarios (environmental impact) was predicted by a mathematical model based on binary logistic regression; the model identifies those environmental parameters, from a multitude of possible parameters, with a significant impact on exhibits and ranks them according to the severity of their effect. Air quality (NO2, SO2, O3 and PM2.5) and microclimate (temperature, humidity) monitoring data from a case study conducted within the exhibition and storage spaces of the Romanian National Aviation Museum, Bucharest, were used to develop and validate the binary logistic regression method and the mathematical model. The logistic regression analysis was applied to 794 data combinations (715 to develop the model and 79 to validate it) using the Statistical Package for the Social Sciences (SPSS 20.0). The results demonstrated that, of the six parameters considered, four have a significant effect upon exhibits, in the following order: O3 > PM2.5 > NO2 > humidity, followed at a considerable distance by the effects of SO2 and temperature. The mathematical model developed in this study correctly predicted 95.1% of the cumulative effect of the environmental parameters upon the exhibits. Moreover, this model could also be used in the decision process regarding the preventive preservation measures that should be implemented within the exhibition space.
The mathematical model developed from the environmental parameters analyzed by the binary logistic regression method could be useful in decision-making, establishing the best measures for pollution reduction and preventive preservation of exhibits.
Ye, Jiang-Feng; Zhao, Yu-Xin; Ju, Jian; Wang, Wei
2017-10-01
To discuss the value of the Bedside Index for Severity in Acute Pancreatitis (BISAP), the Modified Early Warning Score (MEWS), serum calcium (Ca2+) and red cell distribution width (RDW) for predicting the severity grade of acute pancreatitis (AP), and to develop and verify a more accurate scoring system to predict the severity of AP. In 302 patients with AP, we calculated BISAP and MEWS scores and conducted single-factor logistic regression analyses of the relationships of BISAP, RDW, MEWS, and serum Ca2+ with the severity of AP. The variables with statistical significance in the single-factor logistic regression were entered into a multi-factor logistic regression model; forward stepwise regression was used to screen variables and build a multi-factor prediction model. A receiver operating characteristic (ROC) curve was constructed, and the performance of the multi- and single-factor prediction models in predicting the severity of AP was evaluated using the area under the ROC curve (AUC). The internal validity of the model was verified through bootstrapping. Among the 302 patients with AP, 209 had mild acute pancreatitis (MAP) and 93 had severe acute pancreatitis (SAP). Single-factor logistic regression analysis showed that BISAP, MEWS and serum Ca2+ are predictors of the severity of AP (P<0.001), whereas RDW is not (P>0.05). Multi-factor logistic regression analysis showed that BISAP and serum Ca2+ are independent predictors of AP severity (P<0.001), while MEWS is not (P>0.05); BISAP is negatively correlated with serum Ca2+ (r=-0.330, P<0.001). The constructed model is: ln(p/(1-p)) = 7.306 + 1.151*BISAP - 4.516*serum Ca2+. The predictive ability of each model for SAP follows the order: combined BISAP and serum Ca2+ prediction model > serum Ca2+ > BISAP.
The difference in predictive ability between BISAP and serum Ca2+ is not statistically significant (P>0.05); however, the newly built prediction model is significantly better than BISAP and serum Ca2+ individually (P<0.01). Verification of the internal validity of the models by bootstrapping was favorable. BISAP and serum Ca2+ have high predictive value for the severity of AP; however, the model combining BISAP and serum Ca2+ is markedly superior to either alone. Furthermore, this model is simple, practical and appropriate for clinical use. Copyright © 2016. Published by Elsevier Masson SAS.
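The reported logit, ln(p/(1-p)) = 7.306 + 1.151×BISAP - 4.516×Ca2+, converts directly into a predicted probability of severe AP. A small sketch (assuming serum calcium is expressed in mmol/L, which the abstract does not state):

```python
import math

def sap_probability(bisap: int, ca: float) -> float:
    """Predicted probability of severe AP from the reported logit:
    ln(p/(1-p)) = 7.306 + 1.151*BISAP - 4.516*Ca (Ca assumed in mmol/L)."""
    logit = 7.306 + 1.151 * bisap - 4.516 * ca
    return 1.0 / (1.0 + math.exp(-logit))

# Higher BISAP raises, and higher serum calcium lowers, the predicted risk
print(sap_probability(3, 2.1))
print(sap_probability(0, 2.4))
```

The signs match the abstract's finding that BISAP and serum Ca2+ move in opposite directions (r = -0.330): risk increases with BISAP and decreases with calcium.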
Shao, Yeqin; Gao, Yaozong; Wang, Qian; Yang, Xin; Shen, Dinggang
2015-01-01
Automatic and accurate segmentation of the prostate and rectum in planning CT images is a challenging task due to low image contrast, unpredictable organ (relative) position, and the uncertain existence of bowel gas across different patients. Recently, regression forests were adopted for deformable organ segmentation in 2D medical images by training one landmark detector for each point on the shape model. However, it is impractical for a regression forest to guide 3D deformable segmentation as a landmark detector, due to the large number of vertices in the 3D shape model as well as the difficulty of building accurate 3D vertex correspondences for each landmark detector. In this paper, we propose a novel boundary detection method that exploits the power of regression forests for prostate and rectum segmentation. The contributions of this paper are as follows: 1) we introduce the regression forest as a local boundary regressor that votes for the entire boundary of a target organ, which avoids training a large number of landmark detectors and building accurate 3D vertex correspondences; 2) an auto-context model is integrated with the regression forest to improve the accuracy of the boundary regression; 3) we further combine a deformable segmentation method with the proposed local boundary regressor for the final organ segmentation by integrating organ shape priors. Our method is evaluated on a planning CT image dataset with 70 images from 70 different patients. The experimental results show that our proposed boundary regression method outperforms the conventional boundary classification method in guiding the deformable model for prostate and rectum segmentation. Compared with other state-of-the-art methods, our method also shows competitive performance. PMID:26439938
Pattullo, Venessa; Thein, Hla-Hla; Heathcote, Elizabeth Jenny; Guindi, Maha
2012-09-01
A fall in hepatic fibrosis stage may be observed in patients with chronic hepatitis C (CHC); however, parenchymal architectural changes may also signify hepatic remodelling associated with fibrosis regression. The aim of this study was to utilize semiquantitative and qualitative methods to report the prevalence and factors associated with fibrosis regression in CHC. Paired liver biopsies were scored for fibrosis (Ishak), and for the presence of eight qualitative features of parenchymal remodelling, to derive a qualitative regression score (QR score). Combined fibrosis regression was defined as ≥2-stage fall in Ishak stage (Reg-I) or <2-stage fall in Ishak stage with a rise in QR score (Reg-Qual). Among 159 patients (biopsy interval 5.4 ± 3.1 years), Reg-I was observed in 12 (7.5%) and Reg-Qual in 26 (16.4%) patients. The combined diagnostic criteria increased the diagnosis rate for fibrosis regression (38 patients, 23.9%) compared with use of Reg-I alone (P < 0.001). Combined fibrosis regression was observed in nine patients (50%) who achieved sustained virological response (SVR), and in 29 of 141 (21%) patients despite persistent viraemia. SVR was the only clinical factor associated independently with combined fibrosis regression (odds ratio 3.05). The combination of semiquantitative measures and qualitative features aids the identification of fibrosis regression in CHC. © 2012 Blackwell Publishing Ltd.
Guisan, Antoine; Edwards, T.C.; Hastie, T.
2002-01-01
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Astuti, H. N.; Saputro, D. R. S.; Susanti, Y.
2017-06-01
The MGWR model is a combination of the linear regression model and the geographically weighted regression (GWR) model; it can therefore produce estimates that include both global parameters and local parameters that vary with the observation location. The linkage between observation locations is expressed through a specific weighting scheme, here the adaptive bi-square kernel. In this research, we applied the MGWR model with adaptive bi-square weighting to cases of DHF in Surakarta, based on 10 factors (variables) presumed to influence the number of people with DHF. The observation units are 51 urban villages, and the variables are number of inhabitants, number of houses, house index, number of public places, number of healthy homes, number of Posyandu, area width, population density level, family welfare, and elevation. From this research, we obtained 51 MGWR models. The MGWR models fall into 4 groups, with house index as a significant global variable, area width as a local variable, and the remaining variables varying between groups. Global variables are variables that significantly affect all locations, while local variables are variables that significantly affect only specific locations.
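The adaptive bi-square weighting mentioned above has a simple closed form: w = (1 - (d/b)²)² for distances d below the bandwidth b, and 0 beyond it, with b chosen per location (e.g., the distance to the k-th nearest neighbour). A minimal sketch with made-up distances:

```python
import numpy as np

def bisquare_weights(d, bandwidth):
    """Bi-square kernel used to weight observations in GWR/MGWR:
    w = (1 - (d/b)^2)^2 for d < b, else 0. With an *adaptive* bandwidth,
    b is set per location, e.g. the distance to the k-th nearest neighbour."""
    d = np.asarray(d, dtype=float)
    return np.where(d < bandwidth, (1 - (d / bandwidth) ** 2) ** 2, 0.0)

# Adaptive bandwidth: distance to the 3rd nearest of these observation points
dists = np.array([0.0, 1.2, 2.5, 4.0, 7.5])
b = np.sort(dists)[3]            # = 4.0
print(bisquare_weights(dists, b))
```

The weight decays smoothly from 1 at the regression point to exactly 0 at the bandwidth, so each local fit uses only nearby urban villages, which is what makes the "local parameter" estimates location-specific.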
Area-to-point regression kriging for pan-sharpening
NASA Astrophysics Data System (ADS)
Wang, Qunming; Shi, Wenzhong; Atkinson, Peter M.
2016-04-01
Pan-sharpening is a technique to combine the fine spatial resolution panchromatic (PAN) band with the coarse spatial resolution multispectral bands of the same satellite to create a fine spatial resolution multispectral image. In this paper, area-to-point regression kriging (ATPRK) is proposed for pan-sharpening. ATPRK considers the PAN band as the covariate. Moreover, ATPRK is extended with a local approach, called adaptive ATPRK (AATPRK), which fits a regression model using a local, non-stationary scheme such that the regression coefficients change across the image. The two geostatistical approaches, ATPRK and AATPRK, were compared to the 13 state-of-the-art pan-sharpening approaches summarized in Vivone et al. (2015) in experiments on three separate datasets. ATPRK and AATPRK produced more accurate pan-sharpened images than the 13 benchmark algorithms in all three experiments. Unlike the benchmark algorithms, the two geostatistical solutions precisely preserved the spectral properties of the original coarse data. Furthermore, ATPRK can be enhanced by the local scheme in AATPRK, in cases where the residuals from a global regression model are such that their spatial character varies locally.
Shillcutt, Samuel D; LeFevre, Amnesty E; Fischer-Walker, Christa L; Taneja, Sunita; Black, Robert E; Mazumder, Sarmila
2017-01-01
This study evaluates the cost-effectiveness of the DAZT program for scaling up treatment of acute child diarrhea in Gujarat, India, using a net-benefit regression framework. Costs were calculated from the societal and caregivers' perspectives, and effectiveness was assessed in terms of coverage of zinc and of both zinc and oral rehydration salts (ORS). Regression models were tested as simple linear regressions, with a specified set of covariates, and with a specified set of covariates plus interaction terms; linear regression with endogenous treatment effects was used as the reference case. The DAZT program was cost-effective with over 95% certainty above $5.50 and $7.50 per appropriately treated child in the unadjusted and adjusted models respectively, with specifications including interaction terms being cost-effective with 85-97% certainty. Findings from this study should be combined with other evidence when considering decisions to scale up programs such as DAZT to promote the use of ORS and zinc to treat child diarrhea.
Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers
NASA Astrophysics Data System (ADS)
Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.
A modeling approach is presented that optimizes separate phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease a specified amount. This S/R/O modeling approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O model approach offers promise for enhancing the design of free phase LNAPL recovery systems and to help in making cost-effective operation and management decisions for hydrogeologists, engineers, and regulators.
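The S/R/O pattern, replacing a costly simulator with a fitted regression surrogate and optimizing over the surrogate, can be sketched compactly. Everything below is illustrative: the quadratic response of residual oil to pumping rate, the rate bounds, and the noise level are assumptions, not values from the study, and the time-varying (stepwise) aspect is reduced to a single decision variable.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
# Hypothetical simulator output: residual oil volume (m^3) vs water pumping rate (m^3/d)
rates = np.linspace(50, 500, 20)
residual_oil = 120 - 0.35 * rates + 0.0006 * rates**2 + rng.normal(scale=1.0, size=20)

# Regression step: quadratic surrogate fitted to the simulation runs
a, b, c = np.polyfit(rates, residual_oil, 2)

# Optimization step: minimize residual oil over the cheap surrogate
res = minimize_scalar(lambda q: a * q**2 + b * q + c, bounds=(50, 500), method="bounded")
print(f"optimal pumping rate: {res.x:.1f} m^3/d, predicted residual oil: {res.fun:.1f} m^3")
```

The design choice is the same as in the abstract: once the regression equations capture the simulator's response, candidate pumping strategies can be evaluated and optimized without rerunning the detailed simulation.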
Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies
NASA Astrophysics Data System (ADS)
Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu
2015-09-01
Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained by noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database composed of remotely sensed observations collected from the planet Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model.
A statistical method for predicting seizure onset zones from human single-neuron recordings
NASA Astrophysics Data System (ADS)
Valdez, André B.; Hickman, Erin N.; Treiman, David M.; Smith, Kris A.; Steinmetz, Peter N.
2013-02-01
Objective. Clinicians often use depth-electrode recordings to localize human epileptogenic foci. To advance the diagnostic value of these recordings, we applied logistic regression models to single-neuron recordings from depth-electrode microwires to predict seizure onset zones (SOZs). Approach. We collected data from 17 epilepsy patients at the Barrow Neurological Institute and developed logistic regression models to calculate the odds of observing SOZs in the hippocampus, amygdala and ventromedial prefrontal cortex, based on statistics such as the burst interspike interval (ISI). Main results. Analysis of these models showed that, for a single-unit increase in burst ISI ratio, the left hippocampus was approximately 12 times more likely to contain a SOZ; and the right amygdala, 14.5 times more likely. Our models were most accurate for the hippocampus bilaterally (at 85% average sensitivity), and performance was comparable with current diagnostics such as electroencephalography. Significance. Logistic regression models can be combined with single-neuron recording to predict likely SOZs in epilepsy patients being evaluated for resective surgery, providing an automated source of clinically useful information.
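The reported effect sizes are odds ratios from the logistic model: an odds ratio of ~12 per unit increase in burst ISI ratio corresponds to a coefficient beta = ln(12), and the odds multiply by OR^delta for a change of delta units. A minimal arithmetic sketch:

```python
import math

# Odds ratio ~12 per unit increase in burst ISI ratio (left hippocampus)
beta = math.log(12.0)          # logistic regression coefficient

def odds_multiplier(delta: float) -> float:
    """Factor by which the odds of containing a SOZ change
    for a `delta`-unit change in the predictor."""
    return math.exp(beta * delta)

print(odds_multiplier(1.0))    # one-unit increase
print(odds_multiplier(0.5))    # half-unit increase
```

This is why logistic models suit this diagnostic setting: each single-neuron statistic enters as a multiplicative, interpretable change in the odds of a seizure onset zone.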
An Analysis of the Number of Medical Malpractice Claims and Their Amounts
Bonetti, Marco; Cirillo, Pasquale; Musile Tanzi, Paola; Trinchero, Elisabetta
2016-01-01
Starting from an extensive database, pooling 9 years of data from the top three insurance brokers in Italy, and containing 38125 reported claims due to alleged cases of medical malpractice, we use an inhomogeneous Poisson process to model the number of medical malpractice claims in Italy. The intensity of the process is allowed to vary over time, and it depends on a set of covariates, like the size of the hospital, the medical department and the complexity of the medical operations performed. We choose the combination medical department by hospital as the unit of analysis. Together with the number of claims, we also model the associated amounts paid by insurance companies, using a two-stage regression model. In particular, we use logistic regression for the probability that a claim is closed with a zero payment, whereas, conditionally on the fact that an amount is strictly positive, we make use of lognormal regression to model it as a function of several covariates. The model produces estimates and forecasts that are relevant to both insurance companies and hospitals, for quality assurance, service improvement and cost reduction. PMID:27077661
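The two-stage regression for claim amounts, a logistic model for the probability of a zero payment and a lognormal regression for strictly positive amounts, can be sketched as follows. The data and coefficients are fabricated for illustration; the real model conditions on hospital and department covariates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 3))                     # claim covariates (hypothetical)
p_zero = 1 / (1 + np.exp(-(-0.5 + X[:, 0])))    # probability of zero payment
zero = rng.random(n) < p_zero
log_amt = 10 + 0.4 * X[:, 1] + rng.normal(scale=0.5, size=n)
amount = np.where(zero, 0.0, np.exp(log_amt))

# Stage 1: logistic regression for P(payment == 0)
stage1 = LogisticRegression().fit(X, zero.astype(int))
# Stage 2: lognormal regression on the strictly positive amounts
pos = amount > 0
stage2 = LinearRegression().fit(X[pos], np.log(amount[pos]))

# Expected payment for a new claim: (1 - P(zero)) * E[amount | amount > 0]
x_new = np.zeros((1, 3))
p0 = stage1.predict_proba(x_new)[0, 1]
sigma2 = np.var(np.log(amount[pos]) - stage2.predict(X[pos]))
expected = (1 - p0) * np.exp(stage2.predict(x_new)[0] + sigma2 / 2)
print(f"P(zero payment): {p0:.3f}, expected payment: {expected:,.0f}")
```

The lognormal mean correction exp(mu + sigma²/2) is what lets the two stages combine into a single expected-cost forecast per unit of analysis.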
Evaluating penalized logistic regression models to predict Heat-Related Electric grid stress days
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bramer, Lisa M.; Rounds, J.; Burleyson, C. D.
Understanding the conditions associated with stress on the electricity grid is important in the development of contingency plans for maintaining reliability during periods when the grid is stressed. In this paper, heat-related grid stress and the relationship with weather conditions were examined using data from the eastern United States. Penalized logistic regression models were developed and applied to predict stress on the electric grid using weather data. The inclusion of other weather variables, such as precipitation, in addition to temperature improved model performance. Several candidate models and combinations of predictive variables were examined. A penalized logistic regression model which was fit at the operation-zone level was found to provide predictive value and interpretability. Additionally, the importance of different weather variables observed at various time scales were examined. Maximum temperature and precipitation were identified as important across all zones while the importance of other weather variables was zone specific. In conclusion, the methods presented in this work are extensible to other regions and can be used to aid in planning and development of the electrical grid.
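A penalized logistic regression of this kind can be sketched with scikit-learn's L1-penalized solver, which zeroes out uninformative predictors and thereby keeps the model interpretable. The weather features and coefficients below are fabricated; the paper's feature set and penalty choice are not specified here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
# Hypothetical standardized daily weather features:
# [tmax, tmin, precip, humidity, wind, ...noise columns]
X = rng.normal(size=(n, 8))
logit = 1.5 * X[:, 0] + 0.8 * X[:, 2] - 1.0      # only tmax and precip matter
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# L1 penalty shrinks uninformative coefficients to exactly zero
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
coef = model.coef_[0]
print("nonzero coefficients at indices:", np.flatnonzero(coef))
```

With a strong enough penalty, the surviving nonzero coefficients identify the important weather variables, mirroring the paper's finding that maximum temperature and precipitation mattered in every zone while other variables were zone specific.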
Molecular markers of neuropsychological functioning and Alzheimer's disease.
Edwards, Melissa; Balldin, Valerie Hobson; Hall, James; O'Bryant, Sid
2015-03-01
The current project sought to examine molecular markers of neuropsychological functioning among elders with and without Alzheimer's disease (AD) and to determine the ability of combined molecular markers and select neuropsychological tests to detect disease presence. Data were analyzed from 300 participants (n = 150 AD and n = 150 controls) enrolled in the Texas Alzheimer's Research and Care Consortium. Linear regression models were created to examine the link between the top five molecular markers from our AD blood profile and neuropsychological test scores. Logistic regressions were used to predict AD presence using serum biomarkers in combination with select neuropsychological measures. Using the neuropsychological test with the least variance overlap with the molecular markers, the combination of neuropsychological test and molecular markers was highly accurate in detecting AD presence. This work provides the foundation for the generation of a point-of-care device that can be used to screen for AD.
NASA Astrophysics Data System (ADS)
Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.
2017-06-01
The recent popularity of synthetic data as a statistical disclosure control technique has enabled the development of several methods for generating and analyzing such data, but these almost always rely on asymptotic distributions and are consequently not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model for multiply imputed synthetic data generated via posterior predictive sampling. Since it is based on exact distributions, this procedure may be used even with small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.
Akkus, Zeki; Camdeviren, Handan; Celik, Fatma; Gur, Ali; Nas, Kemal
2005-09-01
To determine the risk factors of osteoporosis using a multiple binary logistic regression method and to assess the risk variables for osteoporosis, which is a major and growing health problem in many countries. We present a case-control study consisting of 126 postmenopausal healthy women as the control group and 225 postmenopausal osteoporotic women as the case group. The study was carried out in the Department of Physical Medicine and Rehabilitation, Dicle University, Diyarbakir, Turkey between 1999 and 2002. The data from the 351 participants were collected using a standard questionnaire containing 43 variables. A multiple logistic regression model was then used to evaluate the data and to find the best regression model. We classified 80.1% (281/351) of the participants using the regression model. Furthermore, the specificity of the model was 67% (84/126) in the control group, while the sensitivity was 88% (197/225) in the case group. Using the Kolmogorov-Smirnov test, we found the distribution of the standardized residuals for the final model to be exponential (p=0.193). The receiver operating characteristic curve successfully predicted patients at risk for osteoporosis. This study suggests that low dietary calcium intake, low physical activity, low education, and longer duration of menopause are independent predictors of the risk of low bone density in our population. Adequate dietary calcium intake combined with maintaining daily physical activity, increasing educational level, decreasing birth rate, and shortening the duration of breast-feeding may contribute to healthy bones and play a role in the practical prevention of osteoporosis in Southeast Anatolia. In addition, the findings of the present study indicate that the use of a multivariate statistical method such as multiple logistic regression for osteoporosis, which may be influenced by many variables, is better than univariate statistical evaluation.
Combined mTOR and MEK inhibition is an effective therapy in a novel mouse model for angiosarcoma
Chadwick, Michelle L.; Lane, Adam; Thomas, Dana; Smith, Amanda R.; White, Angela R.; Davidson, Dominique; Feng, Yuxin; Boscolo, Elisa; Zheng, Yi; Adams, Denise M.; Gupta, Anita; Veillette, André; Chow, Lionel M.L.
2018-01-01
Angiosarcoma is an aggressive malignancy of vascular origin that occurs de novo or in the context of previous cancer therapy. Despite multi-modal aggressive treatment including surgical resection, chemotherapy, and radiation, five-year overall survival remains poor at 35%. Due to its rarity, little is known about its molecular pathology and clinical trials have been extremely difficult to conduct. Development of animal models for rare diseases like angiosarcoma is critical to improve our understanding of tumorigenesis and to test novel treatment regimens. A genetically engineered mouse model for angiosarcoma was generated by conditional deletion of Trp53, Pten, and Ptpn12 in endothelial cells. Tumors arising from these mice recapitulate the histology and molecular pathology of the human disease including hyperactivation of the PI3K/mTOR and MAPK signaling pathways. Treatment of tumor-bearing mice with mTOR or MEK inhibitors effectively inactivated signaling and resulted in reduced proliferation and elevated apoptosis leading to tumor regression. The effect of treatment on tumor growth was transient and proliferation was restored after a period of dormancy. However, combined inhibition of mTOR and MEK resulted in profound tumor regression which was sustained for the duration of treatment. These results suggest that angiosarcoma may be effectively treated by this drug combination. PMID:29872503
Scalable Regression Tree Learning on Hadoop using OpenPlanet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Wei; Simmhan, Yogesh; Prasanna, Viktor
As scientific and engineering domains attempt to effectively analyze the deluge of data arriving from sensors and instruments, machine learning is becoming a key data mining tool for building prediction models. The regression tree is a popular learning model that combines decision trees with linear regression to forecast numerical target variables from a set of input features. MapReduce is well suited to such data-intensive learning applications, and a proprietary regression tree algorithm using MapReduce, PLANET, has been proposed earlier. In this paper, we describe an open-source implementation of this algorithm, OpenPlanet, on the Hadoop framework using a hybrid approach. Further, we evaluate the performance of OpenPlanet using real-world datasets from the Smart Power Grid domain for energy-use forecasting, and propose tuning strategies for Hadoop parameters that improve the performance of the default configuration by 75% for a training dataset of 17 million tuples on a 64-core Hadoop cluster on FutureGrid.
Remote sensing of PM2.5 from ground-based optical measurements
NASA Astrophysics Data System (ADS)
Li, S.; Joseph, E.; Min, Q.
2014-12-01
Remote sensing of particulate matter concentration with aerodynamic diameter smaller than 2.5 μm (PM2.5) using ground-based optical measurements of aerosols is investigated, based on 6 years of hourly average measurements of aerosol optical properties, PM2.5, ceilometer backscatter coefficients and meteorological factors from the Howard University Beltsville Campus facility (HUBC). The accuracy of quantitative retrieval of PM2.5 from aerosol optical depth (AOD) is limited by changes in aerosol size distribution and vertical distribution. In this study, ceilometer backscatter coefficients are used to provide vertical information on aerosols. It is found that the PM2.5-AOD ratio can vary considerably for different aerosol vertical distributions. The ratio is also sensitive to the mode parameters of a bimodal lognormal aerosol size distribution when the geometric mean radius of the fine mode is small. Two Angstrom exponents calculated from the three wavelengths 415, 500 and 860 nm are found to represent aerosol size distributions better than a single Angstrom exponent. A regression model is proposed to assess the impacts of different factors on the retrieval of PM2.5. Compared to a simple linear regression model, the new model combining AOD and ceilometer backscatter markedly improves the fitting of PM2.5, and the contribution of further introducing Angstrom coefficients is apparent. Using the combined measurements of AOD, ceilometer backscatter, Angstrom coefficients and meteorological parameters in the regression model yields a correlation coefficient of 0.79 between fitted and observed PM2.5.
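The core regression comparison, AOD alone versus AOD combined with a backscatter-derived predictor, can be sketched as follows. The data are synthetic stand-ins; none of the HUBC variables or coefficients are real:

```python
# Sketch: single-predictor vs. combined multiple regression for PM2.5
# retrieval. Simulated data; the true relation here deliberately involves
# both predictors, so the combined model should fit better.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
aod = rng.uniform(0.05, 0.8, n)
bsc = rng.uniform(0.1, 1.0, n)             # stand-in for ceilometer backscatter
pm25 = 40 * aod * bsc + rng.normal(scale=2.0, size=n)

r2_aod = LinearRegression().fit(aod[:, None], pm25).score(aod[:, None], pm25)
X = np.column_stack([aod, bsc, aod * bsc])  # combined model with interaction
r2_comb = LinearRegression().fit(X, pm25).score(X, pm25)
print(round(r2_aod, 2), round(r2_comb, 2))
```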
Yalcin, Semra; Leroux, Shawn James
2018-04-14
Land-cover and climate change are two main drivers of changes in species ranges. Yet, the majority of studies investigating the impacts of global change on biodiversity focus on one global change driver and usually use simulations to project biodiversity responses to future conditions. We conduct an empirical test of the relative and combined effects of land-cover and climate change on species occurrence changes. Specifically, we examine whether observed local colonization and extinctions of North American birds between 1981-1985 and 2001-2005 are correlated with land-cover and climate change and whether bird life history and ecological traits explain interspecific variation in observed occurrence changes. We fit logistic regression models to test the impact of physical land-cover change, changes in net primary productivity, winter precipitation, mean summer temperature, and mean winter temperature on the probability of Ontario breeding bird local colonization and extinction. Models with climate change, land-cover change, and the combination of these two drivers were the top ranked models of local colonization for 30%, 27%, and 29% of species, respectively. Conversely, models with climate change, land-cover change, and the combination of these two drivers were the top ranked models of local extinction for 61%, 7%, and 9% of species, respectively. The quantitative impacts of land-cover and climate change variables also vary among bird species. We then fit linear regression models to test whether the variation in regional colonization and extinction rate could be explained by mean body mass, migratory strategy, and habitat preference of birds. Overall, species traits were weakly correlated with heterogeneity in species occurrence changes. We provide empirical evidence showing that land-cover change, climate change, and the combination of multiple global change drivers can differentially explain observed species local colonization and extinction. 
© 2018 John Wiley & Sons Ltd.
Yelland, Lisa N; Salter, Amy B; Ryan, Philip
2011-10-15
Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
NASA Astrophysics Data System (ADS)
Duc, Hiep Nguyen; Rivett, Kelly; MacSween, Katrina; Le-Anh, Linh
2017-01-01
Rainfall in New South Wales (NSW), located in the southeast of the Australian continent, is known to be influenced by four major climate drivers: the El Niño/Southern Oscillation (ENSO), the Interdecadal Pacific Oscillation (IPO), the Southern Annular Mode (SAM) and the Indian Ocean Dipole (IOD). Many studies have shown the influences of ENSO, IPO modulation, SAM and IOD on rainfall in Australia, and on southeast Australia in particular. However, only limited work has been undertaken within a multiple regression framework to examine the extent of the combined effect of these climate drivers on rainfall. This paper analyses the role of these combined climate drivers and their interactions on rainfall in NSW using Bayesian Model Averaging (BMA). BMA accounts for model uncertainty by considering every linear model in the model space, that is, the set of all possible combinations of predictors, and computing each model's posterior probability and expected predictor coefficients. Using BMA for linear regression models, we are able to corroborate and confirm the results of many previous studies. In addition, the method ranks the importance of each climate driver and interaction, and gives the probability of its association with rainfall at a site. The ability to quantify the relative contributions of the climate drivers is key to understanding the complex interaction of drivers on rainfall, or lack of rainfall, in a region, such as in the three big droughts in southeastern Australia whose causes have recently been the subject of discussion and debate.
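A toy illustration of the BMA machinery described above: enumerate all predictor subsets, score each model, and convert scores to approximate posterior model weights. BIC-based weights stand in for a full Bayesian treatment, and the drivers are simulated, not the NSW rainfall data:

```python
# Sketch: BIC-approximated Bayesian Model Averaging over all subsets of
# four candidate climate drivers. Only "ENSO" and "SAM" truly matter here.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, names = 200, ["ENSO", "IPO", "SAM", "IOD"]
X = rng.normal(size=(n, 4))
y = 1.5 * X[:, 0] + 0.8 * X[:, 2] + rng.normal(size=n)

def bic(y, Xd):
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    return n * np.log(rss / n) + Xd.shape[1] * np.log(n)

models, bics = [], []
for r in range(len(names) + 1):
    for subset in itertools.combinations(range(4), r):
        Xd = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        models.append(subset)
        bics.append(bic(y, Xd))

bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                              # approximate posterior model weights
# Posterior inclusion probability of each driver:
incl = [sum(w[i] for i, m in enumerate(models) if j in m) for j in range(4)]
print({nm: round(p, 2) for nm, p in zip(names, incl)})
```

The inclusion probabilities play the role of the paper's "probability of the association of each of the climate drivers" with rainfall.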
A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets.
Chen, Jie-Hao; Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong
2017-01-01
In recent years, with the rapid development of the mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising; it is used to achieve accurate advertisement delivery for the best benefit in the three-way game between media, advertisers, and audiences. Current research on CTR estimation mainly uses machine learning methods and models, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. To solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the simplicity of traditional Logistic Regression models. Based on a training dataset with information on over 40 million mobile advertisements over a period of 10 days, our experiments show that the new model improves estimation accuracy over the classic Logistic Regression (LR) model by 5.57% and over the Support Vector Regression (SVR) model by 5.80%.
Ren, Y Y; Zhou, L C; Yang, L; Liu, P Y; Zhao, B W; Liu, H X
2016-09-01
The paper highlights the use of the logistic regression (LR) method to construct statistically significant, robust and predictive models for classifying chemicals according to their aquatic toxic modes of action. The essentials of a reliable model were all considered carefully. The model predictors were selected by stepwise forward linear discriminant analysis (LDA) from a combined pool of experimental data and chemical structure-based descriptors calculated by the CODESSA and DRAGON software packages. Model predictive ability was validated both internally and externally. The applicability domain was checked by the leverage approach to verify prediction reliability. The obtained models are simple and easy to interpret. In general, LR performs much better than LDA and seems more attractive for predicting the more toxic compounds, i.e. compounds that exhibit excess toxicity versus non-polar narcotic compounds and more reactive compounds versus less reactive compounds. In addition, model fit and regression diagnostics were assessed through the influence plot, which reflects the hat-values, studentized residuals, and Cook's distance statistics of each sample. Overdispersion was also checked for the LR model. The relationships between the descriptors and the aquatic toxic behaviour of the compounds are also discussed.
NASA Astrophysics Data System (ADS)
Luo, Shezhou; Wang, Cheng; Xi, Xiaohuan; Pan, Feifei; Qian, Mingjie; Peng, Dailiang; Nie, Sheng; Qin, Haiming; Lin, Yi
2017-06-01
Wetland biomass is essential for monitoring the stability and productivity of wetland ecosystems. Conventional field methods to measure or estimate wetland biomass are accurate and reliable, but expensive, time consuming and labor intensive. This research explored the potential for estimating wetland reed biomass using a combination of airborne discrete-return Light Detection and Ranging (LiDAR) and hyperspectral data. To derive the optimal predictor variables of reed biomass, a range of LiDAR and hyperspectral metrics at different spatial scales were regressed against the field-observed biomasses. The results showed that the LiDAR-derived H_p99 (99th percentile of the LiDAR height) and the hyperspectral-calculated modified soil-adjusted vegetation index (MSAVI) were the best metrics for estimating reed biomass using a single regression model. Although the LiDAR data yielded a higher estimation accuracy than the hyperspectral data, the combination of LiDAR and hyperspectral data produced a more accurate prediction model for reed biomass (R² = 0.648, RMSE = 167.546 g/m², RMSEr = 20.71%) than LiDAR data alone. Thus, combining LiDAR data with hyperspectral data has great potential for improving the accuracy of aboveground biomass estimation.
Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and the autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. To overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model produces more accurate forecasts than the individual models. PMID:23766729
Raval, Amit D.; Sambamoorthi, Usha
2015-01-01
Introduction Little is known about how combinations of chronic conditions in adults affect total health care expenditures. Our objective was to estimate the annual average total expenditures and out-of-pocket spending burden among US adults by combinations of conditions. Methods We conducted a cross-sectional study using 2009 and 2011 data from the Medical Expenditure Panel Survey. The sample consisted of 9,296 adults aged 21 years or older with at least 2 of the following 4 highly prevalent chronic conditions: arthritis, diabetes mellitus, heart disease, and hypertension. Unadjusted and adjusted regression techniques were used to examine the association between chronic condition combinations and log-transformed total expenditures. Logistic regressions were used to analyze the relationship between chronic condition combinations and high out-of-pocket spending burden. Results Among adults with chronic conditions, adults with all 4 conditions had the highest average total expenditures ($20,016), whereas adults with diabetes/hypertension had the lowest annual total expenditures ($7,116). In adjusted models, adults with diabetes/hypertension and hypertension/arthritis had lower health care expenditures than adults with diabetes/heart disease (P < .001). In adjusted models, adults with all 4 conditions had higher expenditures compared with those with diabetes and heart disease. However, the difference was only marginally significant (P = .04). Conclusion Among adults with arthritis, diabetes, heart disease, and hypertension, total health care expenditures differed by type of chronic condition combinations. For individuals with multiple chronic conditions, such as heart disease and diabetes, new models of care management are needed to reduce the cost burden on the payers. PMID:25633487
Study on Hyperspectral Characteristics and Estimation Model of Soil Mercury Content
NASA Astrophysics Data System (ADS)
Liu, Jinbao; Dong, Zhenyu; Sun, Zenghui; Ma, Hongchao; Shi, Lei
2017-12-01
In this study, the mercury content of 44 soil samples from the Guanzhong area of Shaanxi Province was used as the data source, and soil reflectance spectra were obtained with an ASD FieldSpec HR spectrometer (350-2500 nm). The reflection characteristics of different mercury contents were compared, along with the effect of different pre-treatment methods on the soil heavy-metal spectral inversion model. First-order differential, second-order differential and logarithmic transformations of reflectance were carried out after NOR, MSC and SNV pre-treatment, and the bands sensitive to mercury content under the different mathematical transformations were selected. A hyperspectral estimation model was then established by regression. Chemical analysis shows serious Hg pollution in the study area. The results show that: (1) reflectance decreases with increasing mercury content, and the sensitive regions for mercury are located at 392-455 nm, 923-1040 nm and 1806-1969 nm; (2) the NOR, MSC and SNV transformations combined with differential transformations can enhance the heavy-metal information in the soil spectra, and combining highly correlated bands can improve the stability and predictive ability of the model; (3) the partial least squares regression model based on the logarithm of the original reflectance performs best and with the highest precision (Rc² = 0.9912, RMSEC = 0.665; Rv² = 0.9506, RMSEP = 1.93), enabling rapid prediction of mercury content in this region.
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite-sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the extended BIC (EBIC). The model selected with the CV-AUC criterion tends to have a larger predictive AUC and a smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
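The tuning idea, choosing the penalty level that maximizes cross-validated AUC, can be sketched with scikit-learn. MCP is not available there, so an L1-penalized logistic regression stands in for the MCP penalty (synthetic data):

```python
# Sketch: select the penalty strength of a sparse logistic regression by
# cross-validated AUC. L1 is used as a stand-in for MCP.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           random_state=0)
grid = [0.01, 0.1, 1.0, 10.0]
aucs = []
for C in grid:
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    aucs.append(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

best_C = grid[int(np.argmax(aucs))]
print(best_C, round(max(aucs), 3))
```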
Peak-flow characteristics of Virginia streams
Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute
2011-01-01
Peak-flow annual exceedance probabilities, also called probability-percent chance flow estimates, and regional regression equations are provided describing the peak-flow characteristics of Virginia streams. Statistical methods are used to evaluate peak-flow data. Analysis of Virginia peak-flow data collected from 1895 through 2007 is summarized. Methods are provided for estimating unregulated peak flow of gaged and ungaged streams. Station peak-flow characteristics identified by fitting the logarithms of annual peak flows to a Log Pearson Type III frequency distribution yield annual exceedance probabilities of 0.5, 0.4292, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, and 0.002 for 476 streamgaging stations. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression model equations for six physiographic regions to estimate regional annual exceedance probabilities at gaged and ungaged sites. Weighted peak-flow values that combine annual exceedance probabilities computed from gaging station data and from regional regression equations provide improved peak-flow estimates. Text, figures, and lists are provided summarizing selected peak-flow sites, delineated physiographic regions, peak-flow estimates, basin characteristics, regional regression model equations, error estimates, definitions, data sources, and candidate regression model equations. This study supersedes previous studies of peak flows in Virginia.
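The station-level fitting step can be sketched with SciPy: log-transform the annual peaks, fit a Pearson Type III distribution, and evaluate flows at standard annual exceedance probabilities. The peaks are synthetic, not Virginia gage records, and the fit uses maximum likelihood rather than the moment/regional-skew procedure of Bulletin-style guidance:

```python
# Sketch: Log Pearson Type III flood-frequency fit for one station.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
peaks = rng.lognormal(mean=8.0, sigma=0.5, size=60)     # annual peaks, cfs
logs = np.log10(peaks)

skew, loc, scale = stats.pearson3.fit(logs)
aeps = np.array([0.5, 0.2, 0.1, 0.04, 0.02, 0.01])      # exceedance probs
flows = 10 ** stats.pearson3.ppf(1 - aeps, skew, loc=loc, scale=scale)
print(np.round(flows, 0))
```

Flows grow as the exceedance probability falls, giving the familiar "1-percent-chance" (100-year) style estimates.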
Mainou, Maria; Madenidou, Anastasia-Vasiliki; Liakos, Aris; Paschos, Paschalis; Karagiannis, Thomas; Bekiari, Eleni; Vlachaki, Efthymia; Wang, Zhen; Murad, Mohammad Hassan; Kumar, Shaji; Tsapas, Apostolos
2017-06-01
We performed a systematic review and meta-regression analysis of randomized control trials to investigate the association between response to initial treatment and survival outcomes in patients with newly diagnosed multiple myeloma (MM). Response outcomes included complete response (CR) and the combined outcome of CR or very good partial response (VGPR), while survival outcomes were overall survival (OS) and progression-free survival (PFS). We used random-effect meta-regression models and conducted sensitivity analyses based on definition of CR and study quality. Seventy-two trials were included in the systematic review, 63 of which contributed data in meta-regression analyses. There was no association between OS and CR in patients without autologous stem cell transplant (ASCT) (regression coefficient: .02, 95% confidence interval [CI] -0.06, 0.10), in patients undergoing ASCT (-.11, 95% CI -0.44, 0.22) and in trials comparing ASCT with non-ASCT patients (.04, 95% CI -0.29, 0.38). Similarly, OS did not correlate with the combined metric of CR or VGPR, and no association was evident between response outcomes and PFS. Sensitivity analyses yielded similar results. This meta-regression analysis suggests that there is no association between conventional response outcomes and survival in patients with newly diagnosed MM. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Garnier, Alain; Gaillet, Bruno
2015-12-01
Few mathematical fermentation models allow analytical solutions of batch process dynamics. The most widely used is the combination of logistic microbial growth kinetics with the Luedeking-Piret bioproduct synthesis relation. However, the logistic equation is principally based on formalistic similarities and fits only a limited range of fermentation types. In this article, we have developed an analytical solution for the combination of Monod growth kinetics with the Luedeking-Piret relation, which can be identified by linear regression and used to simulate batch fermentation evolution. Two classical examples are used to show the quality of fit and the simplicity of the proposed method. A solution for the combination of the Haldane substrate-limited growth model with the Luedeking-Piret relation is also provided. These models could prove useful for the analysis of fermentation data in industry as well as academia. © 2015 Wiley Periodicals, Inc.
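The model pair discussed, Monod growth with Luedeking-Piret product formation, can also be integrated numerically for a batch culture; the parameter values below are illustrative, not from the article's examples:

```python
# Sketch: batch fermentation with Monod kinetics and the Luedeking-Piret
# relation dP/dt = alpha*dX/dt + beta*X. Illustrative parameters.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Yxs = 0.5, 0.2, 0.5      # 1/h, g/L, g biomass per g substrate
alpha, beta = 2.0, 0.05              # growth- and non-growth-associated terms

def batch(t, z):
    X, S, P = z
    S = max(S, 0.0)                  # guard against tiny negative overshoot
    mu = mu_max * S / (Ks + S)       # Monod specific growth rate
    dX = mu * X
    dS = -dX / Yxs
    dP = alpha * dX + beta * X       # Luedeking-Piret relation
    return [dX, dS, dP]

sol = solve_ivp(batch, (0, 24), [0.1, 10.0, 0.0], max_step=0.1)
Xf, Sf, Pf = sol.y[:, -1]
print(round(Xf, 2), round(Sf, 2), round(Pf, 2))
```

Note the mass balance X + Yxs·S is conserved, so final biomass approaches X0 + Yxs·S0 once substrate is exhausted.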
NASA Astrophysics Data System (ADS)
Zhan, Liwei; Li, Chengwei
2017-02-01
A hybrid PSO-SVM-based model is proposed to predict the friction coefficient between aircraft tire and coating. The hybrid model combines a support vector machine (SVM) with the particle swarm optimization (PSO) technique. SVM has been applied successfully to regression problems, and its regression accuracy depends strongly on optimizing parameters such as the regularization constant C, the RBF kernel parameter γ, and the epsilon-insensitive loss parameter ε in the SVM training procedure. However, SVM-based prediction of the friction coefficient between aircraft tire and coating has yet to be explored. Experiments reveal that drop height and tire rotational speed are the factors affecting the friction coefficient. With this in mind, the friction coefficient can be predicted by the hybrid PSO-SVM-based model trained on measured friction coefficients between aircraft tire and coating. To compare regression accuracy, a grid search (GS) method and a genetic algorithm (GA) were used to optimize the relevant parameters (C, γ, and ε), respectively. Regression accuracy is reflected by the coefficient of determination (R²). The results show that the hybrid PSO-RBF-SVM-based model has better accuracy than the GS-RBF-SVM- and GA-RBF-SVM-based models. The agreement of this model (PSO-RBF-SVM) with the experimental data confirms its good performance.
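A compact sketch of the PSO-SVM scheme: a small particle swarm searches (C, γ, ε) for an RBF-kernel SVR, scoring candidates by cross-validated R². The data, swarm size and PSO coefficients are illustrative, not the paper's tire-coating measurements or settings:

```python
# Sketch: particle swarm optimization of SVR hyperparameters (C, gamma,
# epsilon), scored by 3-fold cross-validated R^2 on synthetic data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(120, 2))      # e.g. drop height, wheel speed
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.05, size=120)

def score(p):
    C, gamma, eps = p
    return cross_val_score(SVR(C=C, gamma=gamma, epsilon=eps),
                           X, y, cv=3, scoring="r2").mean()

lo, hi = np.array([0.1, 0.01, 0.001]), np.array([100.0, 10.0, 0.5])
pos = rng.uniform(lo, hi, size=(10, 3))    # 10 particles in 3-D space
vel = np.zeros((10, 3))
pbest, pbest_f = pos.copy(), np.array([score(p) for p in pos])
for _ in range(15):
    g = pbest[pbest_f.argmax()]            # global best position
    vel = (0.7 * vel
           + 1.5 * rng.random((10, 3)) * (pbest - pos)
           + 1.5 * rng.random((10, 3)) * (g - pos))
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([score(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]

print(np.round(pbest[pbest_f.argmax()], 3), round(pbest_f.max(), 3))
```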
Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael
2014-01-01
Background: Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. Objectives: We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. Methods: We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. Results: The BME-LUR was the best predictive model (R² = 0.653), with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R² = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R² = 0.414, RMSE = 9.164 ppb). Conclusions: Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data. Citation: Adam-Poupart A, Brand A, Fournier M, Jerrett M, Smargiassi A. 2014. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy–LUR approaches. Environ Health Perspect 122:970–976; http://dx.doi.org/10.1289/ehp.1306566 PMID:24879650
Developing a dengue forecast model using machine learning: A case study in China
Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun
2017-01-01
Background In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Methodology/Principal findings Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011–2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. Conclusion and significance The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. 
The findings can help the government and community respond early to dengue epidemics. PMID:29036169
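Training an SVR proper requires a quadratic-programming solver; as a hedged stand-in, the sketch below fits its close relative, RBF-kernel ridge regression, to an invented seasonal series (the incidence values, the temperature covariate, and all hyperparameters are illustrative, not the Guangdong surveillance data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented weekly series: incidence driven nonlinearly by a
# temperature-like covariate with an annual (52-week) cycle.
t = np.arange(200, dtype=float)
temp = 25 + 5 * np.sin(2 * np.pi * t / 52)      # mean temperature proxy
y = np.exp(0.15 * (temp - 25)) + rng.normal(scale=0.05, size=t.size)

X_train, y_train = temp[:150, None], y[:150]
X_test, y_test = temp[150:, None], y[150:]

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise squared distances, then the Gaussian (RBF) kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge: alpha = (K + lam*I)^{-1} y; predict with the cross-kernel.
lam = 1e-3
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
pred = rbf_kernel(X_test, X_train) @ alpha

rmse = np.sqrt(np.mean((pred - y_test) ** 2))
```

An actual SVR adds an epsilon-insensitive loss on top of the same kernel machinery.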
Spatial prediction of soil texture in region Centre (France) from summary data
NASA Astrophysics Data System (ADS)
Dobarco, Mercedes Roman; Saby, Nicolas; Paroissien, Jean-Baptiste; Orton, Tom G.
2015-04-01
Soil texture is a key controlling factor of important soil functions like water and nutrient holding capacity, retention of pollutants, drainage, soil biodiversity, and C cycling. High resolution soil texture maps enhance our understanding of the spatial distribution of soil properties and provide valuable information for decision making and crop management, environmental protection, and hydrological planning. We predicted the soil texture of agricultural topsoils in the Region Centre (France) combining regression and area-to-point kriging. Soil texture data were collected from the French soil-test database (BDAT), which is populated with soil analyses performed at farmers' request. To protect the anonymity of the farms, the data were aggregated by commune. In a first step, summary statistics of environmental covariates by commune were used to develop prediction models with Cubist, boosted regression trees, and random forests. In a second step, the residuals of each individual observation were summarized by commune and kriged following the method developed by Orton et al. (2012). This approach allowed us to include non-linear relationships among covariates and soil texture while accounting for the uncertainty of areal means in the area-to-point kriging step. Independent validation of the models was done using data from the systematic soil monitoring network of French soils. Future work will compare the performance of these models with a non-stationary variance geostatistical model using the most important covariates and summary statistics of texture data. The results will inform on whether the latter, statistically more challenging approach significantly improves texture predictions or whether the simpler area-to-point regression kriging can offer satisfactory results.
The application of area-to-point regression kriging at national level using BDAT data has the potential to improve soil texture predictions for agricultural topsoils, especially when combined with existing maps (i.e., model ensemble).
Co-occurring risk factors for current cigarette smoking in a U.S. nationally representative sample
Higgins, Stephen T.; Kurti, Allison N.; Redner, Ryan; White, Thomas J.; Keith, Diana R.; Gaalema, Diann E.; Sprague, Brian L.; Stanton, Cassandra A.; Roberts, Megan E.; Doogan, Nathan J.; Priest, Jeff S.
2016-01-01
Introduction Relatively little has been reported characterizing cumulative risk associated with co-occurring risk factors for cigarette smoking. The purpose of the present study was to address that knowledge gap in a U.S. nationally representative sample. Methods Data were obtained from 114,426 adults (≥ 18 years) in the U.S. National Survey on Drug Use and Health (years 2011–13). Multiple logistic regression and classification and regression tree (CART) modeling were used to examine risk of current smoking associated with eight co-occurring risk factors (age, gender, race/ethnicity, educational attainment, poverty, drug abuse/dependence, alcohol abuse/dependence, mental illness). Results Each of these eight risk factors was independently associated with significant increases in the odds of smoking when concurrently present in a multiple logistic regression model. Effects of risk-factor combinations were typically summative. Exceptions to that pattern were in the direction of less-than-summative effects when one of the combined risk factors was associated with generally high or low rates of smoking (e.g., drug abuse/dependence, age ≥65). CART modeling identified subpopulation risk profiles wherein smoking prevalence varied from a low of 11% to a high of 74% depending on particular risk factor combinations. Being a college graduate was the strongest independent predictor of smoking status, classifying 30% of the adult population. Conclusions These results offer strong evidence that the effects associated with common risk factors for cigarette smoking are independent, cumulative, and generally summative. The results also offer potentially useful insights into national population risk profiles around which U.S. tobacco policies can be developed or refined. PMID:26902875
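The CART side of such an analysis reduces to repeatedly choosing the binary risk factor whose split most reduces impurity. A minimal sketch of that split search on simulated respondents (prevalences and effect sizes are made up for illustration, not NSDUH estimates; risks combine summatively, as in the study's findings):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated respondents with two binary risk factors.
n = 20000
drug_dep = rng.random(n) < 0.10     # hypothetical 10% prevalence
low_edu = rng.random(n) < 0.40      # hypothetical 40% prevalence
p_smoke = 0.15 + 0.35 * drug_dep + 0.15 * low_edu   # summative risk
smokes = rng.random(n) < p_smoke

def gini(y):
    # Gini impurity of a binary outcome vector.
    p = y.mean() if len(y) else 0.0
    return 2 * p * (1 - p)

def best_split(features, names, y):
    # CART-style: pick the feature whose binary split most reduces impurity.
    base = gini(y)
    gains = []
    for f in features:
        w = f.mean()    # fraction of the sample on the "true" side
        gains.append(base - (w * gini(y[f]) + (1 - w) * gini(y[~f])))
    i = int(np.argmax(gains))
    return names[i], gains[i]

name, gain = best_split([drug_dep, low_edu], ["drug_dep", "low_edu"], smokes)
```

A full CART model applies `best_split` recursively to each resulting subgroup, which is how the subpopulation risk profiles emerge.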
Green lumber grade yields from factory grade logs of three oak species
Daniel A. Yaussy
1986-01-01
Multivariate regression models were developed to predict green board foot yields for the seven common factory lumber grades processed from white, black, and chestnut oak factory grade logs. These models use the standard log measurements of grade, scaling diameter, log length, and proportion of scaling defect. Any combination of lumber grades (such as 1 Common and...
Building "e-rater"® Scoring Models Using Machine Learning Methods. Research Report. ETS RR-16-04
ERIC Educational Resources Information Center
Chen, Jing; Fife, James H.; Bejar, Isaac I.; Rupp, André A.
2016-01-01
The "e-rater"® automated scoring engine used at Educational Testing Service (ETS) scores the writing quality of essays. In the current practice, e-rater scores are generated via a multiple linear regression (MLR) model as a linear combination of various features evaluated for each essay and human scores as the outcome variable. This…
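The MLR scoring step described here is simply a linear combination of essay features fit against human scores. A sketch with invented feature names and synthetic scores (not ETS data or the actual e-rater features):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic essays: three invented writing features (e.g., grammar, usage,
# development) and human scores on a roughly 1-6 scale.
n = 500
features = rng.normal(scale=0.5, size=(n, 3))
human = features @ np.array([0.8, 0.5, 1.2]) + 3.5 + rng.normal(scale=0.3, size=n)

# MLR: machine score = intercept + weighted combination of features,
# with weights fit against the human scores as the outcome variable.
A = np.column_stack([np.ones(n), features])
coef, *_ = np.linalg.lstsq(A, human, rcond=None)
machine = A @ coef

corr = np.corrcoef(machine, human)[0, 1]   # machine-human score agreement
```

The machine-learning alternatives the report investigates replace this fixed linear combination with more flexible learners while keeping the same features and outcome.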
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
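At the core of a Gaussian functional regression is standard Gaussian-process conditioning: given a few expensive high-fidelity evaluations, the posterior mean predicts the output and the posterior variance quantifies the uncertainty the abstract mentions. A minimal sketch (the squared-exponential kernel, lengthscale, and test function are illustrative assumptions, not the authors' setup):

```python
import numpy as np

def kernel(a, b, length=0.2):
    # Squared-exponential (RBF) kernel on 1-D inputs; prior variance 1.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train)    # stand-in for a high-fidelity model
x_new = np.linspace(0.0, 1.0, 50)

jitter = 1e-6                            # numerical stabilizer
K = kernel(x_train, x_train) + jitter * np.eye(len(x_train))
K_s = kernel(x_new, x_train)

# Condition the GP prior on the training evaluations via Cholesky.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
post_mean = K_s @ alpha                  # posterior mean prediction
v = np.linalg.solve(L, K_s.T)
post_var = 1.0 - np.sum(v ** 2, axis=0)  # prior diagonal is 1 for this kernel

max_err = np.max(np.abs(post_mean - np.sin(2 * np.pi * x_new)))
```

In the paper's setting the training inputs are chosen greedily and a reduced-basis surrogate replaces direct low-fidelity evaluations, but the conditioning step is the same.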
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks is more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
A non-linear data mining parameter selection algorithm for continuous variables
Razavi, Marianne; Brady, Sean
2017-01-01
In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
NASA Astrophysics Data System (ADS)
de Santana, Felipe Bachion; de Souza, André Marcelo; Poppi, Ronei Jesus
2018-02-01
This study evaluates the use of visible and near infrared spectroscopy (Vis-NIRS) combined with multivariate regression based on random forest to quantify some soil quality parameters. The parameters analyzed were soil cation exchange capacity (CEC), sum of exchange bases (SB), organic matter (OM), clay and sand present in the soils of several regions of Brazil. Current methods for evaluating these parameters are laborious and time-consuming and require various wet analytical methods that are not adequate for use in precision agriculture, where faster and automatic responses are required. The random forest regression models were statistically better than PLS regression models for CEC, OM, clay and sand, demonstrating resistance to overfitting, attenuating the effect of outlier samples and indicating the most important variables for the model. The methodology demonstrates the potential of Vis-NIRS as an alternative for determination of CEC, SB, OM, sand and clay, making it possible to develop a fast and automatic analytical procedure.
NASA Astrophysics Data System (ADS)
Gholizadeh, H.; Robeson, S. M.
2015-12-01
Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are in the form of fourth-order polynomials, and the parameters of these polynomials (i.e., coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions, despite differing properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. The PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely valid.
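The PLS transformation described here can be sketched with the classic NIPALS algorithm for a single response (PLS1), which extracts orthogonal score components from collinear predictors. The data below are synthetic collinear predictors, not NOMAD reflectances:

```python
import numpy as np

rng = np.random.default_rng(5)

# Highly collinear predictors: near-duplicate columns plus squared terms,
# mimicking correlated polynomial band-ratio terms.
n = 200
base = rng.normal(size=(n, 2))
X = np.hstack([base, base + 0.01 * rng.normal(size=(n, 2)), base ** 2])
y = base @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=n)

def pls1_fit(X, y, n_comp):
    # NIPALS for PLS1: extract weight/loading pairs, deflating X and y.
    Xr, yr = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w                      # score component
        tt = t @ t
        p_load = Xr.T @ t / tt
        q = yr @ t / tt
        Xr = Xr - np.outer(t, p_load)   # deflate predictors
        yr = yr - q * t                 # deflate response
        W.append(w); P.append(p_load); Q.append(q)
    return np.array(W), np.array(P), np.array(Q)

def pls1_predict(X, W, P, Q):
    Xr, pred = X.copy(), np.zeros(len(X))
    for w, p_load, q in zip(W, P, Q):
        t = Xr @ w
        pred += q * t
        Xr = Xr - np.outer(t, p_load)
    return pred

W, P, Q = pls1_fit(X - X.mean(0), y - y.mean(), n_comp=3)
pred = pls1_predict(X - X.mean(0), W, P, Q) + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because each score component is orthogonal to the previous ones, the multicollinearity of the raw predictors never enters the regression.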
Biomarker combinations for diagnosis and prognosis in multicenter studies: Principles and methods.
Meisner, Allison; Parikh, Chirag R; Kerr, Kathleen F
2017-01-01
Many investigators are interested in combining biomarkers to predict a binary outcome or detect underlying disease. This endeavor is complicated by the fact that many biomarker studies involve data from multiple centers. Depending upon the relationship between center, the biomarkers, and the target of prediction, care must be taken when constructing and evaluating combinations of biomarkers. We introduce a taxonomy to describe the role of center and consider how a biomarker combination should be constructed and evaluated. We show that ignoring center, which is frequently done by clinical researchers, is often not appropriate. The limited statistical literature proposes using random intercept logistic regression models, an approach that we demonstrate is generally inadequate and may be misleading. We instead propose using fixed intercept logistic regression, which appropriately accounts for center without relying on untenable assumptions. After constructing the biomarker combination, we recommend using performance measures that account for the multicenter nature of the data, namely the center-adjusted area under the receiver operating characteristic curve. We apply these methods to data from a multicenter study of acute kidney injury after cardiac surgery. Appropriately accounting for center, both in construction and evaluation, may increase the likelihood of identifying clinically useful biomarker combinations.
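The recommended fixed-intercept model simply gives each center its own dummy intercept while the biomarker slopes are shared across centers. A sketch fit by plain gradient ascent on simulated data (center offsets, slope, and sample sizes are invented, not the kidney-injury study's values):

```python
import numpy as np

rng = np.random.default_rng(6)

# Four hypothetical centers with different baseline outcome rates but a
# common biomarker effect (slope 1.2 on the logit scale).
n_centers, n_per = 4, 300
center = np.repeat(np.arange(n_centers), n_per)
x = rng.normal(size=n_centers * n_per)           # one biomarker
center_offsets = np.array([-1.0, -0.5, 0.5, 1.0])
logit = center_offsets[center] + 1.2 * x
y = (rng.random(len(x)) < 1 / (1 + np.exp(-logit))).astype(float)

# Design matrix: one dummy column per center (fixed intercepts, no global
# intercept) plus the shared biomarker column.
D = np.column_stack(
    [(center == c).astype(float) for c in range(n_centers)] + [x])

def fit_logistic(D, y, lr=0.1, iters=2000):
    # Minimal unregularized gradient ascent on the log-likelihood.
    beta = np.zeros(D.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-D @ beta))
        beta += lr * D.T @ (y - p) / len(y)
    return beta

beta = fit_logistic(D, y)
slope = beta[-1]          # shared biomarker effect
```

The per-center intercepts absorb baseline differences between centers without the distributional assumptions a random-intercept model imposes.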
Using Robust Standard Errors to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan T.
2012-01-01
Combining multiple regression estimates with meta-analysis has continued to be a difficult task. A variety of methods have been proposed and used to combine multiple regression slope estimates with meta-analysis, however, most of these methods have serious methodological and practical limitations. The purpose of this study was to explore the use…
Yoo, Kwangsun; Rosenberg, Monica D; Hsu, Wei-Ting; Zhang, Sheng; Li, Chiang-Shan R; Scheinost, Dustin; Constable, R Todd; Chun, Marvin M
2018-02-15
Connectome-based predictive modeling (CPM; Finn et al., 2015; Shen et al., 2017) was recently developed to predict individual differences in traits and behaviors, including fluid intelligence (Finn et al., 2015) and sustained attention (Rosenberg et al., 2016a), from functional brain connectivity (FC) measured with fMRI. Here, using the CPM framework, we compared the predictive power of three different measures of FC (Pearson's correlation, accordance, and discordance) and two different prediction algorithms (linear and partial least square [PLS] regression) for attention function. Accordance and discordance are recently proposed FC measures that respectively track in-phase synchronization and out-of-phase anti-correlation (Meskaldji et al., 2015). We defined connectome-based models using task-based or resting-state FC data, and tested the effects of (1) functional connectivity measure and (2) feature-selection/prediction algorithm on individualized attention predictions. Models were internally validated in a training dataset using leave-one-subject-out cross-validation, and externally validated with three independent datasets. The training dataset included fMRI data collected while participants performed a sustained attention task and rested (N = 25; Rosenberg et al., 2016a). The validation datasets included: 1) data collected during performance of a stop-signal task and at rest (N = 83, including 19 participants who were administered methylphenidate prior to scanning; Farr et al., 2014a; Rosenberg et al., 2016b), 2) data collected during Attention Network Task performance and rest (N = 41, Rosenberg et al., in press), and 3) resting-state data and ADHD symptom severity from the ADHD-200 Consortium (N = 113; Rosenberg et al., 2016a). 
Models defined using all combinations of functional connectivity measure (Pearson's correlation, accordance, and discordance) and prediction algorithm (linear and PLS regression) predicted attentional abilities, with correlations between predicted and observed measures of attention as high as 0.9 for internal validation, and 0.6 for external validation (all p's < 0.05). Models trained on task data outperformed models trained on rest data. Pearson's correlation and accordance features generally showed a small numerical advantage over discordance features, while PLS regression models were usually better than linear regression models. Overall, in addition to correlation features combined with linear models (Rosenberg et al., 2016a), it is useful to consider accordance features and PLS regression for CPM. Copyright © 2017 Elsevier Inc. All rights reserved.
Bouwhuis, Stef; Geuskens, Goedele A; Boot, Cécile R L; Bongers, Paulien M; van der Beek, Allard J
2017-08-01
To construct prediction models for transitions to combination multiple job holding (MJH) (multiple jobs as an employee) and hybrid MJH (being an employee and self-employed), among employees aged 45-64. A total of 5187 employees in the Netherlands completed online questionnaires annually between 2010 and 2013. We applied logistic regression analyses with a backward elimination strategy to construct prediction models. Transitions to combination MJH and hybrid MJH were best predicted by a combination of factors including: demographics, health and mastery, work characteristics, work history, skills and knowledge, social factors, and financial factors. Not having a permanent contract and a poor household financial situation predicted both transitions. Some predictors only predicted combination MJH, e.g., working part-time, or hybrid MJH, e.g., work-home interference. A wide variety of factors predict combination MJH and/or hybrid MJH. The prediction model approach allowed for the identification of predictors that have not been previously studied. © 2017 Wiley Periodicals, Inc.
Lu, Chi-Jie; Chang, Chi-Chang
2014-01-01
Sales forecasting plays an important role in operating a business since it can be used to determine the required inventory level to meet consumer demand and avoid the problem of under/overstocking. Improving the accuracy of sales forecasting has become an important issue of operating a business. This study proposes a hybrid sales forecasting scheme by combining independent component analysis (ICA) with K-means clustering and support vector regression (SVR). The proposed scheme first uses the ICA to extract hidden information from the observed sales data. The extracted features are then applied to the K-means algorithm for clustering the sales data into several disjoint clusters. Finally, SVR forecasting models are applied to each cluster to generate the final forecasting results. Experimental results from information technology (IT) product agent sales data reveal that the proposed sales forecasting scheme outperforms the three comparison models and hence provides an efficient alternative for sales forecasting. PMID:25045738
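The clustering stage of this hybrid scheme can be sketched with Lloyd's K-means algorithm (with a farthest-point initialization to avoid degenerate starts). The 2-D features below are synthetic stand-ins for the ICA features of the sales series:

```python
import numpy as np

rng = np.random.default_rng(7)

# Three well-separated synthetic feature groups, 50 points each.
true_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
X = np.vstack([c + 0.3 * rng.normal(size=(50, 2)) for c in true_centers])

def init_centers(X, k):
    # Farthest-point initialization: avoids seeding two centers in one group.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    return np.array(centers)

def kmeans(X, k, iters=50):
    # Lloyd's algorithm: alternate nearest-center assignment and mean update.
    centers = init_centers(X, k)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(X, 3)
# In the full scheme, a separate SVR model would then be fit per cluster.
```

Fitting one regressor per cluster lets each model specialize in one sales regime instead of averaging across all of them.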
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Lu, Huishan; Fu, Xiaping
2004-12-01
This work evaluates the feasibility of Fourier transform near infrared (FT-NIR) spectrometry for rapidly determining the total soluble solids content and acidity of apple fruit. Intact apple fruit were measured by reflectance FT-NIR in the 800-2500 nm range. FT-NIR models were developed based on partial least squares (PLS) regression and principal component regression (PCR) with respect to the reflectance and its first derivative, and the logarithm of the reflectance reciprocal and its second derivative. These regression models related the FT-NIR spectra to soluble solids content (SSC), titratable acidity (TA) and acidity (pH). The best combination, based on the prediction results, was the PLS model with respect to the logarithm of the reflectance reciprocal. Predictions with PLS models resulted in standard errors of prediction (SEP) of 0.455, 0.044 and 0.068, and correlation coefficients of 0.968, 0.728 and 0.831 for SSC, TA and pH, respectively. It was concluded that by using the FT-NIR spectrometry measurement system, in the appropriate spectral range, it is possible to nondestructively assess the maturity factors of apple fruit.
Arevalillo, Jorge M; Sztein, Marcelo B; Kotloff, Karen L; Levine, Myron M; Simon, Jakub K
2017-10-01
Immunologic correlates of protection are important in vaccine development because they give insight into mechanisms of protection, assist in the identification of promising vaccine candidates, and serve as endpoints in bridging clinical vaccine studies. Our goal is the development of a methodology to identify immunologic correlates of protection using the Shigella challenge as a model. The proposed methodology utilizes the Random Forests (RF) machine learning algorithm as well as Classification and Regression Trees (CART) to detect immune markers that predict protection, identify interactions between variables, and define optimal cutoffs. Logistic regression modeling is applied to estimate the probability of protection and the confidence interval (CI) for such a probability is computed by bootstrapping the logistic regression models. The results demonstrate that the combination of Classification and Regression Trees and Random Forests complements the standard logistic regression and uncovers subtle immune interactions. Specific levels of immunoglobulin IgG antibody in blood on the day of challenge predicted protection in 75% (95% CI 67-86). Of those subjects that did not have blood IgG at or above a defined threshold, 100% were protected if they had IgA antibody secreting cells above a defined threshold. Comparison with the results obtained by applying only logistic regression modeling with standard Akaike Information Criterion for model selection shows the usefulness of the proposed method. Given the complexity of the immune system, the use of machine learning methods may enhance traditional statistical approaches. When applied together, they offer a novel way to quantify important immune correlates of protection that may help the development of vaccines. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Leroux, Romain; Chatellier, Ludovic; David, Laurent
2018-01-01
This article is devoted to the estimation of time-resolved particle image velocimetry (TR-PIV) flow fields using time-resolved point measurements of a voltage signal obtained by hot-film anemometry. A multiple linear regression model is first defined to map the TR-PIV flow fields onto the voltage signal. Due to the high temporal resolution of the signal acquired by the hot-film sensor, the estimates of the TR-PIV flow fields are obtained with a multiple linear regression method called orthonormalized partial least squares regression (OPLSR). Subsequently, this model is incorporated as the observation equation in an ensemble Kalman filter (EnKF) applied to a proper orthogonal decomposition reduced-order model to stabilize it while reducing the effects of the hot-film sensor noise. This method is assessed for the reconstruction of the flow around a NACA0012 airfoil at a Reynolds number of 1000 and an angle of attack of 20°. Comparisons with multi-time-delay modified linear stochastic estimation show that both the OPLSR and the EnKF combined with OPLSR are more accurate, as they produce a much lower relative estimation error and provide a faithful reconstruction of the time evolution of the velocity flow fields.
Prediction of monthly rainfall in Victoria, Australia: Clusterwise linear regression approach
NASA Astrophysics Data System (ADS)
Bagirov, Adil M.; Mahmood, Arshad; Barton, Andrew
2017-05-01
This paper develops the Clusterwise Linear Regression (CLR) technique for prediction of monthly rainfall. The CLR is a combination of clustering and regression techniques. It is formulated as an optimization problem and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia using rainfall data with five input meteorological variables over the period of 1889-2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. The proposed method is also compared with the CLR using the maximum likelihood framework by the expectation-maximization algorithm, multiple linear regression, artificial neural networks and the support vector machines for regression models using computational results. The results demonstrate that the proposed algorithm outperforms other methods in most locations.
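Clusterwise linear regression can be sketched as an alternating minimization: assign each observation to the regression line that currently fits it best, then refit each line by least squares, and restart from several random assignments to dodge poor local minima. The two synthetic linear regimes below are illustrative, not the Victorian rainfall data:

```python
import numpy as np

rng = np.random.default_rng(8)

# Two invented linear regimes mixed together, with small noise.
n = 300
x = rng.uniform(-1, 1, n)
regime = rng.random(n) < 0.5
y = np.where(regime, 2.0 * x + 1.0, -1.5 * x - 0.5) + 0.05 * rng.normal(size=n)
A = np.column_stack([np.ones(n), x])

def clr_once(A, y, k=2, iters=30):
    # Alternate between assigning points to lines and refitting the lines.
    labels = rng.integers(0, k, len(y))
    betas = np.zeros((k, A.shape[1]))
    for _ in range(iters):
        for j in range(k):
            if np.any(labels == j):
                betas[j], *_ = np.linalg.lstsq(
                    A[labels == j], y[labels == j], rcond=None)
        resid = (A @ betas.T - y[:, None]) ** 2   # n x k squared residuals
        labels = resid.argmin(axis=1)
    sse = resid[np.arange(len(y)), labels].sum()
    return sse, betas

# Several random restarts; keep the lowest-error solution.
sse, betas = min((clr_once(A, y) for _ in range(10)), key=lambda r: r[0])
slopes = sorted(betas[:, 1])
```

The paper's incremental algorithm replaces these random restarts with a principled way of growing the number of clusters, but the inner assign/refit loop is the same idea.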
Li, Min; Zhang, Lu; Yao, Xiaolong; Jiang, Xingyu
2017-01-01
The emerging membrane introduction mass spectrometry technique has been successfully used to detect benzene, toluene, ethyl benzene and xylene (BTEX), while overlapped spectra have unfortunately hindered its further application to the analysis of mixtures. Multivariate calibration, an efficient method to analyze mixtures, has been widely applied. In this paper, we compared univariate and multivariate analyses for quantification of the individual components of mixture samples. The results showed that the univariate analysis yields poor models, with regression coefficients of 0.912, 0.867, 0.440 and 0.351 for BTEX, respectively. For multivariate analysis, a comparison to the partial least squares (PLS) model shows that the orthogonal partial least squares (OPLS) regression exhibits an optimal performance with regression coefficients of 0.995, 0.999, 0.980 and 0.976, favorable calibration parameters (RMSEC and RMSECV) and a favorable validation parameter (RMSEP). Furthermore, the OPLS exhibits a good recovery of 73.86-122.20% and a relative standard deviation (RSD) of the repeatability of 1.14-4.87%. Thus, MIMS coupled with the OPLS regression provides an optimal approach for a quantitative BTEX mixture analysis in monitoring and predicting water pollution.
Prediction of Patient-Controlled Analgesic Consumption: A Multimodel Regression Tree Approach.
Hu, Yuh-Jyh; Ku, Tien-Hsiung; Yang, Yu-Hung; Shen, Jia-Ying
2018-01-01
Several factors contribute to individual variability in postoperative pain, therefore, individuals consume postoperative analgesics at different rates. Although many statistical studies have analyzed postoperative pain and analgesic consumption, most have identified only the correlation and have not subjected the statistical model to further tests in order to evaluate its predictive accuracy. In this study involving 3052 patients, a multistrategy computational approach was developed for analgesic consumption prediction. This approach uses data on patient-controlled analgesia demand behavior over time and combines clustering, classification, and regression to mitigate the limitations of current statistical models. Cross-validation results indicated that the proposed approach significantly outperforms various existing regression methods. Moreover, a comparison between the predictions by anesthesiologists and medical specialists and those of the computational approach for an independent test data set of 60 patients further evidenced the superiority of the computational approach in predicting analgesic consumption because it produced markedly lower root mean squared errors.
2011-01-01
Background Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data, were gathered during the experiment. Additional data were gathered one hour and twenty-four hours after the completion of the first data gathering session, for the purpose of evaluating the effects of passage of time and electrode displacement on the accuracy of the models. Acquired SEMG signals were filtered, rectified, normalized and then fed to the models for training. Results It was shown that mean adjusted coefficient of determination (adjusted R²) values decreased by 20%-35% for the different models after one hour, while altering arm posture decreased mean adjusted R² values by 64% to 74%. Conclusions Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture. Therefore, model retraining is crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) was shown to have high isometric torque estimation accuracy combined with very short training times. PMID:21943179
NASA Astrophysics Data System (ADS)
Houborg, Rasmus; McCabe, Matthew F.
2018-01-01
With an increasing volume and dimensionality of Earth observation data, enhanced integration of machine-learning methodologies is needed to effectively analyze and utilize these information-rich datasets. In machine learning, a training dataset is required to establish explicit associations between a suite of explanatory 'predictor' variables and the target property. The specifics of this learning process can significantly influence model validity and portability, with a higher generalization level expected with an increasing number of observable conditions being reflected in the training dataset. Here we propose a hybrid training approach for leaf area index (LAI) estimation, which harnesses synergistic attributes of scattered in-situ measurements and systematically distributed physically based model inversion results to enhance the information content and spatial representativeness of the training data. To do this, a complementary training dataset of independent LAI was derived from a regularized model inversion of RapidEye surface reflectances and subsequently used to guide the development of LAI regression models via Cubist and random forests (RF) decision tree methods. The application of the hybrid training approach to a broad set of Landsat 8 vegetation index (VI) predictor variables resulted in significantly improved LAI prediction accuracies and spatial consistencies, relative to results relying on in-situ measurements alone for model training. In comparing the prediction capacity and portability of the two machine-learning algorithms, a pair of relatively simple multi-variate regression models established by Cubist performed best, with an overall relative mean absolute deviation (rMAD) of ∼11%, determined based on a stringent scene-specific cross-validation approach.
In comparison, the portability of RF regression models was less effective (i.e., an overall rMAD of ∼15%), which was attributed partly to model saturation at high LAI in association with inherent extrapolation and transferability limitations. Explanatory VIs formed from bands in the near-infrared (NIR) and shortwave infrared domains (e.g., NDWI) were associated with the highest predictive ability, whereas Cubist models relying entirely on VIs based on NIR and red band combinations (e.g., NDVI) were associated with comparatively high uncertainties (i.e., rMAD ∼ 21%). The most transferable and best performing models were based on combinations of several predictor variables, which included both NDWI- and NDVI-like variables. In this process, prior screening of input VIs based on an assessment of variable relevance served as an effective mechanism for optimizing prediction accuracies from both Cubist and RF. While this study demonstrated benefit in combining data mining operations with physically based constraints via a hybrid training approach, the concept of transferability and portability warrants further investigations in order to realize the full potential of emerging machine-learning techniques for regression purposes.
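The rMAD metric quoted above is straightforward to compute; a minimal stdlib-only sketch with made-up LAI values:

```python
# Relative mean absolute deviation (rMAD): the mean absolute error
# expressed relative to the mean observed value (illustrative data).

def rmad(observed, predicted):
    """Mean absolute deviation divided by the mean observation."""
    n = len(observed)
    mad = sum(abs(o - p) for o, p in zip(observed, predicted)) / n
    return mad / (sum(observed) / n)

lai_obs = [1.0, 2.0, 3.0, 4.0]
lai_pred = [1.1, 1.8, 3.3, 3.9]
print(round(100 * rmad(lai_obs, lai_pred), 1))  # rMAD as a percentage
```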
2013-01-01
Methods for analysis of network dynamics have seen great progress in the past decade. This article shows how Dynamic Network Logistic Regression techniques (a special case of the Temporal Exponential Random Graph Models) can be used to implement decision theoretic models for network dynamics in a panel data context. We also provide practical heuristics for model building and assessment. We illustrate the power of these techniques by applying them to a dynamic blog network sampled during the 2004 US presidential election cycle. This is a particularly interesting case because it marks the debut of Internet-based media such as blogs and social networking web sites as institutionally recognized features of the American political landscape. Using a longitudinal sample of all Democratic National Convention/Republican National Convention–designated blog citation networks, we are able to test the influence of various strategic, institutional, and balance-theoretic mechanisms as well as exogenous factors such as seasonality and political events on the propensity of blogs to cite one another over time. Using a combination of deviance-based model selection criteria and simulation-based model adequacy tests, we identify the combination of processes that best characterizes the choice behavior of the contending blogs. PMID:24143060
Analyzing industrial energy use through ordinary least squares regression models
NASA Astrophysics Data System (ADS)
Golden, Allyson Katherine
Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone who possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables.
With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and production behavior, and identify opportunities for energy and cost savings. This thesis study also utilizes change-point and degree-day baseline energy models to disaggregate facility annual energy consumption into separate industrial end-user categories. The baseline energy model provides a suitable and economical alternative to sub-metering individual manufacturing equipment. One case study describes the conjoined use of baseline energy models and facility information gathered during a one-day onsite visit to perform an end-point energy analysis of an injection molding facility conducted by the Alabama Industrial Assessment Center. Applying baseline regression model results to the end-point energy analysis allowed the AIAC to better approximate the annual energy consumption of the facility's HVAC system.
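The change-point idea behind these baselines can be sketched as a grid search over candidate break temperatures: fit a separate linear segment on each side and keep the split with the lowest combined residual error. This is a deliberately simplified, stdlib-only illustration with synthetic data; the thesis's models additionally handle degree-days and production variables.

```python
# Single change-point (two-segment) linear baseline fit by grid search
# over candidate break points (synthetic temperature/energy data).

def ols(x, y):
    """Fit y = b0 + b1*x; return (b0, b1, sum of squared errors)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx if sxx else 0.0
    b0 = my - b1 * mx
    sse = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    return b0, b1, sse

def change_point_fit(temp, energy):
    """Try each candidate break; keep the split with the lowest total SSE."""
    pairs = sorted(zip(temp, energy))
    best = None
    for k in range(2, len(pairs) - 1):          # each segment needs >= 2 points
        left, right = pairs[:k], pairs[k:]
        _, _, sse_l = ols([p[0] for p in left], [p[1] for p in left])
        _, _, sse_r = ols([p[0] for p in right], [p[1] for p in right])
        if best is None or sse_l + sse_r < best[0]:
            best = (sse_l + sse_r, pairs[k][0])
    return best[1]                              # estimated balance-point temp

# Flat base load below ~60 F, cooling load rising above it
temps  = [40, 45, 50, 55, 60, 65, 70, 75, 80]
energy = [100, 100, 100, 100, 100, 110, 120, 130, 140]
print(change_point_fit(temps, energy))
```

The same search is easy to reproduce in spreadsheet software, which matches the thesis's emphasis on accessibility.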
Evaluation of the Carrying Capacity of Rectangular Steel-Concrete Columns
NASA Astrophysics Data System (ADS)
Vatulia, Glib; Rezunenko, Maryna; Petrenko, Dmytro; Rezunenko, Sergii
2018-06-01
Experimental studies of rectangular steel-concrete columns under centric compression with random eccentricity were conducted. The stress-strain state and the exhaustion of carrying capacity have been assessed. A regression dependence is proposed to determine the maximum carrying capacity of such columns. The mathematical model takes into account the combined influence of the physical and geometric characteristics of the columns, such as their length, cross-sectional area, casing thickness, prism strength of concrete, yield strength of steel, and modulus of elasticity of both steel and concrete. The correspondence of the obtained model to the experimental data, as well as the significance of the regression parameters, are confirmed by the Fisher and Student criteria.
NASA Technical Reports Server (NTRS)
Smith, Timothy D.; Steffen, Christopher J., Jr.; Yungster, Shaye; Keller, Dennis J.
1998-01-01
The all-rocket mode of operation is shown to be a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. An axisymmetric RBCC engine was used to determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design-of-experiments methodology was used to construct a test matrix, and multiple linear regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inlet diameter ratio. A perfect gas computational fluid dynamics analysis, using both the Spalart-Allmaras and k-omega turbulence models, was performed with the NPARC code to obtain values of vacuum specific impulse. Results from the multiple linear regression analysis showed that for both the full flow and gas generator configurations, increasing mixer-ejector area ratio and rocket area ratio increases performance, while increasing mixer-ejector inlet area ratio and mixer-ejector length-to-diameter ratio decreases performance. Increasing injected secondary flow increased performance for the gas generator analysis, but was not statistically significant for the full flow analysis. Chamber pressure was found to be not statistically significant.
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Prakash, Indra; Tien Bui, Dieu
2018-02-01
A hybrid machine learning approach of Random Subspace (RSS) and Classification And Regression Trees (CART) is proposed to develop a model named RSSCART for spatial prediction of landslides. This model is a combination of the RSS method, which is known as an efficient ensemble technique, and CART, which is a state-of-the-art classifier. The Luc Yen district of Yen Bai province, a prominent landslide-prone area of Viet Nam, was selected for the model development. Performance of the RSSCART model was evaluated through the Receiver Operating Characteristic (ROC) curve, statistical analysis methods, and the Chi Square test. Results were compared with other benchmark landslide models, namely Support Vector Machines (SVM), single CART, Naïve Bayes Trees (NBT), and Logistic Regression (LR). In the development of the model, ten important landslide-affecting factors related to geomorphology, geology and the geo-environment were considered, namely slope angle, elevation, slope aspect, curvature, lithology, distance to faults, distance to rivers, distance to roads, and rainfall. Performance of the RSSCART model (AUC = 0.841) was the best compared with the other popular landslide models, namely SVM (0.835), single CART (0.822), NBT (0.821), and LR (0.723). These results indicate that the RSSCART model is a promising method for spatial landslide prediction.
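The AUC values used to rank these models can be computed directly from classifier scores via the Mann-Whitney formulation, without constructing the ROC curve explicitly; labels and scores below are illustrative, not from the study.

```python
# AUC as the probability that a randomly chosen positive example
# receives a higher score than a randomly chosen negative one
# (ties count as half). Illustrative labels and scores.

def auc(labels, scores):
    """Rank-based AUC over all positive/negative score pairs."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]          # 1 = landslide cell, 0 = stable cell
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(y, s))
```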
Korek, Michal; Johansson, Christer; Svensson, Nina; Lind, Tomas; Beelen, Rob; Hoek, Gerard; Pershagen, Göran; Bellander, Tom
2017-01-01
Both dispersion modeling (DM) and land-use regression modeling (LUR) are often used for assessment of long-term air pollution exposure in epidemiological studies, but seldom in combination. We developed a hybrid DM–LUR model using 93 biweekly observations of NOx at 31 sites in greater Stockholm (Sweden). The DM was based on spatially resolved topographic, physiographic and emission data, and hourly meteorological data from a diagnostic wind model. Other data were from land use, meteorology and routine monitoring of NOx. We built a linear regression model for NOx, using a stepwise forward selection of covariates. The resulting model predicted observed NOx (R2=0.89) better than the DM without covariates (R2=0.68, P-interaction <0.001) and with minimal apparent bias. The model included (in descending order of importance) DM, traffic intensity on the nearest street, population (number of inhabitants) within 100 m radius, global radiation (direct sunlight plus diffuse or scattered light) and urban contribution to NOx levels (routine urban NOx, less routine rural NOx). Our results indicate that there is a potential for improving estimates of air pollutant concentrations based on DM, by incorporating further spatial characteristics of the immediate surroundings, possibly accounting for imperfections in the emission data. PMID:27485990
Valid Statistical Analysis for Logistic Regression with Multiple Sources
NASA Astrophysics Data System (ADS)
Fienberg, Stephen E.; Nardi, Yuval; Slavković, Aleksandra B.
Considerable effort has gone into understanding issues of privacy protection of individual information in single databases, and various solutions have been proposed depending on the nature of the data, the ways in which the database will be used and the precise nature of the privacy protection being offered. Once data are merged across sources, however, the nature of the problem becomes far more complex, and a number of privacy issues arise for the linked individual files that go well beyond those considered with regard to the data within individual sources. In this paper, we propose an approach that enables full statistical analysis of the combined database without actually combining it. We focus mainly on logistic regression, but the method and tools described may be applied essentially to other statistical models as well.
NASA Astrophysics Data System (ADS)
Anand, Jasdeep S.; Monks, Paul S.
2017-07-01
Land use regression (LUR) models have been used in epidemiology to determine the fine-scale spatial variation in air pollutants such as nitrogen dioxide (NO2) in cities and larger regions. However, they are often limited in their temporal resolution, which may potentially be rectified by employing the synoptic coverage provided by satellite measurements. In this work a mixed-effects LUR model is developed to model daily surface NO2 concentrations over the Hong Kong SAR during the period 2005-2015. In situ measurements from the Hong Kong Air Quality Monitoring Network, along with tropospheric vertical column density (VCD) data from the OMI, GOME-2A, and SCIAMACHY satellite instruments, were combined with fine-scale land use parameters to provide the spatiotemporal information necessary to predict daily surface concentrations. Cross-validation with the in situ data shows that the mixed-effects LUR model using OMI data has a high predictive power (adj. R² = 0.84), especially when compared with surface concentrations derived using the MACC-II reanalysis model dataset (adj. R² = 0.11). Time series analysis shows no statistically significant trend in NO2 concentrations during 2005-2015, despite a reported decline in NOx emissions. This study demonstrates the utility of combining satellite data with LUR models to derive daily maps of ambient surface NO2 for use in exposure studies.
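The adjusted R² reported above corrects the ordinary R² for the number of predictors; a minimal sketch of the formula, with illustrative sample sizes:

```python
# Adjusted R² penalizes added predictors so that models of different
# complexity can be compared fairly (illustrative inputs).

def adjusted_r2(r2, n, p):
    """n observations, p predictors (excluding the intercept)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# With many observations the penalty is small...
print(round(adjusted_r2(0.85, 1000, 10), 4))
# ...but with few observations it can be substantial
print(round(adjusted_r2(0.85, 20, 10), 4))
```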
Rispo, Antonio; Imperatore, Nicola; Testa, Anna; Bucci, Luigi; Luglio, Gaetano; De Palma, Giovanni Domenico; Rea, Matilde; Nardone, Olga Maria; Caporaso, Nicola; Castiglione, Fabiana
2018-03-08
In the management of Crohn's Disease (CD) patients, having a simple score combining clinical, endoscopic and imaging features to predict the risk of surgery could help to tailor treatment more effectively. Aims: to prospectively evaluate the one-year risk factors for surgery in refractory/severe CD and to generate a risk matrix for predicting the probability of surgery at one year. CD patients needing a disease re-assessment at our tertiary IBD centre underwent clinical, laboratory, endoscopy and bowel sonography (BS) examinations within one week. The optimal cut-off values in predicting surgery were identified using ROC curves for the Simple Endoscopic Score for CD (SES-CD), bowel wall thickness (BWT) at BS, and small bowel CD extension at BS. Binary logistic regression and Cox's regression were then carried out. Finally, the probabilities of surgery were calculated for selected baseline levels of covariates and the results were arranged in a prediction matrix. Of 100 CD patients, 30 underwent surgery within one year. SES-CD ≥ 9 (OR 15.3; p<0.001), BWT ≥ 7 mm (OR 15.8; p<0.001), small bowel CD extension at BS ≥ 33 cm (OR 8.23; p<0.001) and stricturing/penetrating behavior (OR 4.3; p<0.001) were the only independent factors predictive of surgery at one year based on binary logistic and Cox's regressions. Our matrix model combined these risk factors, and the probability of surgery ranged from 0.48% to 87.5% (sixteen combinations). Our risk matrix combining clinical, endoscopic and ultrasonographic findings can accurately predict the one-year risk of surgery in patients with severe/refractory CD requiring a disease re-evaluation. This tool could be of value in clinical practice, serving as the basis for a tailored management of CD patients.
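A risk matrix of this kind can be generated by enumerating all combinations of the dichotomized predictors and converting each linear predictor to a probability. The intercept and coefficients below are hypothetical placeholders (the abstract reports only odds ratios and the resulting probability range), so the printed risks are illustrative only.

```python
# Sketch of a 2^4 = 16-cell risk matrix from a binary logistic model.
# Intercept and log-odds coefficients are HYPOTHETICAL, not the study's
# fitted values; predictors follow the four risk factors in the abstract.

import itertools
import math

intercept = -5.3                       # hypothetical baseline logit
coefs = {                              # hypothetical log-odds per risk factor
    "SES-CD >= 9": 2.7,
    "BWT >= 7 mm": 2.8,
    "extension >= 33 cm": 2.1,
    "stricturing/penetrating": 1.5,
}

matrix = {}
for combo in itertools.product([0, 1], repeat=len(coefs)):
    logit = intercept + sum(c * x for c, x in zip(coefs.values(), combo))
    matrix[combo] = 1 / (1 + math.exp(-logit))   # predicted surgery risk

print(len(matrix))                     # sixteen combinations, as in the paper
print(round(100 * min(matrix.values()), 2), round(100 * max(matrix.values()), 1))
```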
Evaluating the Locational Attributes of Education Management Organizations (EMOs)
ERIC Educational Resources Information Center
Gulosino, Charisse; Miron, Gary
2017-01-01
This study uses logistic and multinomial logistic regression models to analyze neighborhood factors affecting EMO (Education Management Organization)-operated schools' locational attributes (using census tracts) in 41 states for the 2014-2015 school year. Our research combines market-based school reform, institutional theory, and resource…
Suzuki, Hideaki; Tabata, Takahisa; Koizumi, Hiroki; Hohchi, Nobusuke; Takeuchi, Shoko; Kitamura, Takuro; Fujino, Yoshihisa; Ohbuchi, Toyoaki
2014-12-01
This study aimed to create a multiple regression model for predicting hearing outcomes of idiopathic sudden sensorineural hearing loss (ISSNHL). The participants were 205 consecutive patients (205 ears) with ISSNHL (hearing level ≥ 40 dB, interval between onset and treatment ≤ 30 days). They received systemic steroid administration combined with intratympanic steroid injection. Data were examined by simple and multiple regression analyses. Three hearing indices (percentage hearing improvement, hearing gain, and posttreatment hearing level [HLpost]) and 7 prognostic factors (age, days from onset to treatment, initial hearing level, initial hearing level at low frequencies, initial hearing level at high frequencies, presence of vertigo, and contralateral hearing level) were included in the multiple regression analysis as dependent and explanatory variables, respectively. In the simple regression analysis, the percentage hearing improvement, hearing gain, and HLpost showed significant correlation with 2, 5, and 6 of the 7 prognostic factors, respectively. The multiple correlation coefficients were 0.396, 0.503, and 0.714 for the percentage hearing improvement, hearing gain, and HLpost, respectively. Predicted values of HLpost calculated by the multiple regression equation were reliable with 70% probability, using a 40-dB-wide prediction interval. Prediction of HLpost by the multiple regression model may be useful for estimating the hearing prognosis of ISSNHL. © The Author(s) 2014.
Learning accurate and interpretable models based on regularized random forests regression
2014-01-01
Background Many biology-related research works combine data from multiple sources in an effort to understand the underlying problems. It is important to find and interpret the most important information from these sources. Thus it will be beneficial to have an effective algorithm that can simultaneously extract decision rules and select critical features for good interpretation while preserving the prediction performance. Methods In this study, we focus on regression problems for biological data where target outcomes are continuous. In general, models constructed from linear regression approaches are relatively easy to interpret. However, many practical biological applications are nonlinear in essence, where we can hardly find a direct linear relationship between input and output. Nonlinear regression techniques can reveal nonlinear relationships in data, but are generally hard for humans to interpret. We propose a rule-based regression algorithm that uses 1-norm regularized random forests. The proposed approach simultaneously extracts a small number of rules from generated random forests and eliminates unimportant features. Results We tested the approach on some biological data sets. The proposed approach is able to construct a significantly smaller set of regression rules using a subset of attributes while achieving prediction performance comparable to that of random forests regression. Conclusion It demonstrates high potential in aiding prediction and interpretation of nonlinear relationships of the subject being studied. PMID:25350120
Technical note: Combining quantile forecasts and predictive distributions of streamflows
NASA Astrophysics Data System (ADS)
Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano
2017-11-01
The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. In particular, the use of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values makes it even more complicated for decision makers to sift out the relevant information. In this study multiple streamflow forecast information will be aggregated based on several different predictive distributions and quantile forecasts. For this combination the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) will be applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland with about 5 years of forecast data will be compared and the differences between the raw and optimally combined forecasts will be highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
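The quantile score referred to above is the pinball loss, which is minimized in expectation when the issued quantile matches the true conditional quantile; a minimal stdlib-only sketch with illustrative streamflow values:

```python
# Pinball (quantile) loss for a single quantile forecast at level tau.
# Under-prediction is weighted by tau, over-prediction by (1 - tau),
# so a 90% quantile is penalized more when the observation exceeds it.

def quantile_score(tau, q_forecast, observed):
    """Pinball loss of forecast quantile q_forecast at level tau."""
    diff = observed - q_forecast
    return tau * diff if diff >= 0 else (tau - 1) * diff

# Score a 90%-quantile streamflow forecast (m^3/s) against two outcomes
print(round(quantile_score(0.9, 120.0, 110.0), 6))  # observation below
print(round(quantile_score(0.9, 120.0, 150.0), 6))  # observation above
```

Averaging this loss over many forecast-observation pairs, and over several quantile levels, gives the summary score used to compare the raw and combined forecasts.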
Nonlinear-regression groundwater flow modeling of a deep regional aquifer system
Cooley, Richard L.; Konikow, Leonard F.; Naff, Richard L.
1986-01-01
A nonlinear regression groundwater flow model, based on a Galerkin finite-element discretization, was used to analyze steady state two-dimensional groundwater flow in the areally extensive Madison aquifer in a 75,000 mi2 area of the Northern Great Plains. Regression parameters estimated include intrinsic permeabilities of the main aquifer and separate lineament zones, discharges from eight major springs surrounding the Black Hills, and specified heads on the model boundaries. Aquifer thickness and temperature variations were included as specified functions. The regression model was applied using sequential F testing so that the fewest number and simplest zonation of intrinsic permeabilities, combined with the simplest overall model, were evaluated initially; additional complexities (such as subdivisions of zones and variations in temperature and thickness) were added in stages to evaluate the subsequent degree of improvement in the model results. It was found that only the eight major springs, a single main aquifer intrinsic permeability, two separate lineament intrinsic permeabilities of much smaller values, and temperature variations are warranted by the observed data (hydraulic heads and prior information on some parameters) for inclusion in a model that attempts to explain significant controls on groundwater flow. Addition of thickness variations did not significantly improve model results; however, thickness variations were included in the final model because they are fairly well defined. Effects on the observed head distribution from other features, such as vertical leakage and regional variations in intrinsic permeability, apparently were overshadowed by measurement errors in the observed heads. Estimates of the parameters correspond well to estimates obtained from other independent sources.
Fang, Xingang; Bagui, Sikha; Bagui, Subhash
2017-08-01
The readily available high throughput screening (HTS) data from the PubChem database provides an opportunity for mining of small molecules in a variety of biological systems using machine learning techniques. From the thousands of available molecular descriptors developed to encode useful chemical information representing the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structure-activity relationship (QSAR) model. For the development of a systematic descriptor selection strategy, we need an understanding of the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset on the human kallikrein 5 (hK5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross-validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the human kallikrein 5 (hK5) target suggested a feasible descriptor/model selection strategy for similar targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhou, Jinzhe; Zhou, Yanbing; Cao, Shougen; Li, Shikuan; Wang, Hao; Niu, Zhaojian; Chen, Dong; Wang, Dongsheng; Lv, Liang; Zhang, Jian; Li, Yu; Jiao, Xuelong; Tan, Xiaojie; Zhang, Jianli; Wang, Haibo; Zhang, Bingyuan; Lu, Yun; Sun, Zhenqing
2016-01-01
Reporting of surgical complications is common, but few reports provide information about the severity of complications or estimate the associated risk factors; those that do often lack specificity. We retrospectively analyzed data on 2795 gastric cancer patients who underwent surgical procedures at the Affiliated Hospital of Qingdao University between June 2007 and June 2012, and established a multivariate logistic regression model to identify risk factors related to postoperative complications according to the Clavien-Dindo classification system. Twenty-four out of 86 variables were identified as statistically significant in univariate logistic regression analysis; 11 significant variables entered into the multivariate analysis were employed to produce the risk model. Liver cirrhosis, diabetes mellitus, Child classification, invasion of neighboring organs, combined resection, intraoperative transfusion, Billroth II anastomosis of reconstruction, malnutrition, surgical volume of surgeons, operating time and age were independent risk factors for postoperative complications after gastrectomy. Based on the logistic regression equation p = exp(ΣBiXi) / (1 + exp(ΣBiXi)), a multivariate logistic regression predictive model that calculates the risk of postoperative morbidity was developed: p = 1/(1 + e^(4.810 - 1.287X1 - 0.504X2 - 0.500X3 - 0.474X4 - 0.405X5 - 0.318X6 - 0.316X7 - 0.305X8 - 0.278X9 - 0.255X10 - 0.138X11)). The accuracy, sensitivity and specificity of the model in predicting postoperative complications were 86.7%, 76.2% and 88.6%, respectively. This risk model, based on the Clavien-Dindo complication severity grading system and logistic regression analysis, can predict severe morbidity specific to an individual patient's risk factors, estimate patients' risks and benefits of gastric surgery as an accurate decision-making tool, and may serve as a template for the development of risk models for other surgical groups.
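The fitted equation can be evaluated directly. The intercept and coefficients below are copied from the abstract; the 0/1 covariate patterns are illustrative, and treating each Xi as a binary indicator is an assumption, since the abstract does not state the coding.

```python
# Evaluate p = 1 / (1 + e^(4.810 - 1.287*X1 - ... - 0.138*X11)) from the
# abstract. Coefficients are from the reported equation; the example
# covariate vectors are illustrative 0/1 patterns (coding is assumed).

import math

coefs = [1.287, 0.504, 0.500, 0.474, 0.405, 0.318,
         0.316, 0.305, 0.278, 0.255, 0.138]

def surgery_complication_risk(x):
    """x: eleven covariate values in the fitted order X1..X11."""
    logit = -4.810 + sum(b * xi for b, xi in zip(coefs, x))
    return 1 / (1 + math.exp(-logit))

low_risk = [0] * 11    # no risk factors present
high_risk = [1] * 11   # all risk factors present
print(round(surgery_complication_risk(low_risk), 4))
print(round(surgery_complication_risk(high_risk), 4))
```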
Logistic regression modeling to assess groundwater vulnerability to contamination in Hawaii, USA
NASA Astrophysics Data System (ADS)
Mair, Alan; El-Kadi, Aly I.
2013-10-01
Capture zone analysis combined with a subjective susceptibility index is currently used in Hawaii to assess vulnerability to contamination of drinking water sources derived from groundwater. In this study, we developed an alternative objective approach that combines well capture zones with multiple-variable logistic regression (LR) modeling and applied it to the highly-utilized Pearl Harbor and Honolulu aquifers on the island of Oahu, Hawaii. Input for the LR models utilized explanatory variables based on hydrogeology, land use, and well geometry/location. A suite of 11 target contaminants detected in the region, including elevated nitrate (> 1 mg/L), four chlorinated solvents, four agricultural fumigants, and two pesticides, was used to develop the models. We then tested the ability of the new approach to accurately separate groups of wells with low and high vulnerability, and the suitability of nitrate as an indicator of other types of contamination. Our results produced contaminant-specific LR models that accurately identified groups of wells with the lowest/highest reported detections and the lowest/highest nitrate concentrations. Current and former agricultural land uses were identified as significant explanatory variables for eight of the 11 target contaminants, while elevated nitrate was a significant variable for five contaminants. The utility of the combined approach is contingent on the availability of hydrologic and chemical monitoring data for calibrating groundwater and LR models. Application of the approach using a reference site with sufficient data could help identify key variables in areas with similar hydrogeology and land use but limited data. In addition, elevated nitrate may also be a suitable indicator of groundwater contamination in areas with limited data. The objective LR modeling approach developed in this study is flexible enough to address a wide range of contaminants and represents a suitable addition to the current subjective approach.
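The LR modeling step can be illustrated with a minimal, dependency-light sketch: a single-variable logistic regression fit by gradient ascent on synthetic well data. The land-use variable, sample size, and coefficients are hypothetical stand-ins, not the study's data.

```python
import numpy as np

# Synthetic sketch: probability of a contaminant detection in a well as a
# function of one explanatory variable (fraction of agricultural land use).
rng = np.random.default_rng(0)
land_use = rng.uniform(0, 1, 200)              # hypothetical explanatory variable
true_logit = -2.0 + 4.0 * land_use
detected = rng.uniform(size=200) < 1 / (1 + np.exp(-true_logit))

# Fit beta by gradient ascent on the logistic log-likelihood.
X = np.column_stack([np.ones_like(land_use), land_use])
beta = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (detected - p) / len(detected)

# Wells with more agricultural land should score as more vulnerable.
low_risk = 1 / (1 + np.exp(-(beta[0] + beta[1] * 0.1)))
high_risk = 1 / (1 + np.exp(-(beta[0] + beta[1] * 0.9)))
```

Scoring the fitted model at two land-use levels mirrors the paper's use of LR output to separate low- and high-vulnerability groups of wells.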
The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models
1988-07-27
auto regressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe... 'Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model', The American Statistician, v.40, pp. 129-135, 1986. 8. Box, G. E. P. and... 1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the...
Ebtehaj, Isa; Bonakdari, Hossein
2016-01-01
Sediment transport without deposition is an essential consideration in the optimum design of sewer pipes. In this study, a novel method based on a combination of support vector regression (SVR) and the firefly algorithm (FFA) is proposed to predict the minimum velocity required to avoid sediment settling in pipe channels, which is expressed as the densimetric Froude number (Fr). The efficiency of support vector machine (SVM) models depends on the suitable selection of SVM parameters. In this particular study, the FFA is used to determine these SVM parameters. The parameters that actually affect the Fr calculation are identified by dimensional analysis. The different dimensionless variables along with the models are introduced. The best performance is attributed to the model that employs the sediment volumetric concentration (C(V)), ratio of relative median diameter of particles to hydraulic radius (d/R), dimensionless particle number (D(gr)) and overall sediment friction factor (λ(s)) parameters to estimate Fr. The performance of the SVR-FFA model is compared with genetic programming, artificial neural networks and existing regression-based equations. The results indicate the superior performance of SVR-FFA (mean absolute percentage error = 2.123%; root mean square error = 0.116) compared with the other methods.
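The tuning idea, a metaheuristic searching hyperparameters to minimize validation error, can be sketched without SVR-specific libraries. Here a plain random search stands in for the firefly algorithm and kernel ridge regression stands in for SVR; the data are synthetic, not the sewer-pipe measurements.

```python
import numpy as np

# Stochastic hyperparameter search (stand-in for FFA) over an RBF kernel
# ridge regressor (stand-in for SVR), scored by validation RMSE.
rng = np.random.default_rng(1)
x = np.linspace(0, 3, 80)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(80)   # synthetic target
x_tr, y_tr, x_va, y_va = x[::2], y[::2], x[1::2], y[1::2]

def fit_predict(gamma, lam):
    """Fit kernel ridge on the training half, predict the validation half."""
    K = np.exp(-gamma * (x_tr[:, None] - x_tr[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_tr)), y_tr)
    K_va = np.exp(-gamma * (x_va[:, None] - x_tr[None, :]) ** 2)
    return K_va @ alpha

def rmse(pred):
    return float(np.sqrt(np.mean((pred - y_va) ** 2)))

# Random search over (gamma, lambda); keep the best-scoring candidate.
best = min((rmse(fit_predict(g, l)), g, l)
           for g, l in zip(rng.uniform(0.1, 10, 50), rng.uniform(1e-4, 1e-1, 50)))
baseline = rmse(fit_predict(0.01, 10.0))            # a deliberately poor setting
```

The searched model should beat the untuned baseline, which is the whole point of wrapping the regressor in an optimizer.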
NASA Astrophysics Data System (ADS)
Ying, Yibin; Liu, Yande; Fu, Xiaping; Lu, Huishan
2005-11-01
Artificial neural networks (ANNs) have been used successfully in applications such as pattern recognition, image processing, automation and control. However, the majority of today's ANN applications use the back-propagation feed-forward ANN (BP-ANN). In this paper, back-propagation artificial neural networks (BP-ANN) were applied to model the soluble solid content (SSC) of intact pears from their Fourier transform near-infrared (FT-NIR) spectra. One hundred and sixty-four pear samples were used to build the calibration models and evaluate the models' predictive ability. The results are compared to the classical calibration approaches, i.e. principal component regression (PCR), partial least squares (PLS) and non-linear PLS (NPLS). The effects of optimal training parameters on the prediction model were also investigated. BP-ANN combined with principal component regression (PCR) always performed better than the classical PCR, PLS and weight-PLS methods from the point of view of predictive ability. Based on the results, it can be concluded that FT-NIR spectroscopy and BP-ANN models can be properly employed for rapid and nondestructive determination of fruit internal quality.
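The PCR baseline mentioned above is compact enough to sketch directly: project mean-centered spectra onto the leading principal components, then regress the response on the component scores. The synthetic "spectra" and the number of retained components are illustrative assumptions.

```python
import numpy as np

# Minimal principal component regression (PCR) on synthetic low-rank spectra.
rng = np.random.default_rng(2)
n, p, k = 60, 100, 3                       # samples, wavelengths, components kept
scores_true = rng.standard_normal((n, k))
loadings = rng.standard_normal((k, p))
spectra = scores_true @ loadings + 0.01 * rng.standard_normal((n, p))
ssc = scores_true @ np.array([1.0, -0.5, 0.25]) + 0.01 * rng.standard_normal(n)

# PCA via SVD of the mean-centered spectra.
Xc = spectra - spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:k].T                          # scores on the first k components

# Ordinary least squares of the response on the scores (plus intercept).
design = np.column_stack([np.ones(n), T])
coef, *_ = np.linalg.lstsq(design, ssc, rcond=None)
pred = design @ coef
r2 = 1 - np.sum((ssc - pred) ** 2) / np.sum((ssc - ssc.mean()) ** 2)
```

Because the synthetic data are genuinely three-dimensional, retaining three components recovers nearly all of the response variance.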
Liu, Fei; Feng, Lei; Lou, Bing-gan; Sun, Guang-ming; Wang, Lian-ping; He, Yong
2010-07-01
The combinational-stimulated bands were used to develop linear and nonlinear calibrations for the early detection of sclerotinia of oilseed rape (Brassica napus L.). Eighty healthy and 100 Sclerotinia leaf samples were scanned, and different preprocessing methods combined with successive projections algorithm (SPA) were applied to develop partial least squares (PLS) discriminant models, multiple linear regression (MLR) and least squares-support vector machine (LS-SVM) models. The results indicated that the optimal full-spectrum PLS model was achieved by direct orthogonal signal correction (DOSC), then De-trending and Raw spectra with correct recognition ratio of 100%, 95.7% and 95.7%, respectively. When using combinational-stimulated bands, the optimal linear models were SPA-MLR (DOSC) and SPA-PLS (DOSC) with correct recognition ratio of 100%. All SPA-LSSVM models using DOSC, De-trending and Raw spectra achieved perfect results with recognition of 100%. The overall results demonstrated that it was feasible to use combinational-stimulated bands for the early detection of Sclerotinia of oilseed rape, and DOSC-SPA was a powerful way for informative wavelength selection. This method supplied a new approach to the early detection and portable monitoring instrument of sclerotinia.
Bozeman, Erica N; He, Sara; Shafizadeh, Yalda; Selvaraj, Periasamy
2016-01-01
Immunotherapeutic approaches have emerged as promising strategies to treat various cancers, including breast cancer. A single approach, however, is unlikely to effectively combat the complex, immune evasive strategies found within the tumor microenvironment, thus novel, effective combination treatments must be explored. In this study, we investigated the efficacy of a combination therapy consisting of PD-L1 immune checkpoint blockade and whole cell vaccination in a HER-2 positive mouse model of breast cancer. We demonstrate that tumorigenicity is completely abrogated when adjuvanted with immune stimulatory molecules (ISMs) B7-1 and a cell-surface anchored (GPI) form of IL-12 or GM-CSF. Irradiated cellular vaccines expressing the combination of adjuvants B7-1 and GPI-IL-12 completely inhibited tumor formation which was correlative with robust HER-2 specific CTL activity. However, in a therapeutic setting, both cellular vaccination and PD-L1 blockade induced only 10-20% tumor regression when administered alone but resulted in 50% tumor regression as a combination therapy. This protection was significantly hindered following CD4 or CD8 depletion indicating the essential role played by cellular immunity. Collectively, these pre-clinical studies provide a strong rationale for further investigation into the efficacy of combination therapy with tumor cell vaccines adjuvanted with membrane-anchored ISMs along with PD-L1 blockade for the treatment of breast cancer.
Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data
García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio
2016-01-01
Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC–MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC–MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing milling machine's improvements. Finally, conclusions of this study are exposed. PMID:28787882
Estimation of stature from sternum - Exploring the quadratic models.
Saraf, Ashish; Kanchan, Tanuj; Krishan, Kewal; Ateriya, Navneet; Setia, Puneet
2018-04-14
Identification of the dead is significant in the examination of unknown, decomposed and mutilated human remains. Establishing the biological profile is the central issue in such a scenario, and stature estimation remains one of the important criteria in this regard. The present study was undertaken to estimate stature from different parts of the sternum. A sample of 100 sterna was obtained from individuals during medicolegal autopsies. The length of the deceased and various measurements of the sternum were recorded. Student's t-test was performed to find sex differences in stature and the sternal measurements included in the study. Correlations between stature and sternal measurements were analysed using Karl Pearson's correlation, and linear and quadratic regression models were derived. All the measurements were found to be significantly larger in males than in females. Stature correlated best with the combined length of the sternum among males (R = 0.894), females (R = 0.859), and for the total sample (R = 0.891). The study showed that the models derived for stature estimation from the combined length of the sternum are likely to give the most accurate estimates of stature in forensic casework when compared to the manubrium and mesosternum. The accuracy of stature estimation further increased with the quadratic models derived for the mesosternum among males and the combined length of the sternum among males and females when compared to linear regression models. Future studies in different geographical locations and with larger sample sizes are proposed to confirm the study observations. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
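The linear-versus-quadratic comparison can be sketched in a few lines: fit both polynomial orders to the same measurements and compare residual error. The sternal lengths and coefficients below are synthetic stand-ins, not the study's autopsy data.

```python
import numpy as np

# Synthetic stature vs. combined sternal length with mild curvature,
# so a quadratic model should fit at least as well as a linear one.
rng = np.random.default_rng(3)
sternal_len = rng.uniform(12, 20, 100)                    # cm, hypothetical range
stature = (100 + 4 * sternal_len - 0.08 * sternal_len ** 2
           + 0.5 * rng.standard_normal(100))              # cm

lin = np.polyfit(sternal_len, stature, 1)                 # linear model
quad = np.polyfit(sternal_len, stature, 2)                # quadratic model

def rmse(coeffs):
    return float(np.sqrt(np.mean((np.polyval(coeffs, sternal_len) - stature) ** 2)))

rmse_lin, rmse_quad = rmse(lin), rmse(quad)
```

On the same training data the quadratic fit can never do worse than the linear one, since the linear model is a special case of the quadratic; the study's point is that the gain was large enough to matter in practice.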
Berchialla, Paola; Scarinzi, Cecilia; Snidero, Silvia; Gregori, Dario
2016-08-01
Risk Assessment is the systematic study of decisions subject to uncertain consequences. Increasing interest has been focused on modeling techniques like Bayesian Networks because of their capability of (1) combining, in a probabilistic framework, different types of evidence, including both expert judgments and objective data; (2) overturning previous beliefs in the light of new information being received; and (3) making predictions even with incomplete data. In this work, we propose a comparison among Bayesian Networks and other classical Quantitative Risk Assessment techniques such as Neural Networks, Classification Trees, Random Forests and Logistic Regression models. Hybrid approaches, combining both Classification Trees and Bayesian Networks, were also considered. Among Bayesian Networks, a clear distinction is made between the purely data-driven approach and the combination of expert knowledge with objective data. The aim of this paper is to evaluate which of these models can best be applied, in the framework of Quantitative Risk Assessment, to assess the safety of children who are exposed to the risk of inhalation/insertion/aspiration of consumer products. The issue of preventing injuries in children is of paramount importance, in particular where product design is involved: quantifying the risk associated with product characteristics can be of great use in addressing product safety design regulation. Data from the European Registry of Foreign Bodies Injuries formed the starting evidence for risk assessment. Results showed that Bayesian Networks combined ease of interpretability with accuracy in making predictions, even if simpler models like logistic regression still performed well. © The Author(s) 2013.
Risk Factors for Suicidal Ideation in People at Risk for Huntington's Disease.
Anderson, Karen E; Eberly, Shirley; Groves, Mark; Kayson, Elise; Marder, Karen; Young, Anne B; Shoulson, Ira
2016-12-15
Suicidal ideation (SI) and attempts are increased in Huntington's disease (HD), making risk factor assessment a priority. The aim was to determine whether hopelessness, irritability, aggression, anxiety, CAG expansion status, depression, and motor signs/symptoms were associated with SI in those at risk for HD. Behavioral and neurological data were collected from subjects in an observational study. Subject characteristics were calculated by CAG status and SI. Logistic regression models were adjusted for demographics. Separate logistic regressions were used to compare SI and non-SI subjects. A combined logistic regression model including 4 pre-specified predictors (hopelessness, irritability, aggression, anxiety) was used to assess the relationship of SI to these predictors. Of 801 subjects assessed, 40 were classified as having SI; 6.3% of CAG mutation expansion carriers had SI, compared with 4.3% of non-carriers (p = 0.2275). SI subjects had significantly increased depression (p < 0.0001), hopelessness (p < 0.0001), irritability (p < 0.0001), aggression (p = 0.0089), and anxiety (p < 0.0001), and an elevated motor score (p = 0.0098). Impulsivity, assessed in a subgroup of subjects, was also associated with SI (p = 0.0267). Hopelessness and anxiety remained significant in the combined model (p < 0.001 and p < 0.0198, respectively) even when motor score was included. Behavioral symptoms were significantly higher in those reporting SI. Hopelessness and anxiety showed a particularly strong association with SI. Risk identification could assist in the assessment of suicidality in this group.
Rhodes, Darson L; Kirchofer, Gregg; Hammig, Bart J; Ogletree, Roberta J
2013-05-01
This study examined the impact of professional preparation and class structure on the sexuality topics taught and the use of practice-based instructional strategies in US middle and high school health classes. Data from the classroom-level file of the 2006 School Health Policies and Programs Study were used. A series of multivariable logistic regression models were employed to determine if the sexuality content taught was dependent on professional preparation and/or class structure (HE only versus HE/another subject combined). Additional multivariable logistic regression models were employed to determine if the use of practice-based instructional strategies was dependent upon professional preparation and/or class structure. Years of teaching health topics and size of the school district were included as covariates in the multivariable logistic regression models. Findings indicated professionally prepared health educators were significantly more likely to teach 7 of the 13 sexuality topics compared to non-professionally prepared health educators. There was no statistically significant difference in the instructional strategies used by professionally prepared and non-professionally prepared health educators. Exclusively health education classes, versus combined classes, were significantly more likely to have included 6 of the 13 topics and to have incorporated practice-based instructional strategies in the curricula. This study indicated professional preparation and class structure impacted the sexuality content taught. Class structure also impacted whether opportunities for students to practice skills were made available. Results support the need for continued advocacy for professionally prepared health educators and health-only courses. © 2013, American School Health Association.
Jackman, Patrick; Sun, Da-Wen; Elmasry, Gamal
2012-08-01
A new algorithm for the conversion of device dependent RGB colour data into device independent L*a*b* colour data without introducing noticeable error has been developed. By combining a linear colour space transform and advanced multiple regression methodologies it was possible to predict L*a*b* colour data with less than 2.2 colour units of error (CIE 1976). By transforming the red, green and blue colour components into new variables that better reflect the structure of the L*a*b* colour space, a low colour calibration error was immediately achieved (ΔE(CAL) = 14.1). Application of a range of regression models on the data further reduced the colour calibration error substantially (multilinear regression ΔE(CAL) = 5.4; response surface ΔE(CAL) = 2.9; PLSR ΔE(CAL) = 2.6; LASSO regression ΔE(CAL) = 2.1). Only the PLSR models deteriorated substantially under cross validation. The algorithm is adaptable and can be easily recalibrated to any working computer vision system. The algorithm was tested on a typical working laboratory computer vision system and delivered only a very marginal loss of colour information ΔE(CAL) = 2.35. Colour features derived on this system were able to safely discriminate between three classes of ham with 100% correct classification whereas colour features measured on a conventional colourimeter were not. Copyright © 2012 Elsevier Ltd. All rights reserved.
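The core idea, richer regression terms shrinking colour calibration error, can be sketched as follows. A plain multilinear model is compared against a response-surface model with squared and cross terms; as a simplifying assumption, ground-truth L*a*b* values here come from the standard CIE formulas applied to linear RGB rather than from a measured colour chart.

```python
import numpy as np

# Reference conversion: linear RGB -> XYZ (sRGB primaries) -> CIE L*a*b*.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = M @ np.ones(3)                          # white point of this matrix

def rgb_to_lab(rgb):
    xyz = rgb @ M.T / WHITE
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[:, 1] - 16
    a = 500 * (f[:, 0] - f[:, 1])
    b = 200 * (f[:, 1] - f[:, 2])
    return np.column_stack([L, a, b])

rng = np.random.default_rng(4)
rgb = rng.uniform(0.05, 1.0, (300, 3))
lab = rgb_to_lab(rgb)

# Multilinear design vs. response-surface design (squares and cross terms).
ones = np.ones((300, 1))
X_lin = np.hstack([ones, rgb])
X_quad = np.hstack([X_lin, rgb ** 2,
                    rgb[:, [0]] * rgb[:, [1]],
                    rgb[:, [0]] * rgb[:, [2]],
                    rgb[:, [1]] * rgb[:, [2]]])

def sse(X):
    coef, *_ = np.linalg.lstsq(X, lab, rcond=None)
    return float(np.sum((X @ coef - lab) ** 2))

sse_lin, sse_quad = sse(X_lin), sse(X_quad)     # quadratic terms reduce error
```

Since L*a*b* is a nonlinear function of RGB, the extra response-surface terms capture curvature the multilinear model cannot, echoing the error reduction reported above.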
NASA Astrophysics Data System (ADS)
Wang, Li-yong; Li, Le; Zhang, Zhi-hua
2016-09-01
Hot compression tests of Ti-6Al-4V alloy in a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s-1 were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. To accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA) for characterizing the flow behaviors, namely the GA-SVR. A prominent feature of the GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across different runs on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of the mathematical regression model, ANN, and GA-SVR for Ti-6Al-4V alloy were compared in detail. The comparison shows that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies of these models ranked, in ascending order: mathematical regression model < ANN < GA-SVR. Stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research areas where stress-strain data play an important role, such as speculating work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
QSAR Analysis of 2-Amino or 2-Methyl-1-Substituted Benzimidazoles Against Pseudomonas aeruginosa
Podunavac-Kuzmanović, Sanja O.; Cvetković, Dragoljub D.; Barna, Dijana J.
2009-01-01
A set of benzimidazole derivatives were tested for their inhibitory activities against the Gram-negative bacterium Pseudomonas aeruginosa and minimum inhibitory concentrations were determined for all the compounds. Quantitative structure activity relationship (QSAR) analysis was applied to fourteen of the abovementioned derivatives using a combination of various physicochemical, steric, electronic, and structural molecular descriptors. A multiple linear regression (MLR) procedure was used to model the relationships between molecular descriptors and the antibacterial activity of the benzimidazole derivatives. The stepwise regression method was used to derive the most significant models as a calibration model for predicting the inhibitory activity of this class of molecules. The best QSAR models were further validated by a leave one out technique as well as by the calculation of statistical parameters for the established theoretical models. To confirm the predictive power of the models, an external set of molecules was used. High agreement between experimental and predicted inhibitory values, obtained in the validation procedure, indicated the good quality of the derived QSAR models. PMID:19468332
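The leave-one-out validation step used above can be sketched generically: refit the MLR with each compound held out and predict it, yielding a cross-validated q² that guards against overfitting. The descriptor values below are synthetic stand-ins for the physicochemical descriptors.

```python
import numpy as np

# Synthetic QSAR-style dataset: fourteen compounds (as in the study),
# three hypothetical descriptors, one activity value per compound.
rng = np.random.default_rng(5)
n = 14
descriptors = rng.standard_normal((n, 3))        # e.g. lipophilic/steric/electronic
activity = descriptors @ np.array([0.8, -0.5, 0.3]) + 0.1 * rng.standard_normal(n)

# Leave-one-out: refit the MLR n times, each time predicting the held-out row.
X = np.column_stack([np.ones(n), descriptors])
preds = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    coef, *_ = np.linalg.lstsq(X[keep], activity[keep], rcond=None)
    preds[i] = X[i] @ coef                       # predict the held-out compound

# Cross-validated q^2: 1 - predictive residual SS / total SS.
q2 = 1 - np.sum((activity - preds) ** 2) / np.sum((activity - activity.mean()) ** 2)
```

A q² close to the fitted R² indicates the model generalizes rather than memorizes, which is what the leave-one-out validation in the abstract is checking.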
The advent of new higher throughput analytical instrumentation has put a strain on interpreting and explaining the results from complex studies. Contemporary human, environmental, and biomonitoring data sets are comprised of tens or hundreds of analytes, multiple repeat measures...
The Effect of Urban Sprawls on Timber Harvesting
Stephen A. Barlow; Ian A Munn; David A. Cleaves; David L. Evans
1998-01-01
In Mississippi and Alabama, urban population growth is pushing development into rural areas. To study the impact of urbanization on timber harvesting, census and forest inventory data were combined in a geographic information system, and a logistic regression model was used to estimate the relationship between several variables and harvest probabilities....
Lee, Soon Jae; Cho, Yoo-Kyung; Na, Soo-Young; Choi, Eun Kwang; Boo, Sun Jin; Jeong, Seung Uk; Song, Hyung Joo; Kim, Heung Up; Kim, Bong Soo; Song, Byung-Cheol
2016-09-01
Some recent studies have found regression of liver cirrhosis after antiviral therapy in patients with hepatitis C virus (HCV)-related liver cirrhosis, but there have been no reports of complete regression of esophageal varices after interferon/peg-interferon and ribavirin combination therapy. We describe two cases of complete regression of esophageal varices and splenomegaly after interferon-alpha and ribavirin combination therapy in patients with HCV-related liver cirrhosis. Esophageal varices and splenomegaly regressed after 3 and 8 years of sustained virologic responses in cases 1 and 2, respectively. To our knowledge, this is the first study demonstrating that complications of liver cirrhosis, such as esophageal varices and splenomegaly, can regress after antiviral therapy in patients with HCV-related liver cirrhosis.
Vegetation Monitoring with Gaussian Processes and Latent Force Models
NASA Astrophysics Data System (ADS)
Camps-Valls, Gustau; Svendsen, Daniel; Martino, Luca; Campos, Manuel; Luengo, David
2017-04-01
Monitoring vegetation by biophysical parameter retrieval from Earth observation data is a challenging problem where machine learning is currently a key player. Neural networks, kernel methods, and Gaussian Process (GP) regression have excelled in parameter retrieval tasks at both local and global scales. GP regression is based on solid Bayesian statistics, yields efficient and accurate parameter estimates, and provides interesting advantages over competing machine learning approaches, such as confidence intervals. However, GP models are hampered by a lack of interpretability, which has prevented their widespread adoption by a larger community. In this presentation we summarize some of our latest developments to address this issue. We review the main characteristics of GPs and their advantages in standard vegetation monitoring applications. Then, three advanced GP models are introduced. First, we derive sensitivity maps for the GP predictive function that allow us to obtain feature rankings from the model and to assess the influence of examples on the solution. Second, we introduce a joint GP (JGP) model that combines in situ measurements and simulated radiative transfer data in a single GP model. The JGP regression provides more sensible confidence intervals for the predictions, respects the physics of the underlying processes, and allows for transferability across time and space. Finally, a latent force model (LFM) for GP modeling that encodes ordinary differential equations to blend data-driven modeling and physical models of the system is presented. The LFM performs multi-output regression, adapts to the signal characteristics, is able to cope with missing data in the time series, and provides explicit latent functions that allow system analysis and evaluation. Empirical evidence of the performance of these models is presented through illustrative examples.
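A bare-bones GP regressor makes the "confidence intervals for free" advantage concrete: the predictive variance comes directly from the posterior covariance. The kernel, length-scale, noise level, and the toy series below are all illustrative assumptions, not the retrieval setup of the presentation.

```python
import numpy as np

# Toy 1-D series standing in for a biophysical parameter over time.
rng = np.random.default_rng(6)
x_tr = np.sort(rng.uniform(0, 5, 30))
y_tr = np.sin(x_tr) + 0.1 * rng.standard_normal(30)
x_te = np.linspace(0, 5, 50)

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel matrix between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# GP posterior: mean and covariance given noisy observations.
K = rbf(x_tr, x_tr) + 0.1 ** 2 * np.eye(30)      # assumed noise variance 0.1^2
K_s = rbf(x_te, x_tr)
mean = K_s @ np.linalg.solve(K, y_tr)
cov = rbf(x_te, x_te) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # per-point predictive uncertainty
```

The `std` array is exactly the per-prediction confidence information the abstract cites as a GP advantage; plotting `mean ± 2 * std` gives the familiar credible band.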
Uljas, Heidi E.; Schaffner, Donald W.; Duffy, Siobain; Zhao, Lihui; Ingham, Steven C.
2001-01-01
Probabilistic models were used as a systematic approach to describe the response of Escherichia coli O157:H7 populations to combinations of commonly used preservation methods in unpasteurized apple cider. Using a complete factorial experimental design, the effect of pH (3.1 to 4.3), storage temperature and time (5 to 35°C for 0 to 6 h or 12 h), preservatives (0, 0.05, or 0.1% potassium sorbate or sodium benzoate), and freeze-thaw (F-T; −20°C, 48 h and 4°C, 4 h) treatment combinations (a total of 1,600 treatments) on the probability of achieving a 5-log10-unit reduction in a three-strain E. coli O157:H7 mixture in cider was determined. Using logistic regression techniques, pH, temperature, time, and concentration were modeled in separate segments of the data set, resulting in prediction equations for: (i) no preservatives, before F-T; (ii) no preservatives, after F-T; (iii) sorbate, before F-T; (iv) sorbate, after F-T; (v) benzoate, before F-T; and (vi) benzoate, after F-T. Statistical analysis revealed a highly significant (P < 0.0001) effect of all four variables, with cider pH being the most important, followed by temperature and time, and finally by preservative concentration. All models predicted 92 to 99% of the responses correctly. To ensure safety, use of the models is most appropriate at a 0.9 probability level, where the percentage of false positives, i.e., falsely predicting a 5-log10-unit reduction, is the lowest (0 to 4.4%). The present study demonstrates the applicability of logistic regression approaches to describing the effectiveness of multiple treatment combinations in pathogen control in cider making. The resulting models can serve as valuable tools in designing safe apple cider processes. PMID:11133437
NASA Astrophysics Data System (ADS)
Abbaszadeh Afshar, Farideh; Ayoubi, Shamsollah; Besalatpour, Ali Asghar; Khademi, Hossein; Castrignano, Annamaria
2016-03-01
This study was conducted to estimate soil clay content at two depths using geophysical techniques (Ground Penetrating Radar-GPR and Electromagnetic Induction-EMI) and ancillary variables (remote sensing and topographic data) in an arid region of southeastern Iran. GPR measurements were performed along ten transects of 100 m length with a line spacing of 10 m, and the EMI measurements were done every 10 m on the same transects in six sites. Ten soil cores were sampled randomly in each site and soil samples were taken from depths of 0-20 and 20-40 cm, and then the clay fraction of each of the sixty soil samples was measured in the laboratory. Clay content was predicted using three different sets of properties including geophysical data, ancillary data, and a combination of both as inputs to multiple linear regression (MLR) and the decision tree-based algorithm of Chi-Squared Automatic Interaction Detection (CHAID) models. The results of the CHAID and MLR models with all combined data showed that geophysical data were the most important variables for the prediction of clay content at the two depths in the study area. The proposed MLR model, using the combined data, could explain only 44 and 31% of the total variability of clay content in the 0-20 and 20-40 cm depths, respectively. Also, the coefficient of determination (R²) values for the clay content prediction, using the constructed CHAID model with the combined data, were 0.82 and 0.76 in the 0-20 and 20-40 cm depths, respectively. CHAID models, therefore, showed a greater potential in predicting soil clay content from geophysical and ancillary data, while traditional regression methods (i.e. the MLR models) did not perform as well. Overall, the results may encourage researchers in using georeferenced GPR and EMI data as ancillary variables and the CHAID algorithm to improve the estimation of soil clay content.
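CHAID itself is not available in common Python libraries, so the sketch below uses a CART regression tree (scikit-learn's DecisionTreeRegressor) as a stand-in to reproduce the qualitative finding: on an interaction-driven, nonlinear response, a tree-based model fits far better than MLR. The covariates and the clay response are entirely synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 300
gpr = rng.uniform(0, 1, n)      # mock geophysical covariate (e.g. GPR amplitude)
emi = rng.uniform(0, 1, n)      # mock EMI apparent conductivity

# Assumed nonlinear, interaction-driven response, which tree methods handle well:
clay = 20 + 15 * (gpr > 0.5) * (emi > 0.5) + 5 * emi + rng.normal(0, 1, n)

X = np.column_stack([gpr, emi])
r2_mlr = LinearRegression().fit(X, clay).score(X, clay)
r2_tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, clay).score(X, clay)
```

The linear model cannot represent the threshold interaction between the two covariates, so its R² stays well below the tree's, mirroring the MLR-versus-CHAID gap reported above.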
Yao, Longbiao; Heuser-Baker, Janet; Herlea-Pana, Oana; Iida, Ryuji; Wang, Qilong; Zou, Ming-Hui; Barlic-Dicen, Jana
2012-01-01
The major event initiating atherosclerosis is hypercholesterolemia-induced disruption of vascular endothelium integrity. In settings of endothelial damage, endothelial progenitor cells (EPCs) are mobilized from bone marrow into circulation and home to sites of vascular injury where they aid endothelial regeneration. Given the beneficial effects of EPCs in vascular repair, we hypothesized that these cells play a pivotal role in atherosclerosis regression. We tested our hypothesis in the atherosclerosis-prone mouse model in which hypercholesterolemia, one of the main factors affecting EPC homeostasis, is reversible (Reversa mice). In these mice normalization of plasma lipids decreased atherosclerotic burden; however, plaque regression was incomplete. To explore whether endothelial progenitors contribute to atherosclerosis regression, bone marrow EPCs from a transgenic strain expressing green fluorescent protein under the control of endothelial cell-specific Tie2 promoter (Tie2-GFP+) were isolated. These cells were then adoptively transferred into atheroregressing Reversa recipients where they augmented plaque regression induced by reversal of hypercholesterolemia. Advanced plaque regression correlated with engraftment of Tie2-GFP+ EPCs into endothelium and resulted in an increase in atheroprotective nitric oxide and improved vascular relaxation. Similarly augmented plaque regression was also detected in regressing Reversa mice treated with the stem cell mobilizer AMD3100 which also mobilizes EPCs to peripheral blood. We conclude that correction of hypercholesterolemia in Reversa mice leads to partial plaque regression that can be augmented by AMD3100 treatment or by adoptive transfer of EPCs. This suggests that direct cell therapy or indirect progenitor cell mobilization therapy may be used in combination with statins to treat atherosclerosis. PMID:23081735
Soft computing techniques toward modeling the water supplies of Cyprus.
Iliadis, L; Maris, F; Tachos, S
2011-10-01
This research effort aims to apply soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machines (ε-RSVM) and fuzzy weighted ε-RSVMR models have been developed that accept five input parameters. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, the fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
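A minimal sketch of ε-SVR under 5-fold cross-validation, in the spirit of the models above; the five inputs and the target series are invented stand-ins for the watershed data, and the hyperparameters are illustrative rather than tuned.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))    # five mock input parameters
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8]) + 0.1 * rng.normal(size=200)

# epsilon controls the insensitive tube of the SVR loss; scaling first is standard.
model = make_pipeline(StandardScaler(), SVR(epsilon=0.1, C=10.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")  # 5-fold CV, as in the study
```

Each of the five scores is an out-of-fold R², so their spread gives exactly the check on "bad local behaviors" the abstract describes.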
Analytical and regression models of glass rod drawing process
NASA Astrophysics Data System (ADS)
Alekseeva, L. B.
2018-03-01
The process of drawing glass rods (light guides) is being studied. The parameters of the process affecting the quality of the light guide have been determined. To solve the problem, mathematical models based on general equations of continuum mechanics are used. The conditions for the stable flow of the drawing process have been found, which are determined by the stability of the motion of the glass mass in the formation zone to small uncontrolled perturbations. The sensitivity of the formation zone to perturbations of the drawing speed and viscosity is estimated. Experimental models of the drawing process, based on the regression analysis methods, have been obtained. These models make it possible to customize a specific production process to obtain light guides of the required quality. They allow one to find the optimum combination of process parameters in the chosen area and to determine the required accuracy of maintaining them at a specified level.
Fisz, Jacek J
2006-12-07
The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for the nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates considerably the optimization process because the linear parameters are not the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of the optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm.
The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions, is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones and it does not apply to the model functions which are multi-linear combinations of nonlinear functions.
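The core GA-MLR idea — a global optimizer searches only the nonlinear parameters while the linear amplitudes are recovered by linear regression at every trial point — can be sketched on a biexponential decay. SciPy's differential_evolution stands in for the GA here, and the data are noiseless synthetic values, so this is an illustration of the separable structure, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 5, 200)
true_k = np.array([0.7, 3.0])   # nonlinear parameters (decay rates)
true_a = np.array([2.0, 1.0])   # linear parameters (amplitudes)
y = true_a[0] * np.exp(-true_k[0] * t) + true_a[1] * np.exp(-true_k[1] * t)

def chi2(k):
    # Basis matrix of the nonlinear functions for trial rates k; the linear
    # amplitudes then come from ordinary least squares (the "MLR" step).
    B = np.exp(-np.outer(t, k))
    a, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.sum((B @ a - y) ** 2)

# Disjoint bounds keep the two rates from swapping labels during the search.
res = differential_evolution(chi2, bounds=[(0.01, 2.0), (2.0, 10.0)], seed=0)
k_hat = res.x
```

Because the amplitudes are never part of the search space, the global optimizer works in two dimensions instead of four, which is exactly the acceleration the abstract attributes to GA-MLR.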
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
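The two-stage scheme (tree model for large-scale structure, geostatistical interpolation of the residuals) can be sketched as follows; a Gaussian process regressor stands in for kriging, and the terrain covariate and depth field are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, size=(150, 2))     # mock survey coordinates
elev = rng.uniform(3000, 3500, 150)        # mock elevation covariate
# Assumed depth field: a covariate effect plus a smooth spatial component.
depth = 0.002 * elev + np.sin(xy[:, 0]) + np.cos(xy[:, 1])

# Stage 1: tree captures the covariate-driven, large-scale variation.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(elev[:, None], depth)
resid = depth - tree.predict(elev[:, None])

# Stage 2: interpolate the residuals spatially (kriging-like), then recombine.
gp = GaussianProcessRegressor().fit(xy, resid)
combined = tree.predict(elev[:, None]) + gp.predict(xy)

mse_tree = np.mean((depth - tree.predict(elev[:, None])) ** 2)
mse_comb = np.mean((depth - combined) ** 2)
```

The residual interpolation recovers the smooth spatial component the tree cannot see, so the combined model explains more variance, mirroring the 54-65% versus 60-85% improvement reported above.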
Bentamapimod (JNK Inhibitor AS602801) Induces Regression of Endometriotic Lesions in Animal Models.
Palmer, Stephen S; Altan, Melis; Denis, Deborah; Tos, Enrico Gillio; Gotteland, Jean-Pierre; Osteen, Kevin G; Bruner-Tran, Kaylon L; Nataraja, Selvaraj G
2016-01-01
Endometriosis is an estrogen (ER)-dependent gynecological disease caused by the growth of endometrial tissue at extrauterine sites. Current endocrine therapies address the estrogenic aspect of disease and offer some relief from pain but are associated with significant side effects. Immune dysfunction is also widely believed to be an underlying contributor to the pathogenesis of this disease. This study evaluated an inhibitor of c-Jun N-terminal kinase, bentamapimod (AS602801), which interrupts immune pathways, in 2 rodent endometriosis models. Treatment of nude mice bearing xenografts biopsied from women with endometriosis (BWE) with 30 mg/kg AS602801 caused 29% regression of lesion. Medroxyprogesterone acetate (MPA) or progesterone (PR) alone did not cause regression of BWE lesions, but combining 10 mg/kg AS602801 with MPA caused 38% lesion regression. In human endometrial organ cultures (from healthy women), treatment with AS602801 or MPA reduced matrix metalloproteinase-3 (MMP-3) release into culture medium. In organ cultures established with BWE, PR or MPA failed to inhibit MMP-3 secretion, whereas AS602801 alone or MPA + AS602801 suppressed MMP-3 production. In an autologous rat endometriosis model, AS602801 caused 48% regression of lesions compared to GnRH antagonist Antide (84%). AS602801 reduced inflammatory cytokines in endometriotic lesions, while levels of cytokines in ipsilateral horns were unaffected. Furthermore, AS602801 enhanced natural killer cell activity, without apparent negative effects on uterus. These results indicate that bentamapimod induced regression of endometriotic lesions in endometriosis rodent animal models without suppressing ER action. c-Jun N-terminal kinase inhibition mediated a comprehensive reduction in cytokine secretion and moreover was able to overcome PR resistance. © The Author(s) 2015.
Huang, Yu; Griffin, Michael J
2014-01-01
This study investigated the prediction of the discomfort caused by simultaneous noise and vibration from the discomfort caused by noise and the discomfort caused by vibration when they are presented separately. A total of 24 subjects used absolute magnitude estimation to report their discomfort caused by seven levels of noise (70-88 dBA SEL), seven magnitudes of vibration (0.146-2.318 m s(-1.75)) and all 49 possible combinations of these noise and vibration stimuli. Vibration did not significantly influence judgements of noise discomfort, but noise reduced vibration discomfort by an amount that increased with increasing noise level, consistent with a 'masking effect' of noise on judgements of vibration discomfort. A multiple linear regression model or a root-sums-of-squares model predicted the discomfort caused by combined noise and vibration, but the root-sums-of-squares model is more convenient and provided a more accurate prediction of the discomfort produced by combined noise and vibration.
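The root-sums-of-squares rule itself is a one-liner: given the discomfort caused by noise alone and by vibration alone, predict the combined discomfort as their Euclidean combination. The magnitudes below are illustrative, not the study's data.

```python
import numpy as np

def rss_combined(psi_n, psi_v):
    """Root-sums-of-squares prediction of combined discomfort."""
    return np.sqrt(psi_n ** 2 + psi_v ** 2)

psi_noise = np.array([1.0, 2.0, 3.0])   # discomfort from noise alone
psi_vib = np.array([2.0, 2.0, 1.0])     # discomfort from vibration alone
combined = rss_combined(psi_noise, psi_vib)
```

A convenient property of the rule is that the prediction is always at least as large as the bigger of the two single-stimulus ratings, but never as large as their sum.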
NASA Astrophysics Data System (ADS)
Mekanik, F.; Imteaz, M. A.; Gato-Trinidad, S.; Elmahdi, A.
2013-10-01
In this study, the application of Artificial Neural Networks (ANN) and Multiple regression analysis (MR) to forecast long-term seasonal spring rainfall in Victoria, Australia was investigated using lagged El Nino Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) as potential predictors. The use of dual (combined lagged ENSO-IOD) input sets for calibrating and validating ANN and MR Models is proposed to investigate the simultaneous effect of past values of these two major climate modes on long-term spring rainfall prediction. The MR models that did not violate the limits of statistical significance and multicollinearity were selected for future spring rainfall forecast. The ANN was developed in the form of multilayer perceptron using Levenberg-Marquardt algorithm. Both MR and ANN modelling were assessed statistically using mean square error (MSE), mean absolute error (MAE), Pearson correlation (r) and Willmott index of agreement (d). The developed MR and ANN models were tested on out-of-sample test sets; the MR models showed very poor generalisation ability for east Victoria with correlation coefficients of -0.99 to -0.90 compared to ANN with correlation coefficients of 0.42-0.93; ANN models also showed better generalisation ability for central and west Victoria with correlation coefficients of 0.68-0.85 and 0.58-0.97 respectively. The ability of multiple regression models to forecast out-of-sample sets is comparable with that of ANN for Daylesford in central Victoria and Kaniva in west Victoria (r = 0.92 and 0.67 respectively). The errors of the testing sets for ANN models are generally lower compared to multiple regression models. The statistical analysis suggests the potential of ANN over MR models for rainfall forecasting using large scale climate modes.
Churpek, Matthew M; Yuen, Trevor C; Winslow, Christopher; Meltzer, David O; Kattan, Michael W; Edelson, Dana P
2016-02-01
Machine learning methods are flexible prediction algorithms that may be more accurate than conventional regression. We compared the accuracy of different techniques for detecting clinical deterioration on the wards in a large, multicenter database. Observational cohort study of five hospitals, from November 2008 until January 2013, including hospitalized ward patients and no interventions. Demographic variables, laboratory values, and vital signs were utilized in a discrete-time survival analysis framework to predict the combined outcome of cardiac arrest, intensive care unit transfer, or death. Two logistic regression models (one using linear predictor terms and a second utilizing restricted cubic splines) were compared to several different machine learning methods. The models were derived in the first 60% of the data by date and then validated in the next 40%. For model derivation, each event time window was matched to a non-event window. All models were compared to each other and to the Modified Early Warning score (MEWS), a commonly cited early warning score, using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patients were admitted, and 424 cardiac arrests, 13,188 intensive care unit transfers, and 2,840 deaths occurred in the study. In the validation dataset, the random forest model was the most accurate model (AUC, 0.80 [95% CI, 0.80-0.80]). The logistic regression model with spline predictors was more accurate than the model utilizing linear predictors (AUC, 0.77 vs 0.74; p < 0.01), and all models were more accurate than the MEWS (AUC, 0.70 [95% CI, 0.70-0.70]). In this multicenter study, we found that several machine learning methods more accurately predicted clinical deterioration than logistic regression. Use of detection algorithms derived from these techniques may result in improved identification of critically ill patients on the wards.
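The model comparison can be sketched on synthetic data with an interaction that a linear-term logistic regression cannot represent, reproducing the qualitative result that a random forest attains the higher AUC. The two predictors are invented stand-ins for vitals and laboratory values, not the study's variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.normal(size=(4000, 2))                # mock vitals/lab predictors
y = ((X[:, 0] * X[:, 1]) > 0).astype(int)     # outcome driven by an interaction

# Derive on the first 60%, validate on the remaining 40%, as in the study design.
Xtr, Xte, ytr, yte = X[:2400], X[2400:], y[:2400], y[2400:]
auc_lr = roc_auc_score(yte, LogisticRegression().fit(Xtr, ytr).predict_proba(Xte)[:, 1])
auc_rf = roc_auc_score(yte, RandomForestClassifier(random_state=0).fit(Xtr, ytr).predict_proba(Xte)[:, 1])
```

On this XOR-like outcome the linear logistic model is near chance while the forest is near perfect; spline or interaction terms would narrow the gap, which is the paper's point about restricted cubic splines.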
Giménez-Espert, María Del Carmen; Prado-Gascó, Vicente Javier
2018-03-01
To analyse the link between empathy and emotional intelligence as predictors of nurses' attitudes towards communication while comparing the contribution of emotional aspects and attitudinal elements on potential behaviour. Nurses' attitudes towards communication, empathy and emotional intelligence are key skills for nurses involved in patient care. There are currently no studies analysing this link, and its investigation is needed because attitudes may influence communication behaviours. Correlational study. To attain this goal, self-reported instruments (attitudes towards communication of nurses, trait emotional intelligence (Trait Emotional Meta-Mood Scale) and empathy (Jefferson Scale of Nursing Empathy)) were collected from 460 nurses between September 2015 and February 2016. Two different analytical methodologies were used: traditional regression models and fuzzy-set qualitative comparative analysis models. The results of the regression model suggest that cognitive dimensions of attitude are a significant and positive predictor of the behavioural dimension. The perspective-taking dimension of empathy and the emotional-clarity dimension of emotional intelligence were significant positive predictors of the dimensions of attitudes towards communication, except for the affective dimension (for which the association was negative). The results of the fuzzy-set qualitative comparative analysis models confirm that the combination of high levels of cognitive dimension of attitudes, perspective-taking and emotional clarity explained high levels of the behavioural dimension of attitude. Empathy and emotional intelligence are predictors of nurses' attitudes towards communication, and the cognitive dimension of attitude is a good predictor of the behavioural dimension of attitudes towards communication of nurses in both regression models and fuzzy-set qualitative comparative analysis.
In general, the fuzzy-set qualitative comparative analysis models appear to be better predictors than the regression models are. These findings can be used to evaluate current practices, establish intervention strategies and assess their effectiveness. The evaluation of these variables and their relationships is important in creating a satisfied and sustainable workforce and improving quality of care and patient health. © 2018 John Wiley & Sons Ltd.
A rational model of function learning.
Lucas, Christopher G; Griffiths, Thomas L; Williams, Joseph J; Kalish, Michael L
2015-10-01
Theories of how people learn relationships between continuous variables have tended to focus on two possibilities: that people are estimating explicit functions, or that they are performing associative learning supported by similarity. We provide a rational analysis of function learning, drawing on work on regression in machine learning and statistics. Using the equivalence of Bayesian linear regression and Gaussian processes, which provide a probabilistic basis for similarity-based function learning, we show that learning explicit rules and using similarity can be seen as two views of one solution to this problem. We use this insight to define a rational model of human function learning that combines the strengths of both approaches and accounts for a wide variety of experimental results.
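A minimal version of the Gaussian-process view of function learning, using scikit-learn: the GP interpolates like a similarity-based learner while its kernel encodes rule-like smoothness assumptions. The target function and training design are illustrative, not taken from the paper's experiments.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A learner observes a few (x, y) pairs from an unknown smooth function.
x_train = np.linspace(0, 2 * np.pi, 15)[:, None]
y_train = np.sin(x_train).ravel()

# The RBF kernel plays the role of similarity: nearby x values get similar y.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(x_train, y_train)

# Predict at a novel point, with the GP's own uncertainty estimate.
y_pred, y_std = gp.predict(np.array([[np.pi / 2]]), return_std=True)
```

The same machinery is exactly Bayesian linear regression in a feature space, which is the equivalence the abstract uses to unify rule-based and similarity-based accounts.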
Li, Haojie; Graham, Daniel J
2016-08-01
This paper estimates the causal effect of 20mph zones on road casualties in London. Potential confounders in the key relationship of interest are included within outcome regression and propensity score models, and the models are then combined to form a doubly robust estimator. A total of 234 treated zones and 2844 potential control zones are included in the data sample. The propensity score model is used to select a viable control group which has common support in the covariate distributions. We compare the doubly robust estimates with those obtained using three other methods: inverse probability weighting, regression adjustment, and propensity score matching. The results indicate that 20mph zones have had a significant causal impact on road casualty reduction in both absolute and proportional terms. Copyright © 2016 Elsevier Ltd. All rights reserved.
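The doubly robust construction — an outcome-regression prediction corrected by inverse-propensity-weighted residuals (the AIPW estimator) — can be sketched on synthetic data with a known treatment effect of 2.0. This is a generic illustration, not the paper's specification for 20mph zones.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=(n, 1))                       # confounder
p = 1 / (1 + np.exp(-x[:, 0]))                    # true propensity score
t = (rng.uniform(size=n) < p).astype(int)         # treatment assignment
y = 2.0 * t + 1.5 * x[:, 0] + rng.normal(size=n)  # outcome; true effect = 2.0

# Propensity-score model and separate outcome models for treated and control.
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
m1 = LinearRegression().fit(x[t == 1], y[t == 1])
m0 = LinearRegression().fit(x[t == 0], y[t == 0])

# AIPW: outcome-model prediction plus inverse-propensity-weighted residuals.
mu1, mu0 = m1.predict(x), m0.predict(x)
aipw = np.mean(mu1 - mu0
               + t * (y - mu1) / ps
               - (1 - t) * (y - mu0) / (1 - ps))
```

The estimator is "doubly robust" because it remains consistent if either the outcome models or the propensity model is correctly specified; here both are, so it lands near the true effect.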
NASA Astrophysics Data System (ADS)
Abunama, Taher; Othman, Faridah
2017-06-01
Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee a sufficient treatment of wastewater before discharging it to the environment. The main objectives of this study are to statistically analyze and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years' weekly influent data (156 weeks) has been conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) have been tried to select the most fitted model, which was utilized to forecast the wastewater inflow rates. The linear regression analysis was applied to test the correlation between the observed and predicted influents. The ARIMA (3, 1, 3) model was selected with the highest R-square and lowest normalized Bayesian Information Criterion (BIC) value, and accordingly the wastewater inflow rates were forecast for an additional 52 weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.
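The order-selection step can be sketched in miniature: fit autoregressive models of increasing order by least squares and keep the one minimizing BIC. The series below is simulated from an AR(2) process, not the STP influent data, and full ARIMA fitting would normally be done with a library such as statsmodels.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
y = np.zeros(n)
for i in range(2, n):  # simulate a stable AR(2) process with strong dynamics
    y[i] = 1.5 * y[i - 1] - 0.75 * y[i - 2] + rng.normal()

def ar_bic(series, p):
    """Least-squares AR(p) fit; return the model's BIC."""
    m = len(series) - p
    Y = series[p:]
    X = np.column_stack([series[p - k:len(series) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma2 = np.mean((Y - X @ beta) ** 2)
    return m * np.log(sigma2) + (p + 1) * np.log(m)

bic = {p: ar_bic(y, p) for p in (1, 2, 3, 4)}
best_p = min(bic, key=bic.get)
```

BIC's log-sample-size penalty is what stops the search from always preferring the largest order, which is the role it plays in the ARIMA (3, 1, 3) selection above.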
García Nieto, P J; Alonso Fernández, J R; de Cos Juez, F J; Sánchez Lasheras, F; Díaz Muñiz, C
2013-04-01
Cyanotoxins, a kind of poisonous substances produced by cyanobacteria, are responsible for health risks in drinking and recreational waters. As a result, anticipate its presence is a matter of importance to prevent risks. The aim of this study is to use a hybrid approach based on support vector regression (SVR) in combination with genetic algorithms (GAs), known as a genetic algorithm support vector regression (GA-SVR) model, in forecasting the cyanotoxins presence in the Trasona reservoir (Northern Spain). The GA-SVR approach is aimed at highly nonlinear biological problems with sharp peaks and the tests carried out proved its high performance. Some physical-chemical parameters have been considered along with the biological ones. The results obtained are two-fold. In the first place, the significance of each biological and physical-chemical variable on the cyanotoxins presence in the reservoir is determined with success. Finally, a predictive model able to forecast the possible presence of cyanotoxins in a short term was obtained. Copyright © 2013 Elsevier Inc. All rights reserved.
Rahman, Anisur; Faqeerzada, Mohammad A; Cho, Byoung-Kwan
2018-03-14
Allicin and soluble solid content (SSC) in garlic are responsible for its pungent flavor and odor. However, current conventional methods such as the use of high-pressure liquid chromatography and a refractometer have critical drawbacks in that they are time-consuming, labor-intensive and destructive procedures. The present study aimed to predict allicin and SSC in garlic using hyperspectral imaging in combination with variable selection algorithms and calibration models. Hyperspectral images of 100 garlic cloves were acquired that covered two spectral ranges, from which the mean spectra of each clove were extracted. The calibration models included partial least squares (PLS) and least squares-support vector machine (LS-SVM) regression, as well as different spectral pre-processing techniques, from which the highest performing spectral preprocessing technique and spectral range were selected. Then, variable selection methods, such as regression coefficients, variable importance in projection (VIP) and the successive projections algorithm (SPA), were evaluated for the selection of effective wavelengths (EWs). Furthermore, PLS and LS-SVM regression methods were applied to quantitatively predict the quality attributes of garlic using the selected EWs. Of the established models, the SPA-LS-SVM model obtained an Rpred2 of 0.90 and standard error of prediction (SEP) of 1.01% for SSC prediction, whereas the VIP-LS-SVM model produced the best result with an Rpred2 of 0.83 and SEP of 0.19 mg g(-1) for allicin prediction in the range 1000-1700 nm. Furthermore, chemical images of garlic were developed using the best predictive model to facilitate visualization of the spatial distributions of allicin and SSC. The present study clearly demonstrates that hyperspectral imaging combined with an appropriate chemometrics method can potentially be employed as a fast, non-invasive method to predict the allicin and SSC in garlic. © 2018 Society of Chemical Industry.
Self-organising mixture autoregressive model for non-stationary time series modelling.
Ni, He; Yin, Hujun
2008-12-01
Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
Assessing NARCCAP climate model effects using spatial confidence regions.
French, Joshua P; McGinnis, Seth; Schwartzman, Armin
2017-01-01
We assess similarities and differences between model effects for the North American Regional Climate Change Assessment Program (NARCCAP) climate models using varying classes of linear regression models. Specifically, we consider how the average temperature effect differs for the various global and regional climate model combinations, including assessment of possible interaction between the effects of global and regional climate models. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We also show conclusively that results from pointwise inference are misleading, and that accounting for multiple comparisons is important for making proper inference.
Lamborn, Susie D; Felbab, Amanda J
2003-10-01
This study evaluated both the parenting styles and family ecologies models with interview responses from 93 14- and 15-year-old African-American adolescents. The parenting styles model was more strongly represented in both open-ended and structured interview responses. Using variables from the structured interview as independent variables, regression analyses contrasted each model with a joint model for predicting self-esteem, self-reliance, work orientation, and ethnic identity. Overall, the findings suggest that a joint model that combines elements from both models provides a richer understanding of African-American families.
NASA Astrophysics Data System (ADS)
Mai, W.; Zhang, J.-F.; Zhao, X.-M.; Li, Z.; Xu, Z.-W.
2017-11-01
Wastewater from the dye industry is typically analyzed using a standard method for measurement of chemical oxygen demand (COD) or by a single-wavelength spectroscopic method. To overcome the disadvantages of these methods, ultraviolet-visible (UV-Vis) spectroscopy was combined with principal component regression (PCR) and partial least squares regression (PLSR) in this study. Unlike the standard method, this method does not require digestion of the samples for preparation. Experiments showed that the PLSR model offered high prediction performance for COD, with a mean relative error of about 5% for two dyes. This error is similar to that obtained with the standard method. In this study, the precision of the PLSR model decreased with the number of dye compounds present. It is likely that multiple models will be required in reality, and the complexity of a COD monitoring system would be greatly reduced if the PLSR model is used because it can include several dyes. UV-Vis spectroscopy with PLSR successfully enhanced the performance of COD prediction for dye wastewater and showed good potential for application in on-line water quality monitoring.
Burkin, M M; Molchanova, E V
To assess the impact of indicators of social stress on demographic processes in regions of the Russian Federation using statistical methods. The Rosstat data «Regions of Russia» and «Health care in Russia» were used as the information base. Indicators for about 80 subjects of the Russian Federation (without autonomous areas) for a ten-year period (2005-2014) were compiled into a database consisting of the following blocks: medico-demographic situation, level of economic development of the territory and wellbeing of the population, development of social infrastructure, ecological and climatic conditions, and scientific research and innovation. In total, there were about 70 indicators. Panel data for 80 regions of Russia over 10 years, which combine both indicators of spatial type (cross-section data) and information in the form of time series, were used. Various regression models for the panel data were estimated: the pooled regression model, the fixed-effects model and the random-effects model. The main demographic indicators (life expectancy, birth rate, mortality from external causes) are to a great extent connected with socio-economic factors. Social tension (social stress) caused by the transition to a market economy plays an important role. An integral assessment of the impact of average per capita monetary income, the incidence of alcoholism and alcoholic psychoses, criminality, per capita sales of alcoholic beverages and marriage relations on demographic indicators is presented. The results of the modeling allow the priority directions in the development of mental health and psychotherapeutic services in the regions of the Russian Federation to be defined.
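The pooled versus fixed-effects contrast mentioned above can be sketched on toy panel data: when a regressor is correlated with region-specific effects, pooled OLS is biased while the within (demeaning) estimator recovers the true slope. The panel dimensions mirror the study (80 regions, 10 years) but the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(10)
n_regions, n_years = 80, 10
alpha = rng.normal(0, 2, n_regions)                         # region effects
x = alpha[:, None] + rng.normal(size=(n_regions, n_years))  # regressor correlated with them
y = 1.0 * x + alpha[:, None] + rng.normal(0, 0.5, size=(n_regions, n_years))

# Pooled OLS: ignores region effects, so the slope absorbs their influence.
beta_pooled = np.polyfit(x.ravel(), y.ravel(), 1)[0]

# Fixed-effects (within) estimator: demean each region's series first.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_fe = (xd.ravel() @ yd.ravel()) / (xd.ravel() @ xd.ravel())
```

A random-effects model sits between the two, trading some of this bias protection for efficiency when the region effects are uncorrelated with the regressors.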
Time series forecasting using ERNN and QR based on Bayesian model averaging
NASA Astrophysics Data System (ADS)
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. It was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean-square-error sense.
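A minimal sketch of the combination idea, with a simple nonlinear model standing in for the ERNN and BIC-based weights approximating posterior model probabilities (a common BMA approximation); the data and both models are illustrative only, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 4, 120)
y = 0.3 * t**2 + np.sin(3 * t) + rng.normal(0, 0.1, t.size)
train, test = slice(0, 100), slice(100, 120)

def bic(resid, k):
    """Bayesian information criterion from residuals and parameter count."""
    n = resid.size
    return n * np.log(resid @ resid / n) + k * np.log(n)

# model 1: quadratic regression (QR)
c_qr = np.polyfit(t[train], y[train], 2)
f_qr = np.polyval(c_qr, t)

# model 2: stand-in nonlinear model (quadratic plus an oscillatory term)
A = np.column_stack([t**2, t, np.sin(3 * t), np.ones_like(t)])
c_nl, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
f_nl = A @ c_nl

# BMA-style weights from BIC differences
b = np.array([bic(y[train] - f_qr[train], 3), bic(y[train] - f_nl[train], 4)])
w = np.exp(-0.5 * (b - b.min()))
w /= w.sum()
f_bma = w[0] * f_qr + w[1] * f_nl

mse = lambda f: np.mean((y[test] - f[test]) ** 2)
```

The averaging weights concentrate on whichever model fits better, so the combined forecast tracks the stronger member rather than splitting the difference blindly.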
Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui
2016-01-01
Background Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare performance among the three models. Results The morbidity of hepatitis from Jan 2005 to Dec 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factors of the basic GRNN model and the combined model were 1.8 and 0.07, respectively. All four error measures of the hybrid model were lower than those of the two single models in validation, while the GRNN model had the lowest values of the three models in fitting. Conclusions The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555
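The GRNN at the heart of such a hybrid is essentially Gaussian-kernel regression, so the fit-to-actual correction step can be sketched in a few lines; the seasonal series and the stand-in "ARIMA fitted values" below are synthetic, not the Heng County data.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_new, sigma):
    """GRNN: Gaussian-kernel weighted average of training targets."""
    d2 = (x_new[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

# toy hybrid: the "ARIMA fit" is a biased, noisy distortion of the truth;
# the GRNN learns the fitted-value -> actual-value correction
rng = np.random.default_rng(3)
actual = 10 + 3 * np.sin(np.arange(96) * 2 * np.pi / 12)   # seasonal incidence
arima_fit = 0.8 * actual + 1.5 + rng.normal(0, 0.1, 96)    # stand-in ARIMA fit
corrected = grnn_predict(arima_fit[:84], actual[:84], arima_fit[84:], sigma=0.3)

mae_arima = np.mean(np.abs(arima_fit[84:] - actual[84:]))
mae_hybrid = np.mean(np.abs(corrected - actual[84:]))
```

In this toy setting the GRNN removes the systematic bias of the first-stage fit, which is the mechanism by which the hybrid can beat either single model.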
Sternberg, Maya R; Schleicher, Rosemary L; Pfeiffer, Christine M
2013-06-01
The collection of articles in this supplement issue provides insight into the association of various covariates with concentrations of biochemical indicators of diet and nutrition (biomarkers), beyond age, race, and sex, using linear regression. We studied 10 specific sociodemographic and lifestyle covariates in combination with 29 biomarkers from NHANES 2003-2006 for persons aged ≥ 20 y. The covariates were organized into 2 sets or "chunks": sociodemographic (age, sex, race-ethnicity, education, and income) and lifestyle (dietary supplement use, smoking, alcohol consumption, BMI, and physical activity) and fit in hierarchical fashion by using each category or set of related variables to determine how covariates, jointly, are related to biomarker concentrations. In contrast to many regression modeling applications, all variables were retained in a full regression model regardless of significance to preserve the interpretation of the statistical properties of β coefficients, P values, and CIs and to keep the interpretation consistent across a set of biomarkers. The variables were preselected before data analysis, and the data analysis plan was designed at the outset to minimize the reporting of false-positive findings by limiting the amount of preliminary hypothesis testing. Although we generally found that demographic differences seen in biomarkers were over- or underestimated when ignoring other key covariates, the demographic differences generally remained significant after adjusting for sociodemographic and lifestyle variables. These articles are intended to provide a foundation to researchers to help them generate hypotheses for future studies or data analyses and/or develop predictive regression models using the wealth of NHANES data.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occur often in regression analysis, frequently arising in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but others are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they introduced two working models: the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, in which there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with those of existing competitors. We present a simulation study to compare the finite-sample performance of the various methods with respect to bias, efficiency, and robustness to model misspecification.
The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
NASA Astrophysics Data System (ADS)
Das, Bappa; Sahoo, Rabi N.; Pargal, Sourabh; Krishna, Gopal; Verma, Rakesh; Chinnusamy, Viswanathan; Sehgal, Vinay K.; Gupta, Vinod K.; Dash, Sushanta K.; Swain, Padmini
2018-03-01
In the present investigation, the changes in sucrose, reducing and total sugar content due to water-deficit stress in rice leaves were modeled using visible, near infrared (VNIR) and shortwave infrared (SWIR) spectroscopy. The objectives of the study were to identify the best vegetation indices and the most suitable multivariate technique based on analysis of hyperspectral data (350 to 2500 nm) and sucrose, reducing sugar and total sugar content measured at different stress levels in 16 rice genotypes. Spectral data analysis was done to identify suitable spectral indices and models for sucrose estimation. Novel spectral indices in the near infrared (NIR) range, viz. the ratio spectral index (RSI) and normalised difference spectral index (NDSI), sensitive to sucrose, reducing sugar and total sugar content were identified and subsequently calibrated and validated. The RSI and NDSI models had R2 values of 0.65, 0.71 and 0.67 and RPD values of 1.68, 1.95 and 1.66 for sucrose, reducing sugar and total sugar, respectively, on the validation dataset. Different multivariate spectral models, such as artificial neural network (ANN), multivariate adaptive regression splines (MARS), multiple linear regression (MLR), partial least square regression (PLSR), random forest regression (RFR) and support vector machine regression (SVMR), were also evaluated. The best performing multivariate models for sucrose, reducing sugars and total sugars were found to be MARS, ANN and MARS, respectively, with RPD values of 2.08, 2.44 and 1.93. Results indicated that VNIR and SWIR spectroscopy combined with multivariate calibration can be used as a reliable alternative to conventional methods for measurement of sucrose, reducing sugars and total sugars of rice under water-deficit stress, as this technique is fast, economic and noninvasive.
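The two index families named above have standard two-band forms, sketched below; the band positions are arbitrary placeholders, not the wavelengths the authors selected.

```python
import numpy as np

def rsi(spectrum, b1, b2):
    """Ratio spectral index between two band reflectances."""
    return spectrum[..., b1] / spectrum[..., b2]

def ndsi(spectrum, b1, b2):
    """Normalised difference spectral index between two bands."""
    r1, r2 = spectrum[..., b1], spectrum[..., b2]
    return (r1 - r2) / (r1 + r2)

# toy reflectance spectra: 5 samples over 2151 bands (350-2500 nm, 1 nm step),
# so band index = wavelength - 350
rng = np.random.default_rng(4)
refl = rng.uniform(0.1, 0.6, (5, 2151))
idx_rsi = rsi(refl, 900 - 350, 1100 - 350)    # placeholder bands: 900, 1100 nm
idx_ndsi = ndsi(refl, 900 - 350, 1100 - 350)
```

In practice, index pairs are screened exhaustively over band combinations and the best-correlating pair is then calibrated against the measured sugar contents.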
Computing group cardinality constraint solutions for logistic regression problems.
Zhang, Yong; Kwon, Dongjin; Pohl, Kilian M
2017-01-01
We derive an algorithm to directly solve logistic regression under a group cardinality (sparsity) constraint and use it to classify intra-subject MRI sequences (e.g. cine MRIs) of healthy versus diseased subjects. Group cardinality constraint models are often applied to medical images in order to avoid overfitting of the classifier to the training data. Solutions within these models are generally determined by relaxing the cardinality constraint to a weighted feature selection scheme. However, these solutions relate to the original sparse problem only under specific assumptions, which generally do not hold for medical image applications. In addition, inferring clinical meaning from features weighted by a classifier is an ongoing topic of discussion. To avoid weighting features, we propose to directly solve the group cardinality constrained logistic regression problem by generalizing the Penalty Decomposition method. To do so, we assume that an intra-subject series of images represents repeated samples of the same disease patterns, and we model this assumption by combining the series of measurements created by a feature across time into a single group. Our algorithm then derives a solution within that model by decoupling the minimization of the logistic regression function from enforcement of the group sparsity constraint. The minimum of the smooth and convex logistic regression problem is determined via gradient descent, while we derive a closed-form solution for finding a sparse approximation of that minimum. We apply our method to cine MRI of 38 healthy controls and 44 adult patients who received reconstructive surgery for Tetralogy of Fallot (TOF) during infancy. Our method correctly identifies regions impacted by TOF and generally obtains statistically significantly higher classification accuracy than alternative solutions to this model, i.e., ones relaxing group cardinality constraints. Copyright © 2016 Elsevier B.V. All rights reserved.
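A projected-gradient sketch of the idea: gradient descent on the logistic loss alternated with a closed-form group hard-thresholding projection. This is a simplification of the full Penalty Decomposition scheme described above, and the data and group structure are toy examples.

```python
import numpy as np

def project_group_sparse(w, groups, k):
    """Closed-form projection: keep the k groups with largest L2 norm."""
    ids = np.unique(groups)
    norms = np.array([np.linalg.norm(w[groups == g]) for g in ids])
    keep = ids[np.argsort(norms)[-k:]]
    return np.where(np.isin(groups, keep), w, 0.0)

def sparse_logistic(X, y, groups, k, lr=0.1, iters=500):
    """Projected-gradient sketch of group-cardinality logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # logistic probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on the log-loss
        w = project_group_sparse(w, groups, k)
    return w

# toy data: only the first group (features 0-1) drives the labels
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
groups = np.array([0, 0, 1, 1, 2, 2])         # e.g. one group per time series
w = sparse_logistic(X, y, groups, k=1)
```

The projection zeroes whole groups rather than individual coefficients, which is what preserves the "repeated samples of one pattern" grouping across the image series.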
Li, Qi-Quan; Wang, Chang-Quan; Zhang, Wen-Jiang; Yu, Yong; Li, Bing; Yang, Juan; Bai, Gen-Chuan; Cai, Yan
2013-02-01
In this study, a radial basis function neural network model combined with ordinary kriging (RBFNN_OK) was adopted to predict the spatial distribution of soil nutrients (organic matter and total N) in a typical hilly region of Sichuan Basin, Southwest China, and its performance was compared with that of ordinary kriging (OK) and regression kriging (RK). All three methods produced similar soil nutrient maps. However, compared with those obtained by the multiple linear regression model, the correlation coefficients between the measured and predicted values of soil organic matter and total N obtained by the neural network model increased by 12.3% and 16.5%, respectively, suggesting that the neural network model could more accurately capture the complicated relationships between soil nutrients and quantitative environmental factors. Error analyses of the prediction values at 469 validation points indicated that the mean absolute error (MAE), mean relative error (MRE), and root mean squared error (RMSE) of RBFNN_OK were 6.9%, 7.4%, and 5.1% (for soil organic matter) and 4.9%, 6.1%, and 4.6% (for soil total N) smaller than those of OK (P<0.01), and 2.4%, 2.6%, and 1.8% (for soil organic matter) and 2.1%, 2.8%, and 2.2% (for soil total N) smaller than those of RK (P<0.05).
Zhou, Yang; Fu, Xiaping; Ying, Yibin; Fang, Zhenhuan
2015-06-23
A fiber-optic probe system was developed to estimate the optical properties of turbid media based on spatially resolved diffuse reflectance. Because of the limitations of numerical calculation of the radiative transfer equation (RTE), diffusion approximation (DA) and Monte Carlo simulations (MC), support vector regression (SVR) was introduced to model the relationship between diffuse reflectance values and optical properties. The SVR models of four collection fibers were trained on phantoms in a calibration set covering a wide range of optical properties representative of different applications; the optical properties of phantoms in the prediction set were then predicted after an optimal search on the SVR models. The results indicated that the SVR model was capable of describing the relationship with little deviation in forward validation. The correlation coefficients (R) of the reduced scattering coefficient μs' and absorption coefficient μa in the prediction set were 0.9907 and 0.9980, respectively. The root mean square errors of prediction (RMSEP) of μs' and μa in inverse validation were 0.411 cm^-1 and 0.338 cm^-1, respectively. These results indicated that the integrated fiber-optic probe system combined with the SVR model was suitable for fast and accurate estimation of the optical properties of turbid media based on spatially resolved diffuse reflectance. Copyright © 2015 Elsevier B.V. All rights reserved.
Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya
2003-01-01
The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error; this was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. Performance of the hybrid method was intermediate between the individual approximators. Error in the response variable is smaller than that shown in the figure because of a distortion scale factor.
The overall performance of the approximators was considered satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.
Wilderness recreation participation: Projections for the next half century
J. M. Bowker; D. Murphy; H. K. Cordell; D. B. K. English; J. C. Bergstrom; C. M. Starbuck; C. J. Betz; G. T. Green; P. Reed
2007-01-01
This paper explores the influence of demographic and spatial variables on individual participation in wildland area recreation. Data from the National Survey on Recreation and the Environment (NSRE) are combined with GIS-based distance measures to develop nonlinear regression models used to predict both participation and the number of days of participation in...
Predicting site locations for biomass using facilities with Bayesian methods
Timothy M. Young; James H. Perdue; Xia Huang
2017-01-01
Logistic regression models combined with Bayesian inference were developed to predict locations and quantify factors that influence the siting of biomass-using facilities that use woody biomass in the Southeastern United States. Predictions were developed for two groups of mills, one representing larger capacity mills similar to pulp and paper mills (Group II...
ERIC Educational Resources Information Center
Biederman, Gerald B.; Davey, Valerie A.
1995-01-01
This counterpoint responds to Phillip Ward's commentary on a study by Gerald Biederman and others which concluded that verbal reinforcement in combination with interactive modeling strategies may produce confusion in children with language/learning difficulties. The counterpoint argues that the commentary indicates problems with semantics and with…
Nonparametric regression applied to quantitative structure-activity relationships
Constans; Hirst
2000-03-01
Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performance was benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
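A local linear regressor, one of the nonparametric methods named above, can be sketched as a kernel-weighted least squares line fit at each evaluation point; the one-dimensional toy data below are illustrative, not the QSAR descriptors.

```python
import numpy as np

def local_linear(x_train, y_train, x0, h):
    """Local linear regressor: weighted least squares line fit at x0."""
    w = np.exp(-0.5 * ((x_train - x0) / h) ** 2)    # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x_train), x_train - x0])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y_train)
    return beta[0]                                  # local intercept = fit at x0

# toy "activity vs. descriptor" curve
rng = np.random.default_rng(6)
x = np.sort(rng.uniform(-2, 2, 80))
y = np.sin(2 * x) + rng.normal(0, 0.1, 80)
fit = np.array([local_linear(x, y, x0, h=0.3) for x0 in x])
mae = np.mean(np.abs(fit - np.sin(2 * x)))
```

Unlike the plain Nadaraya-Watson estimator, the local linear fit corrects the first-order boundary bias, which matters for the low-density designs mentioned above.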
Drying kinetics and characteristics of combined infrared-vacuum drying of button mushroom slices
NASA Astrophysics Data System (ADS)
Salehi, Fakhreddin; Kashaninejad, Mahdi; Jafarianlari, Ali
2017-05-01
Infrared-vacuum drying characteristics of button mushroom (Agaricus bisporus) were evaluated in a combined dryer system. The effects of drying parameters, including infrared radiation power (150-375 W), system pressure (5-15 kPa) and time (0-160 min), on the drying kinetics and characteristics of button mushroom slices were investigated. Both the infrared lamp power and the vacuum pressure influenced the drying time of button mushroom slices. The rate constants of nine different thin-layer drying kinetic models were established by nonlinear regression analysis of the experimental data; the moisture ratios were found to be affected mainly by the infrared power level, while system pressure had little effect. The regression results showed that the Page model satisfactorily described the drying behavior of button mushroom slices, with the highest R value and lowest SE values. The effective moisture diffusivity increases with power and ranges between 0.83 and 2.33 × 10^-9 m^2/s. The rise in infrared power had an adverse effect on colour: ΔE increased with increasing infrared radiation power.
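The Page model, MR = exp(-k·t^n), can be fitted by the usual double-log linearisation ln(-ln MR) = ln k + n·ln t; the rate parameters below are invented for the sketch, not the paper's estimates.

```python
import numpy as np

# synthetic moisture-ratio data following a Page model with small noise
rng = np.random.default_rng(7)
t = np.arange(10, 161, 10).astype(float)          # drying time, min
k_true, n_true = 0.004, 1.3                       # placeholder Page parameters
mr = np.exp(-k_true * t**n_true) * (1 + rng.normal(0, 0.005, t.size))

# ln(-ln MR) = ln k + n * ln t  ->  ordinary least squares on the logs
n_hat, lnk_hat = np.polyfit(np.log(t), np.log(-np.log(mr)), 1)
k_hat = np.exp(lnk_hat)
```

The slope recovers the Page exponent n and the intercept recovers ln k, which is how thin-layer rate constants are commonly extracted before comparing R and SE across candidate models.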
Teoh, Shao Thing; Kitamura, Miki; Nakayama, Yasumune; Putri, Sastia; Mukai, Yukio; Fukusaki, Eiichiro
2016-08-01
In recent years, the advent of high-throughput omics technology has made possible a new class of strain engineering approaches, based on identification of possible gene targets for phenotype improvement from omic-level comparison of different strains or growth conditions. Metabolomics, with its focus on the omic level closest to the phenotype, lends itself naturally to this semi-rational methodology. When a quantitative phenotype such as growth rate under stress is considered, regression modeling using multivariate techniques such as partial least squares (PLS) is often used to identify metabolites correlated with the target phenotype. However, linear modeling techniques such as PLS require a consistent metabolite-phenotype trend across the samples, which may not be the case when outliers or multiple conflicting trends are present in the data. To address this, we proposed a data-mining strategy that utilizes random sample consensus (RANSAC) to select subsets of samples with consistent trends for construction of better regression models. By applying a combination of RANSAC and PLS (RANSAC-PLS) to a dataset from a previous study (gas chromatography/mass spectrometry metabolomics data and 1-butanol tolerance of 19 yeast mutant strains), new metabolites were indicated to be correlated with tolerance within certain subsets of the samples. The relevance of these metabolites to 1-butanol tolerance was then validated using single-deletion strains of the corresponding metabolic genes. The results showed that RANSAC-PLS is a promising strategy to identify unique metabolites that provide additional hints for phenotype improvement, which could not be detected by traditional PLS modeling using the entire dataset. Copyright © 2016 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
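The RANSAC loop itself can be sketched with a plain linear model standing in for PLS: repeatedly fit on a random subset, count inliers, and keep the largest consensus set. The strain data below are synthetic, with a few samples deliberately placed off the main metabolite-phenotype trend.

```python
import numpy as np

def ransac_linear(X, y, n_sub, thresh, n_iter=200, seed=0):
    """RANSAC over a linear model (stand-in for PLS): fit on random subsets
    and keep the consensus set with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(y), dtype=bool)
    Xd = np.column_stack([X, np.ones(len(y))])
    for _ in range(n_iter):
        idx = rng.choice(len(y), n_sub, replace=False)
        beta, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
        inliers = np.abs(Xd @ beta - y) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    beta, *_ = np.linalg.lstsq(Xd[best_inliers], y[best_inliers], rcond=None)
    return beta, best_inliers

# toy: 15 strains follow one metabolite-phenotype trend, 4 do not
rng = np.random.default_rng(8)
x = rng.uniform(0, 1, 19)
y = 2.0 * x + 0.5 + rng.normal(0, 0.05, 19)
y[:4] += rng.uniform(1.0, 2.0, 4)                 # conflicting strains
beta, inliers = ransac_linear(x[:, None], y, n_sub=3, thresh=0.2)
```

The final model is refit on the consensus set only, which is how trend-breaking strains are excluded before the regression coefficients are interpreted.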
Predicting biological condition in southern California streams
Brown, Larry R.; May, Jason T.; Rehn, Andrew C.; Ode, Peter R.; Waite, Ian R.; Kennen, Jonathan G.
2012-01-01
As understanding of the complex relations among environmental stressors and biological responses improves, a logical next step is predictive modeling of biological condition at unsampled sites. We developed a boosted regression tree (BRT) model of biological condition, as measured by a benthic macroinvertebrate index of biotic integrity (B-IBI), for streams in urbanized Southern Coastal California. We also developed a multiple linear regression (MLR) model as a benchmark for comparison with the BRT model. The BRT model explained 66% of the variance in B-IBI, identifying watershed population density and combined percentage agricultural and urban land cover in the riparian buffer as the most important predictors of B-IBI, with watershed mean precipitation and watershed density of manmade channels also important. The MLR model explained 48% of the variance in B-IBI and included watershed population density and combined percentage agricultural and urban land cover in the riparian buffer. For a verification data set, the BRT model correctly classified 75% of impaired sites (B-IBI < 40) and 78% of unimpaired sites (B-IBI ≥ 40). For the same verification data set, the MLR model correctly classified 69% of impaired sites and 87% of unimpaired sites. The BRT model should not be used to predict B-IBI for specific sites; however, the model can be useful for general applications such as identifying and prioritizing regions for monitoring, remediation or preservation, stratifying new bioassessments according to anticipated biological condition, or assessing the potential for change in stream biological condition based on anticipated changes in population density and development in stream buffers.
Numerical investigation on the regression rate of hybrid rocket motor with star swirl fuel grain
NASA Astrophysics Data System (ADS)
Zhang, Shuai; Hu, Fan; Zhang, Weihua
2016-10-01
Although hybrid rocket motor is prospected to have distinct advantages over liquid and solid rocket motor, low regression rate and insufficient efficiency are two major disadvantages which have prevented it from being commercially viable. In recent years, complex fuel grain configurations are attractive in overcoming the disadvantages with the help of Rapid Prototyping technology. In this work, an attempt has been made to numerically investigate the flow field characteristics and local regression rate distribution inside the hybrid rocket motor with complex star swirl grain. A propellant combination with GOX and HTPB has been chosen. The numerical model is established based on the three dimensional Navier-Stokes equations with turbulence, combustion, and coupled gas/solid phase formulations. The calculated fuel regression rate is compared with the experimental data to validate the accuracy of numerical model. The results indicate that, comparing the star swirl grain with the tube grain under the conditions of the same port area and the same grain length, the burning surface area rises about 200%, the spatially averaged regression rate rises as high as about 60%, and the oxidizer can combust sufficiently due to the big vortex around the axis in the aft-mixing chamber. The combustion efficiency of star swirl grain is better and more stable than that of tube grain.
Wilderjans, Tom Frans; Vande Gaer, Eva; Kiers, Henk A L; Van Mechelen, Iven; Ceulemans, Eva
2017-03-01
In the behavioral sciences, many research questions pertain to a regression problem, in that one wants to predict a criterion on the basis of a number of predictors. Although in many cases ordinary least squares regression will suffice, sometimes the prediction problem is more challenging, for three reasons. First, multiple highly collinear predictors can be available, making it difficult to grasp their mutual relations as well as their relations to the criterion. In that case, it may be very useful to reduce the predictors to a few summary variables, on which one regresses the criterion and which at the same time yield insight into the predictor structure. Second, the population under study may consist of a few unknown subgroups that are characterized by different regression models. Third, the obtained data are often hierarchically structured, with, for instance, observations being nested within persons or participants within groups or countries. Although some methods have been developed that partially meet these challenges (i.e., principal covariates regression (PCovR), clusterwise regression (CR), and structural equation models), none of these methods adequately deals with all of them simultaneously. To fill this gap, we propose the principal covariates clusterwise regression (PCCR) method, which combines the key ideas behind PCovR (de Jong & Kiers in Chemom Intell Lab Syst 14(1-3):155-164, 1992) and CR (Späth in Computing 22(4):367-373, 1979). The PCCR method is validated by means of a simulation study and by applying it to cross-cultural data regarding satisfaction with life.
New Approach To Hour-By-Hour Weather Forecast
NASA Astrophysics Data System (ADS)
Liao, Q. Q.; Wang, B.
2017-12-01
Fine hourly forecasts at single weather stations are required in many production and everyday applications. Most previous MOS (Model Output Statistics) approaches used a linear regression model, which struggles with the nonlinear nature of weather prediction, and forecast accuracy has not been sufficient at high temporal resolution. This study predicts future meteorological elements, including temperature, precipitation, relative humidity and wind speed, in a local region over a relatively short period at the hourly level. Using hour-by-hour NWP (Numerical Weather Prediction) meteorological fields from Forcastio (https://darksky.net/dev/docs/forecast) and real-time instrumental observations from 29 stations in Yunnan and 3 stations in Tianjin, China, from June to October 2016, hour-by-hour predictions are made up to 24 hours ahead. This study presents an ensemble approach that combines information from the instrumental observations themselves with NWP. An autoregressive-moving-average (ARMA) model is used to predict future values of the observation time series. The newest NWP products are fed into equations derived by the multiple linear regression MOS technique. The residual series of the MOS outputs is handled with an autoregressive (AR) model to capture its linear structure; given the nonlinearity of atmospheric flow, a support vector machine (SVM) is also introduced. Basic data quality control and cross-validation make it possible to optimize the model parameters and perform 24-hour-ahead residual correction with the AR/SVM models. Results show that the AR technique is better than the corresponding multivariate MOS regression method, especially in the first 4 hours when the predictor is temperature. The MOS-AR combined model, which is comparable to the MOS-SVM model, outperforms MOS: the root mean square error and correlation coefficient for 2 m temperature reach 1.6 degrees Celsius and 0.91, respectively.
The proportion of 24-hour forecasts with a deviation of no more than 2 degrees Celsius is 78.75% for the MOS-AR model and 81.23% for the AR model.
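The MOS-plus-AR-residual idea described above can be sketched as follows, with a synthetic hourly temperature series and a biased NWP whose error is autocorrelated; coefficients and data are invented, not the Yunnan/Tianjin observations.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2000
truth = 20 + 5 * np.sin(np.arange(n) * 2 * np.pi / 24)   # hourly temperature
err = np.zeros(n)
for i in range(1, n):                                    # autocorrelated NWP error
    err[i] = 0.8 * err[i - 1] + rng.normal(0, 0.3)
nwp = 0.9 * truth + 2 + err                              # biased, noisy NWP

# MOS: linear regression of observations on the NWP output
a, b = np.polyfit(nwp[:1600], truth[:1600], 1)
mos = a * nwp + b

# AR(1) fitted to the MOS residual series
resid = truth[:1600] - mos[:1600]
phi = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])

# one-step-ahead corrected forecast on the verification period
corr = mos[1600:] + phi * (truth[1599:-1] - mos[1599:-1])
rmse_mos = np.sqrt(np.mean((mos[1600:] - truth[1600:]) ** 2))
rmse_ar = np.sqrt(np.mean((corr - truth[1600:]) ** 2))
```

Because the residual is persistent, the AR correction removes the predictable part of the most recent MOS error, which is the mechanism behind the early-lead-time gains reported above.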
[Spatial differentiation and impact factors of Yutian Oasis's soil surface salt based on GWR model].
Yuan, Yu Yun; Wahap, Halik; Guan, Jing Yun; Lu, Long Hui; Zhang, Qin Qin
2016-10-01
In this paper, topsoil salinity data gathered from 24 sampling sites in the Yutian Oasis were used, and nine environmental variables closely related to soil salinity were selected as influencing factors. The spatial distribution characteristics of topsoil salinity and the spatial heterogeneity of the influencing factors were then analyzed by combining spatial autocorrelation with traditional regression analysis and a geographically weighted regression (GWR) model. Results showed that topsoil salinity in the Yutian Oasis was not randomly distributed but showed strong spatial dependence, with a spatial autocorrelation index of 0.479. Groundwater salinity, groundwater depth, elevation and temperature were the main factors influencing topsoil salt accumulation in arid-land oases, and they were spatially heterogeneous. All nine selected environmental variables except soil pH had significant, spatially varying influences on topsoil salinity. The GWR model was superior to the OLS model in interpreting and estimating spatially non-stationary data, and also had a remarkable advantage in visualizing the modeling parameters.
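GWR fits a separate weighted regression at each location, with weights decaying with geographic distance, so the coefficients themselves vary in space. A minimal sketch with a synthetic, spatially varying slope (not the Yutian Oasis data):

```python
import numpy as np

def gwr_coef(coords, X, y, site, bandwidth):
    """Locally weighted regression at one site (Gaussian kernel on
    geographic distance); returns the local [intercept, slope]."""
    d = np.linalg.norm(coords - site, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xd = np.column_stack([np.ones(len(y)), X])
    WX = Xd * w[:, None]
    return np.linalg.solve(Xd.T @ WX, WX.T @ y)

# toy: salinity driven by groundwater depth, with a slope that grows eastward
rng = np.random.default_rng(10)
coords = rng.uniform(0, 10, (200, 2))
depth = rng.uniform(1, 5, 200)
slope = 0.5 + 0.1 * coords[:, 0]                  # spatially varying effect
salt = slope * depth + rng.normal(0, 0.1, 200)

west = gwr_coef(coords, depth, salt, site=np.array([1.0, 5.0]), bandwidth=1.5)
east = gwr_coef(coords, depth, salt, site=np.array([9.0, 5.0]), bandwidth=1.5)
```

The eastern site recovers a larger local slope than the western one, which is exactly the kind of spatial non-stationarity that a single OLS fit would average away.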
Kim, J; Nagano, Y; Furumai, H
2012-01-01
Easy-to-measure surrogate parameters for water quality indicators are needed for real-time monitoring as well as for generating data for model calibration and validation. In this study, a novel linear regression model for estimating total nitrogen (TN) based on two surrogate parameters is proposed, based on evaluation of pollutant loads flowing into a eutrophic lake. Based on their runoff characteristics during wet weather, turbidity and electric conductivity (EC) were selected as surrogates for particulate nitrogen (PN) and dissolved nitrogen (DN), respectively. Strong linear relationships were established between PN and turbidity and between DN and EC, and both models were subsequently combined for estimation of TN. The combined model was evaluated by comparing estimated and observed TN runoff loads during rainfall events. This analysis showed that turbidity and EC are viable surrogates for PN and DN, respectively, and that the linear regression model for TN concentration successfully estimated TN runoff loads during rainfall events and also under dry weather conditions.
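The two-surrogate construction can be sketched as a combined linear regression of TN on turbidity and EC; the coefficients and ranges below are placeholders, not the study's calibration.

```python
import numpy as np

# synthetic calibration data: PN tracks turbidity, DN tracks EC
rng = np.random.default_rng(11)
turb = rng.uniform(5, 200, 60)                   # turbidity, NTU
ec = rng.uniform(100, 600, 60)                   # electric conductivity, uS/cm
pn = 0.02 * turb + rng.normal(0, 0.05, 60)       # particulate N, mg/L
dn = 0.004 * ec + rng.normal(0, 0.05, 60)        # dissolved N, mg/L
tn = pn + dn                                     # total N = PN + DN

# combined model: TN ~ turbidity + EC
A = np.column_stack([turb, ec, np.ones(60)])
coef, *_ = np.linalg.lstsq(A, tn, rcond=None)
tn_hat = A @ coef
r2 = 1 - np.sum((tn - tn_hat) ** 2) / np.sum((tn - tn.mean()) ** 2)
```

Because TN is by definition the sum of its particulate and dissolved fractions, fitting the two surrogates jointly recovers both sub-relationships in one model.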
Structure-activity relationships between sterols and their thermal stability in oil matrix.
Hu, Yinzhou; Xu, Junli; Huang, Weisu; Zhao, Yajing; Li, Maiquan; Wang, Mengmeng; Zheng, Lufei; Lu, Baiyi
2018-08-30
Structure-activity relationships between 20 sterols and their thermal stabilities were studied in a model oil system. All sterol degradations were found to be consistent with a first-order kinetic model, with coefficients of determination (R²) higher than 0.9444. The number of double bonds in the sterol structure was negatively correlated with thermal stability, whereas the length of the branch chain was positively correlated with it. A quantitative structure-activity relationship (QSAR) model to predict the thermal stability of sterols was developed using partial least squares regression (PLSR) combined with a genetic algorithm (GA). The regression model had an R² of 0.806. Almost all sterol degradation rate constants could be predicted accurately, with a cross-validation R² of 0.680. Four important variables were selected in the optimal QSAR model; they were related to information indices, RDF descriptors, and 3D-MoRSE descriptors. Copyright © 2018 Elsevier Ltd. All rights reserved.
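First-order degradation kinetics linearise to ln C = ln C0 − kt, so the rate constant and R² can be obtained by ordinary least squares. A sketch with synthetic, noise-free data; the 0.03 min⁻¹ rate constant and the time grid are hypothetical, not values from the paper:

```python
import numpy as np

# Synthetic degradation data following first-order kinetics C = C0*exp(-k*t).
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 60.0])  # heating time (hypothetical)
conc = 100.0 * np.exp(-0.03 * t)

# Linearise: ln(C) = ln(C0) - k*t, then fit by ordinary least squares.
slope, intercept = np.polyfit(t, np.log(conc), 1)
k = -slope  # first-order rate constant

# Coefficient of determination of the linearised fit.
pred = intercept + slope * t
ss_res = np.sum((np.log(conc) - pred) ** 2)
ss_tot = np.sum((np.log(conc) - np.log(conc).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```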
Chen, Sung-Wei; Wang, Po-Chuan; Hsin, Ping-Lung; Oates, Anthony; Sun, I-Wen; Liu, Shen-Ing
2011-01-01
Microelectronic engineers are considered valuable human capital contributing significantly toward economic development, but they may encounter stressful work conditions in the context of a globalized industry. The study aimed to identify risk factors for depressive disorders, based primarily on two job stress models, the Demand-Control-Support and Effort-Reward Imbalance models, and to evaluate whether depressive disorders impair work performance in microelectronics engineers in Taiwan. The case-control study was conducted among 678 microelectronics engineers: 452 controls and 226 cases with depressive disorders, defined by a score of 17 or more on the Beck Depression Inventory and a psychiatrist's diagnosis. The self-administered questionnaires included the Job Content Questionnaire, the Effort-Reward Imbalance Questionnaire, demography, psychosocial factors, health behaviors and work performance. Hierarchical logistic regression was applied to identify risk factors for depressive disorders, and multivariate linear regressions were used to determine factors affecting work performance. By hierarchical logistic regression, the risk factors for depressive disorders were high demands, low work social support, a high effort/reward ratio and a low frequency of physical exercise. Combining the two job stress models may give better predictive power for depressive disorders than adopting either model alone. Three multivariate linear regressions provided similar results, indicating that depressive disorders are associated with impaired work performance in terms of absence, role limitation and social functioning limitation. The results may provide insight into the applicability of job stress models in a globalized high-tech industry largely concentrated in non-Western countries, and into the design of workplace preventive strategies for depressive disorders in Asian electronics engineering populations.
Loiselle, Christopher; Eby, Peter R.; Kim, Janice N.; Calhoun, Kristine E.; Allison, Kimberly H.; Gadi, Vijayakrishna K.; Peacock, Sue; Storer, Barry; Mankoff, David A.; Partridge, Savannah C.; Lehman, Constance D.
2014-01-01
Rationale and Objectives To test the ability of quantitative measures from preoperative dynamic contrast-enhanced MRI (DCE-MRI) to predict, independently and/or with the Katz pathologic nomogram, which breast cancer patients with a positive sentinel lymph node biopsy will have ≥ 4 positive axillary lymph nodes upon completion axillary dissection. Methods and Materials A retrospective review was conducted to identify clinically node-negative invasive breast cancer patients who underwent preoperative DCE-MRI, followed by sentinel node biopsy with positive findings and complete axillary dissection (6/2005 - 1/2010). Clinical/pathologic factors, primary lesion size and quantitative DCE-MRI kinetics were collected from clinical records and prospective databases. DCE-MRI parameters with univariate significance (p < 0.05) for predicting ≥ 4 positive axillary nodes were modeled with stepwise regression and compared to the Katz nomogram alone and to a combined MRI-Katz nomogram model. Results Ninety-eight patients with 99 positive sentinel biopsies met study criteria. Stepwise regression identified DCE-MRI total persistent enhancement and volume-adjusted peak enhancement as significant predictors of ≥ 4 metastatic nodes. Receiver operating characteristic (ROC) curves demonstrated an area under the curve (AUC) of 0.78 for the Katz nomogram, 0.79 for the DCE-MRI multivariate model, and 0.87 for the combined MRI-Katz model. The combined model was significantly more predictive than the Katz nomogram alone (p = 0.003). Conclusion Integration of DCE-MRI primary lesion kinetics significantly improved the accuracy of the Katz pathologic nomogram in predicting the presence of metastases in ≥ 4 nodes. DCE-MRI may help identify sentinel node-positive patients requiring further locoregional therapy. PMID:24331270
Dong, J Q; Zhang, X Y; Wang, S Z; Jiang, X F; Zhang, K; Ma, G W; Wu, M Q; Li, H; Zhang, H
2018-01-01
Plasma very low-density lipoprotein (VLDL) can be used to select for low body fat or abdominal fat (AF) in broilers, but its correlation with AF is limited. We investigated whether any other biochemical indicator can be used in combination with VLDL for a better selective effect. Nineteen plasma biochemical indicators were measured in male chickens from the Northeast Agricultural University broiler lines divergently selected for AF content (NEAUHLF) in the fed state at 46 and 48 d of age. The average concentration of every parameter over the 2 d was used for statistical analysis. Levels of these 19 plasma biochemical parameters were compared between the lean and fat lines, and the phenotypic correlations between these plasma biochemical indicators and AF traits were analyzed. Then, multiple linear regression models were constructed to select the best model for selecting against AF content, and the heritabilities of the plasma indicators contained in the best models were estimated. The results showed that 11 plasma biochemical indicators (triglycerides, total bile acid, total protein, globulin, albumin/globulin, aspartate transaminase, alanine transaminase, gamma-glutamyl transpeptidase, uric acid, creatinine, and VLDL) differed significantly between the lean and fat lines (P < 0.01), and correlated significantly with AF traits (P < 0.05). The best multiple linear regression model, based on albumin/globulin, VLDL, triglycerides, globulin, total bile acid, and uric acid, had a higher R2 (0.73) than the model based only on VLDL (0.21). The plasma parameters included in the best models had moderate heritability estimates (0.21 ≤ h2 ≤ 0.43). These results indicate that these multiple linear regression models can be used to select for lean broiler chickens. © 2017 Poultry Science Association Inc.
Random regression analyses using B-splines to model growth of Australian Angus cattle
Meyer, Karin
2005-01-01
Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
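The backbone of such analyses is the B-spline design matrix evaluated at each age at recording. Below is a sketch of the Cox-de Boor recursion and a simple fixed-effect least-squares fit, using the paper's quadratic model with knots at 0, 200, 400, 600 and 821 days; the toy growth curve is invented, and a real random regression would embed these bases in a mixed model rather than a plain regression:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions at `x` by the Cox-de Boor
    recursion; `knots` must include the repeated boundary knots."""
    x = np.asarray(x, dtype=float)
    # Degree 0: indicator of each half-open knot span.
    B = np.zeros((x.size, knots.size - 1))
    for i in range(knots.size - 1):
        B[:, i] = (x >= knots[i]) & (x < knots[i + 1])
    for d in range(1, degree + 1):
        B_next = np.zeros((x.size, knots.size - d - 1))
        for i in range(knots.size - d - 1):
            left = knots[i + d] - knots[i]
            right = knots[i + d + 1] - knots[i + 1]
            term = np.zeros(x.size)
            if left > 0:
                term += (x - knots[i]) / left * B[:, i]
            if right > 0:
                term += (knots[i + d + 1] - x) / right * B[:, i + 1]
            B_next[:, i] = term
        B = B_next
    return B

# Quadratic B-splines with knots at 0, 200, 400, 600 and 821 days, one of
# the models considered in the paper (boundary knots repeated for degree 2).
knots = np.array([0.0, 0.0, 0.0, 200.0, 400.0, 600.0, 821.0, 821.0, 821.0])
ages = np.linspace(1.0, 820.0, 50)
X = bspline_basis(ages, knots, 2)

# Toy fixed-effect growth curve fitted by least squares; linear growth is
# inside the quadratic spline space, so the fit is exact here.
weights = 40.0 + 0.8 * ages
coef, *_ = np.linalg.lstsq(X, weights, rcond=None)
```

Repeating the boundary knots degree+1 times makes the basis a partition of unity over the whole age range, which is why the basis rows sum to one.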
Predicting lettuce canopy photosynthesis with statistical and neural network models
NASA Technical Reports Server (NTRS)
Frick, J.; Precetti, C.; Mitchell, C. A.
1998-01-01
An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 μmol·mol⁻¹), photosynthetic photon flux (PPF) (600 to 1100 μmol·m⁻²·s⁻¹), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to make relatively long-range Pn predictions (≥ 6 days into the future).
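A third-order polynomial in three inputs is just a linear model over all monomials of total degree ≤ 3, so it can be fitted by ordinary least squares. A generic sketch of that design matrix and fit; the response surface and random inputs below are hypothetical stand-ins, not the published model:

```python
import itertools
import numpy as np

def poly3_design(X):
    """Design matrix with all monomials of total degree <= 3 in the
    columns of X, plus an intercept (a generic third-order polynomial)."""
    n, m = X.shape
    cols = [np.ones(n)]
    for deg in (1, 2, 3):
        for combo in itertools.combinations_with_replacement(range(m), deg):
            term = np.ones(n)
            for j in combo:
                term = term * X[:, j]
            cols.append(term)
    return np.column_stack(cols)

# Hypothetical stand data: CO2 (umol/mol), PPF (umol m-2 s-1), age (days).
rng = np.random.default_rng(0)
raw = np.column_stack([rng.uniform(600, 1500, 80),
                       rng.uniform(600, 1100, 80),
                       rng.uniform(10, 20, 80)])
Xs = (raw - raw.mean(axis=0)) / raw.std(axis=0)  # scale before cubing

# Toy canopy Pn response (linear here, so the cubic model fits it exactly).
pn = 20.0 + 2.0 * Xs[:, 0] + 1.5 * Xs[:, 1] - 0.8 * Xs[:, 2]
D = poly3_design(Xs)
beta, *_ = np.linalg.lstsq(D, pn, rcond=None)
```

Standardising the inputs before forming cubic terms keeps the design matrix well conditioned; with raw CO2 values near 1500, the cubic columns would reach ~3e9 and degrade the least-squares solution.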
Bili, Eleni; Dampala, Kaliopi; Iakovou, Ioannis; Tsolakidis, Dimitrios; Giannakou, Anastasia; Tarlatzis, Basil C
2014-08-01
The aim of this study was to determine the performance of prostate specific antigen (PSA) and ultrasound parameters, such as ovarian volume and outline, in the diagnosis of polycystic ovary syndrome (PCOS). This prospective, observational, case-controlled study included 43 women with PCOS and 40 controls. Between days 3 and 5 of the menstrual cycle, fasting serum samples were collected and transvaginal ultrasound was performed. The diagnostic performance of each parameter [total PSA (tPSA), total-to-free PSA ratio (tPSA:fPSA), ovarian volume, ovarian outline] was estimated by means of receiver operating characteristic (ROC) analysis, along with the area under the curve (AUC), threshold, sensitivity, specificity and positive (+) and negative (-) likelihood ratios (LRs). Multivariate logistic regression models, using ovarian volume and ovarian outline, were constructed. The tPSA and tPSA:fPSA ratio resulted in AUCs of 0.74 and 0.70, respectively, with moderate specificity/sensitivity and insufficient LR+/- values. In the multivariate logistic regression model, the combination of ovarian volume and outline had a sensitivity of 97.7% and a specificity of 97.5% in the diagnosis of PCOS, with +LR and -LR values of 39.1 and 0.02, respectively. In women with PCOS, the tPSA and tPSA:fPSA ratio have similar diagnostic performance. The use of a multivariate logistic regression model incorporating ovarian volume and outline offers very good diagnostic accuracy in distinguishing women with PCOS from controls. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
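A two-predictor logistic model of this kind can be sketched with plain gradient ascent. Everything below (the data, effect sizes, and the 0.5 classification threshold) is invented for illustration and is not the study's fitted model:

```python
import numpy as np

def fit_logistic(X, y, iters=1000, lr=0.5):
    """Plain gradient-ascent logistic regression with an intercept
    (a sketch of the idea, not the paper's fitting procedure)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict_proba(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Hypothetical data: ovarian volume (mL) and a binary outline score.
rng = np.random.default_rng(1)
n = 200
volume = np.r_[rng.normal(6.0, 1.5, n // 2), rng.normal(11.0, 1.5, n // 2)]
outline = np.r_[rng.binomial(1, 0.1, n // 2), rng.binomial(1, 0.9, n // 2)]
y = np.r_[np.zeros(n // 2), np.ones(n // 2)]  # 0 = control, 1 = PCOS

X = np.column_stack([(volume - volume.mean()) / volume.std(), outline])
w = fit_logistic(X, y)
p = predict_proba(w, X)
sens = np.mean(p[y == 1] >= 0.5)  # sensitivity at the 0.5 threshold
spec = np.mean(p[y == 0] < 0.5)   # specificity at the 0.5 threshold
```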
Beukinga, Roelof J; Hulshoff, Jan Binne; Mul, Véronique E M; Noordzij, Walter; Kats-Ugurlu, Gursah; Slart, Riemer H J A; Plukker, John T M
2018-06-01
Purpose To assess the value of baseline and restaging fluorine 18 (18F) fluorodeoxyglucose (FDG) positron emission tomography (PET) radiomics in predicting pathologic complete response to neoadjuvant chemotherapy and radiation therapy (NCRT) in patients with locally advanced esophageal cancer. Materials and Methods In this retrospective study, 73 patients with histologically confirmed T1/N1-3/M0 or T2-4a/N0-3/M0 esophageal cancer were treated with NCRT followed by surgery (Chemoradiotherapy for Esophageal Cancer followed by Surgery Study regimen) between October 2014 and August 2017. Clinical variables and radiomic features from baseline and restaging 18F-FDG PET were selected by univariable logistic regression and the least absolute shrinkage and selection operator. The selected variables were used to fit a multivariable logistic regression model, which was internally validated by using bootstrap resampling with 20 000 replicates. The performance of this model was compared with reference prediction models composed of maximum standardized uptake value metrics, clinical variables, and baseline NCRT radiomic features. Outcome was defined as complete versus incomplete pathologic response (tumor regression grade 1 vs 2-5 according to the Mandard classification). Results Pathologic response was complete in 16 patients (21.9%) and incomplete in 57 patients (78.1%). A prediction model combining clinical T stage and the restaging NCRT (post-NCRT) joint maximum (quantifying image orderliness) yielded an optimism-corrected area under the receiver operating characteristic curve of 0.81. The post-NCRT joint maximum was replaceable with five other redundant post-NCRT radiomic features that provided equal model performance. All reference prediction models exhibited substantially lower discriminatory accuracy.
Conclusion The combination of clinical T staging and quantitative assessment of post-NCRT 18F-FDG PET orderliness (joint maximum) provided high discriminatory accuracy in predicting pathologic complete response in patients with esophageal cancer. © RSNA, 2018. Online supplemental material is available for this article.
A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Shengzhi; Ming, Bo; Huang, Qiang
It is critically important to accurately predict the NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
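The entropy weight method assigns each member model a weight from the Shannon entropy of a per-period performance indicator: the less uniform (more informative) a model's indicator column, the larger its weight. A sketch with invented accuracy values for the MLR, ANN and SVM members:

```python
import numpy as np

def entropy_weights(perf):
    """Entropy weights from a (periods x models) matrix of a positive
    performance indicator (e.g. per-period prediction accuracy)."""
    p = perf / perf.sum(axis=0, keepdims=True)
    n = perf.shape[0]
    e = -(p * np.log(p)).sum(axis=0) / np.log(n)  # entropy per model
    d = 1.0 - e                                   # degree of diversification
    return d / d.sum()

# Toy per-period accuracies of MLR, ANN and SVM (hypothetical numbers).
acc = np.array([[0.70, 0.90, 0.85],
                [0.65, 0.92, 0.80],
                [0.72, 0.88, 0.84],
                [0.60, 0.91, 0.82]])
w = entropy_weights(acc)

# Combined forecast: weighted sum of the individual model predictions.
preds = np.array([0.42, 0.47, 0.45])  # one NDVI prediction per model
combined = float(w @ preds)
```

Because the entropies lie in [0, 1], the weights are non-negative and sum to one, so the combined forecast always falls within the range of the individual predictions.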
Kosegarten, Carlos E; Ramírez-Corona, Nelly; Mani-López, Emma; Palou, Enrique; López-Malo, Aurelio
2017-01-02
A Box-Behnken design was used to determine the effect of protein concentration (0, 5, or 10 g of casein/100 g), fat (0, 3, or 6 g of corn oil/100 g), water activity (a_w; 0.900, 0.945, or 0.990), pH (3.5, 5.0, or 6.5), concentration of cinnamon essential oil (CEO; 0, 200, or 400 μL/kg) and incubation temperature (15, 25, or 35 °C) on the growth of Aspergillus flavus during 50 days of incubation. Mold response under the evaluated conditions was modeled by the modified Gompertz equation, logistic regression, and a time-to-detection model. The obtained polynomial regression models allowed the significant coefficients (p < 0.05) for linear, quadratic and interaction effects on the Gompertz equation's parameters to be identified, and adequately described (R² > 0.967) the studied mold responses. After 50 days of incubation, every tested model system was classified according to the observed response as 1 (growth) or 0 (no growth); a binary logistic regression was then used to model the A. flavus growth interface, allowing the probability of mold growth to be predicted under selected combinations of the tested factors. The time-to-detection model was used to estimate the time at which visible A. flavus growth begins. Water activity, temperature, and CEO concentration were the most important factors affecting fungal growth. There is a range of possible combinations that may induce growth, such that the incubation conditions and the amount of essential oil necessary for fungal growth inhibition strongly depend on the protein and fat concentrations as well as on the pH of the studied model systems. The probabilistic and time-to-detection models constitute another option to determine appropriate storage/processing conditions and accurately predict the probability of and/or the time at which A. flavus growth occurs. Copyright © 2016 Elsevier B.V. All rights reserved.
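The modified Gompertz equation (in Zwietering's parameterisation, which this sketch assumes) describes growth by an asymptote, a maximum growth rate and a lag time. Below, a self-contained fit by brute-force least squares over a parameter grid, with synthetic data standing in for the mold measurements; in practice one would use a dedicated nonlinear least-squares routine:

```python
import numpy as np

def gompertz(t, a, mu, lam):
    """Zwietering's modified Gompertz curve: a = asymptote, mu = maximum
    specific growth rate, lam = lag time."""
    return a * np.exp(-np.exp(mu * np.e / a * (lam - t) + 1.0))

# Synthetic growth data (hypothetical), 0-50 days of incubation.
t = np.linspace(0.0, 50.0, 26)
obs = gompertz(t, 40.0, 2.5, 8.0)

# Brute-force least squares over a small parameter grid (a stand-in for
# a proper nonlinear optimiser).
best = None
for a in np.linspace(30.0, 50.0, 21):
    for mu in np.linspace(1.0, 4.0, 31):
        for lam in np.linspace(4.0, 12.0, 33):
            sse = np.sum((gompertz(t, a, mu, lam) - obs) ** 2)
            if best is None or sse < best[0]:
                best = (sse, a, mu, lam)
sse, a_hat, mu_hat, lam_hat = best
```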
Corner, Robert J; Dewan, Ashraf M; Hashizume, Masahiro
2013-01-01
Background Developing countries in South Asia, such as Bangladesh, bear a disproportionate burden of diarrhoeal diseases such as cholera, typhoid and paratyphoid. These seem to be aggravated by a number of social and environmental factors such as lack of access to safe drinking water, overcrowding and poor hygiene brought about by poverty. Some socioeconomic data can be obtained from census data whilst others are more difficult to elucidate. This study considers a range of both census data and spatial data from other sources, including remote sensing, as potential predictors of typhoid risk. Typhoid data are aggregated from hospital admission records for the period from 2005 to 2009. The spatial and statistical structures of the data are analysed and Principal Axis Factoring is used to reduce the degree of co-linearity in the data. The resulting factors are combined into a Quality of Life index, which in turn is used in a regression model of typhoid occurrence and risk. Results The three Principal Factors used together explain 87% of the variance in the initial candidate predictors, which eminently qualifies them for use as a set of uncorrelated explanatory variables in a linear regression model. Initial regression results using Ordinary Least Squares (OLS) were disappointing; this was explained by analysis of the spatial autocorrelation inherent in the Principal Factors. The use of Geographically Weighted Regression caused a considerable increase in the predictive power of regressions based on these factors. The best prediction, determined by analysis of the Akaike Information Criterion (AIC), was found when the three factors were combined into a quality of life index, using a method previously published by others, and had a coefficient of determination of 73%.
Conclusions The typhoid occurrence/risk prediction equation was used to develop the first risk map showing areas of Dhaka Metropolitan Area whose inhabitants are at greater or lesser risk of typhoid infection. This, coupled with seasonal information on typhoid incidence also reported in this paper, has the potential to advise public health professionals on developing prevention strategies such as targeted vaccination. PMID:23497202
Laboratory Headphone Studies of Human Response to Low-Amplitude Sonic Booms and Rattle Heard Indoors
NASA Technical Reports Server (NTRS)
Loubeau, Alexandra; Sullivan, Brenda M.; Klos, Jacob; Rathsam, Jonathan; Gavin, Joseph R.
2013-01-01
Human response to sonic booms heard indoors is affected by the generation of contact-induced rattle noise. The annoyance caused by sonic boom-induced rattle noise was studied in a series of psychoacoustic tests. Stimuli were divided into three categories and presented in three different studies: isolated rattles at the same calculated Perceived Level (PL), sonic booms combined with rattles with the mixed sound at a single PL, and sonic booms combined with rattles with the mixed sound at three different PLs. Subjects listened to sounds over headphones and were asked to report their annoyance. Annoyance to different rattles was shown to vary significantly according to rattle object size. In addition, the combination of low-amplitude sonic booms and rattles can be more annoying than the sonic boom alone. Correlation and regression analyses for the combined sonic boom and rattle sounds identified the Moore and Glasberg Stationary Loudness (MGSL) metric as a primary predictor of annoyance for the tested sounds. Multiple linear regression models were developed to describe annoyance to the tested sounds, and simplifications for applicability to a wider range of sounds are presented.
Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R
2014-01-01
Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature for the case where variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and a higher number of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and the poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis estimated the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
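The separate-versus-combined contrast can be sketched in a few lines: per-condition straight-line fits against one model with a shared intercept, condition-specific slopes and a single pooled residual variance. All numbers below (conditions, slopes, noise level, the 95% lower limit) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
months = np.tile([0.0, 3.0, 6.0, 9.0, 12.0, 18.0], 2)
cond = np.repeat([0, 1], 6)  # two hypothetical storage conditions
assay = (100.0 - np.where(cond == 0, 0.2, 0.35) * months
         + rng.normal(0.0, 0.15, 12))

# Separate analysis: one straight-line fit per storage condition.
sep = [np.polyfit(months[cond == c], assay[cond == c], 1) for c in (0, 1)]

# Combined analysis: shared intercept, condition-specific slopes, and a
# single pooled residual variance (more error degrees of freedom).
X = np.column_stack([np.ones(12),
                     months * (cond == 0),
                     months * (cond == 1)])
beta, *_ = np.linalg.lstsq(X, assay, rcond=None)

def shelf_life(slope, intercept, limit=95.0):
    """Month at which the mean regression line crosses the lower limit
    (a point estimate; regulatory practice uses confidence bounds)."""
    return (limit - intercept) / slope
```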
Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent the different auxiliary domains. The weight computation across different domains can thus be converted into a weight computation across different features. We then combine the features in the target domain and in the auxiliary domains and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain and cross-domain CF methods. PMID:29623088
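For reference, locally weighted linear regression fits a separate weighted least-squares line at each query point, with weights from a kernel centred there. A generic numpy sketch (Gaussian kernel; the one-dimensional data and bandwidth are illustrative, not the paper's recommender setup):

```python
import numpy as np

def lwlr_predict(x0, X, y, tau=0.5):
    """Locally weighted linear regression: solve a weighted least-squares
    line around the query point x0 (Gaussian kernel, bandwidth tau)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.exp(-((X - x0) ** 2) / (2.0 * tau ** 2))
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return theta[0] + theta[1] * x0

# A nonlinear target that a single global line would underfit badly.
X = np.linspace(0.0, 6.0, 60)
y = np.sin(X)
pred = lwlr_predict(3.0, X, y, tau=0.3)
```

Because no global parameter vector is estimated, the model adapts to local structure; the bandwidth tau controls the bias-variance trade-off.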
Use of ocean color scanner data in water quality mapping
NASA Technical Reports Server (NTRS)
Khorram, S.
1981-01-01
Remotely sensed data, in combination with in situ data, are used in assessing water quality parameters within the San Francisco Bay-Delta. The parameters include suspended solids, chlorophyll, and turbidity. Regression models are developed between each of the water quality parameter measurements and the Ocean Color Scanner (OCS) data. The models are then extended to the entire study area for mapping water quality parameters. The results include a series of color-coded maps, each pertaining to one of the water quality parameters, and the statistical analysis of the OCS data and regression models. It is found that concurrently collected OCS data and surface truth measurements are highly useful in mapping the selected water quality parameters and locating areas having relatively high biological activity. In addition, it is found to be virtually impossible, at least within this test site, to locate such areas on U-2 color and color-infrared photography.
Guo, Canyong; Luo, Xuefang; Zhou, Xiaohua; Shi, Beijia; Wang, Juanjuan; Zhao, Jinqi; Zhang, Xiaoxia
2017-06-05
Vibrational spectroscopic techniques such as infrared, near-infrared and Raman spectroscopy have become popular for detecting and quantifying polymorphism in pharmaceutics, since they are fast and non-destructive. This study assessed the ability of three vibrational spectroscopic techniques combined with multivariate analysis to quantify a low-content undesired polymorph within a binary polymorphic mixture. Partial least squares (PLS) regression and support vector machine (SVM) regression were employed to build quantitative models. Fusidic acid, a steroidal antibiotic, was used as the model compound. It was found that PLS regression performed slightly better than SVM regression for all three spectroscopic techniques. Root mean square errors of prediction (RMSEP) ranged from 0.48% to 1.17% for diffuse reflectance FTIR spectroscopy, 1.60-1.93% for diffuse reflectance FT-NIR spectroscopy and 1.62-2.31% for Raman spectroscopy. The results indicate that diffuse reflectance FTIR spectroscopy offers significant advantages in providing accurate measurement of polymorphic content in the fusidic acid binary mixtures, while Raman spectroscopy is the least accurate technique for quantitative analysis of polymorphs. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Parmar, Kulwinder Singh
2016-03-01
This study investigates the accuracy of least squares support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) models in modeling river water pollution. Various combinations of the water quality parameters Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi, on the Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy, and both performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the LSSVM and M5Tree models was reduced by 1.47% and 19.1%, respectively, by using the MARS model. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used for estimating monthly river water pollution levels using the AMM, TKN and WT parameters as inputs.
Factor regression for interpreting genotype-environment interaction in bread-wheat trials.
Baril, C P
1992-05-01
The French INRA wheat (Triticum aestivum L. em Thell.) breeding program is based on multilocation trials to produce high-yielding, adapted lines for a wide range of environments. Differential genotypic responses to variable environmental conditions limit the accuracy of yield estimation. Factor regression was used to partition the genotype-environment (GE) interaction into four biologically interpretable terms. Yield data were analyzed from 34 wheat genotypes grown in four environments, using 12 auxiliary agronomic traits as genotypic and environmental covariates. Most of the GE interaction (91%) was explained by the combination of only three traits: 1,000-kernel weight, lodging susceptibility and spike length. These traits are easily measured in breeding programs; the factor regression model can therefore provide a convenient and useful method for predicting yield.
Duan, Liwei; Zhang, Sheng; Lin, Zhaofen
2017-02-01
To explore the method and performance of using multiple indices to diagnose sepsis and to predict the prognosis of severely ill patients. Critically ill patients at first admission to the intensive care unit (ICU) of Changzheng Hospital, Second Military Medical University, from January 2014 to September 2015 were enrolled if the following conditions were satisfied: (1) the patient was 18-75 years old; (2) the length of ICU stay was more than 24 hours; (3) all records of the patient were available. Patient data were collected by searching the electronic medical record system. A logistic regression model was formulated to create the new combined predictive indicator, and the receiver operating characteristic (ROC) curve for the new indicator was built. The areas under the ROC curve (AUC) for the new indicator and the original ones were compared. The optimal cut-off point was obtained where the Youden index reached its maximum value. Diagnostic parameters such as sensitivity, specificity and predictive accuracy were also calculated for comparison. Finally, individual values were substituted into the equation to test its performance in predicting clinical outcomes. A total of 362 patients (218 males and 144 females) were enrolled in our study, and 66 patients died. The average age was (48.3±19.3) years. (1) For the predictive model containing only categorical covariates [including procalcitonin (PCT), lipopolysaccharide (LPS), infection, white blood cell count (WBC) and fever], increased PCT, increased WBC and fever were demonstrated to be independent risk factors for sepsis in the logistic equation. The AUC for the new combined predictive indicator was higher than that of any single indicator, including PCT, LPS, infection, WBC and fever (0.930 vs. 0.661, 0.503, 0.570, 0.837, 0.800). The optimal cut-off value for the new combined predictive indicator was 0.518.
Using the new indicator to diagnose sepsis, the sensitivity, specificity and diagnostic accuracy were 78.00%, 93.36% and 87.47%, respectively. One patient was randomly selected, and the clinical data were substituted into the probability equation for prediction. The calculated value was 0.015, less than the cut-off value (0.518), indicating a non-sepsis diagnosis at an accuracy of 87.47%. (2) For the predictive model containing only continuous covariates, which combined the acute physiology and chronic health evaluation II (APACHE II) score and the sequential organ failure assessment (SOFA) score to predict in-hospital death, both scores were independent risk factors for death. The AUC for the new predictive indicator was higher than that of the APACHE II score and the SOFA score (0.834 vs. 0.812, 0.813). The optimal cut-off value for the new combined predictive indicator in predicting in-hospital death was 0.236, and the corresponding sensitivity, specificity and diagnostic accuracy were 73.12%, 76.51% and 75.70%, respectively. One patient was randomly selected, and the APACHE II and SOFA scores were substituted into the probability equation for prediction. The calculated value was 0.570, higher than the cut-off value (0.236), indicating a predicted death at an accuracy of 75.70%. The combined predictive indicator, formulated by logistic regression models, is superior to any single indicator in predicting sepsis or in-hospital death.
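The evaluation pipeline this abstract describes (AUC comparison plus Youden-optimal cut-off) can be sketched in a few lines. This is a minimal illustration on made-up labels and made-up predicted probabilities, not the study's data or model:

```python
# Given predicted probabilities from a (hypothetical, pre-fitted) logistic
# model and true outcome labels, compute the ROC AUC and the cut-off that
# maximises Youden's J = sensitivity + specificity - 1.

def roc_points(labels, scores):
    """Return (cutoff, sensitivity, specificity) at every candidate threshold."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr, _ in pairs:
        tp = sum(1 for s, y in pairs if s >= thr and y == 1)
        fp = sum(1 for s, y in pairs if s >= thr and y == 0)
        points.append((thr, tp / pos, 1 - fp / neg))
    return points

def auc(labels, scores):
    """AUC via the rank (Mann-Whitney) statistic: P(score_pos > score_neg)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_cutoff(labels, scores):
    """Cut-off maximising Youden's J over the ROC points."""
    return max(roc_points(labels, scores), key=lambda t: t[1] + t[2] - 1)[0]

labels = [1, 1, 1, 0, 0, 0, 0, 1]                # illustrative outcomes
probs  = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2, 0.55, 0.7]  # illustrative model output
print(auc(labels, probs), youden_cutoff(labels, probs))
```

With these toy values the positives are perfectly separated from the negatives, so the AUC is 1.0 and the Youden-optimal cut-off falls at the lowest positive score.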
Meng, Xia; Fu, Qingyan; Ma, Zongwei; Chen, Li; Zou, Bin; Zhang, Yan; Xue, Wenbo; Wang, Jinnan; Wang, Dongfang; Kan, Haidong; Liu, Yang
2016-01-01
Development of an exposure assessment model is a key component of epidemiological studies concerning air pollution, but evidence from China is limited. Therefore, a linear mixed effects (LME) model was established in this study for a Chinese metropolis by incorporating aerosol optical depth (AOD), meteorological information and a land use regression (LUR) model to predict ground PM10 levels at high spatiotemporal resolution. The cross-validation (CV) R(2) and RMSE of the LME model were 0.87 and 19.2 μg/m(3), respectively. The relative prediction errors (RPE) of the daily and annual mean predicted PM10 concentrations were 19.1% and 7.5%, respectively. This study was the first attempt in China to estimate both short-term and long-term variation of PM10 levels at high spatial resolution in a Chinese metropolis with an LME model. The results suggested that the LME model could provide exposure assessment for short-term and long-term epidemiological studies in China. Copyright © 2015 Elsevier Ltd. All rights reserved.
Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.
Leung, Denis H Y; Wang, You-Gan; Zhu, Min
2009-07-01
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey
2014-04-15
In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.
Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; Noma, Yasuhiro; Kitamura, Kousuke; China, Toshiyuki; Saito, Keisuke; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Gill, Inderbir S; Horie, Shigeo
2015-10-01
A predictive model of postoperative renal function may influence the planning of nephrectomy. Our aims were to develop a novel predictive model combining clinical indices with computer volumetry of the preserved renal cortex volume (RCV) measured on multidetector computed tomography (MDCT), and to prospectively validate the model's performance. A total of 60 patients undergoing radical nephrectomy from 2011 to 2013 participated, comprising a development cohort of 39 patients and an external validation cohort of 21 patients. RCV was calculated by voxel count using software (Vincent, FUJIFILM). Renal function before and after radical nephrectomy was assessed via the estimated glomerular filtration rate (eGFR). Factors affecting postoperative eGFR were examined by regression analysis with a backward elimination method to develop the novel model for predicting postoperative eGFR. The model was externally validated and its performance was compared with that of previously reported models. Postoperative eGFR was associated with age, preoperative eGFR, preserved renal parenchymal volume (RPV), preserved RCV, % RPV alteration, and % RCV alteration (p < 0.01). The variables significantly correlated with % eGFR alteration were % RCV preservation (r = 0.58, p < 0.01) and % RPV preservation (r = 0.54, p < 0.01). We developed the regression model as follows: postoperative eGFR = 57.87 - 0.55(age) - 15.01(body surface area) + 0.30(preoperative eGFR) + 52.92(%RCV preservation). A strong correlation was seen between postoperative eGFR and the model's estimates (r = 0.83; p < 0.001). In the external validation cohort (n = 21), our model outperformed previously reported models. Combining MDCT renal volumetry and clinical indices might yield an important tool for predicting postoperative renal function.
NASA Astrophysics Data System (ADS)
Shi, Liangliang; Mao, Zhihua; Wang, Zheng
2018-02-01
Satellite imagery plays an important role in monitoring the water quality of lakes and coastal waters, but has scarcely been applied to inland rivers. This paper assesses the feasibility of applying regression models to quantify and map the concentration of total suspended matter (CTSM) in inland rivers, which span large spatial extents and a high CTSM dynamic range, using high-resolution WorldView-2 satellite remote sensing data. An empirical approach was developed to quantify CTSM through the integrated use of high-resolution WorldView-2 multispectral data and 21 in situ CTSM measurements. Radiometric, geometric and atmospheric corrections were carried out during image processing to derive the surface reflectance, which was correlated with CTSM using single-variable and multivariable regression techniques. Results of the regression models show that the single near-infrared (NIR) band 8 of WorldView-2 has a relatively strong relationship (R2 = 0.93) with CTSM. Different prediction models were developed on various combinations of WorldView-2 bands, and the Akaike Information Criterion approach was used to choose the best model. The model involving bands 1, 3, 5 and 8 of WorldView-2 performed best, with an R2 of 0.92 and an SEE of 53.30 g/m3. Spatial distribution maps were produced using the best multiple regression model. The results indicate that it is feasible to apply the empirical model to high-resolution satellite imagery to retrieve the CTSM of inland rivers in routine monitoring of water quality.
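The band-combination search described here (fit a linear model on each candidate subset of bands, rank by AIC) can be sketched as follows. The "band" reflectances and CTSM values are simulated stand-ins, not the paper's WorldView-2 data; only one simulated band truly drives the response:

```python
# Exhaustive AIC-based selection over combinations of predictor bands,
# with ordinary least squares fitted via the normal equations.
import itertools, math, random

def ols_fit(X, y):
    """Least squares via normal equations, solved by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):                          # forward elimination, partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]; b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]; b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):                # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

def aic(X, y, beta):
    """AIC = n*log(RSS/n) + 2k for a Gaussian linear model."""
    n = len(y)
    rss = sum((yi - sum(b * x for b, x in zip(beta, xi))) ** 2
              for xi, yi in zip(X, y))
    return n * math.log(rss / n) + 2 * len(beta)

random.seed(0)
bands = [[random.random() for _ in range(4)] for _ in range(30)]    # 4 fake bands
ctsm = [10 + 50 * row[3] + random.gauss(0, 0.5) for row in bands]   # band index 3 drives CTSM

def fit_and_aic(combo):
    X = [[1.0] + [row[i] for i in combo] for row in bands]          # intercept + bands
    return aic(X, ctsm, ols_fit(X, ctsm))

best = min((combo for size in (1, 2, 3)
            for combo in itertools.combinations(range(4), size)),
           key=fit_and_aic)
print("best band combination:", best)
```

Because AIC penalizes each extra coefficient by 2, the search reliably retains the informative band while discarding most noise bands.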
J. Michael Bowker; D. Murphy; H. Ken Cordell; Donald B.K. English; J.C. Bergstrom; C.M. Starbuck; C.J. Betz; G.T. Green
2006-01-01
This paper explores the influence of demographic and spatial variables on individual participation in and consumption of wildland area recreation. Data from the National Survey on Recreation and the Environment are combined with geographical information system-based distance measures to develop nonlinear regression models used to predict both participation and the number...
J Cerqueira, Rui; Melo, Renata; Moreira, Soraia; A Saraiva, Francisca; Andrade, Marta; Salgueiro, Elson; Almeida, Jorge; J Amorim, Mário; Pinho, Paulo; Lourenço, André; F Leite-Moreira, Adelino
2017-01-01
To compare stentless Freedom Solo and stented Trifecta aortic bioprostheses regarding hemodynamic profile, left ventricular mass regression, early and late postoperative outcomes and survival. Longitudinal cohort study of consecutive patients undergoing aortic valve replacement (from 2009 to 2016) with either Freedom Solo or Trifecta at one centre. Local databases and national records were queried. Postoperative echocardiography (3-6 months) was obtained for hemodynamic profile (mean transprosthetic gradient and effective orifice area) and left ventricle mass determination. After propensity score matching (21 covariates), Kaplan-Meier analysis and cumulative incidence analysis were performed for survival and combined outcome of structural valve deterioration and endocarditis, respectively. Hemodynamics and left ventricle mass regression were assessed by a mixed-effects model including propensity score as a covariate. From a total sample of 397 Freedom Solo and 525 Trifecta patients with a median follow-up time of 4.0 (2.2-6.0) and 2.4 (1.4-3.7) years, respectively, a matched sample of 329 pairs was obtained. Well-balanced matched groups showed no difference in survival (hazard ratio=1.04, 95% confidence interval=0.69-1.56) or cumulative hazards of combined outcome (subhazard ratio=0.54, 95% confidence interval=0.21-1.39). Although Trifecta showed improved hemodynamic profile compared to Freedom Solo, no differences were found in left ventricle mass regression. Trifecta has a slightly improved hemodynamic profile compared to Freedom Solo but this does not translate into differences in the extent of mass regression, postoperative outcomes or survival, which were good and comparable for both bioprostheses. Long-term follow-up is needed for comparisons with older models of bioprostheses.
Muhlestein, Whitney E; Akagi, Dallin S; Kallos, Justiss A; Morone, Peter J; Weaver, Kyle D; Thompson, Reid C; Chambless, Lola B
2018-04-01
Objective Machine learning (ML) algorithms are powerful tools for predicting patient outcomes. This study pilots a novel approach to algorithm selection and model creation, using prediction of discharge disposition following meningioma resection as a proof of concept. Materials and Methods A diverse set of ML algorithms was trained on a single-institution database of meningioma patients to predict discharge disposition. Algorithms were ranked by predictive power, and the top performers were combined to create an ensemble model. The final ensemble was internally validated on never-before-seen data to demonstrate generalizability. The predictive power of the ensemble was compared with that of a logistic regression. Further analyses were performed to identify how important variables impact the ensemble. Results Our ensemble model predicted disposition significantly better than a logistic regression (area under the curve of 0.78 and 0.71, respectively; p = 0.01). Tumor size, presentation at the emergency department, body mass index, convexity location, and preoperative motor deficit most strongly influence the model, though the independent impact of individual variables is nuanced. Conclusion Using a novel ML technique, we built a guided ML ensemble model that predicts discharge destination following meningioma resection with greater predictive power than a logistic regression and provides greater clinical insight than a univariate analysis. These techniques can be extended to predict many other patient outcomes of interest.
Zhu, Hongyan; Chu, Bingquan; Fan, Yangyang; Tao, Xiaoya; Yin, Wenxin; He, Yong
2017-08-10
We investigated the feasibility and potential of determining firmness, soluble solids content (SSC), and pH in kiwifruits using hyperspectral imaging combined with variable selection methods and calibration models. The images were acquired by a push-broom hyperspectral reflectance imaging system covering two spectral ranges. Weighted regression coefficients (BW), the successive projections algorithm (SPA) and genetic algorithm-partial least squares (GAPLS) were compared and evaluated for the selection of effective wavelengths. Moreover, multiple linear regression (MLR), partial least squares regression and least squares support vector machine (LS-SVM) models were developed to predict quality attributes quantitatively using the effective wavelengths. The established models, particularly SPA-MLR, SPA-LS-SVM and GAPLS-LS-SVM, performed well. The SPA-MLR models for firmness (Rpre = 0.9812, RPD = 5.17) and SSC (Rpre = 0.9523, RPD = 3.26) at 380-1023 nm showed excellent performance, whereas GAPLS-LS-SVM was the optimal model at 874-1734 nm for predicting pH (Rpre = 0.9070, RPD = 2.60). Image processing algorithms were developed to apply the predictive model to every pixel, generating prediction maps that visualize the spatial distribution of firmness and SSC. The results clearly demonstrate that hyperspectral imaging has potential as a fast and non-invasive method to predict the quality attributes of kiwifruits.
Assessing NARCCAP climate model effects using spatial confidence regions
French, Joshua P.; McGinnis, Seth; Schwartzman, Armin
2017-01-01
We assess similarities and differences between model effects for the North American Regional Climate Change Assessment Program (NARCCAP) climate models using varying classes of linear regression models. Specifically, we consider how the average temperature effect differs for the various global and regional climate model combinations, including assessment of possible interaction between the effects of global and regional climate models. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We also show conclusively that results from pointwise inference are misleading, and that accounting for multiple comparisons is important for making proper inference. PMID:28936474
Parameter estimation in Cox models with missing failure indicators and the OPPERA study.
Brownstein, Naomi C; Cai, Jianwen; Slade, Gary D; Bair, Eric
2015-12-30
In a prospective cohort study, examining all participants for incidence of the condition of interest may be prohibitively expensive. For example, the "gold standard" for diagnosing temporomandibular disorder (TMD) is a physical examination by a trained clinician. In large studies, examining all participants in this manner is infeasible. Instead, it is common to use questionnaires to screen for incidence of TMD and perform the "gold standard" examination only on participants who screen positively. Unfortunately, some participants may leave the study before receiving the "gold standard" examination. Within the framework of survival analysis, this results in missing failure indicators. Motivated by the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study, a large cohort study of TMD, we propose a method for parameter estimation in survival models with missing failure indicators. We estimate the probability of being an incident case for those lacking a "gold standard" examination using logistic regression. These estimated probabilities are used to generate multiple imputations of case status for each missing examination that are combined with observed data in appropriate regression models. The variance introduced by the procedure is estimated using multiple imputation. The method can be used to estimate both regression coefficients in Cox proportional hazard models as well as incidence rates using Poisson regression. We simulate data with missing failure indicators and show that our method performs as well as or better than competing methods. Finally, we apply the proposed method to data from the OPPERA study. Copyright © 2015 John Wiley & Sons, Ltd.
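The impute-then-combine step this abstract outlines can be sketched in miniature: estimate each unexamined participant's probability of being a case (here assumed values standing in for the screening logistic regression's output), draw M imputed case statuses, estimate the incidence proportion in each completed data set, and pool with Rubin's rules. All numbers are illustrative, not OPPERA data:

```python
# Multiple imputation of missing binary failure indicators, with Rubin's
# rules for combining the per-imputation estimates and variances.
import random, statistics

random.seed(42)
observed_cases, observed_noncases = 40, 160          # examined participants
missing_probs = [0.2, 0.7, 0.1, 0.5, 0.9, 0.3] * 5   # 30 unexamined: est. P(case)
M = 20                                               # number of imputations

estimates, variances = [], []
n = observed_cases + observed_noncases + len(missing_probs)
for _ in range(M):
    imputed = sum(1 for p in missing_probs if random.random() < p)
    rate = (observed_cases + imputed) / n            # incidence proportion
    estimates.append(rate)
    variances.append(rate * (1 - rate) / n)          # within-imputation variance

# Rubin's rules: total variance = within + (1 + 1/M) * between
q_bar = statistics.mean(estimates)
within = statistics.mean(variances)
between = statistics.variance(estimates)
total_var = within + (1 + 1 / M) * between
print(q_bar, total_var)
```

The between-imputation term inflates the total variance, which is how the procedure propagates the uncertainty introduced by the missing examinations.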
Multi-model ensemble combinations of the water budget in the East/Japan Sea
NASA Astrophysics Data System (ADS)
HAN, S.; Hirose, N.; Usui, N.; Miyazawa, Y.
2016-02-01
The water balance of the East/Japan Sea is determined mainly by inflow and outflow through the Korea/Tsushima, Tsugaru and Soya/La Perouse Straits. However, the volume transports measured at the three straits remain quantitatively unbalanced. This study examined the seasonal variation of the volume transport using multiple linear regression and ridge regression multi-model ensemble (MME) methods to estimate a physically consistent circulation in the East/Japan Sea from four different data assimilation models. The MME outperformed all of the single models by reducing uncertainties, the ridge regression in particular easing the multicollinearity problem. However, the regression constants turned out to be mutually inconsistent when the MME was applied separately to each strait. The MME was thus performed for the connected system to find common constants for the straits. The resulting estimate was similar to the MME result for sea level difference (SLD). The estimated mean transport (2.42 Sv) was smaller than the measured value at the Korea/Tsushima Strait, but the calibrated transport through the Tsugaru Strait (1.63 Sv) was larger than observed. The MME results for transport and SLD also suggested that the standard deviation (STD) at the Korea/Tsushima Strait is larger than the STD of the observations, whereas the estimates were almost identical to observations for the Tsugaru and Soya/La Perouse Straits. The similarity between the MME results enhances the reliability of the present MME estimation.
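The ridge-regression MME idea (weight nearly collinear model outputs toward a reference, with a penalty that tames the multicollinearity) can be sketched with two toy "models". The transport series below are invented illustrations in Sv, not the assimilation models' output:

```python
# Ridge-weighted combination of two collinear model series: solve
# (X'X + lam*I) w = X'y explicitly for the 2x2 case.

def ridge_weights_2(x1, x2, y, lam):
    a = sum(v * v for v in x1) + lam
    b = sum(u * v for u, v in zip(x1, x2))
    d = sum(v * v for v in x2) + lam
    c1 = sum(u * v for u, v in zip(x1, y))
    c2 = sum(u * v for u, v in zip(x2, y))
    det = a * d - b * b
    return ((d * c1 - b * c2) / det, (a * c2 - b * c1) / det)

# two nearly collinear model estimates and a reference series (illustrative)
model1 = [2.1, 2.4, 2.6, 2.8, 2.5, 2.2]
model2 = [2.0, 2.5, 2.7, 2.9, 2.4, 2.1]
obs    = [2.2, 2.5, 2.6, 2.9, 2.6, 2.3]

w_ols   = ridge_weights_2(model1, model2, obs, 0.0)   # lam=0: plain regression
w_ridge = ridge_weights_2(model1, model2, obs, 1.0)   # lam>0: shrunk weights
ensemble = [w_ridge[0] * a + w_ridge[1] * b for a, b in zip(model1, model2)]
print(w_ols, w_ridge, ensemble[0])
```

With lam = 0 the near-collinearity blows the weights up (one even turns negative); the ridge penalty shrinks them to a stable, similarly accurate combination, which is the multicollinearity fix the abstract refers to.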
Effect of combined topical heparin and steroid on corneal neovascularization in children.
Michels, Rike; Michels, Stephan; Kaminski, Stephan
2012-01-01
To demonstrate the effect of topical heparin combined with topical steroid on corneal neovascularization (CN) in children. Four children (5 eyes) with new-onset progressive CN in at least one eye received topical rimexolone or dexamethasone in combination with heparin until complete regression of CN was obtained. The regression of CN was documented by slit-lamp or anterior segment photography. All 5 eyes showed complete regression of CN within 5 months. An anti-angiogenic effect was found as early as 1 week after starting topical combination treatment. No ocular and systemic side effects were detected and treatment was well tolerated by all children. In the 3 eyes with involvement of the optical axis, symmetrical visual acuity was obtained by amblyopia treatment. Recurrence of the CN was detectable in 2 eyes at 1 and 6 months, respectively, after ending combination therapy. Both eyes responded favorably to re-treatment. Combination of topical heparin and steroid leads to rapid regression and complete inactivity of CN. This therapeutic approach is promising, especially in children with limited therapeutic alternatives and a high risk for amblyopia. Copyright 2012, SLACK Incorporated.
Lafuente, Victoria; Herrera, Luis J; Pérez, María del Mar; Val, Jesús; Negueruela, Ignacio
2015-08-15
In this work, near infrared spectroscopy (NIR) and an acoustic measure (AWETA), two non-destructive methods, were applied to Prunus persica fruit 'Calrico' (n = 260) to predict Magness-Taylor (MT) firmness. Separate and combined use of these measures was evaluated and compared using partial least squares (PLS) and least squares support vector machine (LS-SVM) regression methods. In addition, a mutual-information-based variable selection method, seeking the most significant variables for optimal accuracy of the regression models, was applied to the joint set of variables (NIR wavelengths and the AWETA measure). The newly proposed combined NIR-AWETA model gave good values of the determination coefficient (R(2)) for the PLS and LS-SVM methods (0.77 and 0.78, respectively), improving the reliability of MT firmness prediction in comparison with separate NIR and AWETA predictions. The three variables selected by the variable selection method (the AWETA measure plus NIR wavelengths 675 and 697 nm) achieved R(2) values of 0.76 and 0.77 with PLS and LS-SVM, respectively. These results indicate that the proposed mutual-information-based variable selection algorithm is a powerful tool for selecting the most relevant variables. © 2014 Society of Chemical Industry.
Abdominal Circumference Versus Body Mass Index as Predictors of Lower Extremity Overuse Injury Risk.
Nye, Nathaniel S; Kafer, Drew S; Olsen, Cara; Carnahan, David H; Crawford, Paul F
2018-02-01
Abdominal circumference (AC) is superior to body mass index (BMI) as a measure of risk for various health outcomes. Our objective was to compare AC and BMI as predictors of lower extremity overuse injury (LEOI) risk. We retrospectively reviewed the electronic medical records of 79,868 US Air Force personnel over a 7-year period (2005-2011) for incidence of new LEOI. Subjects were stratified by BMI and AC. Injury risk for BMI/AC subgroups was calculated using Kaplan-Meier curves and Cox proportional-hazards regression. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were used to compare each model's predictive value. Cox proportional-hazards regression showed a significant risk association between elevated BMI, AC, and all injury types, with hazard ratios ranging from 1.230 to 3.415 for obese versus normal BMI and from 1.665 to 3.893 for high-risk versus low-risk AC (P < .05 for all measures). ROC curves with AUC showed equivalent performance between BMI and AC for predicting all injury types. However, the combined model (AC and BMI) showed improved predictive ability over either model alone for joint injury, overall LEOI, and most strongly for osteoarthritis. Although AC and BMI alone performed similarly well, a combined approach using BMI and AC together improved risk estimation for LEOI.
Herlea-Pana, Oana; Yao, Longbiao; Heuser-Baker, Janet; Wang, Qiongxin; Wang, Qilong; Georgescu, Constantin; Zou, Ming-Hui; Barlic-Dicen, Jana
2015-05-01
Atherosclerosis manifests itself as arterial plaques, which lead to heart attacks or stroke. Treatments supporting plaque regression are therefore aggressively pursued. Studies conducted in models in which hypercholesterolaemia is reversible, such as the Reversa mouse model we have employed in the current studies, will be instrumental for the development of such interventions. Using this model, we have shown that advanced atherosclerosis regression occurs when lipid lowering is used in combination with bone-marrow endothelial progenitor cell (EPC) treatment. However, it remains unclear how EPCs home to regressing plaques and how they augment atherosclerosis reversal. Here we identify molecules that support functional responses of EPCs during plaque resolution. Chemokines CXCL1 and CX3CL1 were detected in the vascular wall of atheroregressing Reversa mice, and their cognate receptors CXCR2 and CX3CR1 were observed on adoptively transferred EPCs in circulation. We tested whether CXCL1-CXCR2 and CX3CL1-CX3CR1 axes regulate functional responses of EPCs during plaque reversal. We show that pharmacological inhibition of CXCR2 or CX3CR1, or genetic inactivation of these two chemokine receptors interfered with EPC-mediated advanced atherosclerosis regression. We also demonstrate that CXCR2 directs EPCs to regressing plaques while CX3CR1 controls a paracrine function(s) of these cells. CXCR2 and CX3CR1 differentially regulate EPC functional responses during atheroregression. Our study improves understanding of how chemokines and chemokine receptors regulate plaque resolution, which could determine the effectiveness of interventions reducing complications of atherosclerosis. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah
2017-08-01
In regression analysis, missing covariate data is a common problem. Many researchers use ad hoc methods to overcome it because they are easy to implement, but these methods require assumptions about the data that rarely hold in practice. Model-based methods such as maximum likelihood (ML) using the expectation maximization (EM) algorithm and multiple imputation (MI) are more promising for dealing with the difficulties caused by missing data. However, inappropriate missing value imputation can introduce serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of missing data concepts to assist researchers in selecting the appropriate imputation method. A simulation study was performed to assess the effects of different missing data techniques on the performance of a regression model. The covariate data were generated from an underlying multivariate normal distribution and the dependent variable was generated as a combination of the explanatory variables. Missing values in a covariate were simulated using the missing at random (MAR) mechanism, with four levels of missingness (10%, 20%, 30% and 40%) imposed. The ML and MI techniques available within SAS software were investigated. A linear regression model was fitted and the model performance measures, MSE and R-squared, were obtained. Results showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Both methods are unable to handle levels of missingness greater than 30%.
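The MAR mechanism and the bias it induces can be shown in a small simulation. This sketch uses a simple regression imputation (not SAS's ML/MI procedures) and invented parameters, purely to illustrate why complete-case analysis fails under MAR while model-based imputation does not:

```python
# MAR: covariate x2 is correlated with x1, and x2's missingness depends only
# on the always-observed x1. Compare the complete-case mean of x2 (biased)
# with a regression-imputation mean (approximately unbiased).
import random

random.seed(1)
n = 20000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.8 * a + random.gauss(0, 0.6) for a in x1]     # correlated covariates

# MAR: larger x1 -> x2 more likely to be missing (~40% missing overall)
missing = [random.random() < (0.7 if a > 0 else 0.1) for a in x1]

obs = [(a, b) for a, b, m in zip(x1, x2, missing) if not m]
cc_mean = sum(b for _, b in obs) / len(obs)           # complete-case estimate

# regression imputation: fit x2 ~ x1 on observed pairs, predict the rest
mx = sum(a for a, _ in obs) / len(obs)
slope = (sum((a - mx) * (b - cc_mean) for a, b in obs)
         / sum((a - mx) ** 2 for a, _ in obs))
intercept = cc_mean - slope * mx
filled = [b if not m else intercept + slope * a
          for a, b, m in zip(x1, x2, missing)]
mi_mean = sum(filled) / n
print(cc_mean, mi_mean)                               # true mean of x2 is 0
```

The complete-case mean is pulled well below the true value of 0 because high-x1 (hence high-x2) rows are preferentially dropped; the imputation model, which conditions on the observed x1, recovers it.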
NASA Astrophysics Data System (ADS)
Ishizaki, N. N.; Dairaku, K.; Ueno, G.
2016-12-01
We have developed a statistical downscaling method for probabilistic climate projection using multiple CMIP5 general circulation models (GCMs). A regression model was established so that the combination of GCM weights reflects the characteristics of the variation of observations at each grid point. Cross validations were conducted to select GCMs and to evaluate the regression model while avoiding multicollinearity. Using a high-spatial-resolution observation system, we produced statistically downscaled probabilistic climate projections with 20-km horizontal grid spacing. Root mean squared errors for monthly mean surface air temperature and precipitation estimated by the regression method were the smallest compared with results derived from a simple ensemble mean of GCMs and a cumulative distribution function based bias correction method. Projected changes in mean temperature and precipitation were broadly similar to those of the simple ensemble mean of GCMs. Mean precipitation was generally projected to increase, associated with increased temperature and consequently increased moisture content in the air. Weakening of the winter monsoon may contribute to precipitation decreases in some areas. A temperature increase in excess of 4 K is expected in most areas of Japan by the end of the 21st century under the RCP8.5 scenario. The estimated probability of monthly precipitation exceeding 300 mm would increase on the Pacific side during summer and on the Japan Sea side during winter. This probabilistic climate projection based on the statistical method can be expected to provide useful information for impact studies and risk assessments.
Logistic regression modeling to assess groundwater vulnerability to contamination in Hawaii, USA.
Mair, Alan; El-Kadi, Aly I
2013-10-01
Capture zone analysis combined with a subjective susceptibility index is currently used in Hawaii to assess vulnerability to contamination of drinking water sources derived from groundwater. In this study, we developed an alternative objective approach that combines well capture zones with multiple-variable logistic regression (LR) modeling and applied it to the highly-utilized Pearl Harbor and Honolulu aquifers on the island of Oahu, Hawaii. Input for the LR models utilized explanatory variables based on hydrogeology, land use, and well geometry/location. A suite of 11 target contaminants detected in the region, including elevated nitrate (>1 mg/L), four chlorinated solvents, four agricultural fumigants, and two pesticides, was used to develop the models. We then tested the ability of the new approach to accurately separate groups of wells with low and high vulnerability, and the suitability of nitrate as an indicator of other types of contamination. Our results produced contaminant-specific LR models that accurately identified groups of wells with the lowest/highest reported detections and the lowest/highest nitrate concentrations. Current and former agricultural land uses were identified as significant explanatory variables for eight of the 11 target contaminants, while elevated nitrate was a significant variable for five contaminants. The utility of the combined approach is contingent on the availability of hydrologic and chemical monitoring data for calibrating groundwater and LR models. Application of the approach using a reference site with sufficient data could help identify key variables in areas with similar hydrogeology and land use but limited data. In addition, elevated nitrate may also be a suitable indicator of groundwater contamination in areas with limited data. The objective LR modeling approach developed in this study is flexible enough to address a wide range of contaminants and represents a suitable addition to the current subjective approach. 
© 2013 Elsevier B.V. All rights reserved.
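A minimal sketch of the logistic-regression step, with a single invented predictor (agricultural land fraction) standing in for the study's multiple-variable models, fitted by plain gradient ascent on the mean log-likelihood:

```python
# Sketch: one-variable logistic regression, detection ~ agricultural fraction.
import math, random

random.seed(3)
n = 1000
ag = [random.random() for _ in range(n)]        # agricultural land fraction
# assumed true model for the synthetic data: logit(p) = -2 + 4*ag
detect = [1 if random.random() < 1 / (1 + math.exp(2 - 4 * a)) else 0
          for a in ag]

b0, b1, lr = 0.0, 0.0, 2.0
for _ in range(1500):                           # gradient ascent iterations
    g0 = g1 = 0.0
    for a, d in zip(ag, detect):
        p = 1 / (1 + math.exp(-(b0 + b1 * a)))  # predicted detection prob.
        g0 += d - p                             # score w.r.t. intercept
        g1 += (d - p) * a                       # score w.r.t. slope
    b0 += lr * g0 / n
    b1 += lr * g1 / n
```

The fitted coefficients should land near the generating values (-2, 4), so wells with a larger agricultural land fraction get higher predicted contamination probabilities, consistent with agriculture emerging as a significant explanatory variable in the study.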
Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le
2015-01-01
Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for an ABM to estimate key model parameters by incorporating experimental data, whereas a differential equation model cannot describe the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing the ABM to mimic the multi-scale immune system with its various phenotypes and cell types, and by using the input and output of the ABM to build a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set, and used the ABM to describe a 3D immune system similar to previous studies that employed a DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also infer the key parameters as accurately as a DE model. This study therefore develops a complex-system modelling mechanism that can simulate the complicated immune system in detail, like an ABM, while validating the reliability and efficiency of the model by fitting experimental data, like a DE model. PMID:26535589
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for estimating daily and monthly pan evaporation as functions of the available meteorological data: temperature, relative humidity, and wind speed. The data used for the modeling are daily measurements with substantial continuity of coverage over a period of 17 years, between January 1993 and December 2009, which can be considered representative of the desert climate of the country's urban zone. The multiple linear regression technique is used, with a variable-selection procedure, to fit the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed using power and exponential functions, respectively, to linearize the curvilinear patterns in the data. The evaporation models with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984
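The power-function linearization described above can be sketched as follows. The coefficients and noise level are invented, not Kuwait's fitted values: a power-law relation E = a·T^b is turned into a straight line by a log-log transform and fitted by ordinary least squares.

```python
# Sketch: fit E = a * T^b by OLS after log-log transformation.
import math, random

random.seed(11)
T = [random.uniform(15, 45) for _ in range(300)]                 # deg C
E = [0.05 * t ** 1.6 * math.exp(random.gauss(0, 0.05)) for t in T]  # mm/day

x = [math.log(t) for t in T]          # log E = log a + b * log T
y = [math.log(e) for e in E]
mx, my = sum(x) / len(x), sum(y) / len(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = math.exp(my - b * mx)
```

The slope of the log-log fit recovers the exponent b and the intercept recovers log a; an exponential relation (e.g. for relative humidity) linearizes the same way with a log transform of the response only.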
NASA Astrophysics Data System (ADS)
Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao
2018-02-01
An efficient method to design the broadband gain-flattened Raman fiber amplifier with multiple pumps is proposed based on least squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated solving process of the nonlinear coupled Raman amplification equation. The proposed approach contains two stages: offline training stage and online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can be directly and accurately obtained when inputting any combination of the pump wavelength and power to the well-trained model. During the online stage, we incorporate the LS-SVR model into the particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of the pump parameter optimization for Raman fiber amplifier design.
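A toy stand-in for the two-stage scheme: kernel ridge regression (closely related to LS-SVR) serves as the offline surrogate for an expensive response, and a grid search stands in for the online particle swarm optimization. The "gain" curve, kernel parameters and grids below are all invented for illustration:

```python
# Sketch: offline kernel-ridge surrogate, then online search on the surrogate.
import math

def gain(p):                       # pretend amplifier response, unknown online
    return 12 + 3 * math.sin(p) - 0.5 * (p - 2) ** 2

X = [i * 0.4 for i in range(11)]   # offline stage: sample "pump powers" 0..4
Y = [gain(x) for x in X]
lam, gamma = 1e-4, 2.0             # ridge and Gaussian-kernel parameters

def k(a, b):
    return math.exp(-gamma * (a - b) ** 2)

# solve (K + lam*I) alpha = Y by Gauss-Jordan elimination with pivoting
n = len(X)
A = [[k(X[i], X[j]) + (lam if i == j else 0.0) for j in range(n)] + [Y[i]]
     for i in range(n)]
for c in range(n):
    piv = max(range(c, n), key=lambda r: abs(A[r][c]))
    A[c], A[piv] = A[piv], A[c]
    for r in range(n):
        if r != c:
            f = A[r][c] / A[c][c]
            A[r] = [v - f * w for v, w in zip(A[r], A[c])]
alpha = [A[i][n] / A[i][i] for i in range(n)]

def surrogate(p):                  # cheap stand-in for the coupled equations
    return sum(ai * k(p, xi) for ai, xi in zip(alpha, X))

# online stage: search the surrogate (grid search stands in for PSO here)
best_p = max((i * 0.01 for i in range(401)), key=surrogate)
```

The surrogate is trained once on a handful of expensive evaluations and is then cheap to query many times, which is what makes wrapping it in an optimizer (PSO in the paper) efficient.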
Held, Elizabeth; Cape, Joshua; Tintle, Nathan
2016-01-01
Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.
Combined analysis of magnetic and gravity anomalies using normalized source strength (NSS)
NASA Astrophysics Data System (ADS)
Li, L.; Wu, Y.
2017-12-01
Gravity and magnetic fields are potential fields, so their interpretation is inherently non-unique. Combined analysis of magnetic and gravity anomalies based on Poisson's relation is used to identify homologous gravity and magnetic anomalies and reduce this ambiguity. The traditional combined analysis linearly regresses the reduction-to-pole (RTP) magnetic anomaly against the first-order vertical derivative of the gravity anomaly, and provides a quantitative or semi-quantitative interpretation by calculating the correlation coefficient, slope and intercept. In this calculation, because of the effect of remanent magnetization, the RTP anomaly still contains the effect of oblique magnetization; homologous gravity and magnetic anomalies then appear uncorrelated in the linear regression. The normalized source strength (NSS), which can be computed from the magnetic tensor matrix, is insensitive to remanence. Here we present a new combined analysis using the NSS. Based on Poisson's relation, the gravity tensor matrix can be transformed, under the homologous-source condition, into a pseudomagnetic tensor matrix magnetized along the geomagnetic field direction. The NSS of the pseudomagnetic tensor matrix and of the original magnetic tensor matrix are calculated, and a linear regression analysis is carried out. The resulting correlation coefficient, slope and intercept indicate the degree of homology, the Poisson's ratio and the distribution of remanent magnetization, respectively. We tested the approach on a synthetic model under complex magnetization; the results show that it can still identify a common source under strong remanence and recover the Poisson's ratio. Finally, the approach was applied to field data from China, and the results demonstrate that it is feasible.
Lee, Soo-Jeong; Lee, Joung Hee; Gillen, Marion; Krause, Niklas
2014-02-01
The aims of this study were to compare job demand-control (JDC) and effort-reward imbalance (ERI) models in examining the association of job stress with work-related musculoskeletal symptoms and to evaluate the utility of a combined model. This study analyzed cross-sectional survey data obtained from a nationwide random sample of 304 intensive-care unit (ICU) nurses. Demographic and job factors were controlled in the analyses using logistic regression. Both JDC and ERI variables had strong and statistically significant associations with work-related musculoskeletal symptoms. Effort-reward imbalance had stronger associations than job strain or iso-strain with musculoskeletal symptoms. Effort-reward imbalance alone showed similar or stronger associations with musculoskeletal symptoms compared to combined variables of the JDC and ERI models. The ERI model appears to capture the magnitude of the musculoskeletal health risk among nurses associated with job stress at least as well and possibly better than the JDC model. Our findings suggest that combining the two models provides little gain compared to using effort-reward imbalance only. © 2013 Wiley Periodicals, Inc.
Tate, Sonya C; Burke, Teresa F; Hartman, Daisy; Kulanthaivel, Palaniappan; Beckmann, Richard P; Cronier, Damien M
2016-03-15
Resistance to BRAF inhibition is a major cause of treatment failure for BRAF-mutated metastatic melanoma patients. Abemaciclib, a cyclin-dependent kinase 4 and 6 inhibitor, overcomes this resistance in xenograft tumours and offers a promising drug combination. The present work aims to characterise the quantitative pharmacology of the abemaciclib/vemurafenib combination using a semimechanistic pharmacokinetic/pharmacodynamic modelling approach and to identify an optimum dosing regimen for potential clinical evaluation. A PK/biomarker model was developed to connect abemaciclib/vemurafenib concentrations to changes in MAPK and cell cycle pathway biomarkers in A375 BRAF-mutated melanoma xenografts. Resultant tumour growth inhibition was described by relating (i) MAPK pathway inhibition to apoptosis, (ii) mitotic cell density to tumour growth and, under resistant conditions, (iii) retinoblastoma protein inhibition to cell survival. The model successfully described vemurafenib/abemaciclib-mediated changes in MAPK pathway and cell cycle biomarkers. Initial tumour shrinkage by vemurafenib, acquisition of resistance and subsequent abemaciclib-mediated efficacy were successfully captured and externally validated. Model simulations illustrate the benefit of intermittent vemurafenib therapy over continuous treatment, and indicate that continuous abemaciclib in combination with intermittent vemurafenib offers the potential for considerable tumour regression. The quantitative pharmacology of the abemaciclib/vemurafenib combination was successfully characterised and an optimised, clinically-relevant dosing strategy was identified.
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of the multivariate multiple regression model. It involves two distributions, the prior and the posterior; the posterior distribution is influenced by the choice of prior. Jeffreys' prior is a non-informative prior distribution, used when no information about the parameters is available in advance. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. The estimates of β and Σ are obtained as the expected values of the corresponding marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, random samples are generated according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
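The Gibbs mechanics can be illustrated on a target with known conditionals. The paper's posterior is multivariate normal / inverse Wishart; this toy keeps the same alternating-conditional idea on a bivariate standard normal with correlation rho, where each full conditional is a univariate normal:

```python
# Sketch: Gibbs sampling a bivariate normal by alternating full conditionals.
import math, random

random.seed(5)
rho = 0.8
sd = math.sqrt(1 - rho * rho)      # conditional standard deviation
x, y = 0.0, 0.0
xs, ys = [], []
for i in range(20000):
    x = random.gauss(rho * y, sd)  # draw x | y
    y = random.gauss(rho * x, sd)  # draw y | x
    if i >= 2000:                  # discard burn-in
        xs.append(x)
        ys.append(y)

mean_x = sum(xs) / len(xs)
# since both chains have mean ~0 and variance ~1, E[x*y] estimates rho
corr = sum(a * b for a, b in zip(xs, ys)) / len(xs)
```

The retained draws approximate the joint distribution, so posterior expectations (the parameter estimates in the abstract) are approximated by sample averages over the chain.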
NASA Astrophysics Data System (ADS)
Xin, Pei; Wang, Shen S. J.; Shen, Chengji; Zhang, Zeyu; Lu, Chunhui; Li, Ling
2018-03-01
Shallow groundwater interacts strongly with surface water across a quarter of global land area, affecting significantly the terrestrial eco-hydrology and biogeochemistry. We examined groundwater behavior subjected to unimodal impulse and irregular surface water fluctuations, combining physical experiments, numerical simulations, and functional data analysis. Both the experiments and numerical simulations demonstrated a damped and delayed response of groundwater table to surface water fluctuations. To quantify this hysteretic shallow groundwater behavior, we developed a regression model with the Gamma distribution functions adopted to account for the dependence of groundwater behavior on antecedent surface water conditions. The regression model fits and predicts well the groundwater table oscillations resulting from propagation of irregular surface water fluctuations in both laboratory and large-scale aquifers. The coefficients of the Gamma distribution function vary spatially, reflecting the hysteresis effect associated with increased amplitude damping and delay as the fluctuation propagates. The regression model, in a relatively simple functional form, has demonstrated its capacity of reproducing high-order nonlinear effects that underpin the surface water and groundwater interactions. The finding has important implications for understanding and predicting shallow groundwater behavior and associated biogeochemical processes, and will contribute broadly to studies of groundwater-dependent ecology and biogeochemistry.
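The damped, delayed response with Gamma-distribution weights can be sketched as a lagged weighted average of the surface-water signal; the shape and scale values below are illustrative only, not the paper's fitted coefficients:

```python
# Sketch: groundwater table as a Gamma-weighted average of past surface levels.
import math

def gamma_weights(shape, scale, nlags):
    """Discretized Gamma-pdf weights over lags 1..nlags, normalised to sum 1."""
    w = [(t ** (shape - 1) * math.exp(-t / scale)) /
         (math.gamma(shape) * scale ** shape) for t in range(1, nlags + 1)]
    s = sum(w)
    return [v / s for v in w]

w = gamma_weights(shape=3.0, scale=2.0, nlags=30)

# unimodal impulse in the surface water level, peaking at t = 10
surface = [math.exp(-((t - 10) ** 2) / 8.0) for t in range(80)]

# groundwater table: weighted average of past surface levels (damped, delayed)
ground = [sum(w[k] * surface[t - k - 1] for k in range(min(len(w), t)))
          for t in range(80)]

peak_in = max(range(80), key=lambda t: surface[t])
peak_out = max(range(80), key=lambda t: ground[t])
```

The output peak arrives later and is lower than the input peak, reproducing the damped and delayed response seen in the experiments; letting the Gamma shape and scale vary spatially gives the increasing damping and delay with propagation distance described in the abstract.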
NASA Astrophysics Data System (ADS)
Brümmer, C.; Moffat, A. M.; Huth, V.; Augustin, J.; Herbst, M.; Kutsch, W. L.
2016-12-01
Manual carbon dioxide flux measurements with closed chambers during scheduled campaigns are a versatile method to study management effects at small scales in multiple-plot experiments. The eddy covariance technique has the advantage of quasi-continuous measurements but requires large homogeneous areas of a few hectares. To evaluate the uncertainties associated with interpolating from individual campaigns to the whole vegetation period, we installed both techniques at an agricultural site in Northern Germany. The presented comparison covers two cropping seasons, winter oilseed rape in 2012/13 and winter wheat in 2013/14. Modeling half-hourly carbon fluxes from campaigns is commonly based on non-linear regressions for light response and respiration. The daily averages of net CO2 modeled from chamber data deviated from eddy covariance measurements within a range of ±5 g C m⁻² day⁻¹. To understand the observed differences and to disentangle the effects, we performed four additional setups (expert versus default settings of the non-linear-regression algorithm, purely empirical modeling with artificial neural networks versus non-linear regressions, cross-validation using eddy covariance measurements as campaign fluxes, weekly versus monthly scheduling of campaigns) to model the half-hourly carbon fluxes for the whole vegetation period. The good agreement of the seasonal course of net CO2 at plot and field scale for our agricultural site demonstrates that both techniques are robust and yield consistent results at the seasonal time scale, even for a managed ecosystem with highly dynamic fluxes. This allows combining the respective advantages of factorial experiments at plot scale with dense time-series data at field scale. Furthermore, the information from the quasi-continuous eddy covariance measurements can be used to derive vegetation proxies to support the interpolation of carbon fluxes between the manual chamber campaigns.
Zhuo, Lin; Tao, Hong; Wei, Hong; Chengzhen, Wu
2016-01-01
We established compatible carbon content models for individual trees in a Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantation in Fujian province, southeast China. In general, compatibility requires that the sum of the components equal the whole tree, meaning that the sum of the percentages calculated from the component equations should equal 100%. We therefore used multiple approaches to simulate carbon content in boles, branches, foliage, roots and whole trees: (i) single optimal fitting (SOF), (ii) nonlinear adjustment in proportion (NAP) and (iii) nonlinear seemingly unrelated regression (NSUR). These approaches were used in combination with variables based on diameter at breast height (D) and tree height (H), namely D, D²H, DH and D&H (where D&H denotes two separate variables in a bivariate model). Power, exponential and polynomial functions were tested, and a new general function model was proposed in this study. Weighted least squares regression was employed to eliminate heteroscedasticity. Model performance was evaluated using mean residuals, residual variance, mean square error and the coefficient of determination. The results indicated that models with two-dimensional variables (DH, D²H and D&H) were always superior to those with a single variable (D), and the D&H combination was the most useful predictor. Of the approaches, SOF could fit a single optimal model for each component separately, but its estimates deviated because of incompatibility, whereas NAP and NSUR ensured compatible predictions. We also found that the new general model was more accurate than the others. In conclusion, we recommend the new general model for estimating carbon content of Chinese fir, and suggest it be considered for other vegetation types as well. PMID:26982054
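A simplified sketch of the adjustment-in-proportion (NAP) idea, with all coefficients invented for illustration: component predictions (bole, branch, foliage, root) from D&H power models are rescaled so that they sum exactly to the whole-tree prediction, which is what compatibility requires.

```python
# Sketch: proportional adjustment so component predictions sum to the total.
def power_model(a, b, c, D, H):
    """Generic bivariate D&H carbon model: a * D^b * H^c."""
    return a * D ** b * H ** c

# hypothetical fitted coefficients per component and for the whole tree
comp = {"bole":    (0.020, 2.1, 0.9),
        "branch":  (0.008, 2.0, 0.4),
        "foliage": (0.006, 1.8, 0.2),
        "root":    (0.009, 2.0, 0.5)}
total_coef = (0.045, 2.05, 0.75)

def compatible_components(D, H):
    raw = {k: power_model(a, b, c, D, H) for k, (a, b, c) in comp.items()}
    total = power_model(*total_coef, D, H)
    s = sum(raw.values())
    # rescale each component by total/s: percentages now sum to exactly 100%
    return {k: v * total / s for k, v in raw.items()}, total

parts, total = compatible_components(D=22.0, H=15.0)
```

NSUR achieves the same compatibility differently, by imposing the additivity constraint during simultaneous estimation rather than rescaling afterwards.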
Zhao, Zeng-hui; Wang, Wei-ming; Gao, Xin; Yan, Ji-xing
2013-01-01
According to the geological characteristics of the Xinjiang Ili mine in western China, a physical model of interstratified strata composed of soft rock and a hard coal seam was established. Selecting the tunnel position, deformation modulus, and strength parameters of each layer as influencing factors, the sensitivity coefficient of roadway deformation to each parameter was first analyzed based on a Mohr-Coulomb strain-softening model and nonlinear elastic-plastic finite element analysis. The effects of the most sensitive factors were then examined in detail. Finally, a regression model for the relationship between roadway displacements and multiple factors was obtained by equivalent linear regression. The results show that roadway deformation is highly sensitive to the depth of the coal seam beneath the floor, which should be considered in the layout of the coal roadway; the deformation modulus and strength of the coal seam and floor strongly influence the global stability of the tunnel; by contrast, roadway deformation is not sensitive to the mechanical parameters of the soft roof; and roadway deformation under arbitrary combinations of the factors can be deduced from the regression model. These conclusions provide theoretical guidance for the layout and stability maintenance of coal roadways. PMID:24459447
Khavanin, Nima; Jordan, Sumanas W; Vieira, Brittany L; Hume, Keith M; Mlodinow, Alexei S; Simmons, Christopher J; Murphy, Robert X; Gutowski, Karol A; Kim, John Y S
2015-11-01
Combined abdominal and breast surgery presents a convenient and relatively cost-effective approach for accomplishing both procedures. This study is the largest to date assessing the safety of combined procedures, and it aims to develop a simple pretreatment risk stratification method for patients who desire a combined procedure. All women undergoing abdominoplasty, panniculectomy, augmentation mammaplasty, and/or mastopexy in the TOPS database were identified. Demographics and outcomes for combined procedures were compared to individual procedures using χ² and Student's t-tests. Multiple logistic regression provided adjusted odds ratios for the effect of a combined procedure on 30-day complications. Among combined procedures, a logistic regression model determined point values for pretreatment risk factors including diabetes (1 point), age over 53 (1), obesity (2), and ASA status 3+ (3), creating a 7-point pretreatment risk stratification tool. A total of 58,756 cases met inclusion criteria. Complication rates among combined procedures (9.40%) were greater than those of aesthetic breast surgery (2.66%; P < .001) but did not significantly differ from abdominal procedures (9.75%; P = .530). Nearly 77% of combined cases were classified as low-risk (0 points total), with a 9.78% complication rate. Medium-risk patients (1 to 3 points) had a 16.63% complication rate, and high-risk patients (4 to 7 points) 38.46%. Combining abdominal and breast procedures is safe in the majority of patients and does not increase 30-day complication rates. The risk stratification tool can help ensure favorable outcomes for patients who may desire a combined surgery. Level of Evidence: 4 (Risk). © 2015 The American Society for Aesthetic Plastic Surgery, Inc.
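The 7-point tool described above translates directly into code. The point values come from the abstract; the tier bands follow the stated groups (0 points = low, 1 to 3 = medium, 4 to 7 = high):

```python
# Sketch: the pretreatment risk stratification tool as a scoring function.
def risk_points(diabetes, age, obese, asa3_plus):
    """Sum the pretreatment risk points reported in the study."""
    pts = 0
    pts += 1 if diabetes else 0     # diabetes: 1 point
    pts += 1 if age > 53 else 0     # age over 53: 1 point
    pts += 2 if obese else 0        # obesity: 2 points
    pts += 3 if asa3_plus else 0    # ASA status 3+: 3 points
    return pts

def risk_tier(pts):
    """Map the 0-7 point total to the study's risk bands."""
    if pts == 0:
        return "low"
    return "medium" if pts <= 3 else "high"
```

For example, a 60-year-old obese patient without diabetes and with ASA status below 3 scores 3 points and falls in the medium-risk band.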
Shi, Yuanyuan; Zhan, Hao; Zhong, Liuyi; Yan, Fangrong; Feng, Feng; Liu, Wenyuan; Xie, Ning
2016-07-01
A method of total ion chromatogram combined with chemometrics and mass defect filter was established for the prediction of active ingredients in Picrasma quassioides samples. The total ion chromatogram data of 28 batches were pretreated with wavelet transformation and correlation optimized warping to correct baseline drifts and retention time shifts. Then partial least squares regression was applied to construct a regression model to bridge the total ion chromatogram fingerprints and the antitumor activity of P. quassioides. Finally, the regression coefficients were used to predict the active peaks in total ion chromatogram fingerprints. In this strategy, mass defect filter was employed to classify and characterize the active peaks from a chemical point of view. A total of 17 constituents were predicted as the potential active compounds, 16 of which were identified as alkaloids by this developed approach. The results showed that the established method was not only simple and easy to operate, but also suitable to predict ultraviolet undetectable compounds and provide chemical information for the prediction of active compounds in herbs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Campbell, Nicole E; Greenaway, James; Henkin, Jack; Moorehead, Roger A; Petrik, Jim
2010-03-01
Epithelial ovarian cancer (EOC) comprises approximately 90% of ovarian cancers and arises from the surface epithelium. Typical treatment of EOC involves cytoreductive surgery combined with chemotherapy. More recent therapies have targeted the tumor vasculature using antiangiogenic compounds such as thrombospondin-1 (TSP-1). TSP-1 mimetic peptides such as ABT-510 have been created and have been in various clinical trials. We have previously shown that ABT-510 reduces abnormal vasculature associated with tumor tissue and increases the presence of mature blood vessels. It has been hypothesized that treatment with antiangiogenic compounds would allow increased delivery of cytotoxic agents and enhance treatment. In this study, we evaluated the potential role of ABT-510 and various chemotherapeutics (cisplatin and paclitaxel) on tumor progression, angiogenesis, and the benefits of combinational treatments on tissue uptake and perfusion using an orthotopic syngeneic mouse model of EOC. Animals were treated with ABT-510 (100 mg/kg per day) alone or in combination with cisplatin (2 mg/kg per 3 days) or paclitaxel (10 mg/kg per 2 days) at 60 days after tumor induction. Radiolabeled and fluorescently labeled paclitaxel demonstrated a significant increase in tumor uptake after ABT-510 treatment. Combined treatment with ABT-510 and cisplatin or paclitaxel resulted in a significant increase in tumor cell and tumor endothelial cell apoptosis and a resultant decrease in ovarian tumor size. Combined treatment also regressed secondary lesions and eliminated the presence of abdominal ascites. The results from this study show that through vessel normalization, ABT-510 increases uptake of chemotherapy drugs and can induce regression of advanced ovarian cancer.
NASA Astrophysics Data System (ADS)
Tautz-Weinert, J.; Watson, S. J.
2016-09-01
Effective condition monitoring techniques for wind turbines are needed to improve maintenance processes and reduce operational costs. Normal behaviour modelling of temperatures with information from other sensors can help to detect wear processes in drive trains. In a case study, modelling of bearing and generator temperatures is investigated with operational data from the SCADA systems of more than 100 turbines. The focus is here on automated training and testing on a farm level to enable an on-line system, which will detect failures without human interpretation. Modelling based on linear combinations, artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines and Gaussian process regression is compared. The selection of suitable modelling inputs is discussed with cross-correlation analyses and a sensitivity study, which reveals that the investigated modelling techniques react in different ways to an increased number of inputs. The case study highlights advantages of modelling with linear combinations and artificial neural networks in a feedforward configuration.
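Normal-behaviour modelling with a linear combination can be sketched as follows: a bearing-temperature model is fitted on synthetic "healthy" SCADA data, and a later sample is flagged when its residual exceeds a threshold derived from the training residuals. All signals, coefficients and the fault magnitude are invented:

```python
# Sketch: linear normal-behaviour model with a 3-sigma residual alarm.
import random

random.seed(9)

def bearing_temp(power, ambient, fault=0.0):
    """Synthetic SCADA bearing temperature: linear in power and ambient."""
    return 30 + 0.02 * power + 0.8 * ambient + fault + random.gauss(0, 0.3)

train = [(random.uniform(0, 2000), random.uniform(-5, 25)) for _ in range(500)]
y = [bearing_temp(p, a) for p, a in train]

# two-slope least squares on centred data (power and ambient as inputs)
n = len(train)
mp = sum(p for p, _ in train) / n
ma = sum(a for _, a in train) / n
my = sum(y) / n
cp = [p - mp for p, _ in train]
ca = [a - ma for _, a in train]
cy = [v - my for v in y]
spp = sum(v * v for v in cp)
saa = sum(v * v for v in ca)
spa = sum(u * v for u, v in zip(cp, ca))
spy = sum(u * v for u, v in zip(cp, cy))
say = sum(u * v for u, v in zip(ca, cy))
det = spp * saa - spa * spa
b_p = (saa * spy - spa * say) / det
b_a = (spp * say - spa * spy) / det
b0 = my - b_p * mp - b_a * ma

def predict(power, ambient):
    return b0 + b_p * power + b_a * ambient

resid = [v - predict(p, a) for (p, a), v in zip(train, y)]
threshold = 3 * (sum(r * r for r in resid) / n) ** 0.5   # ~3-sigma alarm level

faulty = bearing_temp(1000, 10, fault=5.0)   # simulated overheating bearing
alarm = abs(faulty - predict(1000, 10)) > threshold
```

The same residual-monitoring logic applies unchanged when the linear combination is swapped for a neural network or Gaussian process model, which is what makes the farm-level comparison in the abstract possible.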
Determination of riverbank erosion probability using Locally Weighted Logistic Regression
NASA Astrophysics Data System (ADS)
Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos
2015-04-01
Riverbank erosion is a natural geomorphological process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of vulnerable locations. An alternative to the usual hydrodynamic models for predicting vulnerable locations is to quantify the probability of erosion occurrence, by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Erosion occurrence can thus be described by a regression model on independent variables considered to affect the erosion process. Because the impact of such variables may vary spatially, a non-stationary regression model is preferred over a stationary equivalent, and Locally Weighted Regression (LWR) is proposed as a suitable choice. LWR allows model parameters to vary over space, reflecting spatial heterogeneity, by assigning local weights to the independent variables. To predict the binary presence or absence of erosion, LWR is combined with logistic regression, a form of regression analysis for categorical dependent variables in which the probabilities of the possible outcomes are modelled as a logistic function of the independent variables. The resulting method, Locally Weighted Logistic Regression (LWLR), predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables.
The erosion occurrence probability can be calculated in conjunction with the model deviance for the independent variables tested. The most straightforward measure of goodness of fit is the G statistic: a simple and effective way to evaluate the efficiency of the logistic regression model and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of riverbank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence function, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied, as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist in managing erosion and flooding events. Acknowledgements: This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
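The local weighting idea with the tricubic function can be sketched for the simpler continuous case, a locally weighted linear fit at a query point rather than the full logistic variant; the data, bandwidth and true curve below are invented:

```python
# Sketch: locally weighted linear regression with tricubic weights.
import math, random

random.seed(13)
xs = [i / 50 for i in range(101)]                  # locations 0.0 .. 2.0
ys = [math.sin(3 * x) + random.gauss(0, 0.05) for x in xs]

def tricube(d, h):
    """Tricubic weight: (1 - |d/h|^3)^3 inside the bandwidth h, else 0."""
    u = abs(d) / h
    return (1 - u ** 3) ** 3 if u < 1 else 0.0

def lwr_predict(x0, h=0.3):
    """Weighted linear fit around x0; weights vanish beyond the bandwidth."""
    w = [tricube(x - x0, h) for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    b = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys)) / \
        sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    return my + b * (x0 - mx)

est = lwr_predict(1.0)             # true value is sin(3.0), about 0.141
```

In LWLR the same tricubic weights enter the likelihood of a logistic model instead of a least-squares fit, so the fitted coefficients, and hence the erosion probability, vary smoothly over space.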
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded an accuracy at best similar to that of linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data.
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
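As a rough illustration of the non-linear kernel approach discussed above, kernel ridge regression with an RBF kernel can be fitted to simulated genotype data. The marker coding, kernel parameters, and the simulated additive phenotype are assumptions made for this sketch, not the study's setup:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
# Toy genotypes: 200 animals x 50 SNP markers coded 0/1/2
X = rng.integers(0, 3, size=(200, 50)).astype(float)
effects = rng.normal(size=50)
y = X @ effects * 0.3 + rng.normal(size=200)   # simulated phenotype

train, test = slice(0, 160), slice(160, 200)
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.01)
model.fit(X[train], y[train])

# Accuracy as in the paper: correlation of observed vs. predicted values
acc = np.corrcoef(y[test], model.predict(X[test]))[0, 1]
print(round(acc, 2))
```

With a purely additive simulated trait, as here, the RBF model has no advantage over a linear one, which mirrors the paper's finding that the two performed very similarly.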
Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán
2017-01-01
Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. This flexibility means that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks by varying which connectivity descriptor combinations are used to train the network. PMID:29089883
A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.
2014-01-01
A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates, and two different error analyses of the model weight prediction, are also discussed in the appendices of the paper.
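The two-step "wind-off" fit described above can be sketched numerically: first estimate the model weight by least squares from attitude-dependent force components, then feed that estimate into the fit of the center of gravity coordinates. The load equations and sign conventions below are simplified placeholders for illustration, not the paper's actual balance equations:

```python
import numpy as np

# Hypothetical wind-off data: pitch angles (rad) and measured balance loads
theta = np.radians([-10, -5, 0, 5, 10, 15])
W_true, xcg, zcg = 50.0, 0.20, 0.05
rng = np.random.default_rng(2)
N = W_true * np.cos(theta) + rng.normal(0, 0.05, theta.size)   # normal force
A = -W_true * np.sin(theta) + rng.normal(0, 0.05, theta.size)  # axial force
M = W_true * (xcg * np.cos(theta) + zcg * np.sin(theta)) \
    + rng.normal(0, 0.01, theta.size)                          # pitching moment

# Step 1: least squares estimate of the model weight from N and A
G = np.concatenate([np.cos(theta), -np.sin(theta)])
z = np.concatenate([N, A])
W_hat = (G @ z) / (G @ G)

# Step 2: use W_hat as an input when fitting the CG coordinates from M
H = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
cg_hat, *_ = np.linalg.lstsq(H, M, rcond=None)
print(round(W_hat, 1), np.round(cg_hat, 3))
```

Separating the two fits keeps each least squares problem linear, which is what makes closed-form estimators and confidence intervals practical in a data system.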
Development of Ensemble Model Based Water Demand Forecasting Model
NASA Astrophysics Data System (ADS)
Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop
2014-05-01
The Smart Water Grid (SWG) concept has emerged globally over the last decade and has gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, which has led to various studies on energy saving and improvement of water supply reliability. Existing water demand forecasting models fall into two groups in terms of how they model and predict time series behavior. One considers embedded patterns such as seasonality, periodicity and trends; the other is an autoregressive model using short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of these models is that predictability of water demand is limited at sub-daily scales because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting that allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, the Bagging approach introduced by Breiman (1996), used to derive weighting factors for the individual models. The individual forecasting models used in this study are linear regression, polynomial regression, multivariate adaptive regression splines (MARS) and support vector machines (SVM). The concepts are demonstrated through application to data observed at water plants at several locations in South Korea. Keywords: water demand, non-linear model, ensemble forecasting model, uncertainty. Acknowledgements This subject is supported by the Korea Ministry of Environment as "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)".
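One simple way to derive ensemble weighting factors from cross-validation, in the spirit of the multi-model scheme described above, is to weight each member by its inverse out-of-fold error. The inverse-error rule, the member models, and the toy demand series below are assumptions for illustration, not the authors' exact Bagging scheme:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 200)
X = t.reshape(-1, 1)
y = 10 + np.sin(t) + 0.1 * t + rng.normal(0, 0.2, t.size)  # hourly demand proxy

models = {
    "linear": LinearRegression(),
    "poly3": make_pipeline(PolynomialFeatures(3), LinearRegression()),
    "svr": SVR(C=10.0),
}
# Out-of-fold predictions give each model's cross-validated error ...
oof = {k: cross_val_predict(m, X, y, cv=5) for k, m in models.items()}
rmse = {k: np.sqrt(np.mean((y - p) ** 2)) for k, p in oof.items()}
# ... and inverse-error weights for the ensemble combination
inv = {k: 1.0 / e for k, e in rmse.items()}
weights = {k: v / sum(inv.values()) for k, v in inv.items()}
ensemble = sum(weights[k] * oof[k] for k in models)
print({k: round(w, 2) for k, w in weights.items()})
```

For real hourly demand data, a time-aware split (e.g. forward chaining) would be preferable to plain k-fold cross-validation.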
Menon, Ramkumar; Bhat, Geeta; Saade, George R; Spratt, Heidi
2014-04-01
To develop classification models of demographic/clinical factors and biomarker data from spontaneous preterm birth in African Americans and Caucasians. Secondary analysis of biomarker data using multivariate adaptive regression splines (MARS), a supervised machine-learning algorithm. Analysis of data on 36 biomarkers from 191 women was reduced by MARS to develop predictive models for preterm birth in African Americans and Caucasians. Samples were maternal plasma and cord plasma collected at admission for preterm or term labor, and amniotic fluid at delivery. Data were partitioned into training and testing sets. Variable importance, a relative indicator (0-100%), and the area under the receiver operating characteristic curve (AUC) characterized results. Multivariate adaptive regression splines generated models for combined and racially stratified biomarker data; clinical and demographic data did not contribute to the model. Racial stratification of the data produced distinct models in all three compartments. In African American maternal plasma samples, IL-1RA, TNF-α, angiopoietin 2, TNFRI, IL-5, MIP1α, IL-1β and TGF-α modeled preterm birth (AUC train: 0.98, AUC test: 0.86); in Caucasians, TNFR1, ICAM-1 and IL-1RA contributed to the model (AUC train: 0.84, AUC test: 0.68). African American cord plasma samples produced a model with IL-12P70 and IL-8 (AUC train: 0.82, AUC test: 0.66); cord plasma in Caucasians modeled IGFII, PDGFBB, TGF-β1, IL-12P70, and TIMP1 (AUC train: 0.99, AUC test: 0.82). Amniotic fluid in African Americans modeled FasL, TNFRII, RANTES, KGF and IGFI (AUC train: 0.95, AUC test: 0.89), and in Caucasians TNF-α, MCP3, TGF-β3, TNFR1 and angiopoietin 2 (AUC train: 0.94, AUC test: 0.79). Multivariate adaptive regression splines modeled multiple biomarkers associated with preterm birth and demonstrated racial disparity. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.
Complex messages regarding a thin ideal appearing in teenage girls' magazines from 1956 to 2005.
Luff, Gina M; Gray, James J
2009-03-01
Seventeen and YM were assessed from 1956 through 2005 (n=312) to examine changes in the messages about thinness sent to teenage women. Trends were analyzed through an investigation of written, internal content focused on dieting, exercise, or both, while cover models were examined to explore fluctuations in body size. Pearson product-moment correlations and weighted-least-squares linear regression models were used to demonstrate changes over time. The frequency of written content related to exercise and combined plans increased in Seventeen, while a curvilinear relationship between time and content relating to dieting appeared. YM showed a linear increase in content related to dieting, exercise, and combined plans. Average cover model body size increased over time in YM while demonstrating no significant changes in Seventeen. Overall, more written messages about dieting and exercise appeared in teen magazines in 2005 than before, while the average cover model body size increased.
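A weighted-least-squares trend fit of the kind used in such content analyses can be sketched with NumPy's `polyfit`, which accepts per-observation weights. The yearly counts and the weighting scheme here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1956, 2006, dtype=float)
# Toy yearly counts of diet-related articles, trending upward over time
counts = 2 + 0.15 * (years - 1956) + rng.normal(0, 1.0, years.size)
# Hypothetical weights, e.g. reflecting more issues sampled in later years
weights = np.linspace(0.5, 1.5, years.size)

# Weighted least squares line: slope estimates the change per year
slope, intercept = np.polyfit(years, counts, 1, w=weights)
print(round(slope, 2))
```

A quadratic term (`deg=2`) would capture the curvilinear dieting-content trend reported for Seventeen.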
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
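The Nash-Sutcliffe coefficient mentioned among the deviation metrics is straightforward to compute; a minimal implementation:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(nash_sutcliffe(obs, obs))                 # perfect simulation -> 1.0
print(round(nash_sutcliffe(obs, obs + 1.0), 2))  # constant bias -> 0.5
```

Because the denominator is the variance of the observations, the metric can go negative when the simulation is worse than simply predicting the observed mean.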
Floris, Giuseppe; Wozniak, Agnieszka; Sciot, Raf; Li, Haifu; Friedman, Lori; Van Looy, Thomas; Wellens, Jasmien; Vermaelen, Peter; Deroose, Christophe M.; Fletcher, Jonathan A.; Debiec-Rychter, Maria; Schöffski, Patrick
2015-01-01
Introduction: Oncogenic signaling in gastrointestinal stromal tumors (GIST) is sustained via the PI3K/AKT pathway. We used a panel of six GIST xenograft models to assess efficacy of GDC-0941 as single agent or in combination with imatinib (IMA). Experimental design: Nude mice (n=136) were grafted bilaterally with human GIST carrying diverse KIT mutations. Mice were orally dosed over four weeks, grouped as follows: A) control; B) GDC-0941; C) IMA and D) GDC+IMA treatments. Xenograft re-growth after treatment discontinuation was assessed in groups C and D for an additional four weeks. Tumor response was assessed by volume measurements, micro-PET imaging, histopathology and immunoblotting. Moreover, genomic alterations in the PTEN/PI3K/AKT pathway were evaluated. Results: In all models, GDC-0941 caused tumor growth stabilization, inhibiting tumor cell proliferation, but did not induce apoptosis. Under GDC+IMA, profound tumor regression, superior to either treatment alone, was observed. This effect was associated with the best histologic response, a nearly complete proliferation arrest and increased apoptosis. Tumor re-growth assays confirmed superior activity of GDC+IMA over IMA; in three out of six models tumor volume remained reduced and stable even after treatment discontinuation. A positive correlation between response to GDC+IMA and PTEN loss, both on gene and protein levels, was found. Conclusion: GDC+IMA has significant antitumor efficacy in GIST xenografts, inducing more substantial tumor regression, apoptosis and durable effects than IMA. Notably, after treatment withdrawal, tumor regression was sustained in tumors exposed to GDC+IMA, which was not observed under IMA. Assessment of PTEN status may represent a useful predictive biomarker for patient selection. PMID:23231951
NASA Astrophysics Data System (ADS)
Noth, Elizabeth M.; Hammond, S. Katharine; Biging, Gregory S.; Tager, Ira B.
2011-05-01
Background: Polycyclic aromatic hydrocarbons (PAHs) are generated as a byproduct of combustion, and are associated with respiratory symptoms and increased risk of asthma attacks. Objectives: To assign daily, outdoor exposures to participants in the Fresno Asthmatic Children's Environment Study (FACES) using land use regression models for the sum of 4-, 5- and 6-ring PAHs (PAH456). Methods: PAH data were collected daily at the EPA Supersite in Fresno, CA from 10/2000 through 2/2007. From 2/2002 to 2/2003, intensive air pollution sampling was conducted at 83 homes of participants in the FACES study. These measurement data were combined with meteorological data, source data, and other spatial variables to form a land use regression model to assign daily exposure at all FACES homes for all years of the study (2001-2008). Results: The model for daily, outdoor residential PAH456 concentrations accounted for 80% of the between-home variability and 18% of the within-home variability. Both temporal and spatial variables were significant in the model. Traffic characteristics and home heating fuel were the main spatial explanatory variables. Conclusions: Because spatial and temporal distributions of PAHs vary on an intra-urban scale, the location of the child's home within the urban setting plays an important role in the level of exposure that each child has to PAHs.
Boshoff, Magdalena; De Jonge, Maarten; Scheifler, Renaud; Bervoets, Lieven
2014-09-15
The aim of this study was to derive regression-based soil-plant models to predict and compare metal(loid) (i.e. As, Cd, Cu, Pb and Zn) concentrations in plants (grass Agrostis sp./Poa sp. and nettle Urtica dioica L.) among sites with a wide range of metal pollution and a wide variation in soil properties. Regression models were based on the pseudo total (aqua regia) and exchangeable (0.01 M CaCl2) soil metal concentrations. Plant metal concentrations were best explained by the pseudo total soil metal concentrations in combination with soil properties. The most important soil property influencing U. dioica metal concentrations was the clay content, while for grass, organic matter (OM) affected As and pH affected Cu and Zn. In this study multiple linear regression models proved functional in predicting metal accumulation in plants on a regional scale. With the proposed models based on the pseudo total metal concentration, the proportions of variation explained for As, Cd, Cu, Pb and Zn were 0.56, 0.47, 0.59, 0.61 and 0.30 in nettle and 0.46, 0.38, 0.27, 0.50 and 0.28 in grass. Copyright © 2014 Elsevier B.V. All rights reserved.
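Soil-plant regression models of this kind are commonly fitted on log-transformed concentrations with soil properties as covariates. The following sketch uses invented coefficients and simulated soils purely to illustrate the model form, not the study's fitted parameters:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 120
soil_cd = rng.lognormal(0.0, 1.0, n)   # pseudo total Cd in soil (mg/kg)
clay = rng.uniform(5, 40, n)           # clay content (%)
ph = rng.uniform(4.5, 7.5, n)
# Toy plant uptake: rises with soil Cd, damped by clay content and pH
plant_cd = np.exp(0.8 * np.log(soil_cd) - 0.02 * clay - 0.3 * ph
                  + rng.normal(0, 0.3, n))

# Multiple linear regression on the log scale: log(plant) ~ log(soil) + clay + pH
X = np.column_stack([np.log(soil_cd), clay, ph])
model = LinearRegression().fit(X, np.log(plant_cd))
r2 = model.score(X, np.log(plant_cd))
print(round(r2, 2))
```

The log-log form keeps predicted concentrations positive and turns the multiplicative effect of soil properties into additive regression terms.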
Chowdhury, Nilotpal; Sapru, Shantanu
2015-01-01
Microarray analysis has revolutionized the role of genomic prognostication in breast cancer. However, most studies are single series studies, and suffer from methodological problems. We sought to use a meta-analytic approach in combining multiple publicly available datasets, while correcting for batch effects, to reach a more robust oncogenomic analysis. The aim of the present study was to find gene sets associated with distant metastasis free survival (DMFS) in systemically untreated, node-negative breast cancer patients, from publicly available genomic microarray datasets. Four microarray series (having 742 patients) were selected after a systematic search and combined. Cox regression for each gene was done for the combined dataset (univariate, as well as multivariate - adjusted for expression of Cell cycle related genes) and for the 4 major molecular subtypes. The centre and microarray batch effects were adjusted by including them as random effects variables. The Cox regression coefficients for each analysis were then ranked and subjected to a Gene Set Enrichment Analysis (GSEA). Gene sets representing protein translation were independently negatively associated with metastasis in the Luminal A and Luminal B subtypes, but positively associated with metastasis in Basal tumors. Proteinaceous extracellular matrix (ECM) gene set expression was positively associated with metastasis, after adjustment for expression of cell cycle related genes on the combined dataset. Finally, the positive association of the proliferation-related genes with metastases was confirmed. To the best of our knowledge, the results depicting mixed prognostic significance of protein translation in breast cancer subtypes are being reported for the first time. We attribute this to our study combining multiple series and performing a more robust meta-analytic Cox regression modeling on the combined dataset, thus discovering 'hidden' associations. 
This methodology seems to yield new and interesting results and may be used as a tool to guide new research.
Structural features that predict real-value fluctuations of globular proteins
Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke
2012-01-01
It is crucial to consider dynamics for understanding the biological function of proteins. We used a large number of molecular dynamics trajectories of non-homologous proteins as references and examined static structural features of proteins that are most relevant to fluctuations. We examined correlation of individual structural features with fluctuations and further investigated effective combinations of features for predicting the real-value of residue fluctuations using support vector regression. It was found that some structural features have higher correlation than crystallographic B-factors with fluctuations observed in molecular dynamics trajectories. Moreover, support vector regression that uses combinations of static structural features showed accurate prediction of fluctuations with an average Pearson's correlation coefficient of 0.669 and a root mean square error of 1.04 Å. This correlation coefficient is higher than the one observed for the prediction by the Gaussian network model. An advantage of the developed method over the Gaussian network models is that the former predicts the real-value of fluctuation. The results help improve our understanding of relationships between protein structure and fluctuation. Furthermore, the developed method provides a convenient practical way to predict fluctuations of proteins using easily computed static structural features of proteins. PMID:22328193
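Predicting real-valued fluctuations with support vector regression can be sketched as below. The per-residue features and their assumed relationship to the fluctuation are invented for illustration; the paper's actual feature set and training data differ:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 300
# Hypothetical per-residue features: contact number, relative solvent
# accessibility, and crystallographic B-factor
contacts = rng.integers(2, 20, n).astype(float)
access = rng.uniform(0, 1, n)
bfactor = rng.uniform(10, 60, n)
# Simulated fluctuation: fewer contacts and more exposure -> larger RMSF
rmsf = 2.0 - 0.07 * contacts + 1.5 * access + rng.normal(0, 0.2, n)

X = np.column_stack([contacts, access, bfactor])
Xtr, Xte, ytr, yte = train_test_split(X, rmsf, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
model.fit(Xtr, ytr)
# Evaluate as in the paper: Pearson correlation of predicted vs. observed
r = np.corrcoef(yte, model.predict(Xte))[0, 1]
print(round(r, 2))
```

Standardizing the features matters here because SVR's RBF kernel is distance-based and the raw feature scales differ by orders of magnitude.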
AGR-1 Thermocouple Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Einerson
2012-05-01
This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R&D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, the AGR-1 test configuration and test procedure, an overview of AGR-1 measured data, and an overview of physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of statistical analysis methods developed in (Pham and Einerson 2010) for AGR-1 measured data qualification within the NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties resulting from the combined analysis are identified.
This includes demonstration that such a combined analysis led to important insights for reducing uncertainty in presentation of AGR-1 measured data (Chapter 2) and interpretation of simulation results (Chapter 3). The statistics-based, simulation-aided experimental control procedure for the future AGR tests is developed and demonstrated in Chapter 4. The procedure for controlling the target fuel temperature (capsule peak or average) is based on regression functions of thermocouple readings and other relevant parameters, accounting for possible changes in both physical and thermal conditions and in instrument performance.
Cai, Rui; Wang, Shisheng; Tang, Bo; Li, Yueqing; Zhao, Weijie
2018-01-01
Sea cucumber is the major tonic seafood worldwide, and geographical origin traceability is an important part of its quality and safety control. In this work, a non-destructive method for origin traceability of sea cucumber (Apostichopus japonicus) from northern China Sea and East China Sea using near infrared spectroscopy (NIRS) and multivariate analysis methods was proposed. Total fat contents of 189 fresh sea cucumber samples were determined and partial least-squares (PLS) regression was used to establish the quantitative NIRS model. The ordered predictor selection algorithm was performed to select feasible wavelength regions for the construction of PLS and identification models. The identification model was developed by principal component analysis combined with Mahalanobis distance and scaling to the first range algorithms. In the test set of the optimum PLS models, the root mean square error of prediction was 0.45, and correlation coefficient was 0.90. The correct classification rates of 100% were obtained in both identification calibration model and test model. The overall results indicated that NIRS method combined with chemometric analysis was a suitable tool for origin traceability and identification of fresh sea cucumber samples from nine origins in China. PMID:29410795
Velstra, Inge-Marie; Bolliger, Marc; Krebs, Jörg; Rietman, Johan S; Curt, Armin
2016-05-01
To determine which single or combined upper limb muscles, as defined by the International Standards for the Neurological Classification of Spinal Cord Injury (ISNCSCI) upper extremity motor score (UEMS) and the Graded Redefined Assessment of Strength, Sensibility, and Prehension (GRASSP), best predict upper limb function and independence in activities of daily living (ADLs), and to assess the predictive value of qualitative grasp movements (QlG) on upper limb function in individuals with acute tetraplegia. As part of a Europe-wide, prospective, longitudinal, multicenter study, ISNCSCI, GRASSP, and Spinal Cord Independence Measure (SCIM III) scores were recorded at 1 and 6 months after SCI. For prediction of upper limb function and ADLs, a logistic regression model and unbiased recursive partitioning conditional inference tree (URP-CTREE) were used. Logistic regression and URP-CTREE revealed that a combination of ISNCSCI and GRASSP muscles (to a maximum of 4) demonstrated the best prediction (specificity and sensitivity ranged from 81.8% to 96.0%) of upper limb function and identified homogeneous outcome cohorts at 6 months. The URP-CTREE model with the QlG predictors for upper limb function showed similar results. Prediction of upper limb function can be achieved through a combination of defined, specific upper limb muscles assessed in the ISNCSCI and GRASSP. A combination of a limited number of proximal and distal muscles along with an assessment of grasping movements can be applied for clinical decision making for rehabilitation interventions and clinical trials. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.
2014-01-01
High-resolution gridded daily data sets are essential for natural resource management and for analyses of climate change and its effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km² over topographically complex areas. Methods are tested considering two different observation densities and different rainfall amounts. We used rainfall data recorded at 74 and 145 observational stations, respectively, spread over the 5760 km² of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression (GWR) methods, including a step-wise selection of covariables, were tested, as well as inverse distance weighting (IDW), kriging, and 3D thin-plate splines (TPS). The relative rank of the different techniques changes with station density and rainfall amount. Our results indicate that TPS performs well for low station density and large-scale events, and also when coupled with regression models, but performs poorly for high station density. The opposite is observed for IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
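Of the neighboring interpolation techniques compared above, inverse distance weighting is the simplest; a minimal sketch with four synthetic rain gauges (illustrative coordinates and values, not the Cyprus network):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse distance weighting: weights fall off as 1/d**power.
    A target point coinciding with a station returns the station value."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    z = np.empty(len(xy_new))
    for i, di in enumerate(d):
        hit = di < 1e-12
        if hit.any():
            z[i] = z_obs[hit][0]        # exact match with a station
        else:
            w = 1.0 / di ** power
            z[i] = w @ z_obs / w.sum()  # weighted mean of station values
    return z

# Four gauges (x, y) with daily rainfall in mm, and two target grid points
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain = np.array([10.0, 20.0, 30.0, 40.0])
targets = np.array([[0.5, 0.5], [0.0, 0.0]])
est = idw(stations, rain, targets)
```

The center point, equidistant from all four gauges, receives their plain mean; IDW estimates always stay within the range of the observed values.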
Lee, Jung-Rok; Chan, Carmel T; Ruderman, Daniel; Chuang, Hui-Yen; Gaster, Richard S; Atallah, Michelle; Mallick, Parag; Lowe, Scott W; Gambhir, Sanjiv S; Wang, Shan X
2017-11-08
Each immunoglobulin isotype has unique immune effector functions. The contribution of these functions in the elimination of pathogens and tumors can be determined by monitoring quantitative temporal changes in isotype levels. Here, we developed a novel technique using magneto-nanosensors based on the effect of giant magnetoresistance (GMR) for longitudinal monitoring of total and antigen-specific isotype levels with high precision, using as little as 1 nL of serum. Combining in vitro serologic measurements with in vivo imaging techniques, we investigated the role of the antibody response in the regression of firefly luciferase (FL)-labeled lymphoma cells in spleen, kidney, and lymph nodes in a syngeneic Burkitt's lymphoma mouse model. Regression status was determined by whole body bioluminescent imaging (BLI). The magneto-nanosensors revealed that anti-FL IgG2a and total IgG2a were elevated and sustained in regression mice compared to non-regression mice (p < 0.05). This platform shows promise for monitoring immunotherapy, vaccination, and autoimmunity.
Ham, Jungoh; Costa, Carlotta; Sano, Renata; Lochmann, Timothy L.; Sennott, Erin M.; Patel, Neha U.; Dastur, Anahita; Gomez-Caraballo, Maria; Krytska, Kateryna; Hata, Aaron N.; Floros, Konstantinos V.; Hughes, Mark T.; Jakubik, Charles T.; Heisey, Daniel A.R.; Ferrell, Justin T.; Bristol, Molly L.; March, Ryan J.; Yates, Craig; Hicks, Mark A.; Nakajima, Wataru; Gowda, Madhu; Windle, Brad E.; Dozmorov, Mikhail G.; Garnett, Mathew J.; McDermott, Ultan; Harada, Hisashi; Taylor, Shirley M.; Morgan, Iain M.; Benes, Cyril H.; Engelman, Jeffrey A.; Mossé, Yael P.; Faber, Anthony C.
2016-01-01
Fewer than half of children with high-risk neuroblastoma survive. Many of these tumors harbor high-level amplification of MYCN, which correlates with poor disease outcome. Using data from our large drug screen we predicted, and subsequently demonstrated, that MYCN-amplified neuroblastomas are sensitive to the BCL-2 inhibitor ABT-199. This sensitivity occurs in part through low anti-apoptotic BCL-xL expression, high pro-apoptotic NOXA expression, and paradoxical, MYCN-driven upregulation of NOXA. Screening for enhancers of ABT-199 sensitivity in MYCN-amplified neuroblastomas, we demonstrate that the Aurora Kinase A inhibitor MLN8237 combines with ABT-199 to induce widespread apoptosis. In diverse models of MYCN-amplified neuroblastoma, including a patient-derived xenograft model, this combination uniformly induced tumor shrinkage, and in multiple instances led to complete tumor regression. PMID:26859456
NASA Astrophysics Data System (ADS)
Nowicki, M. A.; Hearne, M.; Thompson, E.; Wald, D. J.
2012-12-01
Seismically induced landslides present a costly and often fatal threat in many mountainous regions. Substantial effort has been invested to understand where seismically induced landslides may occur in the future. Both slope-stability methods and, more recently, statistical approaches to the problem are described throughout the literature. Though some regional efforts have succeeded, no uniformly agreed-upon method is available for predicting the likelihood and spatial extent of seismically induced landslides. For use in the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, we would like to routinely make such estimates, in near-real time, around the globe. Here we use the recently produced USGS ShakeMap Atlas of historic earthquakes to develop an empirical landslide probability model. We focus on recent events, yet include any digitally mapped landslide inventories for which well-constrained ShakeMaps are also available. We combine these uniform estimates of the input shaking (e.g., peak acceleration and velocity) with broadly available susceptibility proxies, such as topographic slope and surface geology. The resulting database is used to build a predictive model of the probability of landslide occurrence with logistic regression. The landslide database includes observations from the Northridge, California (1994); Wenchuan, China (2008); Chi-Chi, Taiwan (1999); and Chuetsu, Japan (2004) earthquakes; we also provide ShakeMaps for moderate-sized events without landslides for proper model training and testing. The performance of the regression model is assessed with both statistical goodness-of-fit metrics and a qualitative review of whether or not the model is able to capture the spatial extent of landslides for each event.
Part of our goal is to determine which variables can be employed based on globally-available data or proxies, and whether or not modeling results from one region are transferrable to geomorphologically-similar regions that lack proper calibration events. Combined with near-real time ShakeMaps, we anticipate using our model to make generalized predictions of whether or not (and if so, where) landslides are likely to occur for earthquakes around the globe; we also intend to incorporate this functionality into the USGS PAGER system.
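The form of such a logistic-regression probability model can be illustrated with a short sketch. The intercept and coefficients below are hypothetical placeholders chosen for illustration, not the fitted USGS values:

```python
import numpy as np

def landslide_probability(pga_g, slope_deg, b0=-4.0, b_pga=3.0, b_slope=0.08):
    """Logistic model: P(landslide) = sigmoid(b0 + b_pga*PGA + b_slope*slope).
    All coefficients here are hypothetical, for illustration only."""
    eta = b0 + b_pga * np.asarray(pga_g) + b_slope * np.asarray(slope_deg)
    return 1.0 / (1.0 + np.exp(-eta))

# Probability grid over shaking (peak ground acceleration, g) and slope (deg)
pga = np.linspace(0.0, 1.0, 5)
slope = np.linspace(0.0, 45.0, 5)
P, S = np.meshgrid(pga, slope)
prob = landslide_probability(P, S)
```

By construction the output is a valid probability everywhere and increases monotonically with shaking and slope, which is what makes the logistic form convenient for near-real-time gridded products.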
Elovainio, Marko; van den Bos, Kees; Linna, Anne; Kivimäki, Mika; Ala-Mursula, Leena; Pentti, Jaana; Vahtera, Jussi
2005-12-01
We examined whether the combination of uncertainty (lack of work-time control, and negative changes at work) and organizational justice (i.e., justice of decision-making procedures and interpersonal treatment at work) contributes to sickness absence. A total of 7083 male and 24,317 female Finnish public sector employees completed questionnaires designed to assess organizational justice, workload, and other factors. Hierarchical regression showed that, after adjustment for age, income, and health behaviors, low procedural and interactional justice were related to long sickness absence spells. In accordance with the uncertainty management model, these associations were dependent on experienced work-time control and perceived changes at work.
Pathological Narcissism and Interpersonal Behavior in Daily Life
Roche, Michael J.; Pincus, Aaron L.; Conroy, David E.; Hyde, Amanda L.; Ram, Nilam
2014-01-01
The Cognitive-Affective Processing System (CAPS) has been proposed as a useful meta-framework for integrating contextual differences in situations with individual differences in personality pathology. In this article, we evaluated the potential of combining the CAPS meta-framework and contemporary interpersonal theory to investigate how individual differences in pathological narcissism influenced interpersonal functioning in daily life. University students (N = 184) completed event-contingent reports about interpersonal interactions across a 7-day diary study. Using multilevel regression models, we found that combinations of narcissistic expression (grandiosity, vulnerability) were associated with different interpersonal behavior patterns reflective of interpersonal dysfunction. These results are among the first to empirically demonstrate the usefulness of the CAPS model to conceptualize personality pathology through the patterning of if-then interpersonal processes. PMID:23205698
Prediction of strontium bromide laser efficiency using cluster and decision tree analysis
NASA Astrophysics Data System (ADS)
Iliev, Iliycho; Gocheva-Ilieva, Snezhana; Kulin, Chavdar
2018-01-01
The subject of investigation is a new high-powered strontium bromide (SrBr2) vapor laser emitting in a multiline region of wavelengths. The laser is an alternative to atomic strontium lasers and free-electron lasers, especially at the 6.45 μm line, which is used in surgery for medical processing of biological tissues and bones with minimal damage. In this paper the experimental data from measurements of operational and output characteristics of the laser are statistically processed by means of cluster analysis and tree-based regression techniques. The aim is to extract from the available data the more important relationships and dependences that influence the increase of the overall laser efficiency. A set of cluster models is constructed and analyzed. Using different cluster methods, it is shown that the seven investigated operational characteristics (laser tube diameter, length, supplied electrical power, and others) and the laser efficiency combine into two clusters. Regression tree models built with the Classification and Regression Trees (CART) technique yield dependences that predict the values of efficiency, and especially the maximum efficiency, with over 95% accuracy.
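The tree-based regression idea can be sketched with a single split: a depth-1 CART "stump" chooses the threshold that minimizes the total squared error of the two leaf means, and a full CART simply recurses on each side. The data below are a synthetic step function, not the laser measurements:

```python
import numpy as np

def best_split(x, z):
    """Find the threshold on x minimizing the summed SSE of two leaf means."""
    order = np.argsort(x)
    xs, zs = x[order], z[order]
    best = (np.inf, None, None, None)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                      # cannot split between equal values
        left, right = zs[:i], zs[i:]
        sse = ((left - left.mean()) ** 2).sum() + \
              ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            thr = 0.5 * (xs[i - 1] + xs[i])
            best = (sse, thr, left.mean(), right.mean())
    return best[1:]   # threshold, left-leaf mean, right-leaf mean

# Step-like data: "efficiency" jumps once the operating variable passes 5
x = np.arange(10, dtype=float)
z = np.where(x < 5, 1.0, 3.0)
thr, lo, hi = best_split(x, z)
```

The stump recovers the step exactly (threshold 4.5, leaf means 1.0 and 3.0); real CART models stack many such splits over several predictors.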
NASA Astrophysics Data System (ADS)
Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade
2015-12-01
The main aim of this research work is to build a new practical hybrid regression model to predict the milling tool wear in a regular cut as well as entry cut and exit cut of a milling tool. The model was based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). This optimization mechanism involved kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, which is based on statistical learning theory, was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. To accomplish the objective of this study, the experimental dataset represents experiments from runs on a milling machine under various operating conditions. In this way, data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on the milling tool flank wear with a view to proposing improvements to the milling machine. Firstly, this hybrid PSO-SVM-based regression model captures the main perception of statistical learning theory in order to obtain a good prediction of the dependence between the flank wear (output variable) and the input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a determination coefficient of 0.95 was obtained. The agreement of this model with experimental data confirmed its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are presented.
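The PSO component of such a hybrid can be sketched on its own. Here a minimal particle swarm minimizes a stand-in quadratic objective with a known optimum; in the PSO-SVM setting, the objective would instead be the cross-validated SVM regression error as a function of the kernel hyperparameters (e.g. C and gamma):

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iter=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                     # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in objective with known minimum at (1, -2)
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, fbest = pso(obj, np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```

Because the swarm only ever evaluates the objective, swapping the quadratic for an SVM cross-validation loss requires no change to the optimizer itself.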
Imaging genetics approach to predict progression of Parkinson's diseases.
Mansu Kim; Seong-Jin Son; Hyunjin Park
2017-07-01
Imaging genetics is a tool to extract genetic variants associated with both clinical phenotypes and imaging information. The approach can extract additional genetic variants compared to conventional approaches, to better investigate various disease conditions. Here, we applied imaging genetics to study Parkinson's disease (PD). We aimed to extract significant features derived from imaging genetics and neuroimaging. We built a regression model based on extracted significant features combining genetics and neuroimaging to better predict clinical scores of PD progression (i.e., MDS-UPDRS). Our model yielded high correlation (r = 0.697, p < 0.001) and low root mean squared error (8.36) between predicted and actual MDS-UPDRS scores. The neuroimaging predictors of the regression model (from 123I-ioflupane SPECT) were computed using an independent component analysis approach. Genetic features were computed using an imaging genetics approach based on the identified neuroimaging features as intermediate phenotypes. Joint modeling of neuroimaging and genetics could provide complementary information and thus has the potential to provide further insight into the pathophysiology of PD. Our model included newly found neuroimaging features and genetic variants which need further investigation.
NASA Astrophysics Data System (ADS)
Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia
2017-06-01
The GWOLR model combines geographically weighted regression (GWR) and ordinal logistic regression (OLR) models. Its parameter estimation employs maximum likelihood estimation. Such parameter estimation, however, yields a difficult-to-solve system of nonlinear equations, and therefore a numerical approximation approach is required. The iterative approximation approach, in general, uses the Newton-Raphson (NR) method. The NR method has a disadvantage: its Hessian matrix must be recomputed from second derivatives at every iteration, and the iteration does not always converge. With regard to this matter, the NR method is modified by replacing its Hessian matrix with the Fisher information matrix, an approach termed Fisher scoring (FS). The present research seeks to determine GWOLR model parameter estimation using the Fisher scoring method and applies the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities make the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.
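For the simpler binary-logit case, Fisher scoring reduces to the familiar iteratively reweighted least squares update, since for the canonical logit link the expected (Fisher) information coincides with the observed Hessian. A minimal sketch on synthetic data (not the DHF data or the full GWOLR model):

```python
import numpy as np

def fisher_scoring_logit(X, y, n_iter=25):
    """Fisher scoring for logistic regression:
    beta <- beta + I(beta)^-1 * score, with I = X' W X (W = p(1-p))."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                     # variance weights
        score = X.T @ (y - p)                 # gradient of the log-likelihood
        info = X.T @ (X * W[:, None])         # Fisher information matrix
        beta = beta + np.linalg.solve(info, score)
    return beta

# Synthetic data with true slope 2.0 (noisy labels avoid perfect separation)
rng = np.random.default_rng(1)
x = rng.normal(size=500)
p_true = 1.0 / (1.0 + np.exp(-2.0 * x))
y = (rng.random(500) < p_true).astype(float)
X = np.column_stack([np.ones(500), x])
beta = fisher_scoring_logit(X, y)
```

For non-canonical links (as in ordinal models) the expected and observed information differ, which is exactly where substituting the Fisher information stabilizes the iteration.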
Spatial Bayesian Latent Factor Regression Modeling of Coordinate-based Meta-analysis Data
Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D.; Nichols, Thomas E.
2017-01-01
Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the paper are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to 1) identify areas of consistent activation; and 2) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterised as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. PMID:28498564
Fire frequency in the Interior Columbia River Basin: Building regional models from fire history data
McKenzie, D.; Peterson, D.L.; Agee, James K.
2000-01-01
Fire frequency affects vegetation composition and successional pathways; thus it is essential to understand fire regimes in order to manage natural resources at broad spatial scales. Fire history data are lacking for many regions for which fire management decisions are being made, so models are needed to estimate past fire frequency where local data are not yet available. We developed multiple regression models and tree-based (classification and regression tree, or CART) models to predict fire return intervals across the interior Columbia River basin at 1-km resolution, using georeferenced fire history, potential vegetation, cover type, and precipitation databases. The models combined semiqualitative methods and rigorous statistics. The fire history data are of uneven quality; some estimates are based on only one tree, and many are not cross-dated. Therefore, we weighted the models based on data quality and performed a sensitivity analysis of the effects on the models of estimation errors that are due to lack of cross-dating. The regression models predict fire return intervals from 1 to 375 yr for forested areas, whereas the tree-based models predict a range of 8 to 150 yr. Both types of models predict latitudinal and elevational gradients of increasing fire return intervals. Examination of regional-scale output suggests that, although the tree-based models explain more of the variation in the original data, the regression models are less likely to produce extrapolation errors. Thus, the models serve complementary purposes in elucidating the relationships among fire frequency, the predictor variables, and spatial scale. The models can provide local managers with quantitative information and provide data to initialize coarse-scale fire-effects models, although predictions for individual sites should be treated with caution because of the varying quality and uneven spatial coverage of the fire history database. 
The models also demonstrate the integration of qualitative and quantitative methods when requisite data for fully quantitative models are unavailable. They can be tested by comparing new, independent fire history reconstructions against their predictions and can be continually updated, as better fire history data become available.
NASA Astrophysics Data System (ADS)
Boucher, Thomas F.; Ozanne, Marie V.; Carmosino, Marco L.; Dyar, M. Darby; Mahadevan, Sridhar; Breves, Elly A.; Lepore, Kate H.; Clegg, Samuel M.
2015-05-01
The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other type of LIBS data is calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py), and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models, using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5 the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved more generalizable, with better predictive performance.
These results are attributed to the high dimensionality of the data (6144 channels) relative to the small number of samples studied. The best-performing models were SVR-Lin for SiO2, MgO, Fe2O3, and Na2O; lasso for Al2O3; elastic net for MnO; and PLS-1 for CaO, TiO2, and K2O. Although these differences in model performance between methods were identified, most of the models produce comparable results; at p ≤ 0.05, all techniques except kNN produced statistically indistinguishable results. It is likely that a combination of models could be used together to yield a lower total error of prediction, depending on the requirements of the user.
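The model-comparison step, a Wilcoxon signed-rank test on paired per-sample squared prediction errors, can be sketched without a statistics library using the normal approximation to the signed-rank statistic. The two error vectors below are synthetic, with model B deliberately made worse:

```python
import math
import numpy as np

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test on paired samples.
    Normal approximation; zero differences dropped; ties in |d| are not
    rank-averaged (adequate for continuous prediction errors)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]
    n = len(d)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0   # ranks 1..n of |d|
    w_plus = ranks[d > 0].sum()                       # sum of positive ranks
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))         # two-sided p-value

# Squared prediction errors of two hypothetical models on 30 samples;
# model B is systematically worse, so the test should reject.
rng = np.random.default_rng(2)
err_a = rng.random(30)
err_b = err_a + 0.5 + 0.1 * rng.random(30)
p_val = wilcoxon_signed_rank(err_a, err_b)
```

Because every paired difference favors model A here, the p-value is far below 0.05; with real PRESS components the differences would be mixed and the test correspondingly less decisive.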
NASA Astrophysics Data System (ADS)
Qie, G.; Wang, G.; Wang, M.
2016-12-01
Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, resulting in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR), and k-nearest neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. The methodology was examined in Shenzhen City in Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables statistically significantly contributing to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained with the integrated models than with traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
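The linear spectral unmixing step models each pixel spectrum as a fractional combination of endmember spectra. A minimal sum-to-one constrained least-squares sketch, with three synthetic endmembers over six made-up "bands" rather than actual Landsat 8 spectra:

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Solve pixel ~= endmembers @ f with sum(f) = 1 enforced as a heavily
    weighted extra equation; negative fractions are clipped and renormalized."""
    n = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones((1, n))])
    b = np.append(pixel, weight)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

# Synthetic endmember spectra (columns: vegetation, soil, shadow)
E = np.array([[0.05, 0.08, 0.04, 0.50, 0.30, 0.20],
              [0.10, 0.15, 0.20, 0.25, 0.30, 0.35],
              [0.02, 0.02, 0.02, 0.03, 0.03, 0.03]]).T   # shape (6, 3)
f_true = np.array([0.6, 0.3, 0.1])
pixel = E @ f_true            # a perfectly mixed pixel
f_est = unmix(pixel, E)
```

For an exact mixture the true fractions are recovered; the vegetation fraction obtained this way is what then enters the regression models as a covariate.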
Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia
NASA Astrophysics Data System (ADS)
Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg
2013-03-01
Accurate predictions of sea level at different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, adaptive neuro-fuzzy inference system (ANFIS) and artificial neural network (ANN). Multiple linear regression (MLR) was used for selecting the optimal input combinations (lag times) of hourly sea level. The optimal input combination was found to comprise the current sea level and the five previous hourly values. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian, and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient, and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error, and variance account statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt algorithms were most suitable for training the ANN models. ANFIS and ANN models gave similar forecasts and performed better than ARMA models developed for the same purpose for all prediction intervals.
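The MLR step, regressing the next hourly level on the current level plus the five previous values, can be sketched as an ordinary least-squares fit on a lag matrix. The series below is a synthetic two-constituent tidal signal, not the Darwin Harbor record:

```python
import numpy as np

# Synthetic "hourly sea level": two tidal constituents (~12.42 h and 24 h)
t = np.arange(600, dtype=float)
level = np.sin(2 * np.pi * t / 12.42) + 0.3 * np.sin(2 * np.pi * t / 24.0)

n_lags, horizon = 6, 1            # current level + 5 previous values, 1 h ahead
rows, targets = [], []
for t0 in range(n_lags - 1, len(level) - horizon):
    rows.append(level[t0 - n_lags + 1 : t0 + 1][::-1])   # x(t), x(t-1), ...
    targets.append(level[t0 + horizon])
X, y = np.array(rows), np.array(targets)

A = np.column_stack([np.ones(len(X)), X])                # MLR with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - ((y - A @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

A noise-free sum of two sinusoids satisfies a low-order linear recurrence, so six lags reproduce it almost exactly; with real sea levels the same lag matrix would feed the ANFIS and ANN models instead.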
Koseki, Shige; Nonaka, Junko
2012-09-01
The objective of this study was to develop a probabilistic model to predict the end of lag time (λ) during the growth of Bacillus cereus vegetative cells as a function of temperature, pH, and salt concentration using logistic regression. The developed λ model was subsequently combined with a logistic differential equation to simulate bacterial numbers over time. To develop a novel model for λ, we determined whether bacterial growth had begun, i.e., whether λ had ended, at each time point during the growth kinetics. The growth of B. cereus was evaluated by optical density (OD) measurements in culture media for various pHs (5.5 ∼ 7.0) and salt concentrations (0.5 ∼ 2.0%) at static temperatures (10 ∼ 20°C). The probability of the end of λ was modeled using dichotomous judgments obtained at each OD measurement point concerning whether a significant increase had been observed. The probability of the end of λ was described as a function of time, temperature, pH, and salt concentration and showed a high goodness of fit. The λ model was validated with independent data sets of B. cereus growth in culture media and foods, indicating acceptable performance. Furthermore, the λ model, in combination with a logistic differential equation, enabled a simulation of the population of B. cereus in various foods over time at static and/or fluctuating temperatures with high accuracy. Thus, this newly developed modeling procedure enables the description of λ using observable environmental parameters without any conceptual assumptions and the simulation of bacterial numbers over time with the use of a logistic differential equation.
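The simulation step, a logistic differential equation switched on once the predicted lag time λ has ended, can be sketched with a forward-Euler integration. The parameter values below are illustrative placeholders, not the fitted B. cereus values:

```python
import numpy as np

def simulate_growth(n0, n_max, mu, lam, t_end, dt=0.01):
    """Logistic growth dN/dt = mu*N*(1 - N/Nmax), held at N0 until t >= lam."""
    times = np.arange(0.0, t_end + dt, dt)
    n = np.empty_like(times)
    n[0] = n0
    for i in range(1, len(times)):
        growing = times[i - 1] >= lam            # lag phase has ended
        dn = mu * n[i - 1] * (1.0 - n[i - 1] / n_max) if growing else 0.0
        n[i] = n[i - 1] + dt * dn
    return times, n

# Illustrative parameters: initial count 1e2, maximum 1e8 cells/mL,
# growth rate 0.5 per hour, lag time 5 h, simulated over 60 h
times, counts = simulate_growth(n0=1e2, n_max=1e8, mu=0.5, lam=5.0, t_end=60.0)
```

The population stays flat through the lag phase, then follows the familiar sigmoid up to the maximum density; in the paper's procedure λ itself would come from the fitted logistic-regression probability model rather than being fixed.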
Faber, Anthony C.; Coffee, Erin M.; Costa, Carlotta; Dastur, Anahita; Ebi, Hiromichi; Hata, Aaron N.; Yeo, Alan T.; Edelman, Elena J.; Song, Youngchul; Tam, Ah Ting; Boisvert, Jessica L.; Milano, Randy J.; Roper, Jatin; Kodack, David P.; Jain, Rakesh K.; Corcoran, Ryan B.; Rivera, Miguel N.; Ramaswamy, Sridhar; Hung, Kenneth E.; Benes, Cyril H.; Engelman, Jeffrey A.
2014-01-01
Colorectal cancers (CRCs) harboring KRAS or BRAF mutations are refractory to current targeted therapies. Using data from a high-throughput drug screen, we have developed a novel therapeutic strategy that combines targeting of the apoptotic machinery using the BCL-2 family inhibitor ABT-263 (navitoclax) in combination with a TORC1/2 inhibitor, AZD8055. This combination leads to efficient apoptosis specifically in KRAS mutant (MT) and BRAF MT but not wild-type (WT) CRC cells. This specific susceptibility results from TORC1/2 inhibition leading to suppression of MCL-1 expression in mutant, but not WT CRCs, leading to abrogation of BIM/MCL-1 complexes. This combination strategy leads to tumor regressions in both KRAS MT colorectal cancer xenograft and genetically-engineered mouse models of CRC, but not in the corresponding KRAS WT CRC models. These data suggest that the combination of BCL-2/XL inhibitors with TORC1/2 inhibitors constitutes a promising targeted therapy strategy to treat these recalcitrant cancers. PMID:24163374
Monthly streamflow forecasting in the Rhine basin
NASA Astrophysics Data System (ADS)
Schick, Simon; Rössler, Ole; Weingartner, Rolf
2017-04-01
Forecasting seasonal streamflow of the Rhine river is of societal relevance, as the Rhine is an important waterway and water resource in Western Europe. The present study investigates the predictability of monthly mean streamflow at lead times of zero, one, and two months, with a focus on the potential benefits of integrating seasonal climate predictions. Specifically, we use seasonal predictions of precipitation and surface air temperature released by the European Centre for Medium-Range Weather Forecasts (ECMWF) for a regression analysis. In order to disentangle forecast uncertainty, the 'Reverse Ensemble Streamflow Prediction' framework is adapted here to the context of regression: by using appropriate subsets of predictors, the regression model is constrained to either the initial conditions, the meteorological forcing, or both. An operational application is mimicked by equipping the model with the seasonal climate predictions provided by ECMWF. Finally, to mitigate the spatial aggregation of the meteorological fields, the model is also applied at the subcatchment scale and the resulting predictions are combined afterwards. The hindcast experiment is carried out for the period 1982-2011 in cross-validation mode at two gauging stations, namely the Rhine at Lobith and Basel. The results show that monthly forecasts are skillful with respect to climatology only at zero lead time. In addition, at zero lead time the integration of seasonal climate predictions decreases the mean absolute error by 5 to 10 percent compared to forecasts based solely on initial conditions. This reduction is most likely induced by the seasonal prediction of precipitation rather than of air temperature. The study is completed by benchmarking the regression model against runoff simulations from ECMWF's seasonal forecast system. By simply using basin averages followed by a linear bias correction, these runoff simulations translate well to monthly streamflow.
Though the regression model is only slightly outperformed, we argue that runoff out of the land surface component of seasonal climate forecasting systems is an interesting option when it comes to seasonal streamflow forecasting in large river basins.
NASA Astrophysics Data System (ADS)
Gonçalves, Karen dos Santos; Winkler, Mirko S.; Benchimol-Barbosa, Paulo Roberto; de Hoogh, Kees; Artaxo, Paulo Eduardo; de Souza Hacon, Sandra; Schindler, Christian; Künzli, Nino
2018-07-01
Epidemiological studies generally use particulate matter measurements with diameter less than 2.5 μm (PM2.5) from monitoring networks. Satellite aerosol optical depth (AOD) data have considerable potential for predicting PM2.5 concentrations, and thus provide an alternative method for producing knowledge regarding the level of pollution and its health impact in areas where no ground PM2.5 measurements are available. This is the case in the Brazilian Amazon rainforest region, where forest fires are frequent sources of high pollution. In this study, we applied a non-linear model for predicting PM2.5 concentrations from AOD retrievals using interaction terms between average temperature, relative humidity, the sine and cosine of the date over a period of 365.25 days, and the square of the lagged relative residual. Regression performance statistics were tested by comparing the goodness of fit and R2 based on results from linear and non-linear regression for six different models. The non-linear prediction showed the best performance, explaining on average 82% of the daily PM2.5 concentrations over the whole period studied. In the context of Amazonia, this was the first study to predict PM2.5 concentrations using the latest high-resolution AOD products, combined with testing of a non-linear model's performance. Our results permitted a reliable prediction based on the AOD-PM2.5 relationship and set the basis for further investigations of air pollution impacts in the complex context of the Brazilian Amazon region.
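The seasonal part of such a prediction model, sine and cosine terms with a 365.25-day period alongside AOD and meteorological covariates, can be sketched as a design-matrix regression. The data below are synthetic, and this linear sketch omits the paper's interaction and lagged-residual terms:

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(730, dtype=float)
season = np.sin(2 * np.pi * days / 365.25)
temp = 25.0 + 5.0 * season + rng.normal(0.0, 1.0, len(days))
aod = 0.2 + 0.1 * rng.random(len(days))

# Synthetic "PM2.5": an AOD signal plus seasonality plus noise
pm25 = 30.0 * aod + 8.0 * season + rng.normal(0.0, 1.0, len(days))

A = np.column_stack([
    np.ones(len(days)),                    # intercept
    aod,                                   # satellite AOD
    np.sin(2 * np.pi * days / 365.25),     # seasonal harmonics
    np.cos(2 * np.pi * days / 365.25),
    temp,                                  # meteorological covariate
])
coef, *_ = np.linalg.lstsq(A, pm25, rcond=None)
resid = pm25 - A @ coef
r2 = 1 - (resid ** 2).sum() / ((pm25 - pm25.mean()) ** 2).sum()
```

On this synthetic data the fit recovers the AOD coefficient and most of the variance; the non-linear terms in the paper exist precisely because the real AOD-PM2.5 relationship is not this clean.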
Kamphuis, C; Frank, E; Burke, J K; Verkerk, G A; Jago, J G
2013-01-01
The hypothesis was that sensors currently available on farm that monitor behavioral and physiological characteristics have potential for the detection of lameness in dairy cows. This was tested by applying additive logistic regression to variables derived from sensor data. Data were collected between November 2010 and June 2012 on 5 commercial pasture-based dairy farms. Sensor data from weigh scales (liveweight), pedometers (activity), and milk meters (milking order, unadjusted and adjusted milk yield in the first 2 min of milking, total milk yield, and milking duration) were collected at every milking from 4,904 cows. Lameness events were recorded by farmers who were trained in detecting lameness before the study commenced. A total of 318 lameness events affecting 292 cows were available for statistical analyses. For each lameness event, the lame cow's sensor data for a time period of 14 d before observation date were randomly matched by farm and date to 10 healthy cows (i.e., cows that were not lame and had no other health event recorded for the matched time period). Sensor data relating to the 14-d time periods were used for developing univariable (using one source of sensor data) and multivariable (using multiple sources of sensor data) models. Model development involved the use of additive logistic regression by applying the LogitBoost algorithm with a regression tree as base learner. The model's output was a probability estimate for lameness, given the sensor data collected during the 14-d time period. Models were validated using leave-one-farm-out cross-validation and, as a result of this validation, each cow in the data set (318 lame and 3,180 nonlame cows) received a probability estimate for lameness. Based on the area under the curve (AUC), results indicated that univariable models had low predictive potential, with the highest AUC values found for liveweight (AUC=0.66), activity (AUC=0.60), and milking order (AUC=0.65). 
Combining these 3 sensors improved AUC to 0.74. Detection performance of this combined model varied between farms, but it consistently and significantly outperformed the univariable models across farms at a fixed specificity of 80%. Still, detection performance was not high enough to be implemented in practice on large, pasture-based dairy farms. Future research may improve performance by developing variables, based on liveweight, activity, and milking-order sensor data, that better describe changes in sensor data patterns when cows go lame. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
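The leave-one-farm-out validation described above can be sketched with grouped cross-validation on synthetic sensor data; scikit-learn's gradient boosting with log-loss is used here as a stand-in for LogitBoost with tree base learners, which is an assumption rather than the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

# Sketch of leave-one-farm-out validation on synthetic sensor data.
rng = np.random.default_rng(1)
n = 600
X = rng.standard_normal((n, 3))     # liveweight, activity, milking order
y = (X[:, 0] + 0.5 * X[:, 2] + rng.standard_normal(n) > 1).astype(int)
farm = rng.integers(0, 5, size=n)   # five farms as validation groups

probs = np.empty(n)
for train, test in LeaveOneGroupOut().split(X, y, groups=farm):
    model = GradientBoostingClassifier(n_estimators=50, max_depth=2,
                                       random_state=0).fit(X[train], y[train])
    probs[test] = model.predict_proba(X[test])[:, 1]

auc = roc_auc_score(y, probs)       # out-of-farm probability estimates
```

Holding out whole farms, rather than random rows, is what makes the AUC estimate honest about between-farm variability.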
Innovating patient care delivery: DSRIP's interrupted time series analysis paradigm.
Shenoy, Amrita G; Begley, Charles E; Revere, Lee; Linder, Stephen H; Daiger, Stephen P
2017-12-08
Adoption of a Medicaid Section 1115 waiver is one of many ways of innovating the healthcare delivery system. The Delivery System Reform Incentive Payment (DSRIP) pool, one of the waiver's two funding pools, has four categories: infrastructure development, program innovation and redesign, quality improvement reporting, and population health improvement. A metric of the fourth category, the preventable hospitalization (PH) rate, was analyzed for eight conditions across two time periods, pre-reporting years (2010-2012) and post-reporting years (2013-2015), for two hospital cohorts, DSRIP-participating and non-participating hospitals. The study examines how DSRIP affected PH rates for the eight conditions in both hospital cohorts within the two time periods. Each of the eight PH rates was regressed as the dependent variable, with time, intervention, and post-DSRIP intervention as independent variables. The eight PH rates were then consolidated into one rate and regressed on the same independent variables to evaluate the overall impact of DSRIP. An interrupted time series regression was performed after accounting for auto-correlation, stationarity and seasonality in the dataset. In the individual regression models, PH rates showed statistically significant coefficients for seven of the eight conditions in DSRIP-participating hospitals. In the combined regression model, the PH rate showed a statistically significant decrease, with negative regression coefficients in DSRIP-participating hospitals compared with positive coefficients in non-participating hospitals. Several macro- and micro-level factors likely contributed to DSRIP-participating hospitals outperforming non-participating hospitals.
Healthcare organization/provider collaboration, support from healthcare professionals, DSRIP's design, state reimbursement, and coordination in care delivery methods may have contributed to DSRIP's success. Level of evidence: IV, a retrospective cohort study based on longitudinal data. Copyright © 2017 Elsevier Inc. All rights reserved.
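The interrupted time series design above, with a level and slope change at the start of the reporting years, can be sketched as a segmented regression; the monthly PH rates and coefficient values here are synthetic, and the autocorrelation and seasonality corrections the study applied are omitted.

```python
import numpy as np

# Minimal segmented-regression sketch of an interrupted time series:
# a level change (b2) and slope change (b3) at the reporting start.
t = np.arange(72)                   # 2010-2015, monthly
post = (t >= 36).astype(float)      # reporting years begin in month 36
t_post = post * (t - 36)            # time since intervention
rate = 50 + 0.05 * t - 4 * post - 0.2 * t_post \
       + np.random.default_rng(2).normal(0, 0.5, 72)

# Design: intercept, pre-trend, level change, post-trend change
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
b0, b1, b2, b3 = np.linalg.lstsq(X, rate, rcond=None)[0]
```

A full analysis would model the residuals (e.g., with ARMA errors) before interpreting the significance of b2 and b3.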
Yamazaki, Takeshi; Takeda, Hisato; Hagiya, Koichi; Yamaguchi, Satoshi; Sasaki, Osamu
2018-03-13
Because lactation periods in dairy cows lengthen with increasing total milk production, it is important to predict individual productivities after 305 days in milk (DIM) to determine the optimal lactation period. We therefore examined whether the random regression (RR) coefficients for 306 to 450 DIM (M2) can be predicted from those for the first 305 DIM (M1) using a random regression model. We analyzed test-day milk records from 85,690 Holstein cows in their first lactations and 131,727 cows in their later (second to fifth) lactations. Data in M1 and M2 were analyzed separately using different single-trait RR animal models. We then performed a multiple regression analysis of the RR coefficients of M2 on those of M1 during the first and later lactations. First-order Legendre polynomials were practical covariates of random regression for the milk yields of M2. All RR coefficients for the additive genetic (AG) effect and the intercept for the permanent environmental (PE) effect of M2 had moderate to strong correlations with the intercept for the AG effect of M1. The coefficients of determination for multiple regression of the combined intercepts for the AG and PE effects of M2 on the coefficients for the AG effect of M1 were moderate to high. The daily milk yields of M2 predicted using the RR coefficients for the AG effect of M1 were highly correlated with those obtained using the coefficients of M2. Milk production after 305 DIM can therefore be predicted from the RR coefficient estimates of the AG effect during the first 305 DIM.
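The first-order Legendre covariates mentioned above can be sketched directly with NumPy: days in milk are mapped to [-1, 1] and a daily-yield curve is built from a pair of random-regression coefficients. The coefficient values are illustrative assumptions, not estimates from the study.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: evaluate first-order Legendre covariates on standardized DIM
# and form a daily-yield curve for the 306-450 DIM period (M2).
dim = np.arange(306, 451)
x = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1  # map to [-1, 1]

coeffs = np.array([28.0, -1.5])           # intercept and slope (illustrative)
yield_curve = legendre.legval(x, coeffs)  # daily milk yield, kg
```

In the full model, each cow has its own AG and PE coefficient vectors, and predicting M2 amounts to regressing those vectors on the M1 coefficients.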
NASA Astrophysics Data System (ADS)
Ebrahimi, Hadi; Rajaee, Taher
2017-01-01
Simulation of groundwater level (GWL) fluctuations is an important task in the management of groundwater resources. In this study, the effect of wavelet analysis on the training of artificial neural network (ANN), multiple linear regression (MLR) and support vector regression (SVR) approaches was investigated, and the ANN, MLR and SVR models, along with the wavelet-ANN (WNN), wavelet-MLR (WLR) and wavelet-SVR (WSVR) models, were compared for simulating GWL one month ahead. The only variable used to develop the models was the monthly GWL data recorded over a period of 11 years from two wells in the Qom plain, Iran. The results showed that decomposing the GWL time series into several sub-time series markedly improved the training of the models. For both wells 1 and 2, the Meyer and Db5 wavelets produced better results than the other wavelets, which indicated that wavelet types behave similarly across similar case studies. The optimal number of delays was 6 months, which seems to reflect the underlying natural phenomena. The best WNN model, using the Meyer mother wavelet with two decomposition levels, simulated one month ahead with RMSE values of 0.069 m and 0.154 m for wells 1 and 2, respectively. The RMSE values for the WLR model were 0.058 m and 0.111 m, and for the WSVR model 0.136 m and 0.060 m, for wells 1 and 2, respectively.
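The decomposition step that feeds the wavelet-hybrid models can be sketched with a one-level Haar transform written in plain NumPy; the study used Meyer and Db5 wavelets (normally obtained from a wavelet library such as PyWavelets), so Haar here is a deliberately simple stand-in on synthetic GWL data.

```python
import numpy as np

# One-level Haar DWT as a minimal stand-in for the Meyer/Db5
# decompositions: the approximation and detail sub-series are what the
# wavelet-hybrid (W*) models are trained on.
def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                     # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency part
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency part
    return approx, detail

# Synthetic monthly groundwater levels (11 years = 132 months)
gwl = np.sin(np.linspace(0, 8 * np.pi, 132)) \
      + 0.1 * np.random.default_rng(3).standard_normal(132)
approx, detail = haar_dwt(gwl)         # two sub-series for model training
```

Deeper decomposition levels are obtained by applying the same transform recursively to the approximation series.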
Ciura, Krzesimir; Belka, Mariusz; Kawczak, Piotr; Bączek, Tomasz; Markuszewski, Michał J; Nowakowska, Joanna
2017-09-05
The objective of this paper is to build QSRR/QSAR models for predicting blood-brain barrier (BBB) permeability. The models are based on salting-out thin-layer chromatography (SOTLC) constants and calculated molecular descriptors. Among chromatographic methods, SOTLC was chosen because its mobile phases are free of organic solvent; as a consequence, they are less toxic and have a lower environmental impact than classical reversed-phase liquid chromatography (RPLC). Three stationary phases were examined: silica gel, cellulose plates, and neutral aluminum oxide. The model set of solutes spans a wide range of log BB values, containing compounds that cross the BBB readily and molecules poorly distributed to the brain, including drugs acting on the nervous system as well as peripherally acting drugs. Additionally, three regression models were compared: multiple linear regression (MLR), partial least squares (PLS) and orthogonal partial least squares (OPLS). The designed QSRR/QSAR models could be useful for predicting the BBB permeability of newly synthesized compounds in the drug development pipeline and are attractive alternatives to time-consuming and demanding direct methods for log BB measurement. The study also showed that the different regression techniques can differ significantly in model performance, as measured by R² and Q²; hence it is strongly suggested to evaluate all available options, such as MLR, PLS and OPLS. Copyright © 2017 Elsevier B.V. All rights reserved.
ESTIMATING TREATMENT EFFECTS ON HEALTHCARE COSTS UNDER EXOGENEITY: IS THERE A ‘MAGIC BULLET’?
Polsky, Daniel; Manning, Willard G.
2011-01-01
Methods for estimating average treatment effects, under the assumption of no unmeasured confounders, include regression models; propensity score adjustments using stratification, weighting, or matching; and doubly robust estimators (a combination of both). Researchers continue to debate the best estimator for outcomes such as health care cost data, as these are usually characterized by an asymmetric distribution and heterogeneous treatment effects. Challenges in finding the right specifications for regression models are well documented in the literature, and propensity score estimators have been proposed as alternatives for overcoming them. Using simulations, we find that in moderate-size samples (n = 5000), balancing on propensity scores estimated from saturated specifications can balance the covariate means across treatment arms but fails to balance higher-order moments and covariances amongst covariates. Therefore, unlike regression models, even though a formal model for outcomes is not required, propensity score estimators can be inefficient at best and biased at worst for health care cost data. Our simulation study, designed to take a 'proof by contradiction' approach, shows that no one estimator can be considered the best under all data generating processes for outcomes such as costs. The inverse-propensity weighted estimator is most likely to be unbiased under alternate data generating processes, but it is prone to bias under misspecification of the propensity score model and is inefficient compared to an unbiased regression estimator. Our results show that there are no 'magic bullets' when it comes to estimating treatment effects in health care costs, and care should be taken before naively applying any one estimator to estimate average treatment effects in these data. We illustrate the performance of the alternative methods on a cost dataset on breast cancer treatment. PMID:22199462
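The inverse-propensity-weighted (IPW) estimator discussed above can be sketched on synthetic skewed "cost" data; the data-generating process and sample size here are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the IPW (Hajek-normalized) average treatment effect estimator
# on a synthetic, right-skewed cost outcome.
rng = np.random.default_rng(4)
n = 5000
x = rng.standard_normal((n, 2))                        # confounders
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))
treat = rng.binomial(1, p_treat)
cost = np.exp(1 + 0.4 * treat + 0.3 * x[:, 0]          # log-normal costs
              + rng.normal(0, 0.5, n))

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
w = treat / ps + (1 - treat) / (1 - ps)                # inverse weights
ate = (np.sum(w * treat * cost) / np.sum(w * treat)
       - np.sum(w * (1 - treat) * cost) / np.sum(w * (1 - treat)))
```

The normalization by the summed weights (rather than by n) is what keeps the estimator stable when some propensities are small.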
Black, L E; Brion, G M; Freitas, S J
2007-06-01
Predicting the presence of enteric viruses in surface waters is a complex modeling problem. Multiple water quality parameters are needed that indicate the presence of human fecal material, the load of fecal material, and the amount of time fecal material has been in the environment. This paper presents the results of a multiyear study of raw-water quality at the inlet of a potable-water plant that related 17 physical, chemical, and biological indices to the presence of enteric viruses as indicated by cytopathic changes in cell cultures. Several simple multivariate logistic regression models could be fitted that reliably identified observations of the presence or absence of total culturable virus. The best models developed combined a fecal age indicator (the atypical coliform [AC]/total coliform [TC] ratio), the detectable presence of a human-associated sterol (epicoprostanol) to indicate the fecal source, and one of several fecal load indicators (the levels of Giardia species cysts, coliform bacteria, and coprostanol). The best fit to the data was found when the AC/TC ratio, the presence of epicoprostanol, and the density of fecal coliform bacteria were input into a simple multivariate logistic regression equation, resulting in 84.5% and 78.6% accuracy for the identification of the presence and absence of total culturable virus, respectively. The AC/TC ratio was the most influential input variable in all of the models generated, but producing the best prediction required additional input related to the fecal source and the fecal load. The potential for replacing microbial indicators of fecal load with levels of coprostanol was proposed and evaluated by multivariate logistic regression modeling for the presence and absence of virus.
Pattern Recognition Analysis of Age-Related Retinal Ganglion Cell Signatures in the Human Eye
Yoshioka, Nayuta; Zangerl, Barbara; Nivison-Smith, Lisa; Khuu, Sieu K.; Jones, Bryan W.; Pfeiffer, Rebecca L.; Marc, Robert E.; Kalloniatis, Michael
2017-01-01
Purpose: To characterize macular ganglion cell layer (GCL) changes with age and provide a framework to assess changes in ocular disease. This study used data clustering to analyze macular GCL patterns from optical coherence tomography (OCT) in a large cohort of subjects without ocular disease. Methods: Single eyes of 201 patients evaluated at the Centre for Eye Health (Sydney, Australia) were retrospectively enrolled (age range, 20–85); 8 × 8 grid locations obtained from Spectralis OCT macular scans were analyzed with unsupervised classification into statistically separable classes sharing common GCL thickness and change with age. The resulting classes and gridwise data were fitted with linear and segmented linear regression curves. Additionally, normalized data were analyzed to determine regression as a percentage. The accuracy of each model was examined by comparing the predicted 50-year-old-equivalent macular GCL thickness for the entire cohort to a true 50-year-old reference cohort. Results: Pattern recognition clustered GCL thickness across the macula into five to eight spatially concentric classes. An F-test demonstrated segmented linear regression to be the most appropriate model for macular GCL change. The pattern recognition–derived and normalized models revealed less difference between the predicted macular GCL thickness and the reference cohort (average ± SD 0.19 ± 0.92 and −0.30 ± 0.61 μm) than a gridwise model (average ± SD 0.62 ± 1.43 μm). Conclusions: Pattern recognition successfully identified statistically separable macular areas that undergo a segmented linear reduction with age. This regression model better predicted macular GCL thickness. The various unique spatial patterns revealed by pattern recognition, combined with core GCL thickness data, provide a framework to analyze GCL loss in ocular disease. PMID:28632847
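The segmented (two-piece) linear fit favored by the F-test above can be sketched as a grid search over candidate breakpoints; the thickness values, breakpoint, and slope here are synthetic assumptions, not the study's cohort data.

```python
import numpy as np

# Grid-search fit of a segmented linear model: GCL thickness is flat
# until a breakpoint, then declines linearly. Data are synthetic.
rng = np.random.default_rng(5)
age = np.linspace(20, 85, 200)
thick = np.where(age < 50, 50.0, 50.0 - 0.15 * (age - 50)) \
        + rng.normal(0, 0.3, 200)

def sse_for_break(bp):
    # hinge design: intercept plus a slope that starts at the breakpoint
    X = np.column_stack([np.ones_like(age), np.clip(age - bp, 0, None)])
    resid = thick - X @ np.linalg.lstsq(X, thick, rcond=None)[0]
    return np.sum(resid ** 2)

candidates = np.arange(30, 76)
best_bp = candidates[np.argmin([sse_for_break(b) for b in candidates])]
```

Comparing the minimum SSE of this model against a single straight line is exactly the kind of comparison the F-test in the abstract formalizes.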
Yamagata, Tetsuo; Zanelli, Ugo; Gallemann, Dieter; Perrin, Dominique; Dolgos, Hugues; Petersson, Carl
2017-09-01
1. We compared the direct scaling, regression model equation and so-called "Poulin et al." methods for scaling clearance (CL) from in vitro intrinsic clearance (CLint) measured in human hepatocytes, using two sets of compounds: one reference set comprising 20 compounds with known elimination pathways, and one external evaluation set of 17 compounds in development at Merck (MS). 2. A 90% prospective confidence interval was calculated using the reference set. This interval was found relevant for the regression equation method. The three outliers identified were justified on the basis of their elimination mechanisms. 3. The direct scaling method showed a systematic underestimation of clearance in both the reference and evaluation sets. The "Poulin et al." and regression equation methods showed no obvious bias in either set. 4. The regression model equation was slightly superior to the "Poulin et al." method in the reference set, with a better absolute average fold error (AAFE) of 1.3 compared to 1.6. A larger difference was observed in the evaluation set, where the regression and "Poulin et al." methods resulted in AAFEs of 1.7 and 2.6, respectively (removing the three compounds with known issues mentioned above). A similar pattern was observed for the correlation coefficient. Based on these data, we suggest the regression equation method combined with a prospective confidence interval as the first choice for extrapolating human in vivo hepatic metabolic clearance from in vitro systems.
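The AAFE metric quoted above has a standard closed form, AAFE = 10 ^ mean(|log10(predicted / observed)|), which can be sketched directly; the example values are made up for illustration.

```python
import numpy as np

# Absolute average fold error (AAFE): a fold-change accuracy metric that
# treats over- and under-prediction symmetrically on the log scale.
def aafe(pred, obs):
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 10 ** np.mean(np.abs(np.log10(pred / obs)))

# One 2-fold over-prediction and one 2-fold under-prediction:
result = aafe([2.0, 5.0], [1.0, 10.0])  # → 2.0
```

An AAFE of 1.0 means perfect prediction; 1.3 (the regression method's reference-set value) means predictions are off by about 1.3-fold on average.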
Using Time Series Analysis to Predict Cardiac Arrest in a PICU.
Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P
2015-11-01
To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Retrospective cohort study. Thirty-one bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. None. One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and 87% area under the receiver operating characteristic curve. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and 98% area under the receiver operating characteristic curve. Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions that included time series trend analysis and built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.
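The gain from adding time-series trend analysis to a multivariate snapshot, as reported above, can be sketched by extracting a least-squares slope from each series and feeding it to a support vector machine; the data, feature set, and effect size are synthetic assumptions, not the PICU dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Sketch: a trend feature (per-series least-squares slope) alongside the
# latest value, classified with an SVM, mirroring the
# "multivariate + trend analysis" feature set.
rng = np.random.default_rng(6)
n, t = 400, 30
series = rng.standard_normal((n, t)).cumsum(axis=1)   # synthetic vitals
y = rng.binomial(1, 0.5, n)
series[y == 1] += np.linspace(0, 6, t)                # cases drift upward

tt = np.arange(t)
slope = ((series - series.mean(1, keepdims=True)) @ (tt - tt.mean())) \
        / np.sum((tt - tt.mean()) ** 2)
X = np.column_stack([series[:, -1], slope])           # last value + trend

model = SVC(probability=True, random_state=0).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
```

The slope captures direction of change that a single snapshot value cannot, which is the intuition behind the accuracy gain reported in the abstract.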
2014-01-01
Background: There have been large-scale outbreaks of hand, foot and mouth disease (HFMD) in Mainland China over the last decade. These events varied greatly across the country. It is necessary to identify the spatial risk factors and spatial distribution patterns of HFMD for public health control and prevention. Climate risk factors associated with HFMD occurrence have been recognized. However, few studies have discussed the socio-economic determinants of HFMD risk at a spatial scale. Methods: HFMD records in Mainland China in May 2008 were collected. Both climate and socio-economic factors were selected as potential risk exposures for HFMD. Odds ratios (OR) were used to identify the spatial risk factors. A spatial autologistic regression model was employed to obtain OR values for each exposure and model the spatial distribution patterns of HFMD risk. Results: Both climate and socio-economic variables were spatial risk factors for HFMD transmission in Mainland China. The statistically significant risk factors are monthly average precipitation (OR = 1.4354), monthly average temperature (OR = 1.379), monthly average wind speed (OR = 1.186), the number of industrial enterprises above designated size (OR = 17.699), the population density (OR = 1.953), and the proportion of student population (OR = 1.286). The spatial autologistic regression model showed good fit (ROC = 0.817) and prediction accuracy (correct ratio = 78.45%) for HFMD occurrence. The autologistic regression model also reduces the contribution of the residual term in the ordinary logistic regression model significantly, from 17.25 to 1.25 for the odds ratio. Based on the prediction results of the spatial model, we obtained a map of the probability of HFMD occurrence that shows the spatial distribution pattern and local epidemic risk over Mainland China. Conclusions: The autologistic regression model was used to identify spatial risk factors and model spatial risk patterns of HFMD.
HFMD occurrences were found to be spatially heterogeneous over Mainland China, related to both climate and socio-economic variables. The combination of socio-economic and climate exposures can explain HFMD occurrences more comprehensively and objectively than climate exposures alone. The modeled probability of HFMD occurrence at the county level reveals not only the spatial trends, but also the local details of epidemic risk, even in regions with no HFMD case records. PMID:24731248
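The autologistic idea above, augmenting a logistic model's covariates with an autocovariate built from neighboring areas, can be sketched on a toy grid; the grid, exposure variable, and coefficients are synthetic, not the county-level HFMD data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of an autologistic model: the autocovariate is the mean
# occurrence among each cell's 4-neighborhood on a toy spatial grid.
rng = np.random.default_rng(7)
occurrence = rng.binomial(1, 0.4, (20, 20))           # disease occurrence, 0/1
climate = occurrence + rng.normal(0, 0.8, (20, 20))   # correlated exposure

# Mean of the 4-neighborhood (grid edges handled by zero padding)
pad = np.pad(occurrence, 1)
auto = (pad[:-2, 1:-1] + pad[2:, 1:-1]
        + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0

X = np.column_stack([climate.ravel(), auto.ravel()])
model = LogisticRegression().fit(X, occurrence.ravel())
odds_ratios = np.exp(model.coef_[0])                  # OR per covariate
```

Exponentiating the fitted coefficients gives the odds ratios reported in the abstract; the autocovariate absorbs spatial dependence that an ordinary logistic model would leave in the residual.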
Brügemann, K; Gernand, E; von Borstel, U U; König, S
2011-08-01
Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (d in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. 
However, heritabilities and additive genetic variances from the random regression model tended to be slightly higher in the THI range corresponding to cows' comfort zone. Selecting such superior environments for progeny testing can contribute to an accurate genetic differentiation among selection candidates. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Hamman, Joseph J.; Hamlet, Alan F.; Fuller, Roger; Grossman, Eric E.
2016-01-01
Current understanding of the combined effects of sea level rise (SLR), storm surge, and changes in river flooding on near-coastal environments is very limited. This project uses a suite of numerical models to examine the combined effects of projected future climate change on flooding in the Skagit floodplain and estuary. Statistically and dynamically downscaled global climate model scenarios from the ECHAM-5 GCM were used as the climate forcings. Unregulated daily river flows were simulated using the VIC hydrology model, and regulated river flows were simulated using the SkagitSim reservoir operations model. Daily tidal anomalies (TA) were calculated using a regression approach based on ENSO and atmospheric pressure forcing simulated by the WRF regional climate model. A 2-D hydrodynamic model was used to estimate water surface elevations in the Skagit floodplain using resampled hourly hydrographs keyed to regulated daily flood flows produced by the reservoir simulation model, and tide predictions adjusted for SLR and TA. Combining peak annual TA with projected sea level rise, the historical (1970–1999) 100-yr peak high water level is exceeded essentially every year by the 2050s. The combination of projected sea level rise and larger floods by the 2080s yields both increased flood inundation area (+74%) and increased average water depth (+25 cm) in the Skagit floodplain during a 100-year flood. Adding sea level rise to the historical FEMA 100-year flood resulted in a 35% increase in inundation area by the 2040s, compared to a 57% increase when both SLR and projected changes in river flow were combined.
Multi-pose facial correction based on Gaussian process with combined kernel function
NASA Astrophysics Data System (ADS)
Shi, Shuyan; Ji, Ruirui; Zhang, Fan
2018-04-01
In order to improve the recognition rate across various poses, this paper proposes a facial correction method based on a Gaussian process that builds a nonlinear regression model between frontal and side faces with a combined kernel function. Face images with horizontal angles from -45° to +45° can be properly corrected to frontal faces. Finally, a support vector machine is employed for face recognition. Experiments on the CAS-PEAL-R1 face database show that the Gaussian process can weaken the influence of pose changes and improve the accuracy of face recognition to a certain extent.
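The combined-kernel idea above, summing kernels so the regression captures both smooth local and global linear structure, can be sketched with scikit-learn's Gaussian process regressor on a synthetic 1-D problem; the data and kernel choices are illustrative, and the actual face-feature mapping is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

# Sketch: Gaussian process regression with a combined (sum) kernel,
# mixing a smooth RBF term, a linear term, and a noise term.
rng = np.random.default_rng(12)
X = rng.uniform(-1, 1, (80, 1))                 # e.g., a pose-angle feature
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 0] + rng.normal(0, 0.05, 80)

kernel = RBF(length_scale=0.5) + DotProduct() + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(X, y)
pred, std = gp.predict(np.array([[0.3]]), return_std=True)
```

The GP also returns a predictive standard deviation, which quantifies how confident the corrected output is at each query point.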
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.
2015-12-01
Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in recent years. Weather forecasting systems are used to build strategies that prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill, as numerical prediction techniques are strengthened by increased super-computing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) using a Bayesian regression approach to improve the prediction of extreme weather events for the Northeast U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of the two atmospheric modeling systems and, similar to the multi-model ensemble approach, to limit their weaknesses, which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study focuses on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ wind speed and wind direction observations from meteorological stations of the National Weather Service (NWS), and on NCEP Stage IV precipitation data mosaicked from regional multi-sensor analyses. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of sixteen of the events as predictors of the seventeenth.
This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set the research steps that will be followed in the future.
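The core of the combination step above, weighting two models' forecasts to minimize RMSE against observations, can be sketched as a least-squares problem; the synthetic "WRF" and "RAMS" series and their error characteristics are assumptions for illustration, and the full Bayesian treatment of the variances is omitted.

```python
import numpy as np

# Minimal sketch: combine two model forecasts with a bias term and
# weights chosen to minimize RMSE against observations.
rng = np.random.default_rng(8)
obs = 10 + 3 * rng.standard_normal(200)       # observed wind speed (m/s)
wrf = obs + rng.normal(1.0, 1.5, 200)         # model 1: biased high
rams = obs + rng.normal(-0.5, 2.0, 200)       # model 2: noisier

X = np.column_stack([np.ones(200), wrf, rams])
w = np.linalg.lstsq(X, obs, rcond=None)[0]    # bias + two model weights
combined = X @ w

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))
```

Because either single model is a special case of the weighted combination, the in-sample RMSE of the combined forecast can never exceed that of the better individual model; skill on out-of-sample storms is what the retrospective experiments test.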
Nattee, Cholwich; Khamsemanan, Nirattaya; Lawtrakul, Luckhana; Toochinda, Pisanu; Hannongbua, Supa
2017-01-01
Malaria is still one of the most serious diseases in tropical regions. This is due in part to the high resistance of the parasites, Plasmodium, the cause of the disease, against available drugs. New potent compounds with high clinical utility are urgently needed. In this work, we created a novel model using a regression tree to study structure-activity relationships and predict the inhibition constant, Ki, of three different antimalarial analogues (trimethoprim, pyrimethamine, and cycloguanil) based on their molecular descriptors. To the best of our knowledge, this work is the first attempt to study the structure-activity relationships of all three analogues combined. The most relevant descriptors and appropriate parameters of the regression tree were selected using extremely randomized trees. These descriptors are water-accessible surface area, log of the aqueous solubility, total hydrophobic van der Waals surface area, and molecular refractivity. Out of all possible combinations of the selected parameters and descriptors, the tree with the strongest coefficient of determination was selected as our prediction model. Predicted Ki values from the proposed model show a strong coefficient of determination, R² = 0.996, against experimental Ki values. From the structure of the regression tree, compounds with a high accessible surface area of all hydrophobic atoms (ASA_H) and low aqueous solubility (Log S) generally possess low Ki values. Our prediction model can also be utilized as a screening test for new antimalarial drug compounds, which may reduce the time and expense of new drug development: new compounds with high predicted Ki should be excluded from further development. We also infer that ASA_H greater than 575.80 together with Log S less than or equal to -4.36 is a sufficient condition for a new compound to possess a low Ki. Copyright © 2016 Elsevier Inc. All rights reserved.
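A regression tree over two descriptors can be sketched with scikit-learn; the synthetic data below bakes in the ASA_H/Log S rule quoted in the abstract purely for illustration, and the tree is not the paper's fitted model.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Sketch: a small regression tree on two descriptors (stand-ins for
# ASA_H and Log S) predicting an inhibition constant Ki. Synthetic data.
rng = np.random.default_rng(9)
asa_h = rng.uniform(300, 800, 300)      # hydrophobic surface area
log_s = rng.uniform(-8, -2, 300)        # aqueous solubility (log)
ki = np.where((asa_h > 575.8) & (log_s <= -4.36), 0.1, 5.0) \
     + rng.normal(0, 0.05, 300)         # low Ki inside the rule's region

X = np.column_stack([asa_h, log_s])
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, ki)
r2 = tree.score(X, ki)
```

Inspecting the learned split thresholds (via `tree.tree_`) is how threshold rules of the "ASA_H > 575.80 and Log S <= -4.36" form are read off a fitted tree.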
Meta-regression analysis of the effect of trans fatty acids on low-density lipoprotein cholesterol.
Allen, Bruce C; Vincent, Melissa J; Liska, DeAnn; Haber, Lynne T
2016-12-01
We conducted a meta-regression of controlled clinical trial data to investigate quantitatively the relationship between dietary intake of industrial trans fatty acids (iTFA) and increased low-density lipoprotein cholesterol (LDL-C). Previous regression analyses included insufficient data to determine the nature of the dose response in the low-dose region and have nonetheless assumed a linear relationship between iTFA intake and LDL-C levels. This work extends the previous work by 1) including additional studies examining low-dose intake (identified using an evidence-mapping procedure); 2) investigating a range of curve shapes, including both linear and nonlinear models; and 3) using Bayesian meta-regression to combine results across trials. We found that, contrary to previous assumptions, the linear model does not acceptably fit the data, while the nonlinear, S-shaped Hill model fits the data well. Based on a conservative estimate of the degree of intra-individual variability in LDL-C (0.1 mmol/L), taken as an estimate of a change in LDL-C that is not adverse, a change in iTFA intake of 2.2% of energy intake (%en) (corresponding to a total iTFA intake of 2.2-2.9%en) does not cause adverse effects on LDL-C. The iTFA intake associated with this change in LDL-C is substantially higher than the average iTFA intake (0.5%en). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
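Fitting the S-shaped Hill model named above can be sketched with a nonlinear least-squares fit; the dose-response data and parameter values below are illustrative assumptions, not the meta-regression's posterior estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit a Hill (sigmoidal) dose-response curve for the change in
# LDL-C as a function of iTFA intake (%en). Synthetic data.
def hill(dose, emax, ed50, n):
    # emax: maximum effect; ed50: half-maximal dose; n: steepness
    return emax * dose ** n / (ed50 ** n + dose ** n)

dose = np.linspace(0, 8, 40)                  # iTFA intake, %en
rng = np.random.default_rng(10)
ldl_change = hill(dose, 0.6, 3.0, 3.0) + rng.normal(0, 0.02, 40)

params, _ = curve_fit(hill, dose, ldl_change, p0=[0.5, 2.0, 2.0],
                      bounds=([0.0, 0.1, 0.5], [2.0, 10.0, 10.0]))
emax, ed50, n_hill = params
```

The flat low-dose shoulder of the fitted curve is what supports the abstract's conclusion that small iTFA intakes produce no adverse change in LDL-C, in contrast to a linear model, which forces a nonzero slope everywhere.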
Linard, Joshua I.
2013-01-01
Mitigating the effects of salt and selenium on water quality in the Grand Valley and lower Gunnison River Basin in western Colorado is a major concern for land managers. Previous modeling indicated that the models could be improved by including more detailed geospatial data and by using a more rigorous model-development method. After evaluating all possible combinations of geospatial variables, four multiple linear regression models were developed to estimate irrigation-season salt yield, nonirrigation-season salt yield, irrigation-season selenium yield, and nonirrigation-season selenium yield. The adjusted r-squared and the residual standard error (in units of log-transformed yield) of the models were, respectively, 0.87 and 2.03 for the irrigation-season salt model, 0.90 and 1.25 for the nonirrigation-season salt model, 0.85 and 2.94 for the irrigation-season selenium model, and 0.93 and 1.75 for the nonirrigation-season selenium model. The four models were used to estimate yields and loads from contributing areas corresponding to 12-digit hydrologic unit codes in the lower Gunnison River Basin study area. Each of the 175 contributing areas was ranked according to its estimated mean seasonal yield of salt and selenium.
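The reported statistic for each of the four models is the adjusted r-squared, which penalizes plain r-squared for the number of predictors. A minimal sketch with hypothetical geospatial predictors and log-transformed yields:

```python
# Hypothetical sketch: ordinary least squares with adjusted R-squared,
# the statistic reported for the salt/selenium yield models.
import numpy as np

rng = np.random.default_rng(2)
n, p = 80, 3                        # 80 contributing areas, 3 geospatial predictors
X = rng.normal(size=(n, p))         # e.g. irrigated area, soil type, geology (assumed)
beta = np.array([1.5, -0.8, 0.5])
y = X @ beta + rng.normal(0, 0.5, n)    # log-transformed seasonal yield

A = np.column_stack([np.ones(n), X])    # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
```

Adjusted r-squared is always below plain r-squared when predictors are present, which is why it is the fairer statistic to report when candidate variable combinations are screened exhaustively, as in this study.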
Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression
NASA Astrophysics Data System (ADS)
Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli
2018-06-01
Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
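The Gaussian Mixture Regression half of the HMM-GMR approach can be sketched by fitting a joint mixture over (antecedent flow, current flow) and taking the conditional mean. This is an illustrative GMR-only sketch on synthetic two-regime data; the paper's HMM component (kernelized K-medoids initialization, Baum-Welch training) is omitted, and all flows are hypothetical.

```python
# Minimal Gaussian Mixture Regression sketch: fit a 2-component GMM jointly on
# (antecedent flow x, current flow y), then form E[y | x] as a responsibility-
# weighted blend of per-component conditional means.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical monthly flows from two "climate states" (dry / wet)
x = np.concatenate([rng.normal(2, 0.3, 200), rng.normal(6, 0.3, 200)])
y = np.concatenate([0.8 * x[:200] + rng.normal(0, 0.1, 200),
                    1.2 * x[200:] + rng.normal(0, 0.1, 200)])
Z = np.column_stack([x, y])

gmm = GaussianMixture(n_components=2, random_state=0).fit(Z)

def gmr_predict(x0):
    """Conditional mean of y given x under the fitted joint mixture."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for x0 (marginal Gaussian density in x)
    px = np.array([w[k] * np.exp(-0.5 * (x0 - mu[k, 0])**2 / cov[k, 0, 0])
                   / np.sqrt(2 * np.pi * cov[k, 0, 0]) for k in range(2)])
    px /= px.sum()
    # Per-component conditional mean: mu_y + (cov_yx / cov_xx) * (x0 - mu_x)
    cond = [mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (x0 - mu[k, 0])
            for k in range(2)]
    return float(np.dot(px, cond))

pred_dry = gmr_predict(2.0)
pred_wet = gmr_predict(6.0)
```

Because each mixture component captures one regime, the conditional mean switches between the two local regressions, which is how GMR handles the multimodal, heteroscedastic behavior the abstract describes.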
Murakami, Takashi; Igarashi, Kentaro; Kawaguchi, Kei; Kiyuna, Tasuku; Zhang, Yong; Zhao, Ming; Hiroshima, Yukihiko; Nelson, Scott D; Dry, Sarah M; Li, Yunfeng; Yanagawa, Jane; Russell, Tara; Federman, Noah; Singh, Arun; Elliott, Irmina; Matsuyama, Ryusei; Chishima, Takashi; Tanaka, Kuniya; Endo, Itaru; Eilber, Fritz C; Hoffman, Robert M
2017-01-31
Osteosarcoma occurs mostly in children and young adults, who are treated with multiple agents in combination with limb-salvage surgery. However, the overall 5-year survival rate for patients with recurrent or metastatic osteosarcoma is 20-30%, which has not improved significantly in over 30 years. Refractory patients would benefit from precise individualized therapy. We report here that a patient-derived osteosarcoma growing in a subcutaneous nude-mouse model was regressed by tumor-targeting Salmonella typhimurium A1-R (S. typhimurium A1-R; P < 0.001 compared with untreated controls). The osteosarcoma was only partially sensitive to the molecular-targeting drug sorafenib, which did not arrest its growth. S. typhimurium A1-R was significantly more effective than sorafenib (P < 0.001). S. typhimurium grew in the treated tumors and caused extensive necrosis of the tumor tissue. These data show that S. typhimurium A1-R is a powerful therapy for an osteosarcoma patient-derived xenograft model.
The prediction of food additives in the fruit juice based on electronic nose with chemometrics.
Qiu, Shanshan; Wang, Jun
2017-09-01
Food additives are added to products to enhance their taste and preserve flavor or appearance. While their use should be restricted to achieving a technological benefit, the contents of food additives should also be strictly controlled. In this study, an E-nose was applied as an alternative to traditional monitoring technologies for determining two food additives, namely benzoic acid and chitosan. For quantitative monitoring, support vector machine (SVM), random forest (RF), extreme learning machine (ELM) and partial least squares regression (PLSR) methods were applied to establish regression models between E-nose signals and the amount of food additives in fruit juices. The monitoring models based on ELM and RF reached higher correlation coefficients (R²) and lower root mean square errors (RMSEs) than models based on PLSR and SVM. This work indicates that an E-nose combined with RF or ELM can be a cost-effective, easy-to-build and rapid detection system for food additive monitoring. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lin, Hualiang; Guo, Yanfei; Kowal, Paul; Airhihenbuwa, Collins O; Di, Qian; Zheng, Yang; Zhao, Xing; Vaughn, Michael G; Howard, Steven; Schootman, Mario; Salinas-Rodriguez, Aaron; Yawson, Alfred E; Arokiasamy, Perianayagam; Manrique-Espinoza, Betty Soledad; Biritwum, Richard B; Rule, Stephen P; Minicuci, Nadia; Naidoo, Nirmala; Chatterji, Somnath; Qian, Zhengmin Min; Ma, Wenjun; Wu, Fan
2017-09-01
Background: Little is known about the joint mental health effects of air pollution and tobacco smoking in low- and middle-income countries. Aims: To investigate the effects of exposure to ambient fine particulate matter pollution (PM2.5) and smoking, and their combined (interactive) effects, on depression. Method: Multilevel logistic regression analysis of baseline data of a prospective cohort study (n = 41 785). The 3-year average concentrations of PM2.5 were estimated using US National Aeronautics and Space Administration satellite data, and depression was diagnosed using a standardised questionnaire. Three-level logistic regression models were applied to examine the associations with depression. Results: The odds ratio (OR) for depression was 1.09 (95% CI 1.01-1.17) per 10 μg/m³ increase in ambient PM2.5, and the association remained after adjusting for potential confounding factors (adjusted OR = 1.10, 95% CI 1.02-1.19). Tobacco smoking (smoking status, frequency, duration and amount) was also significantly associated with depression. There appeared to be a synergistic interaction between ambient PM2.5 and smoking on depression in the additive model, but the interaction was not statistically significant in the multiplicative model. Conclusions: Our study suggests that exposure to ambient PM2.5 may increase the risk of depression, and smoking may enhance this effect. © The Royal College of Psychiatrists 2017.
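The distinction the abstract draws between additive and multiplicative interaction can be made concrete with the Relative Excess Risk due to Interaction (RERI). The odds ratios below are hypothetical, chosen only to mimic the reported pattern (synergy on the additive scale, near-null on the multiplicative scale).

```python
# Hedged sketch with hypothetical odds ratios: RERI > 0 indicates synergy on
# the additive scale even when the multiplicative interaction is ~null.
or_pm = 1.10        # PM2.5 exposure alone (hypothetical)
or_smoke = 1.50     # smoking alone (hypothetical)
or_joint = 1.75     # both exposures together (hypothetical)

reri = or_joint - or_pm - or_smoke + 1       # additive-scale interaction
mult_ratio = or_joint / (or_pm * or_smoke)   # multiplicative-scale interaction
synergy_additive = reri > 0
```

Here the joint effect exceeds the sum of the separate excess odds (RERI > 0) while the multiplicative ratio sits close to 1, which is exactly the pattern a three-level logistic model with a product term would call "additive but not multiplicative" interaction.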
Gulliver, John; Morley, David; Dunster, Chrissi; McCrea, Adrienne; van Nunen, Erik; Tsai, Ming-Yi; Probst-Hensch, Nicole; Eeftens, Marloes; Imboden, Medea; Ducret-Stich, Regina; Naccarati, Alessio; Galassi, Claudia; Ranzi, Andrea; Nieuwenhuijsen, Mark; Curto, Ariadna; Donaire-Gonzalez, David; Cirach, Marta; Vermeulen, Roel; Vineis, Paolo; Hoek, Gerard; Kelly, Frank J
2018-01-01
Oxidative potential (OP) of particulate matter (PM) is proposed as a biologically relevant exposure metric for studies of air pollution and health. We aimed to evaluate the spatial variability of the OP of measured PM2.5 using ascorbate (AA) and (reduced) glutathione (GSH), and to develop land use regression (LUR) models to explain this spatial variability. We estimated annual average values (m⁻³) of OP-AA and OP-GSH for five areas (Basel, CH; Catalonia, ES; London-Oxford, UK (no OP-GSH); the Netherlands; and Turin, IT) using PM2.5 filters. OP-AA and OP-GSH LUR models were developed using all monitoring sites, separately for each area and for all areas combined. The same variables were then used in repeated sub-sampling of monitoring sites to test the sensitivity of variable selection; new variables were offered where variables were excluded (p > .1). On average, measurements of OP-AA and OP-GSH were moderately correlated (maximum Pearson's R = .7) with PM2.5 and other metrics (PM2.5 absorbance, NO2, Cu, Fe). Hold-out validation (HOV) R² for OP-AA models was .21, .58, .45, .53, and .13 for Basel, Catalonia, London-Oxford, the Netherlands and Turin, respectively. For OP-GSH, the only model achieving at least moderate performance was for the Netherlands (R² = .31). Combined models for OP-AA and OP-GSH were largely explained by study area, with weak local predictors of intra-area contrasts; we therefore do not endorse them for use in epidemiologic studies. Given the moderate correlation of OP-AA with other pollutants, the three reasonably performing LUR models for OP-AA could be used independently of other pollutant metrics in epidemiological studies. Copyright © 2017 Elsevier Inc. All rights reserved.
Prediction of Multiple Infections After Severe Burn Trauma: a Prospective Cohort Study
Yan, Shuangchun; Tsurumi, Amy; Que, Yok-Ai; Ryan, Colleen M.; Bandyopadhaya, Arunava; Morgan, Alexander A.; Flaherty, Patrick J.; Tompkins, Ronald G.; Rahme, Laurence G.
2014-01-01
Objective: To develop predictive models for early triage of burn patients based on hyper-susceptibility to repeated infections. Background: Infection remains a major cause of mortality and morbidity after severe trauma, demanding new strategies to combat infections. Models for infection prediction are lacking. Methods: Secondary analysis of 459 burn patients (≥16 years old) with ≥20% total body surface area burns recruited from six US burn centers. We compared blood transcriptomes, with a 180-h cut-off on the injury-to-transcriptome interval, of 47 patients (≤1 infection episode) to those of 66 hyper-susceptible patients (multiple [≥2] infection episodes [MIE]). We used LASSO regression to select biomarkers and multivariate logistic regression to build models, whose accuracy was assessed by area under the receiver operating characteristic curve (AUROC) and cross-validation. Results: Three predictive models were developed with covariates of: (1) clinical characteristics; (2) expression profiles of 14 genomic probes; (3) a combination of (1) and (2). The genomic and clinical models were highly predictive of MIE status (genomic AUROC = 0.946 [95% CI, 0.906-0.986]; clinical AUROC = 0.864 [CI, 0.794-0.933]; genomic vs. clinical P = 0.044). The combined model had an increased AUROC of 0.967 (CI, 0.940-0.993) compared with the individual models (combined vs. clinical P = 0.0069). Hyper-susceptible patients showed early alterations in immune-related signaling pathways, epigenetic modulation and chromatin remodeling. Conclusions: Early triage of burn patients more susceptible to infections can be made using clinical characteristics and/or genomic signatures. The genomic signature suggests new insights into the pathophysiology of hyper-susceptibility to infection that may lead to novel potential therapeutic or prophylactic targets. PMID:24950278
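The two-step pipeline described in the Methods (LASSO to select biomarkers, then logistic regression scored by AUROC) can be sketched on synthetic expression data. Patient counts, probe counts, and the regularization strength are illustrative assumptions, not the study's values or data.

```python
# Illustrative sketch: L1-penalized selection of a sparse probe panel,
# followed by logistic regression evaluated with AUROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n, p = 113, 50                       # patients x candidate probes (hypothetical)
X = rng.normal(size=(n, p))
logit = X[:, :5] @ np.array([1.5, -1.2, 1.0, 0.8, -0.9])  # 5 informative probes
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# L1 (LASSO-type) penalty shrinks most coefficients exactly to zero
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = np.flatnonzero(lasso.coef_.ravel())

# Refit an ordinary logistic model on the selected probes and score by AUROC
clf = LogisticRegression().fit(X[:, selected], y)
auroc = roc_auc_score(y, clf.predict_proba(X[:, selected])[:, 1])
```

In practice the AUROC should be estimated by cross-validation, as the study does; the in-sample value here is only to show the mechanics of the pipeline.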
NASA Astrophysics Data System (ADS)
Medina, Hanoi; Tian, Di; Srivastava, Puneet; Pelosi, Anna; Chirico, Giovanni B.
2018-07-01
Reference evapotranspiration (ET0) plays a fundamental role in agronomic, forestry, and water resources management. Estimating and forecasting ET0 have long been recognized as a major challenge for researchers and practitioners in these communities. This work explored the potential of multiple leading numerical weather predictions (NWPs) for estimating and forecasting summer ET0 at 101 U.S. Regional Climate Reference Network stations over nine climate regions across the contiguous United States (CONUS). Three leading global NWP model forecasts from the THORPEX Interactive Grand Global Ensemble (TIGGE) dataset were used in this study, including the single-model ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (EC), the National Centers for Environmental Prediction Global Forecast System (NCEP), and the United Kingdom Meteorological Office forecasts (MO), as well as multi-model ensemble forecasts from the combinations of these NWP models. A regression calibration was employed to bias correct the ET0 forecasts. The impact of individual forecast variables on ET0 forecasts was also evaluated. The results showed that the EC forecasts provided the least error and highest skill and reliability, followed by the MO and NCEP forecasts. The multi-model ensembles constructed from the combination of EC and MO forecasts provided slightly better performance than the single-model EC forecasts. The regression process greatly improved ET0 forecast performances, particularly for the regions involving stations near the coast or with a complex orography. The performance of EC forecasts was only slightly influenced by the size of the ensemble, particularly at short lead times. Even with fewer ensemble members, EC still performed better than the other two NWPs. Errors in the radiation forecasts, followed by those in the wind, had the most detrimental effects on the ET0 forecast performances.
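A regression calibration of the kind mentioned above can be sketched as fitting a linear map from raw forecasts to observations on a training period, then applying it to later forecasts. The bias structure and all numbers are hypothetical; the paper's actual calibration may differ in form.

```python
# Minimal sketch of regression-based bias correction of ET0 forecasts
# (hypothetical data; a simple linear calibration obs ~ a*forecast + b).
import numpy as np

rng = np.random.default_rng(6)
obs = rng.gamma(shape=4.0, scale=1.5, size=200)      # "observed" ET0, mm/day
raw = 1.3 * obs - 0.8 + rng.normal(0, 0.4, 200)      # biased NWP-driven forecast

# Fit the calibration on a training period
a, b = np.polyfit(raw[:150], obs[:150], 1)
cal = a * raw[150:] + b                               # corrected test-period forecasts

rmse_raw = np.sqrt(np.mean((raw[150:] - obs[150:]) ** 2))
rmse_cal = np.sqrt(np.mean((cal - obs[150:]) ** 2))
```

Because the raw forecasts carry a systematic multiplicative and additive bias, the calibrated forecasts cut the held-out RMSE substantially, mirroring the improvement the abstract attributes to the regression step.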
Liabeuf, Debora; Sim, Sung-Chur; Francis, David M
2018-03-01
Bacterial spot affects tomato crops (Solanum lycopersicum) grown under humid conditions. Major genes and quantitative trait loci (QTL) for resistance have been described, and multiple loci from diverse sources need to be combined to improve disease control. We investigated genomic selection (GS) prediction models for resistance to Xanthomonas euvesicatoria and experimentally evaluated the accuracy of these models. The training population consisted of 109 families combining resistance from four sources and directionally selected from a population of 1,100 individuals. The families were evaluated on a plot basis in replicated inoculated trials and genotyped with single nucleotide polymorphisms (SNP). We compared the prediction ability of models developed with 14 to 387 SNP. Genomic estimated breeding values (GEBV) were derived using Bayesian least absolute shrinkage and selection operator regression (BL) and ridge regression (RR). Evaluations were based on leave-one-out cross validation and on empirical observations in replicated field trials using the next generation of inbred progeny and a hybrid population resulting from selections in the training population. Prediction ability was evaluated based on correlations between GEBV and phenotypes (r_g), percentage of coselection between genomic and phenotypic selection, and relative efficiency of selection (r_g/r_p). Results were similar with BL and RR models. Models using only markers previously identified as significantly associated with resistance, but weighted based on GEBV, and mixed models treating resistance-associated markers as fixed effects and genome-wide markers as random effects, offered greater accuracy and a high percentage of coselection. The accuracy of these models to predict the performance of progeny and hybrids exceeded the accuracy of phenotypic selection.
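The ridge regression (RR) route to genomic estimated breeding values can be sketched as jointly shrinking all SNP marker effects and scoring prediction ability as the GEBV-phenotype correlation. Population size, marker count, effect sizes, and the shrinkage parameter below are all hypothetical.

```python
# RR-BLUP-style sketch (hypothetical data): ridge regression of phenotype on
# SNP genotypes, producing genomic estimated breeding values (GEBV).
import numpy as np

rng = np.random.default_rng(7)
n, m = 109, 300                                        # families x SNP markers
X = rng.binomial(2, 0.3, size=(n, m)).astype(float)    # 0/1/2 genotype codes
u = rng.normal(0, 0.1, m)                              # many small marker effects
y = X @ u + rng.normal(0, 1.0, n)                      # resistance phenotype

lam = 50.0                                             # shrinkage parameter (assumed)
Xc = X - X.mean(axis=0)                                # center genotypes
beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(m), Xc.T @ (y - y.mean()))
gebv = Xc @ beta                                       # estimated breeding values

r_g = np.corrcoef(gebv, y)[0, 1]   # prediction ability (in-sample here)
```

In the study, r_g is estimated by leave-one-out cross validation and in the next generation; the in-sample correlation here only illustrates how GEBV are formed from shrunken marker effects.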
Tate, Sonya C; Burke, Teresa F; Hartman, Daisy; Kulanthaivel, Palaniappan; Beckmann, Richard P; Cronier, Damien M
2016-01-01
Background: Resistance to BRAF inhibition is a major cause of treatment failure for BRAF-mutated metastatic melanoma patients. Abemaciclib, a cyclin-dependent kinase 4 and 6 inhibitor, overcomes this resistance in xenograft tumours and offers a promising drug combination. The present work aims to characterise the quantitative pharmacology of the abemaciclib/vemurafenib combination using a semimechanistic pharmacokinetic/pharmacodynamic modelling approach and to identify an optimum dosing regimen for potential clinical evaluation. Methods: A PK/biomarker model was developed to connect abemaciclib/vemurafenib concentrations to changes in MAPK and cell cycle pathway biomarkers in A375 BRAF-mutated melanoma xenografts. Resultant tumour growth inhibition was described by relating (i) MAPK pathway inhibition to apoptosis, (ii) mitotic cell density to tumour growth and, under resistant conditions, (iii) retinoblastoma protein inhibition to cell survival. Results: The model successfully described vemurafenib/abemaciclib-mediated changes in MAPK pathway and cell cycle biomarkers. Initial tumour shrinkage by vemurafenib, acquisition of resistance and subsequent abemaciclib-mediated efficacy were successfully captured and externally validated. Model simulations illustrate the benefit of intermittent vemurafenib therapy over continuous treatment, and indicate that continuous abemaciclib in combination with intermittent vemurafenib offers the potential for considerable tumour regression. Conclusions: The quantitative pharmacology of the abemaciclib/vemurafenib combination was successfully characterised and an optimised, clinically-relevant dosing strategy was identified. PMID:26978007
Skorupski, Katherine A.; O'Brien, Thomas G.; Guerrero, Teri; Rodriguez, Carlos O.; Burns, Mark R.
2011-01-01
Polyamines are essential for cell proliferation. Their production is dysregulated in many cancers and polyamine depletion leads to tumor regression in mouse models of SCC. The purpose of this study was to determine the maximally tolerated dose of the polyamine transport inhibitor, MQT 1426, when combined with the ornithine decarboxylase inhibitor, DFMO, and to determine whether this therapy results in reduction in tumor polyamine levels. Thirteen cats with oral SCC received both drugs orally and serial tumor biopsies were obtained for polyamine measurement. Cats were monitored for response to therapy and toxicity. A maximum tolerated dose of MQT 1426 when combined with DFMO was determined. Dose limiting toxicity was vestibular in nature, but was fully reversible. Spermidine and total polyamine levels decreased significantly in tissues, 2 cats experienced objective tumor regression, and 6 cats had stable disease. These results suggest that further study of polyamine depletion therapies is warranted. PMID:22077408
Schwartz, Laurent; Guais, Adeline; Israël, Maurice; Junod, Bernard; Steyaert, Jean-Marc; Crespi, Elisabetta; Baronzio, Gianfranco; Abolhassani, Mohammad
2013-04-01
Cellular metabolic alterations are now well described as implicated in cancer, and some strategies are currently being developed to target these different pathways. In previous papers, we demonstrated that a combination of molecules (namely alpha-lipoic acid and hydroxycitrate, i.e. Metabloc™) targeting the cancer metabolism markedly decreased tumor cell growth in mice. In this work, we demonstrate that the addition of capsaicin further delays tumor growth in mice in a dose-dependent manner. This is true for all three animal models tested: lung cancer (LLC), bladder cancer (MBT-2) and melanoma B16F10. There was no apparent side effect of this ternary combination. The addition of a fourth drug (octreotide) is even more effective, resulting in tumor regression in mice bearing LLC cancer. These four compounds are all known to target cellular metabolism, not its DNA. The efficacy, the apparent lack of toxicity, and the long clinical track records of these medications in human medicine all point toward the need for a clinical trial. The dramatic efficacy of treatment suggests that cancer may simply be a disease of dysregulated cellular metabolism.
An, Lihua; Fung, Karen Y; Krewski, Daniel
2010-09-01
Spontaneous adverse event reporting systems are widely used to identify adverse reactions to drugs following their introduction into the marketplace. In this article, a James-Stein type shrinkage estimation strategy was developed in a Bayesian logistic regression model to analyze pharmacovigilance data. This method is effective in detecting signals as it combines information and borrows strength across medically related adverse events. Computer simulation demonstrated that the shrinkage estimator is uniformly better than the maximum likelihood estimator in terms of mean squared error. This method was used to investigate the possible association of a series of diabetic drugs and the risk of cardiovascular events using data from the Canada Vigilance Online Database.
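The shrinkage idea the simulation study tests, that pulling noisy per-event estimates toward a pooled center lowers mean squared error relative to the raw maximum likelihood estimates, can be sketched with a classical James-Stein estimator. The adverse-event counts and variances are hypothetical, and the full Bayesian logistic regression of the paper is not reproduced.

```python
# Illustrative James-Stein-type shrinkage (hypothetical data): shrink noisy
# per-event log-odds estimates toward their pooled mean and compare MSE.
import numpy as np

rng = np.random.default_rng(8)
# True log-odds signals for 20 medically related adverse events
true = rng.normal(0.5, 0.2, 20)
est = true + rng.normal(0, 0.6, 20)        # noisy maximum-likelihood estimates

k = est.size
s2 = 0.6 ** 2                              # assumed known sampling variance
center = est.mean()
# Positive-part James-Stein shrinkage toward the pooled mean
shrink = max(0.0, 1 - (k - 3) * s2 / np.sum((est - center) ** 2))
js = center + shrink * (est - center)

mse_mle = np.mean((est - true) ** 2)
mse_js = np.mean((js - true) ** 2)
```

Because the events share information (here, a common center), the shrunken estimates achieve uniformly lower risk than the raw estimates, which is the "borrowing strength" property the abstract reports.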
Integrative eQTL analysis of tumor and host omics data in individuals with bladder cancer.
Pineda, Silvia; Van Steen, Kristel; Malats, Núria
2017-09-01
Integrative analyses of several omics data are emerging. The data are usually generated from the same source material (i.e., tumor sample), representing one level of regulation. However, integrating different regulatory levels (i.e., blood) with those from tumor may also reveal important knowledge about the human genetic architecture. To model this multilevel structure, an integrative expression quantitative trait loci (eQTL) analysis applying two-stage regression (2SR) was proposed. This approach first regressed tumor gene expression levels on tumor markers, and the adjusted residuals from that model were then regressed on the germline genotypes measured in blood. Previously, we demonstrated that penalized regression methods in combination with a permutation-based MaxT method (Global-LASSO) are a promising tool to address some of the challenges that high-throughput omics data analysis imposes. Here, we assessed whether Global-LASSO can also be applied when tumor and blood omics data are integrated. We further compared our strategy with two 2SR approaches, one using multiple linear regression (2SR-MLR) and the other using LASSO (2SR-LASSO). We applied the three models to integrate genomic, epigenomic, and transcriptomic data from tumor tissue with blood germline genotypes from 181 individuals with bladder cancer included in the TCGA Consortium. Global-LASSO provided a larger list of eQTLs than the 2SR methods, identified a previously reported eQTL in prostate stem cell antigen (PSCA), and provided further clues on the complexity of the APOBEC3B locus, with a minimal false-positive rate not achieved by 2SR-MLR. It also represents an important contribution to integrative omics analysis because it is easy to apply and adaptable to any type of data. © 2017 WILEY PERIODICALS, INC.
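The two-stage regression (2SR) scheme described above can be sketched directly: stage 1 removes tumor-level signal from expression, stage 2 regresses the residuals on blood genotypes. Sample size matches the abstract; the marker, genotype frequencies, and effect sizes are hypothetical.

```python
# Minimal 2SR sketch (hypothetical data): regress tumor expression on a tumor
# marker, then regress the residuals on germline genotype to estimate an eQTL.
import numpy as np

rng = np.random.default_rng(9)
n = 181
tumor_marker = rng.normal(size=n)                 # e.g. methylation at the locus
genotype = rng.binomial(2, 0.4, n).astype(float)  # germline 0/1/2 allele dosage
expr = 0.7 * tumor_marker + 0.4 * genotype + rng.normal(0, 0.5, n)

# Stage 1: regress tumor expression on tumor-level markers
A = np.column_stack([np.ones(n), tumor_marker])
resid = expr - A @ np.linalg.lstsq(A, expr, rcond=None)[0]

# Stage 2: regress the stage-1 residuals on blood germline genotypes
B = np.column_stack([np.ones(n), genotype])
beta2 = np.linalg.lstsq(B, resid, rcond=None)[0]
eqtl_effect = beta2[1]            # estimated germline eQTL effect on expression
```

Because the tumor marker and the germline genotype are independent here, the stage-1 residuals retain the genotype signal and stage 2 recovers the simulated eQTL effect; Global-LASSO replaces this per-locus scheme with a single penalized model over all predictors.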
An application of robust ridge regression model in the presence of outliers to real data problem
NASA Astrophysics Data System (ADS)
Shariff, N. S. Md.; Ferdaos, N. A.
2017-09-01
Multicollinearity and outliers often lead to inconsistent and unreliable parameter estimates in regression analysis. A well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is believed to be affected by the presence of outliers. The combination of GM-estimation and the ridge parameter, which is robust to both problems, is of interest in this study. As such, both techniques are employed to investigate the relationship between stock market prices and macroeconomic variables in Malaysia, as the data set involves both multicollinearity and outlier problems. Four macroeconomic factors were selected for this study: Consumer Price Index (CPI), Gross Domestic Product (GDP), Base Lending Rate (BLR) and Money Supply (M1). The results demonstrate that the proposed procedure is able to produce reliable results in the presence of multicollinearity and outliers in the real data.
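The ridge half of the proposed procedure can be sketched to show why it is the standard remedy for multicollinearity: with nearly collinear predictors, ordinary least squares coefficients explode in opposite directions, while the ridge penalty stabilizes them. The GM-estimation (outlier-robust) component is omitted, and the macroeconomic variables and numbers are hypothetical.

```python
# Ridge vs OLS under near-collinearity (hypothetical CPI/GDP-style predictors).
import numpy as np

rng = np.random.default_rng(10)
n = 60
cpi = rng.normal(size=n)
gdp = cpi + rng.normal(0, 0.05, n)        # nearly collinear with CPI
X = np.column_stack([cpi, gdp])
y = 1.0 * cpi + 1.0 * gdp + rng.normal(0, 0.5, n)

ols = np.linalg.solve(X.T @ X, X.T @ y)
ridge = np.linalg.solve(X.T @ X + 5.0 * np.eye(2), X.T @ y)  # ridge parameter k=5

# Under collinearity OLS splits the shared effect erratically between the two
# predictors; ridge pulls both coefficients toward the stable joint effect.
spread_ols = abs(ols[0] - ols[1])
spread_ridge = abs(ridge[0] - ridge[1])
```

The instability OLS shows here is exactly what the ridge parameter suppresses; GM-estimation then additionally downweights observations with outlying residuals.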
Esbaugh, A J; Brix, K V; Mager, E M; Grosell, M
2011-09-01
The current study examined the acute toxicity of lead (Pb) to Ceriodaphnia dubia and Pimephales promelas in a variety of natural waters. The natural waters were selected to range in pertinent water chemistry parameters such as calcium, pH, total CO2 and dissolved organic carbon (DOC). Acute toxicity was determined for C. dubia and P. promelas using standard 48-h and 96-h protocols, respectively. For both organisms acute toxicity varied markedly according to water chemistry, with C. dubia LC50s ranging from 29 to 180 μg/L and P. promelas LC50s ranging from 41 to 3598 μg/L. Additionally, no Pb toxicity was observed for P. promelas in three alkaline natural waters. With respect to water chemistry parameters, DOC had the strongest protective impact for both organisms. A multi-linear regression (MLR) approach combining previous lab data and the current data was used to identify the relative importance of individual water chemistry components in predicting acute Pb toxicity for both species. As anticipated, the P. promelas best-fit MLR model combined DOC, calcium and pH. Unexpectedly, in the C. dubia MLR model the importance of pH, total CO2 and calcium was minimal, while DOC and ionic strength were the controlling water quality variables. Adjusted R² values of 0.82 and 0.64 for the P. promelas and C. dubia models, respectively, are comparable to previously developed biotic ligand models for other metals. Copyright © 2011 Elsevier Inc. All rights reserved.
Rimaityte, Ingrida; Ruzgas, Tomas; Denafas, Gintaras; Racys, Viktoras; Martuzevicius, Dainius
2012-01-01
Forecasting of generation of municipal solid waste (MSW) in developing countries is often a challenging task due to the lack of data and the selection of a suitable forecasting method. This article aimed to select and evaluate several methods for MSW forecasting in a medium-scale Eastern European city (Kaunas, Lithuania) with rapidly developing economics, with respect to affluence-related and seasonal impacts. The MSW generation was forecast with respect to the economic activity of the city (regression modelling) and using time series analysis. The modelling based on socio-economic indicators (regression implemented in the LCA-IWM model) showed particular sensitivity (deviation from actual data in the range from 2.2 to 20.6%) to external factors, such as the synergetic effects of affluence parameters or changes in the MSW collection system. For the time series analysis, the combination of autoregressive integrated moving average (ARIMA) and seasonal exponential smoothing (SES) techniques was found to be the most accurate (mean absolute percentage error equal to 6.5). The time series analysis method was very valuable for forecasting the weekly variation of waste generation data (r² > 0.87), but the forecast yearly increase should be verified against the data obtained by regression modelling. The methods and findings of this study may assist the experts, decision-makers and scientists performing forecasts of MSW generation, especially in developing countries.
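One ingredient of the ARIMA-SES combination, exponential smoothing scored by the mean absolute percentage error (MAPE, the accuracy metric quoted above), can be sketched on synthetic weekly waste data. The ARIMA component and seasonal smoothing terms are omitted; the series and smoothing constant are assumptions.

```python
# Minimal sketch: one-step-ahead simple exponential smoothing on hypothetical
# weekly MSW generation, evaluated by MAPE.
import numpy as np

rng = np.random.default_rng(11)
weeks = np.arange(156)                     # three years of weekly data
# Hypothetical MSW generation: trend + annual cycle + noise (tonnes/week)
waste = (500 + 0.5 * weeks + 40 * np.sin(2 * np.pi * weeks / 52)
         + rng.normal(0, 10, 156))

def ses(series, alpha):
    """One-step-ahead simple exponential smoothing forecasts."""
    level, preds = series[0], []
    for x in series[1:]:
        preds.append(level)                 # forecast = current smoothed level
        level = alpha * x + (1 - alpha) * level
    return np.array(preds)

pred = ses(waste, alpha=0.5)
mape = np.mean(np.abs((waste[1:] - pred) / waste[1:])) * 100  # percent error
```

Even this minimal smoother tracks a slowly varying seasonal series to within a few percent, which is the scale of accuracy (MAPE ≈ 6.5) the full ARIMA-SES combination achieved in the study.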
Carbon emissions risk map from deforestation in the tropical Amazon
NASA Astrophysics Data System (ADS)
Ometto, J.; Soler, L. S.; Assis, T. D.; Oliveira, P. V.; Aguiar, A. P.
2011-12-01
This work aims to estimate the carbon emissions from tropical deforestation in the Brazilian Amazon associated with a risk assessment of future land use change. The emissions are estimated by incorporating temporal deforestation dynamics, accounting for the biophysical and socioeconomic heterogeneity in the region, as well as secondary forest regrowth dynamics in abandoned areas. The land cover change model that supported the risk assessment of deforestation was run based on linear regressions. This method takes into account the spatial heterogeneity of deforestation, as the spatial variables adopted to fit the final regression model comprise environmental aspects, economic attractiveness, accessibility and land tenure structure. After fitting a suitable regression model for each land cover category, the potential of each cell to be deforested (25x25 km and 5x5 km resolution) in the near future was used to calculate the risk assessment of land cover change. The carbon emissions model combines high-resolution new forest clear-cut mapping and four alternative sources of spatial information on biomass distribution for different vegetation types. The risk assessment map of CO2 emissions was obtained by crossing the simulation results of the historical land cover changes with a map of aboveground biomass contained in the remaining forest. This final map represents the risk of CO2 emissions at 25x25 km and 5x5 km until 2020, under a scenario of carbon emission reduction targets.
Moyer, Douglas; Hirsch, Robert M.; Hyer, Kenneth
2012-01-01
Nutrient and sediment fluxes and changes in fluxes over time are key indicators that water resource managers can use to assess the progress being made in improving the structure and function of the Chesapeake Bay ecosystem. The U.S. Geological Survey collects annual nutrient (nitrogen and phosphorus) and sediment flux data and computes trends that describe the extent to which water-quality conditions are changing within the major Chesapeake Bay tributaries. Two regression-based approaches were compared for estimating annual nutrient and sediment fluxes and for characterizing how these annual fluxes are changing over time. The two regression models compared are the traditionally used ESTIMATOR and the newly developed Weighted Regression on Time, Discharge, and Season (WRTDS). The model comparison focused on answering three questions: (1) What are the differences between the functional form and construction of each model? (2) Which model produces estimates of flux with the greatest accuracy and least amount of bias? (3) How different would the historical estimates of annual flux be if WRTDS had been used instead of ESTIMATOR? One additional point of comparison between the two models is how each model determines trends in annual flux once the year-to-year variations in discharge have been determined. All comparisons were made using total nitrogen, nitrate, total phosphorus, orthophosphorus, and suspended-sediment concentration data collected at the nine U.S. Geological Survey River Input Monitoring stations located on the Susquehanna, Potomac, James, Rappahannock, Appomattox, Pamunkey, Mattaponi, Patuxent, and Choptank Rivers in the Chesapeake Bay watershed. Two model characteristics that uniquely distinguish ESTIMATOR and WRTDS are the fundamental model form and the determination of model coefficients. 
ESTIMATOR and WRTDS both predict water-quality constituent concentration by developing a linear relation between the natural logarithm of observed constituent concentration and three explanatory variables—the natural log of discharge, time, and season. ESTIMATOR uses two additional explanatory variables—the square of the log of discharge and time-squared. Both models determine coefficients for variables for a series of estimation windows. ESTIMATOR establishes variable coefficients for a series of 9-year moving windows; all observed constituent concentration data within the 9-year window are used to establish each coefficient. Conversely, WRTDS establishes variable coefficients for each combination of discharge and time using only observed concentration data that are similar in time, season, and discharge to the day being estimated. As a result of these distinguishing characteristics, ESTIMATOR reproduces concentration-discharge relations that are closely approximated by a quadratic or linear function with respect to both the log of discharge and time. Conversely, the linear model form of WRTDS coupled with extensive model windowing for each combination of discharge and time allows WRTDS to reproduce observed concentration-discharge relations that are more sinuous in form. Another distinction between ESTIMATOR and WRTDS is the reporting of uncertainty associated with the model estimates of flux and trend. ESTIMATOR quantifies the standard error of prediction associated with the determination of flux and trends. The standard error of prediction enables the determination of the 95-percent confidence intervals for flux and trend as well as the ability to test whether the reported trend is significantly different from zero (where zero equals no trend). Conversely, WRTDS is unable to propagate error through the many (over 5,000) models for unique combinations of flow and time to determine a total standard error. 
As a result, WRTDS flux estimates are not reported with confidence intervals and a level of significance is not determined for flow-normalized fluxes. The differences between ESTIMATOR and WRTDS, with regard to model form and determination of model coefficients, have an influence on the determination of nutrient and sediment fluxes and associated changes in flux over time as a result of management activities. The comparison between the model estimates of flux and trend was made for combinations of five water-quality constituents at nine River Input Monitoring stations. The major findings with regard to nutrient and sediment fluxes are as follows: (1) WRTDS produced estimates of flux for all combinations that were more accurate, based on reduction in root mean squared error, than flux estimates from ESTIMATOR; (2) for 67 percent of the combinations, WRTDS and ESTIMATOR both produced estimates of flux that were minimally biased compared to observed fluxes (flux bias refers to the tendency to over- or under-predict flux observations); however, for 33 percent of the combinations, WRTDS produced estimates of flux that were considerably less biased (by at least 10 percent) than flux estimates from ESTIMATOR; (3) the average percent difference in annual fluxes generated by ESTIMATOR and WRTDS was less than 10 percent for 80 percent of the combinations; and (4) the greatest differences related to flux bias and annual fluxes all occurred for combinations where the pattern in the observed concentration-discharge relation was sinuous (two points of inflection) rather than linear or quadratic (zero or one point of inflection). 
The major findings with regard to trends are as follows: (1) both models produce water-quality trends that have factored in the year-to-year variations in flow; (2) trends in water-quality condition are represented by ESTIMATOR as a trend in flow-adjusted concentration and by WRTDS as a trend in flow-normalized flux; (3) for 67 percent of the combinations with trend estimates, the WRTDS trends in flow-normalized flux are similar in direction and magnitude to the ESTIMATOR trends in flow-adjusted concentration, and for the remaining 33 percent the differences in trend magnitude and direction are related to fundamental differences between concentration and flux; and (4) the majority (85 percent) of the total nitrogen, nitrate, and orthophosphorus combinations exhibited long-term (1985 to 2010) trends in WRTDS flow-normalized flux that indicate improvement or reduction in the associated flux, and the majority (83 percent) of the total phosphorus (from 1985 to 2010) and suspended sediment (from 2001 to 2010) combinations exhibited trends in WRTDS flow-normalized flux that indicate degradation or increases in the flux delivered.
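The WRTDS windowing scheme described above can be sketched as a locally weighted regression: for each estimation point, observations close in time, season, and discharge receive higher weight, and a linear model in log-discharge, time, and annual seasonal terms is fit by weighted least squares. The following is a minimal illustrative sketch; the function names, half-window widths, and tricube weighting are assumptions patterned on published descriptions of WRTDS, not the USGS implementation:

```python
import numpy as np

def tricube(d, h):
    """Tricube weight: 0 outside the half-window h, smoothly peaked at d = 0."""
    w = np.zeros_like(d, dtype=float)
    inside = np.abs(d) < h
    w[inside] = (1.0 - (np.abs(d[inside]) / h) ** 3) ** 3
    return w

def wrtds_predict(t_obs, logq_obs, logc_obs, t0, logq0,
                  h_time=7.0, h_season=0.5, h_logq=2.0):
    """Estimate ln(concentration) at time t0 (decimal years) and log-discharge logq0."""
    # Seasonal distance wraps around the year boundary
    season_d = np.abs(t_obs - t0) % 1.0
    season_d = np.minimum(season_d, 1.0 - season_d)
    # Combined weight: product of time, season, and discharge windows
    w = (tricube(t_obs - t0, h_time)
         * tricube(season_d, h_season)
         * tricube(logq_obs - logq0, h_logq))
    # Linear model in log-discharge, time, and annual seasonal terms
    X = np.column_stack([np.ones_like(t_obs), logq_obs, t_obs,
                         np.sin(2 * np.pi * t_obs), np.cos(2 * np.pi * t_obs)])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * logc_obs))
    x0 = np.array([1.0, logq0, t0, np.sin(2 * np.pi * t0), np.cos(2 * np.pi * t0)])
    return x0 @ beta
```

ESTIMATOR, by contrast, fits a single model per 9-year window with quadratic log-discharge and time terms rather than re-weighting for every combination of discharge and time.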
Zhang, Qiuzhuo; Weng, Chen; Huang, Huiqin; Achal, Varenyam; Wang, Duanchao
2016-01-01
Water hyacinth was used as the substrate for bioethanol production in the present study. A combination of acid pretreatment and enzymatic hydrolysis was the most effective process for sugar production, yielding 402.93 mg reducing sugar under the optimal condition. A regression model was built to optimize the factors of the simultaneous saccharification and fermentation (SSF) process according to the response surface method. The optimized SSF condition was fermentation at 38.87°C for 81.87 h with an inoculum of 6.11 ml yeast, under which the model predicted 1.291 g/L bioethanol. Meanwhile, 1.289 g/L ethanol was produced experimentally, demonstrating the reliability of the presented regression model. The optimization method discussed in the present study, leading to relatively high bioethanol production, could provide a promising use for alien invasive species with high cellulose content. PMID:26779125
NASA Astrophysics Data System (ADS)
Aye, S. A.; Heyns, P. S.
2017-02-01
This paper proposes an optimal Gaussian process regression (GPR) for the prediction of the remaining useful life (RUL) of slow speed bearings, based on a novel degradation assessment index obtained from an acoustic emission signal. The optimal GPR is obtained from an integration or combination of existing simple mean and covariance functions in order to capture the observed trend of the bearing degradation as well as the irregularities in the data. The resulting integrated GPR model provides an excellent fit to the data and improves over the simple GPR models that are based on simple mean and covariance functions. In addition, it achieves a low percentage error in predicting the remaining useful life of slow speed bearings. These findings are robust under varying operating conditions such as loading and speed and can be applied to nonlinear and nonstationary machine response signals, which is useful for effective preventive machine maintenance.
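The idea of building a richer GPR by combining simple covariance functions can be sketched with scikit-learn, where kernels compose by `+` and `*`. This is a generic illustration on synthetic degradation data, not the authors' exact mean and covariance choices:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 80)[:, None]                        # operating time (arbitrary units)
index = 0.05 * t.ravel() ** 2 + 0.1 * rng.standard_normal(80)  # noisy degradation index

# Composite covariance: a smooth RBF trend term plus a white-noise term
# that absorbs the irregularities in the data
kernel = ConstantKernel(1.0) * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, index)

# Extrapolate the index beyond the observed record, with predictive uncertainty
mean, std = gpr.predict(np.array([[12.0]]), return_std=True)
```

In an RUL setting, the extrapolated index and its uncertainty band would then be compared against a failure threshold to estimate the remaining life.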
Patient cost-sharing, socioeconomic status, and children's health care utilization.
Nilsson, Anton; Paul, Alexander
2018-05-01
This paper estimates the effect of cost-sharing on the demand for children's and adolescents' use of medical care. We use a large population-wide registry dataset including detailed information on contacts with the health care system as well as family income. Two different estimation strategies are used: a regression discontinuity design exploiting age thresholds above which fees are charged, and difference-in-differences models exploiting policy changes. We also estimate combined regression discontinuity difference-in-differences models that take into account discontinuities around age thresholds caused by factors other than cost-sharing. We find that when care is free of charge, individuals increase their number of doctor visits by 5-10%. Effects are similar in middle childhood and adolescence, and are driven by those from low-income families. The differences across income groups cannot be explained by other factors that correlate with income, such as maternal education. Copyright © 2018 Elsevier B.V. All rights reserved.
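The regression-discontinuity idea can be illustrated with a small simulation (entirely hypothetical numbers: a fee threshold at age 20 and a true drop of 0.4 visits): utilization is regressed on age centered at the threshold, a fee dummy, and their interaction, and the dummy's coefficient estimates the jump at the cutoff.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
age = rng.uniform(16.0, 24.0, n)                 # running variable
cutoff = 20.0                                    # hypothetical fee threshold
fee = (age >= cutoff).astype(float)
# True data-generating process: fees reduce visits by 0.4 at the threshold
visits = 3.0 + 0.05 * (age - cutoff) - 0.4 * fee + rng.normal(0.0, 0.5, n)

# Piecewise-linear RD regression: intercept, centered age, fee dummy, interaction
X = np.column_stack([np.ones(n), age - cutoff, fee, (age - cutoff) * fee])
beta, *_ = np.linalg.lstsq(X, visits, rcond=None)
rd_effect = beta[2]          # estimated jump in visits at the cutoff
```

The combined RD-DiD design in the paper additionally differences this jump against the jump observed under an earlier policy regime, which removes discontinuities at the age threshold that are unrelated to cost-sharing.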
NASA Astrophysics Data System (ADS)
Peterson, K. T.; Wulamu, A.
2017-12-01
Water, essential to all living organisms, is one of the Earth's most precious resources. Remote sensing offers an ideal approach to monitoring water quality over traditional in-situ techniques, which are highly time and resource consuming. Using a multi-scale approach, data from handheld spectroscopy, UAS-based hyperspectral imaging, and satellite multispectral images were collected in coordination with in-situ water quality samples for two midwestern watersheds. The remote sensing data were modeled and correlated to the in-situ water quality variables, including chlorophyll content (Chl), turbidity, and total dissolved solids (TDS), using Normalized Difference Spectral Indices (NDSI) and Partial Least Squares Regression (PLSR). The results of the study supported the original hypothesis that correlating water quality variables with remotely sensed data benefits greatly from the use of more complex modeling and regression techniques such as PLSR. The final results generated from the PLSR analysis showed much higher R2 values for all variables when compared to NDSI. The combination of NDSI and PLSR analysis also identified key wavelengths for identification that aligned with previous studies' findings. This research displays the advantages of, and the future for, complex modeling and machine learning techniques to improve water quality variable estimation from spectral data.
Prediction of pulmonary hypertension in idiopathic pulmonary fibrosis☆
Zisman, David A.; Ross, David J.; Belperio, John A.; Saggar, Rajan; Lynch, Joseph P.; Ardehali, Abbas; Karlamangla, Arun S.
2007-01-01
Summary Background Reliable, noninvasive approaches to the diagnosis of pulmonary hypertension in idiopathic pulmonary fibrosis are needed. We tested the hypothesis that the forced vital capacity to diffusing capacity ratio and room air resting pulse oximetry may be combined to predict mean pulmonary artery pressure (MPAP) in idiopathic pulmonary fibrosis. Methods Sixty-one idiopathic pulmonary fibrosis patients with available right-heart catheterization were studied. We regressed measured MPAP as a continuous variable on pulse oximetry (SpO2) and percent predicted forced vital capacity (FVC) to percent-predicted diffusing capacity ratio (% FVC/% DLco) in a multivariable linear regression model. Results Linear regression generated the following equation: MPAP = −11.9+0.272 × SpO2+0.0659 × (100−SpO2)2+3.06 × (% FVC/% DLco); adjusted R2 = 0.55, p<0.0001. The sensitivity, specificity, positive predictive and negative predictive value of model-predicted pulmonary hypertension were 71% (95% confidence interval (CI): 50–89%), 81% (95% CI: 68–92%), 71% (95% CI: 51–87%) and 81% (95% CI: 68–94%). Conclusions A pulmonary hypertension predictor based on room air resting pulse oximetry and FVC to diffusing capacity ratio has a relatively high negative predictive value. However, this model will require external validation before it can be used in clinical practice. PMID:17604151
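The reported regression equation can be applied directly. A small sketch follows; the coefficients are exactly those given in the abstract, while the example inputs are arbitrary:

```python
# Predictor from the abstract: MPAP = -11.9 + 0.272*SpO2
#   + 0.0659*(100 - SpO2)^2 + 3.06*(%FVC/%DLco)
def predicted_mpap(spo2, fvc_pct, dlco_pct):
    """Model-predicted mean pulmonary artery pressure (mm Hg)."""
    ratio = fvc_pct / dlco_pct   # percent-predicted FVC over percent-predicted DLco
    return -11.9 + 0.272 * spo2 + 0.0659 * (100.0 - spo2) ** 2 + 3.06 * ratio

# Arbitrary example: SpO2 of 94% with %FVC = 60 and %DLco = 40 (ratio 1.5)
mpap = predicted_mpap(94.0, 60.0, 40.0)
```

Note the quadratic desaturation term: predicted MPAP rises steeply as SpO2 falls below 100%, which is consistent with the model's intent as a noninvasive screen.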
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
NASA Astrophysics Data System (ADS)
Abul Ehsan Bhuiyan, Md; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Quintana-Seguí, Pere; Barella-Ortiz, Anaïs
2018-02-01
This study investigates the use of a nonparametric, tree-based model, quantile regression forests (QRF), for combining multiple global precipitation datasets and characterizing the uncertainty of the combined product. We used the Iberian Peninsula as the study area, with a study period spanning 11 years (2000-2010). Inputs to the QRF model included three satellite precipitation products, CMORPH, PERSIANN, and 3B42 (V7); an atmospheric reanalysis precipitation and air temperature dataset; satellite-derived near-surface daily soil moisture data; and a terrain elevation dataset. We calibrated the QRF model for two seasons and two terrain elevation categories and used it to generate ensembles for these conditions. Evaluation of the combined product was based on a high-resolution, ground-reference precipitation dataset (SAFRAN) available at 5-km, 1-h resolution. Furthermore, to evaluate relative improvements and the overall impact of the combined product on hydrological response, we used the generated ensemble to force a distributed hydrological model (the SURFEX land surface model and the RAPID river routing scheme) and compared its streamflow simulation results with the corresponding simulations from the individual global precipitation and reference datasets. We concluded that the proposed technique could generate realizations that successfully encapsulate the reference precipitation and provide significant improvement in streamflow simulations, with reductions in systematic and random error on the order of 20-99% and 44-88%, respectively, when considering the ensemble mean.
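Quantile regression forests extend an ordinary random forest by keeping the training targets in each leaf, so any quantile of the conditional distribution can be read off rather than only the mean. Below is a simplified, pooled-leaf sketch on synthetic data; it is not the study's configuration, and Meinshausen's original formulation weights observations per tree rather than pooling them:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, (2000, 3))              # stand-ins for precipitation predictors
y = 10.0 * X[:, 0] + rng.normal(0.0, 1.0, 2000)   # toy target: mean 10*x0, unit noise

rf = RandomForestRegressor(n_estimators=100, min_samples_leaf=20,
                           random_state=0).fit(X, y)

def predict_quantiles(rf, X_train, y_train, x_query, qs=(0.05, 0.5, 0.95)):
    """Pool the training targets that share a leaf with the query in each tree."""
    train_leaves = rf.apply(X_train)                       # (n_train, n_trees) leaf ids
    query_leaves = rf.apply(x_query.reshape(1, -1)).ravel()
    pooled = np.concatenate([y_train[train_leaves[:, t] == query_leaves[t]]
                             for t in range(train_leaves.shape[1])])
    return np.quantile(pooled, qs)

lo, med, hi = predict_quantiles(rf, X, y, np.array([0.5, 0.5, 0.5]))
```

The (lo, hi) pair plays the role of the uncertainty band around the combined precipitation estimate; sampling between the quantiles is one way to generate the ensemble realizations used to force the hydrological model.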
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Delle Monache, L.; Alessandrini, S.
2016-12-01
Accuracy of weather forecasts in the Northeast U.S. has become very important in recent years, given the serious and devastating effects of extreme weather events. Despite the use of evolved forecasting tools and techniques strengthened by increased super-computing resources, weather forecasting systems still have limitations in predicting extreme events. In this study, we examine the combination of analog ensemble and Bayesian regression techniques to improve the prediction of storms that have impacted the NE U.S., mostly defined by the occurrence of high wind speeds (i.e. blizzards, winter storms, hurricanes and thunderstorms). The wind speed, wind direction and temperature predicted by two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) are combined using the mentioned techniques, exploring various ways that those variables influence the minimization of the prediction error (systematic and random). This study is focused on retrospective simulations of 146 storms that affected the NE U.S. in the period 2005-2016. In order to evaluate the techniques, a leave-one-out cross-validation procedure was implemented, using the remaining 145 storms as the training dataset in each iteration. The analog ensemble method selects a set of past observations that correspond to the best analogs of the numerical weather prediction and provides a set of ensemble members from the selected observation dataset. The set of ensemble members can then be used in a deterministic or probabilistic way. In the Bayesian regression framework, optimal variances are estimated for the training partition by minimizing the root mean square error and are applied to the out-of-sample storm. The preliminary results indicate a significant improvement in the statistical metrics of 10-m wind speed for the 146 storms using both techniques (20-30% bias and error reduction in all observation-model pairs). 
In this presentation, we discuss the various combinations of atmospheric predictors and techniques and illustrate how the long record of predicted storms is valuable in the improvement of wind speed prediction.
The association between intelligence and lifespan is mostly genetic.
Arden, Rosalind; Luciano, Michelle; Deary, Ian J; Reynolds, Chandra A; Pedersen, Nancy L; Plassman, Brenda L; McGue, Matt; Christensen, Kaare; Visscher, Peter M
2016-02-01
Several studies in the new field of cognitive epidemiology have shown that higher intelligence predicts longer lifespan. This positive correlation might arise from socioeconomic status influencing both intelligence and health; intelligence leading to better health behaviours; and/or some shared genetic factors influencing both intelligence and health. Distinguishing among these hypotheses is crucial for medicine and public health, but can only be accomplished by studying a genetically informative sample. We analysed data from three genetically informative samples containing information on intelligence and mortality: Sample 1, 377 pairs of male veterans from the NAS-NRC US World War II Twin Registry; Sample 2, 246 pairs of twins from the Swedish Twin Registry; and Sample 3, 784 pairs of twins from the Danish Twin Registry. The age at which intelligence was measured differed between the samples. We used three methods of genetic analysis to examine the relationship between intelligence and lifespan: we calculated the proportion of the more intelligent twins who outlived their co-twin; we regressed within-twin-pair lifespan differences on within-twin-pair intelligence differences; and we used the resulting regression coefficients to model the additive genetic covariance. We conducted a meta-analysis of the regression coefficients across the three samples. The combined (and all three individual samples) showed a small positive phenotypic correlation between intelligence and lifespan. In the combined sample observed r = .12 (95% confidence interval .06 to .18). The additive genetic covariance model supported a genetic relationship between intelligence and lifespan. In the combined sample the genetic contribution to the covariance was 95%; in the US study, 84%; in the Swedish study, 86%, and in the Danish study, 85%. 
The finding of common genetic effects between lifespan and intelligence has important implications for public health, and for those interested in the genetics of intelligence, lifespan or inequalities in health outcomes including lifespan. © The Author 2015; Published by Oxford University Press on behalf of the International Epidemiological Association.
Explaining Match Outcome During The Men’s Basketball Tournament at The Olympic Games
Leicht, Anthony S.; Gómez, Miguel A.; Woods, Carl T.
2017-01-01
In preparation for the Olympics, coaches and athletes have only a limited opportunity to interact regularly, so team performance indicators can provide important guidance to coaches for enhanced match success at the elite level. This study examined the relationship between match outcome and team performance indicators during men’s basketball tournaments at the Olympic Games. Twelve team performance indicators were collated from all men’s teams and matches during the basketball tournaments of the 2004-2016 Olympic Games (n = 156). Linear and non-linear analyses examined the relationship between match outcome and team performance indicator characteristics: namely, binary logistic regression and a conditional inference (CI) classification tree. The most parsimonious logistic regression model retained ‘assists’, ‘defensive rebounds’, ‘field-goal percentage’, ‘fouls’, ‘fouls against’, ‘steals’ and ‘turnovers’ (delta AIC <0.01; Akaike weight = 0.28) with a classification accuracy of 85.5%. Conversely, four performance indicators were retained with the CI classification tree, with an average classification accuracy of 81.4%. However, it was the combination of ‘field-goal percentage’ and ‘defensive rebounds’ that provided the greatest probability of winning (93.2%). Match outcome during the men’s basketball tournaments at the Olympic Games was identified by a unique combination of performance indicators. Despite the average model accuracy being marginally higher for the logistic regression analysis, the CI classification tree offered greater practical utility for coaches through its resolution of non-linear phenomena to guide team success. Key points: A unique combination of team performance indicators explained 93.2% of winning observations in men’s basketball at the Olympics. Monitoring of these team performance indicators may provide coaches with the capability to devise multiple game plans or strategies to enhance their likelihood of winning. 
Incorporation of machine learning techniques with team performance indicators may provide a valuable and strategic approach to explain patterns within multivariate datasets in sport science. PMID:29238245
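The logistic-regression component can be sketched with synthetic performance-indicator data; the indicator means, effect sizes, and sample size below are invented for illustration, and the Olympic dataset is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 312                                  # team-level observations (invented size)
fg_pct = rng.normal(45.0, 5.0, n)        # field-goal percentage
def_reb = rng.normal(25.0, 4.0, n)       # defensive rebounds
turnovers = rng.normal(14.0, 3.0, n)

# Invented effect sizes: shooting and rebounding help, turnovers hurt
logit = (0.3 * (fg_pct - 45.0) + 0.2 * (def_reb - 25.0)
         - 0.25 * (turnovers - 14.0))
win = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([fg_pct, def_reb, turnovers])
clf = LogisticRegression(max_iter=1000).fit(X, win)
acc = clf.score(X, win)
```

A conditional inference tree, by contrast, would partition the indicator space with permutation-test-selected splits, which is what lets it express the non-linear field-goal/rebound interaction the study highlights.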
Neuberger, Ulf; Kickingereder, Philipp; Helluy, Xavier; Fischer, Manuel; Bendszus, Martin; Heiland, Sabine
2017-12-01
Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since it is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4 T. Different phantoms, comprising a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved spectroscopy sequence (PRESS) and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of real 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p<0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p<0.001). The regression model with the J-difference editing method, however, had a significantly higher explanatory value than the regression model with the PRESS method (p<0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernible below 2 mM and showed higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG with linear combination of metabolite signals shows high systematic errors, particularly at low 2HG concentration and high concentration of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex post-processing. Copyright © 2017. Published by Elsevier GmbH.
The weighted priors approach for combining expert opinions in logistic regression experiments
Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.
2017-04-24
When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.
Differential Diagnosis of Erythmato-Squamous Diseases Using Classification and Regression Tree.
Maghooli, Keivan; Langarizadeh, Mostafa; Shahmoradi, Leila; Habibi-Koolaee, Mahdi; Jebraeily, Mohamad; Bouraghi, Hamid
2016-10-01
Differential diagnosis of Erythmato-Squamous Diseases (ESD) is a major challenge in the field of dermatology. The ESD diseases are placed into six different classes. Data mining is the process of detecting hidden patterns; in the case of ESD, data mining helps us to predict the diseases, and different algorithms have been developed for this purpose. We aimed to use the Classification and Regression Tree (CART) to predict the differential diagnosis of ESD. We used the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology. For this purpose, the dermatology dataset was obtained from the UCI machine learning repository. The Clementine 12.0 software from IBM was used for modelling. In order to evaluate the model, we calculated the accuracy, sensitivity and specificity of the model. The proposed model had an accuracy of 94.84% (. 24.42) for correct prediction of the ESD disease. Results indicated that using this classifier could be useful, but it is strongly recommended that a combination of machine learning methods could be more useful in terms of prediction of ESD.
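The CART approach itself is straightforward to reproduce with scikit-learn's `DecisionTreeClassifier`. The sketch below uses synthetic stand-in scores shaped like the UCI dermatology data (366 records, 34 ordinal features scored 0-3) rather than the actual dataset, and a simplified three-class label instead of the six ESD classes:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 366                                        # dermatology dataset size
X = rng.integers(0, 4, (n, 34)).astype(float)  # 34 ordinal features scored 0-3
# Synthetic class label driven by two features (stand-in for the ESD classes)
y = (X[:, 0] > 1).astype(int) + (X[:, 5] > 2).astype(int)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
mean_acc = cross_val_score(tree, X, y, cv=5).mean()   # cross-validated accuracy
```

Cross-validated accuracy, together with per-class sensitivity and specificity, mirrors the evaluation protocol the study describes.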
Eutrophication risk assessment in coastal embayments using simple statistical models.
Arhonditsis, G; Eleftheriadou, M; Karydis, M; Tsirtsis, G
2003-09-01
A statistical methodology is proposed for assessing the risk of eutrophication in marine coastal embayments. The procedure followed was the development of regression models relating the levels of chlorophyll a (Chl) with the concentration of the limiting nutrient--usually nitrogen--and the renewal rate of the systems. The method was applied in the Gulf of Gera, Island of Lesvos, Aegean Sea and a surrogate for renewal rate was created using the Canberra metric as a measure of the resemblance between the Gulf and the oligotrophic waters of the open sea in terms of their physical, chemical and biological properties. The Chl-total dissolved nitrogen-renewal rate regression model was the most significant, accounting for 60% of the variation observed in Chl. Predicted distributions of Chl for various combinations of the independent variables, based on Bayesian analysis of the models, enabled comparison of the outcomes of specific scenarios of interest as well as further analysis of the system dynamics. The present statistical approach can be used as a methodological tool for testing the resilience of coastal ecosystems under alternative managerial schemes and levels of exogenous nutrient loading.
Ertas, Gokhan
2018-07-01
The aim was to assess the value of jointly evaluating diffusion tensor imaging (DTI) measures by logistic regression modelling to detect prostate tumors in the high Gleason score (GS) risk group. Fifty tumors imaged using DTI on a 3 T MRI device were analyzed. Regions of interest focusing on the center of tumor foci and noncancerous tissue on the maps of mean diffusivity (MD) and fractional anisotropy (FA) were used to extract the minimum, the maximum and the mean measures. A measure ratio was computed by dividing the tumor measure by the noncancerous tissue measure. Logistic regression models were fitted for all possible pair combinations of the measures using 5-fold cross validation. Systematic differences are present for all MD measures and also for all FA measures in distinguishing the high risk tumors [GS ≥ 7(4 + 3)] from the low risk tumors [GS ≤ 7(3 + 4)] (P < 0.05). A smaller value for MD measures and a larger value for FA measures indicate high risk. The models using the measures achieve good fits and good classification performances (adjusted R² = 0.55-0.60, AUC = 0.88-0.91); however, the models using the measure ratios perform better (adjusted R² = 0.59-0.75, AUC = 0.88-0.95). The model that employs the ratios of minimum MD and maximum FA accomplishes the highest sensitivity, specificity and accuracy (Se = 77.8%, Sp = 96.9% and Acc = 90.0%). Joint evaluation of MD and FA diffusion tensor imaging measures is valuable for detecting high GS risk group peripheral zone prostate tumors. However, use of the ratios of the measures improves the accuracy of the detections substantially. Logistic regression modelling provides a favorable solution for the joint evaluations, easily adoptable in clinical practice. Copyright © 2018 Elsevier Inc. All rights reserved.
The As-Cu-Ni System: A Chemical Thermodynamic Model for Ancient Recycling
NASA Astrophysics Data System (ADS)
Sabatini, Benjamin J.
2015-12-01
This article is the first thermodynamically reasoned ancient metal system assessment intended for use by archaeologists and archaeometallurgists to aid in the interpretation of remelted/recycled copper alloys composed of arsenic and copper, and arsenic, copper, and nickel. These models are meant to fulfill two main purposes: first, to be applied toward the identification of progressive and regressive temporal changes in artifact chemistry that would have occurred due to recycling, and second, to provide thermodynamic insight into why such metal combinations existed in antiquity. Built on well-established thermodynamics, these models were created using a combination of custom-written software and published binary thermodynamic systems data adjusted to within the boundary conditions of 1200°C and 1 atm. Using these parameters, the behavior of each element and their likelihood of loss in the binaries As-Cu, As-Ni, Cu-Ni, and ternary As-Cu-Ni, systems, under assumed ancient furnace conditions, was determined.
Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model
NASA Astrophysics Data System (ADS)
Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi
2017-09-01
Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten the health of people, so fast and sensitive detection techniques for the residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used for the quantitative determination of AO by combining it with an improved partial least-squares regression (PLSR) model in this paper. Absorbance of herbal samples with different concentrations was obtained by THz time-domain spectroscopy (THz-TDS) in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences of different concentrations, provided a clear criterion for input interval selection, and improved the accuracy of the detection result. The experimental result indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
Takagi, Daisuke; Kondo, Naoki; Takada, Misato; Hashimoto, Hideki
2014-11-19
Previous studies have reported that spousal non-smoking has a spillover effect on the partner's cessation. However, discussion is lacking on the factors modifying that association. We examined whether the spillover effect of spousal non-smoking was associated with the couple's educational attainment. We used paired marital data from the Japanese Study on Stratification, Health, Income, and Neighborhood (J-SHINE), which targeted residents aged 25-50 years in four Japanese municipalities. We selected a spouse smoker at the time of marriage (target respondent), and set his/her smoking status change (continued or quit smoking after marriage) as an outcome, regressed on the counterpart's smoking status (continued smoking or non-smoking) and combinations of each couple's educational attainment as explanatory variables using log-binomial regression models (n = 1001 targets; 708 men and 293 women). Regression results showed that a counterpart who previously quit smoking or was a never-smoker was associated with the target male spouse's subsequent cessation. However, for women, the association between husband's non-smoking and their own cessation was significant only for couples in which both spouses were highly educated. Our findings suggest that a spouse's smoking status is important for smoking cessation interventions in men. For women, however, a couple's combined educational attainment may matter in the interventions.
Li, Xiaomeng; Fang, Dansi; Cong, Xiaodong; Cao, Gang; Cai, Hao; Cai, Baochang
2012-12-01
A method is described using rapid and sensitive Fourier transform near-infrared spectroscopy combined with high-performance liquid chromatography-diode array detection for the simultaneous identification and determination of four bioactive compounds in crude Radix Scrophulariae samples. Partial least squares regression was selected as the analysis type, and multiplicative scatter correction, second derivative, and Savitzky-Golay filtering were adopted for spectral pretreatment. The correlation coefficients (R) of the calibration models were above 0.96 and the root mean square errors of prediction were under 0.028. The developed models were applied to unknown samples with satisfactory results. The established method was validated and can be applied to the intrinsic quality control of crude Radix Scrophulariae.
Concentration-response of short-term ozone exposure and hospital admissions for asthma in Texas.
Zu, Ke; Liu, Xiaobin; Shi, Liuhua; Tao, Ge; Loftus, Christine T; Lange, Sabine; Goodman, Julie E
2017-07-01
Short-term exposure to ozone has been associated with asthma hospital admissions (HA) and emergency department (ED) visits, but the shape of the concentration-response (C-R) curve is unclear. We conducted a time series analysis of asthma HAs and ambient ozone concentrations in six metropolitan areas in Texas from 2001 to 2013. Using generalized linear regression models, we estimated the effect of daily 8-hour maximum ozone concentrations on asthma HAs for all ages combined, and for those aged 5-14, 15-64, and 65+ years. We fit penalized regression splines to evaluate the shape of the C-R curves. Using a log-linear model, estimated risk per 10 ppb increase in average daily 8-hour maximum ozone concentrations was highest for children (relative risk [RR]=1.047, 95% confidence interval [CI]: 1.025-1.069), lower for younger adults (RR=1.018, 95% CI: 1.005-1.032), and null for older adults (RR=1.002, 95% CI: 0.981-1.023). However, penalized spline models demonstrated significant nonlinear C-R relationships for all ages combined, children, and younger adults, indicating the existence of thresholds. We did not observe an increased risk of asthma HAs until average daily 8-hour maximum ozone concentrations exceeded approximately 40 ppb. Ozone and asthma HAs are significantly associated with each other; susceptibility to ozone is age-dependent, with children at highest risk. C-R relationships between average daily 8-hour maximum ozone concentrations and asthma HAs are significantly curvilinear for all ages combined, children, and younger adults. These nonlinear relationships, as well as the lack of relationship between average daily 8-hour maximum and peak ozone concentrations, have important implications for assessing risks to human health in regulatory settings. Copyright © 2017. Published by Elsevier Ltd.
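The log-linear risk estimate reported above (an RR per 10 ppb increment) can be reproduced on simulated data. The coefficient, baseline rate, sample size, and ozone range below are hypothetical, not the study's:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)

# Simulated daily 8-hour maximum ozone (ppb) and asthma admission counts
# under a log-linear (Poisson, log-link) model; the true slope of 0.004
# per ppb corresponds to RR = exp(0.04) ~ 1.041 per 10 ppb, close to the
# children's estimate in the abstract.
ozone = rng.uniform(10, 80, size=2000)
admissions = rng.poisson(np.exp(0.5 + 0.004 * ozone))

# Unregularized Poisson GLM, then convert the fitted slope to a risk
# ratio per 10 ppb, mirroring how the abstract reports effects.
model = PoissonRegressor(alpha=0.0, max_iter=1000)
model.fit(ozone.reshape(-1, 1), admissions)
rr_per_10ppb = float(np.exp(model.coef_[0] * 10.0))
```

The penalized-spline analysis that revealed the ~40 ppb threshold is a separate, more flexible fit and is not shown here.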
Montes, Alejandro; Pazos, Gustavo
2016-02-01
Identifying children at risk of failing the National Developmental Screening Test by combining prevalences of children suspected of having inapparent developmental disorders (IDDs) and associated risk factors (RFs) would make it possible to save resources. 1. To estimate the prevalence of children suspected of having IDDs. 2. To identify associated RFs. 3. To assess three methods developed based on observed RFs and propose a pre-screening procedure. The National Developmental Screening Test was administered to 60 randomly selected children aged between 2 and 4 years old from a socioeconomically disadvantaged area of Puerto Madryn. Twenty-four biological and socioenvironmental outcome measures were assessed in order to identify potential RFs using bivariate and multivariate analyses. The likelihood of failing the screening test was estimated as follows: 1. a multivariate logistic regression model was developed; 2. a relationship was established between the number of RFs present in each child and the percentage of children who failed the test; 3. these two methods were combined. The prevalence of children suspected of having IDDs was 55.0% (95% confidence interval: 42.4%-67.6%). Six RFs were initially identified using the bivariate approach. Three of them (maternal education, number of health checkups and Z scores for height-for-age, and maternal age) were included in the logistic regression model, which has a greater explanatory power. The third method included in the assessment showed greater sensitivity and specificity (85% and 79%, respectively). The estimated prevalence of children suspected of having IDDs was four times higher than the national standards. Seven RFs were identified. Combining the analysis of risk factor accumulation and a multivariate model provides a firm basis for developing a sensitive, specific and practical pre-screening procedure for socioeconomically disadvantaged areas. Sociedad Argentina de Pediatría.
Mi, Jia; Li, Jie; Zhang, Qinglu; Wang, Xing; Liu, Hongyu; Cao, Yanlu; Liu, Xiaoyan; Sun, Xiao; Shang, Mengmeng; Liu, Qing
2016-01-01
The purpose of the study was to establish a mathematical model for correlating the combination of ultrasonography and noncontrast helical computerized tomography (NCHCT) with the total energy of Holmium laser lithotripsy. In this study, from March 2013 to February 2014, 180 patients with single urinary calculus were examined using ultrasonography and NCHCT before Holmium laser lithotripsy. The calculus location and size, acoustic shadowing (AS) level, twinkling artifact intensity (TAI), and CT value were all documented. The total energy of lithotripsy (TEL) and the calculus composition were also recorded postoperatively. Data were analyzed using Spearman's rank correlation coefficient, with the SPSS 17.0 software package. Multiple linear regression was also used for further statistical analysis. A significant difference in the TEL was observed between renal calculi and ureteral calculi (r = –0.565, P < 0.001), and there was a strong correlation between the calculus size and the TEL (r = 0.675, P < 0.001). The difference in the TEL between the calculi with and without AS was highly significant (r = 0.325, P < 0.001). The CT value of the calculi was significantly correlated with the TEL (r = 0.386, P < 0.001). A correlation between the TAI and TEL was also observed (r = 0.391, P < 0.001). Multiple linear regression analysis revealed that the location, size, and TAI of the calculi were related to the TEL, and the location and size were statistically significant predictors (adjusted r2 = 0.498, P < 0.001). A mathematical model correlating the combination of ultrasonography and NCHCT with TEL was established; this model may provide a foundation to guide the use of energy in Holmium laser lithotripsy. The TEL can be estimated by the location, size, and TAI of the calculus. PMID:27930563
2013-01-01
Background Cognitive complaints are reported frequently after breast cancer treatments. Their association with neuropsychological (NP) test performance is not well-established. Methods Early-stage, posttreatment breast cancer patients were enrolled in a prospective, longitudinal, cohort study prior to starting endocrine therapy. Evaluation included an NP test battery and self-report questionnaires assessing symptoms, including cognitive complaints. Multivariable regression models assessed associations among cognitive complaints, mood, treatment exposures, and NP test performance. Results One hundred eighty-nine breast cancer patients, aged 21–65 years, completed the evaluation; 23.3% endorsed higher memory complaints and 19.0% reported higher executive function complaints (>1 SD above the mean for healthy control sample). Regression modeling demonstrated a statistically significant association of higher memory complaints with combined chemotherapy and radiation treatments (P = .01), poorer NP verbal memory performance (P = .02), and higher depressive symptoms (P < .001), controlling for age and IQ. For executive functioning complaints, multivariable modeling controlling for age, IQ, and other confounds demonstrated statistically significant associations with better NP visual memory performance (P = .03) and higher depressive symptoms (P < .001), whereas combined chemotherapy and radiation treatment (P = .05) approached statistical significance. Conclusions About one in five post–adjuvant treatment breast cancer patients had elevated memory and/or executive function complaints that were statistically significantly associated with domain-specific NP test performances and depressive symptoms; combined chemotherapy and radiation treatment was also statistically significantly associated with memory complaints. 
These results and other emerging studies suggest that subjective cognitive complaints in part reflect objective NP performance, although their etiology and biology appear to be multifactorial, motivating further transdisciplinary research. PMID:23606729
Noizet, Odile; Leclerc, Francis; Sadik, Ahmed; Grandbastien, Bruno; Riou, Yvon; Dorkenoo, Aimée; Fourier, Catherine; Cremer, Robin; Leteurtre, Stephane
2005-01-01
Introduction We conducted the present study to determine whether a combination of the mechanical ventilation weaning predictors proposed by the collective Task Force of the American College of Chest Physicians (TF) and weaning endurance indices enhance prediction of weaning success. Method Conducted in a tertiary paediatric intensive care unit at a university hospital, this prospective study included 54 children receiving mechanical ventilation (≥6 hours) who underwent 57 episodes of weaning. We calculated the indices proposed by the TF (spontaneous respiratory rate, paediatric rapid shallow breathing, rapid shallow breathing occlusion pressure [ROP] and maximal inspiratory pressure during an occlusion test [Pimax]) and weaning endurance indices (pressure-time index, tension-time index obtained from P0.1 [TTI1] and from airway pressure [TTI2]) during spontaneous breathing. Performances of each TF index and combinations of them were calculated, and the best single index and combination were identified. Weaning endurance parameters (TTI1 and TTI2) were calculated and the best index was determined using a logistic regression model. Regression coefficients were estimated using the maximum likelihood ratio (LR) method. Hosmer–Lemeshow test was used to estimate goodness-of-fit of the model. An equation was constructed to predict weaning success. Finally, we calculated the performances of combinations of best TF indices and best endurance index. Results The best single TF index was ROP, the best TF combination was represented by the expression (0.66 × ROP) + (0.34 × Pimax), and the best endurance index was the TTI2, although their performance was poor. The best model resulting from the combination of these indices was defined by the following expression: (0.6 × ROP) – (0.1 × Pimax) + (0.5 × TTI2). This integrated index was a good weaning predictor (P < 0.01), with a LR+ of 6.4 and LR+/LR- ratio of 12.5. 
However, at a threshold value <1.3 it was only predictive of weaning success (LR- = 0.5). Conclusion The proposed combined index, incorporating endurance, was of modest value in predicting weaning outcome. This is the first report of the value of endurance parameters in predicting weaning success in children. Currently, clinical judgement associated with spontaneous breathing trials apparently remains superior. PMID:16356229
Unsteady hovering wake parameters identified from dynamic model tests, part 1
NASA Technical Reports Server (NTRS)
Hohenemser, K. H.; Crews, S. T.
1977-01-01
The development is reported of a 4-bladed model rotor that can be excited with a simple eccentric mechanism in progressing and regressing modes with either harmonic or transient inputs. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.
High-Dimensional Sparse Factor Modeling: Applications in Gene Expression Genomics
Carvalho, Carlos M.; Chang, Jeffrey; Lucas, Joseph E.; Nevins, Joseph R.; Wang, Quanli; West, Mike
2010-01-01
We describe studies in molecular profiling and biological pathway analysis that use sparse latent factor and regression models for microarray gene expression data. We discuss breast cancer applications and key aspects of the modeling and computational methodology. Our case studies aim to investigate and characterize heterogeneity of structure related to specific oncogenic pathways, as well as links between aggregate patterns in gene expression profiles and clinical biomarkers. Based on the metaphor of statistically derived “factors” as representing biological “subpathway” structure, we explore the decomposition of fitted sparse factor models into pathway subcomponents and investigate how these components overlay multiple aspects of known biological activity. Our methodology is based on sparsity modeling of multivariate regression, ANOVA, and latent factor models, as well as a class of models that combines all components. Hierarchical sparsity priors address questions of dimension reduction and multiple comparisons, as well as scalability of the methodology. The models include practically relevant non-Gaussian/nonparametric components for latent structure, underlying often quite complex non-Gaussianity in multivariate expression patterns. Model search and fitting are addressed through stochastic simulation and evolutionary stochastic search methods that are exemplified in the oncogenic pathway studies. Supplementary supporting material provides more details of the applications, as well as examples of the use of freely available software tools for implementing the methodology. PMID:21218139
NASA Astrophysics Data System (ADS)
Sondkar, Pravin B.
The severity of combined aerodynamics and power transmission response in high-speed, high power density systems such as a rotorcraft is still a major cause of annoyance in spite of recent advancement in passive, semi-active and active control. With further increase in the capacity and power of this class of machinery systems, the acoustic noise levels are expected to increase even more. To achieve further improvements in sound quality, a more refined understanding of the factors and attributes controlling human perception is needed. In the case of rotorcraft systems, the perceived quality of the interior sound field is a major determining factor of passenger comfort. Traditionally, this sound quality factor is determined by measuring the response of a chosen set of juries who are asked to compare their qualitative reactions to two or more sounds based on their subjective impressions. This type of testing is very time-consuming, costly, often inconsistent, and not useful for practical design purposes. Furthermore, there is no known universal model for sound quality. The primary aim of this research is to achieve significant improvements in quantifying the sound quality of combined aerodynamic and power transmission response in high-speed, high power density machinery systems such as a rotorcraft by applying relevant objective measures related to the spectral characteristics of the sound field. Two models have been proposed in this dissertation research. First, a classical multivariate regression analysis model based on currently known sound quality metrics as well as some new metrics derived in this study is presented. Even though the analysis resulted in the best possible multivariate model as a measure of the acoustic noise quality, it lacks incorporation of a human judgment mechanism. The regression model can change depending on specific application, nature of the sounds and types of juries used in the study.
Also, it predicts only the averaged preference scores and does not explain why two jury members differ in their judgment. To address the above shortcoming of applying regression analysis, a new human judgment model is proposed to further improve the ability to predict the degree of subjective annoyance. The human judgment model involves extraction of subjective attributes and their values using a proposed artificial jury processor. In this approach, a set of ear transfer functions are employed to compute the characteristics of sound pressure waves as perceived subjectively by humans. The resulting basilar membrane displacement data from this proposed model is then applied to analyze the attribute values. Using this proposed human judgment model, the human judgment mechanism, which is highly sophisticated, will be examined. Since the human judgment model is essentially based on jury attributes that are not expected to change significantly with application or nature of the sound field, it gives a more common basis to evaluate sound quality. This model also attempts to explain the inter-juror differences in opinion, which is critical in understanding the variability in human response.
Edwards, Rufus D; Smith, Kirk R; Zhang, Junfeng; Ma, Yuqing
2003-01-01
Residential energy use in developing countries has traditionally been associated with combustion devices of poor energy efficiency, which have been shown to produce substantial health-damaging pollution, contributing significantly to the global burden of disease, and greenhouse gas (GHG) emissions. Precision of these estimates in China has been hampered by limited data on stove use and fuel consumption in residences. In addition, limited information is available on variability of emissions of pollutants from different stove/fuel combinations in typical use, as measurement of emission factors requires measurement of multiple chemical species in complex burn cycle tests. Such measurements are too costly and time consuming for application in conjunction with national surveys. Emissions of most of the major health-damaging pollutants (HDP) and many of the gases that contribute to GHG emissions from cooking stoves are the result of the significant portion of fuel carbon that is diverted to products of incomplete combustion (PIC) as a result of poor combustion efficiencies. The approximately linear increase in emissions of PIC with decreasing combustion efficiencies allows development of linear models to predict emissions of GHG and HDP intrinsically linked to CO2 and PIC production, and ultimately allows the prediction of global warming contributions from residential stove emissions. A comprehensive emissions database of three burn cycles of 23 typical fuel/stove combinations tested in a simulated village house in China has been used to develop models to predict emissions of HDP and global warming commitment (GWC) from cooking stoves in China that rely on simple survey information on stove and fuel use that may be incorporated into national surveys. Stepwise regression models predicted 66% of the variance in global warming commitment (CO2, CO, CH4, NOx, TNMHC) per 1 MJ delivered energy due to emissions from these stoves if survey information on fuel type was available.
Subsequently if stove type is known, stepwise regression models predicted 73% of the variance. Integrated assessment of policies to change stove or fuel type requires that implications for environmental impacts, energy efficiency, global warming and human exposures to HDP emissions can be evaluated. Frequently, this involves measurement of TSP or CO as the major HDPs. Incorporation of this information into models to predict GWC predicted 79% and 78% of the variance respectively. Clearly, however, the complexity of making multiple measurements in conjunction with a national survey would be both expensive and time consuming. Thus, models to predict HDP using simple survey information, and with measurement of either CO/CO2 or TSP/CO2 to predict emission factors for the other HDP have been derived. Stepwise regression models predicted 65% of the variance in emissions of total suspended particulate as grams of carbon (TSPC) per 1 MJ delivered if survey information on fuel and stove type was available and 74% if the CO/CO2 ratio was measured. Similarly stepwise regression models predicted 76% of the variance in COC emissions per MJ delivered with survey information on stove and fuel type and 85% if the TSPC/CO2 ratio was measured. Ultimately, with international agreements on emissions trading frameworks, similar models based on extensive databases of the fate of fuel carbon during combustion from representative household stoves would provide a mechanism for computing greenhouse credits in the residential sector as part of clean development mechanism frameworks and monitoring compliance to control regimes.
Zhou, Kehua; Schenk, Ronald; Brogan, Michael S
2016-12-01
Electric stimulation (E-stim) has been found to be an effective treatment in improving wound healing rates. However, the wound healing trajectory and its related predictors for complete wound closure (CWC) have not been reported with E-stim treatment. This was a retrospective study. Data on 159 patients treated at an outpatient wound clinic utilizing combined intervention of E-stim and conventional care were included. The Kaplan-Meier healing curve together with linear regression models depicted the percentage of patients with CWC against time. With 100, 112 and 140 days of treatment, the percentages of patients with CWC were 59.12%, 61.01% and 65.41%, respectively. Linear regression models predicted that all patients would achieve CWC by 21.55, 22.26 and 24.80 weeks, respectively. The speed for the increase in the number and percentage of patients with CWC peaked between 50-75 days of treatment. To optimize timely healing, referral to other treatment facilities or change of treatment protocol is warranted around the peak time. With the combined intervention of E-stim and conventional care, positive predictors for CWC included a shorter wound duration at initial evaluation (P = 0.005, OR = 3.10), better compliance with appointments (P = 0.007, OR = 3.38) and the diagnosis of venous leg ulcer (P = 0.001, OR = 3.88). This study provided preliminary data on wound healing trajectory and predictors with combined E-stim and conventional care. E-stim seemed to expedite wound healing; however, further research studies are needed. © 2016 Stichting European Society for Clinical Investigation Journal Foundation.
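The healing-curve construction described here, a Kaplan-Meier estimate of time to complete wound closure, can be sketched with made-up follow-up data; the times and censoring flags below are illustrative, not the study's:

```python
import numpy as np

# Days to complete wound closure for a small illustrative cohort;
# event = 1 means the wound closed, event = 0 means follow-up ended
# (censored) before closure.
times = np.array([30, 45, 45, 60, 80, 100, 120, 140, 150, 160])
event = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 0])

# Kaplan-Meier: at each closure event, multiply the running "still open"
# probability by the fraction of at-risk wounds that did not close at
# that time; censored wounds only shrink the risk set.
order = np.argsort(times, kind="stable")
survival = 1.0
at_risk = len(times)
curve = []
for t, e in zip(times[order], event[order]):
    if e == 1:
        survival *= (at_risk - 1) / at_risk
    curve.append((int(t), survival))
    at_risk -= 1

pct_closed_by_end = 1.0 - survival  # fraction achieving CWC by day 160
```

The study's percentage-closed-versus-time trajectory is the complement of this survival curve; the linear extrapolation to 100% closure is a separate regression step not shown here.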
Comparison of methods for estimating flood magnitudes on small streams in Georgia
Hess, Glen W.; Price, McGlone
1989-01-01
The U.S. Geological Survey has collected flood data for small, natural streams at many sites throughout Georgia during the past 20 years. Flood-frequency relations were developed for these data using four methods: (1) observed (log-Pearson Type III analysis) data, (2) rainfall-runoff model, (3) regional regression equations, and (4) map-model combination. The results of the latter three methods were compared to the analyses of the observed data in order to quantify the differences in the methods and determine if the differences are statistically significant.
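Method (1), the log-Pearson Type III analysis of observed annual peaks, can be sketched by the standard method-of-moments fit on log10 flows. The flow record below is synthetic, and this is a bare-bones illustration (no regional skew weighting or low-outlier screening as in a full Bulletin 17B analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic annual peak flows (cfs) for a small stream; a real analysis
# would use the site's gauged record.
peaks = 10 ** rng.normal(loc=3.0, scale=0.25, size=30)

# Log-Pearson Type III by the method of moments: scipy's pearson3 is
# parameterized so that loc, scale, and the shape argument are exactly
# the mean, standard deviation, and skew of the (log10) data. The
# 100-year flood is the 1% annual-exceedance quantile, i.e. ppf(0.99).
logs = np.log10(peaks)
g = stats.skew(logs, bias=False)
q100 = float(10 ** stats.pearson3.ppf(0.99, g, loc=logs.mean(),
                                      scale=logs.std(ddof=1)))
```

The other three methods compared in the report (rainfall-runoff model, regional regression equations, and the map-model combination) are calibrated procedures and are not reproduced here.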
Zimmerman, Tammy M.
2006-01-01
The Lake Erie shoreline in Pennsylvania spans nearly 40 miles and is a valuable recreational resource for Erie County. Nearly 7 miles of the Lake Erie shoreline lies within Presque Isle State Park in Erie, Pa. Concentrations of Escherichia coli (E. coli) bacteria at permitted Presque Isle beaches occasionally exceed the single-sample bathing-water standard, resulting in unsafe swimming conditions and closure of the beaches. E. coli concentrations and other water-quality and environmental data collected at Presque Isle Beach 2 during the 2004 and 2005 recreational seasons were used to develop models using tobit regression analyses to predict E. coli concentrations. All variables statistically related to E. coli concentrations were included in the initial regression analyses, and after several iterations, only those explanatory variables that made the models significantly better at predicting E. coli concentrations were included in the final models. Regression models were developed using data from 2004, 2005, and the combined 2-year dataset. Variables in the 2004 model and the combined 2004-2005 model were log10 turbidity, rain weight, wave height (calculated), and wind direction. Variables in the 2005 model were log10 turbidity and wind direction. Explanatory variables not included in the final models were water temperature, streamflow, wind speed, and current speed; model results indicated these variables did not meet significance criteria at the 95-percent confidence level (probabilities were greater than 0.05). The predicted E. coli concentrations produced by the models were used to develop probabilities that concentrations would exceed the single-sample bathing-water standard for E. coli of 235 colonies per 100 milliliters. 
Analysis of the exceedance probabilities helped determine a threshold probability for each model, chosen such that the number of correctly identified exceedances and nonexceedances was maximized and the number of false positives and false negatives was minimized. Future samples with computed exceedance probabilities higher than the selected threshold probability, as determined by the model, will likely exceed the E. coli standard and a beach advisory or closing may need to be issued; computed exceedance probabilities lower than the threshold probability will likely indicate the standard will not be exceeded. Additional data collected each year can be used to test and possibly improve the model. This study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to issue beach advisories or closings.
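The threshold-selection step can be sketched directly: scan candidate probability cutoffs and keep the one that maximizes correct exceedance and nonexceedance calls. The predicted probabilities and outcomes below are simulated stand-ins for the tobit-model output, not the Presque Isle data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical model output: predicted exceedance probabilities for 200
# samples, with observed outcomes (1 = E. coli above 235 col/100 mL)
# generated to be consistent with those probabilities.
probs = rng.uniform(0.0, 1.0, size=200)
observed = (rng.uniform(0.0, 1.0, size=200) < probs).astype(int)

# Scan candidate cutoffs and keep the one with the most correct calls
# (true exceedances plus true nonexceedances).
candidates = np.linspace(0.05, 0.95, 19)
correct = np.array([np.sum((probs >= t).astype(int) == observed)
                    for t in candidates])
best_threshold = float(candidates[int(np.argmax(correct))])
```

In practice the cutoff would be tuned with asymmetric costs, since a missed exceedance (false negative) is a public-health failure while a false positive only closes a safe beach.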
Can Emotional and Behavioral Dysregulation in Youth Be Decoded from Functional Neuroimaging?
Portugal, Liana C L; Rosa, Maria João; Rao, Anil; Bebko, Genna; Bertocci, Michele A; Hinze, Amanda K; Bonar, Lisa; Almeida, Jorge R C; Perlman, Susan B; Versace, Amelia; Schirda, Claudiu; Travis, Michael; Gill, Mary Kay; Demeter, Christine; Diwadkar, Vaibhav A; Ciuffetelli, Gary; Rodriguez, Eric; Forbes, Erika E; Sunshine, Jeffrey L; Holland, Scott K; Kowatch, Robert A; Birmaher, Boris; Axelson, David; Horwitz, Sarah M; Arnold, Eugene L; Fristad, Mary A; Youngstrom, Eric A; Findling, Robert L; Pereira, Mirtes; Oliveira, Leticia; Phillips, Mary L; Mourao-Miranda, Janaina
2016-01-01
High comorbidity among pediatric disorders characterized by behavioral and emotional dysregulation poses problems for diagnosis and treatment, and suggests that these disorders may be better conceptualized as dimensions of abnormal behaviors. Furthermore, identifying neuroimaging biomarkers related to dimensional measures of behavior may provide targets to guide individualized treatment. We aimed to use functional neuroimaging and pattern regression techniques to determine whether patterns of brain activity could accurately decode individual-level severity on a dimensional scale measuring behavioural and emotional dysregulation at two different time points. A sample of fifty-seven youth (mean age: 14.5 years; 32 males) was selected from a multi-site study of youth with parent-reported behavioral and emotional dysregulation. Participants performed a block-design reward paradigm during functional Magnetic Resonance Imaging (fMRI). Pattern regression analyses consisted of Relevance Vector Regression (RVR) and two cross-validation strategies implemented in the Pattern Recognition for Neuroimaging toolbox (PRoNTo). Medication was treated as a binary confounding variable. Decoded and actual clinical scores were compared using Pearson's correlation coefficient (r) and mean squared error (MSE) to evaluate the models. Permutation test was applied to estimate significance levels. Relevance Vector Regression identified patterns of neural activity associated with symptoms of behavioral and emotional dysregulation at the initial study screen and close to the fMRI scanning session. The correlation and the mean squared error between actual and decoded symptoms were significant at the initial study screen and close to the fMRI scanning session. However, after controlling for potential medication effects, results remained significant only for decoding symptoms at the initial study screen. 
Neural regions with the highest contribution to the pattern regression model included cerebellum, sensory-motor and fronto-limbic areas. The combination of pattern regression models and neuroimaging can help to determine the severity of behavioral and emotional dysregulation in youth at different time points.
Time series sightability modeling of animal populations.
ArchMiller, Althea A; Dorazio, Robert M; St Clair, Katherine; Fieberg, John R
2018-01-01
Logistic regression models-or "sightability models"-fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it produced estimates similar to those from the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
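The classical sightability-model pipeline this abstract builds on can be sketched in two steps. The covariate, coefficients, and survey data below are invented, and the sampling-fraction correction of the full mHT estimator is omitted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Step 1 -- sightability trials with marked animals: detection (0/1) is
# modeled against a visual-obstruction covariate by logistic regression.
cover = rng.uniform(0.0, 1.0, size=300)
p_true = 1.0 / (1.0 + np.exp(-(2.0 - 4.0 * cover)))
detected = rng.binomial(1, p_true)
sight = LogisticRegression().fit(cover.reshape(-1, 1), detected)

# Step 2 -- later detection-only survey: inflate each observed group by
# its estimated detection probability (Horvitz-Thompson style; the
# modified estimator in the paper also corrects for sampled area).
obs_cover = rng.uniform(0.0, 0.6, size=50)
group_size = rng.integers(1, 5, size=50)
p_hat = sight.predict_proba(obs_cover.reshape(-1, 1))[:, 1]
abundance_estimate = float(np.sum(group_size / p_hat))
```

The hierarchical approach the authors develop keeps step 1 informed only by the trial data, which is what prevents the multiyear detection-only surveys from distorting the detection-probability estimates.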
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Ruyck, Kim, E-mail: kim.deruyck@UGent.be; Sabbe, Nick; Oberije, Cary
2011-10-01
Purpose: To construct a model for the prediction of acute esophagitis in lung cancer patients receiving chemoradiotherapy by combining clinical data, treatment parameters, and genotyping profile. Patients and Methods: Data were available for 273 lung cancer patients treated with curative chemoradiotherapy. Clinical data included gender, age, World Health Organization performance score, nicotine use, diabetes, chronic disease, tumor type, tumor stage, lymph node stage, tumor location, and medical center. Treatment parameters included chemotherapy, surgery, radiotherapy technique, tumor dose, mean fractionation size, mean and maximal esophageal dose, and overall treatment time. A total of 332 genetic polymorphisms were considered in 112 candidate genes. The prediction model was achieved by lasso logistic regression for predictor selection, followed by classic logistic regression for unbiased estimation of the coefficients. Performance of the model was expressed as the area under the curve of the receiver operating characteristic and as the false-negative rate in the optimal point on the receiver operating characteristic curve. Results: A total of 110 patients (40%) developed acute esophagitis Grade ≥2 (Common Terminology Criteria for Adverse Events v3.0). The final model contained chemotherapy treatment, lymph node stage, mean esophageal dose, gender, overall treatment time, radiotherapy technique, rs2302535 (EGFR), rs16930129 (ENG), rs1131877 (TRAF3), and rs2230528 (ITGB2). The area under the curve was 0.87, and the false-negative rate was 16%. Conclusion: Prediction of acute esophagitis can be improved by combining clinical, treatment, and genetic factors. A multicomponent prediction model for acute esophagitis with a sensitivity of 84% was constructed with two clinical parameters, four treatment parameters, and four genetic polymorphisms.
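The two-stage strategy above (lasso for predictor selection, then classic logistic regression refit for less biased coefficients) can be sketched with scikit-learn on synthetic stand-in data; none of the variables below correspond to the actual clinical or genetic predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in data: 200 patients, 10 candidate predictors, of
# which only the first two truly influence the outcome
X = rng.normal(size=(200, 10))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Step 1: lasso (L1-penalized) logistic regression for predictor selection
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])

# Step 2: effectively unpenalized logistic refit on the selected
# predictors for less biased coefficient estimates
refit = LogisticRegression(C=1e6).fit(X[:, selected], y)
```

The refit step matters because lasso shrinks the surviving coefficients toward zero; refitting without the penalty removes that bias.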
Wagle, Jørgen; Farner, Lasse; Flekkøy, Kjell; Bruun Wyller, Torgeir; Sandvik, Leiv; Fure, Brynjar; Stensrød, Brynhild; Engedal, Knut
2011-01-01
To identify prognostic factors associated with functional outcome at 13 months in a sample of stroke rehabilitation patients. Specifically, we hypothesized that cognitive functioning early after stroke would predict long-term functional outcome independently of other factors. 163 stroke rehabilitation patients underwent a structured neuropsychological examination 2-3 weeks after hospital admittance, and their functional status was subsequently evaluated 13 months later with the modified Rankin Scale (mRS) as outcome measure. Three predictive models were built using linear regression analyses: a biological model (sociodemographics, apolipoprotein E genotype, prestroke vascular factors, lesion characteristics and neurological stroke-related impairment); a functional model (pre- and early post-stroke cognitive functioning, personal and instrumental activities of daily living, ADL, and depressive symptoms), and a combined model (including significant variables, with p value <0.05, from the biological and functional models). A combined model of 4 variables best predicted long-term functional outcome with explained variance of 49%: neurological impairment (National Institute of Health Stroke Scale; β = 0.402, p < 0.001), age (β = 0.233, p = 0.001), post-stroke cognitive functioning (Repeatable Battery of Neuropsychological Status, RBANS; β = -0.248, p = 0.001) and prestroke personal ADL (Barthel Index; β = -0.217, p = 0.002). Further linear regression analyses of which RBANS indexes and subtests best predicted long-term functional outcome showed that Coding (β = -0.484, p < 0.001) and Figure Copy (β = -0.233, p = 0.002) raw scores at baseline explained 42% of the variance in mRS scores at follow-up. Early post-stroke cognitive functioning as measured by the RBANS is a significant and independent predictor of long-term functional post-stroke outcome. Copyright © 2011 S. Karger AG, Basel.
Lin, Lei; Wang, Qian; Sadek, Adel W
2016-06-01
The duration of freeway traffic accidents is an important factor affecting traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with M5P for accident duration prediction, however, is that whereas linear regression assumes that the conditional distribution of accident durations is normally distributed, the distribution for a "time-to-an-event" is almost certainly nonsymmetrical. A hazard-based duration model (HBDM) is a better choice for this kind of "time-to-event" modeling scenario, and given this, HBDMs have been previously applied to analyze and predict traffic accident durations. Previous research, however, has not yet applied HBDMs for accident duration prediction in association with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach for accident duration prediction, which improves on the original M5P tree algorithm through the construction of a M5P-HBDM model, in which the leaves of the M5P tree model are HBDMs instead of linear regression models. Such a model offers the advantage of minimizing data heterogeneity through dataset classification, and avoids the need for the incorrect assumption of normality for traffic accident durations. The proposed model was then tested on two freeway accident datasets. For each dataset, the first 500 records were used to train the following three models: (1) an M5P tree; (2) a HBDM; and (3) the proposed M5P-HBDM, and the remainder of the data were used for testing. The results show that the proposed M5P-HBDM managed to identify more significant and meaningful variables than either M5P or HBDMs.
Moreover, the M5P-HBDM had the lowest overall mean absolute percentage error (MAPE). Copyright © 2016 Elsevier Ltd. All rights reserved.
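In its simplest form, a hazard-based duration model fits a parametric "time-to-event" distribution such as the Weibull. The sketch below (synthetic durations, no covariates or M5P partitioning) illustrates why the normality assumption of linear regression is avoided for right-skewed duration data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic accident durations (minutes): right-skewed, as "time-to-event"
# data typically are -- a normal-errors linear model would be misspecified
durations = rng.weibull(1.5, size=500) * 40.0

# Hazard-based duration model: fit a Weibull by maximum likelihood
# (location fixed at zero, since durations start at the event time)
shape, loc, scale = stats.weibull_min.fit(durations, floc=0)

def survival(t):
    """S(t): probability an incident lasts longer than t minutes."""
    return stats.weibull_min.sf(t, shape, loc=0, scale=scale)
```

The fitted survival function decreases monotonically in t, giving the probability that an incident is still ongoing at any given time.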
NASA Astrophysics Data System (ADS)
Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo
2017-08-01
The surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted to compare with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG) and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that: (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty. (2) The ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.
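An ensemble surrogate of the kind compared above can be sketched as an error-weighted average of stand-alone surrogates. In this sketch, scikit-learn's GaussianProcessRegressor stands in for Kriging, a cheap analytic function stands in for the time-consuming multi-phase flow model, and MGGP is omitted since no standard library implements it.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)
# Cheap analytic stand-in for the time-consuming multi-phase flow model
X = rng.uniform(0, 10, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)

krg = GaussianProcessRegressor(alpha=1e-3).fit(X, y)  # Kriging surrogate
svr = SVR().fit(X, y)                                 # SVR surrogate

def rmse(model):
    return np.sqrt(np.mean((model.predict(X) - y) ** 2))

# Ensemble surrogate: weights inversely proportional to each
# stand-alone model's training RMSE
w_k, w_s = 1.0 / rmse(krg), 1.0 / rmse(svr)

def ensemble(X_new):
    return (w_k * krg.predict(X_new) + w_s * svr.predict(X_new)) / (w_k + w_s)

pred = float(ensemble(np.array([[5.0]]))[0])
```

The weighting scheme is one common choice; the study's own combination rule may differ.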
Washington, Simon; Haque, Md Mazharul; Oh, Jutaek; Lee, Dongmin
2014-05-01
Hot spot identification (HSID) aims to identify potential sites-roadway segments, intersections, crosswalks, interchanges, ramps, etc.-with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to the misuse of available public funds, to poor investment decisions, and to inefficient risk management practice. Current HSID methods suffer from issues like underreporting of minor injury and property damage only (PDO) crashes, the challenge of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that are often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean like most methods in practice, which more closely corresponds with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression.
Application of a quantile regression model on equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to the society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of traditional NB model in dealing with preponderance of zeros problem or right skewed dataset. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Shuailing; Shao, Qingsong; Lu, Zhonghua; Duan, Chengli; Yi, Haojun; Su, Liyang
2018-02-01
Saffron is an expensive spice. Its primary effective constituents are crocin I and II, and the contents of these compounds directly affect the quality and commercial value of saffron. In this study, near-infrared spectroscopy was combined with chemometric techniques for the determination of crocin I and II in saffron. Partial least squares regression models were built for the quantification of crocin I and II. By comparing different spectral ranges and spectral pretreatment methods (no pretreatment, vector normalization, subtract a straight line, multiplicative scatter correction, minimum-maximum normalization, eliminate the constant offset, first derivative, and second derivative), optimum models were developed. The root mean square error of cross-validation values of the best partial least squares models for crocin I and II were 1.40 and 0.30, respectively. The coefficients of determination for crocin I and II were 93.40 and 96.30, respectively. These results show that near-infrared spectroscopy can be combined with chemometric techniques to determine the contents of crocin I and II in saffron quickly and efficiently.
Age estimation using cortical surface pattern combining thickness with curvatures
Wang, Jieqiong; Li, Wenjing; Miao, Wen; Dai, Dai; Hua, Jing; He, Huiguang
2014-01-01
Brain development and healthy aging have been proved to follow a specific pattern, which, in turn, can be applied to help doctors diagnose mental diseases. In this paper, we design a cortical surface pattern (CSP) combining the cortical thickness with curvatures, which constructs an accurate human age estimation model with relevance vector regression. We test our model with two public databases. One is the IXI database (360 healthy subjects aging from 20 to 82 years old were selected), and the other is the INDI database (303 subjects aging from 7 to 22 years old were selected). The results show that our model can achieve as small as 4.57 years deviation in the IXI database and 1.38 years deviation in the INDI database. Furthermore, we employ this surface pattern to age groups classification, and get a remarkably high accuracy (97.77%) and a significantly high sensitivity/specificity (97.30%/98.10%). These results suggest that our designed CSP combining thickness with curvatures is stable and sensitive to brain development, and it is much more powerful than voxel-based morphometry used in previous methods for age estimation. PMID:24395657
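Relevance vector regression has no scikit-learn implementation; ARDRegression, a closely related sparse Bayesian linear model, is used below as a stand-in, and the "cortical features" are random placeholders rather than real thickness or curvature measures.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(5)
# Placeholder "cortical surface pattern": 150 subjects x 50 features
# (standing in for regional thickness and curvature measures)
X = rng.normal(size=(150, 50))
age = 40.0 + 10.0 * X[:, 0] - 5.0 * X[:, 1] + rng.normal(scale=2.0, size=150)

# scikit-learn has no relevance vector machine; ARDRegression is a
# related sparse Bayesian linear model, used here only as a stand-in
model = ARDRegression().fit(X, age)
mae = float(np.mean(np.abs(model.predict(X) - age)))
```

Like RVR, ARD prunes irrelevant features by driving their coefficient precisions to infinity, which is why such sparse Bayesian models suit high-dimensional neuroimaging features.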
NASA Astrophysics Data System (ADS)
Shao, Yongni; Jiang, Linjun; Zhou, Hong; Pan, Jian; He, Yong
2016-04-01
In our study, the feasibility of using visible/near infrared hyperspectral imaging technology to detect the changes of the internal components of Chlorella pyrenoidosa so as to determine the varieties of pesticides (such as butachlor, atrazine and glyphosate) at three concentrations (0.6 mg/L, 3 mg/L, 15 mg/L) was investigated. Three models (partial least squares discriminant analysis combined with full wavelengths, FW-PLSDA; partial least squares discriminant analysis combined with competitive adaptive reweighted sampling algorithm, CARS-PLSDA; linear discrimination analysis combined with regression coefficients, RC-LDA) were built by the hyperspectral data of Chlorella pyrenoidosa to find which model can produce the most optimal result. The RC-LDA model, which achieved an average correct classification rate of 97.0%, was superior to FW-PLSDA (72.2%) and CARS-PLSDA (84.0%), and it proved that visible/near infrared hyperspectral imaging could be a rapid and reliable technique to identify pesticide varieties. It also proved that microalgae can be a very promising medium to indicate characteristics of pesticides.
Donta, Balaiah; Dasgupta, Anindita; Ghule, Mohan; Battala, Madhusudana; Nair, Saritha; Silverman, Jay G.; Jadhav, Arun; Palaye, Prajakta; Saggurti, Niranjan; Raj, Anita
2015-01-01
Objective Evidence has linked economic hardship with increased intimate partner violence (IPV) perpetration among males. However, less is known about how economic debt or gender norms related to men's roles in relationships or the household, which often underlie IPV perpetration, intersect in or may explain these associations. We assessed the intersection of economic debt, attitudes toward gender norms, and IPV perpetration among married men in India. Methods Data were from the evaluation of a family planning intervention among young married couples (n=1,081) in rural Maharashtra, India. Crude and adjusted logistic regression models for dichotomous outcome variables and linear regression models for continuous outcomes were used to examine debt in relation to husbands' attitudes toward gender-based norms (i.e., beliefs supporting IPV and beliefs regarding male dominance in relationships and the household), as well as sexual and physical IPV perpetration. Results Twenty percent of husbands reported debt. In adjusted linear regression models, debt was associated with husbands' attitudes supportive of IPV (b=0.015, p=0.004) and norms supporting male dominance in relationships and the household (b=0.006, p=0.003). In logistic regression models adjusted for relevant demographics, debt was associated with perpetration of physical IPV (adjusted odds ratio [AOR] = 1.4, 95% confidence interval [CI] 1.1, 1.9) and sexual IPV (AOR=1.6, 95% CI 1.1, 2.1) from husbands. These findings related to debt and relation to IPV were slightly attenuated when further adjusted for men's attitudes toward gender norms. Conclusion Findings suggest the need for combined gender equity and economic promotion interventions to address high levels of debt and related IPV reported among married couples in rural India. PMID:26556938
Reed, Elizabeth; Donta, Balaiah; Dasgupta, Anindita; Ghule, Mohan; Battala, Madhusudana; Nair, Saritha; Silverman, Jay G; Jadhav, Arun; Palaye, Prajakta; Saggurti, Niranjan; Raj, Anita
2015-01-01
Evidence has linked economic hardship with increased intimate partner violence (IPV) perpetration among males. However, less is known about how economic debt or gender norms related to men's roles in relationships or the household, which often underlie IPV perpetration, intersect in or may explain these associations. We assessed the intersection of economic debt, attitudes toward gender norms, and IPV perpetration among married men in India. Data were from the evaluation of a family planning intervention among young married couples (n=1,081) in rural Maharashtra, India. Crude and adjusted logistic regression models for dichotomous outcome variables and linear regression models for continuous outcomes were used to examine debt in relation to husbands' attitudes toward gender-based norms (i.e., beliefs supporting IPV and beliefs regarding male dominance in relationships and the household), as well as sexual and physical IPV perpetration. Twenty percent of husbands reported debt. In adjusted linear regression models, debt was associated with husbands' attitudes supportive of IPV (b=0.015, p=0.004) and norms supporting male dominance in relationships and the household (b=0.006, p=0.003). In logistic regression models adjusted for relevant demographics, debt was associated with perpetration of physical IPV (adjusted odds ratio [AOR] = 1.4, 95% confidence interval [CI] 1.1, 1.9) and sexual IPV (AOR=1.6, 95% CI 1.1, 2.1) by husbands. These findings related to debt and IPV were slightly attenuated when further adjusted for men's attitudes toward gender norms. Findings suggest the need for combined gender equity and economic promotion interventions to address high levels of debt and related IPV reported among married couples in rural India.
Kapellusch, Jay M; Silverstein, Barbara A; Bao, Stephen S; Thiese, Mathew S; Merryweather, Andrew S; Hegmann, Kurt T; Garg, Arun
2018-02-01
The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value for hand activity level (TLV for HAL) have been shown to be associated with prevalence of distal upper-limb musculoskeletal disorders such as carpal tunnel syndrome (CTS). The SI and TLV for HAL disagree on more than half of task exposure classifications. Similarly, time-weighted average (TWA), peak, and typical exposure techniques used to quantify physical exposure from multi-task jobs have shown between-technique agreement ranging from 61% to 93%, depending upon whether the SI or TLV for HAL model was used. This study compared exposure-response relationships between each model-technique combination and prevalence of CTS. Physical exposure data from 1,834 workers (710 with multi-task jobs) were analyzed using the SI and TLV for HAL and the TWA, typical, and peak multi-task job exposure techniques. Additionally, exposure classifications from the SI and TLV for HAL were combined into a single measure and evaluated. Prevalent CTS cases were identified using symptoms and nerve-conduction studies. Mixed effects logistic regression was used to quantify exposure-response relationships between categorized (i.e., low, medium, and high) physical exposure and CTS prevalence for all model-technique combinations, and for multi-task workers, mono-task workers, and all workers combined. Except for TWA TLV for HAL, all model-technique combinations showed monotonic increases in risk of CTS with increased physical exposure. The combined-models approach showed stronger association than the SI or TLV for HAL for multi-task workers. Despite differences in exposure classifications, nearly all model-technique combinations showed exposure-response relationships with prevalence of CTS for the combined sample of mono-task and multi-task workers. Both the TLV for HAL and the SI, with the TWA or typical techniques, appear useful for epidemiological studies and surveillance.
However, the utility of TWA, typical, and peak techniques for job design and intervention is dubious.
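The TWA and peak techniques contrasted above reduce a multi-task job to a single exposure number in different ways; a minimal sketch with hypothetical task scores:

```python
import numpy as np

def time_weighted_average(scores, hours):
    """Time-weighted average (TWA): each task's exposure score is
    weighted by the time spent on that task over the shift."""
    scores, hours = np.asarray(scores, float), np.asarray(hours, float)
    return float(np.sum(scores * hours) / np.sum(hours))

# Hypothetical multi-task job: three tasks with Strain Index-like scores
scores = [3.0, 9.0, 13.5]
hours = [4.0, 3.0, 1.0]

twa = time_weighted_average(scores, hours)  # (12 + 27 + 13.5) / 8 = 6.5625
peak = max(scores)                          # peak technique: worst task only
```

The two techniques can classify the same job very differently: here the TWA sits in a moderate range while the peak reflects only the brief highest-exposure task, which is one source of the between-technique disagreement the study quantifies.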
Rizzitelli, S; Giustetto, P; Faletto, D; Delli Castelli, D; Aime, S; Terreno, E
2016-05-28
The work aimed at developing a novel MRI-based theranostic protocol for improving the anticancer efficacy of a Doxil-like liposomal formulation. The goal was achieved by stimulating the intratumor release of the drug from the nanocarrier and favoring its diffusion in the lesion through the sequential application of low-intensity pulsed ultrasound. The protocol was tested on mice bearing a syngeneic breast cancer model. The combination of acoustic waves with different characteristics allowed for: i) the release of the drug and the co-encapsulated MRI agent (Gadoteridol) from the liposomes in the vessels of the tumor region, and ii) the extravasation of the released material, as well as intact liposomes, in the tumor stroma. The MR-T1 contrast enhancement measured in the tumor reported on the delivery and US-triggered release of Doxorubicin. The developed protocol resulted in a marked increase in the intratumor drug concentration that, in turn, led to the complete regression of the lesion. The protocol has good clinical translatability because all the components of the theranostic agent (Doxorubicin, liposomes, Gadoteridol) are approved for human use. Copyright © 2016 Elsevier B.V. All rights reserved.
Modeling and optimization of dough recipe for breadsticks
NASA Astrophysics Data System (ADS)
Krivosheev, A. Yu; Ponomareva, E. I.; Zhuravlev, A. A.; Lukina, S. I.; Alekhina, N. N.
2018-05-01
During the work, the authors studied the combined effect of non-traditional raw materials on quality indicators of breadsticks, applying mathematical methods of experiment planning. The main factors chosen were the dosages of flaxseed flour and grape seed oil. The output parameters were the swelling factor of the products and their strength. Optimization of the formulation composition of the dough for bread sticks was carried out by experimental-statistical methods. As a result of the experiment, mathematical models were constructed in the form of regression equations, adequately describing the process under study. The statistical processing of the experimental data was carried out by the criteria of Student, Cochran and Fisher (with a confidence probability of 0.95). A mathematical interpretation of the regression equations was given. Optimization of the formulation of the dough for bread sticks was carried out by the method of undetermined Lagrange multipliers. The rational values of the factors were determined: the dosage of flaxseed flour - 14.22% and grape seed oil - 7.8%, ensuring the production of products with the best combination of swelling ratio and strength. On the basis of the data obtained, a recipe and a method for the production of breadsticks "Idea" were proposed (TU (Russian Technical Specifications) 9117-443-02068106-2017).
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty of selecting the form and/or model involved in reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov Chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rules, provides the samples used to derive the rules; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
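The BMA combination of member models can be sketched as a likelihood-weighted average; the sketch below uses BIC-based weights over three synthetic member predictions standing in for the piecewise linear regression, surface fitting, and LS-SVM rules (all numbers are illustrative).

```python
import numpy as np

rng = np.random.default_rng(7)
# Observed optimal releases plus predictions from three individual
# operating-rule models (stand-ins for piecewise linear regression,
# surface fitting and LS-SVM); all numbers are illustrative
obs = rng.normal(100.0, 10.0, size=50)
preds = np.vstack([obs + rng.normal(0.0, s, size=50) for s in (4.0, 6.0, 2.0)])

def bic(residuals, k=2):
    """Gaussian BIC for one member model's residuals."""
    n = residuals.size
    return n * np.log(np.mean(residuals ** 2)) + k * np.log(n)

# BMA weights: posterior model probabilities approximated from BIC,
# so better-fitting members dominate the combination
bics = np.array([bic(p - obs) for p in preds])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
bma = w @ preds  # combined release decision
```

The combined decision inherits most of its weight from the best-fitting member, while still reflecting model uncertainty when members fit comparably well.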
Pinder, John E; Rowan, David J; Smith, Jim T
2016-02-01
Data from published studies and World Wide Web sources were combined to develop a regression model to predict (137)Cs concentration ratios for saltwater fish. Predictions were developed from 1) numeric trophic levels computed primarily from random resampling of known food items and 2) K concentrations in the saltwater for 65 samplings from 41 different species from both the Atlantic and Pacific Oceans. A number of different models were initially developed and evaluated for accuracy, which was assessed as the ratios of independently measured concentration ratios to those predicted by the model. In contrast to freshwater systems, where K concentrations are highly variable and are an important factor in affecting fish concentration ratios, the less variable K concentrations in saltwater were relatively unimportant in affecting concentration ratios. As a result, the simplest model, which used only trophic level as a predictor, had comparable accuracy to more complex models that also included K concentrations. A test of model accuracy involving comparisons of 56 published concentration ratios from 51 species of marine fish to those predicted by the model indicated that 52 of the predicted concentration ratios were within a factor of 2 of the observed concentration ratios. Copyright © 2015 Elsevier Ltd. All rights reserved.
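The trophic-level-only model and its factor-of-2 accuracy check can be sketched as a log-linear regression; the data below are synthetic stand-ins, not the compiled concentration ratios.

```python
import numpy as np

rng = np.random.default_rng(8)
# Illustrative data: numeric trophic level vs. 137Cs concentration ratio
# (CR), with CR increasing multiplicatively with trophic level
trophic = rng.uniform(2.0, 4.5, size=65)
cr = 10.0 * 2.0 ** (trophic - 3.0) * rng.lognormal(0.0, 0.25, size=65)

# Regress log(CR) on trophic level, the single predictor retained
# by the simplest model in the abstract
b, a = np.polyfit(trophic, np.log(cr), 1)
pred = np.exp(a + b * trophic)

# Accuracy check in the style of the abstract: fraction of observed
# CRs falling within a factor of 2 of the model predictions
ratio = cr / pred
within_factor_2 = float(np.mean((ratio > 0.5) & (ratio < 2.0)))
```

Working on the log scale is natural here because concentration ratios vary multiplicatively, and "within a factor of 2" is a symmetric interval in log space.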
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.
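The component structure underlying such mixtures can be sketched with an EM fit of a two-component Poisson mixture; this is a simplified, covariate-free version of the mixture-regression idea (components playing the role of risk clusters), on synthetic counts.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(9)
# Counts from two latent risk groups (low-risk mean 2, high-risk mean 8)
y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(8.0, 200)])

# EM for a two-component Poisson mixture -- a simplified, covariate-free
# version of the mixture-regression idea (components = risk clusters)
lam = np.array([1.0, 10.0])   # initial component means
pi = np.array([0.5, 0.5])     # initial mixing weights
for _ in range(200):
    # E-step: posterior responsibility of each component for each count
    logp = np.log(pi) + poisson.logpmf(y[:, None], lam)
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights and component means
    pi = resp.mean(axis=0)
    lam = resp.T @ y / resp.sum(axis=0)
```

A full mixture regression would replace each constant component mean with a Poisson regression on covariates, but the E/M alternation is the same.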
Poisson Mixture Regression Models for Heart Disease Prediction
Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611
Faber, Anthony C; Coffee, Erin M; Costa, Carlotta; Dastur, Anahita; Ebi, Hiromichi; Hata, Aaron N; Yeo, Alan T; Edelman, Elena J; Song, Youngchul; Tam, Ah Ting; Boisvert, Jessica L; Milano, Randy J; Roper, Jatin; Kodack, David P; Jain, Rakesh K; Corcoran, Ryan B; Rivera, Miguel N; Ramaswamy, Sridhar; Hung, Kenneth E; Benes, Cyril H; Engelman, Jeffrey A
2014-01-01
Colorectal cancers harboring KRAS or BRAF mutations are refractory to current targeted therapies. Using data from a high-throughput drug screen, we have developed a novel therapeutic strategy that targets the apoptotic machinery using the BCL-2 family inhibitor ABT-263 (navitoclax) in combination with a TORC1/2 inhibitor, AZD8055. This combination leads to efficient apoptosis specifically in KRAS- and BRAF-mutant but not wild-type (WT) colorectal cancer cells. This specific susceptibility results from TORC1/2 inhibition leading to suppression of MCL-1 expression in mutant, but not WT, colorectal cancers, leading to abrogation of BIM/MCL-1 complexes. This combination strategy leads to tumor regressions in both KRAS-mutant colorectal cancer xenograft and genetically engineered mouse models of colorectal cancer, but not in the corresponding KRAS-WT colorectal cancer models. These data suggest that the combination of BCL-2/BCL-XL inhibitors with TORC1/2 inhibitors constitutes a promising targeted therapy strategy to treat these recalcitrant cancers.
Time series sightability modeling of animal populations
ArchMiller, Althea A.; Dorazio, Robert; St. Clair, Katherine; Fieberg, John R.
2018-01-01
Logistic regression models—or “sightability models”—fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in estimates similar to those from the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
A hybrid PCA-CART-MARS-based prognostic approach of the remaining useful life for aircraft engines.
Sánchez Lasheras, Fernando; García Nieto, Paulino José; de Cos Juez, Francisco Javier; Mayo Bayón, Ricardo; González Suárez, Victor Manuel
2015-03-23
Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines.
A Hybrid PCA-CART-MARS-Based Prognostic Approach of the Remaining Useful Life for Aircraft Engines
Lasheras, Fernando Sánchez; Nieto, Paulino José García; de Cos Juez, Francisco Javier; Bayón, Ricardo Mayo; Suárez, Victor Manuel González
2015-01-01
Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines. PMID:25806876
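The PCA-plus-tree portion of the hybrid pipeline above can be sketched with scikit-learn; MARS has no scikit-learn implementation, so this sketch keeps only the PCA and CART stages, and the sensor data and RUL values are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(10)
# Stand-in engine sensor data: 12 correlated channels driven by 3 latent
# health factors, with remaining useful life (RUL) tied to the first one
latent = rng.normal(size=(300, 3))
X = latent @ rng.normal(size=(3, 12)) + 0.1 * rng.normal(size=(300, 12))
rul = 200.0 - 30.0 * latent[:, 0] + rng.normal(0.0, 5.0, size=300)

# PCA compresses the correlated sensor channels into a few scores, and
# a CART regression tree maps the scores to RUL (the MARS leaves of the
# full hybrid are omitted for lack of a standard implementation)
model = make_pipeline(PCA(n_components=3), DecisionTreeRegressor(max_depth=5))
model.fit(X, rul)
r2 = float(model.score(X, rul))
```

PCA up front mirrors the hybrid's dimensionality-reduction stage: the tree then partitions a 3-dimensional score space instead of the raw 12-channel sensor space.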