Sample records for adaptive regression splines

  1. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm and Holland's genetic algorithm: the incremental search of MARS is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.

  2. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

One of the topics of greatest interest to investors is stock price change. Investors with long-term goals are sensitive to stock prices and react to their changes. In this study, we therefore used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique to predict stock prices. The MARS model is a nonparametric, adaptive regression method well suited to high-dimensional problems with many variables; the semi-parametric technique used here is smoothing splines, likewise a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, the MARS model selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential for predicting stock price. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as effective in forecasting stock prices.
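The smoothing-spline idea mentioned above can be illustrated in a few lines with SciPy. The series below is synthetic (a noisy sine, not the study's stock or accounting data), and the smoothing level `s=2.0` is an assumed value:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic noisy series standing in for an observed signal.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

# UnivariateSpline places knots so the residual sum of squares stays
# below the smoothing parameter s: larger s -> smoother curve.
spl = UnivariateSpline(x, y, s=2.0)
yhat = spl(x)
```

Choosing `s` near the total noise variance (here 200 points with noise variance 0.01, so about 2) recovers the underlying trend while suppressing the noise.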

  3. Validation of cross-sectional time series and multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents using doubly labeled water

    USDA-ARS's Scientific Manuscript database

    Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant cha...

  4. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Treesearch

    Michael S. Balshi; A. David McGuire; Paul Duffy; Mike Flannigan; John Walsh; Jerry Melillo

    2009-01-01

    We developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was...

  5. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

    There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.

  6. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age.

    PubMed

    Wilke, Marko

    2018-02-01

    This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.

  7. PM10 modeling in the Oviedo urban area (Northern Spain) by using multivariate adaptive regression splines

    NASA Astrophysics Data System (ADS)

    Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza

    2014-10-01

    The aim of this research work is to build a regression model of the particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at local scale. MARS is a nonparametric regression algorithm able to approximate the relationship between inputs and outputs and to express that relationship mathematically. In this sense, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, an experimental dataset of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) was collected over 3 years (2006-2008) and used to create a highly nonlinear model of PM10 in the Oviedo urban nucleus based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence between PM10 and the other pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish the limit values of the main atmospheric pollutants in order to protect human health. Firstly, this MARS regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, conclusions of this research work are presented on the basis of these numerical calculations with the MARS technique.

  8. Newer classification and regression tree techniques: Bagging and Random Forests for ecological prediction

    Treesearch

    Anantha M. Prasad; Louis R. Iverson; Andy Liaw

    2006-01-01

    We evaluated four statistical models - Regression Tree Analysis (RTA), Bagging Trees (BT), Random Forests (RF), and Multivariate Adaptive Regression Splines (MARS) - for predictive vegetation mapping under current and future climate scenarios according to the Canadian Climate Centre global circulation model.

  9. Multivariate adaptive regression splines analysis to predict biomarkers of spontaneous preterm birth.

    PubMed

    Menon, Ramkumar; Bhat, Geeta; Saade, George R; Spratt, Heidi

    2014-04-01

    To develop classification models of demographic/clinical factors and biomarker data from spontaneous preterm birth in African Americans and Caucasians. Secondary analysis of biomarker data using multivariate adaptive regression splines (MARS), a supervised machine learning method. Analysis of data on 36 biomarkers from 191 women was reduced by MARS to develop predictive models for preterm birth in African Americans and Caucasians. Maternal plasma and cord plasma were collected at admission for preterm or term labor, and amniotic fluid at delivery. Data were partitioned into training and testing sets. Variable importance, a relative indicator (0-100%), and the area under the receiver operating characteristic curve (AUC) characterized the results. Multivariate adaptive regression splines generated models for combined and racially stratified biomarker data. Clinical and demographic data did not contribute to the model. Racial stratification of the data produced distinct models in all three compartments. In African American maternal plasma samples, IL-1RA, TNF-α, angiopoietin 2, TNFRI, IL-5, MIP1α, IL-1β and TGF-α modeled preterm birth (AUC train: 0.98, AUC test: 0.86). In Caucasians, TNFR1, ICAM-1 and IL-1RA contributed to the model (AUC train: 0.84, AUC test: 0.68). African American cord plasma samples produced a model with IL-12P70 and IL-8 (AUC train: 0.82, AUC test: 0.66). The Caucasian cord plasma model included IGFII, PDGFBB, TGF-β1, IL-12P70, and TIMP1 (AUC train: 0.99, AUC test: 0.82). Amniotic fluid in African Americans modeled FasL, TNFRII, RANTES, KGF and IGFI (AUC train: 0.95, AUC test: 0.89), and in Caucasians TNF-α, MCP3, TGF-β3, TNFR1 and angiopoietin 2 (AUC train: 0.94, AUC test: 0.79). Multivariate adaptive regression splines modeled multiple biomarkers associated with preterm birth and demonstrated racial disparity. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.

  10. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  11. Modelling lecturer performance index of private university in Tulungagung by using survival analysis with multivariate adaptive regression spline

    NASA Astrophysics Data System (ADS)

    Hasyim, M.; Prastyo, D. D.

    2018-03-01

    Survival analysis models the relationship between independent variables and survival time as the dependent variable. In practice, not all survival data can be recorded completely; such data are called censored. Moreover, several models for survival analysis require assumptions. One nonparametric approach with more relaxed assumptions is Multivariate Adaptive Regression Splines (MARS), which is employed in this research. This study aims to measure the performance of lecturers at a private university, where the survival time is the duration a lecturer needs to obtain their professional certificate. The results show that research activity is a significant factor, along with developing course materials, good publication in international or national journals, and participation in research collaborations.

  12. Evaluation of adaptive treatment planning for patients with non-small cell lung cancer

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Siddiqui, Salim M.; Movsas, Benjamin; Chetty, Indrin J.

    2017-06-01

    The purpose of this study was to develop metrics to evaluate uncertainties in deformable dose accumulation for patients with non-small cell lung cancer (NSCLC). Initial treatment plans (primary) and cone-beam CT (CBCT) images were retrospectively processed for seven NSCLC patients who showed significant tumor regression during the course of treatment. Each plan was developed with IMRT for 2 Gy × 33 fractions. A B-spline-based DIR algorithm was used to register weekly CBCT images to a reference image acquired at fraction 21, and the resultant displacement vector fields (DVFs) were then modified using a finite element method (FEM). The doses were calculated on each of these CBCT images and mapped to the reference image using a tri-linear dose interpolation method, based on the B-spline and FEM-generated DVFs. Contours propagated from the planning image were adjusted to the residual tumor and OARs on the reference image to develop a secondary plan. For iso-prescription adaptive plans (relative to initial plans), mean lung dose (MLD) was reduced, on average, from 17.3 Gy (initial plan) to 15.2, 14.5 and 14.8 Gy for the plans adapted using the rigid, B-spline and FEM-based registrations. Similarly, for iso-toxic adaptive plans (considering MLD relative to initial plans) using the rigid, B-spline and FEM-based registrations, the average doses were 69.9 ± 6.8, 65.7 ± 5.1 and 67.2 ± 5.6 Gy in the initial volume (PTV1), and 81.5 ± 25.8, 77.7 ± 21.6, and 78.9 ± 22.5 Gy in the residual volume (PTV21), respectively. Tumor volume reduction was correlated with dose escalation (for iso-toxic plans, correlation coefficient = 0.92), and with MLD reduction (for iso-prescription plans, correlation coefficient = 0.85).
For the case of iso-toxic dose escalation, plans adapted with the B-spline and FEM DVFs differed from the primary plan adapted with rigid registration by 2.8 ± 1.0 Gy and 1.8 ± 0.9 Gy in PTV1, and the mean difference between doses accumulated using the B-spline and FEM DVFs was 1.1 ± 0.6 Gy. As a dose mapping-induced energy change, the energy defect in the tumor volume was 20.8 ± 13.4% and 4.5 ± 2.4% for the B-spline and FEM-based dose accumulations, respectively. The energy defect of the B-spline-based dose accumulation is significant in the tumor volume and highly correlated with the difference between the B-spline and FEM-accumulated doses, with a correlation coefficient of 0.79. Adaptive planning helps escalate target dose and spare normal tissue for patients with NSCLC, but deformable dose accumulation may involve a significant loss of energy in regressed tumor volumes when using image intensity-based DIR algorithms. The energy-defect metric is a useful tool for evaluating adaptive planning accuracy for lung cancer patients.
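The energy-defect metric described above reduces to simple bookkeeping: the energy deposited in a region is the sum over voxels of dose (Gy, i.e. J/kg) times voxel mass, and the defect is the relative energy lost by the dose mapping. A minimal sketch with made-up voxel values; the function names and numbers are illustrative, not from the study:

```python
import numpy as np

def deposited_energy(dose_gy, density_g_cm3, voxel_cm3):
    """Energy (J) deposited in a region: sum over voxels of dose * mass.

    Dose is in Gy (J/kg); voxel mass in kg is density [g/cm^3] * volume [cm^3] / 1000.
    """
    mass_kg = density_g_cm3 * voxel_cm3 / 1000.0
    return float(np.sum(dose_gy * mass_kg))

def energy_defect(e_reference, e_mapped):
    """Relative energy lost (or gained) by a dose mapping operation."""
    return (e_reference - e_mapped) / e_reference

# Toy example: 1000 voxels of 1 cm^3 water-like tissue at 2 Gy -> 2 J in 1 kg.
dose = np.full(1000, 2.0)   # Gy
rho = np.full(1000, 1.0)    # g/cm^3
e_ref = deposited_energy(dose, rho, 1.0)
e_map = deposited_energy(0.9 * dose, rho, 1.0)  # a mapping that loses 10% of dose
```

A mapping that preserves dose point-values but shrinks the region (tumor regression) loses energy in exactly this sense, which is what the metric is designed to expose.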

  13. [Multivariate Adaptive Regression Splines (MARS), an alternative for the analysis of time series].

    PubMed

    Vanegas, Jairo; Vásquez, Fabián

    Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model, incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: selecting relevant variables, transforming the predictor variables, processing missing values and preventing overfitting through self-testing. It is also able to predict, taking into account structural factors that might influence the outcome variable, thereby generating hypothetical models. The end result can identify relevant cut-off points in data series. Because it is rarely used in health research, it is proposed here as a tool for the evaluation of relevant public health indicators. For demonstration purposes, data series on the mortality of children under 5 years of age in Costa Rica covering the period 1978-2008 were used. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
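The adaptive, forward part of MARS that this and several other records describe can be sketched for a single predictor: each step tries every interior data value as a knot and keeps the mirrored hinge pair max(0, x - t), max(0, t - x) that most reduces the residual sum of squares. This is a minimal illustration only, without Friedman's backward pruning or GCV step; the function names and exhaustive knot search are simplifying assumptions:

```python
import numpy as np

def hinge(x, knot, sign):
    """Hinge basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

def mars_forward_1d(x, y, n_pairs=2):
    """Greedy forward pass of a one-dimensional MARS-style model.

    Each step tries every interior data value as a knot and keeps the
    mirrored hinge pair that most reduces the residual sum of squares.
    """
    basis = [np.ones_like(x)]      # start with the intercept column
    knots = []
    for _ in range(n_pairs):
        best_err, best_knot = None, None
        for knot in np.unique(x)[1:-1]:
            X = np.column_stack(basis + [hinge(x, knot, +1), hinge(x, knot, -1)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            err = float(resid @ resid)
            if best_err is None or err < best_err:
                best_err, best_knot = err, knot
        knots.append(best_knot)
        basis += [hinge(x, best_knot, +1), hinge(x, best_knot, -1)]
    X = np.column_stack(basis)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return knots, X @ beta
```

On data with a single kink, such as y = |x - 0.5|, the first selected knot lands on the kink, which is the "relevant cut-off point" behavior the abstract refers to.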

  14. A New Predictive Model of Centerline Segregation in Continuous Cast Steel Slabs by Using Multivariate Adaptive Regression Splines Approach

    PubMed Central

    García Nieto, Paulino José; González Suárez, Victor Manuel; Álvarez Antón, Juan Carlos; Mayo Bayón, Ricardo; Sirgo Blanco, José Ángel; Díaz Fernández, Ana María

    2015-01-01

    The aim of this study was to obtain a predictive model able to perform early detection of central segregation severity in continuous cast steel slabs. Segregation in steel cast products is an internal defect that can be very harmful when slabs are rolled in heavy plate mills. In this research work, the central segregation was studied successfully using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique. For this purpose, the most important physical-chemical parameters were considered. The results of the present study are two-fold. First, the significance of each physical-chemical variable for segregation is revealed through the model. Second, a model for forecasting segregation is obtained. Regression with optimal hyperparameters was performed; when the MARS technique was applied to the experimental dataset, coefficients of determination of 0.93 for continuity factor estimation and 0.95 for average width were obtained, respectively. The agreement between experimental data and the model confirmed the good performance of the latter.

  15. Predicting Potential Changes in Suitable Habitat and Distribution by 2100 for Tree Species of the Eastern United States

    Treesearch

    Louis R Iverson; Anantha M. Prasad; Mark W. Schwartz

    2005-01-01

    We predict current distribution and abundance for tree species present in eastern North America, and subsequently estimate potential suitable habitat for those species under a changed climate with 2 × CO2. We used a series of statistical models (i.e., Regression Tree Analysis (RTA), Multivariate Adaptive Regression Splines (MARS), Bagging Trees (...

  16. Prediction of energy expenditure and physical activity in preschoolers

    USDA-ARS's Scientific Manuscript database

    Accurate, nonintrusive, and feasible methods are needed to predict energy expenditure (EE) and physical activity (PA) levels in preschoolers. Herein, we validated cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on accelerometry and heart rate (HR) ...

  17. A New Approach of Juvenile Age Estimation using Measurements of the Ilium and Multivariate Adaptive Regression Splines (MARS) Models for Better Age Prediction.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal

    2017-01-01

    Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models was tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module and area give overall better and statistically valid age estimates. These models integrate punctual nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase of individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.

  18. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.

  19. Integrating Growth Variability of the Ilium, Fifth Lumbar Vertebra, and Clavicle with Multivariate Adaptive Regression Splines Models for Subadult Age Estimation.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Telmon, Norbert; Chaumoitre, Kathia; Adalian, Pascal

    2018-05-31

    Subadult age estimation should rely on sampling and statistical protocols capturing development variability for more accurate age estimates. In this perspective, measurements were taken on the fifth lumbar vertebrae and/or clavicles of 534 French males and females aged 0-19 years and the ilia of 244 males and females aged 0-12 years. These variables were fitted in nonparametric multivariate adaptive regression splines (MARS) models with 95% prediction intervals (PIs) of age. The models were tested on two independent samples from Marseille and the Luis Lopes reference collection from Lisbon. Models using ilium width and module, maximum clavicle length, and lateral vertebral body heights were more than 92% accurate. Precision was lower for postpubertal individuals. Integrating punctual nonlinearities of the relationship between age and the variables and dynamic prediction intervals incorporated the normal increase in interindividual growth variability (heteroscedasticity of variance) with age for more biologically accurate predictions. © 2018 American Academy of Forensic Sciences.

  20. Modelling daily dissolved oxygen concentration using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    NASA Astrophysics Data System (ADS)

    Heddam, Salim; Kisi, Ozgur

    2018-04-01

    In the present study, three artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T), are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The selected water quality data consisted of daily measurements of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm) and discharge (DI, cfs), which were used as inputs to the LSSVM, MARS and M5T models. The three models were applied for each station separately and compared to each other. According to the results obtained, it was found that: (i) the DO concentration could be successfully estimated using the three models and (ii) the best model differs from one station to another.

  1. Analyzing degradation data with a random effects spline regression model

    DOE PAGES

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    2017-03-17

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time, and estimation of reliability is easy to perform.

  2. Analyzing degradation data with a random effects spline regression model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time, and estimation of reliability is easy to perform.

  3. Prediction of energy expenditure from heart rate and accelerometry in children and adolescents using multivariate adaptive regression splines modeling

    USDA-ARS's Scientific Manuscript database

    Free-living measurements of 24-h total energy expenditure (TEE) and activity energy expenditure (AEE) are required to better understand the metabolic, physiological, behavioral, and environmental factors affecting energy balance and contributing to the global epidemic of childhood obesity. The spec...

  4. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
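The P-spline estimator described above can be sketched compactly: a rich cubic B-spline basis plus a ridge-type penalty on second-order differences of the coefficients, solved as one linear system. A sketch under assumed defaults (20 knots, λ = 0.1), not the paper's implementation:

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, lam=0.1):
    """Penalized-spline (P-spline) fit in the style of Eilers & Marx:
    minimize ||y - B beta||^2 + lam * ||D2 beta||^2, where B is a
    B-spline design matrix and D2 takes second differences of beta.
    """
    inner = np.linspace(x.min(), x.max(), n_knots)
    t = np.r_[[x.min()] * degree, inner, [x.max()] * degree]  # clamped knot vector
    n_basis = len(t) - degree - 1
    # Evaluate each basis function by handing BSpline a unit coefficient vector.
    B = np.column_stack(
        [BSpline(t, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)]
    )
    D2 = np.diff(np.eye(n_basis), n=2, axis=0)  # second-difference operator
    beta = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
    return B @ beta
```

Because the second-difference penalty leaves linear trends essentially unpenalized, the smoothing parameter λ shrinks wiggles without flattening a straight-line signal, which is the "ridge regression shrinkage type" behavior the abstract describes.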

  5. Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines

    NASA Astrophysics Data System (ADS)

    Kuter, S.; Akyürek, Z.; Weber, G.-W.

    2016-10-01

    Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely-sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between temporal and spatial resolution of satellite imagery. Soft or subpixel classification of low or moderate resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied on Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on the state-of-the-art nonparametric spline regression method, namely, Multivariate Adaptive Regression Splines (MARS). MARS models were trained by using MODIS top-of-atmosphere reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than th...

  6. Adaptive radiotherapy for NSCLC patients: utilizing the principle of energy conservation to evaluate dose mapping operations

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Chetty, Indrin J.

    2017-06-01

    Tumor regression during the course of fractionated radiotherapy confounds the ability to accurately estimate the total dose delivered to tumor targets. Here we present a new criterion to improve the accuracy of image intensity-based dose mapping operations for adaptive radiotherapy for patients with non-small cell lung cancer (NSCLC). Six NSCLC patients were retrospectively investigated in this study. An image intensity-based B-spline registration algorithm was used for deformable image registration (DIR) of weekly CBCT images to a reference image. The resultant displacement vector fields were employed to map the doses calculated on weekly images to the reference image. The concept of energy conservation was introduced as a criterion to evaluate the accuracy of the dose mapping operations. A finite element method (FEM)-based mechanical model was implemented to improve the performance of the B-spline-based registration algorithm in regions involving tumor regression. For the six patients, deformed tumor volumes changed by 21.2 ± 15.0% and 4.1 ± 3.7% on average for the B-spline and the FEM-based registrations performed from fraction 1 to fraction 21, respectively. The energy deposited in the gross tumor volume (GTV) was 0.66 joules (J) per fraction on average. The energy derived from the fractional dose reconstructed by the B-spline and FEM-based DIR algorithms in the deformed GTVs was 0.51 J and 0.64 J, respectively. Based on landmark comparisons for the 6 patients, the mean error for the FEM-based DIR algorithm was 2.5 ± 1.9 mm. The cross-correlation coefficient between the landmark-measured displacement error and the loss of radiation energy was -0.16 for the FEM-based algorithm. To avoid uncertainties in measuring distorted landmarks, the B-spline-based registrations were compared to the FEM registrations, and their displacement differences equaled 4.2 ± 4.7 mm on average.
The displacement differences were correlated to their relative loss of radiation energy with a cross-correlation coefficient equal to 0.68. Based on the principle of energy conservation, the FEM-based mechanical model has a better performance than the B-Spline-based DIR algorithm. It is recommended that the principle of energy conservation be incorporated into a comprehensive QA protocol for adaptive radiotherapy.

  7. TU-AB-202-07: A Novel Method for Registration of Mid-Treatment PET/CT Images Under Conditions of Tumor Regression for Patients with Locally Advanced Lung Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharifi, Hoda; Department of Physics, Oakland University, Rochester, MI; Zhang, Hong

Purpose: In PET-guided adaptive radiotherapy (RT), changes in the metabolic activity at individual voxels cannot be derived until the during-treatment CT images are appropriately registered to pre-treatment CT images. However, deformable image registration (DIR) usually does not preserve tumor volume, which may induce errors when comparing metabolic activity within the target. The aim of this study was to develop a DIR-integrated mechanical modeling technique to track radiation-induced metabolic changes on PET images. Methods: Three patients with non-small cell lung cancer (NSCLC) were treated with adaptive radiotherapy under RTOG 1106. Two PET/CT image sets were acquired 2 weeks before RT and 18 fractions after the start of treatment. DIR was performed to register the during-RT CT to the pre-RT CT using a B-spline algorithm, and the resultant displacements in the region of the tumor were remodeled using a hybrid finite element method (FEM). Gross tumor volume (GTV) was delineated on the during-RT PET/CT image sets and deformed using the 3D deformation vector fields generated by the CT-based registrations. Metabolic tumor volume (MTV) was calculated using the pre- and during-RT image sets. The quality of the PET mapping was evaluated based on the constancy of the mapped MTV and landmark comparison. Results: The B-spline-based registrations changed MTVs by 7.3%, 4.6% and −5.9% for the 3 patients, and the corresponding changes for the hybrid FEM method were −2.9%, 1% and 6.3%, respectively. Landmark comparisons were used to evaluate the rigid, B-spline, and hybrid FEM registrations, with mean errors of 10.1 ± 1.6 mm, 4.4 ± 0.4 mm, and 3.6 ± 0.4 mm for the three patients. The hybrid FEM method outperforms the B-spline-only registration for patients with tumor regression. Conclusion: The hybrid FEM modeling technique improves the B-spline registrations in tumor regions. This technique may help compare metabolic activities between two PET/CT images with regressing tumors.
The author gratefully acknowledges the financial support from the National Institutes of Health Grant.

  8. Cross-sectional time series and multivariate adaptive regression splines models using accelerometry and heart rate predict energy expenditure of preschoolers

    USDA-ARS?s Scientific Manuscript database

    Prediction equations of energy expenditure (EE) using accelerometers and miniaturized heart rate (HR) monitors have been developed in older children and adults but not in preschool-aged children. Because the relationships between accelerometer counts (ACs), HR, and EE are confounded by growth and ma...

  9. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

    This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin on the Yamuna River in Delhi, India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy, and both performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the LSSVM and M5Tree models was decreased by 1.47% and 19.1%, respectively, using the MARS model. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels by using the AMM, TKN and WT parameters as inputs.
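As context for how MARS represents such input-output relations: it builds models from pairs of hinge functions max(0, x − t) and max(0, t − x). The sketch below is illustrative only, with fixed, assumed knots on synthetic data; real MARS selects knots and terms adaptively via a forward/backward search.

```python
import numpy as np

# Illustrative MARS-style basis: intercept plus a hinge pair per knot.
# Knots are fixed, assumed values here; MARS would select them adaptively.
def hinge_basis(x, knots):
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.sin(x) + 0.1 * rng.standard_normal(200)

X = hinge_basis(x, knots=[2.5, 5.0, 7.5])          # 1 intercept + 3 hinge pairs
coef, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares fit of the basis
rmse = np.sqrt(np.mean((y - X @ coef) ** 2))
```

The hinge pairs make the fit piecewise linear with breakpoints at the knots, which is why a handful of well-placed knots can track a nonlinear signal.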

  10. Study of cyanotoxins presence from experimental cyanobacteria concentrations using a new data mining methodology based on multivariate adaptive regression splines in Trasona reservoir (Northern Spain).

    PubMed

    Garcia Nieto, P J; Sánchez Lasheras, F; de Cos Juez, F J; Alonso Fernández, J R

    2011-11-15

    There is an increasing need to describe cyanobacteria blooms, since some cyanobacteria produce toxins, termed cyanotoxins. These can be toxic and dangerous to humans as well as to other animals and life in general. It must be remarked that cyanobacteria reproduce explosively under certain conditions. This results in algae blooms, which can become harmful to other species if the cyanobacteria involved produce cyanotoxins. In this research work, the evolution of cyanotoxins in the Trasona reservoir (Principality of Asturias, Northern Spain) was successfully studied using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique. The results of the present study are two-fold. On the one hand, the importance of the different kinds of cyanobacteria for the presence of cyanotoxins in the reservoir is presented through the MARS model; on the other hand, a predictive model able to forecast the possible presence of cyanotoxins in the short term was obtained. The agreement of the MARS model with experimental data confirmed its good performance. Finally, conclusions of this innovative research are presented. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    NASA Astrophysics Data System (ADS)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    In longitudinal data with two responses, correlation occurs both between measurements on the same subject and between the responses. This causes autocorrelation of errors, which can be handled by using a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. Penalized splines involve knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without a covariance matrix.
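For readers unfamiliar with the penalized spline ingredient, here is a minimal univariate P-spline sketch (B-spline basis plus a second-order difference penalty on the coefficients, with the smoothing parameter `lam` controlling curve smoothness). It is a generic illustration on synthetic data, not the paper's bi-response covariance estimator.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    """P-spline: penalized least squares on a clamped B-spline basis."""
    knots = np.linspace(x.min(), x.max(), n_knots)
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]  # clamped knot vector
    n_basis = len(t) - degree - 1
    B = BSpline(t, np.eye(n_basis), degree)(x)       # design matrix, one column per basis
    D = np.diff(np.eye(n_basis), n=2, axis=0)        # second-difference penalty operator
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 300))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(300)
fit = pspline_fit(x, y)
```

Larger `lam` pulls the coefficients toward a straight line; smaller `lam` lets the curve follow the data more closely.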

  12. Estimating suspended sediment load with multivariate adaptive regression spline, teaching-learning based optimization, and artificial bee colony models.

    PubMed

    Yilmaz, Banu; Aras, Egemen; Nacar, Sinan; Kankal, Murat

    2018-05-23

    The functional life of a dam is often determined by the rate of sediment delivery to its reservoir. Therefore, an accurate estimate of the sediment load in rivers with dams is essential for designing and predicting a dam's useful lifespan. The most credible method is direct measurements of sediment input, but this can be very costly and it cannot always be implemented at all gauging stations. In this study, we tested various regression models to estimate suspended sediment load (SSL) at two gauging stations on the Çoruh River in Turkey, including artificial bee colony (ABC), teaching-learning-based optimization algorithm (TLBO), and multivariate adaptive regression splines (MARS). These models were also compared with one another and with classical regression analyses (CRA). Streamflow values and previously collected data of SSL were used as model inputs with predicted SSL data as output. Two different training and testing dataset configurations were used to reinforce the model accuracy. For the MARS method, the root mean square error value was found to range between 35% and 39% for the test two gauging stations, which was lower than errors for other models. Error values were even lower (7% to 15%) using another dataset. Our results indicate that simultaneous measurements of streamflow with SSL provide the most effective parameter for obtaining accurate predictive models and that MARS is the most accurate model for predicting SSL. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. High-Fidelity Geometric Modeling and Mesh Generation for Mechanics Characterization of Polycrystalline Materials

    DTIC Science & Technology

    2014-10-26

    From the parameterization results, we extract adaptive and anisotropic T-meshes for the further T-spline surface construction. Finally, a gradient flow field-based method [7, 12] is used to generate adaptive and anisotropic quadrilateral meshes, which can be used as the control mesh for high-order T-spline construction.

  14. Random regression analyses using B-splines functions to model growth from birth to adult age in Canchim cattle.

    PubMed

    Baldi, F; Alencar, M M; Albuquerque, L G

    2010-12-01

    The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable, and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects, were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and with a multitrait model. Results from the different models of analysis were compared using the REML form of the Akaike Information Criterion and Schwarz's Bayesian Information Criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.
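The covariable matrix behind such a random regression can be sketched directly: a quadratic B-spline basis over animal age with equally spaced knots. The ages and knot positions below are assumed values for illustration; in a real analysis this matrix would sit inside the mixed model for each random effect.

```python
import numpy as np
from scipy.interpolate import BSpline

# Quadratic B-spline covariable matrix over age (days). Four knots give three
# segments, as in the quadratic/four-knot model discussed above; ages assumed.
degree = 2
knots = np.linspace(0.0, 1800.0, 4)
t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]  # clamped knot vector
n_basis = len(t) - degree - 1                                # 5 basis functions
age = np.linspace(0.0, 1800.0, 10)
Z = BSpline(t, np.eye(n_basis), degree)(age)                 # one row per record
```

Each row of `Z` is nonnegative and sums to one (B-splines form a partition of unity), which is part of why they behave better than high-order polynomials at the ends of the age range.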

  15. Application of Semiparametric Spline Regression Model in Analyzing Factors that Influence Population Density in Central Java

    NASA Astrophysics Data System (ADS)

    Sumantari, Y. D.; Slamet, I.; Sugiyanto

    2017-06-01

    Semiparametric regression is a statistical analysis method that combines parametric and nonparametric regression. There are various approach techniques in nonparametric regression, one of which is the spline. Central Java is one of the most densely populated provinces in Indonesia. Population density in this province can be modeled by semiparametric regression because it consists of parametric and nonparametric components. Therefore, the purpose of this paper is to determine the factors that influence population density in Central Java using the semiparametric spline regression model. The result shows that the factors which influence population density in Central Java are the number of active Family Planning (FP) participants and the district minimum wage.

  16. An Investigation of Multivariate Adaptive Regression Splines for Modeling and Analysis of Univariate and Semi-Multivariate Time Series Systems

    DTIC Science & Technology

    1991-09-01

    However, there is no guarantee that this would work; for instance, if the data were generated by an ARCH model (Tong, 1990, pp. 116-117) then a simple... Hill, R., Griffiths, W., Lutkepohl, H., and Lee, T., Introduction to the Theory and Practice of Econometrics, 2nd ed., Wiley, 1985. Kendall, M., Stuart

  17. Vulnerability of carbon storage in North American boreal forests to wildfires during the 21st century

    Treesearch

    M.S. Balshi; A.D. McGuire; P. Duffy; M. Flannigan; D.W. Kicklighter; J. Melillo

    2009-01-01

    We use a gridded data set developed with a multivariate adaptive regression spline approach to determine how area burned varies each year with changing climatic and fuel moisture conditions. We apply the process-based Terrestrial Ecosystem Model to evaluate the role of future fire on the carbon dynamics of boreal North America in the context of changing atmospheric...

  18. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
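The fixed-knot building block the paper generalizes can be shown concretely: a truncated power basis of a single degree, where each term (x − t)₊^d joins polynomial segments with d − 1 continuous derivatives at the knots. This is a simplified sketch (one degree, no mixed-model effects, assumed knots), not the paper's varying-order construction.

```python
import numpy as np

# Truncated power basis: global polynomial terms plus one truncated term per knot.
# With degree d, the segments join with d-1 continuous derivatives at each knot.
def truncated_power_basis(x, knots, degree):
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.where(x > t, (x - t) ** degree, 0.0) for t in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 10.0, 500)
X = truncated_power_basis(x, knots=[3.0, 7.0], degree=2)

# A target that lies exactly in the span is recovered up to rounding error.
y = 1.0 + 2.0 * x + np.maximum(0.0, x - 3.0) ** 2
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
max_abs_err = np.max(np.abs(X @ coef - y))
```

The reparameterization the authors describe exists partly because this basis, while easy to write down, can be ill-conditioned; the constrained-to-unconstrained transformation makes the same spline easier to fit in standard mixed-model software.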

  19. Multivariate Adaptive Regression Splines (Preprint)

    DTIC Science & Technology

    1990-08-01

    fold cross-validation would take about ten times as long, and MARS is not all that fast to begin with. Friedman has a number of examples showing... standardized mean squared error of prediction (MSEP), the generalized cross validation (GCV), and the number of selected terms (TERMS). In accordance with... and mi = 10 case were almost exclusively spurious cross-product terms and terms involving the nuisance variables x6 through x10. This large number of

  20. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Sales competition between companies in Indonesia is growing, so every company should have proper planning in order to win the competition with other companies. One of the things that can be done to design such a plan is to make a car sales forecast for the next few periods, so that the inventory of cars to be sold is proportional to the number of cars needed. To obtain correct forecasts, one of the methods that can be used is Adaptive Spline Threshold Autoregression (ASTAR). Therefore, this discussion focuses on the use of the ASTAR method in forecasting the volume of car sales at PT Srikandi Diamond Motors using time series data. In this research, forecasting with the ASTAR method produced approximately correct values.

  1. Random regression analyses using B-splines to model growth of Australian Angus cattle

    PubMed Central

    Meyer, Karin

    2005-01-01

    Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011

  2. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    PubMed

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. 
The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather than the coefficients. Moreover, use of cubic regression splines provides biologically meaningful growth velocity and acceleration curves despite increased complexity in coefficient interpretation. Through this stepwise approach, we provide a set of tools to model longitudinal childhood data for non-statisticians using linear mixed-effect models.
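The linear-vs-cubic comparison can be sketched on simulated data. The growth curve, sample size and knot layout below are assumed stand-ins (not the Peruvian cohort data), and only the fixed-effects spline fit is shown, without the random intercepts, slopes, or autoregressive errors of the full mixed model.

```python
import numpy as np
from scipy.interpolate import BSpline

# Simulated growth-like data: compare linear and cubic B-spline bases that share
# the same knots, via residual sum of squares of ordinary least-squares fits.
rng = np.random.default_rng(2)
age = np.sort(rng.uniform(0.0, 4.0, 215))                 # years (assumed)
height = 50.0 + 30.0 * np.log1p(age) + rng.standard_normal(215)

def bspline_design(x, n_knots, degree):
    knots = np.linspace(x.min(), x.max(), n_knots)
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
    return BSpline(t, np.eye(len(t) - degree - 1), degree)(x)

rss = {}
for degree in (1, 3):                                     # linear vs cubic spline
    X = bspline_design(age, n_knots=5, degree=degree)
    coef, *_ = np.linalg.lstsq(X, height, rcond=None)
    rss[degree] = float(np.sum((height - X @ coef) ** 2))
```

With the same knots, the cubic basis has more columns and captures within-segment curvature, so its residual sum of squares is lower; model-selection criteria such as AIC then ask whether that gain justifies the extra parameters.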

  3. Estimation of soil cation exchange capacity using Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS)

    NASA Astrophysics Data System (ADS)

    Emamgolizadeh, S.; Bateni, S. M.; Shahsavani, D.; Ashrafi, T.; Ghorbani, H.

    2015-10-01

    The soil cation exchange capacity (CEC) is one of the main soil chemical properties, which is required in various fields such as environmental and agricultural engineering as well as soil science. In situ measurement of CEC is time consuming and costly. Hence, numerous studies have used traditional regression-based techniques to estimate CEC from more easily measurable soil parameters (e.g., soil texture, organic matter (OM), and pH). However, these models may not be able to adequately capture the complex and highly nonlinear relationship between CEC and its influential soil variables. In this study, Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS) were employed to estimate CEC from more readily measurable soil physical and chemical variables (e.g., OM, clay, and pH) by developing functional relations. The GEP- and MARS-based functional relations were tested at two field sites in Iran. Results showed that GEP and MARS can provide reliable estimates of CEC. Also, it was found that the MARS model (with root-mean-square-error (RMSE) of 0.318 Cmol+ kg-1 and correlation coefficient (R2) of 0.864) generated slightly better results than the GEP model (with RMSE of 0.270 Cmol+ kg-1 and R2 of 0.807). The performance of the GEP and MARS models was compared with two existing approaches, namely artificial neural network (ANN) and multiple linear regression (MLR). The comparison indicated that MARS and GEP outperformed the MLR model, but did not perform as well as the ANN model. Finally, a sensitivity analysis was conducted to determine the most and the least influential variables affecting CEC. It was found that OM and pH have the most and least significant effect on CEC, respectively.

  4. Split spline screw

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A split spline screw type payload fastener assembly, including three identical male and female type split spline sections, is discussed. The male spline sections are formed on the head of a male type spline driver. Each of the split male type spline sections has an outwardly projecting load bearing segment including a convex upper surface which is adapted to engage a complementary concave surface of a female spline receptor in the form of a hollow bolt head. Additionally, the male spline section also includes a horizontal spline releasing segment and a spline tightening segment below each load bearing segment. The spline tightening segment consists of a vertical web of constant thickness. The web has at least one flat vertical wall surface which is designed to contact a generally flat vertically extending wall surface tab of the bolt head. Mutual interlocking and unlocking of the male and female splines results upon clockwise and counterclockwise turning of the driver element.

  5. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper, the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  6. More insights into early brain development through statistical analyses of eigen-structural elements of diffusion tensor imaging using multivariate adaptive regression splines

    PubMed Central

    Chen, Yasheng; Zhu, Hongtu; An, Hongyu; Armao, Diane; Shen, Dinggang; Gilmore, John H.; Lin, Weili

    2013-01-01

    The aim of this study was to characterize the maturational changes of the three eigenvalues (λ1 ≥ λ2 ≥ λ3) of diffusion tensor imaging (DTI) during early postnatal life for more insights into early brain development. In order to overcome the limitations of using presumed growth trajectories for regression analysis, we employed Multivariate Adaptive Regression Splines (MARS) to derive data-driven growth trajectories for the three eigenvalues. We further employed Generalized Estimating Equations (GEE) to carry out statistical inferences on the growth trajectories obtained with MARS. With a total of 71 longitudinal datasets acquired from 29 healthy, full-term pediatric subjects, we found that the growth velocities of the three eigenvalues were highly correlated, but significantly different from each other. This paradox suggested the existence of mechanisms coordinating the maturations of the three eigenvalues even though different physiological origins may be responsible for their temporal evolutions. Furthermore, our results revealed the limitations of using the average of λ2 and λ3 as the radial diffusivity in interpreting DTI findings during early brain development because these two eigenvalues had significantly different growth velocities even in central white matter. In addition, based upon the three eigenvalues, we have documented the growth trajectory differences between central and peripheral white matter, between anterior and posterior limbs of internal capsule, and between inferior and superior longitudinal fasciculus. Taken together, we have demonstrated that more insights into early brain maturation can be gained through analyzing eigen-structural elements of DTI. PMID:23455648

  7. Comparison of random regression models with Legendre polynomials and linear splines for production traits and somatic cell score of Canadian Holstein cows.

    PubMed

    Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G

    2008-09-01

    A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common to all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate the performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at the extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit, measured by percentages of squared bias, correlations between predicted and observed records, and residual variances, were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
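The two covariable sets being compared can be written down directly: a Legendre polynomial design of order 4 on standardized DIM, and a linear-spline (hat-function) design with a handful of knots. The standardization of DIM to [−1, 1] is the usual convention and assumed here; the knot positions are likewise illustrative rather than taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre covariable matrix of order 4 on standardized days in milk (DIM).
dim = np.linspace(5.0, 365.0, 50)
s = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0   # map DIM to [-1, 1]
Phi = legendre.legvander(s, 4)                                # columns P0(s) .. P4(s)

# Linear-spline alternative: hat-function basis with 6 knots across DIM.
knots = np.linspace(5.0, 365.0, 6)
H = np.column_stack([np.interp(dim, knots, np.eye(6)[j]) for j in range(6)])
```

The contrast in the abstract follows from the bases' support: each hat function is nonzero only between neighboring knots, so spline variance estimates at the lactation extremes are driven by local data, while every Legendre column spans the whole trajectory.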

  8. A Spline Regression Model for Latent Variables

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.

    2014-01-01

    Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…

  9. Algebraic grid adaptation method using non-uniform rational B-spline surface modeling

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, B. K.

    1992-01-01

    An algebraic adaptive grid system based on the equidistribution law, with redistribution performed on a Non-Uniform Rational B-Spline (NURBS) surface, is presented. A weight function, utilizing a properly weighted Boolean sum of various flow field characteristics, is developed. Computational examples are presented to demonstrate the success of this technique.
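The equidistribution law underlying such schemes is easy to show in one dimension: redistribute grid points so that every cell carries the same integral of the weight function. The Gaussian-bump weight below is an assumed stand-in for the flow-field-based weights the abstract describes.

```python
import numpy as np

# 1D equidistribution: invert the cumulative weight so each cell holds equal weight.
def equidistribute(x, w, n):
    # trapezoidal cumulative integral of w over the reference grid x
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, W[-1], n)      # equal weight per cell
    return np.interp(targets, W, x)           # map back to physical coordinates

x = np.linspace(0.0, 1.0, 401)
w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # assumed weight: feature at x = 0.5
xi = equidistribute(x, w, 41)                      # points cluster where w is large
```

Where the weight is large the redistributed spacing shrinks, which is exactly the clustering behavior an adaptive grid seeks near flow features.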

  10. Transport modeling and multivariate adaptive regression splines for evaluating performance of ASR systems in freshwater aquifers

    NASA Astrophysics Data System (ADS)

    Forghani, Ali; Peralta, Richard C.

    2017-10-01

    The study presents a procedure using solute transport and statistical models to evaluate the performance of aquifer storage and recovery (ASR) systems designed to earn additional water rights in freshwater aquifers. The recovery effectiveness (REN) index quantifies the performance of these ASR systems. REN is the proportion of the injected water that the same ASR well can recapture during subsequent extraction periods. To estimate REN for individual ASR wells, the presented procedure uses finely discretized groundwater flow and contaminant transport modeling. Then, the procedure uses multivariate adaptive regression splines (MARS) analysis to identify the significant variables affecting REN, and to identify the most recovery-effective wells. The operator of the studied 14-well ASR system seeks REN values close to 100%. This recovery is feasible for most of the ASR wells by extracting three times the injectate volume during the same year as injection. Most of the wells would achieve RENs below 75% if extracting merely the same volume as they injected. In other words, recovering almost all the same water molecules that are injected requires having a pre-existing water right to extract groundwater annually. MARS shows that REN correlates most significantly with groundwater flow velocity, or hydraulic conductivity and hydraulic gradient. MARS results also demonstrate that maximizing REN requires utilizing the wells located in areas with background Darcian groundwater velocities less than 0.03 m/d. The study also highlights the superiority of MARS over ordinary multiple linear regression in identifying the wells that can provide the maximum REN. This is the first reported application of MARS for evaluating the performance of an ASR system in freshwater aquifers.

  11. Practical aspects of estimating energy components in rodents

    PubMed Central

    van Klinken, Jan B.; van den Berg, Sjoerd A. A.; van Dijk, Ko Willems

    2013-01-01

    Recently, there has been increasing interest in exploiting computational and statistical techniques for the component analysis of indirect calorimetry data. Using these methods, it becomes possible to dissect daily energy expenditure into its components and to assess the dynamic response of the resting metabolic rate (RMR) to nutritional and pharmacological manipulations. Performing robust component analysis, however, is not straightforward and typically requires the tuning of parameters and the preprocessing of data. Moreover, the degree of accuracy that these methods can attain depends on the configuration of the system, which must be properly taken into account when setting up experimental studies. Here, we review the methods of Kalman filtering, linear and penalized spline regression, and minimal energy expenditure estimation in the context of component analysis and discuss their results on high-resolution datasets from mice and rats. In addition, we investigate the effect of the sample time, the accuracy of the activity sensor, and the washout time of the chamber on the estimation accuracy. We found that on the high-resolution data there was a strong correlation between the results of Kalman filtering and penalized spline (P-spline) regression, except for the activity respiratory quotient (RQ). For low-resolution data, the basal metabolic rate (BMR) and resting RQ could still be estimated accurately with P-spline regression, showing a strong correlation with the high-resolution estimate (R2 > 0.997; sample time of 9 min). In contrast, the thermic effect of food (TEF) and activity-related energy expenditure (AEE) were more sensitive to a reduction in the sample rate (R2 > 0.97). In conclusion, for component analysis of data generated by single-channel systems with continuous data acquisition, both Kalman filtering and P-spline regression can be used, while for low-resolution data from multichannel systems P-spline regression gives more robust results. PMID:23641217

  12. Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA

    PubMed Central

    Lin, Chen-Yen; Bondell, Howard; Zhang, Hao Helen; Zou, Hui

    2014-01-01

    Quantile regression provides a more thorough view of the effect of covariates on a response. Nonparametric quantile regression has become a viable alternative that avoids restrictive parametric assumptions. The problem of variable selection for quantile regression is challenging, since important variables can influence various quantiles in different ways. We tackle the problem via regularization in the context of smoothing spline ANOVA models. The proposed sparse nonparametric quantile regression (SNQR) can identify important variables and provide flexible estimates for quantiles. Our numerical study suggests the promising performance of the new procedure in variable selection and function estimation. Supplementary materials for this article are available online. PMID:24554792

  13. Examination of influential observations in penalized spline regression

    NASA Astrophysics Data System (ADS)

    Türkan, Semra

    2013-10-01

    In parametric or nonparametric regression models, the results of regression analysis are affected by anomalous observations in the data set. Thus, detecting these observations is one of the major steps in regression analysis. Such observations can be detected with well-known influence measures, one of which is Pena's statistic. In this study, Pena's approach is formulated for penalized spline regression in terms of ordinary residuals and leverages. Real and artificial data are used to illustrate the effectiveness of Pena's statistic relative to Cook's distance in detecting influential observations. The results of the study clearly show that the proposed measure is superior to Cook's distance for detecting these observations in large data sets.
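
    Cook's distance, the benchmark influence measure in this study, combines each observation's ordinary residual and leverage. A minimal numpy sketch for ordinary least squares with one planted influential point (synthetic data; Pena's statistic itself is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.normal(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)
x[0], y[0] = 4.0, -10.0                      # plant one influential point

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix
h = np.diag(H)                               # leverages
resid = y - H @ y                            # ordinary residuals
p = X.shape[1]
s2 = resid @ resid / (n - p)                 # residual variance estimate
cook = resid**2 / (p * s2) * h / (1 - h)**2  # Cook's distance per observation
```

    The planted point has both a large residual and high leverage, so its Cook's distance dwarfs the others; a common rule of thumb flags observations with distance above 4/n or 1.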

  14. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth differ in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure; the structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data, and parametric models have a fixed structure before the modelling process and are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression splines (MARS) and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to the central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
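
    The area under the ROC curve used to compare the two models equals the probability that a randomly chosen positive (urbanized) cell receives a higher score than a randomly chosen negative cell, with ties counting half (the Mann-Whitney U formulation). A minimal sketch with a hypothetical helper, not the study's code:

```python
import numpy as np

def auc_roc(scores, labels):
    """AUC as P(random positive outscores random negative); ties count half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)
```

    The O(n²) pairwise form is fine for illustration; rank-based formulas scale better to full raster datasets.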

  15. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation

    PubMed Central

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K.

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating “unusual” populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php. PMID:28275348

  16. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.

    PubMed

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.

  17. An Adaptive MR-CT Registration Method for MRI-guided Prostate Cancer Radiotherapy

    PubMed Central

    Zhong, Hualiang; Wen, Ning; Gordon, James; Elshaikh, Mohamed A; Movsas, Benjamin; Chetty, Indrin J.

    2015-01-01

    Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinical setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ/cm3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Unlike the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. 
In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for development of high-quality MRI-guided radiation therapy. PMID:25775937

  18. An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.

    2015-04-01

    Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinical setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ/cm3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Unlike the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. 
In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for the development of high-quality MRI-guided radiation therapy.

  19. Examination of wrist and hip actigraphy using a novel sleep estimation procedure

    PubMed Central

    Ray, Meredith A.; Youngstedt, Shawn D.; Zhang, Hongmei; Robb, Sara Wagner; Harmon, Brook E.; Jean-Louis, Girardin; Cai, Bo; Hurley, Thomas G.; Hébert, James R.; Bogan, Richard K.; Burch, James B.

    2014-01-01

    Objective Improving and validating sleep scoring algorithms for actigraphs enhances their usefulness in clinical and research applications. The MTI® device (ActiGraph, Pensacola, FL) had not been previously validated for sleep. The aims were to (1) compare the accuracy of sleep metrics obtained via wrist- and hip-mounted MTI® actigraphs with polysomnographic (PSG) recordings in a sample that included both normal sleepers and individuals with presumed sleep disorders; and (2) develop a novel sleep scoring algorithm using spline regression to improve the correspondence between the actigraphs and PSG. Methods Original actigraphy data were amplified and their pattern was estimated using a penalized spline. The magnitude of amplification and the spline were estimated by minimizing the difference in sleep efficiency between wrist- (hip-) actigraphs and PSG recordings. Sleep measures using both the original and spline-modified actigraphy data were compared to PSG using the following: mean sleep summary measures; Spearman rank-order correlations of summary measures; percent of minute-by-minute agreement; sensitivity and specificity; and Bland–Altman plots. Results The original wrist actigraphy data showed modest correspondence with PSG, and much less correspondence was found between hip actigraphy and PSG. The spline-modified wrist actigraphy produced better approximations of intraclass correlations, sensitivity, and mean sleep summary measures relative to PSG than the original wrist actigraphy data. The spline-modified hip actigraphy provided improved correspondence, but sleep measures were still not representative of PSG. Discussion The results indicate that with some refinement, the spline regression method has the potential to improve sleep estimates obtained using wrist actigraphy. PMID:25580202

  20. Modeling of time trends and interactions in vital rates using restricted regression splines.

    PubMed

    Heuer, C

    1997-03-01

    For the analysis of time trends in incidence and mortality rates, the age-period-cohort (apc) model has become a widely accepted method. The considered data are arranged in a two-way table by age group and calendar period, which are mostly subdivided into 5- or 10-year intervals. The disadvantage of this approach is the loss of information by data aggregation and the problems of estimating interactions in the two-way layout without replications. In this article, we show how splines can be useful when yearly data, i.e., 1-year age groups and 1-year periods, are given. The estimated spline curves are still smooth and represent yearly changes in the time trends. Further, it is straightforward to include interaction terms by the tensor product of the spline functions. If the data are given in a nonrectangular table, e.g., 5-year age groups and 1-year periods, the period and cohort variables can be parameterized by splines, while the age variable is parameterized as fixed effect levels, which leads to a semiparametric apc model. An important methodological issue in developing the nonparametric and semiparametric models is stability of the estimated spline curve at the boundaries. Here cubic regression splines will be used, which are constrained to be linear in the tails. Another point of importance is the nonidentifiability problem due to the linear dependency of the three time variables. This will be handled by decomposing the basis of each spline by orthogonal projection into constant, linear, and nonlinear terms, as suggested by Holford (1983, Biometrics 39, 311-324) for the traditional apc model. The advantage of using splines for yearly data compared to the traditional approach for aggregated data is the more accurate curve estimation for the nonlinear trend changes and the simple way of modeling interactions between the time variables. The method will be demonstrated with hypothetical data as well as with cancer mortality data.
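
    The restricted cubic regression splines described here are cubic between interior knots but constrained to be linear beyond the boundary knots. A sketch of the standard restricted (natural) cubic spline basis, with hypothetical knot positions; the linearity in the tails can be verified numerically:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: linear beyond the boundary knots."""
    x = np.asarray(x, float)
    k = np.asarray(knots, float)
    def tp(u):                      # truncated cubic (u)_+^3
        return np.maximum(0.0, u) ** 3
    cols = [x]
    denom = k[-1] - k[-2]
    for t in k[:-2]:
        # cubic-tail terms cancel, leaving a linear function past the last knot
        cols.append(tp(x - t)
                    - tp(x - k[-2]) * (k[-1] - t) / denom
                    + tp(x - k[-1]) * (k[-2] - t) / denom)
    return np.column_stack(cols)

x = np.linspace(-2.0, 12.0, 500)
B = rcs_basis(x, knots=[0.0, 2.0, 5.0, 8.0, 10.0])
# Beyond the last knot every column is linear: second differences vanish
tail = B[x > 10.5]
d2 = np.diff(tail, 2, axis=0)
```

    With k knots the basis has k − 1 columns, so the constrained fit spends far fewer degrees of freedom than an unrestricted cubic spline while remaining stable at the boundaries.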

  1. Isogeometric Collocation: Cost Comparison with Galerkin Methods and Extension to Adaptive Hierarchical NURBS Discretizations (Preprint)

    DTIC Science & Technology

    2013-02-06

    high order and smoothness. Consequently, the use of IGA for collocation suggests itself, since spline functions such as NURBS or T-splines can be...for the development of higher-order accurate time integration schemes due to the convergence of the high modes in the eigenspectrum [46] as well as...flows [19, 20, 49–52]. Due to their maximum smoothness, B-splines exhibit a high resolution power, which allows the representation of a broad range

  2. Utilization of a hybrid finite-element based registration method to quantify heterogeneous tumor response for adaptive treatment for lung cancer patients

    NASA Astrophysics Data System (ADS)

    Sharifi, Hoda; Zhang, Hong; Bagher-Ebadian, Hassan; Lu, Wei; Ajlouni, Munther I.; Jin, Jian-Yue; Kong, Feng-Ming (Spring); Chetty, Indrin J.; Zhong, Hualiang

    2018-03-01

    Tumor response to radiation treatment (RT) can be evaluated from changes in metabolic activity between two positron emission tomography (PET) images. Activity changes at individual voxels in pre-treatment PET images (PET1), however, cannot be derived until their associated PET-CT (CT1) images are appropriately registered to during-treatment PET-CT (CT2) images. This study aimed to investigate the feasibility of using deformable image registration (DIR) techniques to quantify radiation-induced metabolic changes on PET images. Five patients with non-small-cell lung cancer (NSCLC) treated with adaptive radiotherapy were considered. PET-CTs were acquired two weeks before RT and 18 fractions after the start of RT. DIR was performed from CT1 to CT2 using B-Spline and diffeomorphic Demons algorithms. The resultant displacements in the tumor region were then corrected using a hybrid finite element method (FEM). Bitmap masks generated from gross tumor volumes (GTVs) in PET1 were deformed using the four different displacement vector fields (DVFs). The conservation of total lesion glycolysis (TLG) in GTVs was used as a criterion to evaluate the quality of these registrations. The deformed masks were united to form a large mask which was then partitioned into multiple layers from center to border. The averages of SUV changes over all the layers were 1.0 ± 1.3, 1.0 ± 1.2, 0.8 ± 1.3, 1.1 ± 1.5 for the B-Spline, B-Spline + FEM, Demons and Demons + FEM algorithms, respectively. TLG changes before and after mapping using B-Spline, Demons, hybrid-B-Spline, and hybrid-Demons registrations were 20.2%, 28.3%, 8.7%, and 2.2% on average, respectively. Compared to image intensity-based DIR algorithms, the hybrid FEM modeling technique is better in preserving TLG and could be useful for evaluation of tumor response for patients with regressing tumors.

  3. Groundwater potential mapping using C5.0, random forest, and multivariate adaptive regression spline models in GIS.

    PubMed

    Golkarian, Ali; Naghibi, Seyed Amir; Kalantar, Bahareh; Pradhan, Biswajeet

    2018-02-17

    Ever-increasing demand for water resources for different purposes makes it essential to have a better understanding and knowledge of water resources. As is well known, groundwater is one of the main water resources, especially in countries with arid climatic conditions. Thus, this study seeks to provide groundwater potential maps (GPMs) employing new algorithms. Accordingly, this study aims to validate the performance of C5.0, random forest (RF), and multivariate adaptive regression splines (MARS) algorithms for generating GPMs in the eastern part of the Mashhad Plain, Iran. For this purpose, a dataset was produced consisting of spring locations as the indicator and groundwater-conditioning factors (GCFs) as input. In this research, 13 GCFs were selected, including altitude, slope aspect, slope angle, plan curvature, profile curvature, topographic wetness index (TWI), slope length, distance from rivers and faults, river and fault density, land use, and lithology. The dataset was divided into training and validation classes with 70 and 30% of the springs, respectively. Then, the C5.0, RF, and MARS algorithms were employed using the R statistical software, and the final values were transformed into GPMs. Finally, two evaluation criteria, Kappa and the area under the receiver operating characteristic curve (AUC-ROC), were calculated. According to the findings of this research, MARS had the best performance with an AUC-ROC of 84.2%, followed by the RF and C5.0 algorithms with AUC-ROC values of 79.7 and 77.3%, respectively. The results indicated that the AUC-ROC values for the employed models exceed 70%, which shows their acceptable performance. In conclusion, the proposed methodology could be used in other geographical areas. GPMs could be used by water resource managers and related organizations to accelerate and facilitate water resource exploitation.
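
    Cohen's Kappa, one of the two evaluation criteria used, measures agreement between predicted and observed classes beyond what chance alone would produce. A minimal sketch for binary labels (a hypothetical helper, not the study's R code):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) label vectors."""
    a, b = np.asarray(a, int), np.asarray(b, int)
    po = float(np.mean(a == b))                # observed agreement
    p1a, p1b = a.mean(), b.mean()
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)     # agreement expected by chance
    return float((po - pe) / (1 - pe))
```

    Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it complements a threshold-free measure such as AUC-ROC.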

  4. Comprehensive modeling of monthly mean soil temperature using multivariate adaptive regression splines and support vector machine

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2017-07-01

    Soil temperature (Ts) and its thermal regime are the most important factors in plant growth, biological activities, and water movement in soil. Due to the scarcity of Ts data, estimation of soil temperature is an important issue in different fields of science. The main objective of the present study is to investigate the accuracy of multivariate adaptive regression splines (MARS) and support vector machine (SVM) methods for estimating Ts. To this end, monthly mean Ts data (at depths of 5, 10, 50, and 100 cm) and meteorological parameters from 30 synoptic stations in Iran were utilized. To develop the MARS and SVM models, various combinations of minimum, maximum, and mean air temperatures (Tmin, Tmax, T); actual and maximum possible sunshine duration and sunshine duration ratio (n, N, n/N); actual, net, and extraterrestrial solar radiation (Rs, Rn, Ra); precipitation (P); relative humidity (RH); wind speed at 2 m height (u2); and water vapor pressure (Vp) were used as input variables. Three error statistics, root-mean-square error (RMSE), mean absolute error (MAE), and the coefficient of determination (R2), were used to check the performance of the MARS and SVM models. The results indicated that MARS was superior to SVM at different depths. In the test and validation phases, the most accurate estimations for MARS were obtained at the depth of 10 cm for the Tmax, Tmin, T inputs (RMSE = 0.71 °C, MAE = 0.54 °C, and R2 = 0.995) and for the RH, Vp, P, and u2 inputs (RMSE = 0.80 °C, MAE = 0.61 °C, and R2 = 0.996), respectively.
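
    The three error statistics used in the study are straightforward to compute; a minimal sketch (function names are illustrative, and R2 is written here as one minus the residual-to-total sum-of-squares ratio):

```python
import numpy as np

def rmse(obs, pred):
    """Root-mean-square error."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def mae(obs, pred):
    """Mean absolute error."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs(obs - pred)))

def r2(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

    RMSE penalizes large errors more heavily than MAE, which is why both are usually reported together.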

  5. Quality Quandaries: Predicting a Population of Curves

    DOE PAGES

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    2017-12-19

    We present a random effects spline regression model that provides an integrated approach for analyzing functional data, i.e., curves, when the shape of the curves is not parametrically specified. An analysis using this model is presented that makes inferences about a population of curves as well as features of the curves.

  6. Quality Quandaries: Predicting a Population of Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    We present a random effects spline regression model that provides an integrated approach for analyzing functional data, i.e., curves, when the shape of the curves is not parametrically specified. An analysis using this model is presented that makes inferences about a population of curves as well as features of the curves.

  7. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the “derivation cohort” to develop dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.

  8. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. 
    With random regression models, the highest gains in accuracy were obtained at ages with few weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
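
    Random regression on Legendre polynomials, as used here, evaluates orthogonal polynomials on ages rescaled to [−1, 1]. A sketch of building such covariates (the age range and polynomial order below are illustrative, not those of the Nelore analysis):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(age, age_min, age_max, order):
    """Legendre polynomial covariates for a random regression model.

    Ages are rescaled to [-1, 1], the natural domain of the polynomials;
    column j holds P_j evaluated at the rescaled ages, j = 0..order.
    """
    a = np.asarray(age, float)
    s = 2.0 * (a - age_min) / (age_max - age_min) - 1.0
    return np.column_stack([legendre.legval(s, np.eye(order + 1)[j])
                            for j in range(order + 1)])

# Covariates for records at birth, mid-trajectory, and the oldest age
Z = legendre_covariates([0.0, 4.0, 8.0], 0.0, 8.0, order=3)
```

    In a mixed-model analysis these columns multiply the random regression coefficients for each animal, so the polynomial order directly controls how flexibly each growth trajectory can deviate from the mean curve.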

  9. Multivariate Epi-splines and Evolving Function Identification Problems

    DTIC Science & Technology

    2015-04-15

    such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the...previous study [30] dealt with compact intervals of ℝ. Splines are intimately tied to optimization problems through their variational theory pioneered...approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction

  10. Applying Multivariate Adaptive Splines to Identify Genes With Expressions Varying After Diagnosis in Microarray Experiments.

    PubMed

    Duan, Fenghai; Xu, Ye

    2017-01-01

To analyze a microarray experiment to identify the genes with expressions varying after the diagnosis of breast cancer. A total of 44 928 probe sets in an Affymetrix microarray dataset publicly available on Gene Expression Omnibus from 249 patients with breast cancer were analyzed by nonparametric multivariate adaptive splines. The identified genes with turning points were then grouped by K-means clustering, and their network relationships were subsequently analyzed by Ingenuity Pathway Analysis. In total, 1640 probe sets (genes) were reliably identified to have turning points along with the age at diagnosis in their expression profiling, of which 927 were expressed at lower levels after their turning points and 713 at higher levels. K-means clustered them into 3 groups with turning points centered at 54, 62.5, and 72, respectively. The pathway analysis showed that the identified genes were actively involved in various cancer-related functions or networks. In this article, we applied the nonparametric multivariate adaptive splines method to publicly available gene expression data and successfully identified genes with expressions varying before and after breast cancer diagnosis.
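
    The turning-point idea behind multivariate adaptive splines can be illustrated with a single hinge basis function. The sketch below (synthetic data, not the GEO series analyzed above) fits y = b0 + b1·age + b2·(age − t)+ by least squares over a grid of candidate knots t and keeps the best-fitting one.

```python
import numpy as np

def fit_turning_point(age, expr, candidates):
    """Least-squares fit of a two-piece linear spline
    y = b0 + b1*age + b2*max(age - t, 0), scanning candidate knots t."""
    best = None
    for t in candidates:
        X = np.column_stack([np.ones_like(age), age, np.maximum(age - t, 0.0)])
        beta = np.linalg.lstsq(X, expr, rcond=None)[0]
        resid = expr - X @ beta
        score = resid @ resid
        if best is None or score < best[0]:
            best = (score, t, beta)
    return best[1], best[2]

# synthetic expression profile: decline starts near age 62
rng = np.random.default_rng(0)
age = rng.uniform(40, 85, 200)
expr = 5.0 + 0.02 * age - 0.15 * np.maximum(age - 62, 0) + rng.normal(0, 0.05, 200)
t_hat, beta = fit_turning_point(age, expr, candidates=np.arange(45.0, 80.0))
print(round(float(t_hat), 1))  # recovered turning point, near 62
```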

  11. Ensemble habitat mapping of invasive plant species

    USGS Publications Warehouse

    Stohlgren, T.J.; Ma, P.; Kumar, S.; Rocca, M.; Morisette, J.T.; Jarnevich, C.S.; Benson, N.

    2010-01-01

Ensemble species distribution models combine the strengths of several species environmental matching models, while minimizing the weaknesses of any one model. Ensemble models may be particularly useful in risk analysis of recently arrived, harmful invasive species because species may not yet have spread to all suitable habitats, leaving species-environment relationships difficult to determine. We tested five individual models (logistic regression, boosted regression trees, random forest, multivariate adaptive regression splines (MARS), and the maximum entropy model, Maxent) and ensemble modeling for selected nonnative plant species in Yellowstone and Grand Teton National Parks, Wyoming; Sequoia and Kings Canyon National Parks, California; and areas of interior Alaska. The models are based on field data provided by the park staffs, combined with topographic, climatic, and vegetation predictors derived from satellite data. For the four invasive plant species tested, ensemble models were the only models that ranked in the top three for both field validation and test data. Ensemble models may be more robust than individual species-environment matching models for risk analysis. © 2010 Society for Risk Analysis.

  12. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    NASA Astrophysics Data System (ADS)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degrees of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
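
    The reweighting at the heart of such an algorithm is simple to sketch. The toy below is a deliberate simplification (plain linear regression, fixed degrees of freedom, no AR component): t-distribution weights w_i = (ν+1)/(ν + r_i²/σ²) downweight outliers inside an iteratively reweighted least-squares loop.

```python
import numpy as np

def irls_t(X, y, nu=4.0, iters=50):
    """IRLS for a linear model with scaled t-distributed errors
    (fixed degrees of freedom nu). Large residuals get small weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    s2 = np.mean(r**2)
    for _ in range(iters):
        w = (nu + 1.0) / (nu + r**2 / s2)    # t-model weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        r = y - X @ beta
        s2 = np.sum(w * r**2) / len(y)       # weighted scale update
    return beta

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, 100)
y[::10] += 3.0                                # inject gross outliers
beta = irls_t(X, y)
print(beta.round(2))  # near the true values [1.0, 2.0]
```

    An ordinary least-squares fit on the same data would be visibly biased by the outliers; the t-weights suppress them.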

  13. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    PubMed

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach, with a speedup that scales quadratically with the number of partitions. D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds that of CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  14. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on the maximum likelihood; (2) for a simulated dataset with a complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for a simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  15. Spline-based procedures for dose-finding studies with active control

    PubMed Central

    Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim

    2015-01-01

    In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
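
    The intersection step can be sketched as follows. Using hypothetical arm means (not trial data) and scipy's CubicSpline in place of the paper's regression-spline estimator, the target dose d* is the dose at which the fitted curve reaches the control efficacy, located by bracketed root finding between the two arms that straddle it.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

# hypothetical mean responses per dose arm (monotone, Emax-like shape)
doses = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
mean_resp = np.array([0.05, 0.25, 0.42, 0.55, 0.62])
control_eff = 0.50   # expected efficacy of the active control

cs = CubicSpline(doses, mean_resp)
# the crossing lies between the arms at 25 (0.42) and 50 (0.55)
d_star = brentq(lambda d: float(cs(d)) - control_eff, 25.0, 50.0)
print(round(d_star, 1))
```

    The paper's method additionally builds a bootstrap confidence interval around d*, which is omitted here.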

  16. Random regression analyses using B-spline functions to model growth of Nellore cattle.

    PubMed

    Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G

    2012-02-01

The objective of this study was to estimate (co)variance components using random regression on B-spline functions applied to weight records obtained from birth to adulthood. A total of 82 064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and a cubic regression on orthogonal Legendre polynomials of age was used to model the average growth trend. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic and animal permanent environmental effects and two knots for the maternal additive genetic and maternal permanent environmental effects, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight, such as at young ages, should be performed taking into account an increase in mature cow weight.
This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained under range conditions. There is limited scope for modifying the growth curve of Nellore cattle so as to select for rapid growth at young ages while maintaining a constant adult weight.
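
    The segment/coefficient bookkeeping in these models follows from the B-spline construction: with a clamped, equally spaced knot vector, the number of regression coefficients equals the number of segments plus the polynomial degree (e.g., three quadratic segments give five coefficients, as in the records above). A minimal scipy sketch with hypothetical ages:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_segments, degree):
    """Basis matrix for B-splines on equally spaced knots.
    Number of basis functions (coefficients) = n_segments + degree."""
    lo, hi = x.min(), x.max()
    # clamped knot vector: boundary knots repeated `degree` extra times
    knots = np.concatenate([[lo] * degree,
                            np.linspace(lo, hi, n_segments + 1),
                            [hi] * degree])
    n_basis = n_segments + degree
    B = np.empty((len(x), n_basis))
    for i in range(n_basis):
        c = np.zeros(n_basis)
        c[i] = 1.0
        B[:, i] = BSpline(knots, c, degree)(x)
    return B

ages = np.linspace(0, 2920, 100)                  # birth to 8 years, in days
B = bspline_basis(ages, n_segments=3, degree=2)   # quadratic, 3 segments
print(B.shape)                                    # (100, 5): five coefficients
print(np.allclose(B.sum(axis=1), 1.0))            # True: partition of unity
```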

  17. Isogeometric Bézier dual mortaring: Refineable higher-order spline dual bases and weakly continuous geometry

    NASA Astrophysics Data System (ADS)

    Zou, Z.; Scott, M. A.; Borden, M. J.; Thomas, D. C.; Dornisch, W.; Brivadis, E.

    2018-05-01

In this paper we develop the isogeometric Bézier dual mortar method. It is based on Bézier extraction and projection and is applicable to any spline space which can be represented in Bézier form (i.e., NURBS, T-splines, LR-splines, etc.). The approach weakly enforces the continuity of the solution at patch interfaces and the error can be adaptively controlled by leveraging the refineability of the underlying dual spline basis without introducing any additional degrees of freedom. We also develop weakly continuous geometry as a particular application of isogeometric Bézier dual mortaring. Weakly continuous geometry is a geometry description where the weak continuity constraints are built into properly modified Bézier extraction operators. As a result, multi-patch models can be processed in a solver directly without having to employ a mortaring solution strategy. We demonstrate the utility of the approach on several challenging benchmark problems. Keywords: Mortar methods, Isogeometric analysis, Bézier extraction, Bézier projection

  18. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    PubMed

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both the proposed steerable filter-based interpolation and TCLS-based restoration filters have a finite impulse response (FIR) structure for real-time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with an FIR filter-based fast computational structure.

  19. Median regression spline modeling of longitudinal FEV1 measurements in cystic fibrosis (CF) and chronic obstructive pulmonary disease (COPD) patients.

    PubMed

    Conrad, Douglas J; Bailey, Barbara A; Hardie, Jon A; Bakke, Per S; Eagan, Tomas M L; Aarli, Bernt B

    2017-01-01

Clinical phenotyping, therapeutic investigations as well as genomic, airway secretion metabolomic and metagenomic investigations can benefit from robust, nonlinear modeling of FEV1 in individual subjects. We demonstrate the utility of measuring FEV1 dynamics in representative cystic fibrosis (CF) and chronic obstructive pulmonary disease (COPD) populations. Individual FEV1 data from CF and COPD subjects were modeled by estimating median regression splines and their predicted first and second derivatives. Classes were created from variables that capture the dynamics of these curves in both cohorts. Nine FEV1 dynamic variables were identified from the splines and their predicted derivatives in individuals with CF (n = 177) and COPD (n = 374). Three FEV1 dynamic classes (i.e. stable, intermediate and hypervariable) were generated and described using these variables from both cohorts. In the CF cohort, the FEV1 hypervariable class (HV) was associated with a clinically unstable, female-dominated phenotype, while stable FEV1 class (S) individuals were highly associated with the male-dominated, milder clinical phenotype. In the COPD cohort, associations were found between the FEV1 dynamic classes and the COPD GOLD grades, exacerbation frequency and symptoms. Nonlinear modeling of FEV1 with splines provides new insights and is useful in characterizing CF and COPD clinical phenotypes.

  20. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    NASA Astrophysics Data System (ADS)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information on planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and hence the results of conventional analyses become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows modeling data in the presence of non-stationarity and/or dependence on covariates with linear and non-linear dependence. A Markov Chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used in order to select the best model, i.e. for each quantile, we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.
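
    Quantile regression rests on the check (pinball) loss, whose minimizer over constants is the empirical τ-quantile; the paper combines this loss with a B-spline basis and MCMC estimation, which the toy below omits. The data here are synthetic, not the Ontario records.

```python
import numpy as np

def pinball(q, y, tau):
    """Check (pinball) loss of predicting the constant q for sample y."""
    r = y - q
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

rng = np.random.default_rng(2)
y = rng.gumbel(loc=100.0, scale=20.0, size=5000)  # skewed "annual maxima"

tau = 0.9
grid = np.linspace(y.min(), y.max(), 2001)
q_hat = grid[np.argmin([pinball(q, y, tau) for q in grid])]
# the loss minimizer matches the empirical 0.9-quantile (to grid resolution)
print(round(float(q_hat), 1), round(float(np.quantile(y, tau)), 1))
```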

  1. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
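
    For a single input parameter, the histogram version of the optimal estimator analysis is easy to sketch: the irreducible error is E[(q − E[q|p])²], with the conditional mean estimated by binning. The data below are synthetic; as the entry above notes, this binning approach degrades once models have multiple input parameters.

```python
import numpy as np

# Optimal-estimator sketch: estimate the irreducible error of modeling
# a target q from one input parameter p, using a histogram (binning)
# estimate of the conditional mean E[q|p].
rng = np.random.default_rng(5)
p = rng.uniform(0, 1, 100_000)
q = np.sin(2 * np.pi * p) + rng.normal(0, 0.1, p.size)  # synthetic target

bins = np.linspace(0, 1, 51)
idx = np.digitize(p, bins) - 1                 # bin index per sample
cond_mean = np.array([q[idx == i].mean() for i in range(50)])
irreducible = np.mean((q - cond_mean[idx])**2)
print(round(float(irreducible), 3))  # close to the noise variance 0.01
```

    The small excess over 0.01 is the spurious binning contribution the entry warns about; finer bins shrink it at the cost of noisier conditional means.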

  2. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C; Adcock, A; Azevedo, S

    2010-12-28

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and de Hoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
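
    The GCV criterion used for automatic smoothing selection can be sketched with an explicit hat matrix. This toy uses a small truncated-power spline basis and single-channel data with uniform noise, not the O(n) Hutchinson-de Hoog algorithm or the multi-channel weighting described above; only the selection principle is the same.

```python
import numpy as np

def gcv_smooth(x, y, lams):
    """Penalized spline smoother; pick the lambda minimizing
    GCV = n * RSS / (n - tr(H))^2, with H the hat matrix."""
    knots = np.linspace(x.min(), x.max(), 12)[1:-1]  # 10 interior knots
    B = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                        [np.maximum(x - k, 0.0)**3 for k in knots])
    D = np.diag([0.0] * 4 + [1.0] * len(knots))      # penalize knot terms only
    n = len(y)
    best = None
    for lam in lams:
        H = B @ np.linalg.solve(B.T @ B + lam * D, B.T)
        yhat = H @ y
        rss = np.sum((y - yhat)**2)
        gcv = n * rss / (n - np.trace(H))**2
        if best is None or gcv < best[0]:
            best = (gcv, lam, yhat)
    return best[1], best[2]

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
lam, yhat = gcv_smooth(x, y, lams=10.0**np.arange(-8, 4))
rmse = np.sqrt(np.mean((yhat - np.sin(2 * np.pi * x))**2))
print(round(float(rmse), 3))  # well below the 0.2 noise level
```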

  3. Image Quality Improvement in Adaptive Optics Scanning Laser Ophthalmoscopy Assisted Capillary Visualization Using B-spline-based Elastic Image Registration

    PubMed Central

    Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa

    2013-01-01

    Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796

  4. The extension of the parametrization of the radio source coordinates in geodetic VLBI and its impact on the time series analysis

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2017-07-01

    The radio sources within the most recent celestial reference frame (CRF) catalog ICRF2 are represented by a single, time-invariant coordinate pair. The datum sources were chosen mainly according to certain statistical properties of their position time series. Yet, such statistics are not applicable unconditionally, and are also ambiguous. However, ignoring systematics in the source positions of the datum sources inevitably leads to a degradation of the quality of the frame and, therefore, also of the derived quantities such as the Earth orientation parameters. One possible approach to overcome these deficiencies is to extend the parametrization of the source positions, similarly to what is done for the station positions. We decided to use the multivariate adaptive regression splines algorithm to parametrize the source coordinates. It allows a great deal of automation, by combining recursive partitioning and spline fitting in an optimal way. The algorithm finds the ideal knot positions for the splines and, thus, the best number of polynomial pieces to fit the data autonomously. With that we can correct the ICRF2 a priori coordinates for our analysis and eliminate the systematics in the position estimates. This also allows us to introduce special handling sources into the datum definition, leading to on average 30% more sources in the datum. We find that not only the celestial pole offsets (CPO) can be improved by more than 10% due to the improved geometry, but also the station positions, especially in the early years of VLBI, can benefit greatly.

  5. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are used for one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance for insurance calculations, in which the accuracy of models and forecasts is the main concern of the industry. The original idea of P-splines is extended here to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. The extension of this idea constructs the best fitted surface and provides sensible predictions of the underlying mortality rate in the Malaysian mortality case.
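
    A one-dimensional P-spline fit in the Eilers-Marx style reduces to a single penalized least-squares solve; the two-dimensional age-year version uses tensor-product bases with the same penalty idea. A sketch with synthetic log-mortality data (the trend, ages, and penalty weight are illustrative, not the Malaysian data):

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_segments=20, degree=3, lam=10.0, d=2):
    """Eilers-Marx P-spline: B-spline basis plus an order-d difference
    penalty on coefficients. Solves (B'B + lam * D'D) a = B'y."""
    lo, hi = x.min(), x.max()
    knots = np.concatenate([[lo] * degree,
                            np.linspace(lo, hi, n_segments + 1),
                            [hi] * degree])
    n_basis = n_segments + degree
    B = np.empty((len(x), n_basis))
    for i in range(n_basis):
        c = np.zeros(n_basis)
        c[i] = 1.0
        B[:, i] = BSpline(knots, c, degree)(x)
    D = np.diff(np.eye(n_basis), n=d, axis=0)   # difference operator
    a = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return B @ a

rng = np.random.default_rng(4)
age = np.linspace(0, 100, 101)
log_mort = -9.0 + 0.08 * age + rng.normal(0, 0.15, 101)  # toy Gompertz-like trend
fit = pspline_fit(age, log_mort)
mae = np.mean(np.abs(fit - (-9.0 + 0.08 * age)))
print(round(float(mae), 3))  # small: the penalty smooths out the noise
```

    With a second-order difference penalty, heavy smoothing shrinks the fit toward a straight line, which is why the linear toy trend is recovered well.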

  6. PARAMETRIC AND NON-PARAMETRIC (MARS: MULTIVARIATE ADAPTIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    EPA Science Inventory

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  7. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for the sampling-based scheme to outperform the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on the cubic-spline interpolation (CSI) is proposed. This proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that compared with the standard JPEG, the proposed algorithm can show better performance at low bit rates and maintain the same performance at high bit rates.
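
    The benefit of cubic-spline upsampling over linear upsampling after decimation can be seen on a toy scanline. This is a generic illustration of the sampling-based idea, not the proposed CSI-JPEG pipeline, and the signal is hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Downsample a smooth scanline 4x, then reconstruct it by cubic-spline
# and by linear upsampling, and compare reconstruction errors.
x_full = np.linspace(0.0, 2 * np.pi, 65)
signal = np.sin(x_full) + 0.3 * np.sin(3 * x_full)   # hypothetical scanline

x_coarse = x_full[::4]
coarse = signal[::4]

rec_spline = CubicSpline(x_coarse, coarse)(x_full)
rec_linear = np.interp(x_full, x_coarse, coarse)

err_spline = np.mean((rec_spline - signal)**2)
err_linear = np.mean((rec_linear - signal)**2)
print(err_spline < err_linear)  # True: the spline preserves more detail
```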

  8. SU-E-J-89: Deformable Registration Method Using B-TPS in Radiotherapy.

    PubMed

    Xie, Y

    2012-06-01

    A novel deformable registration method for four-dimensional computed tomography (4DCT) images is developed in radiation therapy. The proposed method combines the thin plate spline (TPS) and B-spline together to achieve high accuracy and high efficiency. The method consists of two steps. First, TPS is used as a global registration method to deform large unfit regions in the moving image to match their counterparts in the reference image. Then B-spline is used for local registration, and the previously deformed moving image is further deformed to match the reference image more accurately. Two clinical CT image sets, one pair of lung and one pair of liver, are registered using the proposed algorithm, which results in a tremendous improvement in both run-time and registration quality compared with the conventional methods solely using either TPS or B-spline. The proposed method combines the efficiency of TPS and the accuracy of B-spline, and performs adaptively and robustly in the registration of clinical 4DCT images. © 2012 American Association of Physicists in Medicine.

  9. Semiparametric regression during 2003–2007*

    PubMed Central

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2010-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800

  10. SU-E-J-109: Evaluation of Deformable Accumulated Parotid Doses Using Different Registration Algorithms in Adaptive Head and Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, S; Chinese PLA General Hospital, Beijing, 100853 China; Liu, B

    2015-06-15

    Purpose: Three deformable image registration (DIR) algorithms are utilized to perform deformable dose accumulation for head and neck tomotherapy treatment, and the differences in the accumulated doses are evaluated. Methods: Daily MVCT data for 10 patients with pathologically proven nasopharyngeal cancers were analyzed. The data were acquired using tomotherapy (TomoTherapy, Accuray) at the PLA General Hospital. The prescription dose to the primary target was 70 Gy in 33 fractions. Three DIR methods (B-spline, Diffeomorphic Demons and MIMvista) were used to propagate parotid structures from planning CTs to the daily CTs and accumulate fractionated dose on the planning CTs. The mean accumulated doses of the parotids were quantitatively compared and the uncertainties of the propagated parotid contours were evaluated using the Dice similarity index (DSI). Results: The planned mean dose of the ipsilateral parotids (32.42±3.13 Gy) was slightly higher than that of the contralateral parotids (31.38±3.19 Gy) in the 10 patients. The differences between the accumulated mean doses of the ipsilateral parotids under the B-spline, Demons and MIMvista deformation algorithms (36.40±5.78 Gy, 34.08±6.72 Gy and 33.72±2.63 Gy) were statistically significant (B-spline vs Demons, p<0.0001; B-spline vs MIMvista, p=0.002), and the differences between those of the contralateral parotids (34.08±4.82 Gy, 32.42±4.80 Gy and 33.92±4.65 Gy) were also tested (B-spline vs Demons, p=0.009; B-spline vs MIMvista, p=0.074). For the DSI analysis, the scores of the B-spline, Demons and MIMvista DIRs were 0.90, 0.89 and 0.76, respectively. Conclusion: Shrinkage of parotid volumes results in a dose increase to the parotid glands in adaptive head and neck radiotherapy. The accumulated doses of the parotids show significant differences between the DIR algorithms applied between kVCT and MVCT. Therefore, a volume-based criterion (i.e. DSI) as a quantitative evaluation of registration accuracy is essential besides the visual assessment by the treating physician. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11105225)

  11. Polynomials to model the growth of young bulls in performance tests.

    PubMed

    Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B

    2014-03-01

    The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais + 11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and the additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit to the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for both the Nellore and MA groups. Fitting random regression models with different types of polynomials (Legendre or B-spline) affected neither the genetic parameter estimates nor the ranking of the Nellore young bulls. However, it did affect the genetic parameter estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.
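Legendre polynomials enter models like the one above as a regression basis evaluated at ages standardized to [-1, 1]. A minimal sketch of fitting a growth trajectory with a cubic Legendre basis (NumPy; the ages and weights are synthetic, not the paper's data):

```python
import numpy as np
from numpy.polynomial import legendre

ages = np.linspace(210.0, 420.0, 15)              # test days (synthetic)
# Standardize ages to [-1, 1], the domain of the Legendre polynomials.
a_min, a_max = ages.min(), ages.max()
x = 2.0 * (ages - a_min) / (a_max - a_min) - 1.0

# Synthetic body weights following an exactly cubic trajectory in x.
weights = 320.0 + 60.0 * x + 8.0 * x**2 - 3.0 * x**3

# Design matrix of Legendre polynomials P0..P3 and a least-squares fit.
X = legendre.legvander(x, 3)                      # shape (15, 4)
coef, *_ = np.linalg.lstsq(X, weights, rcond=None)
fitted = X @ coef
max_err = float(np.max(np.abs(fitted - weights))) # ~0 for a cubic target
```

In a real random regression model the same basis is attached to random animal effects rather than fit by plain least squares, but the design matrix is constructed the same way.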

  12. An Adaptive B-Spline Neural Network and Its Application in Terminal Sliding Mode Control for a Mobile Satcom Antenna Inertially Stabilized Platform.

    PubMed

    Zhang, Xiaolei; Zhao, Yan; Guo, Kai; Li, Gaoliang; Deng, Nianmao

    2017-04-28

    The mobile satcom antenna (MSA) enables a moving vehicle to communicate with a geostationary Earth orbit satellite. To realize continuous communication, the MSA must be aligned with the satellite in both sight and polarization at all times. Because of the coupling effects, unknown disturbances, sensor noise and unmodeled dynamics present in the system, the control system should have strong adaptability. The significant features of the terminal sliding mode control method are robustness and finite-time convergence, but its robustness relies on a large switching control gain, which is determined by the system uncertainties and can lead to chattering. Neural networks can reduce this chattering and approximate the nonlinear dynamics. In this work, a novel B-spline curve-based B-spline neural network (BSNN) is developed. The improved BSNN has the capability of shape changing and self-adaption. In addition, the output of the proposed BSNN is applied to approximate the nonlinear function in the system. The results of simulations and experiments are also compared with those of the PID method, non-singularity fast terminal sliding mode (NFTSM) control and radial basis function (RBF) neural network-based NFTSM. It is shown that the proposed method gives the best performance, with reliable control precision.
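The hidden units of a B-spline neural network are B-spline basis functions, which can be evaluated with the standard Cox-de Boor recursion. A minimal NumPy sketch (the knot vector and degree are illustrative, not the paper's network):

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: the i-th degree-k B-spline basis function,
    defined by knot vector t, evaluated at points x."""
    if k == 0:
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    left = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] > t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * \
                bspline_basis(i + 1, k - 1, t, x)
    return left + right

degree = 2                                         # quadratic B-splines
t = np.array([0., 0., 0., 1., 2., 3., 3., 3.])     # clamped knot vector
n_basis = len(t) - degree - 1                      # 5 basis functions
x = np.linspace(0.0, 2.999, 50)                    # stay inside [0, 3)
B = np.column_stack([bspline_basis(i, degree, t, x) for i in range(n_basis)])
row_sums = B.sum(axis=1)                           # partition of unity -> all 1
```

The network output is then a weighted sum of the columns of B; adapting the knots is what gives such a network its shape-changing capability.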

  13. Improving reliability of aggregation, numerical simulation and analysis of complex systems by empirical data

    NASA Astrophysics Data System (ADS)

    Dobronets, Boris S.; Popova, Olga A.

    2018-05-01

    The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of aggregation of empirical data are considered: improving accuracy and estimating errors. We discuss data aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is a demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution; the density function concept is used to study its properties. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.

  14. Genetic evaluation and selection response for growth in meat-type quail through random regression models using B-spline functions and Legendre polynomials.

    PubMed

    Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M

    2018-04-01

    The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials and B-spline functions, and multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions with quadratic and cubic polynomials for each individual segment (2 to 4 segments), and using Legendre polynomials of age with orders of fit ranging from 2 to 4. Residual variances were grouped in four age classes. The model with quadratic B-spline adjustment, using four segments for the direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials underestimated the residual variance. Lower heritability estimates were observed for multi-trait models in comparison with RRM at the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. The genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW at the evaluated ages was greater for RRM than for multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments provided the best fit for genetic evaluation of growth traits in meat-type quail and should be considered in the genetic evaluation of breeding programs.

  15. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
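The three interpolation schemes compared above (step, linear, spline) can be sketched between a set of significant mass changes, assuming times ts with cumulative masses ms (synthetic numbers, not lysimeter data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Significant cumulative-mass points (synthetic), as left by the AWAT threshold.
ts = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # minutes
ms = np.array([0.00, 0.05, 0.40, 0.45, 0.45])  # kg

t_query = np.array([5.0, 15.0, 25.0, 35.0])

# Step interpolation: hold the last significant value (measurement resolution).
idx = np.searchsorted(ts, t_query, side="right") - 1
step_vals = ms[idx]

# Linear interpolation between significant points.
lin_vals = np.interp(t_query, ts, ms)

# Cubic spline interpolation: continuously differentiable, so flux rates
# (the time derivative) are defined at any output resolution.
spl = CubicSpline(ts, ms)
spl_vals = spl(t_query)
spl_knots = spl(ts)          # an interpolating spline reproduces the points
```

The step curve yields zero flux between points and jumps at them, which is why it only produces reasonable rates at coarse (daily) resolution, while the linear and spline curves distribute the mass change over the interval.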

  16. Flexible Meta-Regression to Assess the Shape of the Benzene–Leukemia Exposure–Response Curve

    PubMed Central

    Vlaanderen, Jelle; Portengen, Lützen; Rothman, Nathaniel; Lan, Qing; Kromhout, Hans; Vermeulen, Roel

    2010-01-01

    Background Previous evaluations of the shape of the benzene–leukemia exposure–response curve (ERC) were based on a single set or on small sets of human occupational studies. Integrating evidence from all available studies that are of sufficient quality combined with flexible meta-regression models is likely to provide better insight into the functional relation between benzene exposure and risk of leukemia. Objectives We used natural splines in a flexible meta-regression method to assess the shape of the benzene–leukemia ERC. Methods We fitted meta-regression models to 30 aggregated risk estimates extracted from nine human observational studies and performed sensitivity analyses to assess the impact of a priori assessed study characteristics on the predicted ERC. Results The natural spline showed a supralinear shape at cumulative exposures less than 100 ppm-years, although this model fitted the data only marginally better than a linear model (p = 0.06). Stratification based on study design and jackknifing indicated that the cohort studies had a considerable impact on the shape of the ERC at high exposure levels (> 100 ppm-years) but that predicted risks for the low exposure range (< 50 ppm-years) were robust. Conclusions Although limited by the small number of studies and the large heterogeneity between studies, the inclusion of all studies of sufficient quality combined with a flexible meta-regression method provides the most comprehensive evaluation of the benzene–leukemia ERC to date. The natural spline based on all data indicates a significantly increased risk of leukemia [relative risk (RR) = 1.14; 95% confidence interval (CI), 1.04–1.26] at an exposure level as low as 10 ppm-years. PMID:20064779
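A natural cubic spline like the one used for the ERC constrains the second derivative to zero at the boundary knots, which tames the curve's behavior at the extremes of the exposure range. A minimal SciPy sketch (the exposure/log-RR points are synthetic, not the study's estimates):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic dose-response points: cumulative exposure (ppm-years) vs log RR.
exposure = np.array([0.0, 10.0, 50.0, 100.0, 200.0, 400.0])
log_rr = np.array([0.00, 0.13, 0.30, 0.45, 0.55, 0.60])

# bc_type="natural" forces f''(x) = 0 at both boundary knots.
ns = CubicSpline(exposure, log_rr, bc_type="natural")

rr_at_10 = float(np.exp(ns(10.0)))        # back-transform to a relative risk
curv_left = float(ns(exposure[0], 2))     # second derivative at the boundaries
curv_right = float(ns(exposure[-1], 2))
```

In an actual meta-regression the spline coefficients would be estimated from the study-level risk estimates with inverse-variance weights rather than by interpolation, but the natural boundary constraint works the same way.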

  17. Pan evaporation modeling using six different heuristic computing methods in different climates of China

    NASA Astrophysics Data System (ADS)

    Wang, Lunche; Kisi, Ozgur; Zounemat-Kermani, Mohammad; Li, Hui

    2017-01-01

    Pan evaporation (Ep) plays an important role in agricultural water resources management. One of the basic challenges is modeling Ep from limited climatic parameters, because a number of factors affect the evaporation rate. This study investigated the abilities of six different soft computing methods, multi-layer perceptron (MLP), generalized regression neural network (GRNN), fuzzy genetic (FG), least square support vector machine (LSSVM), multivariate adaptive regression spline (MARS), and adaptive neuro-fuzzy inference systems with grid partition (ANFIS-GP), and two regression methods, multiple linear regression (MLR) and the Stephens and Stewart model (SS), in predicting monthly Ep. Long-term climatic data at various sites crossing a wide range of climates during 1961-2000 were used for model development and validation. The results showed that the models have different accuracies in different climates; the MLP model performed superior to the other models in predicting monthly Ep at most stations using local input combinations (for example, the MAE (mean absolute error), RMSE (root mean square error), and determination coefficient (R2) were 0.314 mm/day, 0.405 mm/day and 0.988, respectively, for the HEB station), while the GRNN model performed better on the Tibetan Plateau (MAE, RMSE and R2 of 0.459 mm/day, 0.592 mm/day and 0.932, respectively). The accuracies of the above models ranked as: MLP, GRNN, LSSVM, FG, ANFIS-GP, MARS and MLR. The overall results indicated that the soft computing techniques generally performed better than the regression methods, but the MLR and SS models can be preferred in some climatic zones over complex nonlinear models, for example at the BJ (Beijing), CQ (Chongqing) and HK (Haikou) stations. Therefore, it can be concluded that Ep can be successfully predicted using the above models in hydrological modeling studies.
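The comparison criteria above (MAE, RMSE, R2) have simple definitions worth making explicit. A minimal sketch on toy observed/predicted values (numbers are illustrative, not the study's results):

```python
import numpy as np

def mae(obs, pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(obs - pred)))

def rmse(obs, pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def r2(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1.0 - ss_res / ss_tot)

obs = np.array([1.0, 2.0, 3.0, 4.0])    # "observed" pan evaporation (toy)
pred = np.array([1.1, 1.9, 3.2, 3.8])   # model output (toy)

scores = (mae(obs, pred), rmse(obs, pred), r2(obs, pred))
```

Rankings such as "MLP, GRNN, LSSVM, ..." come from comparing these scores across models at each station.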

  18. STEP and STEPSPL: Computer programs for aerodynamic model structure determination and parameter estimation

    NASA Technical Reports Server (NTRS)

    Batterson, J. G.

    1986-01-01

    The successful parametric modeling of the aerodynamics for an airplane operating at high angles of attack or sideslip is performed in two phases. First, the aerodynamic model structure must be determined and, second, the associated aerodynamic parameters (stability and control derivatives) must be estimated for that model. The purpose of this paper is to document two versions of a stepwise regression computer program which were developed for the determination of airplane aerodynamic model structure and to provide two examples of their use on computer-generated data. References are provided for the application of the programs to real flight data. The two computer programs that are the subject of this report, STEP and STEPSPL, are written in FORTRAN IV (ANSI 1966) compatible with a CDC FTN4 compiler. Both programs are adaptations of a standard forward stepwise regression algorithm. The purpose of the adaptation is to facilitate the selection of an adequate mathematical model of the aerodynamic force and moment coefficients of an airplane from flight test data. The major difference between STEP and STEPSPL is in the basis for the model. The basis for the model in STEP is the standard polynomial Taylor series expansion of the aerodynamic function about some steady-state trim condition. Program STEPSPL utilizes a set of spline basis functions.
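The core of a forward stepwise regression algorithm like the one STEP and STEPSPL adapt is: at each step, add the candidate regressor that most reduces the residual sum of squares. A minimal Python sketch on synthetic data (not flight-test measurements, and without the statistical stopping criteria a production program would use):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))            # candidate regressors
y = 2.0 * X[:, 0] + 3.0 * X[:, 2]          # true model uses columns 0 and 2

def rss_with(cols):
    """Residual sum of squares of an OLS fit with intercept + given columns."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

selected = []
for _ in range(2):                         # two forward steps
    remaining = [j for j in range(p) if j not in selected]
    best = min(remaining, key=lambda j: rss_with(selected + [j]))
    selected.append(best)

final_rss = rss_with(selected)             # ~0: true structure recovered
```

STEP would populate the candidate pool with Taylor-series terms of the flight variables, while STEPSPL would populate it with spline basis functions; the selection loop is the same.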

  19. High-frequency health data and spline functions.

    PubMed

    Martín-Rodríguez, Gloria; Murillo-Fort, Carlos

    2005-03-30

    Seasonal variations are highly relevant for health service organization. In general, short-run movements of medical magnitudes are important features for managers in this field to make adequate decisions. Thus, the analysis of the seasonal pattern in high-frequency health data is an appealing task. The aim of this paper is to propose procedures that allow the analysis of the seasonal component in this kind of data by means of spline functions embedded in a structural model. In the proposed method, useful adaptations of the traditional spline formulation are developed, and the resulting procedures are capable of capturing periodic variations, whether deterministic or stochastic, in a parsimonious way. Finally, these methodological tools are applied to a series of daily emergency service demand in order to capture simultaneous seasonal variations with different periods.
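Deterministic periodic variations of the kind described here can be represented parsimoniously with a periodic spline, which forces the curve and its derivatives to match across the period boundary. A minimal SciPy sketch on a synthetic weekly cycle (not the emergency-demand series):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic weekly seasonal pattern: mean demand by day of week (day 7 = day 0).
day = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
demand = np.array([100.0, 90.0, 85.0, 88.0, 95.0, 120.0, 130.0, 100.0])

# bc_type="periodic" requires demand[0] == demand[-1] and matches the first
# and second derivatives across the period boundary.
season = CubicSpline(day, demand, bc_type="periodic")

wrap_value = float(season(7.0)) - float(season(0.0))         # 0 by construction
wrap_slope = float(season(7.0, 1)) - float(season(0.0, 1))   # derivatives match
```

Stochastic seasonality, as in the paper's structural-model setting, would additionally let the spline coefficients evolve over time rather than stay fixed.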

  20. Development of a hybrid proximal sensing method for rapid identification of petroleum contaminated soils.

    PubMed

    Chakraborty, Somsubhra; Weindorf, David C; Li, Bin; Ali Aldabaa, Abdalsamad Abdalsatar; Ghosh, Rakesh Kumar; Paul, Sathi; Nasim Ali, Md

    2015-05-01

    Using 108 petroleum contaminated soil samples, this pilot study proposed a new analytical approach combining visible near-infrared diffuse reflectance spectroscopy (VisNIR DRS) and portable X-ray fluorescence spectrometry (PXRF) for rapid and improved quantification of soil petroleum contamination. Results indicated that an advanced fused model, in which VisNIR DRS spectra-based penalized spline regression (PSR) was used to predict total petroleum hydrocarbon and PXRF elemental data-based random forest regression was then used to model the PSR residuals, outperformed (R(2)=0.78, residual prediction deviation (RPD)=2.19) all other models tested, even producing better generalization than using VisNIR DRS alone (RPDs of 1.64, 1.86, and 1.96 for random forest, penalized spline regression, and partial least squares regression, respectively). Additionally, unsupervised principal component analysis using the PXRF+VisNIR DRS system qualitatively separated contaminated soils from control samples. Fusion of PXRF elemental data and VisNIR derivative spectra produced an optimized model for total petroleum hydrocarbon quantification in soils. Copyright © 2015 Elsevier B.V. All rights reserved.
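The fused model above follows a general residual-correction pattern: fit a first model on one sensor's features, then fit a second model on the other sensor's features to the first model's residuals. A minimal NumPy sketch with synthetic data standing in for VisNIR and PXRF features (simple linear stages replace PSR and random forest for brevity):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
spectra_feat = rng.standard_normal(n)     # stand-in for a VisNIR spectral score
element_feat = rng.standard_normal(n)     # stand-in for a PXRF element
tph = 3.0 * spectra_feat + 1.5 * element_feat + 0.1 * rng.standard_normal(n)

def linfit(x, y):
    """Intercept and slope of a simple least-squares line."""
    A = np.column_stack([np.ones_like(x), x])
    (b0, b1), *_ = np.linalg.lstsq(A, y, rcond=None)
    return b0, b1

# Stage 1: predict TPH from the spectral feature alone.
a0, a1 = linfit(spectra_feat, tph)
stage1_pred = a0 + a1 * spectra_feat
resid = tph - stage1_pred

# Stage 2: model the stage-1 residuals with the elemental feature.
c0, c1 = linfit(element_feat, resid)
fused_pred = stage1_pred + c0 + c1 * element_feat

rmse1 = float(np.sqrt(np.mean((tph - stage1_pred) ** 2)))
rmse_fused = float(np.sqrt(np.mean((tph - fused_pred) ** 2)))
```

The fused error drops because the second sensor explains variation the first cannot, which is the mechanism behind the improved RPD reported above.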

  1. An overall strategy based on regression models to estimate relative survival and model the effects of prognostic factors in cancer survival studies.

    PubMed

    Remontet, L; Bossard, N; Belot, A; Estève, J

    2007-05-10

    Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring knowledge of the cause of death. We propose an overall strategy based on regression models to estimate the relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled up to 10 years of follow-up using parametric continuous functions. Six models including cubic regression splines were considered, and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of the mortality hazard and allowed us to deal with sparse data, taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could also be obtained using standard algorithms for Poisson regression. Copyright 2006 John Wiley & Sons, Ltd.
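Model selection by the Akaike Information Criterion, as used above, compares AIC = n·ln(RSS/n) + 2k across candidate models (Gaussian working likelihood with k parameters; for likelihood-based survival models the analogous form is -2·logL + 2k). A minimal sketch of the least-squares form on toy candidates (numbers are illustrative, not the six survival models):

```python
import numpy as np

def aic_gaussian(rss: float, n: int, k: int) -> float:
    """AIC for a least-squares fit under a Gaussian working likelihood
    (additive constants dropped)."""
    return float(n * np.log(rss / n) + 2 * k)

n = 100
# Toy candidates: (residual sum of squares, number of parameters).
candidates = {"simple": (30.0, 3), "medium": (20.0, 6), "complex": (19.5, 12)}
aics = {name: aic_gaussian(rss, n, k) for name, (rss, k) in candidates.items()}
best = min(aics, key=aics.get)   # "complex" barely improves RSS, so it loses
```

The 2k penalty is what stops the criterion from always preferring the model with the most spline terms.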

  2. Modeling the human development index and the percentage of poor people using quantile smoothing splines

    NASA Astrophysics Data System (ADS)

    Mulyani, Sri; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Mean regression is a statistical method to explain the relationship between a response variable and predictor variables based on the central tendency (mean) of the response variable. Parameter estimation in mean regression (with Ordinary Least Squares, or OLS) is problematic when applied to data that are asymmetric, fat-tailed, or contain outliers. Hence, an alternative method is needed for that kind of data, for example the quantile regression method. Quantile regression is robust to outliers. This model can explain the relationship between the response variable and the predictor variable not only at the central tendency of the data (the median) but also at various quantiles, in order to obtain complete information about that relationship. In this study, a quantile regression is developed with a nonparametric approach, namely smoothing splines. A nonparametric approach is used when a model is difficult to prespecify and the relation between the two variables follows an unknown function. We apply the proposed method to poverty data: we estimate the Percentage of Poor People as the response variable with the Human Development Index (HDI) as the predictor variable.
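Quantile regression replaces the squared-error criterion of OLS with the pinball (check) loss, whose minimizer over a constant prediction is the empirical quantile. A minimal sketch illustrating that property on toy skewed data (not the poverty series):

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Check loss: tau weights positive errors, (1 - tau) negative errors."""
    e = y - pred
    return float(np.mean(np.where(e >= 0, tau * e, (tau - 1) * e)))

rng = np.random.default_rng(7)
y = rng.exponential(scale=2.0, size=1001)   # skewed outcome (toy)

tau = 0.75
# Minimize the pinball loss over constant predictions on a fine grid.
grid = np.linspace(y.min(), y.max(), 5001)
losses = [pinball_loss(y, c, tau) for c in grid]
best_c = float(grid[int(np.argmin(losses))])

empirical_q = float(np.quantile(y, tau))    # should nearly coincide
gap = abs(best_c - empirical_q)
```

A quantile smoothing spline minimizes this same loss plus a roughness penalty, so the fitted curve tracks the chosen quantile of the response rather than its mean.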

  3. The Norwegian Healthier Goats program--modeling lactation curves using a multilevel cubic spline regression model.

    PubMed

    Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S

    2014-07-01

    In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield. 
Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. Income elasticity of health expenditures in Iran.

    PubMed

    Zare, Hossein; Trujillo, Antonio J; Leidman, Eva; Buttorff, Christine

    2013-09-01

    Because of its policy implications, the income elasticity of health care expenditures is a subject of much debate. Governments may have an interest in subsidizing the care of those with low income. Using more than two decades of data from the Iran Household Expenditure and Income Survey, this article investigates the relationship between income and health care expenditure in urban and rural areas of Iran, a resource-rich, upper-middle-income country. We implemented spline and quantile regression techniques to obtain a more robust description of the relationship of interest. This study finds non-uniform effects of income on health expenditures. Although the results show that health care is a necessity for all income brackets, spline regression estimates indicate that the income elasticity is lowest for the poorest Iranians in urban and rural areas. This suggests that they will show little flexibility in medical expenses as income fluctuates. Further, a quantile regression model assessing the effect of income at different levels of medical expenditure suggests that households with lower medical expenses are less elastic.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Z; Greskovich, J; Xia, P

    Purpose: To generate virtual phantoms with clinically relevant deformation and use them to objectively evaluate geometric and dosimetric uncertainties of deformable image registration (DIR) algorithms. Methods: Ten lung cancer patients undergoing adaptive 3DCRT planning were selected. For each patient, a pair of planning CT (pCT) and replanning CT (rCT) were used as the basis for virtual phantom generation. Manually adjusted meshes were created for selected ROIs (e.g. PTV, lungs, spinal cord, esophagus, and heart) on pCT and rCT. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was used to deform pCT to generate a simulated replanning CT (srCT) that was closely matched to rCT. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten virtual phantoms. The images, ROIs, and doses were mapped from pCT to srCT using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.85 to 0.96 for Demons, from 0.86 to 0.97 for intensity-based, and from 0.76 to 0.95 for B-Spline. The average Hausdorff distances for selected ROIs were from 2.2 to 5.4 mm for Demons, from 2.3 to 6.8 mm for intensity-based, and from 2.4 to 11.4 mm for B-Spline. The average absolute dose errors for selected ROIs were from 0.2 to 0.6 Gy for Demons, from 0.1 to 0.5 Gy for intensity-based, and from 0.5 to 1.5 Gy for B-Spline. Conclusion: Virtual phantoms were modeled after patients with lung cancer and are clinically relevant for adaptive radiotherapy treatment replanning. Virtual phantoms with known DVFs serve as references and can provide a fair comparison when evaluating different DIRs. Demons and intensity-based DIRs were shown to have smaller geometric and dosimetric uncertainties than B-Spline. 
Z Shen: None; K Bzdusek: an employee of Philips Healthcare; J Greskovich: None; P Xia: received research grants from Philips Healthcare and Siemens Healthcare.
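The reference DVF above comes from a thin-plate spline fit to corresponding mesh vertices. In 2D the interpolation problem can be written down directly: a radial basis U(r) = r² log r plus an affine part, solved as one linear system. A minimal NumPy sketch (the landmark coordinates and displacements are toy values, not the phantom meshes):

```python
import numpy as np

def _tps_kernel(d):
    """U(r) = r^2 log r, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d > 0.0, d * d * np.log(d), 0.0)

def tps_fit(points, values):
    """Fit a 2D thin-plate spline f with f(points[i]) = values[i]."""
    n = len(points)
    K = _tps_kernel(np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2))
    P = np.column_stack([np.ones(n), points])      # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.concatenate([values, np.zeros(3)]))
    return sol[:n], sol[n:]                        # radial weights, affine coefs

def tps_eval(points, w, a, query):
    U = _tps_kernel(np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2))
    return U @ w + a[0] + query @ a[1:]

# Toy landmarks: x-displacements (mm) at five mesh vertices (illustrative).
pts = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 5.]])
dx = np.array([0.0, 1.0, -0.5, 0.8, 0.3])
w, a = tps_fit(pts, dx)
recon = tps_eval(pts, w, a, pts)   # a TPS interpolates its landmarks exactly
```

Fitting one such spline per displacement component (and using the 3D kernel for volumetric data) yields a smooth DVF that exactly honors the manually placed vertex correspondences.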

  6. Summer and winter habitat suitability of Marco Polo argali in southeastern Tajikistan: A modeling approach.

    PubMed

    Salas, Eric Ariel L; Valdez, Raul; Michel, Stefan

    2017-11-01

    We modeled summer and winter habitat suitability of Marco Polo argali in the Pamir Mountains in southeastern Tajikistan using these statistical algorithms: Generalized Linear Model, Random Forest, Boosted Regression Tree, Maxent, and Multivariate Adaptive Regression Splines. Using sheep occurrence data collected from 2009 to 2015 and a set of selected habitat predictors, we produced summer and winter habitat suitability maps and determined the important habitat suitability predictors for both seasons. Our results demonstrated that argali selected proximity to riparian areas and greenness as the two most relevant variables for summer, and the degree of slope (gentler slopes between 0° to 20°) and Landsat temperature band for winter. The terrain roughness was also among the most important variables in summer and winter models. Aspect was only significant for winter habitat, with argali preferring south-facing mountain slopes. We evaluated various measures of model performance such as the Area Under the Curve (AUC) and the True Skill Statistic (TSS). Comparing the five algorithms, the AUC scored highest for Boosted Regression Tree in summer (AUC = 0.94) and winter model runs (AUC = 0.94). In contrast, Random Forest underperformed in both model runs.
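The two performance measures reported above can be computed directly from predicted suitability scores: AUC is the Mann-Whitney rank statistic, and TSS = sensitivity + specificity - 1 at a chosen threshold. A minimal sketch on toy presence/absence scores (not the argali data):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as P(score at a presence site > score at an absence site),
    counting ties as 1/2."""
    sp = np.asarray(scores_pos)[:, None]
    sn = np.asarray(scores_neg)[None, :]
    return float(((sp > sn).sum() + 0.5 * (sp == sn).sum()) / (sp.size * sn.size))

def tss(scores_pos, scores_neg, threshold):
    """True Skill Statistic at a given suitability threshold."""
    sens = np.mean(np.asarray(scores_pos) >= threshold)   # true positive rate
    spec = np.mean(np.asarray(scores_neg) < threshold)    # true negative rate
    return float(sens + spec - 1.0)

# Toy suitability scores at presence and absence sites.
presence = np.array([0.9, 0.8, 0.75, 0.6])
absence = np.array([0.4, 0.3, 0.55, 0.2])

auc_val = auc(presence, absence)       # perfectly separable scores -> 1.0
tss_val = tss(presence, absence, 0.58)
```

An AUC of 0.94, as reported for the Boosted Regression Tree runs, means a randomly chosen presence site outscores a randomly chosen absence site 94% of the time.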

  7. Body Fat Percentage Prediction Using Intelligent Hybrid Approaches

    PubMed Central

    Shao, Yuehjen E.

    2014-01-01

    Excess of body fat often leads to obesity. Obesity is typically associated with serious medical diseases, such as cancer, heart disease, and diabetes. Accordingly, knowing the body fat is an extremely important issue since it affects everyone's health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods are often associated with hassle and/or high costs. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches to obtain fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling includes the use of MR and MARS to obtain fewer but more important sets of explanatory variables. In the second stage, the remaining important variables are served as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models. PMID:24723804

  8. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    USGS Publications Warehouse

    Balshi, M. S.; McGuire, A.D.; Duffy, P.; Flannigan, M.; Walsh, J.; Melillo, J.

    2009-01-01

    Fire is a common disturbance in the North American boreal forest that influences ecosystem structure and function. The temporal and spatial dynamics of fire are likely to be altered as climate continues to change. In this study, we ask the question: how will area burned in boreal North America by wildfire respond to future changes in climate? To evaluate this question, we developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was substantially more predictable in the western portion of boreal North America than in eastern Canada. Burned area was also not very predictable in areas of substantial topographic relief and in areas along the transition between boreal forest and tundra. At the scale of Alaska and western Canada, the empirical fire models explain on the order of 82% of the variation in annual area burned for the period 1960-2002. July temperature was the most frequently occurring predictor across all models, but the fuel moisture codes for the months June through August (as a group) entered the models as the most important predictors of annual area burned. To predict changes in the temporal and spatial dynamics of fire under future climate, the empirical fire models used output from the Canadian Climate Center CGCM2 global climate model to predict annual area burned through the year 2100 across Alaska and western Canada. Relative to 1991-2000, the results suggest that average area burned per decade will double by 2041-2050 and will increase on the order of 3.5-5.5 times by the last decade of the 21st century. 
To better predict wildfire across Alaska and Canada, future research should focus on incorporating additional effects of long-term and successional vegetation changes on area burned, to account more fully for interactions among fire, climate, and vegetation dynamics. © 2009 The Authors. Journal compilation © 2009 Blackwell Publishing Ltd.
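MARS, the method used above, builds its fit from hinge (truncated linear) basis functions max(0, x − t) and max(0, t − x) added in a forward pass. A minimal sketch showing that a pair of hinges plus an intercept exactly recovers a piecewise-linear response (the predictor and breakpoint are synthetic, not the fire-weather data):

```python
import numpy as np

def hinge_pair(x, knot):
    """The two mirrored MARS basis functions for a single knot."""
    return np.maximum(0.0, x - knot), np.maximum(0.0, knot - x)

# Synthetic predictor (e.g. a July temperature index) and a piecewise-linear
# response with a breakpoint at 15: flat below, slope 2 above.
x = np.linspace(0.0, 30.0, 61)
y = 5.0 + 2.0 * np.maximum(0.0, x - 15.0)

h_plus, h_minus = hinge_pair(x, 15.0)
X = np.column_stack([np.ones_like(x), h_plus, h_minus])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ coef
max_err = float(np.max(np.abs(fit - y)))   # exact recovery of the hinge shape
```

The full algorithm also searches over knot locations and variable interactions and then prunes terms, which is what lets it capture threshold-like responses of burned area to temperature and fuel moisture.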

  9. Characterizing vaccine-associated risks using cubic smoothing splines.

    PubMed

    Brookhart, M Alan; Walker, Alexander M; Lu, Yun; Polakowski, Laura; Li, Jie; Paeglow, Corrie; Puenpatom, Tosmai; Izurieta, Hector; Daniel, Gregory W

    2012-11-15

    Estimating risks associated with the use of childhood vaccines is challenging. The authors propose a new approach for studying short-term vaccine-related risks. The method uses a cubic smoothing spline to flexibly estimate the daily risk of an event after vaccination. The predicted incidence rates from the spline regression are then compared with the expected rates under a log-linear trend that excludes the days surrounding vaccination. The 2 models are then used to estimate the excess cumulative incidence attributable to the vaccination during the 42-day period after vaccination. Confidence intervals are obtained using a model-based bootstrap procedure. The method is applied to a study of known effects (positive controls) and expected noneffects (negative controls) of the measles, mumps, and rubella and measles, mumps, rubella, and varicella vaccines among children who are 1 year of age. The splines revealed well-resolved spikes in fever, rash, and adenopathy diagnoses, with the maximum incidence occurring between 9 and 11 days after vaccination. For the negative control outcomes, the spline model yielded a predicted incidence more consistent with the modeled day-specific risks, although there was evidence of increased risk of diagnoses of congenital malformations after vaccination, possibly because of a "provider visit effect." The proposed approach may be useful for vaccine safety surveillance.
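The day-specific risk curve above is estimated with a cubic smoothing spline, which trades goodness of fit against curvature via a smoothing parameter. A minimal sketch using SciPy's UnivariateSpline on synthetic post-vaccination rates (not the vaccine-safety data; the smoothing factor is an illustrative choice):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
days = np.arange(0.0, 42.0)                      # days since vaccination
true_risk = 1.0 + 2.0 * np.exp(-0.5 * ((days - 10.0) / 2.0) ** 2)
observed = true_risk + 0.3 * rng.standard_normal(days.size)  # noisy daily rates

# s bounds the residual sum of squares: sum((y - f(x))**2) <= s.
spl = UnivariateSpline(days, observed, k=3, s=days.size * 0.3**2)
smoothed = spl(days)

def roughness(v):
    """Sum of squared second differences: a discrete curvature measure."""
    return float(np.sum(np.diff(v, 2) ** 2))

rough_raw = roughness(observed)
rough_smooth = roughness(smoothed)               # much smaller after smoothing
peak_day = float(days[int(np.argmax(smoothed))]) # near the true peak at day 10
```

In the study's design, the smoothed curve is then compared against a log-linear trend fit away from vaccination to estimate the excess cumulative incidence over the 42-day window.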

  10. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
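
A minimal sketch of the FP1 building block that MFP uses: each candidate power from the standard set {-2, -1, -0.5, 0, 0.5, 1, 2, 3} (with 0 meaning log) is fitted by least squares, and the best-fitting power is kept. The data here are simulated, not from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated covariate-outcome pair with a log-shaped true effect.
x = rng.uniform(0.5, 5.0, size=300)
y = 2.0 + 1.5 * np.log(x) + rng.normal(0.0, 0.2, size=300)

# First-degree fractional polynomial (FP1): try each power from the
# standard set, fit by least squares, keep the power with the lowest RSS.
POWERS = (-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 3.0)  # 0 denotes log(x)

def fp_transform(x, p):
    return np.log(x) if p == 0.0 else x ** p

best_power, best_rss = None, np.inf
for p in POWERS:
    X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(res[0]) if res.size else float(np.sum((y - X @ beta) ** 2))
    if rss < best_rss:
        best_power, best_rss = p, rss
```

The full MFP procedure adds a closed test for whether any transformation is needed at all, FP2 (two-power) functions, and backward elimination across covariates; none of that is shown here.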

  11. Value of Information Analysis for Time-lapse Seismic Data by Simulation-Regression

    NASA Astrophysics Data System (ADS)

    Dutta, G.; Mukerji, T.; Eidsvik, J.

    2016-12-01

    A novel method to estimate the Value of Information (VOI) of time-lapse seismic data in the context of reservoir development is proposed. VOI is a decision analytic metric quantifying the incremental value that would be created by collecting information prior to making a decision under uncertainty. The VOI has to be computed before collecting the information and can be used to justify its collection. Previous work on estimating the VOI of geophysical data has involved explicit approximation of the posterior distribution of reservoir properties given the data and then evaluating the prospect values for that posterior distribution of reservoir properties. Here, we propose to directly estimate the prospect values given the data by building a statistical relationship between them using regression. Various regression techniques such as Partial Least Squares Regression (PLSR), Multivariate Adaptive Regression Splines (MARS) and k-Nearest Neighbors (k-NN) are used to estimate the VOI, and the results compared. For a univariate Gaussian case, the VOI obtained from simulation-regression has been shown to be close to the analytical solution. Estimating VOI by simulation-regression is much less computationally expensive since the posterior distribution of reservoir properties given each possible dataset need not be modeled and the prospect values need not be evaluated for each such posterior distribution of reservoir properties. This method is flexible, since it does not require rigid model specification of posterior but rather fits conditional expectations non-parametrically from samples of values and data.
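
The simulation-regression idea can be illustrated on a one-dimensional Gaussian toy problem where the answer is known in closed form. The prior, noise level, and k-NN settings below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy decision problem (hypothetical numbers): develop a prospect of
# uncertain value v ~ N(0, 1); the data are y = v + N(0, 0.5) noise.
# Without data the best action is "do not develop" (prior value 0).
n = 20000
v = rng.normal(0.0, 1.0, size=n)          # simulated prospect values
y = v + rng.normal(0.0, 0.5, size=n)      # simulated data

# Simulation-regression: estimate E[v | y] non-parametrically with a
# k-nearest-neighbour regression instead of modelling the posterior.
def knn_mean(y_all, v_all, y0, k=100):
    idx = np.argsort(np.abs(y_all - y0))[:k]
    return v_all[idx].mean()

grid = np.linspace(y.min(), y.max(), 200)
cond_on_grid = np.array([knn_mean(y, v, g) for g in grid])
cond_mean = np.interp(y, grid, cond_on_grid)

# VOI = E[ max(0, E[v|y]) ] - max(0, E[v]); the analytical answer for this
# Gaussian case is 0.8 * sqrt(1.25) / sqrt(2*pi), roughly 0.357.
voi = float(np.mean(np.maximum(cond_mean, 0.0))) - max(0.0, float(v.mean()))
```

The same template applies with MARS or PLSR in place of k-NN: any regression that estimates the conditional expectation of prospect value given data will do, which is exactly the flexibility the abstract claims.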

  12. Distributed wavefront reconstruction with SABRE for real-time large scale adaptive optics control

    NASA Astrophysics Data System (ADS)

    Brunner, Elisabeth; de Visser, Cornelis C.; Verhaegen, Michel

    2014-08-01

    We present advances on Spline based ABerration REconstruction (SABRE) from (Shack-)Hartmann (SH) wavefront measurements for large-scale adaptive optics systems. SABRE locally models the wavefront with simplex B-spline basis functions on triangular partitions which are defined on the SH subaperture array. This approach allows high accuracy through the possible use of nonlinear basis functions and great adaptability to any wavefront sensor and pupil geometry. The main contribution of this paper is a distributed wavefront reconstruction method, D-SABRE, which is a two-stage procedure based on decomposing the sensor domain into sub-domains, each supporting a local SABRE model. D-SABRE greatly decreases the computational complexity of the method and removes the need for centralized reconstruction, while obtaining a reconstruction accuracy for simulated E-ELT turbulence within 1% of the global method's accuracy. Further, a generalization of the methodology is proposed that makes direct use of SH intensity measurements, which leads to an improved accuracy of the reconstruction compared to centroid algorithms using spatial gradients.

  13. Random regression models using Legendre polynomials or linear splines for test-day milk yield of dairy Gyr (Bos indicus) cattle.

    PubMed

    Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G

    2013-01-01

    Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for both additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The latter model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
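
For readers unfamiliar with the basis involved, a small sketch of a third-order Legendre random-regression curve follows; the test-day range and coefficients are invented for illustration, not estimated from the data:

```python
import numpy as np
from numpy.polynomial import legendre

# Days in milk standardized to [-1, 1], the domain of Legendre polynomials.
dim = np.arange(5, 306)  # hypothetical test days 5..305
t = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0

# Third-order Legendre basis: P0 (constant) through P3 (cubic).
basis = np.column_stack([legendre.legval(t, np.eye(4)[j]) for j in range(4)])

# A random-regression animal effect is a linear combination of this basis;
# the coefficients here are purely illustrative.
coef = np.array([20.0, -3.0, 1.0, -0.5])
curve = basis @ coef
```

In the actual model each animal gets its own coefficient vector, drawn from a multivariate normal whose covariance matrix is the object of genetic evaluation; the spline alternative simply swaps this basis for piecewise-linear functions between knots.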

  14. Estimating Soil Cation Exchange Capacity from Soil Physical and Chemical Properties

    NASA Astrophysics Data System (ADS)

    Bateni, S. M.; Emamgholizadeh, S.; Shahsavani, D.

    2014-12-01

    The soil Cation Exchange Capacity (CEC) is an important soil characteristic that has many applications in soil science and environmental studies. For example, CEC influences soil fertility by controlling the exchange of ions in the soil. Measurement of CEC is costly and difficult. Consequently, several studies attempted to obtain CEC from readily measurable soil physical and chemical properties such as soil pH, organic matter, soil texture, bulk density, and particle size distribution. These studies have often used multiple regression or artificial neural network models. Regression-based models cannot capture the intricate relationship between CEC and soil physical and chemical attributes and provide inaccurate CEC estimates. Although neural network models perform better than regression methods, they act like a black box and cannot generate an explicit expression for retrieval of CEC from soil properties. In a departure from regression and neural network models, this study uses Gene Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS) to estimate CEC from easily measurable soil variables such as clay, pH, and OM. CEC estimates from GEP and MARS are compared with measurements at two field sites in Iran. Results show that GEP and MARS can estimate CEC accurately. Also, the MARS model performs slightly better than GEP. Finally, a sensitivity test indicates that organic matter and pH have, respectively, the least and the most significant impact on CEC.
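
MARS models are built from hinge basis functions max(0, x − knot) and max(0, knot − x). The sketch below fixes the knot by hand on simulated soil data rather than searching knots greedily as MARS proper does, and the variables and coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical soil data: CEC rises with clay (above a threshold) and OM.
clay = rng.uniform(5, 60, 400)    # clay content, %
om = rng.uniform(0.2, 5.0, 400)   # organic matter, %
cec = 5 + 0.3 * np.maximum(clay - 20, 0) + 2.0 * om + rng.normal(0, 0.5, 400)

# Hinge (truncated linear) pair at a hand-picked knot; the real algorithm
# searches over variables and knot locations and prunes by cross-validation.
def hinge(x, knot):
    return np.maximum(x - knot, 0.0), np.maximum(knot - x, 0.0)

h1, h2 = hinge(clay, 20.0)
X = np.column_stack([np.ones_like(clay), h1, h2, om])
beta, *_ = np.linalg.lstsq(X, cec, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((cec - pred) ** 2) / np.sum((cec - cec.mean()) ** 2)
```

Because the hinge pair is linear in its coefficients, the fit reduces to ordinary least squares once the knots are chosen, which is why MARS can remain fast while modelling thresholds and interactions.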

  15. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.

  16. 4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties

    NASA Astrophysics Data System (ADS)

    Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.

    2018-05-01

    4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-Fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetic parameters for which parametric maps were created with the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8.
2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
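
The spline-residue idea — a TAC as the input function convolved with a B-spline expansion of the residue function — can be sketched numerically. The input-function shape, knot locations, and weights below are illustrative stand-ins, not the paper's values:

```python
import numpy as np
from scipy.interpolate import BSpline

# Time grid (minutes) and a synthetic arterial input function with a
# hypothetical gamma-variate shape (illustrative, not patient data).
t = np.linspace(0.0, 60.0, 241)
dt = t[1] - t[0]
aif = t ** 2 * np.exp(-t / 1.5)
aif /= aif.max()

# Clamped cubic B-spline basis for the residue function.
knots = np.array([0, 0, 0, 0, 5, 15, 30, 60, 60, 60, 60], dtype=float)
n_basis = len(knots) - 4  # 7 cubic basis functions
basis = np.column_stack(
    [BSpline(knots, np.eye(n_basis)[j], 3, extrapolate=False)(t)
     for j in range(n_basis)]
)
basis = np.nan_to_num(basis)

# Spline-residue model: the TAC is the AIF convolved with a weighted sum
# of the basis functions; the weights here are illustrative, not fitted.
w = np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.15, 0.1])
residue = basis @ w
tac = np.convolve(aif, residue)[: len(t)] * dt
```

Because the AIF is near zero at early times, the convolution forces the modelled TAC toward zero there regardless of the weights, which is the early-time-frame constraint the abstract credits with suppressing noise.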

  17. Aerodynamic influence coefficient method using singularity splines.

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Weber, J. A.; Lesferd, E. P.

    1973-01-01

    A new numerical formulation, with computed results, is presented. This formulation combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of the loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all of the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise (termed 'spline'). Boundary conditions are satisfied in a least-squares error sense over the surface using a finite summing technique to approximate the integral.

  18. Neural networks for function approximation in nonlinear control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.; Stengel, Robert F.

    1990-01-01

    Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.
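
The recursive least-squares update used to adapt a linear-in-parameters approximator online can be sketched as follows. The hat-function basis and test function are stand-ins for the paper's B-spline setup, not its code:

```python
import numpy as np

# Linear "hat" B-spline basis on [0, 1] with knots every 0.1; any
# linear-in-parameters approximator works with the same RLS update.
knots = np.linspace(0.0, 1.0, 11)

def basis(x):
    # Triangular hat functions centred on the knots (spacing 0.1).
    return np.maximum(1.0 - np.abs(x - knots) / 0.1, 0.0)

def rls_update(theta, P, x, y, lam=1.0):
    phi = basis(x)
    k = P @ phi / (lam + phi @ P @ phi)    # RLS gain
    theta = theta + k * (y - phi @ theta)  # correct with prediction error
    P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return theta, P

rng = np.random.default_rng(4)
theta = np.zeros(11)
P = np.eye(11) * 100.0                     # large initial covariance
f = lambda x: np.sin(2 * np.pi * x)        # one-dimensional test function
for x in rng.uniform(0, 1, 2000):
    theta, P = rls_update(theta, P, x, f(x))

xs = np.linspace(0.05, 0.95, 50)
err = max(abs(basis(x) @ theta - f(x)) for x in xs)
```

The trade-off the abstract describes shows up directly here: the spline weights are local (each sample updates only nearby basis functions meaningfully), whereas a backpropagation network's weights are global, which changes both learning speed and interference between regions.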

  19. B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms

    NASA Astrophysics Data System (ADS)

    Bueno, G.; Sánchez, S.; Ruiz, M.

    2006-10-01

    Breast cancer continues to be an important health problem among women. Early detection is the only way to improve breast cancer prognosis and significantly reduce mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study the usefulness of B-spline filtering based on a gradient scheme, compared with wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.

  20. Goodness-Of-Fit Test for Nonparametric Regression Models: Smoothing Spline ANOVA Models as Example.

    PubMed

    Teran Hidalgo, Sebastian J; Wu, Michael C; Engel, Stephanie M; Kosorok, Michael R

    2018-06-01

    Nonparametric regression models do not require the specification of the functional form between the outcome and the covariates. Despite their popularity, the number of available diagnostic statistics, in comparison with their parametric counterparts, is small. We propose a goodness-of-fit test for nonparametric regression models with linear smoother form. In particular, we apply this testing framework to smoothing spline ANOVA models. The test can consider two sources of lack of fit: whether covariates that are not currently in the model need to be included, and whether the current model fits the data well. The proposed method derives estimated residuals from the model. Then, statistical dependence is assessed between the estimated residuals and the covariates using the Hilbert-Schmidt independence criterion (HSIC). If dependence exists, the model does not capture all the variability in the outcome associated with the covariates; otherwise, the model fits the data well. The bootstrap is used to obtain p-values. Application of the method is demonstrated with a neonatal mental development data analysis. We demonstrate correct type I error as well as power performance through simulations.
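
A biased empirical HSIC estimator with Gaussian kernels is only a few lines. This sketch on simulated residuals (fixed kernel bandwidth, no bootstrap calibration) shows how dependence between residuals and a covariate signals lack of fit:

```python
import numpy as np

rng = np.random.default_rng(5)

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels (1-D x and y)."""
    n = len(x)
    K = np.exp(-np.subtract.outer(x, x) ** 2 / (2 * sigma ** 2))
    L = np.exp(-np.subtract.outer(y, y) ** 2 / (2 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

# Residuals that still depend on a covariate signal lack of fit;
# independent residuals do not.
z = rng.uniform(-2, 2, 200)
dependent = z ** 2 + 0.1 * rng.normal(size=200)  # a model that missed z**2
independent = rng.normal(size=200)

stat_bad = hsic(z, dependent)
stat_ok = hsic(z, independent)
```

In the proposed test the null distribution of this statistic is obtained by bootstrap rather than by comparing two fits as done here; the bandwidth would also normally be set from the data (e.g. the median heuristic).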

  1. Sparse modeling of spatial environmental variables associated with asthma

    PubMed Central

    Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.

    2014-01-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437

  2. Sparse modeling of spatial environmental variables associated with asthma.

    PubMed

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. SU-E-J-119: Head-And-Neck Digital Phantoms for Geometric and Dosimetric Uncertainty Evaluation of CT-CBCT Deformable Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Z; Koyfman, S; Xia, P

    2015-06-15

    Purpose: To evaluate geometric and dosimetric uncertainties of CT-CBCT deformable image registration (DIR) algorithms using digital phantoms generated from real patients. Methods: We selected ten H&N cancer patients with adaptive IMRT. For each patient, a planning CT (CT1), a replanning CT (CT2), and a pretreatment CBCT (CBCT1) were used as the basis for digital phantom creation. Manually adjusted meshes were created for selected ROIs (e.g. PTVs, brainstem, spinal cord, mandible, and parotids) on CT1 and CT2. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was applied to CBCT1 to create a simulated mid-treatment CBCT (CBCT2). The CT-CBCT digital phantom consisted of CT1 and CBCT2, which were linked by the reference DVF. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten digital phantoms. The images, ROIs, and volumetric doses were mapped from CT1 to CBCT2 using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.83 to 0.94 for Demons, from 0.82 to 0.95 for B-Spline, and from 0.67 to 0.89 for intensity-based DIR. The average Hausdorff distances for selected ROIs were from 2.4 to 6.2 mm for Demons, from 1.8 to 5.9 mm for B-Spline, and from 2.8 to 11.2 mm for intensity-based DIR. The average absolute dose errors for selected ROIs were from 0.7 to 2.1 Gy for Demons, from 0.7 to 2.9 Gy for B-Spline, and from 1.3 to 4.5 Gy for intensity-based DIR. Conclusion: Using clinically realistic CT-CBCT digital phantoms, Demons and B-Spline were shown to have similar geometric and dosimetric uncertainties while intensity-based DIR had the worst uncertainties. CT-CBCT DIR has the potential to provide accurate CBCT-based dose verification for H&N adaptive radiotherapy.
    Z Shen: None; K Bzdusek: an employee of Philips Healthcare; S Koyfman: None; P Xia: received research grants from Philips Healthcare and Siemens Healthcare.

  4. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data

    PubMed Central

    García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio

    2016-01-01

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC–MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC–MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing milling machine's improvements. Finally, the conclusions of this study are presented. PMID:28787882

  5. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data.

    PubMed

    García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio

    2016-01-28

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC-MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC-MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing milling machine's improvements. Finally, the conclusions of this study are presented.

  6. A new improved study of cyanotoxins presence from experimental cyanobacteria concentrations in the Trasona reservoir (Northern Spain) using the MARS technique.

    PubMed

    García Nieto, P J; Alonso Fernández, J R; Sánchez Lasheras, F; de Cos Juez, F J; Díaz Muñiz, C

    2012-07-15

    Cyanotoxins, a kind of poisonous substance produced by cyanobacteria, are responsible for health risks in drinking and recreational water uses. The aim of this study is to improve our previous and successful work on cyanotoxin prediction from experimental cyanobacteria concentrations in the Trasona reservoir (Asturias, Northern Spain) using the multivariate adaptive regression splines (MARS) technique at a local scale. In fact, this new improvement consists of using not only biological variables, but also physical-chemical ones. As a result, the coefficient of determination improved from 0.84 to 0.94; that is to say, more accurate predictive calculations and a better approximation to the real problem were obtained. Finally, the agreement of the MARS model with experimental data confirmed its good performance. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Subpixel Snow Cover Mapping from MODIS Data by Nonparametric Regression Splines

    NASA Astrophysics Data System (ADS)

    Akyurek, Z.; Kuter, S.; Weber, G. W.

    2016-12-01

    Spatial extent of snow cover is often considered as one of the key parameters in climatological, hydrological and ecological modeling due to its energy storage, high reflectance in the visible and NIR regions of the electromagnetic spectrum, significant heat capacity and insulating properties. A significant challenge in snow mapping by remote sensing (RS) is the trade-off between the temporal and spatial resolution of satellite imagery. In order to tackle this issue, machine learning-based subpixel snow mapping methods, like Artificial Neural Networks (ANNs), from low or moderate resolution images have been proposed. Multivariate Adaptive Regression Splines (MARS) is a nonparametric regression tool that can build flexible models for high dimensional and complex nonlinear data. Although MARS is not often employed in RS, it has various successful implementations such as estimation of vertical total electron content in the ionosphere, atmospheric correction and classification of satellite images. This study is the first attempt in RS to evaluate the applicability of MARS for subpixel snow cover mapping from MODIS data. A total of 16 MODIS-Landsat ETM+ image pairs taken over the European Alps between March 2000 and April 2003 were used in the study. MODIS top-of-atmosphere reflectance, NDSI, NDVI and land cover classes were used as predictor variables. Cloud-covered, cloud shadow, water and bad-quality pixels were excluded from further analysis by a spatial mask. MARS models were trained and validated by using reference fractional snow cover (FSC) maps generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also developed. The mutual comparison of obtained MARS and ANN models was accomplished on independent test areas.
    The MARS model performed better than the ANN model, with an average RMSE of 0.1288 over the independent test areas, whereas the average RMSE of the ANN model was 0.1500. MARS estimates for low FSC values (i.e., FSC < 0.3) were better than those of ANN. Both ANN and MARS tended to overestimate medium FSC values (i.e., 0.3 < FSC < 0.7).

  8. Drought forecasting in eastern Australia using multivariate adaptive regression spline, least square support vector machine and M5Tree model

    NASA Astrophysics Data System (ADS)

    Deo, Ravinesh C.; Kisi, Ozgur; Singh, Vijay P.

    2017-02-01

    Drought forecasting using standardized metrics of rainfall is a core task in hydrology and water resources management. The Standardized Precipitation Index (SPI) is a rainfall-based metric that caters for the different time-scales at which drought occurs and, owing to its standardization, is well-suited to forecasting drought over different periods in climatically diverse regions. This study advances drought modelling using multivariate adaptive regression splines (MARS), least square support vector machine (LSSVM), and M5Tree models by forecasting SPI in eastern Australia. The MARS model incorporated rainfall as a mandatory predictor, with month (periodicity), the Southern Oscillation Index, the Pacific Decadal Oscillation Index and Indian Ocean Dipole, ENSO Modoki and Nino 3.0, 3.4 and 4.0 data added gradually. Performance was evaluated with the root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (r2). The best MARS model required different input combinations: rainfall, sea surface temperature and periodicity were used for all stations, but the ENSO Modoki and Pacific Decadal Oscillation indices were not required for Bathurst, Collarenebri and Yamba, and the Southern Oscillation Index was not required for Collarenebri. Inclusion of periodicity increased the r2 value by 0.5-8.1% and reduced RMSE by 3.0-178.5%. Comparisons showed that MARS outperformed its counterparts for three of the five stations, with lower MAE by 15.0-73.9% and 7.3-42.2%, respectively. For the other stations, M5Tree was better than MARS/LSSVM, with lower MAE by 13.8-13.4% and 25.7-52.2%, respectively, and for Bathurst, LSSVM yielded the more accurate result. For droughts identified by SPI ≤ - 0.5, accurate forecasts were attained by MARS/M5Tree for Bathurst, Yamba and Peak Hill, whereas for Collarenebri and Barraba, M5Tree was better than LSSVM/MARS. Seasonal analysis revealed disparate results, where MARS/M5Tree was better than LSSVM.
    The results highlight the importance of periodicity in drought forecasting and show that model accuracy varies with geographic and seasonal factors, owing to the complexity of drought and its relationship with the inputs and data attributes that affect the evolution of drought events.
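    The MARS models discussed above are built from pairs of mirrored hinge basis functions max(0, x - t) and max(0, t - x). As a minimal illustration of that idea (not the authors' drought model; the knot location and data here are hypothetical), a single hinge pair plus least squares recovers a piecewise-linear trend with a kink:

```python
import numpy as np

def hinge(x, knot, sign=+1):
    """MARS hinge basis: max(0, +(x - knot)) or max(0, -(x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.where(x < 4, 2.0 * x, 8.0) + rng.normal(0, 0.1, 200)  # slope change at x = 4

# Design matrix: intercept plus a mirrored hinge pair at a candidate knot.
# MARS searches over variables and knot locations; here the knot is fixed at 4.
knot = 4.0
X = np.column_stack([np.ones_like(x), hinge(x, knot, +1), hinge(x, knot, -1)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    For this data the fit recovers an intercept near 8, a near-zero slope above the knot, and a slope of about -2 on the mirrored hinge, reproducing y = 2x below the knot.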

  9. Robust neural network with applications to credit portfolio data analysis.

    PubMed

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (the robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm is developed for the optimization. A Monte Carlo simulation study is conducted to assess the performance of the RNN. Comparisons with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrate the advantage of the newly proposed procedure.
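    Quantile regression of any flavor, including the neural network version above, rests on the pinball (check) loss, whose minimizing constant is the empirical tau-quantile. A small sketch of that property (not the RNN itself; the data and grid are illustrative):

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Quantile (pinball) loss: tau-weighted absolute error around the prediction."""
    e = y - q_pred
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(1)
y = rng.normal(0, 1, 10_000)
tau = 0.9

# Minimizing the pinball loss over constants lands on the empirical 0.9-quantile,
# which is why models trained with this loss estimate conditional quantiles.
grid = np.linspace(-3, 3, 601)
losses = [pinball_loss(y, c, tau) for c in grid]
best = grid[int(np.argmin(losses))]
```

    For a standard normal sample the minimizer sits near 1.28, the 90th percentile.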

  10. Aerodynamic influence coefficient method using singularity splines

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Weber, J. A.; Lesferd, E. P.

    1974-01-01

    A numerical lifting surface formulation, including computed results for planar wing cases, is presented. This formulation, referred to as the vortex spline scheme, combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise. Boundary conditions are satisfied in a least-squares sense over the surface, using a finite summing technique to approximate the integral. The current formulation uses the elementary horseshoe vortex as the basic singularity and is therefore restricted to linearized potential flow. As part of the study, a nonplanar development was considered, but the numerical evaluation of the lifting surface concept was restricted to planar configurations. A second-order sideslip analysis based on an asymptotic expansion was also investigated using the singularity spline formulation.

  11. Short communication: Genetic variation of saturated fatty acids in Holsteins in the Walloon region of Belgium.

    PubMed

    Arnould, V M-R; Hammami, H; Soyeurt, H; Gengler, N

    2010-09-01

    Random regression test-day models using Legendre polynomials are commonly used for the estimation of genetic parameters and genetic evaluation for test-day milk production traits. However, some researchers have reported that these models have undesirable properties, such as the overestimation of variances at the edges of lactation. Describing the genetic variation of saturated fatty acids expressed in milk fat might require testing different models. Therefore, 3 functions were used and compared to account for the lactation curve: (1) Legendre polynomials of the same order as currently applied in the genetic model for production traits; (2) linear splines with 10 knots; and (3) linear splines with the same 10 knots reduced to 3 parameters. The criteria used were Akaike's information and Bayesian information criteria, percentage square biases, and the log-likelihood function. These criteria identified the Legendre polynomials and the linear splines with 10 knots reduced to 3 parameters as the most useful models. Reducing more complex models using eigenvalues seems appealing because the resulting models are less time demanding and can reduce convergence difficulties, as convergence properties also seemed to be improved. Finally, the results showed that the reduced spline model was very similar to the Legendre polynomials model. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
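    The two basis families compared in this record can be written down directly. A small sketch (knot positions and evaluation grid are illustrative, not those of the study) evaluating a Legendre basis and a linear spline "hat" basis on a standardized lactation stage:

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre basis (orders 0..3) on a standardized lactation stage in [-1, 1]
t = np.linspace(-1, 1, 9)
L = np.column_stack([legendre.legval(t, np.eye(4)[i]) for i in range(4)])

# Linear spline basis: "hat" functions that rise and fall between adjacent knots
knots = np.linspace(-1, 1, 5)

def hat(t, left, center, right):
    up = (t - left) / (center - left) if center > left else np.ones_like(t)
    down = (right - t) / (right - center) if right > center else np.ones_like(t)
    return np.clip(np.minimum(up, down), 0.0, 1.0)

B = np.column_stack([
    hat(t, knots[max(j - 1, 0)], knots[j], knots[min(j + 1, len(knots) - 1)])
    for j in range(len(knots))
])
```

    The hat functions form a partition of unity between the first and last knots, which is one reason linear splines behave well at the edges of lactation where high-order polynomials tend to oscillate.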

  12. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    Simulation-optimization methods entail a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is particularly good at handling high-dimensional and discontinuous data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating (bagging). In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by a statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate the global sensitivities of the head outputs to the input parameters, which are used to analyze their spatiotemporal importance for the model outputs. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
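    Two of the building blocks above are easy to state concretely. The paper does not spell out its normalization, so the sketch below assumes the common range normalization for NRMSE, and shows the bagging aggregation step (an average over bootstrap-model predictions):

```python
import numpy as np

def nrmse(observed, simulated):
    """RMSE normalized by the observed range (one common convention; the
    paper's exact normalization is not stated in the abstract)."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return rmse / (observed.max() - observed.min())

def bagged_prediction(member_predictions):
    """Bagging: average the predictions of models fit to bootstrap resamples."""
    return np.mean(np.asarray(member_predictions, float), axis=0)
```

    In a calibration loop, `nrmse(measured_heads, surrogate_heads)` would be the objective handed to the optimizer, with `surrogate_heads` produced by the bagged ensemble.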

  13. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance in certain situations, and comparable performance to the traditional squared norm penalty in others. The data sparsity method can therefore serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of the proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of the data sparsity constraint. PMID:27134575

  14. Sharpening method of satellite thermal image based on the geographical statistical model

    NASA Astrophysics Data System (ADS)

    Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng

    2016-04-01

    To improve the effectiveness of thermal sharpening in mountainous regions while respecting the laws of the land surface energy balance, a thermal sharpening method based on a geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of the land surface energy budget and thermal infrared electromagnetic radiation transmission, and high spatial resolution (57 m) raster layers were generated for these variables through spatial simulation or by using other raster data as proxies. On this basis, the locally adaptive statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables to obtain the high-resolution (57-m) BT image. This method produced a sharpening result with low error and good visual quality. The method avoids a blind choice of explanatory variables and removes the dependence on synchronous imagery at visible and near-infrared bands. The influences of the explanatory variable combination, the sampling method, and the residual error correction on the sharpening results were analyzed in detail, and their influence mechanisms are reported herein.

  15. Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines

    NASA Astrophysics Data System (ADS)

    Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.

    2016-12-01

    Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.

  16. Numerical solution of the Black-Scholes equation using cubic spline wavelets

    NASA Astrophysics Data System (ADS)

    Černá, Dana

    2016-12-01

    The Black-Scholes equation is used in financial mathematics for the computation of market values of options at a given time. We use the θ-scheme for time discretization and an adaptive wavelet-based scheme for discretization at each time level. Advantages of the proposed method are the small number of degrees of freedom, high-order accuracy with respect to the variables representing prices, and the relatively small number of iterations needed to solve the problem to a desired accuracy. We use several cubic spline wavelet and multi-wavelet bases and discuss their advantages and disadvantages. We also compare isotropic and anisotropic approaches. Numerical experiments are presented for the two-dimensional Black-Scholes equation.

  17. Novel forecasting approaches using combination of machine learning and statistical models for flood susceptibility mapping.

    PubMed

    Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah

    2018-07-01

    In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic (AUROC) belonged to boosted regression trees (0.975) and the lowest value was recorded for the generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. In spite of the outstanding performance of some individual models, the variability among their predictions was considerable. Therefore, to reduce uncertainty and to create more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
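    The EMmedian ensemble is simply the per-location median of the individual models' susceptibility scores, scored here with AUROC. A minimal sketch with hypothetical member scores (the AUROC helper uses the rank-sum identity and ignores ties for brevity):

```python
import numpy as np

def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) identity; assumes untied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Rows: individual models; columns: locations (scores are illustrative)
member_scores = np.array([[0.9, 0.2, 0.7, 0.4],
                          [0.8, 0.3, 0.6, 0.1],
                          [0.7, 0.4, 0.9, 0.2]])
em_median = np.median(member_scores, axis=0)   # EMmedian-style aggregation
labels = np.array([1, 0, 1, 0])                # 1 = flood occurrence
```

    Taking the median rather than the mean makes the ensemble robust to a single wildly wrong member, which is consistent with the stability argument made in the abstract.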

  18. Local Adaptive Calibration of the GLASS Surface Incident Shortwave Radiation Product Using Smoothing Spline

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Liang, S.; Wang, G.

    2015-12-01

    Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements are accurate but provide sparse spatial coverage, whereas the global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally for calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, with the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables, to train the thin plate spline model. We evaluated the performance of the approach using cross-validation at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicate that the thin plate smoothing spline method can be effectively used to calibrate satellite-derived ISR products with ground measurements to achieve better accuracy.
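    A thin-plate smoothing spline with elevation as an extra coordinate can be sketched with SciPy's `RBFInterpolator`, whose `thin_plate_spline` kernel and `smoothing` parameter give a smoothing (rather than interpolating) fit. The station layout and ISR relation below are synthetic stand-ins, not the GLASS/CEON data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic "station" locations (x, y) plus elevation as a third coordinate,
# so elevation influences the fitted ISR surface as in the study's design.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, (50, 2))
elev = rng.uniform(0, 2, (50, 1))
coords = np.hstack([xy, elev])
isr = 200 + 30 * coords[:, 0] - 10 * coords[:, 2] + rng.normal(0, 1, 50)

# smoothing > 0 yields a thin plate smoothing spline rather than exact interpolation
tps = RBFInterpolator(coords, isr, kernel='thin_plate_spline', smoothing=1.0)
pred = tps(coords)
```

    The fitted `tps` can then be evaluated on a dense (x, y, elevation) grid to produce the locally calibrated field.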

  19. Comparing the accuracy and precision of three techniques used for estimating missing landmarks when reconstructing fossil hominin crania.

    PubMed

    Neeser, Rudolph; Ackermann, Rebecca Rogers; Gain, James

    2009-09-01

    Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches, mean substitution, thin plate splines, and multiple linear regression, for estimating missing landmarks of damaged fossil specimens. Comparisons are made by varying the number of missing landmarks, the sample sizes, and the reference species of the population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available (or if the species of the damaged individual is uncertain), estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Copyright 2009 Wiley-Liss, Inc.

  20. The influence of the image registration method on the adaptive radiotherapy. A proof of the principle in a selected case of prostate IMRT.

    PubMed

    Berenguer, Roberto; de la Vara, Victoria; Lopez-Honrubia, Veronica; Nuñez, Ana Teresa; Rivera, Miguel; Villas, Maria Victoria; Sabater, Sebastia

    2018-01-01

    To analyse the influence of the image registration method on adaptive radiotherapy for an IMRT prostate treatment, and to compare the dose accumulation obtained with 3 different image registration methods against the planned dose, an IMRT prostate patient was CT imaged 3 times throughout his treatment. The prostate, PTV, rectum and bladder were segmented on each CT. A rigid, a deformable (DIR) B-spline, and a DIR with landmarks registration algorithm were employed. The differences between the accumulated and planned doses were evaluated by the gamma index. The Dice coefficient and Hausdorff distance were used to evaluate the overlap between volumes and quantify the quality of the registration. When comparing adaptive vs. no adaptive RT, the gamma index calculation showed large differences depending on the image registration method (as much as 87.6% in the case of the DIR B-spline). The quality of the registration was evaluated using the Dice coefficient, which showed that the best result was obtained with the DIR with landmarks; it was always above 0.77, reported as a recommended minimum value for prostate studies in a multi-centre review. Apart from showing the importance of applying an adaptive RT protocol in a particular treatment, this work shows that the choice of the registration method is decisive for the result of the adaptive radiotherapy and dose accumulation. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
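    The two registration-quality metrics used above have compact definitions: Dice is twice the overlap over the summed volumes, and the symmetric Hausdorff distance is the larger of the two directed distances. A toy sketch on binary masks (the masks are illustrative, not patient data):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True   # 36 voxels
b = np.zeros((10, 10), bool); b[4:8, 2:8] = True   # 24 voxels, fully inside a

d = dice(a, b)  # 2 * 24 / (36 + 24) = 0.8

# Symmetric Hausdorff distance between the two masks' point sets
pa, pb = np.argwhere(a), np.argwhere(b)
hd = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```

    Dice rewards bulk overlap while Hausdorff penalizes the single worst boundary mismatch, which is why the two are usually reported together when comparing registrations.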

  1. An adaptive interpolation scheme for molecular potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory is required for the electronic structure calculation. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
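    The core loop of such an adaptive scheme can be sketched in one dimension: interpolate on the current nodes, estimate the local error at interval midpoints, and add nodes only where the estimate exceeds a tolerance. This sketch swaps the paper's polyharmonic splines with partition of unity for a plain cubic spline on a hypothetical 1-D "surface"; the refinement logic is the point:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 1-D stand-in for a potential energy surface
f = lambda x: np.exp(-x**2) + 0.1 * np.sin(5 * x)

nodes = list(np.linspace(-3, 3, 7))   # coarse initial sampling
tol = 1e-3
for _ in range(30):
    nodes.sort()
    spl = CubicSpline(nodes, f(np.array(nodes)))
    mids = 0.5 * (np.array(nodes[:-1]) + np.array(nodes[1:]))
    err = np.abs(spl(mids) - f(mids))      # local error estimate at midpoints
    if err.max() < tol:
        break
    nodes.extend(mids[err >= tol])         # refine only where the estimate is large
```

    Refinement concentrates new evaluations of `f` (the expensive electronic structure call in the real setting) where the interpolant is still poor, instead of sampling uniformly.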

  2. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    PubMed

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  3. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property

    PubMed Central

    Storlie, Curtis B.; Bondell, Howard D.; Reich, Brian J.; Zhang, Hao Helen

    2010-01-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting. PMID:21603586

  4. The parametrization of radio source coordinates in VLBI and its impact on the CRF

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-04-01

    Usually, celestial radio sources in the celestial reference frame (CRF) catalog are divided into three categories: defining, special handling, and others. The defining sources are those used for the datum realization of the CRF, i.e., they are included in the No-Net-Rotation (NNR) constraints that maintain the axis orientation of the CRF, and they are modeled with a single set of constant coordinates. At the current level of precision, the choice of the defining sources has a significant effect on the coordinates. For the ICRF2, 295 sources were chosen as defining sources based on their geometrical distribution, statistical properties, and stability. The number of defining sources is a compromise between the reliability of the datum, which increases with the number of sources, and the noise introduced by each source; the optimal number is thus a trade-off between reliability, geometry, and precision. In the ICRF2, only 39 sources were sorted into the special handling group because they show large fluctuations in their positions; they are excluded from the NNR conditions, and their positions are normally estimated for each VLBI session instead of as global parameters. All remaining sources are classified as others; this is the largest group, containing sources that have not shown very problematic behavior but still do not fulfill the requirements for defining sources. However, a large fraction of the unstable sources show otherwise favorable characteristics, e.g., large flux density (brightness) and a long history of observations, so it would be advantageous to include them in the NNR condition. Their instability currently inhibits this; if the coordinate model of these sources were extended, they could be used for the NNR condition as well.
    Studies show that the behavior of each source can vary dramatically in time, so each source would have to be modeled individually. Given the sheer number of sources (more than 600 are included in our study), this sets practical limitations. We therefore use the multivariate adaptive regression splines (MARS) procedure to parametrize the source coordinates, as it allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way. The algorithm finds the ideal knot positions for the splines, and thus the best number of polynomial pieces, to fit the data. We compare linear and cubic splines determined by MARS with manually determined linear splines and investigate their impact on the CRF. Within this work we try to answer the following questions: How can we find optimal criteria for the definition of the defining and unstable sources? What are the best polynomials for the individual categories? How much can we improve the CRF by extending the parametrization of the sources?
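    Once MARS (or an analyst) has chosen knot positions, fitting the piecewise-linear coordinate model is a least-squares spline fit. A sketch with a hypothetical coordinate time series whose slope changes at a known epoch, using SciPy's `LSQUnivariateSpline` with degree 1 and a single interior knot:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Hypothetical source-coordinate time series with a slope change at t = 6
t = np.linspace(0, 10, 200)
coord = np.where(t < 6, 0.1 * t, 0.6 + 0.3 * (t - 6))

# Piecewise-linear (k=1) least-squares spline; MARS automates finding such knots
knots = [6.0]                                  # interior knot at the break epoch
spl = LSQUnivariateSpline(t, coord, knots, k=1)
resid = coord - spl(t)
```

    With the knot placed at the true break, the piecewise-linear spline reproduces the series essentially exactly; a mislocated knot would leave structured residuals, which is the signal knot-selection procedures exploit.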

  5. Multipollutant measurement error in air pollution epidemiology studies arising from predicting exposures with penalized regression splines

    PubMed Central

    Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.

    2016-01-01

    Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction for the case where exposure is predicted with penalized regression splines. Our simulations show that bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened the 95% confidence intervals. PMID:27789915
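    The non-parametric bootstrap used above for interval coverage resamples subjects with replacement and recomputes the health-effect estimate each time. A minimal percentile-interval sketch for a simple regression slope (synthetic data; the paper's estimator additionally applies its analytic bias correction inside each resample):

```python
import numpy as np

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Non-parametric bootstrap percentile CI for a simple regression slope."""
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample subjects with replacement
        slopes[b] = np.polyfit(x[idx], y[idx], 1)[0]
    return np.quantile(slopes, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 300)                    # stand-in for predicted exposure
y = 2.0 * x + rng.normal(0, 1, 300)          # stand-in for the health outcome
lo, hi = bootstrap_ci(x, y)
```

    Because the exposure here is itself a prediction, naive intervals can undercover; pairing the bootstrap with a bias correction, as in the paper, addresses that.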

  6. Genetic evaluation of egg production curve in Thai native chickens by random regression and spline models.

    PubMed

    Mookprom, S; Boonkum, W; Kunhareang, S; Siripanya, S; Duangjinda, M

    2017-02-01

    The objective of this research is to investigate appropriate random regression models, with various covariance functions, for the genetic evaluation of test-day egg production. Data included 7,884 monthly egg production records from 657 Thai native chickens (Pradu Hang Dam) obtained from the first to sixth generations, born during 2007 to 2014 at the Research and Development Network Center for Animal Breeding (Native Chickens), Khon Kaen University. Average annual and monthly egg productions were 117 ± 41 and 10.20 ± 6.40 eggs, respectively. Nine random regression models were analyzed using the Wilmink function (WM), the Koops and Grossman function (KG), Legendre polynomials of second, third, and fourth order (LG2, LG3, LG4), and spline functions with 4, 5, 6, and 8 knots (SP4, SP5, SP6, and SP8). All covariance functions were nested within the same additive genetic and permanent environmental random effects, and the variance components were estimated by restricted maximum likelihood (REML). In the model comparisons, mean square error (MSE) and the coefficient of determination (R2) measured goodness of fit, and the correlation between observed and predicted values [Formula: see text] was used to calculate the cross-validated predictive abilities. We found that the covariance functions of SP5, SP6, and SP8 proved appropriate for the genetic evaluation of the egg production curves of Thai native chickens. The estimated heritability of monthly egg production ranged from 0.07 to 0.39, and the highest heritability was found during the first to third months of egg production. In conclusion, spline functions of monthly egg production can be applied to breeding programs for the improvement of both egg number and persistence of egg production. © 2016 Poultry Science Association Inc.

  7. Random regression models using different functions to model test-day milk yield of Brazilian Holstein cows.

    PubMed

    Bignardi, A B; El Faro, L; Torres Júnior, R A A; Cardoso, V L; Machado, P F; Albuquerque, L G

    2011-10-31

    We analyzed 152,145 test-day records from 7317 first lactations of Holstein cows recorded from 1995 to 2003. Our objective was to model variations in test-day milk yield during the first lactation of Holstein cows by random regression model (RRM), using various functions in order to obtain adequate and parsimonious models for the estimation of genetic parameters. Test-day milk yields were grouped into weekly classes of days in milk, ranging from 1 to 44 weeks. The contemporary groups were defined as herd-test-day. The analyses were performed using a single-trait RRM, including the direct additive, permanent environmental and residual random effects. In addition, contemporary group and linear and quadratic effects of the age of cow at calving were included as fixed effects. The mean trend of milk yield was modeled with a fourth-order orthogonal Legendre polynomial. The additive genetic and permanent environmental covariance functions were estimated by random regression on two parametric functions, Ali and Schaeffer and Wilmink, and on B-spline functions of days in milk. The covariance components and the genetic parameters were estimated by the restricted maximum likelihood method. Results from RRM parametric and B-spline functions were compared to RRM on Legendre polynomials and with a multi-trait analysis, using the same data set. Heritability estimates presented similar trends during mid-lactation (13 to 31 weeks) and between week 37 and the end of lactation, for all RRM. Heritabilities obtained by multi-trait analysis were of a lower magnitude than those estimated by RRM. The RRMs with a higher number of parameters were more useful to describe the genetic variation of test-day milk yield throughout the lactation. RRM using B-spline and Legendre polynomials as base functions appears to be the most adequate to describe the covariance structure of the data.
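    The B-spline base functions of days in milk used in random regression models form a design matrix whose rows sum to one over the modeled range. A sketch of building such a cubic B-spline basis with SciPy (the knot placement here is illustrative, not the study's):

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis over days in milk, with a clamped (repeated-end) knot vector
k = 3
interior = np.linspace(1, 308, 7)[1:-1]                  # 5 interior knots
t = np.r_[[1.0] * (k + 1), interior, [308.0] * (k + 1)]  # clamped knot vector
dim = np.linspace(1, 307.9, 100)                         # days-in-milk grid
B = BSpline.design_matrix(dim, t, k).toarray()           # (100, n_basis) matrix
```

    In the genetic model, random regression coefficients multiply these columns, so each animal's lactation curve is a smooth combination of locally supported basis functions, which is what gives B-splines their good behavior at the edges of lactation compared with high-order polynomials.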

  8. Splines and control theory

    NASA Technical Reports Server (NTRS)

    Zhang, Zhimin; Tomlinson, John; Martin, Clyde

    1994-01-01

    In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including the polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate the properties of different spline approximations.

  9. Quantitative monitoring of sucrose, reducing sugar and total sugar dynamics for phenotyping of water-deficit stress tolerance in rice through spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Das, Bappa; Sahoo, Rabi N.; Pargal, Sourabh; Krishna, Gopal; Verma, Rakesh; Chinnusamy, Viswanathan; Sehgal, Vinay K.; Gupta, Vinod K.; Dash, Sushanta K.; Swain, Padmini

    2018-03-01

    In the present investigation, the changes in sucrose, reducing and total sugar content due to water-deficit stress in rice leaves were modeled using visible, near infrared (VNIR) and shortwave infrared (SWIR) spectroscopy. The objectives of the study were to identify the best vegetation indices and the most suitable multivariate technique based on precise analysis of hyperspectral data (350 to 2500 nm) and sucrose, reducing sugar and total sugar content measured at different stress levels in 16 different rice genotypes. Spectral data analysis was done to identify suitable spectral indices and models for sucrose estimation. Novel spectral indices in the near infrared (NIR) range, viz. the ratio spectral index (RSI) and normalised difference spectral indices (NDSI), sensitive to sucrose, reducing sugar and total sugar content were identified and subsequently calibrated and validated. The RSI and NDSI models had R2 values of 0.65, 0.71 and 0.67, and RPD values of 1.68, 1.95 and 1.66 for sucrose, reducing sugar and total sugar, respectively, for the validation dataset. Different multivariate spectral models such as artificial neural network (ANN), multivariate adaptive regression splines (MARS), multiple linear regression (MLR), partial least square regression (PLSR), random forest regression (RFR) and support vector machine regression (SVMR) were also evaluated. The best performing multivariate models for sucrose, reducing sugars and total sugars were found to be MARS, ANN and MARS, respectively, with RPD values of 2.08, 2.44, and 1.93. Results indicated that VNIR and SWIR spectroscopy combined with multivariate calibration can be used as a reliable alternative to conventional methods for measurement of sucrose, reducing sugars and total sugars of rice under water-deficit stress, as this technique is fast, economic, and noninvasive.
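
    For reference, the two index families have simple standard definitions; a minimal sketch with made-up reflectance values (the wavebands actually identified in the study are not reproduced here):

```python
import numpy as np

# Hypothetical reflectances at two NIR wavebands for three leaf samples.
r1 = np.array([0.42, 0.47, 0.51])    # reflectance at waveband 1 (illustrative)
r2 = np.array([0.30, 0.28, 0.33])    # reflectance at waveband 2 (illustrative)

rsi = r1 / r2                        # ratio spectral index, RSI = R1 / R2
ndsi = (r1 - r2) / (r1 + r2)         # normalised difference index, bounded in (-1, 1)
```

In practice every band pair is screened this way and the pair whose index correlates best with the measured sugar content is calibrated and validated.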

  10. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.

  11. A Machine-Learning and Filtering Based Data Assimilation Framework for Geologic Carbon Sequestration Monitoring Optimization

    NASA Astrophysics Data System (ADS)

    Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.

    2017-12-01

    Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate and/or CO2 concentration) should be measured, and (iii) what is the optimal frequency to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) for all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms, Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS), are used to develop computationally efficient reduced-order models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case, the Rock Spring Uplift carbon storage site in southwestern Wyoming.
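
    The reduced-order-model idea is simply to train a cheap regressor on a set of expensive simulator runs and then query the regressor instead of the simulator. A hedged sketch with scikit-learn's SVR on a synthetic stand-in for the flow simulations (all inputs, the response function, and hyperparameters are illustrative assumptions, not the study's configuration):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for full-physics simulations: two normalized input
# parameters and a smooth leakage-like response (illustrative only).
X = rng.uniform(0.0, 1.0, size=(300, 2))
y = np.sin(2 * np.pi * X[:, 0]) * X[:, 1]

# Train the reduced-order model on 250 "simulations" ...
rom = SVR(kernel='rbf', C=10.0, epsilon=0.01).fit(X[:250], y[:250])

# ... then query it cheaply in place of the full simulator.
pred = rom.predict(X[250:])
rmse = float(np.sqrt(np.mean((pred - y[250:]) ** 2)))
```

In the filtering loop, thousands of such cheap ROM evaluations replace repeated full CO2/brine flow simulations.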

  12. Predicting Ascospore Release of Monilinia vaccinii-corymbosi of Blueberry with Machine Learning.

    PubMed

    Harteveld, Dalphy O C; Grant, Michael R; Pscheidt, Jay W; Peever, Tobin L

    2017-11-01

    Mummy berry, caused by Monilinia vaccinii-corymbosi, causes economic losses of highbush blueberry in the U.S. Pacific Northwest (PNW). Apothecia develop from mummified berries overwintering on soil surfaces and produce ascospores that infect tissue emerging from floral and vegetative buds. Disease control currently relies on fungicides applied on a calendar basis rather than on inoculum availability. To establish a prediction model for ascospore release, apothecial development was tracked in three fields, one in western Oregon and two in northwestern Washington, in 2015 and 2016. Air and soil temperature, precipitation, soil moisture, leaf wetness, relative humidity and solar radiation were monitored using in-field weather stations and Washington State University's AgWeatherNet stations. Four modeling approaches were compared: logistic regression, multivariate adaptive regression splines, artificial neural networks, and random forest. A supervised learning approach was used to train the models on two data sets: training (70%) and testing (30%). The importance of environmental factors was calculated for each model separately. Soil temperature, soil moisture, and solar radiation were identified as the most important factors influencing ascospore release. Random forest models, with 78% accuracy, showed the best performance compared with the other models. Results of this research help PNW blueberry growers optimize fungicide use and reduce production costs.
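
    The workflow (70/30 split, random forest classifier, variable importances) can be sketched on synthetic data; the release rule below is an illustrative assumption, not the fitted model from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-ins for the three most important predictors.
n = 600
X = np.column_stack([
    rng.uniform(5, 25, n),    # soil temperature (deg C)
    rng.uniform(0, 1, n),     # soil moisture (fraction)
    rng.uniform(0, 30, n),    # solar radiation (MJ/m^2/day)
])
# Hypothetical release rule for demonstration purposes only.
release = ((X[:, 0] > 12) & (X[:, 1] > 0.4)).astype(int)

# 70/30 supervised-learning split, as in the study design.
X_tr, X_te, y_tr, y_te = train_test_split(X, release, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

accuracy = clf.score(X_te, y_te)
importances = clf.feature_importances_   # per-predictor importance scores
```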

  13. An adaptive interpolation scheme for molecular potential energy surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalewski, Markus, E-mail: mkowalew@uci.edu; Larsson, Elisabeth; Heryudono, Alfa

    The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.

  14. Motion artifact detection and correction in functional near-infrared spectroscopy: a new hybrid method based on spline interpolation method and Savitzky-Golay filtering.

    PubMed

    Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A

    2018-01-01

    Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error ([Formula: see text]), Pearson's correlation ([Formula: see text]), and the area under the receiver operator characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
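
    A schematic sketch of the two-step idea on a synthetic trace (the artifact segment is assumed known here, whereas the published method detects shifts automatically; window lengths and smoothing levels are illustrative):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import savgol_filter

# Synthetic fNIRS-like trace: slow oscillation + baseline shift + spikes.
t = np.linspace(0.0, 10.0, 1000)
clean = np.sin(2 * np.pi * 0.1 * t)
y = clean.copy()
y[400:600] += 2.0            # baseline shift (motion artifact segment)
y[250] += 3.0                # sharp motion spike
y[750] -= 3.0                # sharp motion spike

# Step 1 (spline interpolation): model the shifted segment with a heavily
# smoothed spline, subtract it, and re-anchor the segment to the signal
# level just before the shift, removing the baseline jump.
seg = slice(400, 600)
trend = UnivariateSpline(t[seg], y[seg], s=float(seg.stop - seg.start))
y_base = y.copy()
y_base[seg] = y[seg] - trend(t[seg]) + y[seg.start - 1]

# Step 2 (Savitzky-Golay): smooth out the remaining sharp spikes.
y_corr = savgol_filter(y_base, window_length=31, polyorder=3)
```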

  15. Contour propagation for lung tumor delineation in 4D-CT using tensor-product surface of uniform and non-uniform closed cubic B-splines

    NASA Astrophysics Data System (ADS)

    Jin, Renchao; Liu, Yongchuan; Chen, Mi; Zhang, Sheng; Song, Enmin

    2018-01-01

    A robust contour propagation method is proposed to help physicians delineate lung tumors on all phase images of four-dimensional computed tomography (4D-CT) by only manually delineating the contours on a reference phase. The proposed method models the trajectory surface swept by a contour in a respiratory cycle as a tensor-product surface of two closed cubic B-spline curves: a non-uniform B-spline curve which models the contour and a uniform B-spline curve which models the trajectory of a point on the contour. The surface is treated as a deformable entity, and is optimized from an initial surface by moving its control vertices such that the sum of the intensity similarities between the sampling points on the manually delineated contour and their corresponding ones on different phases is maximized. The initial surface is constructed by fitting the manually delineated contour on the reference phase with a closed B-spline curve. In this way, the proposed method can focus the registration on the contour instead of the entire image to prevent the deformation of the contour from being smoothed by its surrounding tissues, and greatly reduce the time consumption while keeping the accuracy of the contour propagation as well as the temporal consistency of the estimated respiratory motions across all phases in 4D-CT. Eighteen 4D-CT cases with 235 gross tumor volume (GTV) contours on the maximal inhale phase and 209 GTV contours on the maximal exhale phase are manually delineated slice by slice. The maximal inhale phase is used as the reference phase, which provides the initial contours. On the maximal exhale phase, the Jaccard similarity coefficient between the propagated GTV and the manually delineated GTV is 0.881 ± 0.026, and the Hausdorff distance is 3.07 ± 1.08 mm. The time for propagating the GTV to all phases is 5.55 ± 6.21 min. The results are better than those of the fast adaptive stochastic gradient descent B-spline method, the 3D+t B-spline method and the diffeomorphic demons method. The proposed method is useful for helping physicians delineate target volumes efficiently and accurately.
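
    The closed cubic B-spline curves used to build the tensor-product surface can be constructed with standard wrapped control points and a uniform knot vector; a minimal sketch (the contour below is an illustrative polygon, not clinical data):

```python
import numpy as np
from scipy.interpolate import BSpline

# Closed (periodic) cubic B-spline curve: wrap the first k control points
# and use a uniform knot vector so the curve closes smoothly.
k = 3
ctrl = np.array([[0, 0], [2, 0], [3, 1], [2, 2], [0, 2], [-1, 1]], dtype=float)
n = len(ctrl)

c = np.vstack([ctrl, ctrl[:k]])               # wrapped control points
t = np.arange(-k, n + k + 1, dtype=float)     # uniform knots; valid domain [0, n]
curve = BSpline(t, c, k)

u = np.linspace(0.0, float(n), 200)
pts = curve(u)                                # 200 points on the closed contour
```

A second (uniform) closed B-spline in the phase direction then spans these contour curves into the tensor-product trajectory surface.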

  16. Shape selection in Landsat time series: A tool for monitoring forest dynamics

    Treesearch

    Gretchen G. Moisen; Mary C. Meyer; Todd A. Schroeder; Xiyue Liao; Karen G. Schleeweis; Elizabeth A. Freeman; Chris Toney

    2016-01-01

    We present a new methodology for fitting nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral band or index of choice in temporal Landsat data, our method delivers a smoothed rendition of...

  17. Stagnation in Mortality Decline among Elders in the Netherlands

    ERIC Educational Resources Information Center

    Janssen, Fanny; Nusselder, Wilma J.; Looman, Caspar W. N.; Mackenbach, Johan P.; Kunst, Anton E.

    2003-01-01

    Purpose: This study assesses whether the stagnation of old-age (80+) mortality decline observed in The Netherlands in the 1980s continued in the 1990s and determines which factors contributed to this stagnation. Emphasis is on the role of smoking. Design and Methods: Poisson regression analysis with linear splines was applied to total and…

  18. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    PubMed

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

    There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on mpMRI data. Final models were fitted using three regression algorithms including multivariate adaptive regression spline (MARS), polynomial regression (PR) and generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved an RMSE of 1.06 (±0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.

  19. Theory, computation, and application of exponential splines

    NASA Technical Reports Server (NTRS)

    Mccartin, B. J.

    1981-01-01

    A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape preserving approximant is developed. A sequence of selected curve-fitting examples are presented which clearly demonstrate the advantages of exponential splines over cubic splines.
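
    As a brief note on the underlying mathematics (standard tension-spline theory, stated here as background rather than quoted from the report): on each knot interval the exponential spline satisfies a fourth-order differential equation whose solution space interpolates between the cubic and linear regimes as the tension parameter varies.

```latex
% Exponential spline (spline in tension) on the interval [x_i, x_{i+1}]
% with tension parameter p_i \ge 0:
s''''(x) - p_i^2\, s''(x) = 0
\quad\Longrightarrow\quad
s_i(x) = a_i + b_i x + c_i \sinh(p_i x) + d_i \cosh(p_i x).
% Limiting behaviour:
%   p_i \to 0:      the basis degenerates to \{1, x, x^2, x^3\} (cubic spline);
%   p_i \to \infty: the hyperbolic terms are pulled taut between the knots,
%                   and s_i tends to the linear interpolant.
```

Choosing the `p_i` per interval is exactly the tension-selection problem the abstract's shape-preserving algorithm addresses.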

  20. Pitfalls in statistical landslide susceptibility modelling

    NASA Astrophysics Data System (ADS)

    Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut

    2010-05-01

    The use of statistical methods is a well-established approach to predict landslide occurrence probabilities and to assess landslide susceptibility. This is achieved by applying statistical methods relating historical landslide inventories to topographic indices as predictor variables. In our contribution, we compare several new and powerful methods developed in machine learning and well-established in landscape ecology and macroecology for predicting the distribution of shallow landslides in tropical mountain rainforests in southern Ecuador (among others: boosted regression trees, multivariate adaptive regression splines, maximum entropy). Although these methods are powerful, we think it is necessary to follow a basic set of guidelines to avoid some pitfalls regarding data sampling, predictor selection, and model quality assessment, especially if a comparison of different models is contemplated. We therefore suggest applying a novel toolbox to evaluate approaches to the statistical modelling of landslide susceptibility. Additionally, we propose some methods to open the "black box" as an inherent part of machine learning methods in order to achieve further explanatory insights into preparatory factors that control landslides. Sampling of training data should be guided by hypotheses regarding processes that lead to slope failure, taking into account their respective spatial scales. This approach leads to the selection of a set of candidate predictor variables considered on adequate spatial scales. This set should be checked for multicollinearity in order to facilitate model response curve interpretation. Model quality describes how well a model is able to reproduce independent observations of its response variable. This includes criteria to evaluate different aspects of model performance, i.e. model discrimination, model calibration, and model refinement. In order to assess a possible violation of the assumption of independence in the training samples, or a possible lack of explanatory information in the chosen set of predictor variables, the model residuals need to be checked for spatial autocorrelation. Therefore, we calculate spline correlograms. In addition, we investigate partial dependency plots and bivariate interaction plots, considering possible interactions between predictors, to improve model interpretation. Aiming at presenting this toolbox for model quality assessment, we investigate the influence of strategies for constructing training datasets on the quality of statistical models.

  1. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t}, where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position t along a tract in the brain. In one example, the response is disease status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.

  2. WE-A-17A-06: Evaluation of an Automatic Interstitial Catheter Digitization Algorithm That Reduces Treatment Planning Time and Provides Means for Adaptive Re-Planning in HDR Brachytherapy of Gynecologic Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dise, J; Liang, X; Lin, L

    Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides means for adaptive re-planning in HDR brachytherapy of gynecologic cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted in the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day one CT scans were applied to subsequent CTs and forward calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB-approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatic and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 minutes to 43 minutes with a mean digitization time of 37 minutes. The bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatic and manually digitized plans. D90% to the CTV was 91.5±4.4% for the manual digitization versus 91.4±4.4% for the automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template. This automatic digitization tool was shown to be accurate compared to manual digitization.

  3. Multivariate Spline Algorithms for CAGD

    NASA Technical Reports Server (NTRS)

    Boehm, W.

    1985-01-01

    Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can easily be subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline is expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points when an additional knot is inserted.
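
    In the univariate case the recursion the abstract refers to is the familiar Cox-de Boor formula; a minimal sketch (generic textbook construction, not taken from this report) that bottoms out at the order-1 piecewise constants:

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """B-spline basis N_{i,k}(x) of order k (degree k-1) on knot vector t,
    via the Cox-de Boor recursion expressing a B-spline in terms of
    B-splines of lower order."""
    if k == 1:
        # Order-1 B-splines: the piecewise constants the recursion ends on.
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    left = 0.0
    if t[i + k - 1] > t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# Uniform knots; cubic (order-4) B-splines form a partition of unity
# on the interior interval [t[3], t[7]] = [3, 7].
t = np.arange(11.0)
x = np.linspace(3.0, 7.0, 50, endpoint=False)
total = sum(bspline_basis(i, 4, t, x) for i in range(len(t) - 4))
```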

  4. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with the weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.
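
    The transfinite interpolation at the heart of such grid generators is, in its bilinearly blended form, the classical Coons patch: a surface whose boundary matches four given curves exactly. A generic sketch (the boundary curves are illustrative, not GENIE's geometry):

```python
import numpy as np

def coons_patch(c_bottom, c_top, c_left, c_right, u, v):
    """Bilinearly blended transfinite interpolation (Coons patch).
    Each c_* maps an array of parameters in [0, 1] to 2D boundary points."""
    U, V = np.meshgrid(u, v, indexing='ij')
    U = U[..., None]; V = V[..., None]
    Sb, St = c_bottom(u)[:, None, :], c_top(u)[:, None, :]
    Sl, Sr = c_left(v)[None, :, :], c_right(v)[None, :, :]
    P00, P10 = c_bottom(np.array([0.0]))[0], c_bottom(np.array([1.0]))[0]
    P01, P11 = c_top(np.array([0.0]))[0], c_top(np.array([1.0]))[0]
    # Blend the two rulings and subtract the bilinear corner correction.
    return ((1 - V) * Sb + V * St + (1 - U) * Sl + U * Sr
            - ((1 - U) * (1 - V) * P00 + U * (1 - V) * P10
               + (1 - U) * V * P01 + U * V * P11))

# Illustrative boundary curves with consistent corners.
c_bottom = lambda s: np.column_stack([s, 0.2 * np.sin(np.pi * s)])
c_top    = lambda s: np.column_stack([s, 1.0 + 0.1 * np.sin(np.pi * s)])
c_left   = lambda s: np.column_stack([np.zeros_like(s), s])
c_right  = lambda s: np.column_stack([np.ones_like(s), s])

u = np.linspace(0.0, 1.0, 21)
v = np.linspace(0.0, 1.0, 21)
grid = coons_patch(c_bottom, c_top, c_left, c_right, u, v)   # (21, 21, 2) grid
```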

  5. Nonlinear adaptive networks: A little theory, a few applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R.D.; Qian, S.; Barnes, C.W.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in the Venice Lagoon, sonar transient detection, control of nonlinear processes, balancing a double inverted pendulum, and design advice for free electron lasers. 26 refs., 23 figs.

  6. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. In general, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, which were compared using the coefficient of determination (R²) and root mean square error (RMSE). The Two-Term model was found to describe the drying behavior best. In addition, the drying rate smoothed using the CS proved to be an effective estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
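
    The Two-Term thin-layer drying model has the standard form MR(t) = a·exp(-k0·t) + b·exp(-k1·t); fitting it and scoring with R² and RMSE can be sketched on synthetic data (the parameter values below are illustrative, not the seaweed results):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_term(t, a, k0, b, k1):
    """Two-Term thin-layer drying model for the moisture ratio MR(t)."""
    return a * np.exp(-k0 * t) + b * np.exp(-k1 * t)

rng = np.random.default_rng(3)
t = np.linspace(0, 96, 40)                       # drying time (h), illustrative
mr_true = two_term(t, 0.7, 0.08, 0.3, 0.01)      # synthetic "true" moisture ratio
mr_obs = mr_true + rng.normal(0, 0.005, t.size)  # add measurement noise

# Nonlinear least squares fit of the four model parameters.
p, _ = curve_fit(two_term, t, mr_obs, p0=[0.5, 0.05, 0.5, 0.02])
mr_fit = two_term(t, *p)

rmse = float(np.sqrt(np.mean((mr_fit - mr_obs) ** 2)))
r2 = 1 - np.sum((mr_obs - mr_fit) ** 2) / np.sum((mr_obs - mr_obs.mean()) ** 2)
```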

  8. Analysis of oscillatory motion of a light airplane at high values of lift coefficient

    NASA Technical Reports Server (NTRS)

    Batterson, J. G.

    1983-01-01

    A modified stepwise regression is applied to flight data from a light research airplane operating at high angles of attack. The well-known phenomenon referred to as bucking or porpoising is analyzed and modeled using both power series and spline expansions of the aerodynamic force and moment coefficients associated with the longitudinal equations of motion.

  9. ShapeSelectForest: a new r package for modeling landsat time series

    Treesearch

    Mary Meyer; Xiyue Liao; Gretchen Moisen; Elizabeth Freeman

    2015-01-01

    We present a new R package called ShapeSelectForest recently posted to the Comprehensive R Archival Network. The package was developed to fit nonparametric shape-restricted regression splines to time series of Landsat imagery for the purpose of modeling, mapping, and monitoring annual forest disturbance dynamics over nearly three decades. For each pixel and spectral...

  10. Rational-spline approximation with automatic tension adjustment

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Kerr, P. A.

    1984-01-01

    An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.

  11. Spline approximation, Part 1: Basic methodology

    NASA Astrophysics Data System (ADS)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
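    A minimal sketch of the truncated-polynomial construction discussed in Part 1: a cubic spline is written in the truncated power basis {1, x, x², x³, (x − k)₊³} and fitted to noisy 2D curve samples by least squares. The data and knot positions are illustrative, not from the paper:

```python
import numpy as np

def truncated_power_design(x, knots, degree=3):
    """Design matrix: ordinary polynomial columns plus one truncated term per knot."""
    cols = [x**p for p in range(degree + 1)]
    cols += [np.where(x > k, (x - k)**degree, 0.0) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 80)
y = np.sin(x) + rng.normal(0, 0.05, x.size)   # noisy samples of a 2D curve

knots = [2.5, 5.0, 7.5]                       # illustrative knot positions
A = truncated_power_design(x, knots)          # 80 x 7 design matrix
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ coef
rmse = float(np.sqrt(np.mean((fitted - y)**2)))
```

    The truncated terms add one degree of freedom per knot while keeping the fit C²-continuous, which is exactly why this basis is a convenient starting point before moving to B-splines in Part 2.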

  12. Prediction of wastewater quality indicators at the inflow to the wastewater treatment plant using data mining methods

    NASA Astrophysics Data System (ADS)

    Szeląg, Bartosz; Barbusiński, Krzysztof; Studziński, Jan; Bartkiewicz, Lidia

    2017-11-01

    In the study, models developed using data mining methods are proposed for predicting wastewater quality indicators - biochemical and chemical oxygen demand, total suspended solids, total nitrogen and total phosphorus - at the inflow to a wastewater treatment plant (WWTP). The models are based on values measured in previous time steps and daily wastewater inflows. Independent prediction systems that can be used in case of monitoring device malfunction are also provided. Models of wastewater quality indicators were developed using the MARS (multivariate adaptive regression splines) method, artificial neural networks (ANN) of the multilayer perceptron type combined with a self-organizing map classification model (SOM), and cascade neural networks (CNN). The lowest absolute and relative errors were obtained using ANN+SOM, whereas the MARS method produced the highest error values. It was shown that for the analysed WWTP it is possible to obtain continuous prediction of selected wastewater quality indicators using the two independent prediction systems developed. Such models can ensure reliable WWTP operation when wastewater quality monitoring systems become inoperable or are under maintenance.
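    The MARS idea used here can be illustrated in a stripped-down form: expand an input into hinge functions max(0, x − t) and max(0, t − x) at candidate knots and fit the expansion by least squares. Full MARS adds a greedy forward/backward term search; the data below are synthetic, not WWTP measurements:

```python
import numpy as np

def hinge_basis(x, knots):
    """Intercept plus a pair of hinge functions per candidate knot."""
    feats = [np.ones_like(x)]
    for t in knots:
        feats.append(np.maximum(0.0, x - t))
        feats.append(np.maximum(0.0, t - x))
    return np.column_stack(feats)

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 300)                   # a single synthetic predictor
y = np.where(x < 4, 2.0 * x, 8.0 - 0.5 * (x - 4)) + rng.normal(0, 0.1, x.size)

knots = np.quantile(x, [0.2, 0.4, 0.6, 0.8])  # candidate knots at data quantiles
B = hinge_basis(x, knots)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
rmse = float(np.sqrt(np.mean((B @ coef - y)**2)))
```

    Because the target has a kink near x = 4, the hinge expansion recovers it while a plain linear fit could not; this adaptivity to local structure is what MARS contributes in the study.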

  13. Effects of high summer temperatures on mortality in 50 Spanish cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio; Diaz, Julio

    2014-06-09

    Periods of high temperature have been widely found to be associated with excess mortality, but with variable relationships in different cities. How these specifics depend on climatic and other characteristics of cities is not well understood. We assess summer temperature-mortality relationships using data from 50 provincial capitals in Spain during the period 1990-2004. Poisson time series regression analyses were applied to daily temperature and mortality data, adjusting for potential confounding seasonal factors. Associations of heat with mortality were summarised for each city as the risk increments at the 99th compared to the 90th percentile of the whole-year temperature distribution, as predicted from spline curves. Risk increments averaged 14.6% between these two centiles, or 3.3% per 1°C. Although risk increments varied substantially between cities, the range of temperature from the 90th to the 99th centile was the only characteristic independently and significantly associated with them. The heat increment did not depend on other climatic, socio-demographic and geographic city characteristics. Cities in Spain are partially adapted to high mean summer temperatures but not to high variation in summer temperatures.

  14. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    NASA Astrophysics Data System (ADS)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
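    The paper's central observation - that more equidistant spline sampling points do not necessarily improve the recovered background - can be reproduced on synthetic data with a least-squares spline. The knot counts and noise level below are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 400)
background = np.sin(2 * np.pi * x)                 # the "background" state
series = background + rng.normal(0, 0.3, x.size)   # superimposed fluctuations

def fit_background(n_interior_knots):
    """Least-squares cubic spline with equidistant interior knots."""
    knots = np.linspace(0, 1, n_interior_knots + 2)[1:-1]
    return LSQUnivariateSpline(x, series, knots)(x)

# With dense knots the spline starts chasing the fluctuations it should remove
err_coarse = float(np.sqrt(np.mean((fit_background(4) - background)**2)))
err_dense = float(np.sqrt(np.mean((fit_background(60) - background)**2)))
```

    Here the dense-knot fit recovers the background worse than the coarse one, which is the artificial-oscillation effect the repeating spline approach is designed to suppress.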

  15. Prediction and control of chaotic processes using nonlinear adaptive networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R.D.; Barnes, C.W.; Flake, G.W.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.

  16. Estimating future burned areas under changing climate in the EU-Mediterranean countries.

    PubMed

    Amatulli, Giuseppe; Camia, Andrea; San-Miguel-Ayanz, Jesús

    2013-04-15

    The impacts of climate change on forest fires have received increased attention in recent years at both continental and local scales. It is widely recognized that weather plays a key role in extreme fire situations. It is therefore of great interest to analyze projected changes in fire danger under climate change scenarios and to assess the consequent impacts of forest fires. In this study we estimated burned areas in the European Mediterranean (EU-Med) countries under past and future climate conditions. Historical (1985-2004) monthly burned areas in EU-Med countries were modeled by using the Canadian Fire Weather Index (CFWI). Monthly averages of the CFWI sub-indices were used as explanatory variables to estimate the monthly burned areas in each of the five most affected countries in Europe using three different modeling approaches (Multiple Linear Regression - MLR, Random Forest - RF, Multivariate Adaptive Regression Splines - MARS). MARS outperformed the other methods. Regression equations and significant coefficients of determination were obtained, although there were noticeable differences from country to country. Climatic conditions at the end of the 21st Century were simulated using results from the runs of the regional climate model HIRHAM in the European project PRUDENCE, considering two IPCC SRES scenarios (A2-B2). The MARS models were applied to both scenarios resulting in projected burned areas in each country and in the EU-Med region. Results showed that significant increases, 66% and 140% of the total burned area, can be expected in the EU-Med region under the A2 and B2 scenarios, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. A hybrid PCA-CART-MARS-based prognostic approach of the remaining useful life for aircraft engines.

    PubMed

    Sánchez Lasheras, Fernando; García Nieto, Paulino José; de Cos Juez, Francisco Javier; Mayo Bayón, Ricardo; González Suárez, Victor Manuel

    2015-03-23

    Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines.
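    A hedged sketch of the PCA and CART stages of the hybrid pipeline using scikit-learn; the MARS stage and dendrograms are omitted, and the engine "sensor" features and remaining-useful-life target below are synthetic stand-ins built from hidden degradation factors:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
latent = rng.normal(size=(500, 3))                 # hidden degradation factors
loadings = rng.normal(size=(3, 12))
X = latent @ loadings + 0.1 * rng.normal(size=(500, 12))  # correlated "sensors"
rul = 100.0 - 5.0 * latent[:, 0] + 3.0 * latent[:, 1] + rng.normal(0, 1.0, 500)

# PCA compresses the correlated sensor channels; CART regresses RUL on the scores
model = make_pipeline(PCA(n_components=5),
                      DecisionTreeRegressor(max_depth=4, random_state=0))
model.fit(X[:400], rul[:400])
r2 = float(model.score(X[400:], rul[400:]))        # held-out R^2
```

    The pipeline works here because the sensor channels are redundant, so a few principal components retain the degradation signal - the same rationale the paper gives for the PCA stage.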

  18. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
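    Of the four SA approaches compared, the standardized-regression-coefficients one is simple enough to sketch directly: standardize inputs and output, fit a linear model, and rank parameters by coefficient magnitude. The toy response below stands in for the CLM runoff/latent-heat outputs:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
params = rng.uniform(0, 1, size=(n, 3))            # sampled parameter sets
# toy model response: parameter 0 matters most, parameter 2 barely at all
output = 4.0 * params[:, 0] + 1.0 * params[:, 1] + 0.1 * params[:, 2] \
         + rng.normal(0, 0.1, n)

Z = (params - params.mean(axis=0)) / params.std(axis=0)
w = (output - output.mean()) / output.std()
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Z]), w, rcond=None)
sensitivity = np.abs(coef[1:])                     # SRC importance ranking
```

    Standardizing first makes the coefficients comparable across parameters with different units, which is the point of the SRC approach; the variance-based and MARS-based approaches in the study additionally capture nonlinear and interaction effects.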

  19. A Hybrid PCA-CART-MARS-Based Prognostic Approach of the Remaining Useful Life for Aircraft Engines

    PubMed Central

    Lasheras, Fernando Sánchez; Nieto, Paulino José García; de Cos Juez, Francisco Javier; Bayón, Ricardo Mayo; Suárez, Victor Manuel González

    2015-01-01

    Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines. PMID:25806876

  20. Hierarchical Volume Representation with ³√2 Subdivision and Trivariate B-Spline Wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linsen, L; Gray, JT; Pascucci, V

    2002-01-11

    Multiresolution methods provide a means for representing data at multiple levels of detail. They are typically based on a hierarchical data organization scheme and update rules needed for data value computation. We use a data organization that is based on what we call ⁿ√2 subdivision. The main advantage of ⁿ√2 subdivision, compared to quadtree (n = 2) or octree (n = 3) organizations, is that the number of vertices is only doubled in each subdivision step instead of multiplied by a factor of four or eight, respectively. To update data values we use n-variate B-spline wavelets, which yields better approximations for each level of detail. We develop a lifting scheme for n = 2 and n = 3 based on the ⁿ√2-subdivision scheme. We obtain narrow masks that could also provide a basis for view-dependent visualization and adaptive refinement.

  1. Brief Report: Investigating Uncertainty in the Minimum Mortality Temperature: Methods and Application to 52 Spanish Cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio

    2017-01-01

    The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but the ability to characterize it has been limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of the confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to the nominal value (95%) in the simulated datasets, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising at almost exactly the same rate as annual mean temperature. The proposed method for computing CIs and SEs for minimums from spline curves makes it possible to compare minimum mortality temperatures across cities and to investigate their associations with climate properly, accounting for estimation uncertainty.
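    The proposed estimator can be sketched with a quadratic curve standing in for the spline: fit the curve, draw coefficient vectors from their estimated normal distribution, and take percentiles of the per-draw minimum as an approximate CI. The simulated U-shaped data, true minimum at 21°C, and draw count are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
temp = np.linspace(5, 30, 200)                            # daily temperature, C
log_rate = 0.002 * (temp - 21.0)**2 + rng.normal(0, 0.01, temp.size)  # U-shape

# Fit log mortality rate as a quadratic in temperature by least squares
X = np.column_stack([np.ones_like(temp), temp, temp**2])
beta, res, *_ = np.linalg.lstsq(X, log_rate, rcond=None)
sigma2 = float(res[0]) / (temp.size - 3)
cov = sigma2 * np.linalg.inv(X.T @ X)                     # coefficient covariance

# Approximate parametric bootstrap: resample coefficients, re-find each minimum
draws = rng.multivariate_normal(beta, cov, size=2000)
mmt_draws = -draws[:, 1] / (2 * draws[:, 2])              # vertex of each parabola
ci_low, ci_high = np.percentile(mmt_draws, [2.5, 97.5])
```

    For a real spline curve the per-draw minimum would be found numerically over the fitted curve rather than from a vertex formula, but the resampling logic is the same.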

  2. Assessing the weighted multi-objective adaptive surrogate model optimization to derive large-scale reservoir operating rules with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao

    2017-01-01

    The optimization of large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding the searching process by WMO-ASMO, embedded with weighted non-dominated sorting genetic algorithm II (WNSGAII). The intercomparison of non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO are conducted in the large-scale reservoir system of Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and the median of ecological index, optimized by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation (530.032 billion kW h) and ecological index (1.675)) with 1000 simulations and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is proved to be more efficient and could provide better Pareto frontier.

  3. Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring

    NASA Astrophysics Data System (ADS)

    Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    In research on optical synthetic aperture imaging systems, phase congruency is the main problem, and it is necessary to detect the sub-aperture phase. The edge of the sub-aperture system is more complex than in a traditional optical imaging system. Because of the steep slopes of large-aperture optical components, interference fringes may be quite dense in interference imaging, and a deep phase gradient may cause a loss of phase information. An efficient edge detection method is therefore needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is increasingly suppressed at larger scales. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained and a cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, the local modulus maxima in the gradient directions are computed. Because these maxima still contain noise, the adaptive threshold method is used to select among them: a point whose modulus is greater than the threshold is taken as a boundary point. Finally, erosion and dilation are applied to the resulting image to obtain a consecutive image boundary.
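    A much-simplified numpy sketch of the two ingredients: a gradient-modulus edge map (plain finite differences stand in for the cubic b-spline wavelet transform) and a region-wise adaptive threshold instead of a single global one. The image, block size, and mean-plus-two-sigma rule are illustrative assumptions:

```python
import numpy as np

# Synthetic image: bright square on a dark background, plus noise
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
rng = np.random.default_rng(7)
img += rng.normal(0, 0.05, img.shape)

gy, gx = np.gradient(img)
modulus = np.hypot(gx, gy)                         # gradient modulus "edge map"

# Adaptive threshold: each 16x16 block uses its own mean + 2*std cutoff
edges = np.zeros(modulus.shape, dtype=bool)
for i in range(0, 64, 16):
    for j in range(0, 64, 16):
        block = modulus[i:i + 16, j:j + 16]
        edges[i:i + 16, j:j + 16] = block > block.mean() + 2 * block.std()

edge_fraction = float(edges.mean())
```

    Setting the cutoff per block lets low-contrast regions keep a lower bar than high-contrast ones, which is the advantage the paper claims for region-wise thresholds over a single global value.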

  4. Estimating trajectories of energy intake through childhood and adolescence using linear-spline multilevel models.

    PubMed

    Anderson, Emma L; Tilling, Kate; Fraser, Abigail; Macdonald-Wallis, Corrie; Emmett, Pauline; Cribb, Victoria; Northstone, Kate; Lawlor, Debbie A; Howe, Laura D

    2013-07-01

    Methods for the assessment of changes in dietary intake across the life course are underdeveloped. We demonstrate the use of linear-spline multilevel models to summarize energy-intake trajectories through childhood and adolescence and their application as exposures, outcomes, or mediators. The Avon Longitudinal Study of Parents and Children assessed children's dietary intake several times between ages 3 and 13 years, using both food frequency questionnaires (FFQs) and 3-day food diaries. We estimated energy-intake trajectories for 12,032 children using linear-spline multilevel models. We then assessed the associations of these trajectories with maternal body mass index (BMI), and later offspring BMI, and also their role in mediating the relation between maternal and offspring BMIs. Models estimated average and individual energy intake at 3 years, and linear changes in energy intake from age 3 to 7 years and from age 7 to 13 years. By including the exposure (in this example, maternal BMI) in the multilevel model, we were able to estimate the average energy-intake trajectories across levels of the exposure. When energy-intake trajectories are the exposure for a later outcome (in this case offspring BMI) or a mediator (between maternal and offspring BMI), results were similar, whether using a two-step process (exporting individual-level intercepts and slopes from multilevel models and using these in linear regression/path analysis), or a single-step process (multivariate multilevel models). Trajectories were similar when FFQs and food diaries were assessed either separately, or when combined into one model. Linear-spline multilevel models provide useful summaries of trajectories of dietary intake that can be used as an exposure, outcome, or mediator.
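    The linear-spline shape used in the abstract - intake linear from age 3 to 7 and again from 7 to 13, with a knot at 7 - can be fitted for a single child by ordinary least squares. The multilevel machinery across 12,032 children is omitted, and the intake values below are invented:

```python
import numpy as np

age = np.array([3.0, 4.0, 5.0, 7.0, 9.0, 10.0, 11.0, 13.0])            # years
intake = np.array([1100, 1220, 1330, 1560, 1700, 1770, 1840, 1980.0])  # kcal/day

knot = 7.0   # knot at age 7, as in the abstract
X = np.column_stack([np.ones_like(age),
                     age - 3.0,                       # slope from age 3 onward
                     np.maximum(0.0, age - knot)])    # change in slope after 7
beta, *_ = np.linalg.lstsq(X, intake, rcond=None)
intercept_at_3, slope_3_to_7, slope_change = beta
slope_7_to_13 = slope_3_to_7 + slope_change
```

    The three coefficients are exactly the summaries the abstract describes - intake at age 3 and the two period-specific slopes - and in the multilevel version each child gets random deviations around them.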

  5. Geometric and computer-aided spline hob modeling

    NASA Astrophysics Data System (ADS)

    Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, YU A.

    2018-03-01

    The paper considers the acquisition of a geometric model of a spline hob. The objective of the research is the development of a mathematical model of a spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion parameters of the machine tool system for cutting edge positioning and orientation. A computer-aided study is performed with the use of CAD on the basis of 3D modeling methods. Vector representation of cutting edge geometry is accepted as the principal method of developing the spline hob mathematical model. The paper defines the correlations described by parametric vector functions representing helical cutting edges designed for spline shaft machining, with consideration for helical movement in two dimensions. An application for acquiring the 3D model of the spline hob is developed in AutoLISP for the AutoCAD environment, presenting the opportunity to use the acquired model for milling process simulation. An example of evaluation, analytical representation and computer modeling of the proposed geometric model is reviewed, in which key spline hob parameters are calculated to assure the capability of hobbing a spline shaft of standard design. The polygonal and solid spline hob 3D models are acquired by means of imitational computer modeling.

  6. Subgrouping Chronic Fatigue Syndrome Patients By Genetic and Immune Profiling

    DTIC Science & Technology

    2015-12-01

    participant inclusion was also verified against our master demographic file. This process revealed that only a small percentage of participants (...[equation garbled in extraction] is a cubic B-spline basis on three knots, with further terms for the value of the outcome for the batch control and for the residual...) tests. Specifically, p-value adjustments will employ an adaptive two-stage linear step-up procedure to control the FDR at 5% (Benjamini et al. 2006)

  7. Rational-Spline Subroutines

    NASA Technical Reports Server (NTRS)

    Schiess, James R.; Kerr, Patricia A.; Smith, Olivia C.

    1988-01-01

    Smooth curves drawn among plotted data easily. Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to flexible, smooth representation of experimental data. "Tension" denotes mathematical analog of mechanical tension in spline or other mechanical curve-fitting tool, and "spline" denotes mathematical generalization of tool. Program differs from usual spline under tension in that it allows user to specify different values of tension between adjacent pairs of knots rather than constant tension over entire range of data. Subroutines use automatic adjustment scheme that varies tension parameter for each interval until maximum deviation of spline from line joining knots is less than or equal to amount specified by user. Procedure frees user from drudgery of adjusting individual tension parameters while still giving control over local behavior of spline.

  8. An algorithm for surface smoothing with rational splines

    NASA Technical Reports Server (NTRS)

    Schiess, James R.

    1987-01-01

    Discussed is an algorithm for smoothing surfaces with spline functions containing tension parameters. The bivariate spline functions used are tensor products of univariate rational-spline functions. A distinct tension parameter corresponds to each rectangular strip defined by a pair of consecutive spline knots along either axis. Equations are derived for writing the bivariate rational spline in terms of functions and derivatives at the knots. Estimates of these values are obtained via weighted least squares subject to continuity constraints at the knots. The algorithm is illustrated on a set of terrain elevation data.
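    SciPy has no rational spline with per-strip tension, so this sketch substitutes an ordinary tensor-product smoothing spline on gridded synthetic "terrain" data to show the same surface-smoothing workflow; the grid, noise level, and smoothing factor are assumptions:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0, 1, 30)
y = np.linspace(0, 1, 30)
elevation = np.sin(np.pi * x)[:, None] * np.cos(np.pi * y)[None, :]  # "terrain"
rng = np.random.default_rng(8)
noisy = elevation + rng.normal(0, 0.02, elevation.shape)

# s is the smoothing factor (assumed here to match the expected noise energy);
# the fitted surface is a tensor product of univariate cubic splines
surf = RectBivariateSpline(x, y, noisy, s=30 * 30 * 0.02**2)
smooth = surf(x, y)
rmse = float(np.sqrt(np.mean((smooth - elevation)**2)))
```

    The tensor-product structure - one univariate spline family per axis - is the element this sketch shares with the paper; the paper's rational splines additionally let each rectangular strip pull toward a ruled surface as its tension grows.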

  9. Multicenter Comparison of Machine Learning Methods and Conventional Regression for Predicting Clinical Deterioration on the Wards.

    PubMed

    Churpek, Matthew M; Yuen, Trevor C; Winslow, Christopher; Meltzer, David O; Kattan, Michael W; Edelson, Dana P

    2016-02-01

    Machine learning methods are flexible prediction algorithms that may be more accurate than conventional regression. We compared the accuracy of different techniques for detecting clinical deterioration on the wards in a large, multicenter database. Observational cohort study in five hospitals, from November 2008 until January 2013, of hospitalized ward patients; there were no interventions. Demographic variables, laboratory values, and vital signs were utilized in a discrete-time survival analysis framework to predict the combined outcome of cardiac arrest, intensive care unit transfer, or death. Two logistic regression models (one using linear predictor terms and a second utilizing restricted cubic splines) were compared to several different machine learning methods. The models were derived in the first 60% of the data by date and then validated in the next 40%. For model derivation, each event time window was matched to a non-event window. All models were compared to each other and to the Modified Early Warning Score (MEWS), a commonly cited early warning score, using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patients were admitted, and 424 cardiac arrests, 13,188 intensive care unit transfers, and 2,840 deaths occurred in the study. In the validation dataset, the random forest model was the most accurate model (AUC, 0.80 [95% CI, 0.80-0.80]). The logistic regression model with spline predictors was more accurate than the model utilizing linear predictors (AUC, 0.77 vs 0.74; p < 0.01), and all models were more accurate than the MEWS (AUC, 0.70 [95% CI, 0.70-0.70]). In this multicenter study, we found that several machine learning methods more accurately predicted clinical deterioration than logistic regression. Use of detection algorithms derived from these techniques may result in improved identification of critically ill patients on the wards.

  10. Numerical Methods Using B-Splines

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Merriam, Marshal (Technical Monitor)

    1997-01-01

    The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.
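    The basic building block behind these topics - a B-spline representation of a function with analytic differentiation - can be shown in a few lines of SciPy; the Navier-Stokes and mesh-embedding applications in the seminar are far beyond this sketch:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0, 2 * np.pi, 41)
spl = make_interp_spline(x, np.sin(x), k=3)    # cubic B-spline interpolant
dspl = spl.derivative()                        # derivative is again a B-spline

xx = np.linspace(0, 2 * np.pi, 400)
max_err = float(np.max(np.abs(dspl(xx) - np.cos(xx))))  # compare with cos(x)
```

    The derivative of a B-spline expansion is exact given the coefficients, which is one reason B-spline schemes can achieve the high resolution the seminar compares against compact finite-difference schemes.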

  11. Interaction Models for Functional Regression.

    PubMed

    Usset, Joseph; Staicu, Ana-Maria; Maity, Arnab

    2016-02-01

    A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data.

  12. Variable selection and model choice in geoadditive regression models.

    PubMed

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  13. Regression analysis of informative current status data with the additive hazards model.

    PubMed

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and also there exists a large literature on parametric analysis of informative current status data in the context of tumorgenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented and in the method, the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.

  14. CABLE CONNECTOR

    DOEpatents

    Caller, J.M.

    1962-05-01

    An electrical connector is designed for utilization in connection with either round or flat coaxial cables. The connector comprises a bayonet-type coupling arrangement with a splined movable locking sleeve adapted to lock together components of the connector. A compression spring is attached to one of the connector components and functions to forcibly separate mating components when the locking sleeve is in an unlocked condition so as to minimize the possibility of leaving the conductors electrically coupled. (AEC)

  15. Spline screw payload fastening system

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member.
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.

  16. Improving the Spatial Prediction of Soil Organic Carbon Stocks in a Complex Tropical Mountain Landscape by Methodological Specifications in Machine Learning Approaches

    PubMed Central

    Schmidt, Johannes; Glaser, Bruno

    2016-01-01

    Tropical forests are significant carbon sinks and their soils’ carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms (random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines) were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms—including the model tuning and predictor selection—were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged from 0.2 to 17.7 kg m-2, displaying huge variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models’ predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Choosing predictors by their individual performance was outperformed by the two procedures that accounted for predictor interaction. PMID:27128736
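The comparison protocol, five repetitions of a tenfold cross-validation, can be sketched as an index generator. The reshuffling scheme below is a generic assumption, not the authors' exact implementation.

```python
import random

def repeated_kfold(n, k=10, repeats=5, seed=0):
    """Yield (train, test) index lists for `repeats` runs of k-fold
    cross-validation, reshuffling the sample between repetitions, as in
    a 5 x 10-fold setup."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
            yield train, test
```

Each of the 50 resulting splits partitions the data, so every observation is tested exactly once per repetition.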

  17. Seasonally-Dynamic Presence-Only Species Distribution Models for a Cryptic Migratory Bat Impacted by Wind Energy Development

    PubMed Central

    Hayes, Mark A.; Cryan, Paul M.; Wunder, Michael B.

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches (logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy) and consolidated the outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn—the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as ‘risk from turbines is highest in habitats between hoary bat summering and wintering grounds’. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution. PMID:26208098
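Consolidating several SDM outputs into an ensemble map can be sketched as a per-cell average of the model suitability surfaces. The unweighted mean used here is an assumption, since the abstract does not state the consolidation rule.

```python
def ensemble_map(model_outputs):
    """Combine per-model habitat-suitability surfaces (dicts mapping
    cell id -> suitability score in [0, 1]) into an unweighted ensemble
    mean; cells missing from a model contribute a score of 0."""
    cells = set().union(*model_outputs)
    return {c: sum(m.get(c, 0.0) for m in model_outputs) / len(model_outputs)
            for c in cells}
```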

  18. Improving the Spatial Prediction of Soil Organic Carbon Stocks in a Complex Tropical Mountain Landscape by Methodological Specifications in Machine Learning Approaches.

    PubMed

    Ließ, Mareike; Schmidt, Johannes; Glaser, Bruno

    2016-01-01

    Tropical forests are significant carbon sinks and their soils' carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms (random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines) were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms (including the model tuning and predictor selection) were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged from 0.2 to 17.7 kg m-2, displaying huge variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models' predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Choosing predictors by their individual performance was outperformed by the two procedures that accounted for predictor interaction.

  19. Above ground biomass and tree species richness estimation with airborne lidar in tropical Ghana forests

    NASA Astrophysics Data System (ADS)

    Vaglio Laurin, Gaia; Puletti, Nicola; Chen, Qi; Corona, Piermaria; Papale, Dario; Valentini, Riccardo

    2016-10-01

    Estimates of forest aboveground biomass are fundamental for carbon monitoring and accounting; delivering information at very high spatial resolution is especially valuable for local management, conservation and selective logging purposes. In tropical areas, hosting large biomass and biodiversity resources which are often threatened by unsustainable anthropogenic pressures, frequent forest resources monitoring is needed. Lidar is a powerful tool to estimate aboveground biomass at fine resolution; however its application in tropical forests has been limited, with high variability in the accuracy of results. Lidar pulses scan the forest vertical profile, and can provide structure information which is also linked to biodiversity. In the last decade the remote sensing of biodiversity has received great attention, but few studies focused on the use of lidar for assessing tree species richness in tropical forests. This research aims at estimating aboveground biomass and tree species richness using discrete return airborne lidar in Ghana forests. We tested an advanced statistical technique, Multivariate Adaptive Regression Splines (MARS), which does not require assumptions on data distribution or on the relationships between variables, being suitable for studying ecological variables. We compared the MARS regression results with those obtained by multilinear regression and found that both algorithms were effective, but MARS provided higher accuracy for both biomass (R2 = 0.72) and species richness (R2 = 0.64). We also noted a strong correlation between biodiversity and biomass field values. Even though the forest areas under analysis are limited in extent and represent peculiar ecosystems, the preliminary indications produced by our study suggest that instruments such as lidar, specifically useful for pinpointing forest structure, can also be exploited to support tree species richness assessment.
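The reason MARS needs no assumptions about the form of the relationships is that it builds its fits from hinge (truncated linear) basis functions placed at data-driven knots. A minimal sketch of that basis expansion:

```python
def hinge(x, knot, sign=1):
    """MARS hinge (truncated linear) basis function: max(0, +/-(x - knot))."""
    return max(0.0, sign * (x - knot))

def mars_basis(x, knots):
    """Expand a scalar predictor into the mirrored hinge pair that MARS
    adds at each selected knot; an ordinary linear model in these
    features gives a piecewise-linear fit with no global form assumed."""
    feats = [1.0]  # intercept term
    for t in knots:
        feats.append(hinge(x, t, 1))
        feats.append(hinge(x, t, -1))
    return feats
```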

  20. Seasonally-dynamic presence-only species distribution models for a cryptic migratory bat impacted by wind energy development

    USGS Publications Warehouse

    Hayes, Mark A.; Cryan, Paul M.; Wunder, Michael B.

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn—the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as ‘risk from turbines is highest in habitats between hoary bat summering and wintering grounds’. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution.

  1. Development of Ensemble Model Based Water Demand Forecasting Model

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop

    2014-05-01

    The Smart Water Grid (SWG) concept has emerged globally over the last decade and has gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, which has led to various studies on energy saving and improvement of water supply reliability. Existing water demand forecasting models are categorized into two groups in view of modeling and predicting their behavior in time series. One group considers embedded patterns such as seasonality, periodicity and trends, and the other comprises autoregressive models using short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of the abovementioned models is that predictability of water demand at about sub-daily scales is limited because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting which allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, named the bagging approach, introduced by Breiman (1996) to derive weighting factors corresponding to individual models. The individual forecasting models used in this study are a linear regression model, polynomial regression, multivariate adaptive regression splines (MARS) and support vector machines (SVM). The concepts are demonstrated through application to data observed at water plants at several locations in South Korea. Keywords: water demand, nonlinear model, ensemble forecasting model, uncertainty. Acknowledgements: This subject is supported by the Korea Ministry of Environment as "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)".
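Deriving ensemble weights from bootstrap resampling, loosely in the spirit of the Breiman (1996) bagging scheme the abstract cites, can be sketched as below. The specific rule (inverse bootstrap mean-squared error, normalised to sum to one) is an illustrative assumption, not the study's exact scheme.

```python
import random

def bagging_weights(predictions, y, n_boot=200, seed=0):
    """Weight each member model of an ensemble by the inverse of its
    mean-squared error averaged over bootstrap resamples of the data;
    weights are normalised to sum to one."""
    rng = random.Random(seed)
    n = len(y)
    mse = [0.0] * len(predictions)
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap resample
        for m, pred in enumerate(predictions):
            mse[m] += sum((pred[i] - y[i]) ** 2 for i in idx) / n
    inv = [1.0 / (e / n_boot + 1e-12) for e in mse]
    total = sum(inv)
    return [w / total for w in inv]
```

A model whose predictions match the observations gets essentially all of the ensemble weight.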

  2. Using data mining to predict success in a weight loss trial.

    PubMed

    Batterham, M; Tapsell, L; Charlton, K; O'Shea, J; Thorne, R

    2017-08-01

    Traditional methods for predicting weight loss success use regression approaches, which make the assumption that the relationships between the independent and dependent (or logit of the dependent) variable are linear. The aim of the present study was to investigate the relationship between common demographic and early weight loss variables to predict weight loss success at 12 months without making this assumption. Data mining methods (decision trees, generalised additive models and multivariate adaptive regression splines), in addition to logistic regression, were employed to predict: (i) weight loss success (defined as ≥5%) at the end of a 12-month dietary intervention using demographic variables [body mass index (BMI), sex and age]; (ii) percentage weight loss at 1 month; and (iii) the difference between actual and predicted weight loss using an energy balance model. The methods were compared by assessing model parsimony and the area under the curve (AUC). The decision tree provided the most clinically useful model and had a good accuracy (AUC 0.720, 95% confidence interval = 0.600-0.840). Percentage weight loss at 1 month (≥0.75%) was the strongest predictor for successful weight loss. Within those individuals losing ≥0.75%, individuals with a BMI ≥27 kg m-2 were more likely to be successful than those with a BMI between 25 and 27 kg m-2. Data mining methods can provide a more accurate way of assessing relationships when conventional assumptions are not met. In the present study, a decision tree provided the most parsimonious model. Given that early weight loss cannot be predicted before randomisation, incorporating this information into a post randomisation trial design may give better weight loss results. © 2017 The British Dietetic Association Ltd.
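The AUC used to compare the models has a simple rank-based definition that can be computed directly: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties counting one half).

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    fraction of (positive, negative) pairs where the positive case
    receives the higher score, counting ties as 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```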

  3. Comparing methods for estimation of heterogeneous treatment effects using observational data from health care databases.

    PubMed

    Wendling, T; Jung, K; Callahan, A; Schuler, A; Shah, N H; Gallego, B

    2018-06-03

    There is growing interest in using routinely collected data from health care databases to study the safety and effectiveness of therapies in "real-world" conditions, as it can provide complementary evidence to that of randomized controlled trials. Causal inference from health care databases is challenging because the data are typically noisy, high dimensional, and most importantly, observational. It requires methods that can estimate heterogeneous treatment effects while controlling for confounding in high dimensions. Bayesian additive regression trees, causal forests, causal boosting, and causal multivariate adaptive regression splines are off-the-shelf methods that have shown good performance for estimation of heterogeneous treatment effects in observational studies of continuous outcomes. However, it is not clear how these methods would perform in health care database studies where outcomes are often binary and rare and data structures are complex. In this study, we evaluate these methods in simulation studies that recapitulate key characteristics of comparative effectiveness studies. We focus on the conditional average effect of a binary treatment on a binary outcome using the conditional risk difference as an estimand. To emulate health care database studies, we propose a simulation design where real covariate and treatment assignment data are used and only outcomes are simulated based on nonparametric models of the real outcomes. We apply this design to 4 published observational studies that used records from 2 major health care databases in the United States. Our results suggest that Bayesian additive regression trees and causal boosting consistently provide low bias in conditional risk difference estimates in the context of health care database studies. Copyright © 2018 John Wiley & Sons, Ltd.
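The estimand, a conditional risk difference, can be illustrated with a toy stratified estimator: within each covariate stratum, the risk under treatment minus the risk under control. This is only an illustration of the quantity being estimated, not one of the methods evaluated in the paper, which must handle high-dimensional confounding rather than a handful of discrete strata.

```python
from collections import defaultdict

def conditional_risk_difference(outcomes, treated, groups):
    """For binary outcomes, compute within each stratum the risk
    difference P(Y=1 | treated) - P(Y=1 | control); strata play the
    role of the conditioning covariates."""
    acc = defaultdict(lambda: [0, 0, 0, 0])  # [sum1, n1, sum0, n0] per stratum
    for y, t, g in zip(outcomes, treated, groups):
        a = acc[g]
        if t:
            a[0] += y
            a[1] += 1
        else:
            a[2] += y
            a[3] += 1
    return {g: a[0] / a[1] - a[2] / a[3] for g, a in acc.items()}
```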

  4. Concentration-Dependent Antagonism and Culture Conversion in Pulmonary Tuberculosis

    PubMed Central

    Pasipanodya, Jotam G.; Denti, Paolo; Sirgel, Frederick; Lesosky, Maia; Gumbo, Tawanda; Meintjes, Graeme; McIlleron, Helen; Wilkinson, Robert J.

    2017-01-01

    Background. There is scant evidence to support target drug exposures for optimal tuberculosis outcomes. We therefore assessed whether pharmacokinetic/pharmacodynamic (PK/PD) parameters could predict 2-month culture conversion. Methods. One hundred patients with pulmonary tuberculosis (65% human immunodeficiency virus coinfected) were intensively sampled to determine rifampicin, isoniazid, and pyrazinamide plasma concentrations after 7–8 weeks of therapy, and PK parameters determined using nonlinear mixed-effects models. Detailed clinical data and sputum for culture were collected at baseline, 2 months, and 5–6 months. Minimum inhibitory concentrations (MICs) were determined on baseline isolates. Multivariate logistic regression and the assumption-free multivariate adaptive regression splines (MARS) were used to identify clinical and PK/PD predictors of 2-month culture conversion. Potential PK/PD predictors included 0- to 24-hour area under the curve (AUC0-24), maximum concentration (Cmax), AUC0-24/MIC, Cmax/MIC, and percentage of time that concentrations persisted above the MIC (%TMIC). Results. Twenty-six percent of patients had Cmax of rifampicin <8 mg/L, pyrazinamide <35 mg/L, and isoniazid <3 mg/L. No relationship was found between PK exposures and 2-month culture conversion using multivariate logistic regression after adjusting for MIC. However, MARS identified negative interactions between isoniazid Cmax and rifampicin Cmax/MIC ratio on 2-month culture conversion. If isoniazid Cmax was <4.6 mg/L and rifampicin Cmax/MIC <28, the isoniazid concentration had an antagonistic effect on culture conversion. For patients with isoniazid Cmax >4.6 mg/L, higher isoniazid exposures were associated with improved rates of culture conversion. Conclusions. PK/PD analyses using MARS identified isoniazid Cmax and rifampicin Cmax/MIC thresholds below which there is concentration-dependent antagonism that reduces 2-month sputum culture conversion. PMID:28205671

  5. Seasonally-Dynamic Presence-Only Species Distribution Models for a Cryptic Migratory Bat Impacted by Wind Energy Development.

    PubMed

    Hayes, Mark A; Cryan, Paul M; Wunder, Michael B

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches (logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy) and consolidated the outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn, the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as 'risk from turbines is highest in habitats between hoary bat summering and wintering grounds'. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution.

  6. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
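The sparsity-inducing effect of minimizing the 1-norm of the kernel coefficient vector comes from the proximal operator of the L1 penalty, soft-thresholding, which shrinks coefficients toward zero and sets small ones exactly to zero (fewer retained coefficients, hence a simpler calibrated surface). A minimal sketch of that operator:

```python
def soft_threshold(b, lam):
    """Proximal operator of lam * |b|: shrink b toward zero by lam and
    clip to exactly zero inside [-lam, lam]. Applied coordinate-wise,
    this is what zeroes out kernel coefficients under an L1 penalty."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0
```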

  7. Improving the Diagnostic Specificity of CT for Early Detection of Lung Cancer: 4D CT-Based Pulmonary Nodule Elastometry

    DTIC Science & Technology

    2013-08-01

    as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the use of B-splines ... FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence ... Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions on Medical Imaging. 2001;20(6):526-34

  8. Adaptive guidance for an aero-assisted boost vehicle

    NASA Astrophysics Data System (ADS)

    Pamadi, Bandu N.; Taylor, Lawrence W., Jr.; Price, Douglas B.

    An adaptive guidance system incorporating dynamic pressure constraint is studied for a single stage to low earth orbit (LEO) aero-assist booster with thrust gimbal angle as the control variable. To derive an adaptive guidance law, cubic spline functions are used to represent the ascent profile. The booster flight to LEO is divided into initial and terminal phases. In the initial phase, the ascent profile is continuously updated to maximize the performance of the boost vehicle en route. A linear feedback control is used in the terminal phase to guide the aero-assisted booster onto the desired LEO. The computer simulation of the vehicle dynamics considers a rotating spherical earth, inverse square (Newtonian) gravity field and an exponential model for the earth's atmospheric density. This adaptive guidance algorithm is capable of handling large deviations in both atmospheric conditions and modeling uncertainties, while ensuring maximum booster performance.
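Representing a profile with cubic spline functions, as the guidance law does for the ascent profile, can be sketched with a standard natural cubic spline interpolant. This generic construction is an illustrative assumption; the paper's actual guidance algorithm is not reproduced.

```python
def natural_cubic_spline(xs, ys):
    """Return a callable natural cubic spline interpolating (xs, ys),
    xs strictly increasing. Second derivatives m[i] solve a tridiagonal
    system with natural boundary conditions m[0] = m[n] = 0."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    a = [0.0] * (n + 1)
    b = [1.0] * (n + 1)
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i], b[i], c[i] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        d[i] = 6 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n + 1):          # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):      # back substitution
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def f(x):
        # Locate the segment containing x (clamped to the last segment).
        i = next((j for j in range(n) if x < xs[j + 1]), n - 1)
        t = x - xs[i]
        slope = (ys[i + 1] - ys[i]) / h[i] - h[i] * (2 * m[i] + m[i + 1]) / 6
        return ys[i] + slope * t + m[i] * t * t / 2 \
            + (m[i + 1] - m[i]) * t ** 3 / (6 * h[i])

    return f
```

The interpolant passes through every knot and reproduces linear profiles exactly, which makes it a convenient low-dimensional parameterisation to update during flight.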

  9. Regional modeling of large wildfires under current and potential future climates in Colorado and Wyoming, USA

    USGS Publications Warehouse

    West, Amanda; Kumar, Sunil; Jarnevich, Catherine S.

    2016-01-01

    Regional analysis of large wildfire potential given climate change scenarios is crucial to understanding areas most at risk in the future, yet wildfire models are not often developed and tested at this spatial scale. We fit three historical climate suitability models for large wildfires (i.e. ≥ 400 ha) in Colorado and Wyoming using topography and decadal climate averages corresponding to wildfire occurrence at the same temporal scale. The historical models classified points of known large wildfire occurrence with high accuracies. Using a novel approach in wildfire modeling, we applied the historical models to independent climate and wildfire datasets, and the resulting sensitivities were 0.75, 0.81, and 0.83 for Maxent, Generalized Linear, and Multivariate Adaptive Regression Splines, respectively. We projected the historical models into future climate space using data from 15 global circulation models and two representative concentration pathway scenarios. Maps from these geospatial analyses can be used to evaluate the changing spatial distribution of climate suitability of large wildfires in these states. April relative humidity was the most important covariate in all models, providing insight to the climate space of large wildfires in this region. These methods incorporate monthly and seasonal climate averages at a spatial resolution relevant to land management (i.e. 1 km2) and provide a tool that can be modified for other regions of North America, or adapted for other parts of the world.

  10. Three-dimensional analysis of anisotropic spatially reinforced structures

    NASA Technical Reports Server (NTRS)

    Bogdanovich, Alexander E.

    1993-01-01

    The material-adaptive three-dimensional analysis of inhomogeneous structures based on the meso-volume concept and application of deficient spline functions for displacement approximations is proposed. The general methodology is demonstrated on the example of a brick-type mosaic parallelepiped arbitrarily composed of anisotropic meso-volumes. A partition of each meso-volume into sub-elements, application of deficient spline functions for a local approximation of displacements and, finally, the use of the variational principle allows one to obtain displacements, strains, and stresses at any point within the structural part. All of the necessary external and internal boundary conditions (including the conditions of continuity of transverse stresses at interfaces between adjacent meso-volumes) can be satisfied with requisite accuracy by increasing the density of the sub-element mesh. The application of the methodology to textile composite materials is described. Several numerical examples for woven and braided rectangular composite plates and stiffened panels under transverse bending are considered. Some typical effects of stress concentrations due to the material inhomogeneities are demonstrated.

  11. MARS approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
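The variance-of-conditional-expectation measure can be illustrated with a brute-force first-order sensitivity index on a toy function over small input grids. The paper's contribution is evaluating this quantity analytically from the MARS tensor-product structure, which this sketch does not attempt.

```python
import itertools
import statistics

def first_order_index(f, grids, target):
    """First-order sensitivity index Var(E[Y | X_target]) / Var(Y),
    the variance-of-conditional-expectation (VCE) ratio, estimated by
    exhaustively evaluating f on the Cartesian product of input grids
    and averaging outputs within each level of the target input."""
    ys, cond = [], {}
    for combo in itertools.product(*grids):
        y = f(*combo)
        ys.append(y)
        cond.setdefault(combo[target], []).append(y)
    means = [sum(v) / len(v) for v in cond.values()]
    return statistics.pvariance(means) / statistics.pvariance(ys)
```

For an output that depends only on its first input, the index is 1 for that input and 0 for the other, matching the interpretation of apportioned output variance.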

  12. Bivariate discrete beta Kernel graduation of mortality data.

    PubMed

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, as for example kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidths selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make simulations realistic, a bivariate dataset, based on probabilities of dying recorded for the US males, is used. Simulations have confirmed the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
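Kernel smoothing regression of crude mortality rates over age can be sketched with a Nadaraya-Watson estimator. The Epanechnikov kernel below corresponds to one of the competitor techniques mentioned in the abstract, not to the discrete beta kernel itself, which is not reproduced here.

```python
def kernel_graduate(ages, rates, bandwidth):
    """Nadaraya-Watson graduation: each smoothed rate is a kernel-weighted
    average of the crude rates at nearby ages, using an Epanechnikov
    kernel with the given bandwidth."""
    def k(u):
        return max(0.0, 0.75 * (1.0 - u * u))  # Epanechnikov kernel

    smoothed = []
    for a in ages:
        w = [k((a - x) / bandwidth) for x in ages]
        total = sum(w)
        smoothed.append(sum(wi * r for wi, r in zip(w, rates)) / total)
    return smoothed
```

A constant mortality schedule is left unchanged, while an isolated spike is pulled toward its neighbours, the basic behaviour any graduation method shares.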

  13. GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation

    DOE PAGES

    Jiang, Bo; Liang, Shunlin; Ma, Han; ...

    2016-03-09

    Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 W m-2, and an average bias of 17.59 W m-2. Furthermore, we also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.

  14. A Deformable Atlas of the Laboratory Mouse

    PubMed Central

    Wang, Hongkai; Stout, David B.; Chatziioannou, Arion F.

    2015-01-01

    Purpose This paper presents a deformable atlas of laboratory mouse anatomy. The atlas is fully articulated and can be positioned in arbitrary body poses. It can also adapt to different body weights by changing body length and fat amount. Procedures A training set of 103 micro-CT images was used to construct the atlas. A cage-based deformation method was applied to realize articulated pose changes. The weight-related body deformation was learned from the training set using linear regression. A conditional Gaussian model and thin-plate spline mapping were used to deform the internal organs following the changes of pose and weight. Results The atlas was deformed into different body poses and weights, and the deformations were more realistic than those achieved with other mouse atlases. The organ weights of this atlas matched well with measurements of real mouse organ weights. The atlas can also be converted into voxelized images with labeled organs, pseudo-CT images and tetrahedral meshes for phantom studies. Conclusions With its unique ability to change articulated pose and weight, the deformable laboratory mouse atlas can become a valuable tool for preclinical image analysis. PMID:25049072
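    The thin-plate spline mapping mentioned in the Procedures can be sketched with SciPy's RBF interpolator: fit a warp to landmark pairs, then apply it to interior points. The landmark coordinates below are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Landmark pairs (hypothetical): surface points before and after a deformation.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
dst = 1.1 * src   # here the surface simply expands by 10%

# Thin-plate spline warp fitted to the landmarks, then applied to interior
# points (e.g., organ voxels) so they follow the surface deformation.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")
interior = np.array([[0.25, 0.25], [0.75, 0.75]])
warped = tps(interior)
```

Because the thin-plate spline contains an affine polynomial part, a purely affine landmark motion like this one is reproduced exactly at interior points.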

  15. GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bo; Liang, Shunlin; Ma, Han

    Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 W m-2, and an average bias of 17.59 W m-2. Furthermore, we also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.

  16. Comparison of Nine Statistical Model Based Warfarin Pharmacogenetic Dosing Algorithms Using the Racially Diverse International Warfarin Pharmacogenetic Consortium Cohort Database

    PubMed Central

    Liu, Rong; Li, Xi; Zhang, Wei; Zhou, Hong-Hao

    2015-01-01

    Objective Multiple linear regression (MLR) and machine learning techniques in pharmacogenetic algorithm-based warfarin dosing have been reported. However, the performances of these algorithms in a racially diverse group have never been objectively evaluated and compared. In this literature-based study, we compared the performances of eight machine learning techniques with those of MLR in a large, racially diverse cohort. Methods MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied in warfarin dose algorithms in a cohort from the International Warfarin Pharmacogenetics Consortium database. Covariates obtained by stepwise regression from 80% of randomly selected patients were used to develop the algorithms. To compare the performances of these algorithms, the mean percentage of patients whose predicted dose fell within 20% of the actual dose (mean percentage within 20%) and the mean absolute error (MAE) were calculated in the remaining 20% of patients. The performances of these techniques in different races, as well as across therapeutic warfarin dose ranges, were compared. Robust results were obtained after 100 rounds of resampling. Results BART, MARS and SVR were statistically indistinguishable and significantly outperformed all the other approaches in the whole cohort (MAE: 8.84–8.96 mg/week, mean percentage within 20%: 45.88%–46.35%). In the White population, MARS and BART showed a higher mean percentage within 20% and a lower MAE than MLR (all p values < 0.05). In the Asian population, SVR, BART, MARS and LAR performed the same as MLR. MLR and LAR performed best in the Black population.
When patients were grouped by warfarin dose range, all machine learning techniques except ANN and LAR showed a significantly higher mean percentage within 20% and a lower MAE (all p values < 0.05) than MLR in the low- and high-dose ranges. Conclusion Overall, the machine learning-based techniques BART, MARS and SVR performed better than MLR in warfarin pharmacogenetic dosing. Differences in algorithm performance exist among races. Moreover, machine learning-based algorithms tended to perform better than MLR in the low- and high-dose ranges. PMID:26305568
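    The two evaluation metrics used throughout this record (MAE and the mean percentage within 20% of the actual dose) are straightforward to compute; the doses below are made-up weekly values.

```python
import numpy as np

def dose_metrics(actual, predicted):
    """MAE and the percentage of patients whose predicted weekly dose
    falls within 20% of the actual dose (the two metrics in the study)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mae = np.abs(predicted - actual).mean()
    within20 = 100.0 * (np.abs(predicted - actual) <= 0.2 * actual).mean()
    return mae, within20

mae, pct = dose_metrics([35, 28, 49, 21], [30, 30, 45, 30])
print(mae, pct)   # → 5.0 75.0
```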

  17. TWO-LEVEL TIME MARCHING SCHEME USING SPLINES FOR SOLVING THE ADVECTION EQUATION. (R826371C004)

    EPA Science Inventory

    A new numerical algorithm using quintic splines is developed and analyzed: quintic spline Taylor-series expansion (QSTSE). QSTSE is an Eulerian flux-based scheme that uses quintic splines to compute space derivatives and Taylor series expansion to march in time. The new scheme...

  18. Color management with a hammer: the B-spline fitter

    NASA Astrophysics Data System (ADS)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
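    The noise-filtering role of the spline fitter described above can be sketched in one dimension with SciPy's smoothing-spline routines (the tone curve and smoothing factor are illustrative, not the authors' 3D color fitter):

```python
import numpy as np
from scipy.interpolate import splrep, BSpline

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)
y = x**2.2 + rng.normal(0, 0.01, x.size)   # noisy tone curve (toy stand-in)

# Smoothing B-spline fit: s > 0 trades fidelity for smoothness, acting as
# the measurement-noise filter described in the abstract.
tck = splrep(x, y, s=x.size * 0.01**2)     # s set from the known noise level
curve = BSpline(*tck)
recovered = float(curve(0.5))              # smooth estimate of the true curve
```

Because the fitted model is a well-behaved polynomial spline, its derivatives (and hence its inverse, for device-model inversion) are cheap and stable to evaluate.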

  19. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences, as well as improving the ELISA microarray process, requires both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarrays. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method for reliably predicting protein concentrations and estimating their errors.
The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
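    A minimal stand-in for a monotone standard-curve fit (using a shape-preserving PCHIP spline rather than the PCLS method of the record; the assay data are invented) shows how monotonicity makes inversion from intensity back to concentration well defined:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Standard-curve points: assay intensity at known concentrations (toy data).
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
intensity = np.array([120, 400, 900, 2100, 2600, 2900], dtype=float)

# Shape-preserving monotone spline of intensity vs log-concentration; because
# the fit is monotone, the inverse map (intensity -> concentration) exists.
fwd = PchipInterpolator(np.log(conc), intensity)
inv = PchipInterpolator(intensity, np.log(conc))
pred = float(np.exp(inv(900.0)))   # observed intensity back to concentration
```

A non-monotone fit would make this inversion ambiguous, which is exactly why the record constrains the spline to be monotone before predicting concentrations.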

  20. Factors relating to windblown dust in associations between ...

    EPA Pesticide Factsheets

    Introduction: Effect estimates of city-specific PM2.5-mortality associations across the United States (US) show a substantial amount of spatial heterogeneity. Some of this heterogeneity may be due to the mass distribution of PM; areas where PM2.5 is likely to be dominated by larger size fractions (above 1 micron; e.g., the contribution of windblown dust) may have a weaker association with mortality. Methods: Log rate ratios (betas) for the PM2.5-mortality association—derived from a model adjusting for time, an interaction with age group, day of week, and natural splines of current temperature, current dew point, and unconstrained temperature at lags 1, 2, and 3, for 313 core-based statistical areas (CBSA) and their metropolitan divisions (MD) over 1999-2005—were used as the outcome. Using inverse variance weighted linear regression, we examined the change in log rate ratios in association with the PM10-PM2.5 correlation as a marker of windblown dust/higher PM size fraction; linearity of associations was assessed in models using splines with knots at quintile values. Results: The weighted mean PM2.5 association (0.96 percent increase in total non-accidental mortality for a 10 ug/m3 increment in PM2.5) increased by 0.34 (95% confidence interval: 0.20, 0.48) per interquartile change (0.25) in the PM10-PM2.5 correlation, and explained approximately 8% of the observed heterogeneity; the association was linear based on spline analysis. Conclusions: Preliminary results pro
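    The inverse variance weighted linear regression used in the Methods can be sketched directly from the normal equations; all numbers below are hypothetical city-level values, not the study's data.

```python
import numpy as np

# City-specific log rate ratios (betas), their variances, and a covariate
# (PM10-PM2.5 correlation); all values hypothetical.
beta = np.array([0.8, 1.1, 0.5, 1.4, 0.9])
var = np.array([0.04, 0.02, 0.09, 0.05, 0.03])
corr = np.array([0.2, 0.4, 0.1, 0.6, 0.3])

# Inverse-variance weighted least squares: solve (X'WX) b = X'Wy, where the
# weight of each city is the reciprocal of its sampling variance.
X = np.column_stack([np.ones_like(corr), corr])
W = np.diag(1.0 / var)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ beta)
slope = coef[1]   # change in log rate ratio per unit change in correlation
```

Weighting by inverse variance gives precisely estimated cities more influence, which is the standard meta-regression approach to explaining between-city heterogeneity.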

  1. Early Changes in Clinical, Functional, and Laboratory Biomarkers in Workers at Risk of Indium Lung Disease

    PubMed Central

    Virji, M. Abbas; Trapnell, Bruce C.; Carey, Brenna; Healey, Terrance; Kreiss, Kathleen

    2014-01-01

    Rationale: Occupational exposure to indium compounds, including indium–tin oxide, can result in potentially fatal indium lung disease. However, the early effects of exposure on the lungs are not well understood. Objectives: To determine the relationship between short-term occupational exposures to indium compounds and the development of early lung abnormalities. Methods: Among indium–tin oxide production and reclamation facility workers, we measured plasma indium, respiratory symptoms, pulmonary function, chest computed tomography, and serum biomarkers of lung disease. Relationships between plasma indium concentration and health outcome variables were evaluated using restricted cubic spline and linear regression models. Measurements and Main Results: Eighty-seven (93%) of 94 indium–tin oxide facility workers (median tenure, 2 yr; median plasma indium, 1.0 μg/l) participated in the study. Spirometric abnormalities were not increased compared with the general population, and few subjects had radiographic evidence of alveolar proteinosis (n = 0), fibrosis (n = 2), or emphysema (n = 4). However, in internal comparisons, participants with plasma indium concentrations ≥ 1.0 μg/l had more dyspnea, lower mean FEV1 and FVC, and higher median serum Krebs von den Lungen-6 and surfactant protein-D levels. Spline regression demonstrated nonlinear exposure response, with significant differences occurring at plasma indium concentrations as low as 1.0 μg/l compared with the reference. Associations between health outcomes and the natural log of plasma indium concentration were evident in linear regression models. Associations were not explained by age, smoking status, facility tenure, or prior occupational exposures. Conclusions: In indium–tin oxide facility workers with short-term, low-level exposure, plasma indium concentrations lower than previously reported were associated with lung symptoms, decreased spirometric parameters, and increased serum biomarkers of lung disease. 
PMID:25295756

  2. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    NASA Astrophysics Data System (ADS)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The regression model comprises several equations, each with two components, one parametric and one nonparametric. The model used here has a linear function as its parametric component and a truncated polynomial spline as its nonparametric component. The model can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of the regression model to modelling the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, and the predictor variables are the percentage of literate people, the percentage of electrification and the percentage of economic growth. Based on identification of the relationships between the responses and the predictors, economic growth is treated as the nonparametric predictor and the others as parametric predictors. The result shows that multiresponse semiparametric regression can be applied well, as indicated by the high coefficient of determination, 90 percent.

  3. Body mass index in relation to serum prostate-specific antigen levels and prostate cancer risk.

    PubMed

    Bonn, Stephanie E; Sjölander, Arvid; Tillander, Annika; Wiklund, Fredrik; Grönberg, Henrik; Bälter, Katarina

    2016-07-01

    High body mass index (BMI) has been directly associated with risk of aggressive or fatal prostate cancer. One possible explanation may be an effect of BMI on serum levels of prostate-specific antigen (PSA). To study the association between BMI and serum PSA as well as prostate cancer risk, a large cohort of men without prostate cancer at baseline was followed prospectively for prostate cancer diagnoses until 2015. Serum PSA and BMI were assessed among 15,827 men at baseline in 2010-2012. During follow-up, 735 men were diagnosed with prostate cancer, with 282 (38.4%) classified as high-grade cancers. Multivariable linear regression models and natural cubic regression splines were fitted for analyses of BMI and log-PSA. For risk analysis, Cox proportional hazards regression models were used to estimate hazard ratios (HR) and 95% confidence intervals (CI), and natural cubic Cox regression splines producing standardized cancer-free probabilities were fitted. Results showed that baseline serum PSA decreased by 1.6% (95% CI: -2.1 to -1.1) with every one-unit increase in BMI. Statistically significant decreases of 3.7, 11.7 and 32.3% were seen for increasing BMI categories of 25 < 30, 30 < 35 and ≥35 kg/m(2), respectively, compared to the reference (18.5 < 25 kg/m(2)). No statistically significant associations were seen between BMI and prostate cancer risk, although results were indicative of a positive association with incidence rates of high-grade disease and an inverse association with incidence of low-grade disease. However, findings regarding risk are limited by the short follow-up time. In conclusion, BMI was inversely associated with PSA levels. BMI should be taken into consideration when referring men to a prostate biopsy based on serum PSA levels. © 2016 UICC.

  4. XMGR5 users manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, K.R.; Fisher, J.E.

    1997-03-01

    ACE/gr is an XY plotting tool for workstations or X terminals using X. A few of its features are: user-defined scaling, tick marks, labels, symbols, line styles, and colors; batch mode for unattended plotting; reading and writing of parameters used during a session; polynomial regression, splines, running averages, DFT/FFT, and cross/auto-correlation; hardcopy support for PostScript, HP-GL, and FrameMaker .mif format. While ACE/gr has a convenient point-and-click interface, most parameter settings and operations are also available through a command-line interface (found in Files/Commands).

  5. Projecting 2D gene expression data into 3D and 4D space.

    PubMed

    Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D

    2007-04-01

    Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture mapping images of gene expression data onto b-spline based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images to the three dimensions (X, Y, and Z) of a b-spline model. B-spline model frameworks were built either from confocal data or de novo extracted from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize openGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.

  6. Genomic Bayesian functional regression models with interactions for predicting wheat grain yield using hyper-spectral image data.

    PubMed

    Montesinos-López, Abelardo; Montesinos-López, Osval A; Cuevas, Jaime; Mata-López, Walter A; Burgueño, Juan; Mondal, Sushismita; Huerta, Julio; Singh, Ravi; Autrique, Enrique; González-Pérez, Lorena; Crossa, José

    2017-01-01

    Modern agriculture uses hyperspectral cameras that provide hundreds of reflectance values at discrete narrow bands in many environments. These bands often cover the whole visible light spectrum and part of the infrared and ultraviolet light spectra. From these bands, vegetation indices are constructed for predicting agronomically important traits such as grain yield and biomass. However, since vegetation indices only use some wavelengths (referred to as bands), we propose using all bands simultaneously as predictor variables for the primary trait grain yield; results of several multi-environment maize (Aguate et al. in Crop Sci 57(5):1-8, 2017) and wheat (Montesinos-López et al. in Plant Methods 13(4):1-23, 2017) breeding trials indicated that using all bands produced better prediction accuracy than vegetation indices. However, until now, these prediction models have not accounted for the effects of genotype × environment (G × E) and band × environment (B × E) interactions incorporating genomic or pedigree information. In this study, we propose Bayesian functional regression models that take into account all available bands, genomic or pedigree information, the main effects of lines and environments, as well as G × E and B × E interaction effects. The data set used is comprised of 976 wheat lines evaluated for grain yield in three environments (Drought, Irrigated and Reduced Irrigation). The reflectance data were measured in 250 discrete narrow bands ranging from 392 to 851 nm. The proposed Bayesian functional regression models were implemented using two types of basis: B-splines and Fourier. Results of the proposed Bayesian functional regression models, including all the wavelengths for predicting grain yield, were compared with results from conventional models with and without bands.
We observed that the models with B × E interaction terms were the most accurate, whereas the functional regression models (with B-spline and Fourier bases) and the conventional models performed similarly in terms of prediction accuracy. However, the functional regression models are more parsimonious and computationally more efficient because the number of beta coefficients to be estimated is 21 (the number of basis functions), rather than the 250 regression coefficients for all bands. In this study, adding pedigree or genomic information did not increase prediction accuracy.
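    The dimension-reduction step described above, projecting p = 250 bands onto roughly 21 B-spline basis functions, can be sketched with SciPy (simulated reflectances; the knot layout is an assumption):

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
wavelengths = np.linspace(392, 851, 250)   # band centres in nm

# Clamped cubic B-spline basis with 21 functions over the band range.
n_basis, degree = 21, 3
inner = np.linspace(392, 851, n_basis - degree + 1)
knots = np.concatenate([[392] * degree, inner, [851] * degree])
basis = BSpline.design_matrix(wavelengths, knots, degree).toarray()  # (250, 21)

# Project each line's 250-band spectrum onto the basis: the regression then
# estimates 21 coefficients instead of 250.
X = rng.normal(size=(100, 250))            # 100 lines x 250 bands (simulated)
Z = X @ basis                              # functional covariates, 21 columns
```

The fitted coefficient function is then reconstructed as the basis matrix times the 21 estimated betas, which is what makes the functional models parsimonious.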

  7. Hierarchical Control and Trajectory Planning

    NASA Technical Reports Server (NTRS)

    Martin, Clyde F.; Horn, P. W.

    1994-01-01

    Most of the time on this project was spent on the trajectory planning problem. The construction is equivalent to the classical spline construction in the case that the system matrix is nilpotent. If the dimension of the system is n then the spline of degree 2n-1 is constructed. This gives a new approach to the construction of splines that is more efficient than the usual construction and at the same time allows the construction of a much larger class of splines. All known classes of splines are reconstructed using the approach of linear control theory. As a numerical analysis tool control theory gives a very good tool for constructing splines. However, for the purposes of trajectory planning it is quite another story. Enclosed in this document are four reports done under this grant.

  8. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Keller, J.; Wallen, R.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  9. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

    Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and on a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
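    The sampling use described above, representing the quantile function with a spline and then drawing Q(U) for uniform U, can be sketched as follows (a cubic spline on empirical quantiles stands in for the paper's B-spline and rational spline formulations):

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(4)
sample = rng.normal(loc=2.0, scale=1.0, size=5000)   # observed data (toy)

# Cubic-spline representation of the empirical quantile function Q(p).
probs = np.linspace(0.001, 0.999, 199)
q = CubicSpline(probs, np.quantile(sample, probs))

# Inverse-transform sampling: new draws are Q(U) with U ~ Uniform(0, 1)
# (restricted to the fitted probability range here).
new_draws = q(rng.uniform(0.001, 0.999, 5000))
```

Evaluating the spline is far cheaper than inverting an analytic CDF numerically, which is the speed advantage the record reports.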

  10. B-spline Method in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient spline methods for complex geometries are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.

  11. Interpolation by new B-splines on a four directional mesh of the plane

    NASA Astrophysics Data System (ADS)

    Nouisser, O.; Sbibih, D.

    2004-01-01

    In this paper we construct new simple and composed B-splines on the uniform four directional mesh of the plane, in order to improve the approximation order of B-splines studied in Sablonniere (in: Program on Spline Functions and the Theory of Wavelets, Proceedings and Lecture Notes, Vol. 17, University of Montreal, 1998, pp. 67-78). If φ is such a simple B-spline, we first determine the space of polynomials with maximal total degree included in , and we prove some results concerning the linear independence of the family . Next, we show that the cardinal interpolation with φ is correct and we study in S(φ) a Lagrange interpolation problem. Finally, we define composed B-splines by repeated convolution of φ with the characteristic functions of a square or a lozenge, and we give some of their properties.

  12. On the spline-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a spline-based wavelet basis is constructed. Given an n-th order spline basis it is proved that the differentiation matrix is accurate of order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.

  13. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry

    DTIC Science & Technology

    2015-12-01

    ARL-SR-0347 ● DEC 2015. US Army Research Laboratory. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry.

  14. Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases

    NASA Astrophysics Data System (ADS)

    Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre

    2011-12-01

    Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) that are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS, but while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we used Maple 12.

  15. Introduction to methodology of dose-response meta-analysis for binary outcome: With application on software.

    PubMed

    Zhang, Chao; Jia, Pengli; Yu, Liu; Xu, Chang

    2018-05-01

    Dose-response meta-analysis (DRMA) is widely applied to investigate the dose-specific relationship between independent and dependent variables. Such methods have been in use for over 30 years and are increasingly employed in healthcare and clinical decision-making. In this article, we give an overview of the methodology used in DRMA. We summarize the commonly used regression models and pooling methods in DRMA, and use an example to illustrate how to conduct a DRMA with these methods. Five regression models, linear regression, piecewise regression, natural polynomial regression, fractional polynomial regression, and restricted cubic spline regression, are illustrated in this article to fit the dose-response relationship. Two types of pooling approaches, the one-stage and the two-stage approach, are illustrated for pooling the dose-response relationship across studies. The example showed similar results among these models. Several dose-response meta-analysis methods can be used for investigating the relationship between exposure level and the risk of an outcome. However, the methodology of DRMA still needs to be improved. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
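    A restricted cubic spline basis of the kind named above can be built directly from the standard truncated-power parameterization (knot locations here are arbitrary examples):

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: cubic between the knots, constrained
    to be linear beyond the outer knots (standard parameterization; a sketch,
    not code from the article)."""
    x, k = np.asarray(x, float), np.asarray(knots, float)
    plus3 = lambda u: np.maximum(u, 0.0) ** 3   # truncated cubic (u)_+^3
    cols = [x]                                  # the linear term
    for j in range(len(k) - 2):                 # one nonlinear term per knot
        cols.append(plus3(x - k[j])
                    - plus3(x - k[-2]) * (k[-1] - k[j]) / (k[-1] - k[-2])
                    + plus3(x - k[-1]) * (k[-2] - k[j]) / (k[-1] - k[-2]))
    return np.column_stack(cols)

dose = np.linspace(0.0, 10.0, 101)
B = rcs_basis(dose, knots=[1.0, 3.0, 5.0, 8.0])   # 4 knots -> 3 columns
```

With k knots the basis has k - 1 columns, so the fitted dose-response curve stays flexible in the observed dose range while the linear tails prevent wild extrapolation.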

  16. Three-dimensional prediction of soil physical, chemical, and hydrological properties in a forested catchment of the Santa Catalina CZO

    NASA Astrophysics Data System (ADS)

    Shepard, C.; Holleran, M.; Lybrand, R. A.; Rasmussen, C.

    2014-12-01

    Understanding critical zone evolution and function requires an accurate assessment of local soil properties. Two-dimensional (2D) digital soil mapping provides a general assessment of soil characteristics across a sampled landscape, but lacks the ability to predict soil properties with depth. The utilization of mass-preserving spline functions enables the extrapolation of soil properties with depth, extending predictive functions to three dimensions (3D). The present study was completed in the Marshall Gulch (MG) catchment, located in the Santa Catalina Mountains, 30 km northwest of Tucson, Arizona, as part of the Santa Catalina-Jemez Mountains Critical Zone Observatory. Twenty-four soil pits were excavated and described following standard procedures. Mass-preserving splines were used to extrapolate mass carbon (kg C m-2); percent clay, silt, and sand (%); sodium mass flux (kg Na m-2); and pH for the 24 sampled soil pits in 1-cm depth increments. Saturated volumetric water content (θs) and volumetric water content at 10 kPa (θ10) were predicted using ROSETTA and established empirical relationships. The described profiles were all sampled to differing depths; to compensate for the unevenness of the profile descriptions, the soil depths were standardized from 0.0 to 1.0 and then split into five equal standard depth sections. A logit transformation was used to normalize the target variables. Step-wise regressions were calculated using available environmental covariates to predict each variable across the catchment in each depth section, and interpolated model residuals were added back to the predicted layers to generate the final soil maps. Logit-transformed R2 for the predictive functions varied widely, ranging from 0.20 to 0.79, with logit-transformed RMSE ranging from 0.15 to 2.77. The MG catchment was further classified into clusters with similar properties based on the environmental covariates, and representative depth functions for each target variable in each cluster were calculated. Mass-preserving splines combined with stepwise regressions are an effective tool for predicting soil physical, chemical, and hydrological properties with depth, enhancing our understanding of the critical zone.
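    The logit transformation used above to normalize bounded target variables (e.g., proportions such as percent clay rescaled to (0, 1)) can be sketched as follows; the small `eps` guard is an illustrative choice, not taken from the study:

```python
import math

def logit(p, eps=1e-6):
    """Map a proportion in [0, 1] to the real line; eps keeps 0 and 1 finite."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def inv_logit(z):
    """Back-transform a value on the logit scale to the (0, 1) scale."""
    return 1.0 / (1.0 + math.exp(-z))
```

    Regression is then carried out on the logit scale, and predictions are back-transformed before mapping.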

  17. [Relationship between shift work and overweight/obesity in male steel workers].

    PubMed

    Xiao, M Y; Wang, Z Y; Fan, H M; Che, C L; Lu, Y; Cong, L X; Gao, X J; Liu, Y J; Yuan, J X; Li, X M; Hu, B; Chen, Y P

    2016-11-10

    Objective: To investigate the relationship between shift work and overweight/obesity in male steel workers. Methods: A questionnaire survey was conducted among male steel workers selected during health examinations at Tangshan Steel Company from March 2015 to March 2016. The relationship between shift work and overweight/obesity was analyzed using a logistic regression model and a restricted cubic spline model. Results: A total of 7 262 male steel workers were surveyed; the overall prevalence of overweight/obesity was 64.5% (4 686/7 262), with an overweight rate of 34.3% and an obesity rate of 30.2%. After adjusting for age, educational level, and average monthly family income by multivariable logistic regression analysis, shift work was associated with overweight/obesity (OR = 1.19, 95% CI: 1.05-1.35) and with obesity (OR = 1.15, 95% CI: 1.00-1.32). Restricted cubic spline analysis showed a nonlinear dose-response relationship between years of shift work and overweight/obesity (nonlinear test χ² = 7.43, P < 0.05) and between years of shift work and obesity (nonlinear test χ² = 10.48, P < 0.05). Conclusion: Shift work was associated with overweight and obesity in male steel workers, and the number of shift-work years had a nonlinear relationship with overweight/obesity.

  18. GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data

    NASA Astrophysics Data System (ADS)

    Ibrahim, Noor Akma; Suliadi

    2010-11-01

    In this paper we propose a GEE-smoothing spline for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of parametric generalized estimating equations to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both the parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.

  19. Smoothing Spline ANOVA Decomposition of Arbitrary Splines: An Application to Eye Movements in Reading

    PubMed Central

    Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias

    2015-01-01

    The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure; i.e., the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared on an artificial example and on analyses of fixation durations during reading. PMID:25816246

  20. Estimation of soil clay and organic matter using two quantitative methods (PLSR and MARS) based on reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Nawar, Said; Buddenbaum, Henning; Hill, Joachim

    2014-05-01

    A rapid and inexpensive soil analytical technique is needed for soil quality assessment and accurate mapping. This study investigated a method for improved estimation of soil clay (SC) and organic matter (OM) using reflectance spectroscopy. Seventy soil samples were collected from the Sinai Peninsula in Egypt to relate soil clay and organic matter to the soil spectra. Soil samples were scanned with an Analytical Spectral Devices (ASD) spectrometer (350-2500 nm). Three spectral formats were used in the calibration models derived from the spectra and the soil properties: (1) original reflectance spectra (OR), (2) first-derivative spectra smoothed using the Savitzky-Golay technique (FD-SG), and (3) continuum-removed reflectance (CR). Partial least-squares regression (PLSR) models using the CR of the 400-2500 nm spectral region resulted in R2 = 0.76 and 0.57, and RPD = 2.1 and 1.5, for estimating SC and OM, respectively, indicating better performance than that obtained using OR and FD-SG. The multivariate adaptive regression splines (MARS) calibration model with the CR spectra gave further improved performance (R2 = 0.89 and 0.83, RPD = 3.1 and 2.4) for estimating SC and OM, respectively. The results show that MARS models have great potential for estimating SC and OM compared with PLSR models. The results obtained in this study have potential value in the field of soil spectroscopy because they can be applied directly to the mapping of soil properties using remote sensing imagery under arid conditions. Key Words: soil clay, organic matter, PLSR, MARS, reflectance spectroscopy.
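    Continuum removal, the spectral format that produced the best calibrations above, divides each spectrum by its upper convex hull so that absorption features are measured against a common baseline. A standalone sketch (not the authors' code; it assumes strictly increasing wavelengths):

```python
def upper_hull(wl, refl):
    """Upper convex hull of (wavelength, reflectance) points (monotone chain)."""
    pts = sorted(zip(wl, refl))
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # pop while the turn o -> a -> p is not clockwise, i.e. point a
            # lies on or below the chord from o to p
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def continuum_removed(wl, refl):
    """Divide the spectrum by its piecewise-linear upper hull (values <= 1)."""
    hull = upper_hull(wl, refl)
    out, seg = [], 0
    for x, y in zip(wl, refl):
        while seg < len(hull) - 2 and hull[seg + 1][0] < x:
            seg += 1                               # advance to the hull segment containing x
        (x0, y0), (x1, y1) = hull[seg], hull[seg + 1]
        base = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        out.append(y / base)
    return out
```

    The continuum-removed values equal 1 on the hull itself and dip below 1 inside absorption features, which is what the calibration models are fitted to.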

  1. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang.1 This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems,2 but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
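    The cubic B-spline basis functions underlying both the conventional FFD model and its wavelet reparameterization can be evaluated with the standard Cox-de Boor recursion; a minimal sketch (uniform scalar knots, unrelated to the registration code itself):

```python
def bspline(i, p, knots, x):
    """Value of the i-th B-spline of degree p at x (Cox-de Boor recursion).

    Uses half-open knot intervals; the right endpoint of the last interval
    would need special handling in production code.
    """
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((x - knots[i]) / (knots[i + p] - knots[i])
                * bspline(i, p - 1, knots, x))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1])
                 * bspline(i + 1, p - 1, knots, x))
    return left + right
```

    On a uniform knot vector the cubic basis functions form a partition of unity over the interior interval, the property that makes FFD control-point weights behave like local averages.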

  2. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

    PubMed Central

    Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

    2012-01-01

    In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999

  3. WNN 92; Proceedings of the 3rd Workshop on Neural Networks: Academic/Industrial/NASA/Defense, Auburn Univ., AL, Feb. 10-12, 1992 and South Shore Harbour, TX, Nov. 4-6, 1992

    NASA Technical Reports Server (NTRS)

    Padgett, Mary L. (Editor)

    1993-01-01

    The present conference discusses such neural networks (NN) related topics as their current development status, NN architectures, NN learning rules, NN optimization methods, NN temporal models, NN control methods, NN pattern recognition systems and applications, biological and biomedical applications of NNs, VLSI design techniques for NNs, NN systems simulation, fuzzy logic, and genetic algorithms. Attention is given to missileborne integrated NNs, adaptive-mixture NNs, implementable learning rules, an NN simulator for travelling salesman problem solutions, similarity-based forecasting, NN control of hypersonic aircraft takeoff, NN control of the Space Shuttle Arm, an adaptive NN robot manipulator controller, a synthetic approach to digital filtering, NNs for speech analysis, adaptive spline networks, an anticipatory fuzzy logic controller, and encoding operations for fuzzy associative memories.

  4. Student Support for Research in Hierarchical Control and Trajectory Planning

    NASA Technical Reports Server (NTRS)

    Martin, Clyde F.

    1999-01-01

    Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.

  5. Beta-function B-spline smoothing on triangulations

    NASA Astrophysics Data System (ADS)

    Dechevsky, Lubomir T.; Zanaty, Peter

    2013-03-01

    In this work we investigate a novel family of Ck-smooth rational basis functions on triangulations for fitting, smoothing, and denoising geometric data. The introduced basis function is closely related to a recently introduced general method utilizing generalized expo-rational B-splines, which provides Ck-smooth convex resolutions of unity on very general disjoint partitions and overlapping covers of multidimensional domains with complex geometry. One of the major advantages of this new triangular construction is its locality with respect to the star-1 neighborhood of the vertex at which the basis function provides Hermite interpolation. This locality can in turn be utilized in adaptive methods where, for instance, a local refinement of the underlying triangular mesh affects only the refined domain, whereas in other methods one needs to investigate what changes occur outside of the refined domain. Both the triangular and the general smooth constructions have the potential to become a new versatile tool of Computer Aided Geometric Design (CAGD), Finite and Boundary Element Analysis (FEA/BEA), and Isogeometric Analysis (IGA).

  6. Improving the Diagnostic Specificity of CT for Early Detection of Lung Cancer: 4D CT-Based Pulmonary Nodule Elastometry

    DTIC Science & Technology

    2013-08-01

    transformation models, such as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the...research project. References: 1. Bookstein FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern...Rohr K, Stiehl HS, Sprengel R, Buzug TM, Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions

  7. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruberti, M.; Averbukh, V.; Decleva, P.

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for the Gaussian basis calculations, such as Cooper minima and high-energy tails, are found to be reproduced by the B-spline ADC in a very good agreement with the experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  8. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    PubMed

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-05-01

    Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy or diseased musculoskeletal systems, and computational models are used to analyze such systems. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Multicategorical Spline Model for Item Response Theory.

    ERIC Educational Resources Information Center

    Abrahamowicz, Michal; Ramsay, James O.

    1992-01-01

    A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)

  10. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
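    The backward-elimination idea for knots, starting from a generous candidate set and repeatedly dropping the knot whose removal degrades the least-squares fit the least, can be sketched with a linear truncated-power basis. This is an illustrative reconstruction in Python, not the FORTRAN programs described:

```python
import numpy as np

def design(x, knots):
    """Design matrix for a linear spline: [1, x, (x - k)_+ for each knot]."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def rss(x, y, knots):
    """Residual sum of squares of the least-squares spline fit."""
    X = design(x, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def backward_eliminate(x, y, knots, tol=1e-8):
    """Drop knots one at a time while the best removal barely increases the RSS."""
    knots = list(knots)
    while knots:
        trials = [(rss(x, y, knots[:i] + knots[i + 1:]), i)
                  for i in range(len(knots))]
        best_rss, i = min(trials)
        if best_rss - rss(x, y, knots) > tol:
            break                      # every remaining knot is doing real work
        knots.pop(i)
    return knots
```

    In practice the stopping rule would be an F-test or information criterion rather than a fixed RSS tolerance; the tolerance here keeps the sketch short.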

  11. Fitting multidimensional splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  12. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.

  13. Numerical solution of system of boundary value problems using B-spline with free parameter

    NASA Astrophysics Data System (ADS)

    Gupta, Yogesh

    2017-01-01

    This paper deals with a B-spline method for solving a system of boundary value problems. Differential equations are useful in various fields of science and engineering, and some interesting real-life problems involve more than one unknown function, resulting in systems of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points, together with the equations of the given system and the boundary conditions, resulting in a linear matrix equation.

  14. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper employs a multivariate spline method for sequential DWFS: first, a measurement model for the average slopes of a Shack-Hartmann wave-front sensor is built from bivariate simplex splines; next, a well-conditioned least-squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; finally, the object image is obtained by non-blind deconvolution. Simulated experiments at different turbulence strengths show that our method produces superior image restoration and noise rejection, especially when extracting multidirectional phase derivatives.

  15. Towards automating the discovery of certain innovative design principles through a clustering-based optimization technique

    NASA Astrophysics Data System (ADS)

    Bandaru, Sunith; Deb, Kalyanmoy

    2011-09-01

    In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.

  16. Prediction of Frequency for Simulation of Asphalt Mix Fatigue Tests Using MARS and ANN

    PubMed Central

    Fakhri, Mansour

    2014-01-01

    The fatigue life of asphalt mixes in laboratory tests is commonly determined by applying a sinusoidal or haversine waveform with a specific frequency. The pavement structure and loading conditions affect the shape and frequency of the tensile response pulses at the bottom of the asphalt layer. This paper introduces two methods for predicting the loading frequency in laboratory asphalt fatigue tests for better simulation of field conditions. Five thousand (5000) four-layered pavement sections were analyzed, and stress and strain response pulses in both the longitudinal and transverse directions were determined. After fitting the haversine function to the response pulses by the concept of the equal-energy pulse, the effective lengths of the response pulses were determined. Two methods, Multivariate Adaptive Regression Splines (MARS) and Artificial Neural Networks (ANN), were then employed to predict the effective length (i.e., frequency) of the tensile stress and strain pulses in the longitudinal and transverse directions based on the haversine waveform. Under both controlled-stress and controlled-strain modes, both methods (MARS and ANN) are capable of predicting the frequency of loading in HMA fatigue tests with very good accuracy, with ANN the more accurate of the two. It is furthermore shown that the results of the present study can be generalized to a sinusoidal waveform by a simple equation. PMID:24688400

  17. Prediction of frequency for simulation of asphalt mix fatigue tests using MARS and ANN.

    PubMed

    Ghanizadeh, Ali Reza; Fakhri, Mansour

    2014-01-01

    The fatigue life of asphalt mixes in laboratory tests is commonly determined by applying a sinusoidal or haversine waveform with a specific frequency. The pavement structure and loading conditions affect the shape and frequency of the tensile response pulses at the bottom of the asphalt layer. This paper introduces two methods for predicting the loading frequency in laboratory asphalt fatigue tests for better simulation of field conditions. Five thousand (5000) four-layered pavement sections were analyzed, and stress and strain response pulses in both the longitudinal and transverse directions were determined. After fitting the haversine function to the response pulses by the concept of the equal-energy pulse, the effective lengths of the response pulses were determined. Two methods, Multivariate Adaptive Regression Splines (MARS) and Artificial Neural Networks (ANN), were then employed to predict the effective length (i.e., frequency) of the tensile stress and strain pulses in the longitudinal and transverse directions based on the haversine waveform. Under both controlled-stress and controlled-strain modes, both methods (MARS and ANN) are capable of predicting the frequency of loading in HMA fatigue tests with very good accuracy, with ANN the more accurate of the two. It is furthermore shown that the results of the present study can be generalized to a sinusoidal waveform by a simple equation.

  18. Uncertainty quantification analysis of the dynamics of an electrostatically actuated microelectromechanical switch model

    NASA Astrophysics Data System (ADS)

    Snow, Michael G.; Bajaj, Anil K.

    2015-08-01

    This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the outputs characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
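    Latin Hypercube Sampling, used above to generate the model evaluations behind the MARS response surfaces, stratifies each input dimension into n equal-probability bins and places exactly one sample in each bin per dimension. A minimal sketch (illustrative, not the authors' implementation):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n points in [0, 1)^dims with exactly one point per stratum per dimension."""
    rng = random.Random(seed)
    points = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                         # random pairing of strata across dims
        for i, s in enumerate(strata):
            points[i][d] = (s + rng.random()) / n   # uniform draw inside stratum s
    return points
```

    Each unit-cube point would then be mapped through the inverse CDF of the corresponding input parameter's distribution before evaluating the switch model.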

  19. SPSS macros to compare any two fitted values from a regression model.

    PubMed

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
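    The matrix-algebra method the macros implement reduces to a linear contrast: for fitted values at covariate rows x1 and x2, the difference is c'b with c = x1 - x2, and its standard error is sqrt(c' V c), where V is the estimated covariance matrix of the coefficients. A minimal OLS sketch of the same computation (illustrative Python, not the SPSS macro code):

```python
import numpy as np

def compare_fitted(X, y, x1, x2):
    """Difference between two fitted values and its standard error (OLS)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    s2 = float(resid @ resid) / (n - p)       # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)         # covariance matrix of the coefficients
    c = np.asarray(x1, float) - np.asarray(x2, float)
    diff = float(c @ beta)
    se = float(np.sqrt(c @ cov @ c))
    return diff, se
```

    A 95% confidence interval is then diff plus or minus t(n-p, 0.975) times se, analogous to the !OLScomp output.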

  20. MATERNAL CHRONOLOGICAL AGE, PRENATAL AND PERINATAL HISTORY, SOCIAL SUPPORT, AND PARENTING OF INFANTS

    PubMed Central

    Bornstein, Marc H.; Putnick, Diane L.; Suwalsky, Joan T. D.; Gini, Motti

    2018-01-01

    The role of maternal chronological age in prenatal and perinatal history, social support, and parenting practices of new mothers (N = 335) was examined. Primiparas of 5-month-old infants ranged in age from 13 to 42 years. Age effects were zero, linear, and nonlinear. Nonlinear age effects were significantly associated up to a certain age with little or no association afterward; by spline regression, estimated points at which the slope of the regression line changed were 25 years for prenatal and perinatal history, 31 years for social supports, and 27 years for parenting practices. Given the expanding age range of first-time parents, these findings underscore the importance of incorporating maternal age as a factor in studies of parenting and child development. PMID:16942495
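    The spline regression described, two line segments joined at an estimated change point, can be sketched with a one-knot linear spline in numpy. The data below are synthetic and the knot value is illustrative, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: an outcome rises with maternal age up to a change
# point, then flattens (values are synthetic, not the study's).
age = rng.uniform(13, 42, 300)
knot_true = 27.0
y = 10.0 + 0.8 * np.minimum(age, knot_true) + rng.normal(0, 1.0, 300)

def fit_one_knot(age, y, knot):
    # basis: intercept, age, and the hinge max(age - knot, 0);
    # the hinge coefficient is the change in slope after the knot
    X = np.column_stack([np.ones_like(age), age, np.maximum(age - knot, 0.0)])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ b) ** 2)
    return b, rss

# Estimate the change point by profiling the residual sum of squares.
grid = np.arange(18.0, 38.0, 0.5)
rss = [fit_one_knot(age, y, k)[1] for k in grid]
knot_hat = grid[int(np.argmin(rss))]
b_hat, _ = fit_one_knot(age, y, knot_hat)  # b_hat[1]: pre-knot slope
```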

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bueno, G.; Ruiz, M.; Sanchez, S

    Breast cancer continues to be an important health problem among women. Early detection is the only way to improve breast cancer prognosis and significantly reduce mortality. By using CAD systems, radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of a B-spline-based gradient scheme, compared with wavelet and adaptive filtering, was investigated for the detection of calcification lesions and as part of CAD systems. The technique was applied to tissues of different density. A qualitative validation shows the success of the method.

  2. An Examination of New Paradigms for Spline Approximations.

    PubMed

    Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A

    2006-01-01

    Lavery splines are examined in the univariate and bivariate cases. In both instances, relaxation-based algorithms for the approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case is investigated in greater detail so as to further the understanding of the bivariate case.

  3. Conformal Solid T-spline Construction from Boundary T-spline Representations

    DTIC Science & Technology

    2012-07-01

    Report documentation excerpt (title: Conformal Solid T-spline Construction from Boundary T-spline Representations). The work was supported by Y. Zhang's ONR-YIP award N00014-10-1-0698 and ONR Grant N00014-08-1-0653; the work of T. J. R. Hughes was supported by ONR Grant N00014-08-1-0992, NSF GOALI CMI-0700807/0700204, NSF CMMI-1101007 and a SINTEF grant UTA10-000374.

  4. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.

    PubMed

    Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role in medical image processing and is widely applied due to its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is adopted as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of the three most time-consuming steps, B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced, since the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large set of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation, owing to the powerful parallel computing ability of the Graphics Processing Unit (GPU).
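    The lookup-table idea exploits the fact that, within a control-point cell, the four cubic B-spline basis weights depend only on the fractional offset, so they can be tabulated once and reused for every voxel. A minimal sketch under simplifying assumptions (uniform cubic B-spline FFD in 1D with a 256-entry table; the paper's implementation is 3D and GPU-based):

```python
import numpy as np

# Uniform cubic B-spline basis functions B0..B3 evaluated at u in [0, 1).
def bspline_weights(u):
    return np.stack([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ], axis=-1)

# Lookup table: basis weights at quantized sub-cell offsets, computed once.
LUT_SIZE = 256
lut = bspline_weights(np.arange(LUT_SIZE) / LUT_SIZE)  # shape (256, 4)

def ffd_deform_1d(x, control, spacing):
    """Displacement at positions x from control-point offsets, via the LUT."""
    t = x / spacing
    i = np.floor(t).astype(int)                 # control cell index
    u_idx = ((t - i) * LUT_SIZE).astype(int)    # quantized offset -> LUT row
    w = lut[u_idx]                              # (n, 4) basis weights
    # gather the four neighboring control points phi_{i-1}..phi_{i+2}
    idx = np.clip(i[:, None] + np.arange(-1, 3)[None, :], 0, len(control) - 1)
    return np.sum(w * control[idx], axis=1)

control = np.zeros(12)
control[5] = 1.0                                # single displaced control point
x = np.linspace(1.0, 9.0, 100)
disp = ffd_deform_1d(x, control, spacing=1.0)
```

Because the cubic B-spline basis forms a partition of unity, each LUT row sums to one; the peak influence of a single control point is 2/3, which the computed displacement field reproduces.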

  5. Prediction of energy expenditure and physical activity in preschoolers.

    PubMed

    Butte, Nancy F; Wong, William W; Lee, Jong Soo; Adolph, Anne L; Puyau, Maurice R; Zakeri, Issa F

    2014-06-01

    Accurate, nonintrusive, and feasible methods are needed to predict energy expenditure (EE) and physical activity (PA) levels in preschoolers. Herein, we validated cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on accelerometry and heart rate (HR) for the prediction of EE using room calorimetry and doubly labeled water (DLW), and established accelerometry cut points for PA levels. Fifty preschoolers, mean ± SD age 4.5 ± 0.8 yr, participated in room calorimetry for minute-by-minute measurements of EE, accelerometer counts (AC) (Actiheart and ActiGraph GT3X+), and HR (Actiheart). One hundred five free-living children, ages 4.6 ± 0.9 yr, completed the 7-d DLW procedure while wearing the devices. AC cut points for PA levels were established using smoothing splines and receiver operating characteristic curves. On the basis of calorimetry, mean percent errors for EE were -2.9% ± 10.8% and -1.1% ± 7.4% for CSTS models and -1.9% ± 9.6% and 1.3% ± 8.1% for MARS models using the Actiheart and ActiGraph+HR devices, respectively. On the basis of DLW, mean percent errors were -0.5% ± 9.7% and 4.1% ± 8.5% for CSTS models and 3.2% ± 10.1% and 7.5% ± 10.0% for MARS models using the Actiheart and ActiGraph+HR devices, respectively. Applying activity EE thresholds, final accelerometer cut points were determined: 41, 449, and 1297 cpm for the Actiheart x-axis; 820, 3908, and 6112 cpm for the ActiGraph vector magnitude; and 240, 2120, and 4450 cpm for the ActiGraph x-axis for sedentary/light, light/moderate, and moderate/vigorous PA (MVPA), respectively. On the basis of confusion matrices, correctly classified rates were 81%-83% for sedentary PA, 58%-64% for light PA, and 62%-73% for MVPA. The lack of bias and acceptable limits of agreement affirm the validity of the CSTS and MARS models for the prediction of EE in preschool-aged children. Accelerometer cut points are satisfactory for the classification of sedentary, light, and moderate/vigorous levels of PA in preschoolers.
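    Choosing a count cut point from an ROC curve can be sketched in numpy. The study does not state its exact threshold criterion, so this sketch assumes the common choice of maximizing Youden's J (sensitivity + specificity - 1); the minute-level counts and labels below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic minute-level data: accelerometer counts per minute and a
# calorimetry-based label for moderate-to-vigorous activity (MVPA).
n = 2000
mvpa = rng.random(n) < 0.3
counts = np.where(mvpa, rng.normal(3000, 800, n), rng.normal(1200, 600, n))
counts = np.clip(counts, 0, None)

def youden_cut_point(counts, label):
    """Pick the count threshold maximizing sensitivity + specificity - 1."""
    best_j, best_t = -1.0, 0.0
    for t in np.unique(counts):
        pred = counts >= t
        sens = np.mean(pred[label])       # true positive rate
        spec = np.mean(~pred[~label])     # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

cut, j = youden_cut_point(counts, mvpa)
```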

  6. Modeling positional effects of regulatory sequences with spline transformations increases prediction accuracy of deep neural networks

    PubMed Central

    Avsec, Žiga; Cheng, Jun; Gagneur, Julien

    2018-01-01

    Abstract Motivation Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as transcription start site, exon boundaries or polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its strength to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Results Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Availability and implementation Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at https://github.com/gagneurlab/Manuscript_Avsec_Bioinformatics_2017. Contact avsec@in.tum.de or gagneur@in.tum.de Supplementary information Supplementary data are available at Bioinformatics online. PMID:29155928
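    The core of a spline transformation is replacing a raw distance scalar with a learned smooth function of it: expand the distance in a fixed spline basis and learn a linear combination of the basis functions. The published module is a Keras layer in CONCISE; the numpy sketch below only mimics that basis-expansion idea, using a standard natural cubic spline (truncated-power) construction and a synthetic positional effect:

```python
import numpy as np

def natural_cubic_basis(x, knots):
    """Natural cubic spline basis via the standard truncated-power construction."""
    k = np.asarray(knots, dtype=float)
    def d(j):
        # (x - k_j)+^3 terms, adjusted so the basis is linear beyond the boundary knots
        return (np.maximum(x - k[j], 0) ** 3
                - np.maximum(x - k[-1], 0) ** 3) / (k[-1] - k[j])
    cols = [np.ones_like(x), x]
    for j in range(len(k) - 2):
        cols.append(d(j) - d(len(k) - 2))
    return np.column_stack(cols)

rng = np.random.default_rng(4)

# Distances to a genomic landmark (e.g., a TSS) and a smooth positional
# "effect" the module should learn; both are synthetic.
dist = rng.uniform(0, 1000, 500)
effect = np.exp(-dist / 200.0)

knots = np.linspace(0, 1000, 8)
B = natural_cubic_basis(dist, knots)            # (500, 8) feature matrix
w, *_ = np.linalg.lstsq(B, effect, rcond=None)  # "learned" linear weights
fitted = B @ w
```

In the actual layer, the weights `w` are trained jointly with the rest of the network by gradient descent; the least-squares fit here just shows that a handful of spline basis functions captures a smooth distance effect that a single linear term cannot.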

  7. Development and validation of a shared decision-making instrument for health-related quality of life one year after total hip replacement based on quality registries data.

    PubMed

    Nemes, Szilard; Rolfson, Ola; Garellick, Göran

    2018-02-01

    Clinicians considering improvements in health-related quality of life (HRQoL) after total hip replacement (THR) must account for multiple pieces of information. Evidence-based decisions are important to best assess the effect of THR on HRQoL. This work aims at constructing a shared decision-making tool that helps clinicians assess the future benefits of THR by offering predictions of the 1-year postoperative HRQoL of THR patients. We used data from the Swedish Hip Arthroplasty Register. Data from 2008 were used as the training set and data from 2009 to 2012 as the validation set. We adopted two approaches. First, we assumed a continuous distribution for the EQ-5D index and modelled the postoperative EQ-5D index with regression models. Second, we modelled the five dimensions of the EQ-5D and weighted together the predictions using the UK Time Trade-Off value set. As predictors, we used the preoperative EQ-5D dimensions and EQ-5D index, EQ visual analogue scale, visual analogue scale pain, Charnley classification, age, gender, body mass index, American Society of Anesthesiologists (ASA) classification, surgical approach and prosthesis type. Additionally, the tested algorithms were combined into a single predictive tool by stacking. The best predictive power was obtained by multivariate adaptive regression splines (R² = 0.158). However, this was not significantly better than the predictive power of linear regression (R² = 0.157). The stacked model had a predictive power of 17%. Successful implementation of a shared decision-making tool that can aid clinicians and patients in understanding expected improvement in HRQoL following THR would require higher predictive power than we achieved. For a shared decision-making tool to succeed, further variables, such as socioeconomics, need to be considered. © 2016 John Wiley & Sons, Ltd.

  8. Gear Spline Coupling Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yi; Errichello, Robert

    2013-08-29

    An analytical model is developed to evaluate the design of a spline coupling. For a given torque and shaft misalignment, the model calculates the number of teeth in contact, tooth loads, stiffnesses, stresses, and safety factors. The analytic model provides essential spline coupling design and modeling information and could be easily integrated into gearbox design and simulation tools.

  9. Design, Test, and Evaluation of a Transonic Axial Compressor Rotor with Splitter Blades

    DTIC Science & Technology

    2013-09-01

    List-of-figures excerpt: Figure 13, third-order spline fit for the blade camber line distribution; Figure 14, third-order spline fit for the blade thickness distribution; Figure 15, blade leading edge, third-order spline fit for the thickness distribution; Figure 16, blade leading edge and trailing edge slope blending.

  10. Intake of different types of dairy and its prospective association with risk of type 2 diabetes: The Rotterdam Study.

    PubMed

    Brouwer-Brolsma, E M; van Woudenbergh, G J; Oude Elferink, S J W H; Singh-Povel, C M; Hofman, A; Dehghan, A; Franco, O H; Feskens, E J M

    2016-11-01

    The prevalence of type 2 diabetes (T2DM) is increasing. Several studies have suggested a beneficial effect of major dairy nutrients on insulin production and sensitivity; conversely, harmful effects have been suggested as well. This study aimed to investigate the full range of dairy products and their association with incident T2DM in Dutch adults aged ≥55 years participating in the Rotterdam Study. Dairy intake was assessed with a validated FFQ, including total, skimmed, semi-skimmed, full-fat, fermented, and non-fermented dairy, and subclasses of these product groups. Verified prevalent and incident diabetes were documented. Cox proportional hazards regression and spline regression were used to analyse the data, adjusting for age, sex, alcohol, smoking, education, physical activity, body mass index, intake of total energy, energy-adjusted meat intake, and energy-adjusted fish intake. Median total dairy intake was 398 g/day (IQR 259-559 g/day). Through 9.5 ± 4.1 years of follow-up, 393 cases of incident T2DM were reported. Cox and spline regression did not point towards associations of total dairy consumption, dairy consumption based on fat content, non-fermented or fermented dairy consumption, or individual dairy product consumption with incident T2DM. The HR for total dairy intake and T2DM was 0.93 (95% CI: 0.70-1.23) in the upper quartile (P-for-trend 0.76). This prospective cohort study did not point towards an association between dairy consumption and T2DM. Copyright © 2016 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.

  11. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

    PubMed

    Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

    2014-11-05

    Exposures to bisphenol-A, a weak estrogenic chemical widely used for the production of plastic containers, can affect rodent behaviour. Thus, we examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A while pregnant and/or lactating, by median and linear spline analyses. Subsequently, meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses. Conversely, a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits, or the time they took to manifest an attitude, was analysed, the meta-regression indicated a borderline-significant increment of anxiogenic-like effects at low doses regardless of sex (β = -0.8%, 95% C.I. -1.7 to 0.1, P = 0.076, at ≤120 μg bisphenol-A), whereas only males exhibited a significant inhibition of spatial skills (β = 0.7%, 95% C.I. 0.2 to 1.2, P = 0.004, at ≤100 μg/day). A significant increment of aggressiveness was observed in both sexes (β = 67.9, 95% C.I. 3.4 to 172.5, P = 0.038, at >4.0 μg). Bisphenol-A treatments also significantly abrogated spatial learning and ability in males (P < 0.001 vs. females). Overall, our study showed that developmental exposures to low doses of bisphenol-A, e.g. ≤120 μg/day, were associated with behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.

  12. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1985-01-01

    Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and its spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.

  13. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images

    PubMed Central

    Wang, Yangping; Wang, Song

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role in medical image processing and is widely applied due to its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is adopted as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of the three most time-consuming steps, B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced, since the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large set of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation, owing to the powerful parallel computing ability of the Graphics Processing Unit (GPU). PMID:28053653

  14. Estimation of Mangrove Forest Aboveground Biomass Using Multispectral Bands, Vegetation Indices and Biophysical Variables Derived from Optical Satellite Imageries: Rapideye, Planetscope and SENTINEL-2

    NASA Astrophysics Data System (ADS)

    Balidoy Baloloy, Alvin; Conferido Blanco, Ariel; Gumbao Candido, Christian; Labadisos Argamosa, Reginal Jay; Lovern Caboboy Dumalag, John Bart; Carandang Dimapilis, Lee Lady; Camero Paringit, Enrico

    2018-04-01

    Estimation of mangrove forest aboveground biomass (AGB) is essential in determining the environmental and economic values of mangrove forests. Biomass prediction models can be developed through the integration of remote sensing, field data and statistical models. This study aims to assess and compare the biomass predictor potential of multispectral bands, vegetation indices and biophysical variables that can be derived from three optical satellite systems: Sentinel-2 with 10 m, 20 m and 60 m resolution; RapidEye with 5 m resolution; and PlanetScope with 3 m ground resolution. Field data for biomass were collected from a Rhizophoraceae-dominated mangrove forest in Masinloc, Zambales, Philippines, where 30 test plots (1.2 ha) and 5 validation plots (0.2 ha) were established. Prior to the generation of indices, images from the three satellite systems were pre-processed using atmospheric correction tools in SNAP (Sentinel-2), ENVI (RapidEye) and Python (PlanetScope). The major predictor bands tested are Blue, Green and Red, which are present in all three systems, and the Red-edge band from Sentinel-2 and RapidEye. The tested vegetation index predictors are Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), Green NDVI (GNDVI), Simple Ratio (SR), and Red-edge Simple Ratio (SRre). The study generated prediction models through conventional linear regression and multivariate regression. Higher coefficient of determination (r²) values were obtained using multispectral band predictors for Sentinel-2 (r² = 0.89) and PlanetScope (r² = 0.80), and vegetation indices for RapidEye (r² = 0.92). Multivariate Adaptive Regression Spline (MARS) models performed better than the linear regression models, with r² ranging from 0.62 to 0.92. Based on the r² and root-mean-square errors (RMSEs), the best biomass prediction model per satellite was chosen and maps were generated. The accuracy of the predicted biomass maps was high for both Sentinel-2 (r² = 0.92) and RapidEye data (r² = 0.91).
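    The vegetation index predictors used here are simple band ratios. A sketch of NDVI, GNDVI and SAVI from reflectance arrays with numpy; the reflectance values are synthetic, and the formulas are the standard definitions (SAVI with the usual soil-brightness factor L = 0.5):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic surface reflectance bands (0-1) for a small image tile.
shape = (64, 64)
red = rng.uniform(0.02, 0.15, shape)
green = rng.uniform(0.03, 0.18, shape)
nir = rng.uniform(0.25, 0.55, shape)

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    return (nir - green) / (nir + green)

def savi(nir, red, L=0.5):
    # L is the soil-brightness correction factor (0.5 is the common default)
    return (1 + L) * (nir - red) / (nir + red + L)

v_ndvi, v_gndvi, v_savi = ndvi(nir, red), gndvi(nir, green), savi(nir, red)
```

Per-pixel index rasters like these, stacked with the raw bands, form the predictor matrix fed to the linear and MARS regressions.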

  15. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  16. Spline curve matching with sparse knot sets

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2004-01-01

    This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on the relative locations of corresponding knot points and is reliable primarily for dense point sets, we use the deformation energy of a thin-plate-spline mapping between sparse knot points and normalized local...
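    The deformation-energy criterion rests on the standard 2D thin-plate-spline machinery: fit the TPS carrying one knot set onto the other and use the bending energy wᵀKw of the non-affine part as the dissimilarity, which is zero exactly when the mapping is affine. A numpy sketch under that standard formulation (kernel U(r) = r² log r); `tps_fit` and the point sets are illustrative, not the paper's code:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src -> dst; return bending energy."""
    n = len(src)
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d ** 2 * np.log(d), 0.0)   # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])              # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    energy = 0.0
    for dim in range(2):                               # x and y maps separately
        rhs = np.concatenate([dst[:, dim], np.zeros(3)])
        sol = np.linalg.solve(A, rhs)
        w = sol[:n]                                    # non-affine weights
        energy += w @ K @ w                            # bending energy term
    return energy

rng = np.random.default_rng(6)
src = rng.random((15, 2))
affine = src @ np.array([[1.2, 0.1], [-0.1, 0.9]]) + 0.3  # pure affine warp
bent = affine + 0.1 * np.sin(6 * src)                     # add real bending

e_affine = tps_fit(src, affine)  # ~0: affine maps need no bending
e_bent = tps_fit(src, bent)      # > 0: nonlinear warp costs energy
```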

  17. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    PubMed

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for the spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. Interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps; it preserves more of the data but consequently includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and Gaussian kernel methods produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
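    The interpolation-versus-smoothing distinction can be made concrete with a minimal Gaussian kernel (Nadaraya-Watson) smoother over scattered cell counts, evaluated on the regular grid underlying a contour map. The original analysis is an R script; this numpy sketch with synthetic counts only illustrates the smoothing step, and the bandwidth plays the role of the smoothing parameter the abstract warns about:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic retinal sampling: a central density peak plus counting noise,
# observed at scattered sample sites on a unit retina.
sites = rng.random((300, 2))
true_density = 500 + 4000 * np.exp(-np.sum((sites - 0.5) ** 2, axis=1) / 0.05)
counts = rng.poisson(true_density).astype(float)

def gaussian_smooth(grid, sites, counts, bandwidth=0.08):
    """Nadaraya-Watson smoothing: kernel-weighted average of the counts."""
    d2 = np.sum((grid[:, None, :] - sites[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return (w @ counts) / np.sum(w, axis=1)

# Evaluate the smoothed map on a regular grid (the basis of a contour map).
gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density_map = gaussian_smooth(grid, sites, counts)
```

Unlike interpolation, the smoothed map does not pass through the noisy counts; outliers are averaged away at the cost of slightly flattening the true peak, and a larger bandwidth flattens it further.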

  18. An Interactive Tool For Semi-automated Statistical Prediction Using Earth Observations and Models

    NASA Astrophysics Data System (ADS)

    Zaitchik, B. F.; Berhane, F.; Tadesse, T.

    2015-12-01

    We developed a semi-automated statistical prediction tool applicable to concurrent analysis or seasonal prediction of any time series variable in any geographic location. The tool was developed using Shiny, JavaScript, HTML and CSS. A user can extract a predictand by drawing a polygon over a region of interest on the provided user interface (global map). The user can select the Climatic Research Unit (CRU) precipitation or Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) as predictand. They can also upload their own predictand time series. Predictors can be extracted from sea surface temperature, sea level pressure, winds at different pressure levels, air temperature at various pressure levels, and geopotential height at different pressure levels. By default, reanalysis fields are applied as predictors, but the user can also upload their own predictors, including a wide range of compatible satellite-derived datasets. The package generates correlations of the variables selected with the predictand. The user also has the option to generate composites of the variables based on the predictand. Next, the user can extract predictors by drawing polygons over the regions that show strong correlations (composites). Then, the user can select some or all of the statistical prediction models provided. Provided models include Linear Regression models (GLM, SGLM), Tree-based models (bagging, random forest, boosting), Artificial Neural Network, and other non-linear models such as Generalized Additive Model (GAM) and Multivariate Adaptive Regression Splines (MARS). Finally, the user can download the analysis steps they used, such as the region they selected, the time period they specified, the predictand and predictors they chose and preprocessing options they used, and the model results in PDF or HTML format. Key words: Semi-automated prediction, Shiny, R, GLM, ANN, RF, GAM, MARS

  19. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres-Focus on Feature Selection.

    PubMed

    Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task, as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms, antlion optimization, a binary version of antlion optimization, grey wolf optimization, and social spider optimization, are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, the LASSO algorithm is also used for comparison. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find the minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression trees, and hybrid systems of fuzzy logic and evolutionary computation (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.
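    The objective being optimized, prediction error traded off against feature-set size, can be illustrated without the bio-inspired optimizers the paper uses. The sketch below deliberately swaps in a much simpler technique, greedy forward selection with a per-feature penalty, on synthetic data; it demonstrates the error-versus-simplicity trade-off, not the paper's antlion/grey-wolf/social-spider algorithms:

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic data: 30 candidate features, only features 0, 3 and 7 informative.
n, p = 200, 30
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + X[:, 7] + rng.normal(0, 0.5, n)

def cv_rmse(X, y, cols, folds=5):
    """Cross-validated RMSE of OLS on a feature subset."""
    idx = np.arange(len(y)) % folds
    errs = []
    for f in range(folds):
        tr, te = idx != f, idx == f
        A = np.column_stack([np.ones(tr.sum()), X[tr][:, cols]])
        b, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        At = np.column_stack([np.ones(te.sum()), X[te][:, cols]])
        errs.append(np.sqrt(np.mean((y[te] - At @ b) ** 2)))
    return np.mean(errs)

# Greedy forward selection: accept a feature only if it improves the
# penalized score (CV error plus a small cost per selected feature).
selected, penalty = [], 0.02
best = cv_rmse(X, y, selected)
while True:
    cand = [(cv_rmse(X, y, selected + [j]) + penalty * (len(selected) + 1), j)
            for j in range(p) if j not in selected]
    score, j = min(cand)
    if score >= best:
        break
    best, selected = score, selected + [j]
```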

  20. Dialectical Behavior Therapy Compared With Enhanced Usual Care for Adolescents With Repeated Suicidal and Self-Harming Behavior: Outcomes Over a One-Year Follow-Up.

    PubMed

    Mehlum, Lars; Ramberg, Maria; Tørmoen, Anita J; Haga, Egil; Diep, Lien M; Stanley, Barbara H; Miller, Alec L; Sund, Anne M; Grøholt, Berit

    2016-04-01

    We conducted a 1-year prospective follow-up study of posttreatment clinical outcomes in adolescents with recent and repetitive self-harm who had been randomly allocated to receive 19 weeks of either dialectical behavior therapy adapted for adolescents (DBT-A) or enhanced usual care (EUC) at community child and adolescent psychiatric outpatient clinics. Assessments of self-harm, suicidal ideation, depression, hopelessness, borderline symptoms, and global level of functioning were made at the end of the 19-week treatment period and at follow-up 1 year later. Altogether 75 of the 77 (97%) adolescents participated at both time points. Frequencies of hospitalizations, emergency department visits and other use of mental health care during the 1-year follow-up period were recorded. Change analyses were performed using mixed effects linear spline regression and mixed effect Poisson regression with robust variance. Over the 52-week follow-up period, DBT-A remained superior to EUC in reducing the frequency of self-harm. For other outcomes such as suicidal ideation, hopelessness, and depressive or borderline symptoms and for the global level of functioning, inter-group differences apparent at the 19-week assessment were no longer observed, mainly due to participants in the EUC group having significantly improved on these dimensions over the follow-up year, whereas DBT-A participants remained unchanged. A stronger long-term reduction in self-harm and a more rapid recovery in suicidal ideation, depression, and borderline symptoms suggest that DBT-A may be a favorable treatment alternative for adolescents with repetitive self-harming behavior. Treatment for Adolescents With Deliberate Self Harm; http://clinicaltrials.gov/; NCT00675129. Copyright © 2016 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  1. The impact of alcohol policies on alcohol-attributable diseases in Taiwan-A population-based study.

    PubMed

    Ying, Yung-Hsiang; Weng, Yung-Ching; Chang, Koyin

    2017-11-01

    Taiwan has some of the strictest alcohol-related driving laws in the world. However, its laws continue to be toughened to reduce the ever-increasing social cost of alcohol-related harm. This study assumes that alcohol-related driving laws show a spillover effect such that behavioral changes originally meant to apply behind the wheel come to affect drinking behavior in other contexts. The effects of alcohol driving laws and taxes on alcohol-related morbidity are assessed; incidence rates of alcohol-attributable diseases (AAD) serve as our measure of morbidity. Monthly incidence rates of alcohol-attributable diseases were calculated with data from the National Health Insurance Research Database (NHIRD) from 1996 to 2011. These rates were then submitted to intervention analyses using Seasonal Autoregressive Integrated Moving Average models (ARIMA) with multivariate adaptive regression splines (MARS). ARIMA is well-suited to time series analysis while MARS helps fit the regression model to the cubic curvature form of the irregular AAD incidence rates of hospitalization (AIRH). Alcoholic liver disease, alcohol abuse and dependence syndrome, and alcohol psychoses were the most common AADs in Taiwan. Compared to women, men had a higher incidence of AADs and their AIRH were more responsive to changes in the laws governing permissible blood alcohol. The adoption of tougher blood alcohol content (BAC) laws had significant effects on AADs, controlling for overall consumption of alcoholic beverages. Blood alcohol level laws and alcohol taxation effectively reduced alcohol-attributable morbidities with the exception of alcohol dependence and abuse, a disease to which middle-aged, lower income people are particularly susceptible. Attention should be focused on this cohort to protect this vulnerable population. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Integrating remote sensing with species distribution models; Mapping tamarisk invasions using the Software for Assisted Habitat Modeling (SAHM)

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-01-01

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled cap transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold-independent and threshold-dependent evaluation metrics with independent location data. To account for model-specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.
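
    The ensemble step described above, combining the five model outputs into maps of agreement and uncertainty, reduces to simple raster arithmetic; a minimal sketch with hypothetical 2x2 suitability maps (not SAHM's actual implementation):

```python
import numpy as np

def ensemble_map(prob_maps, threshold=0.5):
    """Combine per-model suitability maps into a mean-probability map and
    a model-agreement count (how many models predict presence per cell)."""
    stack = np.stack(prob_maps)                    # (n_models, rows, cols)
    mean_prob = stack.mean(axis=0)
    agreement = (stack >= threshold).sum(axis=0)   # 0 .. n_models per cell
    return mean_prob, agreement

# five hypothetical 2x2 model outputs with slight disagreement
maps = [np.array([[0.9, 0.2], [0.6, 0.1]]) + 0.02 * i for i in range(5)]
mean_prob, agreement = ensemble_map(maps)
```

    Cells where all five models agree (agreement of 5 or 0) can be mapped with high confidence, while intermediate counts flag areas of uncertainty.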

  3. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres—Focus on Feature Selection

    PubMed Central

    Zawbaa, Hossam M.; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, a binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, the LASSO algorithm is also used for comparison. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven. PMID:27315205
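
    The multiobjective formulation above (minimize prediction error while also minimizing the number of selected attributes) is often handled by scalarizing the two objectives into one fitness value; the sketch below does exactly that, with a random search over binary feature masks standing in for the bio-inspired optimizers (all data and the penalty weight are illustrative assumptions):

```python
import numpy as np

def fitness(X, y, mask, penalty=0.01):
    """Score a feature subset: NRMSE of a least-squares fit, plus a
    penalty proportional to the number of selected features."""
    if not mask.any():
        return np.inf
    Xs = X[:, mask]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rmse = np.sqrt(np.mean((Xs @ beta - y) ** 2))
    nrmse = rmse / (y.max() - y.min())
    return nrmse + penalty * mask.sum()

# synthetic data: only features 3 and 7 actually drive the response
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(0, 0.05, 100)

# random search over masks stands in for antlion / grey wolf / etc.
best_mask, best_fit = None, np.inf
for _ in range(500):
    mask = rng.random(20) < 0.3
    f = fitness(X, y, mask)
    if f < best_fit:
        best_mask, best_fit = mask, f
# best_mask should retain the informative features 3 and 7
```

    The penalty weight sets the trade-off between predictability and model simplicity; the paper treats the two as equally important.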

  4. A comparison between ten advanced and soft computing models for groundwater qanat potential assessment in Iran using R and GIS

    NASA Astrophysics Data System (ADS)

    Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Abbaspour, Karim

    2018-02-01

    Considering the unstable condition of water resources in Iran and many other countries in arid and semi-arid regions, groundwater studies are very important. Therefore, the aim of this study is to model groundwater potential in the Beheshtabad Watershed, Iran, using qanat locations as indicators and ten advanced and soft computing models. Qanat is a man-made underground construction which gathers groundwater from higher altitudes and transmits it to low land areas where it can be used for different purposes. For this purpose, the locations of the qanats were first mapped through extensive field surveys. These qanats were split into training (70%) and validation (30%) datasets. Then, 14 influence factors depicting the region's physical, morphological, lithological, and hydrological features were identified to model groundwater potential. Linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), flexible discriminant analysis (FDA), penalized discriminant analysis (PDA), boosted regression tree (BRT), random forest (RF), artificial neural network (ANN), K-nearest neighbor (KNN), multivariate adaptive regression splines (MARS), and support vector machine (SVM) models were applied in R scripts to produce groundwater potential maps. The performance of the developed models was evaluated with the ROC curve and the kappa index. According to the results, RF had the best performance, followed by SVM and BRT models. Our results showed that qanat locations could be used as a good indicator for groundwater potential. Furthermore, altitude, slope, plan curvature, and profile curvature were found to be the most important influence factors. On the other hand, lithology, land use, and slope aspect were the least significant factors. The methodology in the current study could be used by land use and terrestrial planners and water resource managers to reduce the costs of groundwater resource discovery.
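
    The two evaluation metrics used above are standard and easy to state precisely; as a reference, both can be computed from scratch (a sketch, not the R packages the authors used):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic;
    assumes binary labels and, for simplicity, untied scores."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def kappa(pred, truth):
    """Cohen's kappa for binary maps: observed agreement corrected for
    the agreement expected by chance."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    po = np.mean(pred == truth)                       # observed agreement
    pe = (pred.mean() * truth.mean()
          + (1 - pred.mean()) * (1 - truth.mean()))   # chance agreement
    return (po - pe) / (1 - pe)
```

    A perfectly separating model scores AUC = 1.0 and kappa = 1.0; values near 0.5 and 0.0, respectively, indicate chance-level performance.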

  5. Integrating Remote Sensing with Species Distribution Models; Mapping Tamarisk Invasions Using the Software for Assisted Habitat Modeling (SAHM).

    PubMed

    West, Amanda M; Evangelista, Paul H; Jarnevich, Catherine S; Young, Nicholas E; Stohlgren, Thomas J; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-10-11

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common, however conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled capped transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold independent and threshold dependent evaluation metrics with independent location data. To account for model specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.

  6. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, which were generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method to the existing methods in literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can fit any type of curve, from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
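
    The two-step strategy can be sketched in simplified form: here bisection stops when a straight chord fits each segment within tolerance, and the second step is an ordinary least-squares B-spline fit over the resulting knots rather than the authors' nonlinear knot optimization (the test function and tolerance are illustrative):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def coarse_knots(x, y, tol):
    """Step 1: recursively bisect the data until a straight chord
    approximates each segment within tol; split points become
    candidate interior knots."""
    knots = []
    def split(i, j):
        chord = y[i] + (y[j] - y[i]) * (x[i:j + 1] - x[i]) / (x[j] - x[i])
        if np.max(np.abs(y[i:j + 1] - chord)) > tol and j - i > 3:
            m = (i + j) // 2
            split(i, m)
            knots.append(x[m])
            split(m, j)
    split(0, len(x) - 1)
    return sorted(knots)

x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(2 * x)
t = coarse_knots(x, y, tol=0.05)
spl = LSQUnivariateSpline(x, y, t)   # step 2: least-squares cubic B-spline fit
err = np.max(np.abs(spl(x) - y))     # small residual over the whole range
```

    Because the knots land where the curvature demands them, relatively few knots suffice even for strongly curved data.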

  7. Spectroscopic ellipsometry data inversion using constrained splines and application to characterization of ZnO with various morphologies

    NASA Astrophysics Data System (ADS)

    Gilliot, Mickaël; Hadjadj, Aomar; Stchakovsky, Michel

    2017-11-01

    An original method of ellipsometric data inversion is proposed based on the use of constrained splines. The imaginary part of the dielectric function is represented by a series of splines, constructed with particular constraints on slopes at the node boundaries to avoid well-known oscillations of natural splines. The nodes are used as fit parameters. The real part is calculated using Kramers-Kronig relations. The inversion can be performed in successive inversion steps with increasing resolution. This method is used to characterize thin zinc oxide layers obtained by a sol-gel and spin-coating process, with a particular recipe yielding very thin layers presenting nano-porosity. Such layers have particular optical properties correlated with thickness, morphological and structural properties. The use of the constrained spline method is particularly efficient for such materials which may not be easily represented by standard dielectric function models.
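
    The Kramers-Kronig step mentioned above can be sketched numerically: the real part ε1(E) follows from the imaginary part ε2(E) via a principal-value integral, handled crudely here by skipping the singular grid point (the Lorentzian-like ε2 is purely illustrative; real implementations use more careful quadrature):

```python
import numpy as np

def kk_real_part(E, eps2):
    """Real part of the dielectric function from its imaginary part via a
    discrete Kramers-Kronig transform:
    eps1(E) = 1 + (2/pi) P-integral of E' eps2(E') / (E'^2 - E^2) dE'.
    The principal value is approximated by skipping the singular point."""
    dE = E[1] - E[0]
    eps1 = np.ones_like(E)
    for i, Ei in enumerate(E):
        mask = np.arange(len(E)) != i
        eps1[i] += (2 / np.pi) * dE * np.sum(
            E[mask] * eps2[mask] / (E[mask] ** 2 - Ei ** 2))
    return eps1

E = np.linspace(0.01, 10, 2000)                 # photon energy grid (eV)
eps2 = 1.0 / ((E - 3.0) ** 2 + 0.25)            # absorption peak near 3 eV
eps1 = kk_real_part(E, eps2)
```

    Far above the absorption the real part relaxes back toward 1, and around the peak the familiar anomalous-dispersion shape emerges.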

  8. [Glossary of terms used by radiologists in image processing].

    PubMed

    Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.

  9. Establishing the Learning Curve of Robotic Sacral Colpopexy in a Start-up Robotics Program.

    PubMed

    Sharma, Shefali; Calixte, Rose; Finamore, Peter S

    2016-01-01

    The objective was to determine the learning curve of the following segments of a robotic sacral colpopexy: preoperative setup, operative time, postoperative transition, and room turnover. This was a retrospective cohort study (Canadian Task Force II-2) to determine the number of cases needed to reach points of efficiency in the various segments of a robotic sacral colpopexy, conducted at a university-affiliated community hospital. Women who underwent robotic sacral colpopexy at our institution from 2009 to 2013 comprise the study population. Patient characteristics and operative reports were extracted from a patient database that has been maintained since the inception of the robotics program at Winthrop University Hospital and electronic medical records. Based on additional procedures performed, 4 groups of patients were created (A-D). Learning curves for each of the segment times of interest were created using penalized basis spline (B-spline) regression. Operative time was further analyzed using an inverse curve and sequential grouping. A total of 176 patients were eligible. Nonparametric tests detected no difference in procedure times between the 4 groups (A-D) of patients. The preoperative and postoperative points of efficiency were 108 and 118 cases, respectively. The operative points of proficiency and efficiency were 25 and 36 cases, respectively. Operative time was further analyzed using an inverse curve that revealed that after 11 cases the surgeon had reached 90% of the learning plateau. Sequential grouping revealed no significant improvement in operative time after 60 cases. Turnover time could not be assessed because of incomplete data. There is a difference in the operative time learning curve for robotic sacral colpopexy depending on the statistical analysis used. The learning curve of the operative segment showed an improvement in operative time between 25 and 36 cases when using B-spline regression. 
When the data for operative time was fit to an inverse curve, a learning rate of 11 cases was appreciated. Using sequential grouping to describe the data, no improvement in operative time was seen after 60 cases. Ultimately, we believe that efficiency in operative time is attained after 30 to 60 cases when performing robotic sacral colpopexy. The learning curve for preoperative setup and postoperative transition, which is reflective of anesthesia and nursing staff, was approximately 110 cases. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.
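
    The inverse-curve analysis referred to above fits operative time as y = a + b/n, where a is the learning plateau; a sketch with synthetic numbers (not the study's data):

```python
import numpy as np

def fit_inverse_curve(case_no, minutes):
    """Least-squares fit of y = a + b / n; `a` is the learning plateau."""
    X = np.column_stack([np.ones(len(case_no)), 1.0 / case_no])
    (a, b), *_ = np.linalg.lstsq(X, minutes, rcond=None)
    return a, b

# synthetic learning curve: 120-minute plateau, 300 excess minutes at case 1
case_no = np.arange(1, 101, dtype=float)
minutes = 120.0 + 300.0 / case_no
a, b = fit_inverse_curve(case_no, minutes)

# first case at which the excess over the plateau has fallen to 10% of its
# initial value, i.e. 90% of the learning plateau has been reached
# (the 1e-9 tolerance guards against float equality at the boundary)
n = 1
while b / n > 0.1 * b + 1e-9:
    n += 1
```

    With these synthetic numbers the 90% point falls at 10 cases, of the same order as the 11 cases the study's inverse-curve analysis reported.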

  10. Adaptive Multilinear Tensor Product Wavelets

    DOE PAGES

    Weiss, Kenneth; Lindstrom, Peter

    2015-08-12

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. In conclusion, we focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.

  11. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  12. Ship Detection and Measurement of Ship Motion by Multi-Aperture Synthetic Aperture Radar

    DTIC Science & Technology

    2014-06-01

    Excerpt from the report's list of figures (extraction fragments): reconstructed periodic components of the Doppler histories; splined harmonic-component amplitudes as a function of range; splined amplitudes of the harmonic components; ship focusing by standard...

  13. Interactive Exploration of Big Scientific Data: New Representations and Techniques.

    PubMed

    Hjelmervik, Jon M; Barrowclough, Oliver J D

    2016-01-01

    Although splines have been in popular use in CAD for more than half a century, spline research is still an active field, driven by the challenges we are facing today within isogeometric analysis and big data. Splines are likely to play a vital future role in enabling effective big data exploration techniques in 3D, 4D, and beyond.

  14. Fast reversible wavelet image compressor

    NASA Astrophysics Data System (ADS)

    Kim, HyungJun; Li, Ching-Chung

    1996-10-01

    We present a unified image compressor with spline biorthogonal wavelets and dyadic rational filter coefficients which gives high computational speed and excellent compression performance. Convolutions with these filters can be performed by using only arithmetic shifting and addition operations. Wavelet coefficients can be encoded with an arithmetic coder which also uses arithmetic shifting and addition operations. Therefore, from the beginning to the end, the whole encoding/decoding process can be done within a short period of time. The proposed method naturally extends from the lossless compression to the lossy but high compression range and can be easily adapted to the progressive reconstruction.
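
    The abstract's exact filters are not specified, but the CDF 5/3 spline biorthogonal wavelet is a standard example with dyadic rational coefficients: its integer lifting implementation uses only additions and arithmetic shifts and is exactly reversible, which is what enables lossless compression (a sketch for one decomposition level):

```python
def lifting_53_forward(x):
    """One level of the reversible CDF 5/3 wavelet transform via integer
    lifting: a predict step and an update step, each using only additions
    and arithmetic shifts."""
    assert len(x) % 2 == 0
    s, d = list(x[0::2]), list(x[1::2])
    n = len(s)
    sp = s + [s[-1]]                                              # boundary extension
    d = [d[i] - ((sp[i] + sp[i + 1]) >> 1) for i in range(n)]     # predict
    dp = [d[0]] + d
    s = [s[i] + ((dp[i] + dp[i + 1] + 2) >> 2) for i in range(n)]  # update
    return s, d

def lifting_53_inverse(s, d):
    """Invert the lifting steps in reverse order; reconstruction is exact
    because each step subtracts precisely the integer it added."""
    n = len(s)
    dp = [d[0]] + d
    s0 = [s[i] - ((dp[i] + dp[i + 1] + 2) >> 2) for i in range(n)]  # undo update
    sp = s0 + [s0[-1]]
    d0 = [d[i] + ((sp[i] + sp[i + 1]) >> 1) for i in range(n)]      # undo predict
    x = [0] * (2 * n)
    x[0::2], x[1::2] = s0, d0
    return x

samples = [23, 7, 91, 4, 56, 12, 3, 88]
lowpass, highpass = lifting_53_forward(samples)
# lifting_53_inverse(lowpass, highpass) reproduces `samples` exactly
```

    The highpass (detail) coefficients of smooth signals cluster near zero, which is what the arithmetic coder then exploits.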

  15. An Isogeometric Design-through-analysis Methodology based on Adaptive Hierarchical Refinement of NURBS, Immersed Boundary Methods, and T-spline CAD Surfaces

    DTIC Science & Technology

    2012-01-22

    Excerpt from the report (extraction fragments): citations to Computational Mechanics, 2008; 43:3-37 and to Bazilevs Y, Hsu MC, Kiendl J, Wuechner R, Bletzinger KU, "3D Simulation of Wind Turbine Rotors at Full Scale. Part II"; lid-driven-cavity boundary conditions (Ψx = 0 and Ψy = 0 on the left, right and bottom boundaries, the "no slip" requirement, with Ψx = 0 and Ψx = 1 on the driven top boundary); and a note that, under superposition of tensile membrane and bending stress, the maximum von Mises stress occurs at the sharp reentrant bend, where the loaded boundary ring bends.

  16. Optimal Number and Allocation of Data Collection Points for Linear Spline Growth Curve Modeling: A Search for Efficient Designs

    ERIC Educational Resources Information Center

    Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.

    2017-01-01

    Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…

  17. On using smoothing spline and residual correction to fuse rain gauge observations and remote sensing data

    NASA Astrophysics Data System (ADS)

    Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei

    2014-01-01

    Highlights: A partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in residual correction, and the modified Cressman weight performs better than the original. A method for estimating the error covariance matrix of the gridded field is also provided.
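
    The Cressman weight at the heart of the residual-correction step is w(r) = (R² − r²)/(R² + r²) for r < R and 0 otherwise; a minimal sketch of spreading gauge residuals onto a background field (toy coordinates and values, not the paper's modified weight):

```python
import numpy as np

def cressman_correct(points, background, gauges, obs, bg_at_gauges, R=2.0):
    """Correct a background field at `points` using gauge residuals
    (observation minus background at the gauge), spread with the classic
    Cressman weight w(r) = (R^2 - r^2) / (R^2 + r^2) for r < R, else 0."""
    residuals = obs - bg_at_gauges
    corrected = background.copy()
    for i, p in enumerate(points):
        r = np.linalg.norm(gauges - p, axis=1)
        w = np.where(r < R, (R**2 - r**2) / (R**2 + r**2), 0.0)
        if w.sum() > 0:
            corrected[i] += np.sum(w * residuals) / w.sum()
    return corrected

# one gauge reading 12.0 against a background of 10.0
points = np.array([[0.0, 0.0], [5.0, 5.0]])
background = np.array([10.0, 10.0])
gauges = np.array([[0.0, 0.0]])
obs = np.array([12.0])
corrected = cressman_correct(points, background, gauges, obs, np.array([10.0]))
```

    The grid point at the gauge is pulled fully to the observation, while points beyond the influence radius R are left unchanged.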

  18. Spline curve matching with sparse knot sets: applications to deformable shape detection and recognition

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2003-01-01

    Splines can be used to approximate noisy data with a few control points. This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use deformation energy of...

  19. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image-similarity block-match metrics and physical modeling combinations. PMID:24694135

  20. Statistical-learning strategies generate only modestly performing predictive models for urinary symptoms following external beam radiotherapy of the prostate: A comparison of conventional and machine-learning methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yahya, Noorazrul, E-mail: noorazrul.yahya@research.uwa.edu.au; Ebert, Martin A.; Bulsara, Max

    Purpose: Given the paucity of available data concerning radiotherapy-induced urinary toxicity, it is important to ensure derivation of the most robust models with superior predictive performance. This work explores multiple statistical-learning strategies for prediction of urinary symptoms following external beam radiotherapy of the prostate. Methods: The performance of logistic regression, elastic-net, support-vector machine, random forest, neural network, and multivariate adaptive regression splines (MARS) to predict urinary symptoms was analyzed using data from 754 participants accrued by TROG03.04-RADAR. Predictive features included dose-surface data, comorbidities, and medication-intake. Four symptoms were analyzed: dysuria, haematuria, incontinence, and frequency, each with three definitions (grade ≥ 1, grade ≥ 2 and longitudinal) with event rate between 2.3% and 76.1%. Repeated cross-validations producing matched models were implemented. A synthetic minority oversampling technique was utilized in endpoints with rare events. Parameter optimization was performed on the training data. Area under the receiver operating characteristic curve (AUROC) was used to compare performance using sample size to detect differences of ≥0.05 at the 95% confidence level. Results: Logistic regression, elastic-net, random forest, MARS, and support-vector machine were the highest-performing statistical-learning strategies in 3, 3, 3, 2, and 1 endpoints, respectively. Logistic regression, MARS, elastic-net, random forest, neural network, and support-vector machine were the best, or were not significantly worse than the best, in 7, 7, 5, 5, 3, and 1 endpoints. The best-performing statistical model was for dysuria grade ≥ 1 with AUROC ± standard deviation of 0.649 ± 0.074 using MARS. For longitudinal frequency and dysuria grade ≥ 1, all strategies produced AUROC>0.6 while all haematuria endpoints and longitudinal incontinence models produced AUROC<0.6. 
    Conclusions: Logistic regression and MARS were most likely to be the best-performing strategy for the prediction of urinary symptoms with elastic-net and random forest producing competitive results. The predictive power of the models was modest and endpoint-dependent. New features, including spatial dose maps, may be necessary to achieve better models.
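
    MARS builds its models from paired hinge functions max(0, x − t) and max(0, t − x); the sketch below shows how a single hinge pair captures a threshold-shaped dose-response exactly (synthetic noiseless data, for illustration only):

```python
import numpy as np

def hinge_basis(x, knot):
    """The paired hinge functions max(0, x - t) and max(0, t - x) that
    MARS uses as basis functions around a knot t."""
    return np.maximum(0.0, x - knot), np.maximum(0.0, knot - x)

# piecewise-linear response: flat below the knot at 4, slope 2 above it
x = np.linspace(0, 10, 101)
y = 2.0 * np.maximum(0.0, x - 4.0)

h_plus, h_minus = hinge_basis(x, 4.0)
X = np.column_stack([np.ones_like(x), h_plus, h_minus])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # recovers [0, 2, 0]
```

    In full MARS, the forward pass searches over candidate knots and variables, and a backward pruning pass removes terms that do not improve generalized cross-validation.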

  1. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data: the explicit variable algorithm and the parametric variable algorithm. The former is suited to data without large gradients because it requires less computation; the latter is used when the data being smoothed are double-valued or exhibit large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
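
    The parametric variable approach handles double-valued data by fitting x(t) and y(t) separately as functions of a parameter; scipy's splprep/splev illustrate the idea on a circle, which has no single-valued y(x) (a modern sketch, not the 1974 algorithm itself):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# a circle is double-valued in y(x), so an explicit fit y = f(x) fails
t = np.linspace(0, 2 * np.pi, 50)
x, y = np.cos(t), np.sin(t)

# least-squares parametric cubic spline: x(u) and y(u) fit jointly
tck, u = splprep([x, y], s=1e-4)
xs, ys = splev(np.linspace(0, 1, 200), tck)

# every reconstructed point should lie close to the unit circle
radius_err = np.max(np.abs(np.hypot(xs, ys) - 1.0))
```

    The smoothing parameter s plays the role of the least-squares trade-off: larger values give smoother curves at the cost of fit accuracy.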

  2. Development of quadrilateral spline thin plate elements using the B-net method

    NASA Astrophysics Data System (ADS)

    Chen, Juan; Li, Chong-Jun

    2013-08-01

    The quadrilateral discrete Kirchhoff thin plate bending element DKQ is based on the isoparametric element Q8, however, the accuracy of the isoparametric quadrilateral elements will drop significantly due to mesh distortions. In a previous work, we constructed an 8-node quadrilateral spline element L8 using the triangular area coordinates and the B-net method, which can be insensitive to mesh distortions and possess the second order completeness in the Cartesian coordinates. In this paper, a thin plate spline element is developed based on the spline element L8 and the refined technique. Numerical examples show that the present element indeed possesses higher accuracy than the DKQ element for distorted meshes.

  3. A Novel Model to Simulate Flexural Complements in Compliant Sensor Systems

    PubMed Central

    Tang, Hongyan; Zhang, Dan; Guo, Sheng; Qu, Haibo

    2018-01-01

    The main challenge in analyzing compliant sensor systems is how to calculate the large deformation of flexural complements. Our study proposes a new model that is called the spline pseudo-rigid-body model (spline PRBM). It combines dynamic spline and the pseudo-rigid-body model (PRBM) to simulate the flexural complements. The axial deformations of flexural complements are modeled by using dynamic spline. This makes it possible to consider the nonlinear compliance of the system using four control points. Three rigid rods connected by two revolute (R) pins with two torsion springs replace the three lines connecting the four control points. The kinematic behavior of the system is described using Lagrange equations. Both the optimization and the numerical fitting methods are used for resolving the characteristic parameters of the new model. An example of a compliant mechanism is given to verify the accuracy of the model. The spline PRBM is important in expanding the applications of the PRBM to the design and simulation of flexural force sensors. PMID:29596377

  4. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    NASA Technical Reports Server (NTRS)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of Three-Dimensional Euclidean Space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
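
    Evaluating a Bezier segment from its control points, the building block of such trajectory generation, is De Casteljau's algorithm: repeated linear interpolation of the control polygon (a generic sketch; the waypoint coordinates are made up):

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeatedly
    interpolating between adjacent control points (De Casteljau)."""
    pts = [list(p) for p in ctrl]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# a cubic segment: the curve starts at the first control point, ends at
# the last, and is tangent to the control polygon at both ends
waypoints = [[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]]
```

    Chaining segments so that adjacent control points are collinear across the joins yields the C1-continuous paths a vehicle can actually follow.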

  5. Concentration-response of short-term ozone exposure and hospital admissions for asthma in Texas.

    PubMed

    Zu, Ke; Liu, Xiaobin; Shi, Liuhua; Tao, Ge; Loftus, Christine T; Lange, Sabine; Goodman, Julie E

    2017-07-01

    Short-term exposure to ozone has been associated with asthma hospital admissions (HA) and emergency department (ED) visits, but the shape of the concentration-response (C-R) curve is unclear. We conducted a time series analysis of asthma HAs and ambient ozone concentrations in six metropolitan areas in Texas from 2001 to 2013. Using generalized linear regression models, we estimated the effect of daily 8-hour maximum ozone concentrations on asthma HAs for all ages combined, and for those aged 5-14, 15-64, and 65+ years. We fit penalized regression splines to evaluate the shape of the C-R curves. Using a log-linear model, estimated risk per 10 ppb increase in average daily 8-hour maximum ozone concentrations was highest for children (relative risk [RR]=1.047, 95% confidence interval [CI]: 1.025-1.069), lower for younger adults (RR=1.018, 95% CI: 1.005-1.032), and null for older adults (RR=1.002, 95% CI: 0.981-1.023). However, penalized spline models demonstrated significant nonlinear C-R relationships for all ages combined, children, and younger adults, indicating the existence of thresholds. We did not observe an increased risk of asthma HAs until average daily 8-hour maximum ozone concentrations exceeded approximately 40 ppb. Ozone and asthma HAs are significantly associated with each other; susceptibility to ozone is age-dependent, with children at highest risk. C-R relationships between average daily 8-hour maximum ozone concentrations and asthma HAs are significantly curvilinear for all ages combined, children, and younger adults. These nonlinear relationships, as well as the lack of relationship between average daily 8-hour maximum and peak ozone concentrations, have important implications for assessing risks to human health in regulatory settings. Copyright © 2017. Published by Elsevier Ltd.
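As a small arithmetic illustration of how the log-linear model above scales risk (using the study's children's estimate; the 25 ppb increment is an arbitrary example, not from the paper):

```python
import math

rr_per_10ppb = 1.047                  # children's estimate from the study
beta = math.log(rr_per_10ppb) / 10.0  # implied per-ppb log relative risk

# Under a log-linear model, relative risk scales multiplicatively with
# the concentration increment:
rr_25ppb = math.exp(beta * 25.0)      # implied RR for a 25 ppb increase
```

A penalized spline replaces the single constant `beta` with a smooth function of concentration, which is what allows the fitted C-R curve to stay flat below a threshold such as the ~40 ppb level reported here.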

  6. Cannabis smoking and lung cancer risk: Pooled analysis in the International Lung Cancer Consortium

    PubMed Central

    Zhang, Li Rita; Morgenstern, Hal; Greenland, Sander; Chang, Shen-Chih; Lazarus, Philip; Teare, M. Dawn; Woll, Penella J.; Orlow, Irene; Cox, Brian; Brhane, Yonathan; Liu, Geoffrey; Hung, Rayjean J.

    2014-01-01

    To investigate the association between cannabis smoking and lung cancer risk, data on 2,159 lung cancer cases and 2,985 controls were pooled from 6 case-control studies in the US, Canada, UK, and New Zealand within the International Lung Cancer Consortium. Study-specific associations between cannabis smoking and lung cancer were estimated using unconditional logistic regression adjusting for sociodemographic factors, tobacco smoking status and pack-years; odds-ratio estimates were pooled using random effects models. Subgroup analyses were done for sex, histology and tobacco smoking status. The shapes of dose-response associations were examined using restricted cubic spline regression. The overall pooled OR for habitual versus nonhabitual or never users was 0.96 (95% CI: 0.66–1.38). Compared to nonhabitual or never users, the summary OR was 0.88 (95%CI: 0.63–1.24) for individuals who smoked 1 or more joint-equivalents of cannabis per day and 0.94 (95%CI: 0.67–1.32) for those who consumed at least 10 joint-years. For adenocarcinoma cases the ORs were 1.73 (95%CI: 0.75–4.00) and 1.74 (95%CI: 0.85–3.55), respectively. However, no association was found for squamous cell carcinoma, based on small numbers. Weak associations between cannabis smoking and lung cancer were observed in never tobacco smokers. Spline modeling indicated a weak positive monotonic association between cumulative cannabis use and lung cancer, but precision was low at high exposure levels. Results from our pooled analyses provide little evidence for an increased risk of lung cancer among habitual or long-term cannabis smokers, although the possibility of a potential adverse effect of heavy consumption cannot be excluded. PMID:24947688

  7. Curvilinear associations of sleep patterns during weekdays and weekends with glycemic control in type 2 diabetes: the Hong Kong Diabetes Registry.

    PubMed

    Kong, Alice P S; Choi, Kai Chow; Zhang, Jihui; Luk, Andrea; Lam, Siu Ping; Chan, Michael H M; Ma, Ronald C W; Chan, Juliana C N; Wing, Yun Kwok

    2017-02-01

    We aimed to explore the associations of sleep patterns during weekdays and weekends with glycemic control in patients with type 2 diabetes. We examined the association between indices of glycemic control [glycated hemoglobin (HbA1c) and fasting plasma glucose (FPG)] and sleep parameters (sleep duration, bedtime, and differences of sleep duration during weekdays and weekends) from adults with type 2 diabetes recruited in a prospective cohort enrolling from hospital medical clinics. Restricted cubic spline regression was used to examine the relationships between the glycemic indices and sleep parameters. Excluding shift workers, a total of 3508 patients enrolled between July 2010 and July 2014 were included in this analysis. Mean age was 53.9 [standard deviation (SD) 8.7] years, and mean disease duration of diabetes was 8.3 (SD 7.1) years. Fifty-nine percent were men. Mean sleep duration during weekdays and the difference of sleep durations between weekdays and weekends were 7.7 (SD 1.3) hours and 0.6 (SD 1.2) hours, respectively. Mean HbA1c and FPG were 7.6% (SD 1.5) and 7.6 (SD 2.5) mmol/L, respectively. Using restricted cubic spline regressions with successive adjustments for potential confounders, the sleep duration difference between weekdays and weekends remained significantly associated with both HbA1c and FPG in a curvilinear manner. Sleep duration of about 1 h more during weekends compared to weekdays was associated with a beneficial effect on HbA1c (-0.13%, 95% confidence interval -0.24 to -0.02). In type 2 diabetes, a regular sleeping habit with modest sleep compensation during weekends has a positive impact on glycemic control.
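Restricted cubic spline regression of the kind used here adds design-matrix columns that are cubic between knots but constrained to be linear beyond the boundary knots. A minimal sketch of the standard (Harrell-type, unnormalized) basis, with arbitrary example knots rather than the study's:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: returns len(knots) - 2 columns,
    used in the design matrix alongside x itself. Each column is a
    truncated-cubic combination that is linear beyond the last knot."""
    x = np.asarray(x, dtype=float)
    k = np.sort(np.asarray(knots, dtype=float))
    t_last, t_prev = k[-1], k[-2]

    def pos3(u):
        return np.maximum(u, 0.0) ** 3

    cols = []
    for tj in k[:-2]:
        cols.append(pos3(x - tj)
                    - pos3(x - t_prev) * (t_last - tj) / (t_last - t_prev)
                    + pos3(x - t_last) * (t_prev - tj) / (t_last - t_prev))
    return np.column_stack(cols)

# Example: sleep-duration-difference-like values with three example knots.
B = rcs_basis(np.linspace(-3.0, 3.0, 61), knots=[-1.0, 0.0, 1.5])
```

The linear-tail constraint is what makes restricted cubic splines stable near the edges of the data, where observations are sparse.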

  8. SinCHet: a MATLAB toolbox for single cell heterogeneity analysis in cancer.

    PubMed

    Li, Jiannong; Smalley, Inna; Schell, Michael J; Smalley, Keiran S M; Chen, Y Ann

    2017-09-15

    Single-cell technologies allow characterization of transcriptomes and epigenomes for individual cells under different conditions and provide unprecedented resolution for researchers to investigate cellular heterogeneity in cancer. The SinCHet (Single Cell Heterogeneity) toolbox is developed in MATLAB and has a graphical user interface (GUI) for visualization and user interaction. It analyzes both continuous (e.g. mRNA expression) and binary omics data (e.g. discretized methylation data). The toolbox not only quantifies cellular heterogeneity using the Shannon Profile (SP) at different clonal resolutions but also detects heterogeneity differences using a D statistic between two populations, defined as the area under the Profile of Shannon Difference (PSD). This flexible tool provides a default clonal resolution using the change point of the PSD detected by a multivariate adaptive regression splines model; it also allows user-defined clonal resolutions for further investigation. This tool provides insights into emerging or disappearing clones between conditions, and enables the prioritization of biomarkers for follow-up experiments based on heterogeneity or marker differences between and/or within cell populations. The SinCHet software is freely available for non-profit academic use. The source code, example datasets, and the compiled package are available at http://labpages2.moffitt.org/chen/software/ . ann.chen@moffitt.org. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  9. Modeled distribution and abundance of a pelagic seabird reveal trends in relation to fisheries

    USGS Publications Warehouse

    Renner, Martin; Parrish, Julia K.; Piatt, John F.; Kuletz, Kathy J.; Edwards, Ann E.; Hunt, George L.

    2013-01-01

    The northern fulmar Fulmarus glacialis is one of the most visible and widespread seabirds in the eastern Bering Sea and Aleutian Islands. However, relatively little is known about its abundance, trends, or the factors that shape its distribution. We used a long-term pelagic dataset to model changes in fulmar at-sea distribution and abundance since the mid-1970s. We used an ensemble model, based on a weighted average of generalized additive model (GAM), multivariate adaptive regression splines (MARS), and random forest models, to estimate the pelagic distribution and density of fulmars in the waters of the Aleutian Archipelago and Bering Sea. The most important predictor variables were colony effect, sea surface temperature, distribution of fisheries, location, and primary productivity. We calculated a time series from the ratio of observed to predicted values and found that fulmar at-sea abundance declined from the 1970s to the 2000s at a rate of 0.83% (± 0.39% SE) per annum. Interpolating fulmar densities on a spatial grid through time, we found that the center of fulmar distribution in the Bering Sea has shifted north, coinciding with a northward shift in fish catches and a warming ocean. Our study shows that fisheries are an important, but not the only, factor shaping fulmar distribution and abundance trends in the eastern Bering Sea and Aleutian Islands.

  10. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be performed efficiently. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computing effort. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
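The surrogate-accelerated MCMC workflow can be sketched in miniature. Everything below is a toy stand-in: a cheap analytic function replaces the MODFLOW run, a polynomial fit replaces the BMARS surrogate, and a single synthetic "observed head" drives a random-walk Metropolis sampler that only ever evaluates the surrogate:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(k):
    """Toy stand-in for one slow high-fidelity run (e.g. a MODFLOW call)."""
    return np.sqrt(k) * np.sin(k)

# Train a cheap surrogate on a few representative runs
# (a polynomial here; the paper uses BMARS).
k_train = np.linspace(0.1, 5.0, 20)
coef = np.polyfit(k_train, expensive_model(k_train), deg=6)

def surrogate(k):
    return np.polyval(coef, k)

observed = expensive_model(2.5)           # one synthetic "observed head"

def log_posterior(k, sigma=0.05):
    if not (0.1 <= k <= 5.0):             # uniform prior bounds
        return -np.inf
    return -0.5 * ((observed - surrogate(k)) / sigma) ** 2

# Random-walk Metropolis: every posterior evaluation hits only the surrogate,
# so thousands of samples cost almost nothing.
chain, k, lp = [], 1.0, log_posterior(1.0)
for _ in range(5000):
    proposal = k + 0.2 * rng.standard_normal()
    lp_new = log_posterior(proposal)
    if np.log(rng.random()) < lp_new - lp:
        k, lp = proposal, lp_new
    chain.append(k)
chain = np.array(chain)
```

The accuracy of the inferred posterior is limited by the surrogate's fidelity, which is why the paper validates the BMARS emulator against representative high-fidelity runs before sampling.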

  11. Imaging Freeform Optical Systems Designed with NURBS Surfaces

    DTIC Science & Technology

    2015-12-01

    reflective, anastigmat 1 Introduction The imaging freeform optical systems described here are designed using non-uniform rational basis-spline (NURBS...from piecewise splines. Figure 1 shows a third-degree NURBS surface which is formed from cubic basis splines. The surface is defined by the set of...with mathematical details covered by Piegl and Tiller [7]. Compare this with Gaussian basis functions [8] where it is challenging to provide smooth

  12. Multivariate spline methods in surface fitting

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator); Schumaker, L. L.

    1984-01-01

    The use of spline functions in the development of classification algorithms is examined. In particular, a method is formulated for producing spline approximations to bivariate density functions, where the density function is described by a histogram of measurements. The resulting approximations are then incorporated into a Bayesian classification procedure for which the Bayes decision regions and the probability of misclassification are readily computed. Some preliminary numerical results are presented to illustrate the method.
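The classification step can be sketched as follows. For brevity, a raw histogram stands in for the spline-smoothed density approximation, and the two classes are synthetic Gaussians; the smoothing step from the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic measurements for two hypothetical classes.
class_a = rng.normal(0.0, 1.0, 5000)
class_b = rng.normal(3.0, 1.0, 5000)

# Histogram estimates of the two class-conditional densities.
edges = np.linspace(-4.0, 7.0, 45)
dens_a, _ = np.histogram(class_a, bins=edges, density=True)
dens_b, _ = np.histogram(class_b, bins=edges, density=True)

def classify(x):
    """Bayes rule with equal priors: pick the class whose estimated
    density is higher in the bin containing x."""
    i = int(np.clip(np.searchsorted(edges, x) - 1, 0, len(dens_a) - 1))
    return "A" if dens_a[i] >= dens_b[i] else "B"
```

Replacing the piecewise-constant histogram with a smooth spline approximation, as the paper does, gives continuous density estimates and hence smoother Bayes decision boundaries.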

  13. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yi; Keller, Jonathan; Errichello, Robert

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  14. Modeling respiratory mechanics in the MCAT and spline-based MCAT phantoms

    NASA Astrophysics Data System (ADS)

    Segars, W. P.; Lalush, D. S.; Tsui, B. M. W.

    2001-02-01

    Respiratory motion can cause artifacts in myocardial SPECT and computed tomography (CT). The authors incorporate models of respiratory mechanics into the current 4D MCAT and into the next generation spline-based MCAT phantoms. In order to simulate respiratory motion in the current MCAT phantom, the geometric solids for the diaphragm, heart, ribs, and lungs were altered through manipulation of parameters defining them. Affine transformations were applied to the control points defining the same respiratory structures in the spline-based MCAT phantom to simulate respiratory motion. The Non-Uniform Rational B-Spline (NURBS) surfaces for the lungs and body outline were constructed in such a way as to be linked to the surrounding ribs. Expansion and contraction of the thoracic cage then coincided with expansion and contraction of the lungs and body. The changes both phantoms underwent were spline-interpolated over time to create time continuous 4D respiratory models. The authors then used the geometry-based and spline-based MCAT phantoms in an initial simulation study of the effects of respiratory motion on myocardial SPECT. The simulated reconstructed images demonstrated distinct artifacts in the inferior region of the myocardium. It is concluded that both respiratory models can be effective tools for researching effects of respiratory motion.

  15. An isogeometric boundary element method for electromagnetic scattering with compatible B-spline discretizations

    NASA Astrophysics Data System (ADS)

    Simpson, R. N.; Liu, Z.; Vázquez, R.; Evans, J. A.

    2018-06-01

    We outline the construction of compatible B-splines on 3D surfaces that satisfy the continuity requirements for electromagnetic scattering analysis with the boundary element method (method of moments). Our approach makes use of Non-Uniform Rational B-splines to represent model geometry and compatible B-splines to approximate the surface current, and adopts the isogeometric concept in which the basis for analysis is taken directly from CAD (geometry) data. The approach allows for high-order approximations and crucially provides a direct link with CAD data structures that allows for efficient design workflows. After outlining the construction of div- and curl-conforming B-splines defined over 3D surfaces we describe their use with the electric and magnetic field integral equations using a Galerkin formulation. We use Bézier extraction to accelerate the computation of NURBS and B-spline terms and employ H-matrices to provide accelerated computations and memory reduction for the dense matrices that result from the boundary integral discretization. The method is verified using the well known Mie scattering problem posed over a perfectly electrically conducting sphere and the classic NASA almond problem. Finally, we demonstrate the ability of the approach to handle models with complex geometry directly from CAD without mesh generation.
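The B-spline machinery underlying such discretizations starts from the Cox-de Boor recursion for the basis functions; a minimal sketch, evaluated on a clamped cubic knot vector (the knot values are illustrative):

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: the i-th B-spline basis function of
    degree p, evaluated at parameter t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    if knots[i + p] > knots[i]:
        value += ((t - knots[i]) / (knots[i + p] - knots[i])
                  * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] > knots[i + 1]:
        value += ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                  * bspline_basis(i + 1, p - 1, t, knots))
    return value

# Clamped cubic knot vector: 6 basis functions spanning [0, 3].
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
degree = 3
n_basis = len(knots) - degree - 1
values = [bspline_basis(i, degree, 1.5, knots) for i in range(n_basis)]
```

The div- and curl-conforming spaces in the paper are built by combining such scalar bases of differing degrees in the two surface parameter directions, so that the required inter-element continuity of the surface current is satisfied by construction.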

  16. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts.

    PubMed

    Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2016-10-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. © The Author(s) 2013.

  17. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts

    PubMed Central

    Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S.; Barros, Aluísio JD; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2013-01-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. PMID:24108269

  18. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

    This paper presents a bicubic uniform B-spline wavefront fitting technique for deriving the analytical expression of the object wavefront used in computer-generated holograms (CGHs). In many cases, to reduce the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, the analytical expression for the object wavefront must be fitted. Zernike polynomials are competent for fitting wavefronts of centrosymmetric optical systems, but not of axisymmetrical optical systems. Although a high-degree polynomial fitting method achieves higher fitting precision at all fitting nodes, its greatest shortcoming is that any departure from the fitting nodes results in large fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time in coding the computer-generated hologram and solving the basic equation. Based on the cubic uniform B-spline basis functions and the characteristic mesh of the bicubic uniform B-spline wavefront, the wavefront is expressed as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront are fitted using bicubic uniform B-splines as well as high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is a more competitive method for fitting the analytical expression of the object wavefront used in off-axis CGHs, owing to its higher fitting precision and C2 continuity.
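The "product of a series of matrices" form can be sketched for a single bicubic uniform B-spline patch; the control grid below is illustrative, not a real wavefront:

```python
import numpy as np

# Uniform cubic B-spline basis matrix: a curve point is the product
# [u^3 u^2 u 1] * M * [P0 P1 P2 P3]^T.
M = (1.0 / 6.0) * np.array([[-1.0,  3.0, -3.0, 1.0],
                            [ 3.0, -6.0,  3.0, 0.0],
                            [-3.0,  0.0,  3.0, 0.0],
                            [ 1.0,  4.0,  1.0, 0.0]])

def bicubic_patch(G, u, v):
    """Evaluate one bicubic uniform B-spline patch at (u, v) in [0, 1]^2.
    G is a 4x4 grid of control values (e.g. wavefront height samples)."""
    U = np.array([u ** 3, u ** 2, u, 1.0])
    V = np.array([v ** 3, v ** 2, v, 1.0])
    return U @ M @ G @ M.T @ V

# Illustrative 4x4 control grid varying linearly in one direction.
G = np.outer(np.arange(4.0), np.ones(4))
center = bicubic_patch(G, 0.5, 0.5)
```

Because each patch only depends on a local 4x4 window of control points, moving one control point perturbs the fitted wavefront only locally, which is exactly the property that avoids the pulsation phenomenon of global high-degree polynomial fits.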

  19. Effects of early activator treatment in patients with class II malocclusion evaluated by thin-plate spline analysis.

    PubMed

    Lux, C J; Rübel, J; Starke, J; Conradt, C; Stellzig, P A; Komposch, P G

    2001-04-01

    The aim of the present longitudinal cephalometric study was to evaluate the dentofacial shape changes induced by activator treatment between 9.5 and 11.5 years in male Class II patients. For a rigorous morphometric analysis, a thin-plate spline analysis was performed to assess and visualize dental and skeletal craniofacial changes. Twenty male patients with a skeletal Class II malrelationship and increased overjet who had been treated at the University of Heidelberg with a modified Andresen-Häupl-type activator were compared with a control group of 15 untreated male subjects of the Belfast Growth Study. The shape changes for each group were visualized on thin-plate splines with one spline comprising all 13 landmarks to show all the craniofacial shape changes, including skeletal and dento-alveolar reactions, and a second spline based on 7 landmarks to visualize only the skeletal changes. In the activator group, the grid deformation of the total spline pointed to a strong activator-induced reduction of the overjet that was caused both by a tipping of the incisors and by a moderation of sagittal discrepancies, particularly a slight advancement of the mandible. In contrast with this, in the control group, only slight localized shape changes could be detected. Both in the 7- and 13-landmark configurations, the shape changes between the groups differed significantly at P < .001. In the present study, the morphometric approach of thin-plate spline analysis turned out to be a useful morphometric supplement to conventional cephalometrics because the complex patterns of shape change could be suggestively visualized.

  20. Enhancement of surface definition and gridding in the EAGLE code

    NASA Technical Reports Server (NTRS)

    Thompson, Joe F.

    1991-01-01

    Algorithms for smoothing of curves and surfaces for the EAGLE grid generation program are presented. The method uses an existing automated technique which detects undesirable geometric characteristics by using a local fairness criterion. The geometry entity is then smoothed by repeated removal and insertion of spline knots in the vicinity of the geometric irregularity. The smoothing algorithm is formulated for use with curves in Beta spline form and tensor product B-spline surfaces.

  1. Isogeometric Analysis of Boundary Integral Equations

    DTIC Science & Technology

    2015-04-21

    methods, IgA relies on Non-Uniform Rational B-splines (NURBS) [43, 46], T-splines [55, 53] or subdivision surfaces [21, 48, 51] rather than piecewise...structural dynamics [25, 26], plates and shells [15, 16, 27, 28, 37, 22, 23], phase-field models [17, 32, 33], and shape optimization [40, 41, 45, 59...polynomials for approximating the geometry and field variables. Thus, by replacing piecewise polynomials with NURBS or T-splines, one can develop

  2. Validating the Kinematic Wave Approach for Rapid Soil Erosion Assessment and Improved BMP Site Selection to Enhance Training Land Sustainability

    DTIC Science & Technology

    2014-02-01

    installation based on a Euclidean distance allocation and assigned that installation's threshold values. The second approach used a thin-plate spline ...installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the...the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND

  3. A cubic spline approximation for problems in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
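The key property exploited above, that cubic splines retain high accuracy even on a nonuniform mesh, can be illustrated with a small derivative-recovery check. SciPy's `CubicSpline` is used here as a stand-in for the paper's spline procedure, on a randomly spaced mesh:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(4)

# Deliberately nonuniform mesh on [0, pi].
x = np.sort(np.concatenate(([0.0, np.pi], rng.uniform(0.0, np.pi, 60))))
spline = CubicSpline(x, np.sin(x))

# Evaluate the spline's first derivative away from the interval ends and
# compare with the exact derivative cos(x).
xq = np.linspace(0.2, np.pi - 0.2, 25)
deriv_error = float(np.max(np.abs(spline(xq, 1) - np.cos(xq))))
```

This accurate derivative recovery on irregular grids is what makes spline collocation attractive for derivative boundary conditions and stretched meshes in the flow problems the paper considers.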

  4. Chronological Age, Cognitions, and Practices in European American Mothers: A Multivariate Study of Parenting

    PubMed Central

    Bornstein, Marc H.; Putnick, Diane L.

    2018-01-01

    We studied multiple parenting cognitions and practices in European American mothers (N = 262) who ranged in age from 15 to 47 years. All were first-time parents of 20-month-old children. Some age effects were zero; others were linear or nonlinear. Nonlinear age effects determined by spline regression showed significant associations up to a "knot" age (~30 years), with little or no association afterward. For parenting cognitions and practices that are age-sensitive, a two-phase model of parental development is proposed. These findings stress the importance of considering maternal chronological age as a factor in developmental study. PMID:17605519
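A one-knot linear spline regression of the kind implied by the ~30-year knot can be sketched with simulated data; the slopes and noise level below are invented for illustration, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
age = rng.uniform(15.0, 47.0, n)
knot = 30.0

# Simulated outcome: increases with age up to the knot, flat afterwards.
y = 0.5 * np.minimum(age, knot) + 0.05 * rng.standard_normal(n)

# One-knot linear spline design matrix: intercept, age, and (age - knot)+.
X = np.column_stack([np.ones(n), age, np.maximum(age - knot, 0.0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

slope_before = beta[1]            # slope for ages below the knot
slope_after = beta[1] + beta[2]   # slope for ages above the knot
```

The coefficient on the hinge term `(age - knot)+` directly measures the change in slope at the knot, which is what distinguishes the two phases in a two-phase developmental model.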

  5. Acute myeloid and chronic lymphoid leukaemias and exposure to low-level benzene among petroleum workers

    PubMed Central

    Rushton, L; Schnatter, A R; Tang, G; Glass, D C

    2014-01-01

    Background: High benzene exposure causes acute myeloid leukaemia (AML). Three petroleum case–control studies identified 60 cases (241 matched controls) for AML and 80 cases (345 matched controls) for chronic lymphoid leukaemia (CLL). Methods: Cases were classified and scored regarding uncertainty by two haematologists using available diagnostic information. Blinded quantitative benzene exposure assessment used work histories and exposure measurements adjusted for era-specific circumstances. Statistical analyses included conditional logistic regression and penalised smoothing splines. Results: Benzene exposures were much lower than in previous studies. Categorical analyses showed increased ORs for AML with several exposure metrics, although patterns were unclear; neither continuous exposure metrics nor spline analyses gave increased risks. ORs were highest in terminal workers, particularly for Tanker Drivers. No relationship was found between benzene exposure and risk of CLL, although the Australian study showed increased risks in refinery workers. Conclusion: Overall, this study does not persuasively demonstrate an association between benzene and AML. A strong relationship with myelodysplastic syndrome (MDS) (cases potentially reported previously as AML) observed at our study's low benzene levels suggests that MDS may be the more relevant health risk for lower exposures. Higher CLL risks in refinery workers may be due to more diverse exposures than benzene alone. PMID:24357793

  6. A varying-coefficient method for analyzing longitudinal clinical trials data with nonignorable dropout

    PubMed Central

    Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane

    2011-01-01

    Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223

  7. Near-Infrared Collisional Radiative Model for Xe Plasma Electrostatic Thrusters: The Role of Metastable Atoms

    DTIC Science & Technology

    2009-08-01

the measurements of Jung et al [3], 'BSR' to the Breit-Pauli B-spline R-matrix method, and 'RDW' to the relativistic distorted wave method. low...excitation cross sections using both relativistic distorted wave and semi-relativistic Breit-Pauli B-spline R-matrix methods is presented. The model...population and line intensity enhancement. 15. SUBJECT TERMS Metastable xenon Electrostatic thruster Relativistic Breit-Pauli B-spline R-matrix

  8. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanner, the analysis strategies in engineering geodesy change from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error-procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of the statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria this method doesn't use the number of parameters as complexity measure of the approximating functions but their Vapnik-Chervonenkis-dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
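The information-criterion approach the paper investigates can be illustrated generically with SciPy's least-squares B-spline fitting: fit curves with an increasing number of control points, score each fit with AIC and BIC, and take the minimiser. This is a sketch (uniform knots, simulated data), not the authors' procedure, and it uses the number of control points as the parameter count:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(4 * np.pi * x) + rng.normal(0, 0.05, 200)

def ic_scores(n_ctrl, k=3):
    """Fit a cubic least-squares B-spline with n_ctrl control points on
    uniform interior knots; return (AIC, BIC)."""
    n_interior = n_ctrl - k - 1
    interior = np.linspace(0, 1, n_interior + 2)[1:-1]
    t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]
    spl = make_lsq_spline(x, y, t, k)
    rss = float(np.sum((y - spl(x)) ** 2))
    n = len(x)
    aic = n * np.log(rss / n) + 2 * n_ctrl
    bic = n * np.log(rss / n) + np.log(n) * n_ctrl
    return aic, bic

candidates = range(4, 17)
best_aic = min(candidates, key=lambda m: ic_scores(m)[0])
best_bic = min(candidates, key=lambda m: ic_scores(m)[1])
print(best_aic, best_bic)
```

BIC penalises model size more heavily than AIC, so it tends to select the same number of control points or fewer; the paper's structural-risk-minimisation alternative replaces the parameter count in this penalty with the Vapnik-Chervonenkis dimension.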

  9. RATIONAL SPLINE SUBROUTINES

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1994-01-01

Scientific data often contains random errors that make plotting and curve-fitting difficult. The Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to a flexible, smooth representation of experimental data. The user sets the conditions for each consecutive pair of knots (knots are user-defined divisions in the data set): to apply no tension; to apply fixed tension; or to determine tension with a tension adjustment algorithm. The user also selects the number of knots, the knot abscissas, and the allowed maximum deviations from line segments. The selection of these quantities depends on the actual data and on the requirements of a particular application. This program differs from the usual spline under tension in that it allows the user to specify different tension values between each adjacent pair of knots rather than a constant tension over the entire data range. The subroutines use an automatic adjustment scheme that varies the tension parameter for each interval until the maximum deviation of the spline from the line joining the knots is less than or equal to a user-specified amount. This procedure frees the user from the drudgery of adjusting individual tension parameters while still giving control over the local behavior of the spline. The Rational Spline program was written completely in FORTRAN for implementation on a CYBER 850 operating under NOS. It has a central memory requirement of approximately 1500 words. The program was released in 1988.

  10. [An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].

    PubMed

    Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu

    2016-04-01

The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained and treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline curve fitting is then applied to the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the curve fitted by the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
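The core mechanism, a cubic spline through per-beat fiducial points used as a baseline estimate, can be sketched as follows. This is a simplified illustration with synthetic signals: the fiducial points are simply taken once per second rather than from derivative extrema, and no high-pass amplitude correction is applied, so it shows the baseline-subtraction step only, not the paper's improved algorithm:

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 250                                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
drift = 0.3 * np.sin(2 * np.pi * 0.25 * t)    # simulated baseline wander
ecg = 0.05 * np.sin(2 * np.pi * 10 * t)       # crude stand-in for ECG content
signal = ecg + drift

# one fiducial point per simulated beat (here simply once per second, plus the
# final sample so the spline never extrapolates)
idx = np.r_[np.arange(0, len(t), fs), len(t) - 1]
baseline = CubicSpline(t[idx], signal[idx])(t)
corrected = signal - baseline

corr = float(np.corrcoef(baseline, drift)[0, 1])
print(round(corr, 3))                         # baseline estimate tracks the drift
```

The quality of the estimate hinges entirely on where the fiducial points fall, which is exactly the sensitivity the paper's fiducial-point selection and amplitude correction are designed to address.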

  11. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    DOE PAGES

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-25

Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  12. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krogel, Jaron T.; Reboredo, Fernando A.

Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  13. Thin-plate spline quadrature of geodetic integrals

    NASA Technical Reports Server (NTRS)

    Vangysen, Herman

    1989-01-01

Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. When the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.

  14. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  15. Regression approaches in the test-negative study design for assessment of influenza vaccine effectiveness.

    PubMed

    Bond, H S; Sullivan, S G; Cowling, B J

    2016-06-01

    Influenza vaccination is the most practical means available for preventing influenza virus infection and is widely used in many countries. Because vaccine components and circulating strains frequently change, it is important to continually monitor vaccine effectiveness (VE). The test-negative design is frequently used to estimate VE. In this design, patients meeting the same clinical case definition are recruited and tested for influenza; those who test positive are the cases and those who test negative form the comparison group. When determining VE in these studies, the typical approach has been to use logistic regression, adjusting for potential confounders. Because vaccine coverage and influenza incidence change throughout the season, time is included among these confounders. While most studies use unconditional logistic regression, adjusting for time, an alternative approach is to use conditional logistic regression, matching on time. Here, we used simulation data to examine the potential for both regression approaches to permit accurate and robust estimates of VE. In situations where vaccine coverage changed during the influenza season, the conditional model and unconditional models adjusting for categorical week and using a spline function for week provided more accurate estimates. We illustrated the two approaches on data from a test-negative study of influenza VE against hospitalization in children in Hong Kong which resulted in the conditional logistic regression model providing the best fit to the data.

  16. Genetic analyses of stillbirth in relation to litter size using random regression models.

    PubMed

    Chen, C Y; Misztal, I; Tsuruta, S; Herring, W O; Holl, J; Culbertson, M

    2010-12-01

Estimates of genetic parameters for number of stillborns (NSB) in relation to litter size (LS) were obtained with random regression models (RRM). Data were collected from 4 purebred Duroc nucleus farms between 2004 and 2008. Two data sets with 6,575 litters for the first parity (P1) and 6,259 litters for the second to fifth parity (P2-5), with a total of 8,217 and 5,066 animals in the pedigree, were analyzed separately. Number of stillborns was studied as a trait at the sow level. Fixed effects were contemporary groups (farm-year-season) and fixed cubic regression coefficients on LS with Legendre polynomials. Models for P2-5 included the fixed effect of parity. Random effects were additive genetic effects for both data sets, with permanent environmental effects included for P2-5. Random effects modeled with Legendre polynomials (RRM-L), linear splines (RRM-S), and degree 0 B-splines (RRM-BS) with regressions on LS were used. For P1, the order of polynomial, the number of knots, and the number of intervals used for the respective models were quadratic, 3, and 3, respectively. For P2-5, the same parameters were linear, 2, and 2, respectively. Heterogeneous residual variances were considered in the models. For P1, estimates of heritability were 12 to 15%, 5 to 6%, and 6 to 7% in LS 5, 9, and 13, respectively. For P2-5, estimates were 15 to 17%, 4 to 5%, and 4 to 6% in LS 6, 9, and 12, respectively. For P1, average estimates of genetic correlations between LS 5 to 9, 5 to 13, and 9 to 13 were 0.53, -0.29, and 0.65, respectively. For P2-5, the same estimates, averaged over RRM-L and RRM-S, were 0.75, -0.21, and 0.50, respectively. For RRM-BS with 2 intervals, the correlation was 0.66 between LS 5 to 7 and 8 to 13. Parameters obtained by the 3 RRM revealed the nonlinear relationship between the additive genetic effect of NSB and the environmental deviation of LS. The negative correlations between the 2 extreme LS may indicate different genetic bases for the incidence of stillbirth.
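The Legendre-polynomial covariables used for the cubic regression on litter size can be sketched with NumPy. This is a generic illustration, not the study's code: litter size is rescaled to [-1, 1] and the Vandermonde-style matrix of Legendre polynomials P0..P3 is evaluated (note that animal-breeding software often uses *normalized* Legendre polynomials, whereas `legvander` gives the unnormalized ones):

```python
import numpy as np

def legendre_covariates(litter_size, lo, hi, degree=3):
    """Columns P0(x)..P_degree(x) of the (unnormalized) Legendre basis,
    with litter size linearly rescaled from [lo, hi] to [-1, 1]."""
    x = 2 * (np.asarray(litter_size, float) - lo) / (hi - lo) - 1
    return np.polynomial.legendre.legvander(x, degree)

# covariables at the extreme and middle litter sizes discussed for parity 1
Z = legendre_covariates([5, 9, 13], lo=5, hi=13)
print(Z.round(3))
```

Each row of `Z` enters the model as the set of regressors multiplying the fixed (or random) regression coefficients, which is what lets heritabilities and genetic correlations differ across litter sizes.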

  17. Intelligent manipulation technique for multi-branch robotic systems

    NASA Technical Reports Server (NTRS)

    Chen, Alexander Y. K.; Chen, Eugene Y. S.

    1990-01-01

    New analytical development in kinematics planning is reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and the adaptive logic annealing process. Also, a novel framework of robot learning mechanism is introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) integrates fuzzy logic in commands, control, searching, and reasoning, the embedded expert system for nominal robotics knowledge implementation, and the self organized neural networks for the dynamic knowledge evolutionary process. Progress on the mechanical construction of SRA Advanced Robotic System (SRAARS) and the real time robot vision system is also reported. A decision was made to incorporate the Local Area Network (LAN) technology in the overall communication system.

  18. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H(sup 2)(sub 0)(I) where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT transform will map discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT transform and apply the collocation method to solve linear and nonlinear PDE's.

  19. Quadratures with multiple nodes, power orthogonality, and moment-preserving spline approximation

    NASA Astrophysics Data System (ADS)

    Milovanovic, Gradimir V.

    2001-01-01

    Quadrature formulas with multiple nodes, power orthogonality, and some applications of such quadratures to moment-preserving approximation by defective splines are considered. An account on power orthogonality (s- and [sigma]-orthogonal polynomials) and generalized Gaussian quadratures with multiple nodes, including stable algorithms for numerical construction of the corresponding polynomials and Cotes numbers, are given. In particular, the important case of Chebyshev weight is analyzed. Finally, some applications in moment-preserving approximation of functions by defective splines are discussed.

  20. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    NASA Technical Reports Server (NTRS)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  1. Spatial variability of soil available phosphorous and potassium at three different soils located in Pannonian Croatia

    NASA Astrophysics Data System (ADS)

    Bogunović, Igor; Pereira, Paulo; Đurđević, Boris

    2017-04-01

Information on the spatial distribution of soil nutrients in agroecosystems is critical for improving productivity and reducing environmental pressures in intensively farmed soils. In this context, spatial prediction of soil properties should be accurate. In this study we analyse 704 measurements of soil available phosphorus (AP) and potassium (AK); the data derive from soil samples collected across three arable fields in the Baranja region (Croatia) corresponding to different soil types: Cambisols (169 samples), Chernozems (131 samples) and Gleysols (404 samples). The samples were collected on a regular sampling grid (225 x 225 m spacing). Several deterministic interpolation techniques (Inverse Distance Weighting (IDW) with powers of 1, 2 and 3; the Radial Basis Functions (RBF) Inverse Multiquadratic (IMT), Multiquadratic (MTQ), Completely Regularized Spline (CRS), Spline with Tension (SPT) and Thin Plate Spline (TPS); and Local Polynomial (LP) with powers of 1 and 2) and two geostatistical techniques (Ordinary Kriging (OK) and Simple Kriging (SK)) were tested in order to identify the most accurate spatial variability maps, using the lowest RMSE under cross-validation as the criterion. Soil parameters varied considerably throughout the studied fields, and their coefficients of variation ranged from 31.4% to 37.7% for soil AP and from 19.3% to 27.1% for AK. The experimental variograms indicate moderate spatial dependence for AP and strong spatial dependence for AK at all three locations. The best spatial predictor for AP at the Chernozem field was Simple Kriging (RMSE=61.711), and for AK the Inverse Multiquadratic (RMSE=44.689); the least accurate techniques there were Thin Plate Spline (AP) and Inverse Distance Weighting with a power of 1 (AK). The best interpolator for AK at the Cambisol field was the Local Polynomial with a power of 2 (RMSE=33.943), while the least accurate was Thin Plate Spline (RMSE=39.572). Radial basis function models (Spline with Tension for AP at the Gleysol and Cambisol fields, and Completely Regularized Spline for AK at the Gleysol field) were the best predictors in the remaining cases, while Thin Plate Spline models were the least accurate in all three cases.
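The ranking procedure, comparing interpolators by their cross-validation RMSE, can be sketched for IDW with a small numpy example. Everything here is illustrative (synthetic sample locations and nutrient values, leave-one-out rather than any specific cross-validation scheme used in the study):

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1000, (80, 2))                  # sample locations (m)
vals = 50 + 0.05 * pts[:, 0] + rng.normal(0, 2, 80)  # synthetic AP-like values

def idw(train_xy, train_v, query_xy, power):
    """Inverse Distance Weighting prediction at one query location."""
    d = np.linalg.norm(train_xy - query_xy, axis=1)
    w = 1 / np.maximum(d, 1e-9) ** power
    return float(w @ train_v / w.sum())

def loo_rmse(power):
    """Leave-one-out cross-validation RMSE, the criterion used to rank methods."""
    errs = [idw(np.delete(pts, i, 0), np.delete(vals, i), pts[i], power) - vals[i]
            for i in range(len(pts))]
    return float(np.sqrt(np.mean(np.square(errs))))

scores = {p: loo_rmse(p) for p in (1, 2, 3)}
print({p: round(r, 2) for p, r in scores.items()})
```

The same loop, with `idw` swapped for an RBF, local polynomial, or kriging predictor, reproduces the study's comparison logic: the method with the lowest cross-validation RMSE is declared the best spatial predictor.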

  2. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. This continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods; however, few apply these methods to LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Simulated background correction experiments indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods achieve larger SBR values than that obtained before correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still achieves a large SBR value, whereas polynomial fitting and the model-free method yield lower SBR values. All of the background correction methods improve the quantitative results for Cu relative to those obtained before background correction (the linear correlation coefficient value before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.

  3. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable-step-size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. Simulation results demonstrated the performance advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed-step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicated that the proposed TOF determination method is more robust even under low SNR conditions, and that ultrasonic thickness measurement accuracy can be significantly improved.
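The role of cubic spline fitting in relaxing the finite-sampling-interval limit can be sketched with a simple cross-correlation example. This is an illustrative stand-in (synthetic ultrasonic burst, plain cross-correlation instead of the paper's adaptive algorithm): the coarse TOF from the correlation peak is quantised to the sampling grid, and a spline through the samples around the peak recovers a sub-sample estimate:

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 10e6                                    # 10 MHz sampling
t = np.arange(0, 200e-6, 1 / fs)
true_tof = 35.35e-6                          # deliberately not a multiple of 1/fs

def burst(t0):
    """Gaussian-windowed 1 MHz tone burst arriving at time t0."""
    return np.exp(-((t - t0) / 2e-6) ** 2) * np.sin(2 * np.pi * 1e6 * (t - t0))

tx, rx = burst(20e-6), burst(20e-6 + true_tof)

# coarse TOF from the cross-correlation peak, limited to the sampling grid
xc = np.correlate(rx, tx, mode="full")
lags = (np.arange(len(xc)) - (len(tx) - 1)) / fs
i = int(np.argmax(xc))

# cubic spline through samples around the peak refines the estimate
s = CubicSpline(lags[i - 3:i + 4], xc[i - 3:i + 4])
fine = np.linspace(lags[i - 3], lags[i + 3], 2001)
tof = float(fine[np.argmax(s(fine))])
print(tof)
```

The grid-limited estimate can be off by up to half a sampling interval (50 ns here), while the spline-refined estimate lands much closer to the true delay; thickness then follows from `tof` times the known sound velocity.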

  4. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739

  5. Calibrating the Decline Rate - Peak Luminosity Relation for Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Rust, Bert W.; Pruzhinskaya, Maria V.; Thijsse, Barend J.

    2015-08-01

The correlation between peak luminosity and rate of decline in luminosity for Type I supernovae was first studied by B. W. Rust [Ph.D. thesis, Univ. of Illinois (1974) ORNL-4953] and Yu. P. Pskovskii [Sov. Astron., 21 (1977) 675] in the 1970s. Their work was little-noted until Phillips rediscovered the correlation in 1993 [ApJ, 413 (1993) L105] and attempted to derive a calibration relation using a difference quotient approximation Δm15(B) to the decline rate after peak luminosity Mmax(B). Numerical differentiation of data containing measuring errors is a notoriously unstable calculation, but Δm15(B) remains the parameter of choice for most calibration methods developed since 1993. To succeed, it should be computed from good functional fits to the lightcurves, but most workers never exhibit their fits. In the few instances where they have, the fits are not very good. Some of the 9 supernovae in the Phillips study required extinction corrections in their estimates of Mmax(B), and so were not appropriate for establishing a calibration relation. Although the relative uncertainties in his Δm15(B) estimates were comparable to those in his Mmax(B) estimates, he nevertheless used simple linear regression of the latter on the former, rather than major-axis regression (total least squares), which would have been more appropriate. Here we determine some new calibration relations using a sample of nearby "pure" supernovae suggested by M. V. Pruzhinskaya [Astron. Lett., 37 (2011) 663]. Their parent galaxies are all in the NED collection, with good distance estimates obtained by several different methods. We fit each lightcurve with an optimal regression spline obtained by B. J. Thijsse's spline2 [Comp. in Sci. & Eng., 10 (2008) 49]. The fits, which explain more than 99% of the variance in each case, are better than anything heretofore obtained by stretching "template" lightcurves or fitting combinations of standard lightcurves. 
We use the fits to compute estimates of Δm15(B) and some other calibration parameters suggested by Pskovskii [Sov. Astron., 28 (1984) 858] and compare their utility for cosmological testing.

  6. Countervailing effects of income, air pollution, smoking, and obesity on aging and life expectancy: population-based study of U.S. Counties.

    PubMed

    Allen, Ryan T; Hales, Nicholas M; Baccarelli, Andrea; Jerrett, Michael; Ezzati, Majid; Dockery, Douglas W; Pope, C Arden

    2016-08-12

Income, air pollution, obesity, and smoking are primary factors associated with human health and longevity in population-based studies. These four factors may have countervailing impacts on longevity. This analysis investigates longevity trade-offs between air pollution and income, and explores how the relative effects of income and air pollution on human longevity change when smoking and obesity are accounted for. County-level data from 2,996 U.S. counties were analyzed in a cross-sectional analysis to investigate relationships between longevity and the four factors of interest: air pollution (mean 1999-2008 PM2.5), median income, smoking, and obesity. Two longevity measures were used: life expectancy (LE) and an exceptional aging (EA) index. Linear regression, generalized additive regression models, and bivariate thin-plate smoothing splines were used to estimate the benefits of living in counties with higher incomes or lower PM2.5. Models were estimated with and without controls for smoking, obesity, and other factors. Models which account for smoking and obesity result in substantially smaller estimates of the effects of income and pollution on longevity. Linear regression models without these two variables estimate that a $1,000 increase in median income (1 μg/m(3) decrease in PM2.5) corresponds to a 27.39 (33.68) increase in EA and a 0.14 (0.12) increase in LE, whereas models that control for smoking and obesity estimate only a 12.32 (20.22) increase in EA and a 0.07 (0.05) increase in LE. Nonlinear models and thin-plate smoothing splines also illustrate that, at higher levels of income, the relative benefits of the income-pollution tradeoff changed: the benefit of higher incomes diminished relative to the benefit of lower air pollution exposure. Higher incomes and lower levels of air pollution both correspond with increased human longevity. Adjusting for smoking and obesity reduces estimates of the benefits of higher income and lower air pollution exposure. This adjustment also alters the tradeoff between income and pollution: increases in income become less beneficial relative to a fixed reduction in air pollution, especially at higher levels of income.

  7. Modeling terminal ballistics using blending-type spline surfaces

    NASA Astrophysics Data System (ADS)

    Pedersen, Aleksander; Bratlie, Jostein; Dalmo, Rune

    2014-12-01

We explore using GERBS, a blending-type spline construction, to represent deformable thin plates and model terminal ballistics. Strategies to construct geometry for different scenarios of terminal ballistics are proposed.

  8. Quadratic trigonometric B-spline for image interpolation using GA

    PubMed Central

    Abbas, Samreen; Irshad, Misbah

    2017-01-01

In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address the problems related to two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, the Genetic Algorithm (GA). The GA is used to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation. PMID:28640906

  9. Quadratic trigonometric B-spline for image interpolation using GA.

    PubMed

    Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah

    2017-01-01

    In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address the problems related to two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, the Genetic Algorithm (GA). The GA is employed to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation.

  10. Illumination estimation via thin-plate spline interpolation.

    PubMed

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training-set pruning significantly decreases the computation.
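
    A minimal sketch of thin-plate spline interpolation over nonuniformly scattered data, using SciPy's `RBFInterpolator` on synthetic stand-ins for the paper's thumbnail features and chromaticities (the features, target function, and sample count are invented):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Hypothetical training set: 2-D "thumbnail features" mapped to a scalar
    # illumination chromaticity, sampled nonuniformly.
    rng = np.random.default_rng(1)
    features = rng.uniform(0, 1, size=(200, 2))
    chroma = np.sin(2 * np.pi * features[:, 0]) + features[:, 1] ** 2

    # smoothing=0 gives exact interpolation through the scattered training data.
    tps = RBFInterpolator(features, chroma, kernel="thin_plate_spline", smoothing=0.0)

    query = np.array([[0.25, 0.5], [0.7, 0.1]])
    print(tps(query))    # estimated chromaticity at unseen feature points
    ```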

  11. A Comparison of Spatial Analysis Methods for the Construction of Topographic Maps of Retinal Cell Density

    PubMed Central

    Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome. PMID:24747568
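
    The interpolation-versus-smoothing distinction can be sketched with SciPy's thin-plate spline on synthetic "cell count" data (the density field, noise level, and smoothing value are illustrative assumptions, not the paper's R pipeline):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Hypothetical retinal counts: a smooth density peak plus sampling noise.
    rng = np.random.default_rng(2)
    xy = rng.uniform(-1, 1, size=(150, 2))               # sampling sites
    true_density = np.exp(-4 * (xy ** 2).sum(axis=1))    # the 'real' distribution
    counts = true_density + rng.normal(0, 0.05, 150)     # noisy observed counts

    # smoothing=0 'respects' the data exactly; smoothing>0 trades fidelity for
    # noise suppression, as in thin-plate spline smoothing.
    interp = RBFInterpolator(xy, counts, kernel="thin_plate_spline", smoothing=0.0)
    smooth = RBFInterpolator(xy, counts, kernel="thin_plate_spline", smoothing=1.0)

    err_interp = np.abs(interp(xy) - true_density).mean()
    err_smooth = np.abs(smooth(xy) - true_density).mean()
    print(err_interp, err_smooth)   # smoothing typically tracks the truth better
    ```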

  12. Digital soil classification and elemental mapping using imaging Vis-NIR spectroscopy: How to explicitly quantify stagnic properties of a Luvisol under Norway spruce

    NASA Astrophysics Data System (ADS)

    Kriegs, Stefanie; Buddenbaum, Henning; Rogge, Derek; Steffens, Markus

    2015-04-01

    Laboratory imaging Vis-NIR spectroscopy of soil profiles is a novel technique in soil science that can determine the quantity and quality of various chemical soil properties with a hitherto unreached spatial resolution in undisturbed soil profiles. We have applied this technique to soil cores in order to obtain quantitative proof of redoximorphic processes under two different tree species and to prove tree-soil interactions at the microscale. Due to the imaging capabilities of Vis-NIR spectroscopy, a spatially explicit understanding of soil processes and properties can be achieved, and the spatial heterogeneity of the soil profile can be taken into account. We took six 30 cm long rectangular soil columns of adjacent Luvisols derived from Quaternary aeolian sediments (loess) in a forest soil near Freising, Bavaria, using stainless steel boxes (100×100×300 mm). Three profiles were sampled under Norway spruce and three under European beech. A hyperspectral camera (VNIR, 400-1000 nm in 160 spectral bands) with a spatial resolution of 63×63 µm² per pixel was used for data acquisition. Reference samples were taken at representative spots and analysed for organic carbon (OC) quantity and quality with a CN elemental analyser and for iron oxide (Fe) content using dithionite extraction followed by ICP-OES measurement. We compared two supervised classification algorithms, Spectral Angle Mapper and Maximum Likelihood, using different sets of training areas and spectral libraries. As established in chemometrics, we used multivariate analyses such as partial least-squares regression (PLSR) in addition to multivariate adaptive regression splines (MARS) to correlate chemical data with Vis-NIR spectra. As a result, elemental mapping of Fe and OC within the soil core at high spatial resolution has been achieved. The regression model was validated by a new set of reference samples for chemical analysis. Digital soil classification easily visualizes soil properties within the soil profiles. By combining both techniques, detailed soil maps, elemental balances and a deeper understanding of soil-forming processes at the microscale become feasible for complete soil profiles.
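
    A minimal PLS1 (NIPALS) sketch on synthetic "spectra" (the data, band count, and number of components are assumptions; this illustrates the PLSR step only, not the paper's full chemometric pipeline or the MARS comparison):

    ```python
    import numpy as np

    def pls1(X, y, n_components):
        """Minimal PLS1 (NIPALS): coefficients for centered X predicting centered y."""
        X = X - X.mean(axis=0)
        y = y - y.mean()
        Xk = X.copy()
        W, P, q = [], [], []
        for _ in range(n_components):
            w = Xk.T @ y
            w /= np.linalg.norm(w)          # weight vector
            t = Xk @ w                      # scores
            p = Xk.T @ t / (t @ t)          # X loadings
            qk = y @ t / (t @ t)            # y loading
            Xk = Xk - np.outer(t, p)        # deflate X
            y = y - qk * t                  # deflate y
            W.append(w); P.append(p); q.append(qk)
        W, P, q = np.array(W).T, np.array(P).T, np.array(q)
        return W @ np.linalg.solve(P.T @ W, q)

    # Hypothetical 'spectra': 100 samples, 50 bands, 3 informative bands.
    rng = np.random.default_rng(8)
    X = rng.normal(size=(100, 50))
    beta = np.zeros(50)
    beta[[5, 20, 35]] = [1.0, -0.5, 0.8]
    y = X @ beta + rng.normal(0, 0.1, 100)

    B = pls1(X, y, n_components=5)
    yhat = (X - X.mean(axis=0)) @ B + y.mean()
    r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    print("in-sample R^2:", r2)
    ```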

  13. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    PubMed Central

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

    In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately represent the geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to the biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and with real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  14. Weighted spline based integration for reconstruction of freeform wavefront.

    PubMed

    Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra

    2018-02-10

    In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from slope data has been implemented. The slope data of a freeform surface contain noise due to the machining process, and that introduces reconstruction error. We have proposed a weighted cubic spline based least-squares integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted into a piecewise polynomial. The fitted coefficients are determined by using a smoothing cubic spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least-squares technique to reconstruct the freeform wavefront. Simulation studies show the improved result using the proposed technique as compared to the existing cubic spline-based integration (CSLI) and Southwell methods. The proposed reconstruction method has been experimentally applied to a subaperture stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI, which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology applications.
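
    A 1-D analogue of the idea, on synthetic slope data: fit a weighted cubic smoothing spline to noisy slopes with SciPy, then integrate the spline to recover the wavefront up to a constant. This is a sketch under stated assumptions, not the authors' WCSLI implementation (their weighting is local and the real data are 2-D):

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # 1-D stand-in: noisy slope measurements of a known wavefront profile.
    x = np.linspace(-1, 1, 200)
    true_wavefront = x ** 3 - 0.5 * x
    slope = 3 * x ** 2 - 0.5                       # analytic derivative
    rng = np.random.default_rng(3)
    noisy_slope = slope + rng.normal(0, 0.05, x.size)

    # Weighted cubic smoothing spline: weights ~ 1/sigma, and the smoothing
    # condition sum((w*(y - spl(x)))**2) <= s with s = number of points.
    w = np.full(x.size, 1 / 0.05)
    spl = UnivariateSpline(x, noisy_slope, w=w, k=3, s=float(x.size))

    # Integrate the fitted slope; the constant of integration is free.
    recon = spl.antiderivative()(x)
    recon -= recon.mean() - true_wavefront.mean()
    print(np.abs(recon - true_wavefront).max())    # small reconstruction error
    ```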

  15. Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.

    PubMed

    de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph

    2008-01-01

    The robust algorithm OPED for the reconstruction of images from Radon data has been developed recently. It reconstructs an image from parallel data within a special scanning geometry that needs no rebinning, only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines, and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy of the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error. The spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most suitable method. The reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than the ones from linear interpolation and have the largest MTF at all frequencies. Parametric splines proved advantageous only for small sinograms (below 50 fan views).
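
    The gap-filling step can be sketched in one dimension with SciPy's `CubicSpline` (the sinogram row and gap pattern are invented for illustration):

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Hypothetical sinogram row with empty cells marked NaN.
    angles = np.linspace(0, np.pi, 64, endpoint=False)
    row = np.sin(angles) + 0.3 * np.sin(3 * angles)    # stand-in projection data
    row_with_gaps = row.copy()
    row_with_gaps[::5] = np.nan                        # every 5th cell is empty

    # Fit a cubic spline through the known cells and fill the gaps from it.
    known = ~np.isnan(row_with_gaps)
    cs = CubicSpline(angles[known], row_with_gaps[known])
    filled = row_with_gaps.copy()
    filled[~known] = cs(angles[~known])
    print(np.abs(filled - row).max())   # interpolation error at the filled cells
    ```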

  16. A thin-plate spline analysis of the face and tongue in obstructive sleep apnea patients.

    PubMed

    Pae, E K; Lowe, A A; Fleetham, J A

    1997-12-01

    The shape characteristics of the face and tongue in obstructive sleep apnea (OSA) patients were investigated using thin-plate (TP) splines. A relatively new analytic tool, the TP spline method provides a means of size normalization and image analysis. When shape is one's main concern, the various sizes of a biologic structure may be a source of statistical noise. More seriously, a strong size effect could mask the underlying, actual attributes of the disease. A set of size-normalized data in the form of coordinates was generated from cephalograms of 80 male subjects. The TP spline method visualized the differences in the shape of the face and tongue between OSA patients and nonapneic subjects, and those between the upright and supine body positions. In accordance with OSA severity, the hyoid bone and the submental region were positioned inferiorly, and the fourth vertebra was relocated posteriorly with respect to the mandible. This caused a fanlike configuration of the lower part of the face and neck in the sagittal plane in both upright and supine body positions. TP splines revealed tongue deformations caused by a body position change. Overall, the new morphometric tool adopted here was found to be viable for the analysis of morphologic changes.

  17. Spline-Screw Multiple-Rotation Mechanism

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Mechanism functions like combined robotic gripper and nut runner. Spline-screw multiple-rotation mechanism related to spline-screw payload-fastening system described in (GSC-13454). Incorporated as subsystem in alternative version of system. Mechanism functions like combination of robotic gripper and nut runner; provides both secure grip and rotary actuation of other parts of system. Used in system in which no need to make or break electrical connections to payload during robotic installation or removal of payload. More complicated version needed to make and break electrical connections. Mechanism mounted in payload.

  18. Vector splines on the sphere with application to the estimation of vorticity and divergence from discrete, noisy data

    NASA Technical Reports Server (NTRS)

    Wahba, G.

    1982-01-01

    Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.

  19. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    NASA Astrophysics Data System (ADS)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

    The paper presents a combined approach to noise mapping and visualization of industrial facilities' sound pollution using the forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for range computation of sanitary zones based on a ray-tracing algorithm. Computation of sound pressure levels within clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside the zone.

  20. Volumetric T-spline Construction Using Boolean Operations

    DTIC Science & Technology

    2013-07-01

    Volumetric T-spline Construction Using Boolean Operations. Acknowledgements: The work of L. Liu and Y. Zhang was supported by ONR-YIP award N00014-10-1-0698 and an ONR Grant N00014-08-1-0653. T. J. R. Hughes was supported by ONR Grant N00014-08-1-0992, NSF GOALI CMI-0700807/0700204, NSF CMMI-1101007 and a SINTEF …

  1. Design and Delivery of HMT Half-Shaft Prototype

    DTIC Science & Technology

    2012-11-01

    …spindle welded to the outer joint output is ease of assembly. Flange 1 contains threaded…, spindle, and splined shafts. Also, the spindle of the production design is splined to match the splines of the hub internals. 2.2. Analysis: The…inner joint (Figure 33: FBD of Flange/Spindle). Applying Newton’s Laws to the…

  2. Internal Friction And Instabilities Of Rotors

    NASA Technical Reports Server (NTRS)

    Walton, J.; Artiles, A.; Lund, J.; Dill, J.; Zorzi, E.

    1992-01-01

    Report describes study of effects of internal friction on dynamics of rotors, prompted by concern over instabilities in rotors of turbomachines. Theoretical and experimental studies described. The theoretical studies involved development of nonlinear mathematical models of internal friction in three joints found in turbomachinery - axial splines, Curvic(TM) splines, and interference fits between smooth cylindrical surfaces. The experimental studies included traction tests to determine the coefficients of friction of rotor alloys at various temperatures, bending-mode-vibration tests of shafts equipped with various joints, and rotordynamic tests of shafts with axial-spline and interference-fit joints.

  3. The estimation of branching curves in the presence of subject-specific random effects.

    PubMed

    Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng

    2014-12-20

    Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.

  4. B-spline parameterization of the dielectric function and information criteria: the craft of non-overfitting

    NASA Astrophysics Data System (ADS)

    Likhachev, Dmitriy V.

    2017-06-01

    Johs and Hale developed the Kramers-Kronig consistent B-spline formulation for dielectric function modeling in spectroscopic ellipsometry data analysis. In this article we use the popular Akaike, corrected Akaike and Bayesian Information Criteria (AIC, AICc and BIC, respectively) to determine an optimal number of knots for the B-spline model. These criteria allow finding a compromise between under- and overfitting of experimental data, since they penalize an increasing number of knots and select the representation which achieves the best fit with a minimal number of knots. The proposed approach provides objective and practical guidance, as opposed to empirically driven or "gut feeling" decisions, for selecting the right number of knots for B-spline models in spectroscopic ellipsometry. The AIC, AICc and BIC selection criteria work remarkably well, as we demonstrate in several real-data applications. This approach formalizes the selection of the optimal knot number and may be useful from a practical perspective in spectroscopic ellipsometry data analysis.
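
    The knot-selection recipe can be sketched with SciPy's `LSQUnivariateSpline` and the classical AIC formula n·ln(RSS/n) + 2k (the data are synthetic, uniform interior knots are an assumption, and AICc or BIC would only change the penalty term):

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    # Synthetic 'measurement': a smooth curve plus noise.
    rng = np.random.default_rng(4)
    x = np.linspace(0, 1, 300)
    y = np.sin(4 * np.pi * x) * np.exp(-x) + rng.normal(0, 0.05, x.size)

    def aic_for_knots(n_interior):
        """AIC = n*ln(RSS/n) + 2k for a cubic spline with uniform interior knots."""
        t = np.linspace(0, 1, n_interior + 2)[1:-1]
        spl = LSQUnivariateSpline(x, y, t, k=3)
        rss = float(np.sum((spl(x) - y) ** 2))
        k_params = n_interior + 4       # number of cubic B-spline coefficients
        return x.size * np.log(rss / x.size) + 2 * k_params

    scores = {m: aic_for_knots(m) for m in range(1, 30)}
    best = min(scores, key=scores.get)
    print("AIC-optimal number of interior knots:", best)
    ```

    Too few knots underfit (large RSS dominates), too many are penalized by the 2k term, so the minimum of the criterion sits in between.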

  5. Mammogram registration using the Cauchy-Navier spline

    NASA Astrophysics Data System (ADS)

    Wirth, Michael A.; Choi, Christopher

    2001-07-01

    The process of comparative analysis involves inspecting mammograms for characteristic signs of potential cancer by comparing various analogous mammograms. Factors such as the deformable behavior of the breast, changes in breast positioning, and the amount/geometry of compression may contribute to spatial differences between corresponding structures in corresponding mammograms, thereby significantly complicating comparative analysis. Mammogram registration is a process whereby spatial differences between mammograms can be reduced. Presented in this paper is a nonrigid approach to matching corresponding mammograms based on a physical registration model. Many of the earliest approaches to mammogram registration used spatial transformations which were innately rigid or affine in nature. More recently algorithms have incorporated radial basis functions such as the Thin-Plate Spline to match mammograms. The approach presented here focuses on the use of the Cauchy-Navier Spline, a deformable registration model which offers approximate nonrigid registration. The utility of the Cauchy-Navier Spline is illustrated by matching both temporal and bilateral mammograms.

  6. Bidirectional Elastic Image Registration Using B-Spline Affine Transformation

    PubMed Central

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao

    2014-01-01

    A registration scheme termed as B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-Spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes but with large deformation, a bi-directional instead of the traditional unidirectional objective / cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210

  7. Comparison and validation of statistical methods for predicting power outage durations in the event of hurricanes.

    PubMed

    Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M

    2011-12-01

    This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate adaptive regression splines (MARS)). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.

  8. How many stakes are required to measure the mass balance of a glacier?

    USGS Publications Warehouse

    Fountain, A.G.; Vecchia, A.

    1999-01-01

    Glacier mass balance is estimated for South Cascade Glacier and Maclure Glacier using a one-dimensional regression of mass balance with altitude as an alternative to the traditional approach of contouring mass balance values. One attractive feature of regression is that it can be applied to sparse data sets where contouring is not possible and can provide an objective error of the resulting estimate. Regression methods yielded mass balance values equivalent to contouring methods. The effect of the number of mass balance measurements on the final value for the glacier showed that sample sizes as small as five stakes provided reasonable estimates, although the error estimates were greater than for larger sample sizes. Different spatial patterns of measurement locations showed no appreciable influence on the final value as long as different surface altitudes were intermittently sampled over the altitude range of the glacier. Two different regression equations were examined, a quadratic, and a piecewise linear spline, and comparison of results showed little sensitivity to the type of equation. These results point to the dominant effect of the gradient of mass balance with altitude of alpine glaciers compared to transverse variations. The number of mass balance measurements required to determine the glacier balance appears to be scale invariant for small glaciers and five to ten stakes are sufficient.
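
    The piecewise linear spline regression of balance against altitude can be sketched with a single hinge basis function (the stake altitudes, gradients, break point, and the uniform-hypsometry averaging below are invented for illustration, not the paper's data):

    ```python
    import numpy as np

    # Hypothetical stake data: eight stakes spread over the altitude range, with
    # a change in the balance gradient at an assumed break point (1850 m).
    rng = np.random.default_rng(5)
    z = np.linspace(1625, 2075, 8)                       # stake altitudes (m)
    b = (-4 + 0.004 * (z - 1600) + 0.004 * np.maximum(z - 1850, 0)
         + rng.normal(0, 0.15, z.size))                  # balance (m w.e.)

    # Piecewise linear spline via a single hinge basis function.
    knot = 1850.0
    X = np.column_stack([np.ones(z.size), z - 1600, np.maximum(z - knot, 0)])
    coef, *_ = np.linalg.lstsq(X, b, rcond=None)

    # Glacier-wide balance: mean of the fitted curve, assuming uniform hypsometry.
    zz = np.linspace(1600, 2100, 200)
    fit = coef[0] + coef[1] * (zz - 1600) + coef[2] * np.maximum(zz - knot, 0)
    print("glacier-wide balance estimate:", fit.mean())
    ```

    With only a handful of stakes, the fit is driven by the strong balance-altitude gradient, which is the paper's point about small sample sizes sufficing.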

  9. Quantile regression via vector generalized additive models.

    PubMed

    Yee, Thomas W

    2004-07-30

    One of the most popular methods for quantile regression is the LMS method of Cole and Green. The method naturally falls within a penalized likelihood framework, and consequently allows for considerable flexibility because all three parameters may be modelled by cubic smoothing splines. The model is also very understandable: for a given value of the covariate, the LMS method applies a Box-Cox transformation to the response in order to transform it to standard normality; to obtain the quantiles, an inverse Box-Cox transformation is applied to the quantiles of the standard normal distribution. The purposes of this article are three-fold. Firstly, LMS quantile regression is presented within the framework of the class of vector generalized additive models. This confers a number of advantages, such as a unifying theory and estimation process. Secondly, a new LMS method based on the Yeo-Johnson transformation is proposed, which has the advantage that the response is not restricted to be positive. Lastly, this paper describes a software implementation of three LMS quantile regression methods in the S language. This includes the LMS-Yeo-Johnson method, which is estimated efficiently by a new numerical integration scheme. The LMS-Yeo-Johnson method is illustrated by way of a large cross-sectional data set from a New Zealand working population. Copyright 2004 John Wiley & Sons, Ltd.
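
    The Yeo-Johnson step, which removes the positivity restriction, is available directly in SciPy; a minimal sketch on synthetic skewed data (this shows the transformation only, not the LMS estimation):

    ```python
    import numpy as np
    from scipy.stats import yeojohnson, skew

    # Skewed synthetic response that takes negative values, so a Box-Cox
    # transform would not apply but Yeo-Johnson does.
    rng = np.random.default_rng(6)
    y = rng.gamma(2.0, 1.0, 2000) - 1.0

    y_t, lmbda = yeojohnson(y)      # transformed data plus fitted lambda
    print("skewness before:", skew(y), " after:", skew(y_t))
    ```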

  10. Cubic spline anchored grid pattern algorithm for high-resolution detection of subsurface cavities by the IR-CAT method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassab, A.J.; Pollard, J.E.

    An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavities walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined measuring the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in measured surface temperature. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.

  11. Comparison of Ionospheric Vertical Total Electron Content modelling approaches using spline based representations

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej; Schmidt, Michael; Erdogan, Eren; Goss, Andreas

    2017-04-01

    Since electromagnetic measurements show dispersive characteristics, accurate modelling of the ionospheric electron content plays an important role for positioning and navigation applications to mitigate the effect of ionospheric disturbances. Knowledge about the ionosphere contributes to a better understanding of space weather events as well as to forecasting these events, enabling protective measures for electronic systems and satellite missions in advance. In the last decades, advances in satellite technologies, data analysis techniques and models, together with a rapidly growing number of analysis centres, allow modelling the ionospheric electron content with an unprecedented accuracy in (near) real-time. In this sense, the representation of electron content variations in time and space with spline basis functions has gained practical importance in global and regional ionosphere modelling. This is due to their compact support and their flexibility in handling unevenly distributed observations and data gaps. In this contribution, the performance of two ionosphere models, from UWM and DGFI-TUM, both developed using spline functions, is evaluated. The VTEC model of DGFI-TUM is based on tensor products of trigonometric B-spline functions in longitude and polynomial B-spline functions in latitude for a global representation. The UWM model uses a two-dimensional planar thin-plate spline (TPS) with the Universal Transverse Mercator representation of ellipsoidal coordinates. In order to provide a smooth VTEC model, the TPS minimizes both the squared norm of the Hessian matrix and the deviations between data points and the model. In the evaluations, the differenced STEC analysis method and Jason-2 altimetry comparisons are applied.
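
    A tensor-product B-spline surface of the kind used for VTEC representation can be sketched with SciPy (note the assumptions: polynomial B-splines in both coordinates here, whereas DGFI-TUM uses trigonometric B-splines in longitude, and the grid and values are invented):

    ```python
    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    # Invented gridded VTEC map: a smooth bump over latitude/longitude.
    lat = np.linspace(-60, 60, 25)
    lon = np.linspace(-180, 180, 49)
    LAT, LON = np.meshgrid(lat, lon, indexing="ij")
    vtec = 30 * np.exp(-((LAT - 10) / 40) ** 2) * (1 + 0.3 * np.cos(np.radians(LON)))

    # Tensor-product cubic B-spline surface; s=0 interpolates the grid exactly.
    surf = RectBivariateSpline(lat, lon, vtec, kx=3, ky=3, s=0)
    val = surf(10.0, 45.0).item()
    print(val)    # VTEC evaluated at an arbitrary point (TECU)
    ```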

  12. A time-efficient algorithm for implementing the Catmull-Clark subdivision method

    NASA Astrophysics Data System (ADS)

    Ioannou, G.; Savva, A.; Stylianou, V.

    2015-10-01

    Splines are the most popular methods in figure modeling and CAGD (Computer-Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of points which, when displayed on a computer screen, result in a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures such as human and animal bodies, whose complex structure does not allow them to be defined by a regular rectangular grid. On the other hand, surface subdivision methods, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that, during the last fifteen years, subdivision methods have taken the lead over regular spline methods in all areas of modeling in both industry and research. The cost of executing computer software developed to read control points and calculate the surface is run-time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs related to the implementation of subdivision surfaces have been developed; however, not many algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. The Catmull-Clark method, the most popular of the subdivision methods, has been employed to illustrate the algorithm.
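
    One Catmull-Clark refinement step for a closed all-quad mesh can be sketched compactly (a naive adjacency search for illustration, not the time-efficient algorithm the paper proposes):

    ```python
    import numpy as np

    def catmull_clark(verts, faces):
        """One Catmull-Clark step for a closed all-quad mesh (minimal sketch)."""
        verts = np.asarray(verts, float)
        face_pts = np.array([verts[f].mean(axis=0) for f in faces])

        # Collect the two faces adjacent to each undirected edge.
        edge_faces = {}
        for fi, f in enumerate(faces):
            for a, b in zip(f, f[1:] + f[:1]):
                edge_faces.setdefault(frozenset((a, b)), []).append(fi)

        # Edge point: average of the edge endpoints and the two adjacent face points.
        edge_pts = {e: (verts[list(e)].sum(axis=0) + face_pts[fs].sum(axis=0)) / 4
                    for e, fs in edge_faces.items()}

        # Vertex rule: V' = (Q + 2R + (n-3)V) / n, with Q the average of adjacent
        # face points and R the average of adjacent edge midpoints.
        new_verts = np.empty_like(verts)
        for v in range(len(verts)):
            adj_faces = [fi for fi, f in enumerate(faces) if v in f]
            adj_edges = [e for e in edge_faces if v in e]
            n = len(adj_edges)
            Q = face_pts[adj_faces].mean(axis=0)
            R = np.mean([verts[list(e)].mean(axis=0) for e in adj_edges], axis=0)
            new_verts[v] = (Q + 2 * R + (n - 3) * verts[v]) / n

        return new_verts, face_pts, edge_pts

    # One step on the unit cube: 8 vertex + 12 edge + 6 face points = 26 points.
    cube_v = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    cube_f = [[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1],
              [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]]
    nv, fp, ep = catmull_clark(cube_v, cube_f)
    print(len(nv) + len(fp) + len(ep))
    ```

    The linear scans over `faces` for each vertex are exactly the kind of cost a purpose-built adjacency structure (as in the paper's algorithm) avoids.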

  13. Multiscale analysis of neural spike trains.

    PubMed

    Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin

    2014-01-30

    This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.
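
    A minimal sketch of the ingredients (not the paper's estimators): simulate an inhomogeneous Poisson spike train by Lewis-Shedler thinning, then inspect its intensity at two time scales with Gaussian kernel smoothers. The intensity function and bandwidths are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target intensity (spikes/s): a slow rhythm, as in a stimulus-driven train.
lam = lambda t: 20.0 * (1.0 + np.sin(2 * np.pi * t))   # peaks at 40 spikes/s
lam_max, T = 40.0, 10.0

# Lewis-Shedler thinning: homogeneous candidates at rate lam_max,
# each accepted with probability lam(t) / lam_max.
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
spikes = cand[rng.uniform(0, lam_max, cand.size) < lam(cand)]

def intensity(spikes, grid, bw):
    """Kernel intensity estimate (spikes/s) at one time scale (bandwidth bw)."""
    k = np.exp(-0.5 * ((grid[:, None] - spikes[None, :]) / bw) ** 2)
    return k.sum(axis=1) / (bw * np.sqrt(2 * np.pi))

grid = np.linspace(0, T, 501)
fine = intensity(spikes, grid, 0.02)    # fast time scale: tracks single spikes
coarse = intensity(spikes, grid, 0.5)   # slow time scale: recovers the rhythm
```

    Comparing `fine` and `coarse` is the crude analogue of a multiscale analysis: the fine scale exposes spike-level structure, while the coarse scale recovers the underlying periodic drive.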

  14. S-Nitrosylation Proteome Profile of Peripheral Blood Mononuclear Cells in Human Heart Failure

    PubMed Central

    Spratt, Heidi M.; Gupta, Shivali; Petersen, John R.; Kuyumcu-Martinez, Muge N.

    2016-01-01

    Nitric oxide (NO) protects the heart against ischemic injury; however, NO- and superoxide-dependent S-nitrosylation (S-NO) of cysteines can affect the function of target proteins and play a role in disease outcome. We employed 2D-GE with thiol-labeling FL-maleimide dye and MALDI-TOF MS/MS to capture the quantitative changes in abundance and the S-NO proteome of HF patients (versus healthy controls, n = 30/group). We identified 93 differentially abundant (59 increased/34 decreased) and 111 S-NO-modified (63 increased/48 decreased) protein spots, respectively, in HF subjects (versus controls, |fold-change| ≥ 1.5, p ≤ 0.05). Ingenuity pathway analysis of the proteome datasets suggested that pathways involved in phagocyte migration, free radical production, and cell death were activated, and fatty acid metabolism was decreased, in HF subjects. Multivariate adaptive regression splines modeling of the datasets identified a panel of proteins that will provide >90% prediction success in classifying HF subjects. Proteomic profiling identified ATP-synthase, thrombospondin-1 (THBS1), and vinculin (VCL) as top differentially abundant and S-NO-modified proteins, and these proteins were verified by Western blotting and ELISA in a different set of HF subjects. We conclude that differential abundance and S-NO modification of proteins serve as a mechanism in regulating cell viability and free radical production, and that THBS1 and VCL evaluation will potentially be useful in the prediction of heart failure. PMID:27635260

  15. Data-driven fuel consumption estimation: A multivariate adaptive regression spline approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuche; Zhu, Lei; Gonder, Jeffrey

    Providing guidance and information to drivers to help them make fuel-efficient route choices remains an important and effective strategy in the near term to reduce fuel consumption from the transportation sector. One key component in implementing this strategy is a fuel-consumption estimation model. In this paper, we developed a mesoscopic fuel consumption estimation model that can be implemented into an eco-routing system. Our proposed model presents a framework that utilizes large-scale, real-world driving data, clusters road links by free-flow speed, and fits one statistical model for each cluster. The model includes predictor variables that were rarely or never considered before, such as free-flow speed and number of lanes. We applied the model to a real-world driving data set based on a global positioning system travel survey in the Philadelphia-Camden-Trenton metropolitan area. Results from the statistical analyses indicate that the independent variables we chose influence the fuel consumption rates of vehicles, but the magnitude and direction of the influences depend on the type of road link, specifically the free-flow speed of the link. A statistical diagnostic is conducted to ensure the validity of the models and results. Although the real-world driving data we used to develop the statistical relationships are specific to one region, the framework we developed can be easily adjusted and used to explore the fuel consumption relationship in other regions.
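
    The framework — cluster links by free-flow speed, then fit one model per cluster — can be sketched generically. The threshold, variables, and linear per-cluster form below are illustrative stand-ins for the paper's MARS fits, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic link records: free-flow speed (mph), average speed, number of lanes.
n = 400
ffs = rng.uniform(20, 70, n)
speed = ffs * rng.uniform(0.5, 1.0, n)
lanes = rng.integers(1, 5, n).astype(float)
# Ground truth: the speed effect differs between low- and high-FFS links,
# mimicking the paper's finding that influences depend on link type.
fuel = np.where(ffs < 45, 8.0 - 0.06 * speed, 5.0 - 0.02 * speed) + 0.3 * lanes

def fit_by_cluster(ffs, X, y, cut=45.0):
    """Split links at a free-flow-speed threshold and fit one linear model each."""
    models = {}
    for name, mask in (("low", ffs < cut), ("high", ffs >= cut)):
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        models[name], *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    return models

X = np.column_stack([speed, lanes])
models = fit_by_cluster(ffs, X, fuel)
```

    With noise-free synthetic data, each per-cluster fit recovers its own coefficients exactly, which is the point of clustering: one pooled model could not represent both speed effects at once.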

  16. Data-driven fuel consumption estimation: A multivariate adaptive regression spline approach

    DOE PAGES

    Chen, Yuche; Zhu, Lei; Gonder, Jeffrey; ...

    2017-08-12

    Providing guidance and information to drivers to help them make fuel-efficient route choices remains an important and effective strategy in the near term to reduce fuel consumption from the transportation sector. One key component in implementing this strategy is a fuel-consumption estimation model. In this paper, we developed a mesoscopic fuel consumption estimation model that can be implemented into an eco-routing system. Our proposed model presents a framework that utilizes large-scale, real-world driving data, clusters road links by free-flow speed, and fits one statistical model for each cluster. The model includes predictor variables that were rarely or never considered before, such as free-flow speed and number of lanes. We applied the model to a real-world driving data set based on a global positioning system travel survey in the Philadelphia-Camden-Trenton metropolitan area. Results from the statistical analyses indicate that the independent variables we chose influence the fuel consumption rates of vehicles, but the magnitude and direction of the influences depend on the type of road link, specifically the free-flow speed of the link. A statistical diagnostic is conducted to ensure the validity of the models and results. Although the real-world driving data we used to develop the statistical relationships are specific to one region, the framework we developed can be easily adjusted and used to explore the fuel consumption relationship in other regions.

  17. Integration of data mining classification techniques and ensemble learning to identify risk factors and diagnose ovarian cancer recurrence.

    PubMed

    Tseng, Chih-Jen; Lu, Chi-Jie; Chang, Chi-Chang; Chen, Gin-Den; Cheewakriangkrai, Chalong

    2017-05-01

    Ovarian cancer is the second leading cause of death among gynecologic cancers in the world. Approximately 90% of women with ovarian cancer report having had symptoms long before a diagnosis was made. The literature suggests that recurrence should be predicted from patients' personal risk factors and the clinical symptoms of this devastating cancer. In this study, ensemble learning and five data mining approaches, including support vector machine (SVM), C5.0, extreme learning machine (ELM), multivariate adaptive regression splines (MARS), and random forest (RF), were integrated to rank the importance of risk factors and diagnose the recurrence of ovarian cancer. The medical records and pathologic status were extracted from the Chung Shan Medical University Hospital Tumor Registry. Experimental results illustrated that the integrated C5.0 model is a superior approach in predicting the recurrence of ovarian cancer. Moreover, the classification accuracies of C5.0, ELM, MARS, RF, and SVM indeed increased after using the selected important risk factors as predictors. Our findings suggest that International Federation of Gynecology and Obstetrics (FIGO) stage, pathologic M, age, and pathologic T were the four most critical risk factors for ovarian cancer recurrence. In summary, this information can support interventions across all phases of the recurrent trajectory, given the complexities of the multiple symptoms associated with ovarian cancer. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Determinants of gait stability while walking on a treadmill: A machine learning approach.

    PubMed

    Reynard, Fabienne; Terrier, Philippe

    2017-12-08

    Dynamic balance in human locomotion can be assessed through the local dynamic stability (LDS) method. Whereas gait LDS has been used successfully in many settings and applications, little is known about its sensitivity to individual characteristics of healthy adults. Therefore, we reanalyzed a large dataset of accelerometric data measured for 100 healthy adults from 20 to 70 years of age performing 10 min of treadmill walking. We sought to assess the extent to which variations in age, body mass and height, sex, and preferred walking speed (PWS) could influence gait LDS. The random forest (RF) and multivariate adaptive regression splines (MARS) algorithms were selected for their good bias-variance tradeoff and their capability to handle nonlinear associations. First, through a variable importance measure (VIM), we used RF to evaluate which individual characteristics had the highest influence on gait LDS. Second, we used MARS to detect potential interactions among individual characteristics that may influence LDS. The VIM and MARS results indicated that PWS and age correlated with LDS, whereas no associations were found for sex, body height, and body mass. Further, the MARS model detected an age by PWS interaction: at high PWS, gait stability is constant across age, whereas at low PWS, gait instability increases substantially with age. We conclude that it is advisable to consider the participants' age as well as their PWS to avoid potential biases in evaluating dynamic balance through LDS. Copyright © 2017 Elsevier Ltd. All rights reserved.
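
    The kind of age-by-PWS interaction MARS detects is expressed through products of hinge functions h(x) = max(0, x - t). The sketch below fits such a basis by least squares on synthetic data; the knots (age 40, PWS 1.2 m/s) and coefficients are invented, and a real MARS run chooses its knots adaptively via forward/backward passes.

```python
import numpy as np

h = lambda x, t: np.maximum(0.0, x - t)   # MARS hinge function

rng = np.random.default_rng(3)
age = rng.uniform(20, 70, 300)
pws = rng.uniform(0.8, 1.6, 300)          # preferred walking speed, m/s

# Synthetic "instability": worsens with age only when PWS is low, i.e. an
# (age - 40)+ x (1.2 - pws)+ interaction of the kind MARS would report.
y = 1.0 + 0.05 * h(age, 40.0) * h(1.2 - pws, 0.0)

# MARS-like fixed-knot basis: intercept, two hinges, and their product.
B = np.column_stack([np.ones_like(age),
                     h(age, 40.0),
                     h(1.2 - pws, 0.0),
                     h(age, 40.0) * h(1.2 - pws, 0.0)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
```

    The fitted coefficient on the product term is what encodes "age matters only at low PWS": for pws above 1.2 the hinge is zero and the age effect vanishes.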

  19. System Identification for Nonlinear Control Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Linse, Dennis J.

    1990-01-01

    An approach to incorporating artificial neural networks in nonlinear, adaptive control systems is described. The controller contains three principal elements: a nonlinear inverse dynamic control law whose coefficients depend on a comprehensive model of the plant, a neural network that models system dynamics, and a state estimator whose outputs drive the control law and train the neural network. Attention is focused on the system identification task, which combines an extended Kalman filter with generalized spline function approximation. Continual learning is possible during normal operation, without taking the system off line for specialized training. Nonlinear inverse dynamic control requires smooth derivatives as well as function estimates, imposing stringent goals on the approximating technique.

  20. Numeric model to predict the location of market demand and economic order quantity for retailers of supply chain

    NASA Astrophysics Data System (ADS)

    Fradinata, Edy; Marli Kesuma, Zurnila

    2018-05-01

    Polynomial and spline regression are the numeric models used here to compare method performance, to model distance relationships among cement retailers in Banda Aceh, to predict retailers' market areas, and to determine the economic order quantity (EOQ). The models differ in accuracy as measured by the mean square error (MSE). The distance relationships between retailers identify the density of retailers in the town. The dataset was collected from cement retailers' sales together with global positioning system (GPS) coordinates. The sales data were plotted to assess the goodness of fit of quadratic, cubic, and fourth-order polynomial models; on the real sales dataset, the polynomials were fitted to the relationship between abscissa and ordinate. This research offers several contributions: the four fitted models are useful for predicting a retailer's market area under competition, the performance of the methods is compared, the distance relationships between retailers are quantified, and an inventory policy based on the economic order quantity is derived. The results show that high-density retailer areas coincide with a growing population and construction projects. The spline is better than the quadratic, cubic, and fourth-order polynomials at predicting the points, yielding the smallest MSE. The inventory policy uses the periodic review type.

  1. Radial Splines Would Prevent Rotation Of Bearing Race

    NASA Technical Reports Server (NTRS)

    Kaplan, Ronald M.; Chokshi, Jaisukhlal V.

    1993-01-01

    Interlocking fine-pitch ribs and grooves would be formed on the otherwise flat mating end faces of the housing and the outer race of a rolling-element bearing to be mounted in the housing, according to a proposal. The splines would bear large torque loads while imposing minimal distortion on the raceway.

  2. The computation of Laplacian smoothing splines with examples

    NASA Technical Reports Server (NTRS)

    Wendelberger, J. G.

    1982-01-01

    Laplacian smoothing splines (LSS) are presented as generalizations of graduation, cubic and thin plate splines. The method of generalized cross validation (GCV) for choosing the smoothing parameter is described. The GCV is used in the algorithm for the computation of LSSs. An outline of a computer program which implements this algorithm is presented, along with a description of the use of the program. Examples in one, two and three dimensions demonstrate how to obtain estimates of function values with confidence intervals, and estimates of first and second derivatives. Probability plots are used as a diagnostic tool to check for model inadequacy.
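
    The GCV criterion can be sketched on a much simpler smoother than an LSS: for a linear smoother with hat matrix H(λ), one minimizes GCV(λ) = n·RSS(λ) / (n − tr H(λ))². The second-difference penalty below is a 1-D stand-in for the Laplacian penalty; the data and λ grid are illustrative.

```python
import numpy as np

def smooth_gcv(y, lams):
    """Second-difference penalized smoother with lambda chosen by GCV.

    GCV(lam) = n * RSS / (n - tr(H))^2, with hat matrix
    H = (I + lam * D'D)^-1 and D the second-difference operator.
    """
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)       # (n-2) x n second differences
    best = None
    for lam in lams:
        H = np.linalg.inv(np.eye(n) + lam * (D.T @ D))
        fit = H @ y
        rss = np.sum((y - fit) ** 2)
        gcv = n * rss / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, fit)
    return best[1], best[2]
```

    A real LSS implementation avoids forming H explicitly; the O(n³) inversions here are only for clarity.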

  3. B-spline based image tracking by detection

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman

    2016-05-01

    Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-Spline based image tracking is implemented. The novel method models the background and foreground using the B-Spline method followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.

  4. Weighted cubic and biharmonic splines

    NASA Astrophysics Data System (ADS)

    Kvasov, Boris; Kim, Tae-Wan

    2017-01-01

    In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for the automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with the successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate the main features of this original approach.
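
    The successive over-relaxation iteration mentioned above can be sketched generically. This is plain SOR for a diagonally dominant system (a cubic-spline-style tridiagonal matrix is used in the example), not the paper's weighted biharmonic matrices.

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (A with nonzero diagonal).

    omega = 1 gives Gauss-Seidel; 1 < omega < 2 over-relaxes each update,
    which can speed convergence for suitable (e.g. SPD) matrices.
    """
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]       # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```

    For a symmetric positive definite matrix, SOR converges for any 0 < omega < 2; the optimal omega depends on the spectral radius of the Jacobi iteration matrix.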

  5. Quasi interpolation with Voronoi splines.

    PubMed

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE
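
    The idea of realizing quasi interpolation as a small FIR prefilter can be illustrated in 1-D with cubic B-splines, a stand-in for the paper's Voronoi splines on general lattices. The 3-tap filter below is the classic approximate inverse of the sampled cubic B-spline; it reproduces quadratic polynomials exactly at interior samples.

```python
import numpy as np

# Cubic B-spline at the integers: B(-1) = B(1) = 1/6, B(0) = 4/6.
# Exact interpolation would invert the filter [1, 4, 1] / 6 (an IIR inverse);
# the FIR quasi-interpolation prefilter [-1, 8, -1] / 6 approximates it.
PREFILTER = np.array([-1.0, 8.0, -1.0]) / 6.0

def quasi_coeffs(samples):
    """FIR prefiltering of samples into cubic B-spline coefficients."""
    return np.convolve(samples, PREFILTER, mode='same')

def spline_at_integers(c):
    """Evaluate sum_k c[k] B(x - k) at the integer sample positions 1..n-2."""
    return (c[:-2] + 4.0 * c[1:-1] + c[2:]) / 6.0
```

    Convolving the two filters gives the kernel [-1, 4, 30, 4, -1] / 36, whose zeroth moment is 1 and whose first and second moments vanish, hence the exact reproduction of quadratics (away from the boundary, where the `same`-mode convolution zero-pads).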

  6. An accurate, compact and computationally efficient representation of orbitals for quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Luo, Ye; Esler, Kenneth; Kent, Paul; Shulenburger, Luke

    Quantum Monte Carlo (QMC) calculations of giant molecules, surface and defect properties of solids have been feasible recently due to drastically expanding computational resources. However, with the most computationally efficient basis set, B-splines, these calculations are severely restricted by the memory capacity of compute nodes. The B-spline coefficients are shared on a node but not distributed among nodes, to ensure fast evaluation. A hybrid representation which incorporates atomic orbitals near the ions and B-spline ones in the interstitial regions offers a more accurate and less memory demanding description of the orbitals because they are naturally more atomic like near ions and much smoother in between, thus allowing coarser B-spline grids. We will demonstrate the advantage of hybrid representation over pure B-spline and Gaussian basis sets and also show significant speed-up like computing the non-local pseudopotentials with our new scheme. Moreover, we discuss a new algorithm for atomic orbital initialization which used to require an extra workflow step taking a few days. With this work, the highly efficient hybrid representation paves the way to simulate large size even in-homogeneous systems using QMC. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Computational Materials Sciences Program.

  7. Multivariate meta-analysis for non-linear and other multi-parameter associations

    PubMed Central

    Gasparrini, A; Armstrong, B; Kenward, M G

    2012-01-01

    In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043
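
    The second-stage pooling at the core of this approach can be sketched as a fixed-effect multivariate estimator: each study contributes a vector of (e.g. spline) coefficients and its covariance matrix, combined with inverse-covariance weights. mvmeta additionally estimates between-study heterogeneity (random effects), which this sketch omits.

```python
import numpy as np

def mvmeta_fixed(betas, covs):
    """Fixed-effect multivariate meta-analysis.

    betas: list of p-vectors of coefficients, one per study;
    covs:  list of p x p within-study covariance matrices.
    Pooled beta = (sum W_i)^-1 sum W_i b_i, with W_i = S_i^-1.
    """
    W = [np.linalg.inv(S) for S in covs]
    V = np.linalg.inv(sum(W))                 # covariance of the pooled estimate
    beta = V @ sum(w @ b for w, b in zip(W, betas))
    return beta, V
```

    With equal weights this reduces to the coefficient-wise mean, and the pooled covariance shrinks accordingly; correlated coefficients (as for spline bases) are handled through the off-diagonal entries of each S_i.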

  8. Not to put too fine a point on it - does increasing precision of geographic referencing improve species distribution models for a wide-ranging migratory bat?

    USGS Publications Warehouse

    Hayes, Mark A.; Ozenberger, Katharine; Cryan, Paul M.; Wunder, Michael B.

    2015-01-01

    Bat specimens held in natural history museum collections can provide insights into the distribution of species. However, there are several important sources of spatial error associated with natural history specimens that may influence the analysis and mapping of bat species distributions. We analyzed the importance of geographic referencing and error correction in species distribution modeling (SDM) using occurrence records of hoary bats (Lasiurus cinereus). This species is known to migrate long distances and is a species of increasing concern due to fatalities documented at wind energy facilities in North America. We used 3,215 museum occurrence records collected from 1950–2000 for hoary bats in North America. We compared SDM performance using five approaches: generalized linear models, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy models. We evaluated results using three SDM performance metrics (AUC, sensitivity, and specificity) and two data sets: one comprised of the original occurrence data, and a second data set consisting of these same records after the locations were adjusted to correct for identifiable spatial errors. The increase in precision improved the mean estimated spatial error associated with hoary bat records from 5.11 km to 1.58 km, and this reduction in error resulted in a slight increase in all three SDM performance metrics. These results provide insights into the importance of geographic referencing and the value of correcting spatial errors in modeling the distribution of a wide-ranging bat species. We conclude that the considerable time and effort invested in carefully increasing the precision of the occurrence locations in this data set was not worth the marginal gains in improved SDM performance, and it seems likely that gains would be similar for other bat species that range across large areas of the continent, migrate, and are habitat generalists.

  9. Coupling GIS spatial analysis and Ensemble Niche Modelling to investigate climate change-related threats to the Sicilian pond turtle Emys trinacris, an endangered species from the Mediterranean.

    PubMed

    Iannella, Mattia; Cerasoli, Francesco; D'Alessandro, Paola; Console, Giulia; Biondi, Maurizio

    2018-01-01

    The pond turtle Emys trinacris is an endangered endemic species of Sicily showing a fragmented distribution throughout the main island. In this study, we applied "Ensemble Niche Modelling", combining more classical statistical techniques as Generalized Linear Models and Multivariate Adaptive Regression Splines with machine-learning approaches as Boosted Regression Trees and Maxent, to model the potential distribution of the species under current and future climatic conditions. Moreover, a "gap analysis" performed on both the species' presence sites and the predictions from the Ensemble Models is proposed to integrate outputs from these models, in order to assess the conservation status of this threatened species in the context of biodiversity management. For this aim, four "Representative Concentration Pathways", corresponding to different greenhouse gases emissions trajectories were considered to project the obtained models to both 2050 and 2070. Areas lost, gained or remaining stable for the target species in the projected models were calculated. E. trinacris ' potential distribution resulted to be significantly dependent upon precipitation-linked variables, mainly precipitation of wettest and coldest quarter. Future negative effects for the conservation of this species, because of more unstable precipitation patterns and extreme meteorological events, emerged from our analyses. Further, the sites currently inhabited by E. trinacris are, for more than a half, out of the Protected Areas network, highlighting an inadequate management of the species by the authorities responsible for its protection. Our results, therefore, suggest that in the next future the Sicilian pond turtle will need the utmost attention by the scientific community to avoid the imminent risk of extinction. 
Finally, the gap analysis performed in GIS environment resulted to be a very informative post-modeling technique, potentially applicable to the management of species at risk and to Protected Areas' planning in many contexts.

  10. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed is 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
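
    The screening idea behind Morris-style one-at-a-time methods is the elementary effect: perturb one input at a time and average the absolute response changes. The sketch below uses a radial OAT design (each input perturbed from a random base point) rather than PSUADE's trajectory design, and the toy model is invented.

```python
import numpy as np

def morris_mu_star(f, dim, r=20, delta=0.1, seed=0):
    """Mean absolute elementary effects (mu*) from r one-at-a-time designs."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, dim))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, dim)    # random base point in [0, 1]^dim
        fx = f(x)
        for j in range(dim):                  # perturb one input at a time
            x2 = x.copy()
            x2[j] += delta
            ee[t, j] = abs(f(x2) - fx) / delta
    return ee.mean(axis=0)

# Toy model: x0 dominates, x1 is weak and nonlinear, x2 is inert.
f = lambda x: 10.0 * x[0] + 2.0 * x[1] ** 2
mu = morris_mu_star(f, 3)
```

    A large mu* flags an important (possibly nonlinear or interacting) input; here mu* correctly ranks x0 above x1 and assigns exactly zero to the inert x2, which is the screening outcome MOAT buys with very few model runs.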

  11. Random regression models using different functions to model milk flow in dairy cows.

    PubMed

    Laureano, M M M; Bignardi, A B; El Faro, L; Cardoso, V L; Tonhati, H; Albuquerque, L G

    2014-09-12

    We analyzed 75,555 test-day milk flow records from 2175 primiparous Holstein cows that calved between 1997 and 2005. Milk flow was obtained by dividing the mean milk yield (kg) of the 3 daily milkings by the total milking time (min) and was expressed as kg/min. Milk flow was grouped into 43 weekly classes. The analyses were performed using a single-trait random regression model that included direct additive genetic, permanent environmental, and residual random effects. In addition, the contemporary group and linear and quadratic effects of cow age at calving were included as fixed effects. A fourth-order orthogonal Legendre polynomial of days in milk was used to model the mean trend in milk flow. The additive genetic and permanent environmental covariance functions were estimated using random regression on Legendre polynomials and B-spline functions of days in milk. The model using a third-order Legendre polynomial for additive genetic effects and a sixth-order polynomial for permanent environmental effects, which contained 7 residual classes, proved to be the most adequate to describe variations in milk flow, and was also the most parsimonious. The heritability of milk flow estimated by the most parsimonious model was of moderate to high magnitude.
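
    The fixed mean trend — a fourth-order Legendre polynomial of days in milk — amounts to building a Legendre design matrix on days rescaled to [-1, 1] and fitting by least squares. The lactation-shaped curve and the 5-305-day range below are assumptions for illustration, not the study's data.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(dim_days, order=4, lo=5, hi=305):
    """Legendre design matrix for days in milk rescaled to [-1, 1]."""
    t = 2.0 * (np.asarray(dim_days, float) - lo) / (hi - lo) - 1.0
    return legendre.legvander(t, order)       # columns P0(t) .. P_order(t)

# Synthetic lactation-shaped mean curve, fitted with the basis.
days = np.arange(5, 306)
y = 2.0 + 0.004 * days - 1.5 * np.exp(-0.05 * days)   # rises, then plateaus
B = legendre_basis(days)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef
```

    The same `legvander` construction, at lower orders, supplies the covariate matrices for the random regression (genetic and permanent environmental) terms; the B-spline alternative of the paper just swaps the basis.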

  12. Visual abilities distinguish pitchers from hitters in professional baseball.

    PubMed

    Klemish, David; Ramger, Benjamin; Vittetoe, Kelly; Reiter, Jerome P; Tokdar, Surya T; Appelbaum, Lawrence Gregory

    2018-01-01

    This study aimed to evaluate the possibility that differences in sensorimotor abilities exist between hitters and pitchers in a large cohort of baseball players of varying levels of experience. Secondary data analysis was performed on 9 sensorimotor tasks comprising the Nike Sensory Station assessment battery. Bayesian hierarchical regression modelling was applied to test for differences between pitchers and hitters in data from 566 baseball players (112 high school, 85 college, 369 professional) collected at 20 testing centres. Explanatory variables including height, handedness, eye dominance, concussion history, and player position were modelled along with age curves using basis regression splines. Regression analyses revealed better performance for hitters relative to pitchers at the professional level in the visual clarity and depth perception tasks, but these differences did not exist at the high school or college levels. No significant differences were observed in the other 7 measures of sensorimotor capabilities included in the test battery, and no systematic biases were found between the testing centres. These findings, indicating that professional-level hitters have better visual acuity and depth perception than professional-level pitchers, affirm the notion that highly experienced athletes have differing perceptual skills. Findings are discussed in relation to deliberate practice theory.

  13. Seroprevalence of HCV and HIV infection among clients of the nation's longest-standing statewide syringe exchange program: A cross-sectional study of Community Health Outreach Work to Prevent AIDS (CHOW).

    PubMed

    Salek, Thomas P; Katz, Alan R; Lenze, Stacy M; Lusk, Heather M; Li, Dongmei; Des Jarlais, Don C

    2017-10-01

    The Community Health Outreach Work to Prevent AIDS (CHOW) Project is the first and longest-standing statewide integrated and funded needle and syringe exchange program (SEP) in the US. Initiated on O'ahu in 1990, CHOW expanded statewide in 1993. The purpose of this study is to estimate the prevalences of hepatitis C virus (HCV) and human immunodeficiency virus (HIV) infection, and to characterize risk behaviors associated with infection among clients of a long-standing SEP through the analysis of the 2012 CHOW evaluation data. A cross-sectional sample of 130 CHOW Project clients was selected from January 1, 2012 through December 31, 2012. Questionnaires captured self-reported exposure information. HIV and HCV antibodies were detected via rapid, point-of-care FDA-approved tests. Log-binomial regressions were used to estimate prevalence proportion ratios (PPRs). A piecewise linear log-binomial regression model containing 1 spline knot was used to fit the age-HCV relationship. The estimated seroprevalence of HCV was 67.7% (95% confidence interval [CI]=59.5-75.8%). HIV seroprevalence was 2.3% (95% CI=0-4.9%). Anti-HCV prevalence demonstrated age-specific patterns, ranging from 31.6% among people who inject drugs (PWID) aged <30 years to 90.9% among those aged ≥60 years. Age (continuous, per year) before the spline knot at 51.5 years (adjusted PPR [APPR]=1.03; 95% CI=1.02-1.05) and months exchanging syringes (quartiles) (APPR=1.92; 95% CI=1.3-3.29) were independently associated with anti-HCV prevalence. In Hawai'i, HCV prevalence among PWID is hyperendemic, demonstrating age- and SEP-duration-specific trends. The relatively low HIV prevalence compared with HCV prevalence reflects differences in the transmissibility of these 2 blood-borne pathogens and suggests much greater efficacy of SEP for HIV prevention. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Comparison of co-expression measures: mutual information, correlation, and model based indices.

    PubMed

    Song, Lin; Langfelder, Peter; Horvath, Steve

    2012-12-09

    Co-expression measures are often used to define networks among genes. Mutual information (MI) is often used as a generalized correlation measure. It is not clear how much MI adds beyond standard (robust) correlation measures or regression model based association measures. Further, it is important to assess what transformations of these and other co-expression measures lead to biologically meaningful modules (clusters of genes). We provide a comprehensive comparison between mutual information and several correlation measures in 8 empirical data sets and in simulations. We also study different approaches for transforming an adjacency matrix, e.g. using the topological overlap measure. Overall, we confirm close relationships between MI and correlation in all data sets which reflects the fact that most gene pairs satisfy linear or monotonic relationships. We discuss rare situations when the two measures disagree. We also compare correlation and MI based approaches when it comes to defining co-expression network modules. We show that a robust measure of correlation (the biweight midcorrelation transformed via the topological overlap transformation) leads to modules that are superior to MI based modules and maximal information coefficient (MIC) based modules in terms of gene ontology enrichment. We present a function that relates correlation to mutual information which can be used to approximate the mutual information from the corresponding correlation coefficient. We propose the use of polynomial or spline regression models as an alternative to MI for capturing non-linear relationships between quantitative variables. The biweight midcorrelation outperforms MI in terms of elucidating gene pairwise relationships. Coupled with the topological overlap matrix transformation, it often leads to more significantly enriched co-expression modules. Spline and polynomial networks form attractive alternatives to MI in case of non-linear relationships. 
Our results indicate that MI networks can safely be replaced by correlation networks when it comes to measuring co-expression relationships in stationary data.
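
    One well-known closed-form link between correlation and mutual information, for the special case of a bivariate Gaussian, is I = -(1/2) ln(1 - r^2). It is a useful reference point for the kind of correlation-to-MI approximation the abstract mentions, though the paper's own fitted relating function may differ:

```python
import numpy as np

def mi_gaussian(r):
    """Mutual information (in nats) implied by a correlation r under a
    bivariate Gaussian model: I = -0.5 * log(1 - r^2). This is a textbook
    closed form, not necessarily the empirical function the paper fits."""
    r = np.asarray(r, dtype=float)
    return -0.5 * np.log1p(-r**2)
```

    Under this model MI is zero at r = 0 and grows monotonically with |r|, consistent with the close MI-correlation agreement the authors report for monotonic gene-pair relationships.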

  15. Optimal cut-off points for waist circumference in the definition of metabolic syndrome in Brazilian adults: baseline analyses of the Longitudinal Study of Adult Health (ELSA-Brasil).

    PubMed

    Cardinal, Thiane Ristow; Vigo, Alvaro; Duncan, Bruce Bartholow; Matos, Sheila Maria Alvim; da Fonseca, Maria de Jesus Mendes; Barreto, Sandhi Maria; Schmidt, Maria Inês

    2018-01-01

    Waist circumference (WC) has been incorporated in the definition of the metabolic syndrome (MetS), but the exact WC cut-off points across populations are not clear. The Joint Interim Statement (JIS) suggested possible cut-offs for different populations and ethnic groups. However, the adequacy of these cut-offs for Brazilian adults has been scarcely investigated. The objective of the study is to evaluate possible WC thresholds to be used in the definition of MetS using data from the Longitudinal Study of Adult Health (ELSA-Brasil), a multicenter cohort study of civil servants (35-74 years old) of six Brazilian cities. We analyzed baseline data from 14,893 participants (6772 men and 8121 women). MetS was defined according to the JIS criteria, but excluding WC and thus requiring 2 of the 4 remaining elements. We used restricted cubic spline regression to graph the relationship between WC and MetS. We identified optimal cut-off points which maximized joint sensitivity and specificity (Youden's index) from receiver operating characteristic (ROC) curves. We also estimated the C-statistics using logistic regression. We found no apparent threshold for WC in restricted cubic spline plots. The optimal cut-off for men was 92 cm, 2 cm lower than that recommended by JIS for Caucasian/Europid or Sub-Saharan African men but 2 cm higher than that recommended for ethnic Central and South Americans. For women, the optimal cut-off was 86 cm, 6 cm higher than that recommended for Caucasian/Europid and ethnic Central and South American women. Optimal cut-offs did not vary across age groups and the most common race/color categories (except for Asian men, 87 cm). Sex-specific cut-offs for WC recommended by JIS differ from the optimal cut-offs we found for adult men and women of Brazil's most common ethnic groups.
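
    The cut-off selection step can be sketched generically: scan candidate WC values, compute sensitivity and specificity against MetS status, and keep the value maximizing Youden's J. The data below are invented toy values, not ELSA-Brasil measurements:

```python
import numpy as np

def youden_cutoff(wc, has_mets):
    """Return the cut-off maximizing Youden's J = sensitivity + specificity - 1,
    scanning every observed value as a candidate threshold."""
    wc = np.asarray(wc, float)
    has_mets = np.asarray(has_mets, bool)
    best_c, best_j = None, -np.inf
    for c in np.unique(wc):
        pred = wc >= c                       # classify "high WC" at this cut-off
        sens = pred[has_mets].mean()         # true-positive rate
        spec = (~pred[~has_mets]).mean()     # true-negative rate
        if sens + spec - 1.0 > best_j:
            best_c, best_j = c, sens + spec - 1.0
    return best_c, best_j

cutoff, j = youden_cutoff([80, 85, 90, 95, 100, 105], [0, 0, 0, 1, 1, 1])
assert cutoff == 95 and j == 1.0   # toy data separate perfectly at 95
```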

  16. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

    We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which reliably segments the femur, patella, and tibia by iterative adaptation of the model according to image gradients. Thin plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are finely adjusted by automatic iterative adaptation to the image data based on gray-value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83±6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9±7% as the secondary endpoint. Because cartilage is a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a hard criterion.
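
    Thin plate spline interpolation of the kind used to position the cartilage models can be sketched with SciPy's RBFInterpolator. The points and displacements below are invented 2-D stand-ins; the paper works with 3-D bone meshes:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark positions (2-D for brevity) and displacements.
rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 10, size=(8, 2))
displacements = rng.normal(0, 1, size=(8, 2))

# Thin plate spline warp: exact at the landmarks, smooth everywhere else,
# so it can carry a deformation from known reference points to new positions.
tps = RBFInterpolator(landmarks, displacements, kernel='thin_plate_spline')
assert np.allclose(tps(landmarks), displacements, atol=1e-6)
```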

  17. CAGI: Computer Aided Grid Interface. A work in progress

    NASA Technical Reports Server (NTRS)

    Soni, Bharat K.; Yu, Tzu-Yi; Vaughn, David

    1992-01-01

    Progress realized in the development of the Computer Aided Grid Interface (CAGI) software system is presented, integrating CAD/CAM geometric system output and/or Initial Graphics Exchange Specification (IGES) files, the geometry manipulations associated with grid generation, and robust grid generation methodologies. CAGI is being developed in a modular fashion and will offer a fast, efficient, and economical response to geometry/grid preparation, allowing basic geometry to be upgraded step by step, interactively and under permanent visual control, while minimizing the differences between the actual hardware surface descriptions and their numerical analogs. The computer code GENIE is used as a basis. The Non-Uniform Rational B-Splines (NURBS) representation of sculptured surfaces is utilized for surface grid redistribution. The computer-aided analysis system PATRAN is adapted as a CAD/CAM system. The progress realized in NURBS surface grid generation, the development of the IGES transformer, and geometry adaptation using PATRAN is presented along with their applicability to grid generation for rocket propulsion applications.
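
    The NURBS representation mentioned here evaluates a curve as a weighted rational combination of B-spline basis functions, C(t) = Σ N_i(t) w_i P_i / Σ N_i(t) w_i. A small sketch using SciPy; the knot vector, control points, and weights are invented, and with all weights equal to 1 the curve reduces to an ordinary B-spline:

```python
import numpy as np
from scipy.interpolate import BSpline

def nurbs_point(t, knots, ctrl, weights, k=3):
    """Evaluate C(t) = sum_i N_i(t) w_i P_i / sum_i N_i(t) w_i by forming
    one B-spline for the weighted control points and one for the weights."""
    ctrl = np.asarray(ctrl, float)
    weights = np.asarray(weights, float)
    num = BSpline(knots, ctrl * weights[:, None], k)
    den = BSpline(knots, weights, k)
    return num(t) / den(t)

# Cubic segment with a clamped knot vector and 4 control points.
knots = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
ctrl = np.array([[0., 0.], [1., 2.], [2., 2.], [3., 0.]])
w = np.ones(4)                   # unit weights -> plain (non-rational) B-spline
assert np.allclose(nurbs_point(0.5, knots, ctrl, w), [1.5, 1.5])
```

    Raising the interior weights pulls the curve toward the corresponding control points, which is what lets NURBS represent conics and sculptured surfaces exactly.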

  18. Planar torsion spring

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Parsons, Adam H. (Inventor); Mehling, Joshua S. (Inventor); Griffith, Bryan Kristian (Inventor)

    2012-01-01

    A torsion spring comprises an inner mounting segment. An outer mounting segment is located concentrically around the inner mounting segment. A plurality of splines extends from the inner mounting segment to the outer mounting segment. At least a portion of each spline extends generally annularly around the inner mounting segment.

  19. Quiet Clean Short-haul Experimental Engine (QCSEE). Ball spline pitch change mechanism design report

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Detailed design parameters are presented for a variable-pitch change mechanism. The mechanism is a mechanical system containing a ball screw/spline driving two counteracting master bevel gears meshing pinion gears attached to each of 18 fan blades.

  20. Effect of coulomb spline on rotor dynamic response

    NASA Technical Reports Server (NTRS)

    Nataraj, C.; Nelson, H. D.; Arakere, N.

    1985-01-01

    A rigid rotor system coupled by a Coulomb spline is modelled and analyzed by approximate analytical and numerical methods. Expressions are derived for the variables of the resulting limit cycle and are shown to be quite accurate for small departures from isotropy.

  1. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads

    2015-03-01

    Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields must be evaluated at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative showed that Wendland SVF based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in amygdala) and B-spline free-form deformation (p<0.05 in amygdala and cortical gray matter).
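
    For reference, the Wendland kernel of smoothness C^2 (valid in up to three dimensions) has the compactly supported polynomial form φ(r) = (1 - r)^4 (4r + 1) on [0, 1]. The abstract does not state which member of the Wendland family is used, so this particular kernel is an illustrative assumption:

```python
import numpy as np

def wendland_c2(r):
    """Wendland's compactly supported C^2 kernel (for dimensions <= 3):
    phi(r) = (1 - r)^4 (4 r + 1) for 0 <= r < 1, and 0 beyond.
    Compact support keeps the interpolation matrices sparse, which is one
    source of the B-spline-like computational simplicity."""
    r = np.asarray(r, float)
    return np.where(r < 1.0, (1.0 - np.clip(r, 0.0, 1.0))**4 * (4.0 * r + 1.0), 0.0)
```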

  2. Time Varying Compensator Design for Reconfigurable Structures Using Non-Collocated Feedback

    NASA Technical Reports Server (NTRS)

    Scott, Michael A.

    1996-01-01

    Analysis and synthesis tools are developed to improve the dynamic performance of reconfigurable, nonminimum-phase, non-strictly-positive-real, time-varying systems. A novel Spline Varying Optimal (SVO) controller is developed for the kinematically nonlinear system, in which spline functions approximate the system model, observer, and controller gain. Using the SVO controller has several advantages: the spline function approximation is simply connected, so the SVO controller is more continuous than traditional gain-scheduled controllers when implemented on a time-varying plant; it is easier for real-time implementation in storage and computational effort; where system identification is required, the spline function requires fewer experiments, namely four; and initial startup estimator transients are eliminated. The SVO compensator was evaluated on a high-fidelity simulation of the Shuttle Remote Manipulator System. The SVO controller demonstrated significant improvement over the present arm performance: (1) the damping level was improved by a factor of 3; and (2) peak joint torque was reduced by a factor of 2 following Shuttle thruster firings.
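
    The core idea of letting a spline carry a gain schedule can be sketched in a few lines. The configurations and gain values below are invented, not Shuttle arm data; the point is only that the spline reproduces the gains identified at a handful of configurations and varies smoothly between them:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical controller gains identified at four arm configurations
# (the abstract notes only four identification experiments are needed).
configs = np.array([0.0, 30.0, 60.0, 90.0])    # joint angle, degrees
gains = np.array([12.0, 9.5, 7.0, 6.2])        # identified gain at each

gain_schedule = CubicSpline(configs, gains)

# The spline matches the identified gains exactly and interpolates
# smoothly, unlike a switched gain-scheduled controller.
assert np.allclose(gain_schedule(configs), gains)
```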

  3. Immersogeometric cardiovascular fluid–structure interaction analysis with divergence-conforming B-splines

    PubMed Central

    Kamensky, David; Hsu, Ming-Chen; Yu, Yue; Evans, John A.; Sacks, Michael S.; Hughes, Thomas J. R.

    2016-01-01

    This paper uses a divergence-conforming B-spline fluid discretization to address the long-standing issue of poor mass conservation in immersed methods for computational fluid–structure interaction (FSI) that represent the influence of the structure as a forcing term in the fluid subproblem. We focus, in particular, on the immersogeometric method developed in our earlier work, analyze its convergence for linear model problems, then apply it to FSI analysis of heart valves, using divergence-conforming B-splines to discretize the fluid subproblem. Poor mass conservation can manifest as effective leakage of fluid through thin solid barriers. This leakage disrupts the qualitative behavior of FSI systems such as heart valves, which exist specifically to block flow. Divergence-conforming discretizations can enforce mass conservation exactly, avoiding this problem. To demonstrate the practical utility of immersogeometric FSI analysis with divergence-conforming B-splines, we use the methods described in this paper to construct and evaluate a computational model of an in vitro experiment that pumps water through an artificial valve. PMID:28239201

  4. Landmark-based elastic registration using approximating thin-plate splines.

    PubMed

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
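
    The difference between interpolating and approximating thin-plate splines can be seen directly with SciPy's RBFInterpolator, whose smoothing parameter plays the role of the error tolerance described here. The landmarks and displacements below are synthetic, and the isotropic scalar smoothing is a simplification of the paper's anisotropic error model:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(12, 2))     # hypothetical landmark positions
disp = rng.normal(0, 1, size=12)           # one displacement component

exact = RBFInterpolator(pts, disp, kernel='thin_plate_spline')
approx = RBFInterpolator(pts, disp, kernel='thin_plate_spline', smoothing=10.0)

# The interpolating spline hits every landmark; the approximating spline
# trades exactness for smoothness, which tolerates localization error.
err_exact = np.abs(exact(pts) - disp).max()
err_approx = np.abs(approx(pts) - disp).max()
assert err_exact < 1e-6 and err_approx > err_exact
```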

  5. Non-rigid image registration using a statistical spline deformation model.

    PubMed

    Loeckx, Dirk; Maes, Frederik; Vandermeulen, Dirk; Suetens, Paul

    2003-07-01

    We propose a statistical spline deformation model (SSDM) as a method to solve non-rigid image registration. Within this model, the deformation is expressed using a statistically trained B-spline deformation mesh. The model is trained by principal component analysis of a training set. This approach allows the number of degrees of freedom needed for non-rigid registration to be reduced by retaining only the most significant modes of variation observed in the training set. User-defined transformation components, like affine modes, are merged with the principal components into a unified framework. Optimization proceeds along the transformation components rather than along the individual spline coefficients. The concept of SSDMs is applied to the temporal registration of thorax CR images using pattern intensity as the registration measure. Our results show that, using 30 training pairs, a reduction of 33% in the number of degrees of freedom is possible without deterioration of the result. The same accuracy as without SSDMs is still achieved after a reduction of up to 66% of the degrees of freedom.
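
    The statistical training step is ordinary PCA on flattened deformation meshes: keep the leading modes, then parameterize new deformations by a short coefficient vector. A numpy sketch with invented dimensions (30 training samples, as in the paper; the 90 mesh coefficients are made up):

```python
import numpy as np

# Hypothetical training set: 30 flattened B-spline deformation meshes.
rng = np.random.default_rng(2)
n_train, n_coeff = 30, 90
training = rng.normal(size=(n_train, n_coeff))

mean = training.mean(axis=0)
centered = training - mean
# SVD-based PCA; the rows of vt are the modes of variation, and
# s**2 / (n_train - 1) is the variance explained by each mode.
u, s, vt = np.linalg.svd(centered, full_matrices=False)

# Retain only the most significant modes (a ~2/3 reduction of DOFs,
# mirroring the paper's 66% figure).
k = n_coeff // 3
modes = vt[:k]
# A new deformation is now parameterized by just k coefficients b:
b = rng.normal(size=k)
deformation = mean + b @ modes
assert deformation.shape == (n_coeff,)
```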

  6. Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines

    NASA Astrophysics Data System (ADS)

    Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya

    2017-11-01

    Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. 
The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1.093 pixels, which were respectively 42.1%, 48.3%, and 54.8% better than those achieved before the nonlinear bias compensation.
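
    The per-direction correction step can be sketched with a cubic spline fitted to residual errors along one image coordinate. The oscillation period and amplitude below are invented; in the paper the residuals come from the virtual control grid, with no actual GCPs required:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical residual errors of virtual control points along the image
# row direction (attitude oscillations give them a wavy, nonlinear shape).
rows = np.linspace(0, 24000, 25)
residuals = 0.8 * np.sin(2 * np.pi * rows / 8000.0)

row_correction = CubicSpline(rows, residuals)
# The estimated spline is subtracted to compensate the nonlinear bias in
# the row direction; a second spline handles the column direction, and an
# affine transformation mops up the remaining bias.
assert np.allclose(row_correction(rows), residuals)
```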

  7. Adaptive finite element modelling of three-dimensional magnetotelluric fields in general anisotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Xu, Zhenhuan; Li, Yuguo

    2018-04-01

    We present a goal-oriented adaptive finite element (FE) modelling algorithm for 3-D magnetotelluric fields in generally anisotropic conductivity media. The model consists of a background layered structure containing anisotropic blocks. Each block and layer can be made anisotropic by assigning it a 3 × 3 conductivity tensor. The second-order partial differential equations are solved using the adaptive finite element method (FEM). The computational domain is subdivided into unstructured tetrahedral elements, which allow for complex geometries including bathymetry and dipping interfaces. The grid refinement process is guided by a global a posteriori error estimator and is performed iteratively. The system of linear FE equations for the electric field E is solved with the direct solver MUMPS. The magnetic field H can then be found, with the required derivatives computed numerically using cubic spline interpolation. The 3-D FE algorithm has been validated by comparisons with both a 3-D finite-difference solution and 2-D FE results. Two model types are used to demonstrate the effects of anisotropy upon 3-D magnetotelluric responses: horizontal and dipping anisotropy. Finally, a 3-D sea-hill model is simulated to study the effect of oblique interfaces and dipping anisotropy.
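
    Computing derivatives of a gridded field via cubic spline interpolation, as done here to obtain H from E, looks like the following 1-D stand-in for the actual 3-D fields (the grid and field component are invented for illustration):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# The magnetic field needs spatial derivatives of E; with E known only at
# grid points, a cubic spline supplies smooth numerical derivatives.
z = np.linspace(0.0, 1.0, 201)
E = np.sin(2 * np.pi * z)                  # stand-in field component

dEdz = CubicSpline(z, E).derivative()
exact = 2 * np.pi * np.cos(2 * np.pi * z)
# On this grid the spline derivative tracks the analytic one closely.
assert np.max(np.abs(dEdz(z) - exact)) < 1e-2
```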

  8. Spline-Screw Payload-Fastening System

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Payload handed off securely between robot and vehicle or structure. Spline-screw payload-fastening system includes mating female and male connector mechanisms. Clockwise (or counter-clockwise) rotation of splined male driver on robotic end effector causes connection between robot and payload to tighten (or loosen) and simultaneously causes connection between payload and structure to loosen (or tighten). Includes mechanisms like those described in "Tool-Changing Mechanism for Robot" (GSC-13435) and "Self-Aligning Mechanical and Electrical Coupling" (GSC-13430). Designed for use in outer space, also useful on Earth in applications needed for secure handling and secure mounting of equipment modules during storage, transport, and/or operation. Particularly useful in machine or robotic applications.

  9. Numerical computations on one-dimensional inverse scattering problems

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Hariharan, S. I.

    1983-01-01

    An approximate method to determine the index of refraction of a dielectric obstacle is presented. For simplicity, one-dimensional models of electromagnetic scattering are treated. The governing equations yield a second-order boundary value problem, in which the index of refraction appears as a functional parameter. The availability of reflection coefficients yields two additional boundary conditions. The index of refraction is approximated by a k-th order spline, which can be written as a linear combination of B-splines. For N distinct reflection coefficients, the resulting N boundary value problems yield a system of N nonlinear equations in N unknowns, which are the coefficients of the B-splines.
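
    Writing the unknown profile as a linear combination of B-splines is exactly what SciPy's spline objects store: a knot vector t and coefficients c with n(x) = Σ_i c_i B_{i,k}(x). A sketch with an invented refraction profile (the Gaussian bump is purely illustrative):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical smooth index-of-refraction profile on [0, 1].
x = np.linspace(0.0, 1.0, 11)
n_true = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.2)**2)

# A k-th order spline expressed as a linear combination of B-splines:
# the object stores knots t and coefficients c with n(x) = sum_i c_i B_{i,k}(x).
spl = make_interp_spline(x, n_true, k=3)
assert np.allclose(spl(x), n_true)
assert spl.c.shape[0] == len(spl.t) - spl.k - 1   # B-spline coefficient count
```

    In the inverse problem, the coefficients c become the N unknowns solved for from the N reflection-coefficient conditions.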

  10. How to fly an aircraft with control theory and splines

    NASA Technical Reports Server (NTRS)

    Karlsson, Anders

    1994-01-01

    When trying to fly an aircraft as smoothly as possible, it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory, in a system that tries to model an aircraft. Computer calculations in Matlab show that it is impossible to obtain sufficiently smooth control signals in this way. This is due to the fact that the splines try to approximate not only the test function, but also its derivatives. Perfect tracking is achieved, but the price is very peaky control signals and accelerations.

  11. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which will influence the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, accurate, and robust registration method and have obtained satisfactory registration results.

  12. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
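
    The essence of the method, estimating the background by a cubic spline through peak-free interpolation nodes and subtracting it, can be sketched on a synthetic spectrum. The node positions here are picked by hand, whereas the paper selects them with a smooth fitting step and discriminant formulations:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic EDXRF-like spectrum: smooth background plus two Gaussian peaks.
E = np.linspace(0.0, 20.0, 400)
background = 50.0 * np.exp(-E / 10.0)
peaks = 200 * np.exp(-((E - 6.4) / 0.1)**2) + 120 * np.exp(-((E - 7.1) / 0.1)**2)
spectrum = background + peaks

# Interpolation nodes chosen in peak-free regions.
nodes = np.array([0.5, 2.0, 4.0, 5.5, 8.5, 12.0, 16.0, 19.5])
node_vals = np.interp(nodes, E, spectrum)

# Cubic spline through the nodes estimates the background; subtract it.
est_background = CubicSpline(nodes, node_vals)(E)
net = spectrum - est_background

# Away from the peaks the net signal should be close to zero.
quiet = (E < 5.0) | (E > 9.0)
assert np.max(np.abs(net[quiet])) < 5.0
```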

  13. Interpolating Spherical Harmonics for Computing Antenna Patterns

    DTIC Science & Technology

    2011-07-01

    If g_{N_F} denotes the spline computed from the uniform partition of N_F + 1 frequency points, the splines converge as O[N_F^{-4}]: ‖g_{N_F} − g‖_∞ ≤ C_0‖g^{(4)}...splines. There is the possibility of estimating the error ‖g − g_{N_F}‖_∞ even though the function g is unknown. Table 1 compares these unknown errors ‖g − g_{N_F}‖_∞ ...to the computable estimates ‖g_{N_F} − g_{2N_F}‖_∞. The latter is a strong predictor of the unknown error. The triple bar ‖·‖_∞ is the sup-norm error over all the

  14. Radial Basis Function Based Quadrature over Smooth Surfaces

    DTIC Science & Technology

    2016-03-24

    Radial basis functions φ(r), piecewise smooth (conditionally positive definite): MN monomial |r|^(2m+1); TPS thin plate spline |r|^(2m) ln|r|; infinitely smooth...smooth surfaces using polynomial interpolants, while [27] couples thin-plate spline interpolation (see Table 1) with Green's integral formula [29

  15. Direct numerical simulation of incompressible axisymmetric flows

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick

    1994-01-01

    In the present work, we propose to conduct direct numerical simulations (DNS) of incompressible turbulent axisymmetric jets and wakes. The objectives of the study are to understand the fundamental behavior of axisymmetric jets and wakes, which are perhaps the most technologically relevant free shear flows (e.g. combustor injectors, propulsion jets). Among the data to be generated are various statistical quantities of importance in turbulence modeling, like the mean velocity, turbulent stresses, and all the terms in the Reynolds-stress balance equations. In addition, we will be interested in the evolution of large-scale structures that are common in free shear flow. The axisymmetric jet or wake is also a good problem in which to try the newly developed B-spline numerical method. Using B-splines as interpolating functions in the non-periodic direction offers many advantages. B-splines have local support, which leads to sparse matrices that can be efficiently stored and solved. Also, they offer spectral-like accuracy and are C^(O-1) continuous, where O is the order of the spline used; this means that derivatives of the velocity, such as the vorticity, are smoothly and accurately represented. For purposes of validation against existing results, the present code will also be able to simulate internal flows (ones that require a no-slip boundary condition). Implementation of the no-slip boundary condition is trivial in the context of B-splines.

  16. Directly manipulated free-form deformation image registration.

    PubMed

    Tustison, Nicholas J; Avants, Brian B; Gee, James C

    2009-03-01

    Previous contributions to both the research and open source software communities detailed a generalization of a fast scalar field fitting technique for cubic B-splines based on the work originally proposed by Lee. One advantage of our proposed generalized B-spline fitting approach is its immediate application to a class of nonrigid registration techniques frequently employed in medical image analysis. Specifically, these registration techniques fall under the rubric of free-form deformation (FFD) approaches, in which the object to be registered is embedded within a B-spline object. The deformation of the B-spline object describes the transformation of the image registration solution. Representative of this class of techniques, and often cited within the relevant community, is the formulation of Rueckert, who employed cubic splines with normalized mutual information to study breast deformation. Similar techniques from various groups provided incremental novelty in the form of disparate explicit regularization terms, as well as the employment of various image metrics and tailored optimization methods. For several algorithms, the underlying gradient-based optimization retained the essential characteristics of Rueckert's original contribution. The contribution we provide in this paper is two-fold: 1) the observation that the generic FFD framework is intrinsically susceptible to problematic energy topographies, and 2) that the standard gradient used in FFD image registration can be modified to a well-understood preconditioned form which substantially improves performance. This is demonstrated with theoretical discussion and comparative evaluation experiments.

  17. Square tubing reduces cost of telescoping bridge crane hoist

    NASA Technical Reports Server (NTRS)

    Bernstein, G.; Graae, J.; Schraidt, J.

    1967-01-01

    Using standard square tubing in a telescoping arrangement reduces the cost of a bridge crane hoist. Because surface tolerances of square tubing need not be as accurate as the tubing used previously and because no spline is necessary, the square tubing is significantly less expensive than splined telescoping tubes.

  18. A New Multifunctional Sensor for Measuring Concentrations of Ternary Solution

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Shida, Katsunori

    This paper presents a multifunctional sensor with a novel structure, which is capable of directly sensing temperature and two physical parameters of solutions, namely ultrasonic velocity and conductivity. By combined measurement of these three parameters, the concentrations of the various components in a ternary solution can be simultaneously determined. The structure and operating principle of the sensor are described, and a regression algorithm based on natural cubic spline interpolation and the least-squares method is adopted to estimate the concentrations. The performance of the proposed sensor is experimentally tested using a ternary aqueous solution of sodium chloride and sucrose, which is widely used in the food and beverage industries. This sensor could prove valuable as a process-control sensor in industrial settings.
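
    The final concentration estimate is a regression from the three measured quantities to the two concentrations. A deliberately simplified linear least-squares version follows; all numbers are invented, and the paper additionally uses natural cubic spline interpolation of the calibration data, which is omitted here:

```python
import numpy as np

# Hypothetical calibration: rows are (temperature, ultrasonic velocity,
# conductivity) readings recorded at known NaCl/sucrose concentrations.
readings = np.array([[20.0, 1482.0, 0.10],
                     [20.0, 1495.0, 1.20],
                     [20.0, 1510.0, 0.15],
                     [20.0, 1523.0, 1.30],
                     [25.0, 1497.0, 0.12],
                     [25.0, 1509.0, 1.25]])
concentrations = np.array([[0.0, 0.0],    # (NaCl %, sucrose %)
                           [1.0, 0.0],
                           [0.0, 2.0],
                           [1.0, 2.0],
                           [0.0, 0.0],
                           [1.0, 0.0]])

# Least-squares linear map from readings (plus an intercept) to the two
# concentrations; both outputs are fitted at once.
X = np.column_stack([np.ones(len(readings)), readings])
coef, *_ = np.linalg.lstsq(X, concentrations, rcond=None)
pred = X @ coef
assert pred.shape == concentrations.shape
```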

  19. Functional linear models to test for differences in prairie wetland hydraulic gradients

    USGS Publications Warehouse

    Greenwood, Mark C.; Sojda, Richard S.; Preston, Todd M.; Swayne, David A.; Yang, Wanhong; Voinov, A.A.; Rizzoli, A.; Filatova, T.

    2010-01-01

    Functional data analysis provides a framework for analyzing multiple time series measured frequently in time, treating each series as a continuous function of time. Functional linear models are used to test for effects on hydraulic gradient functional responses collected from three types of land use in Northeastern Montana at fourteen locations. Penalized regression-splines are used to estimate the underlying continuous functions based on the discretely recorded (over time) gradient measurements. Permutation methods are used to assess the statistical significance of effects. A method for accommodating missing observations in each time series is described. Hydraulic gradients may be an initial and fundamental ecosystem process that responds to climate change. We suggest other potential uses of these methods for detecting evidence of climate change.

  20. Adaptation of a Weighted Regression Approach to Evaluate Water Quality Trends in an Estuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, we adapted a weighted regression approach to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach, originally developed to resolve pollutant transport trends in rivers...

  1. Adaptation of a weighted regression approach to evaluate water quality trends in anestuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, a weighted regression approach developed to describe trends in pollutant transport in rivers was adapted to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach allows...

  2. Regression discontinuity was a valid design for dichotomous outcomes in three randomized trials.

    PubMed

    van Leeuwen, Nikki; Lingsma, Hester F; Mooijaart, Simon P; Nieboer, Daan; Trompet, Stella; Steyerberg, Ewout W

    2018-06-01

    Regression discontinuity (RD) is a quasi-experimental design that may provide valid estimates of treatment effects for continuous outcomes. We aimed to evaluate validity and precision of the RD design for dichotomous outcomes. We performed validation studies in three large randomized controlled trials (RCTs) (Corticosteroid Randomization After Significant Head injury [CRASH], the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries [GUSTO], and PROspective Study of Pravastatin in elderly individuals at risk of vascular disease [PROSPER]). To mimic the RD design, we selected patients above and below a cutoff (e.g., age 75 years) randomized to treatment and control, respectively. Adjusted logistic regression models using restricted cubic splines (RCS) and polynomials and local logistic regression models estimated the odds ratio (OR) for treatment, with 95% confidence intervals (CIs) to indicate precision. In CRASH, treatment increased mortality with OR 1.22 [95% CI 1.06-1.40] in the RCT. The RD estimates were 1.42 (0.94-2.16) and 1.13 (0.90-1.40) with RCS adjustment and local regression, respectively. In GUSTO, treatment reduced mortality (OR 0.83 [0.72-0.95]), with more extreme estimates in the RD analysis (OR 0.57 [0.35; 0.92] and 0.67 [0.51; 0.86]). In PROSPER, similar RCT and RD estimates were found, again with less precision in the RD designs. We conclude that the RD design provides similar but substantially less precise treatment effect estimates compared with an RCT, with local regression being the preferred method of analysis. Copyright © 2018 Elsevier Inc. All rights reserved.
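
    The restricted cubic spline (RCS) adjustment relies on a basis that is cubic between the knots but constrained to be linear beyond the outermost knots, which keeps behavior near the cutoff stable. A numpy sketch of the standard unnormalized construction, with invented knot values:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline (natural spline) basis in unnormalized form:
    for knots t_1..t_k it returns x plus k-2 columns, each cubic between
    the knots and linear beyond the outermost ones."""
    x = np.asarray(x, float)
    k = np.asarray(knots, float)
    t_last, t_pen = k[-1], k[-2]
    def tp(u):                             # truncated third power (u)_+^3
        return np.clip(u, 0, None)**3
    cols = [x]
    for tj in k[:-2]:
        cols.append(tp(x - tj)
                    - tp(x - t_pen) * (t_last - tj) / (t_last - t_pen)
                    + tp(x - t_last) * (t_pen - tj) / (t_last - t_pen))
    return np.column_stack(cols)

knots = np.array([60.0, 70.0, 75.0, 85.0])    # hypothetical age knots
x = np.linspace(90.0, 120.0, 50)              # region beyond the last knot
B = rcs_basis(x, knots)
# Beyond the last knot every column is linear: second differences vanish.
assert np.allclose(np.diff(B, n=2, axis=0), 0.0, atol=1e-6)
```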

  3. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.

    PubMed

    Bedggood, Phillip; Metha, Andrew

    2010-01-01

    Recently, many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces range and robustness comparable to those of the more complicated algorithms, while keeping processing time minimal enough to afford real-time analysis.
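    The "simplistic traditional approach" amounts to a nearest-spot search around each lenslet's reference position. The sketch below is a toy version under invented assumptions (a 3x3 grid, a fixed pitch, a pure-tilt displacement), not the authors' implementation:

```python
import numpy as np

def sort_spots(ref, spots, pitch):
    # traditional sorting: take the nearest detected spot within half a
    # lenslet pitch of each reference position; unmatched lenslets get NaN
    out = np.full(ref.shape, np.nan)
    d = np.linalg.norm(ref[:, None, :] - spots[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    ok = d[np.arange(len(ref)), nearest] < pitch / 2.0
    out[ok] = spots[nearest[ok]]
    return out

# 3x3 lenslet array whose spots are all shifted by a small uniform tilt
gx, gy = np.meshgrid(np.arange(3) * 10.0, np.arange(3) * 10.0)
ref = np.column_stack([gx.ravel(), gy.ravel()])
spots = ref + np.array([1.5, -0.8])
matched = sort_spots(ref, spots, pitch=10.0)
slopes = matched - ref            # per-lenslet wavefront slope estimates
```

    The iterative variants discussed in the paper replace the fixed reference grid with positions extrapolated from already-sorted neighbouring spots, which is what extends the dynamic range.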

  4. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry

    NASA Astrophysics Data System (ADS)

    Bedggood, Phillip; Metha, Andrew

    2010-11-01

    Recently, many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces range and robustness comparable to those of the more complicated algorithms, while keeping processing time minimal enough to afford real-time analysis.

  5. The construction and assessment of a statistical model for the prediction of protein assay data.

    PubMed

    Pittman, J; Sacks, J; Young, S Stanley

    2002-01-01

    The focus of this work is the development of a statistical model for a bioinformatics database whose distinctive structure makes model assessment an interesting and challenging problem. The key components of the statistical methodology, including a fast approximation to the singular value decomposition and the use of adaptive spline modeling and tree-based methods, are described, and preliminary results are presented. These results are shown to compare favorably to selected results achieved using comparative methods. An attempt to determine the predictive ability of the model through the use of cross-validation experiments is discussed. In conclusion, a synopsis of the results of these experiments and their implications for the analysis of bioinformatic databases in general is presented.
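    A low-rank approximation in the spirit of the SVD step can be sketched with a plain truncated SVD. The matrix sizes and noise level below are arbitrary stand-ins for the assay data, not the paper's setup:

```python
import numpy as np

def truncated_svd_approx(X, r):
    # keep only the top-r singular triplets: the low-rank surrogate for X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r], s

rng = np.random.default_rng(1)
# synthetic "compounds x descriptors" table with rank-5 structure plus noise
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 40))
X = X + 0.01 * rng.standard_normal(X.shape)
X5, s = truncated_svd_approx(X, 5)
rel_err = np.linalg.norm(X - X5) / np.linalg.norm(X)
```

    When the data have approximate low-rank structure, the truncated reconstruction captures nearly all of the signal at a fraction of the storage and downstream modeling cost.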

  6. The Design and Characterization of Wideband Spline-profiled Feedhorns for Advanced Actpol

    NASA Technical Reports Server (NTRS)

    Simon, Sara M.; Austermann, Jason; Beall, James A.; Choi, Steve K.; Coughlin, Kevin P.; Duff, Shannon M.; Gallardo, Patricio A.; Henderson, Shawn W.; Hills, Felicity B.; Ho, Shuay-Pwu Patty; hide

    2016-01-01

    Advanced ACTPol (AdvACT) is an upgraded camera for the Atacama Cosmology Telescope (ACT) that will measure the cosmic microwave background in temperature and polarization over a wide range of angular scales and five frequency bands from 28-230 GHz. AdvACT will employ four arrays of feedhorn-coupled, polarization-sensitive multichroic detectors. To accommodate the higher pixel packing densities necessary to achieve AdvACT's sensitivity goals, we have developed and optimized wideband spline-profiled feedhorns for the AdvACT multichroic arrays that maximize coupling efficiency while carefully controlling polarization systematics. We present the design, fabrication, and testing of wideband spline-profiled feedhorns for the multichroic arrays of AdvACT.

  7. Historical HIV incidence modelling in regional subgroups: use of flexible discrete models with penalized splines based on prior curves.

    PubMed

    Greenland, S

    1996-03-15

    This paper presents an approach to back-projection (back-calculation) of human immunodeficiency virus (HIV) person-year infection rates in regional subgroups based on combining a log-linear model for subgroup differences with a penalized spline model for trends. The penalized spline approach allows flexible trend estimation but requires far fewer parameters than fully non-parametric smoothers, thus saving parameters that can be used in estimating subgroup effects. Use of a reasonable prior curve to construct the penalty function minimizes the degree of smoothing needed beyond model specification. The approach is illustrated in application to acquired immunodeficiency syndrome (AIDS) surveillance data from Los Angeles County.
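    The idea of penalizing departures from a prior curve can be sketched with a simple penalized least-squares smoother. The truncated-power basis, the zero prior, and the lambda values below are illustrative choices, not the paper's back-projection model:

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam, c_prior=None):
    # cubic truncated-power basis; the ridge penalty pulls the knot
    # coefficients toward those of a prior curve (a zero prior shrinks the
    # fit toward a plain cubic trend)
    B = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                        [np.maximum(x - t, 0.0) ** 3 for t in knots])
    P = np.diag([0.0] * 4 + [1.0] * len(knots))   # penalize knot terms only
    c0 = np.zeros(B.shape[1]) if c_prior is None else c_prior
    c = np.linalg.solve(B.T @ B + lam * P, B.T @ y + lam * (P @ c0))
    return B @ c, c

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
y_true = np.sin(2 * np.pi * x)
y = y_true + 0.1 * rng.standard_normal(200)
knots = np.linspace(0.1, 0.9, 9)
fit_small, c_small = penalized_spline_fit(x, y, knots, lam=1e-4)
fit_big, c_big = penalized_spline_fit(x, y, knots, lam=1e8)
```

    A large lambda drives the knot coefficients to the prior (here zero), leaving only the unpenalized polynomial trend; a small lambda recovers a flexible spline fit.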

  8. Analytic regularization of uniform cubic B-spline deformation fields.

    PubMed

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

    Image registration is inherently ill-posed, and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central differencing solution.
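    The numerical baseline the paper accelerates, bending energy by central differencing, can be sketched in 2-D. The paper works with 3-D B-spline fields and an analytic quadratic form c^T Q c on the coefficients; this toy version, with invented test surfaces, is for intuition only:

```python
import numpy as np

def bending_energy(f, h=1.0):
    # thin-plate bending energy of a sampled 2-D deformation component via
    # central differences: integral of fxx^2 + 2*fxy^2 + fyy^2
    fxx = (f[2:, 1:-1] - 2 * f[1:-1, 1:-1] + f[:-2, 1:-1]) / h**2
    fyy = (f[1:-1, 2:] - 2 * f[1:-1, 1:-1] + f[1:-1, :-2]) / h**2
    fxy = (f[2:, 2:] - f[2:, :-2] - f[:-2, 2:] + f[:-2, :-2]) / (4 * h**2)
    return float(np.sum(fxx**2 + 2 * fxy**2 + fyy**2) * h**2)

yy, xx = np.mgrid[0:40, 0:40].astype(float)
flat = 3.0 + 0.5 * xx - 0.2 * yy        # affine field: zero bending energy
bumpy = np.sin(xx / 4.0) * np.cos(yy / 4.0)
```

    The penalty vanishes for affine deformations, which is exactly why it is a popular regularizer: rigid and affine motion is never punished, only genuine bending.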

  9. An investigation of angular stiffness and damping coefficients of an axial spline coupling in high-speed rotating machinery

    NASA Technical Reports Server (NTRS)

    Ku, C.-P. Roger; Walton, James F., Jr.; Lund, Jorgen W.

    1994-01-01

    This paper quantifies the angular stiffness and equivalent viscous damping coefficients of an axial spline coupling used in high-speed turbomachinery. A unique test methodology and data reduction procedures were developed. The bending moments and angular deflections transmitted across an axial spline coupling were measured while a nonrotating shaft was excited by an external shaker. A rotor dynamics computer program was used to simulate the test conditions and to correlate the angular stiffness and damping coefficients. In addition, sensitivity analyses were performed to show that the accuracy of the dynamic coefficients does not rely on the accuracy of the data reduction procedures.

  10. 78 FR 54377 - Airworthiness Directives; Airbus Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... family THSA P/N 47145-XXX (where XXX stands for any numerical value) ballscrews might be affected by this... requires repetitive detailed inspections of the ballscrew lower splines of THSAs having P/N 47145-XXX to... ballscrew shaft and tie-rod splines on any THSA having P/N 47145-XXX (where XXX stands for any numerical...

  11. Computational methods for estimation of parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.; Murphy, K. A.

    1983-01-01

    Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.

  12. Clustered mixed nonhomogeneous Poisson process spline models for the analysis of recurrent event panel data.

    PubMed

    Nielsen, J D; Dean, C B

    2008-09-01

    A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
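    A nonhomogeneous Poisson process with a smooth intensity, the building block of the model above, can be simulated by thinning. The Gaussian-shaped intensity below is an arbitrary stand-in for a fitted spline intensity, and the panel counts mimic the follow-up-window structure of panel data:

```python
import numpy as np

def thin_nhpp(intensity, t_max, lam_max, rng):
    # Lewis/Shedler thinning: draw homogeneous candidates at rate lam_max,
    # keep each candidate at time t with probability intensity(t) / lam_max
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:
            events.append(t)

rng = np.random.default_rng(3)
# smooth seasonal intensity, e.g. a moth-emergence curve peaking mid-season
intensity = lambda t: 5.0 * np.exp(-((t - 10.0) / 3.0) ** 2)
times = thin_nhpp(intensity, t_max=20.0, lam_max=5.0, rng=rng)
# "panel" counts: number of events in each unit-width follow-up window
counts, _ = np.histogram(times, bins=np.arange(0.0, 21.0))
```

    In the paper's model the intensity itself is a penalized spline and subjects are allocated to latent clusters; the simulation step above is what any such fitted model implies generatively.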

  13. Algebraic grid generation using tensor product B-splines. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Saunders, B. V.

    1985-01-01

    Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This work develops an algorithm that produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
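    The transfinite bilinear interpolant used to seed the interior coefficients is the classical Coons construction. The sketch below, a unit square with one curved boundary chosen purely for illustration, shows how it blends four boundary curves while reproducing each of them exactly:

```python
import numpy as np

def coons_patch(c_bottom, c_top, c_left, c_right, u, v):
    # transfinite bilinear (Coons) interpolant of four boundary curves;
    # each c_* maps [0,1] -> R^2 and the corner values must agree
    P00, P10 = c_bottom(0.0), c_bottom(1.0)
    P01, P11 = c_top(0.0), c_top(1.0)
    ruled_u = (1 - v) * c_bottom(u) + v * c_top(u)       # blend bottom/top
    ruled_v = (1 - u) * c_left(v) + u * c_right(v)       # blend left/right
    bilinear = ((1 - u) * (1 - v) * P00 + u * (1 - v) * P10
                + (1 - u) * v * P01 + u * v * P11)       # corner correction
    return ruled_u + ruled_v - bilinear

# unit square with a sinusoidally curved top boundary
bottom = lambda u: np.array([u, 0.0])
top    = lambda u: np.array([u, 1.0 + 0.1 * np.sin(np.pi * u)])
left   = lambda v: np.array([0.0, v])
right  = lambda v: np.array([1.0, v])
grid = np.array([[coons_patch(bottom, top, left, right, u, v)
                  for u in np.linspace(0, 1, 5)]
                 for v in np.linspace(0, 1, 5)])
```

    In the thesis this interpolant is itself approximated in variation-diminishing B-spline form before the optimization stage refines the interior coefficients.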

  14. Slice-to-Volume Nonrigid Registration of Histological Sections to MR Images of the Human Brain

    PubMed Central

    Osechinskiy, Sergey; Kruggel, Frithjof

    2011-01-01

    Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function. PMID:22567290
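    Exact TPS warping of landmarks can be sketched from the standard thin-plate spline linear system. The 2-D point sets and the synthetic warp below are assumptions for illustration; the paper solves a 3-D slice-to-volume problem driven by intensity cost functions rather than landmarks:

```python
import numpy as np

def tps_kernel(d):
    # TPS radial kernel U(r) = r^2 log r, with U(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d > 0.0, d**2 * np.log(d), 0.0)

def tps_fit(src, dst):
    # exact thin-plate spline interpolation: kernel weights w plus an
    # affine part a, from the standard (n+3) x (n+3) bordered system
    n = len(src)
    K = tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=2))
    P = np.column_stack([np.ones(n), src])
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.vstack([dst, np.zeros((3, dst.shape[1]))])
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]

def tps_map(pts, src, w, a):
    U = tps_kernel(np.linalg.norm(pts[:, None] - src[None, :], axis=2))
    return U @ w + a[0] + pts @ a[1:]

rng = np.random.default_rng(4)
src = rng.uniform(0.0, 1.0, (12, 2))
dst = src + 0.05 * np.sin(2 * np.pi * src)   # smooth synthetic warp
w, a = tps_fit(src, dst)
mapped = tps_map(src, src, w, a)
```

    The same machinery generalizes to 3-D by adding a coordinate to the affine block and switching the kernel to U(r) = r, which is one of the parameterizations the framework supports.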

  15. Robust Spatial Approximation of Laser Scanner Point Clouds by Means of Free-form Curve Approaches in Deformation Analysis

    NASA Astrophysics Data System (ADS)

    Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo

    2016-03-01

    In many geodetic engineering applications it is necessary to describe a point cloud, measured e.g. by laser scanner, by means of free-form curves or surfaces, e.g. with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously degraded by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence, in our approach we combine Monte-Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
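    Robust M-estimation of curve coefficients can be sketched with Huber-weighted iteratively reweighted least squares (IRLS). The straight-line basis and injected outliers below are deliberately minimal stand-ins for B-spline control-point estimation, not the paper's pipeline:

```python
import numpy as np

def huber_irls(B, y, k=1.345, iters=50):
    # IRLS with Huber weights: points whose residuals exceed k robust-sigma
    # are progressively down-weighted before each weighted least squares pass
    c = np.linalg.lstsq(B, y, rcond=None)[0]
    for _ in range(iters):
        r = y - B @ c
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale
        sw = np.sqrt(np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12)))
        c = np.linalg.lstsq(B * sw[:, None], y * sw, rcond=None)[0]
    return c

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 60)
y = 2.0 + 3.0 * x + 0.05 * rng.standard_normal(60)
y[:6] += 10.0                     # gross outliers, e.g. off-object returns
B = np.column_stack([np.ones_like(x), x])
c_ls = np.linalg.lstsq(B, y, rcond=None)[0]
c_rob = huber_irls(B, y)
```

    Replacing the two-column design matrix with a B-spline basis gives robust control-point estimation of exactly this form; the weighting logic is unchanged.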

  16. Modelling inflation in transportation, communication and financial services using B-Spline time series model

    NASA Astrophysics Data System (ADS)

    Suparti; Prahutama, Alan; Santoso, Rukun

    2018-05-01

    Inflation is a general increase in the prices of goods and services that constitute the basic needs of society, or equivalently a decline in the purchasing power of a country's currency. A significant inflationary increase occurred in 2013, contributed by several sectors/groups, i.e. transportation, communication and financial services; foodstuffs; and housing, water, electricity, gas and fuel. The largest contribution, however, came from the transportation, communication and financial services sector. Inflation in this sector is modelled with a B-spline time series approach, in which the response variable is Yt and the predictor is a significant lag (in this case Yt-1). B-spline time series modelling requires determining the order and the optimum knot points; the optimum knots are determined using Generalized Cross-Validation (GCV). For the transportation, communication and financial services sector, the selected model is a B-spline of order 2 with 2 knots, which produces a MAPE of less than 50%.
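    Generalized Cross-Validation for choosing the smoothing level can be sketched for a generic penalized spline smoother. The basis, penalty, and lambda grid below are illustrative assumptions, not the paper's exact order/knot search:

```python
import numpy as np

def gcv_score(B, y, P, lam):
    # GCV = n * RSS / (n - tr(H))^2 with hat matrix H = B (B'B + lam P)^-1 B'
    n = len(y)
    A = np.linalg.solve(B.T @ B + lam * P, B.T)
    H = B @ A
    resid = y - H @ y
    return n * float(resid @ resid) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 150)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(150)
# cubic truncated-power basis; penalize only the knot coefficients
knots = np.linspace(0.1, 0.9, 15)
B = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                    [np.maximum(x - t, 0.0) ** 3 for t in knots])
P = np.diag([0.0] * 4 + [1.0] * len(knots))
lams = 10.0 ** np.arange(-8, 3)
best = min(lams, key=lambda l: gcv_score(B, y, P, l))
```

    In the paper's setting the same criterion is minimized over candidate knot locations and spline orders rather than over a ridge parameter, but the score being compared is the same quantity.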

  17. A pseudoinverse deformation vector field generator and its applications

    PubMed Central

    Yan, C.; Zhong, H.; Murphy, M.; Weiss, E.; Siebers, J. V.

    2010-01-01

    Purpose: To present, implement, and test a self-consistent pseudoinverse displacement vector field (PIDVF) generator, which preserves the location of information mapped back-and-forth between image sets. Methods: The algorithm is an iterative scheme based on nearest neighbor interpolation and a subsequent iterative search. Performance of the algorithm is benchmarked using a lung 4DCT data set with six CT images from different breathing phases and eight CT images for a single prostate patient acquired on different days. A diffeomorphic deformable image registration (DIR) is used to validate our PIDVFs. Additionally, the PIDVF is used to measure the self-consistency of two nondiffeomorphic algorithms which do not use a self-consistency constraint: The ITK Demons algorithm for the lung patient images and an in-house B-Spline algorithm for the prostate patient images. Both Demons and B-Spline have been QAed through contour comparison. Self-consistency is determined by using a DIR to generate a displacement vector field (DVF) between reference image R and study image S (DVFR–S). The same DIR is used to generate DVFS–R. Additionally, our PIDVF generator is used to create PIDVFS–R. Back-and-forth mapping of a set of points (used as surrogates of contours) using DVFR–S and DVFS–R is compared to back-and-forth mapping performed with DVFR–S and PIDVFS–R. The Euclidean distances between the original unmapped points and the mapped points are used as a self-consistency measure. Results: Test results demonstrate that the consistency error observed in back-and-forth mappings can be reduced two to nine times in point mapping and 1.5 to three times in dose mapping when the PIDVF is used in place of the B-Spline algorithm. These self-consistency improvements are not affected by the exchanging of R and S. It is also demonstrated that differences between DVFS–R and PIDVFS–R can be used as a criterion to check the quality of the DVF.
Conclusions: Use of DVF and its PIDVF will improve the self-consistency of points, contour, and dose mappings in image guided adaptive therapy. PMID:20384247
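    The fixed-point character of a pseudoinverse displacement field can be sketched in 1-D: iterate v(x) = -u(x + v(x)) and check the back-and-forth mapping error. This is intuition only; the paper's generator uses a nearest-neighbor seed plus iterative search on 3-D image grids, and the sine field below is an invented example:

```python
import numpy as np

def pseudo_inverse_dvf(u, x, iters=30):
    # fixed-point iteration for the inverse displacement: v(x) = -u(x + v(x));
    # converges when the forward field is a contraction (|u'| < 1)
    v = np.zeros_like(u)
    for _ in range(iters):
        v = -np.interp(x + v, x, u)
    return v

x = np.linspace(0.0, 10.0, 201)
u = 0.4 * np.sin(x)                        # smooth forward displacement
v = pseudo_inverse_dvf(u, x)
# map forward with u, then back with v: should land (nearly) where we started
roundtrip = x + u + np.interp(x + u, x, v)
```

    The residual of this round trip is exactly the self-consistency measure the paper reports, and driving it toward zero is what distinguishes a pseudoinverse from simply negating the forward field.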

  18. Near real-time estimation of ionosphere vertical total electron content from GNSS satellites using B-splines in a Kalman filter

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Schmidt, Michael; Seitz, Florian; Durmaz, Murat

    2017-02-01

    Although the number of terrestrial global navigation satellite system (GNSS) receivers supported by the International GNSS Service (IGS) is rapidly growing, the worldwide rather inhomogeneously distributed observation sites do not allow the generation of high-resolution global ionosphere products. Conversely, with the regionally enormous increase in highly precise GNSS data, the demands on (near) real-time ionosphere products, necessary in many applications such as navigation, are growing very fast. Consequently, many analysis centers accepted the responsibility of generating such products. In this regard, the primary objective of our work is to develop a near real-time processing framework for the estimation of the vertical total electron content (VTEC) of the ionosphere using proper models that are capable of a global representation adapted to the real data distribution. The global VTEC representation developed in this work is based on a series expansion in terms of compactly supported B-spline functions, which allow for an appropriate handling of the heterogeneous data distribution, including data gaps. The corresponding series coefficients and additional parameters such as differential code biases of the GNSS satellites and receivers constitute the set of unknown parameters. The Kalman filter (KF), as a popular recursive estimator, allows processing of the data immediately after acquisition and paves the way of sequential (near) real-time estimation of the unknown parameters. To exploit the advantages of the chosen data representation and the estimation procedure, the B-spline model is incorporated into the KF under the consideration of necessary constraints. Based on a preprocessing strategy, the developed approach utilizes hourly batches of GPS and GLONASS observations provided by the IGS data centers with a latency of 1 h in its current realization. 
    Two methods are used to validate the results, namely a self-consistency analysis and a comparison with Jason-2 altimetry data. The highly promising validation results allow the conclusion that under the investigated conditions our derived near real-time product is of the same accuracy level as the so-called final post-processed products provided by the IGS with a latency of several days or even weeks.
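    The coefficient-level Kalman filtering can be sketched with a random-walk state model. The dimensions, noise levels, and random design matrices below are toy assumptions standing in for real B-spline evaluations at ionospheric pierce points; this is not the authors' processing chain:

```python
import numpy as np

def kf_step(c, P, H, y, Q, R):
    # one Kalman cycle for slowly varying series coefficients:
    # random-walk prediction (identity dynamics plus process noise Q),
    # then a measurement update with design matrix H and obs. noise R
    P = P + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    c = c + K @ (y - H @ c)                   # state update
    P = (np.eye(len(c)) - K @ H) @ P
    return c, P

rng = np.random.default_rng(7)
# toy "VTEC" field described by 5 basis coefficients
true_c = np.array([10.0, 2.0, -1.0, 0.5, 3.0])
c, P = np.zeros(5), np.eye(5) * 100.0         # vague initial state
Q, R = np.eye(5) * 1e-4, np.eye(20) * 0.1     # assumed noise levels
for _ in range(40):                           # e.g. hourly observation batches
    H = rng.standard_normal((20, 5))          # stand-in basis evaluations
    y = H @ true_c + 0.3 * rng.standard_normal(20)
    c, P = kf_step(c, P, H, y, Q, R)
```

    The recursive structure is what makes near real-time estimation possible: each hourly batch refines the coefficients without reprocessing the full history.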

  19. Controlling for seasonal patterns and time varying confounders in time-series epidemiological models: a simulation study.

    PubMed

    Perrakis, Konstantinos; Gryparis, Alexandros; Schwartz, Joel; Le Tertre, Alain; Katsouyanni, Klea; Forastiere, Francesco; Stafoggia, Massimo; Samoli, Evangelia

    2014-12-10

    An important topic when estimating the effect of air pollutants on human health is choosing the best method to control for seasonal patterns and time varying confounders, such as temperature and humidity. Semi-parametric Poisson time-series models include smooth functions of calendar time and weather effects to control for potential confounders. Case-crossover (CC) approaches are considered efficient alternatives that control seasonal confounding by design and allow inclusion of smooth functions of weather confounders through their equivalent Poisson representations. We evaluate both methodological designs with respect to seasonal control and compare spline-based approaches, using natural splines and penalized splines, and two time-stratified CC approaches. For the spline-based methods, we consider fixed degrees of freedom, minimization of the partial autocorrelation function, and general cross-validation as smoothing criteria. Issues of model misspecification with respect to weather confounding are investigated under simulation scenarios, which allow quantifying omitted, misspecified, and irrelevant-variable bias. The simulations are based on fully parametric mechanisms designed to replicate two datasets with different mortality and atmospheric patterns. Overall, minimum partial autocorrelation function approaches provide more stable results for high mortality counts and strong seasonal trends, whereas natural splines with fixed degrees of freedom perform better for low mortality counts and weak seasonal trends followed by the time-season-stratified CC model, which performs equally well in terms of bias but yields higher standard errors. Copyright © 2014 John Wiley & Sons, Ltd.

  20. A B-spline Galerkin method for the Dirac equation

    NASA Astrophysics Data System (ADS)

    Froese Fischer, Charlotte; Zatsarinny, Oleg

    2009-06-01

    The B-spline Galerkin method is first investigated for the simple eigenvalue problem, $y'' = -\lambda^2 y$, which can also be written as a pair of first-order equations $y' = \lambda z$, $z' = -\lambda y$. Expanding both y(r) and z(r) in the B basis results in many spurious solutions such as those observed for the Dirac equation. However, when y(r) is expanded in the B basis and z(r) in the dB/dr basis, solutions of the well-behaved second-order differential equation are obtained. From this analysis, we propose a stable method for the Dirac equation, using bases of different spline order for the two components, and evaluate its accuracy by comparing the computed and exact R-matrix for a wide range of nuclear charges Z and angular quantum numbers κ. When splines of the same order are used, many spurious solutions are found whereas none are found for splines of different order. Excellent agreement is obtained for the R-matrix and energies for bound states for low values of Z. For high Z, accuracy requires the use of a grid with many points near the nucleus. We demonstrate the accuracy of the bound-state wavefunctions by comparing integrals arising in hyperfine interaction matrix elements with exact analytic expressions. We also show that the Thomas-Reiche-Kuhn sum rule is not a good measure of the quality of the solutions obtained by the B-spline Galerkin method whereas the R-matrix is very sensitive to the appearance of pseudo-states.

  1. A restricted cubic spline approach to assess the association between high fat fish intake and red blood cell EPA + DHA content.

    PubMed

    Sirot, V; Dumas, C; Desquilbet, L; Mariotti, F; Legrand, P; Catheline, D; Leblanc, J-C; Margaritis, I

    2012-04-01

    Fish, especially fatty fish, are the main contributor to eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) intake. EPA and DHA concentrations in red blood cells (RBC) have been proposed as a cardiovascular risk factor, with <4% and >8% associated with the lowest and greatest protection, respectively. The relationship between high fat fish (HFF) intake and RBC EPA + DHA content has been little investigated over a wide range of fish intakes, and may be non-linear. We aimed to study the shape of this relationship among high seafood consumers. Seafood consumption records and blood were collected from 384 French heavy seafood consumers, and EPA and DHA were measured in RBC. A multivariate linear regression was performed using restricted cubic splines to consider potential non-linear associations. Thirty-six percent of subjects had an RBC EPA + DHA content lower than 4% and only 5% exceeded 8%. HFF consumption was significantly associated with RBC EPA + DHA content (P [overall association] = 0.021), adjusted for sex, tobacco status, study area, socioeconomic status, age, alcohol, other seafood, meat, and meat product intakes. This relationship was non-linear: for intakes higher than 200 g/wk, EPA + DHA content tended to stagnate. Tobacco status and fish contaminants were negatively associated with RBC EPA + DHA content. Because of the saturation at high intakes, and accounting for the concern with exposure to trace element contaminants, an intake not exceeding 200 g/wk should be considered. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Kernel PLS Estimation of Single-trial Event-related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative for the estimation of single-trial ERPs and the improvement of ERP averages.

  3. Racism in the form of micro aggressions and the risk of preterm birth among Black women

    PubMed Central

    Slaughter-Acey, Jaime C.; Sealy-Jefferson, Shawnita; Helmkamp, Laura; Caldwell, Cleopatra H; Osypuk, Theresa L.; Platt, Robert W.; Straughen, Jennifer K.; Dailey-Okezie, Rhonda K.; Abeysekara, Purni; Misra, Dawn P.

    2015-01-01

    Purpose This study sought to examine whether perceived interpersonal racism in the form of racial micro aggressions was associated with preterm birth (PTB) and whether the presence of depressive symptoms and perceived stress modified the association. Methods Data stem from a cohort of 1410 Black women residing in Metropolitan Detroit, Michigan enrolled into the Life-course Influences on Fetal Environments (LIFE) Study. The Daily Life Experiences of Racism and Bother (DLE-B) scale measured the frequency and perceived stressfulness of racial micro aggressions experienced during the past year. Severe past-week depressive symptomatology was measured by the Centers for Epidemiologic Studies-Depression scale (CES-D) dichotomized at ≥23. Restricted cubic splines were used to model non-linearity between perceived racism and PTB. We used the Perceived Stress Scale (PSS) to assess general stress perceptions. Results Stratified spline regression analysis demonstrated that among those with severe depressive symptoms, perceived racism was not associated with PTB. However, perceived racism was significantly associated with PTB among women with mild to moderate (CES-D score ≤22) depressive symptoms. Perceived racism was not associated with PTB among women with or without high amounts of perceived stress. Conclusions Our findings suggest that racism, at least in the form of racial micro aggressions, may not further impact a group already at high risk for PTB (those with severe depressive symptoms), but may increase the risk of PTB for women at lower baseline risk. PMID:26549132

  4. Racism in the form of micro aggressions and the risk of preterm birth among black women.

    PubMed

    Slaughter-Acey, Jaime C; Sealy-Jefferson, Shawnita; Helmkamp, Laura; Caldwell, Cleopatra H; Osypuk, Theresa L; Platt, Robert W; Straughen, Jennifer K; Dailey-Okezie, Rhonda K; Abeysekara, Purni; Misra, Dawn P

    2016-01-01

    This study sought to examine whether perceived interpersonal racism in the form of racial micro aggressions was associated with preterm birth (PTB) and whether the presence of depressive symptoms and perceived stress modified the association. Data stem from a cohort of 1410 black women residing in Metropolitan Detroit, Michigan, enrolled into the Life-course Influences on Fetal Environments (LIFE) study. The Daily Life Experiences of Racism and Bother (DLE-B) scale measured the frequency and perceived stressfulness of racial micro aggressions experienced during the past year. Severe past-week depressive symptomatology was measured by the Centers for Epidemiologic Studies-Depression scale (CES-D) dichotomized at ≥ 23. Restricted cubic splines were used to model nonlinearity between perceived racism and PTB. We used the Perceived Stress Scale to assess general stress perceptions. Stratified spline regression analysis demonstrated that among those with severe depressive symptoms, perceived racism was not associated with PTB. However, perceived racism was significantly associated with PTB among women with mild to moderate (CES-D score ≤ 22) depressive symptoms. Perceived racism was not associated with PTB among women with or without high amounts of perceived stress. Our findings suggest that racism, at least in the form of racial micro aggressions, may not further impact a group already at high risk for PTB (those with severe depressive symptoms), but may increase the risk of PTB for women at lower baseline risk. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Illumination modelling of a mobile device environment for effective use in driving mobile apps

    NASA Astrophysics Data System (ADS)

    Marhoubi, Asmaa H.; Saravi, Sara; Edirisinghe, Eran A.; Bez, Helmut E.

    2015-05-01

    The present generation of Ambient Light Sensors (ALS) in mobile handheld devices suffers from two practical shortcomings: the sensors respond effectively only within a narrow angle of operation, and they exhibit a latency of operation. As a result, mobile applications that operate based on ALS readings can perform sub-optimally, especially in environments with non-uniform illumination: the applications either adapt with unacceptable levels of latency and/or demonstrate a discrete nature of operation. In this paper we propose a framework to predict the ambient illumination of the environment in which a mobile device is present. The predictions are based on an illumination model developed from a small number of readings taken during an application calibration stage. We use a machine-learning-based approach in developing the models. Five different regression models were developed, implemented and compared, based on Polynomial, Gaussian, Sum of Sine, Fourier and Smoothing Spline functions. Approaches to remove noisy data, missing values and outliers were applied prior to the modelling stage to limit their negative effects on modelling. The prediction accuracy of all models was found to be above 0.99 when measured using the R-squared test, with the best performance coming from the Smoothing Spline. We also discuss the mathematical complexity of each model and investigate how to make compromises in finding the best model.

  6. Estimation of spline function in nonparametric path analysis based on penalized weighted least square (PWLS)

    NASA Astrophysics Data System (ADS)

    Fernandes, Adji Achmad Rinaldo; Solimun, Arisoesilaningsih, Endang

    2017-12-01

    The aim of this research is to estimate the spline in path-analysis-based nonparametric regression using the Penalized Weighted Least Square (PWLS) approach. The approach uses a Reproducing Kernel Hilbert Space on a Sobolev space. The nonparametric path analysis model is $y_{1i} = f_{1.1}(x_{1i}) + \varepsilon_{1i}$; $y_{2i} = f_{1.2}(x_{1i}) + f_{2.2}(y_{1i}) + \varepsilon_{2i}$, $i = 1, 2, \ldots, n$. The nonparametric path analysis estimator that minimizes the PWLS criterion $\min_{f_{w.k} \in W_2^m[a_{w.k}, b_{w.k}],\, k = 1, 2} \{ (2n)^{-1} (\tilde{y} - \tilde{f})^{T} \Sigma^{-1} (\tilde{y} - \tilde{f}) + \sum_{k=1}^{2} \sum_{w=1}^{2} \lambda_{w.k} \int_{a_{w.k}}^{b_{w.k}} [f_{w.k}^{(m)}(x_i)]^{2} \, dx_i \}$ is $\hat{\tilde{f}} = A \tilde{y}$ with $A = T_1 (T_1^{T} U_1^{-1} \Sigma^{-1} T_1)^{-1} T_1^{T} U_1^{-1} \Sigma^{-1} + V_1 U_1^{-1} \Sigma^{-1} [I - T_1 (T_1^{T} U_1^{-1} \Sigma^{-1} T_1)^{-1} T_1^{T} U_1^{-1} \Sigma^{-1}] + T_2 (T_2^{T} U_2^{-1} \Sigma^{-1} T_2)^{-1} T_2^{T} U_2^{-1} \Sigma^{-1} + V_2 U_2^{-1} \Sigma^{-1} [I - T_2 (T_2^{T} U_2^{-1} \Sigma^{-1} T_2)^{-1} T_2^{T} U_2^{-1} \Sigma^{-1}]$.

  7. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering is difficult because of the complex configuration of terrain features. Classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset, and to overcome the sensitivity of the parameters, in this study we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance with the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure, to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and it gradually interpolates thin-plate spline surfaces toward the ground, with regularization to handle noise. Through this progressive densification strategy, regularization, and self-adaption, both improved performance and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs best among all compared filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using a single common parameter set.

  8. Trajectory control of an articulated robot with a parallel drive arm based on splines under tension

    NASA Astrophysics Data System (ADS)

    Yi, Seung-Jong

    Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. Positioning accuracy depends on the mathematical description of the robot configuration used to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires a trajectory planner. In addition, consideration of joint velocity, acceleration, and jerk trajectories is essential in trajectory planning for industrial robots to obtain smooth operation. The newly designed 6-DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce arm inertia and to increase load capacity and stiffness, is selected. First, the forward and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve-fitting methods used in trajectory planning, i.e., certain-degree polynomial functions, cubic spline functions, and cubic spline functions under tension, are compared to select the method best able to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are selected for the new trajectory planner. This method is implemented for the 6-DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. This approach is also compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular-arc motion simulations.
The new trajectory planner using cubic spline functions under tension is implemented in the microprocessor-based robot controller and motors to produce combined arc and straight-line motion. The simulations and experiments show promising results, demonstrating smooth motion in both acceleration and jerk and significant improvements in positioning accuracy in trajectory planning.
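    Splines under tension are not part of common numerical libraries, so as a hedged illustration of the planner's ingredients, the sketch below interpolates invented joint via-points with an ordinary cubic spline, using clamped (zero-velocity) end conditions, and obtains velocity and acceleration profiles by differentiation; a tension parameter would additionally damp overshoot between via-points:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Via-points for one joint angle (deg) at given times (s); values are invented.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
q = np.array([0.0, 30.0, 45.0, 40.0, 60.0])

# 'clamped' imposes zero joint velocity at start and stop, as a robot
# trajectory planner requires for smooth departure and arrival.
traj = CubicSpline(t, q, bc_type='clamped')
vel = traj.derivative(1)   # joint velocity profile
acc = traj.derivative(2)   # joint acceleration profile

tt = np.linspace(0.0, 4.0, 401)
q_path, v_path, a_path = traj(tt), vel(tt), acc(tt)
```

    A cubic spline gives continuous position, velocity, and acceleration; jerk is piecewise constant, which is one reason the paper weighs tension splines against plain cubics.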

  9. Prostate multimodality image registration based on B-splines and quadrature local energy.

    PubMed

    Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice

    2012-05-01

    Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. TRUS images do not provide proper spatial localization of malignant tissues due to the poor sensitivity of TRUS for visualizing early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early-stage malignancy, and therefore a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure, computed from texture images obtained from the amplitude responses of directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD) values for 20 patients' prostate mid-gland slices, and Target Registration Error (TRE) for the 18 patients in whom homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 and 4.43 ± 2.77 mm, respectively. Our method shows a statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02). The proposed method shows a 1.18-times improvement over thin-plate spline registration, which has an average TRE of 3.11 ± 2.18 mm. The mean DSC and mean 95% HD values obtained with the proposed method of B-splines with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm, respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. The low TRE values of the proposed registration method support the feasibility of its use during TRUS-guided biopsy.

  10. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    NASA Astrophysics Data System (ADS)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. 
In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
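    The thin-plate-spline correction idea (fit a smooth surface to the RPC residuals observed at GCPs, then evaluate it anywhere in the scene) can be sketched with scipy's radial basis function interpolator; the GCP layout and the bias field below are synthetic assumptions, not the Ziyuan-3 data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# GCP image coordinates (normalized) and the RPC residual observed at each.
# The "true" bias here is an invented smooth deformation for illustration.
rng = np.random.default_rng(2)
gcp_xy = rng.uniform(0, 1, (21, 2))          # 21 GCPs, as in the test scenario
bias = np.sin(3 * gcp_xy[:, 0]) * np.cos(2 * gcp_xy[:, 1])

# Thin-plate spline fit; tiny smoothing keeps near-exact interpolation at GCPs.
tps = RBFInterpolator(gcp_xy, bias, kernel='thin_plate_spline', smoothing=1e-8)

# Evaluate the correction at arbitrary image locations.
grid = rng.uniform(0.1, 0.9, (50, 2))
correction = tps(grid)
truth = np.sin(3 * grid[:, 0]) * np.cos(2 * grid[:, 1])
```

    Unlike an affine or quadratic polynomial, the thin-plate spline can bend to follow non-rigid residual patterns, which matches the paper's finding that polynomial models underfit this test data.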

  11. A 156 kyr smoothed history of the atmospheric greenhouse gases CO2, CH4, and N2O and their radiative forcing

    NASA Astrophysics Data System (ADS)

    Köhler, Peter; Nehrbass-Ahles, Christoph; Schmitt, Jochen; Stocker, Thomas F.; Fischer, Hubertus

    2017-06-01

    Continuous records of the atmospheric greenhouse gases (GHGs) CO2, CH4, and N2O are necessary input data for transient climate simulations, and their associated radiative forcing represents important components in analyses of climate sensitivity and feedbacks. Since the available data from ice cores are discontinuous and partly ambiguous, a well-documented decision process during data compilation followed by some interpolating post-processing is necessary to obtain those desired time series. Here, we document our best possible data compilation of published ice core records and recent measurements on firn air and atmospheric samples spanning the interval from the penultimate glacial maximum (~156 kyr BP) to the beginning of the year 2016 CE. We use the most recent age scales for the ice core data and apply a smoothing spline method to translate the discrete and irregularly spaced data points into continuous time series. These splines are then used to compute the radiative forcing for each GHG using well-established, simple formulations. We compile only a Southern Hemisphere record of CH4 and discuss how much larger a Northern Hemisphere or global CH4 record might have been due to its interpolar difference. The uncertainties of the individual data points are considered in the spline procedure. Based on the given data resolution, time-dependent cutoff periods of the spline, defining the degree of smoothing, are prescribed, ranging from 5000 years for the less resolved older parts of the records to 4 years for the densely sampled recent years. The computed splines seamlessly describe the GHG evolution on orbital and millennial timescales for glacial and glacial-interglacial variations and on centennial and decadal timescales for anthropogenic times. Data connected with this paper, including raw data and final splines, are available at doi:10.1594/PANGAEA.871273.

  12. Use of Longitudinal Data in Genetic Studies in the Genome-wide Association Studies Era: Summary of Group 14

    PubMed Central

    Kerner, Berit; North, Kari E; Fallin, M Daniele

    2010-01-01

    Participants analyzed actual and simulated longitudinal data from the Framingham Heart Study for various metabolic and cardiovascular traits. The genetic information incorporated into these investigations ranged from selected single-nucleotide polymorphisms to genome-wide association arrays. Genotypes were incorporated using a broad range of methodological approaches including conditional logistic regression, linear mixed models, generalized estimating equations, linear growth curve estimation, growth modeling, growth mixture modeling, population attributable risk fraction based on survival functions under the proportional hazards models, and multivariate adaptive splines for the analysis of longitudinal data. The specific scientific questions addressed by these different approaches also varied, ranging from a more precise definition of the phenotype, bias reduction in control selection, estimation of effect sizes and genotype associated risk, to direct incorporation of genetic data into longitudinal modeling approaches and the exploration of population heterogeneity with regard to longitudinal trajectories. The group reached several overall conclusions: 1) The additional information provided by longitudinal data may be useful in genetic analyses. 2) The precision of the phenotype definition as well as control selection in nested designs may be improved, especially if traits demonstrate a trend over time or have strong age-of-onset effects. 3) Analyzing genetic data stratified for high-risk subgroups defined by a unique development over time could be useful for the detection of rare mutations in common multi-factorial diseases. 4) Estimation of the population impact of genomic risk variants could be more precise. The challenges and computational complexity demanded by genome-wide single-nucleotide polymorphism data were also discussed. PMID:19924713

  13. Modeling Heavy/Medium-Duty Fuel Consumption Based on Drive Cycle Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Duran, Adam; Gonder, Jeffrey

    This paper presents multiple methods for predicting heavy/medium-duty vehicle fuel consumption based on driving cycle information. A polynomial model, a black-box artificial neural net model, a polynomial neural network model, and a multivariate adaptive regression splines (MARS) model were developed and verified using data collected from chassis testing performed on a parcel delivery diesel truck operating over the Heavy Heavy-Duty Diesel Truck (HHDDT), City Suburban Heavy Vehicle Cycle (CSHVC), New York Composite Cycle (NYCC), and hydraulic hybrid vehicle (HHV) drive cycles. Each model was trained using one of the four drive cycles as a training cycle and the other three as testing cycles. By comparing the training and testing results, a representative training cycle was chosen and used to further tune each method. HHDDT as the training cycle gave the best predictive results, because HHDDT contains a variety of drive characteristics, such as high speed, acceleration, idling, and deceleration. Among the four model approaches, MARS gave the best predictive performance, with an average absolute percent error of -1.84% over the four chassis dynamometer drive cycles. To further evaluate the accuracy of the predictive models, the approaches were applied to real-world data. MARS outperformed the other three approaches, providing an average absolute percent error of -2.2% over four real-world road segments. The MARS model's performance over the HHDDT, CSHVC, NYCC, and HHV drive cycles was then compared with that of the Future Automotive Systems Technology Simulator (FASTSim). The results indicated that the MARS method achieved predictive performance comparable to FASTSim.
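    The hinge ("hockey-stick") basis functions at the core of MARS can be sketched in a few lines. The toy forward pass below, with one predictor and a synthetic fuel-rate curve, illustrates the idea of selecting a knot for a mirrored hinge pair by least squares; it is not the paper's multi-cycle model:

```python
import numpy as np

def hinge(x, knot, sign):
    """MARS basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

# Synthetic fuel rate vs. average speed, with a regime change at 40 (invented).
rng = np.random.default_rng(3)
speed = rng.uniform(0, 100, 200)
fuel = np.where(speed < 40, 8 - 0.05 * speed, 6 + 0.03 * (speed - 40))
fuel = fuel + rng.normal(0, 0.1, 200)

# Greatly simplified forward pass: try candidate knots, keep the mirrored
# hinge pair that most reduces squared error.
best = None
for cand in np.quantile(speed, np.linspace(0.05, 0.95, 19)):
    X = np.column_stack([np.ones_like(speed),
                         hinge(speed, cand, +1.0),
                         hinge(speed, cand, -1.0)])
    beta, *_ = np.linalg.lstsq(X, fuel, rcond=None)
    cur_sse = np.sum((fuel - X @ beta) ** 2)
    if best is None or cur_sse < best[0]:
        best = (cur_sse, cand, beta)

sse, knot, beta = best
```

    Full MARS repeats this search over all predictors and existing terms, allows products of hinges for interactions, and then prunes terms by generalized cross-validation.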

  14. Linking multi-temporal satellite imagery to coastal wetland dynamics and bird distribution

    USGS Publications Warehouse

    Pickens, Bradley A.; King, Sammy L.

    2014-01-01

    Ecosystems are characterized by dynamic ecological processes, such as flooding and fires, but spatial models are often limited to a single measurement in time. The characterization of direct, fine-scale processes affecting animals is potentially valuable for management applications, but these are difficult to quantify over broad extents. Direct predictors are also expected to improve transferability of models beyond the area of study. Here, we investigated the ability of non-static and multi-temporal habitat characteristics to predict marsh bird distributions, while testing model generality and transferability between two coastal habitats. Distribution models were developed for king rail (Rallus elegans), common gallinule (Gallinula galeata), least bittern (Ixobrychus exilis), and purple gallinule (Porphyrio martinica) in fresh and intermediate marsh types in the northern Gulf Coast of Louisiana and Texas, USA. For model development, repeated point count surveys of marsh birds were conducted from 2009 to 2011. Landsat satellite imagery was used to quantify both annual conditions and cumulative, multi-temporal habitat characteristics. We used multivariate adaptive regression splines to quantify bird-habitat relationships for fresh, intermediate, and combined marsh habitats. Multi-temporal habitat characteristics ranked as more important than single-date characteristics, as temporary water was most influential in six of eight models. Predictive power was greater for marsh type-specific models compared to general models and model transferability was poor. Birds in fresh marsh selected for annual habitat characterizations, while birds in intermediate marsh selected for cumulative wetness and heterogeneity. Our findings emphasize that dynamic ecological processes can affect species distribution and species-habitat relationships may differ with dominant landscape characteristics.

  15. Evaluation of four supervised learning methods for groundwater spring potential mapping in Khalkhal region (Iran) using GIS-based features

    NASA Astrophysics Data System (ADS)

    Naghibi, Seyed Amir; Moradi Dashtpagerdi, Mostafa

    2017-01-01

    One important tool for water resources management in arid and semi-arid areas is groundwater potential mapping. In this study, four data-mining models, including K-nearest neighbor (KNN), linear discriminant analysis (LDA), multivariate adaptive regression splines (MARS), and quadratic discriminant analysis (QDA), were used for groundwater potential mapping to produce better and more accurate groundwater potential maps (GPMs). For this purpose, 14 groundwater influence factors were considered, such as altitude, slope angle, slope aspect, plan curvature, profile curvature, slope length, topographic wetness index (TWI), stream power index, distance from rivers, river density, distance from faults, fault density, land use, and lithology. Of the 842 springs in the study area, in the Khalkhal region of Iran, 70 % (589 springs) were used for training and 30 % (253 springs) were used as a validation dataset. Then, the KNN, LDA, MARS, and QDA models were applied in the R statistical software and the results were mapped as GPMs. Finally, receiver operating characteristic (ROC) curves were used to evaluate the performance of the models. According to the results, the areas under the ROC curves were calculated as 81.4, 80.5, 79.6, and 79.2 % for MARS, QDA, KNN, and LDA, respectively. It can therefore be concluded that the performances of KNN and LDA were acceptable and the performances of MARS and QDA were excellent. The results also indicated high contributions from altitude, TWI, slope angle, and fault density, while plan curvature and land use were the least important factors.

  16. Parameter Uncertainty on AGCM-simulated Tropical Cyclones

    NASA Astrophysics Data System (ADS)

    He, F.

    2015-12-01

    This work studies parameter uncertainty in tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, as implemented in the Community Atmosphere Model (CAM). It examines the impact of 24 parameters across the physical parameterization schemes that represent convection, turbulence, precipitation, and cloud processes in AGCMs. A one-at-a-time (OAT) sensitivity analysis first quantifies their relative importance for TC simulations and identifies the key parameters for six different TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP), and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin hypercube sampling (LHS) method. The comparison between the OAT and LHS ensemble runs shows that simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible for simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in the other simulated tropical cyclone characteristics (precipitation, LWCF, SWCF, LWP and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effects. Last, we find that the intensity uncertainty caused by physical parameters is comparable in magnitude to the uncertainty caused by model structure (e.g., grid) and initial conditions (e.g., sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed-physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
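    The LHS perturbation step can be sketched with scipy's quasi-Monte Carlo module; the parameter bounds, count of ensemble members, and names are placeholders, not the actual CAM parameters:

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample for 8 hypothetical physics parameters: each of the
# 64 members lands in a distinct stratum of every parameter's range.
sampler = qmc.LatinHypercube(d=8, seed=42)
unit_sample = sampler.random(n=64)          # points in the unit hypercube

# Assumed lower/upper bounds for the 8 parameters (illustrative values).
lower = np.array([0.1, 0.5, 1.0, 0.0, 0.2, 1e-4, 10.0, 0.01])
upper = np.array([1.0, 2.0, 5.0, 1.0, 0.8, 1e-3, 50.0, 0.10])
ensemble = qmc.scale(unit_sample, lower, upper)   # 64 parameter sets to run
```

    Compared with OAT perturbations, LHS varies all parameters simultaneously, which is what exposes the nonlinear interactive effects the abstract describes.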

  17. Quiet Clean Short-Haul Experimental Engine (QCSEE) ball spline pitch-change mechanism whirligig test report

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The component testing of a ball spline variable pitch mechanism is described including a whirligig test. The variable pitch actuator successfully completed all planned whirligig tests including a fifty cycle endurance test at actuation rates up to 125 deg per second at up to 102 percent fan speed (3400 rpm).

  18. A spline-based parameter and state estimation technique for static models of elastic surfaces

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Daniel, P. L.; Armstrong, E. S.

    1983-01-01

    Parameter and state estimation techniques for an elliptic system arising in a developmental model for the antenna surface in the Maypole Hoop/Column antenna are discussed. A computational algorithm based on spline approximations for the state and elastic parameters is given and numerical results obtained using this algorithm are summarized.

  19. 78 FR 12988 - Airworthiness Directives; Airbus Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-26

    ... investigations have also concluded that A320 family THSA P/N 47145-XXX (where XXX stands for any numerical [[Page... ballscrew lower splines of THSAs having P/N 47145-XXX to detect corrosion and, depending on findings, the... tie-rod splines on any THSA having P/N 47145-XXX (where XXX stands for any numerical value) to...

  20. Monotonicity preserving splines using rational cubic Timmer interpolation

    NASA Astrophysics Data System (ADS)

    Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md

    2017-08-01

    In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity, or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. Three parameters are introduced to control the shape of the interpolant. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and the proposed rational cubic Timmer interpolant gives visually pleasing results.
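    Timmer-representation rational cubics are not available in standard numerical libraries; as a stand-in illustration of the same goal (monotone, shape-preserving interpolation, where an unconstrained cubic spline would overshoot), scipy's PCHIP interpolant on monotone data:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data with a steep jump; a plain cubic spline would overshoot here,
# while a shape-preserving interpolant stays monotone between the points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.1, 0.2, 2.0, 2.1, 2.2])

interp = PchipInterpolator(x, y)
xx = np.linspace(0.0, 5.0, 501)
yy = interp(xx)
```

    PCHIP limits the derivatives at the data points; the rational cubic Timmer form achieves the same effect through its shape parameters and the derived monotonicity constraints.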

  1. The use of an analytic Hamiltonian matrix for solving the hydrogenic atom

    NASA Astrophysics Data System (ADS)

    Bhatti, Mohammad

    2001-10-01

    The non-relativistic Hamiltonian corresponding to the Schrödinger equation is converted into an analytic Hamiltonian matrix using kth-order B-spline functions. The Galerkin method is applied to the solution of the Schrödinger equation for bound states of hydrogen-like systems. The program Mathematica is used to create the analytic matrix elements; exact integration is performed over the knot sequence of the B-splines, and the resulting generalized eigenvalue problem is solved on a specified numerical grid. The complete basis set and the energy spectrum are obtained for the Coulomb potential for hydrogenic systems with Z less than 100 using B-splines of order eight. A further application tests the Thomas-Reiche-Kuhn sum rule for hydrogenic systems.
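    An order-8 (degree-7) B-spline basis on a clamped knot sequence can be built with scipy; the radial extent and knot count below are illustrative, not the paper's grid. A Galerkin scheme would then form Hamiltonian matrix elements from integrals of products of these basis functions:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 7                                   # degree 7 => order-8 B-splines
interior = np.linspace(0.0, 40.0, 21)   # illustrative radial knot grid
# Clamp the ends by repeating the boundary knots k extra times.
t = np.concatenate([[interior[0]] * k, interior, [interior[-1]] * k])
n_basis = len(t) - k - 1

# Evaluate every basis function B_i on a radial grid by giving BSpline a
# unit coefficient vector for each index.
r = np.linspace(0.0, 40.0, 400)
B = np.empty((n_basis, r.size))
for i in range(n_basis):
    c = np.zeros(n_basis)
    c[i] = 1.0
    B[i] = BSpline(t, c, k)(r)
```

    Two properties a Galerkin discretization relies on follow directly: each basis function has local support (so the Hamiltonian matrix is banded), and the basis sums to one everywhere inside the clamped interval.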

  2. Assessing the potential impacts of a changing climate on the distribution of a rabies virus vector

    PubMed Central

    Piaggio, Antoinette J.

    2018-01-01

    Common vampire bats (Desmodus rotundus) occur throughout much of South America to northern México. Vampire bats have not been documented in recent history in the United States, but have been documented within about 50 km of the U.S. state of Texas. Vampire bats feed regularly on the blood of mammals and can transmit rabies virus to native species and livestock, causing impacts on the health of prey. Thus cattle producers, wildlife management agencies, and other stakeholders have expressed concerns about whether vampire bats might spread into the southern United States. On the other hand, concerns about vampire-borne rabies can also result in wanton destruction at bat roosts in areas occupied by vampire bats, but also in areas not known to be occupied by this species. This can in turn negatively affect some bat roosts, populations, and species that are of conservation concern, including vampire bats. To better understand the current and possible future distribution of vampire bats in North America and help mitigate future cattle management problems, we used 7,094 vampire bat occurrence records from North America and species distribution modeling (SDM) to map the potential distribution of vampire bats in North America under current and future climate change scenarios. We analysed and mapped the potential distribution of this species using 5 approaches to species distribution modeling: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy. We then projected these models into 17 “worst-case” future climate scenarios for year 2070 to generate hypotheses about how the vampire bat distribution in North America might change in the future. Of the variables used in this analysis, minimum temperature of the coldest month had the highest variable importance using all 5 SDM approaches. 
These results suggest two potential near-future routes of vampire bat dispersal into the U.S., one via southern Texas, and a second into southern Florida. Some of our SDM models support the hypothesis that suitable habitat for vampire bats may currently exist in parts of the México–U.S. borderlands, including extreme southern portions of Texas, as well as in southern Florida. However, this analysis also suggests that extensive expansion into the south-eastern and south-western U.S. over the coming ~60 years appears unlikely. PMID:29466401

  3. Assessing the potential impacts of a changing climate on the distribution of a rabies virus vector.

    PubMed

    Hayes, Mark A; Piaggio, Antoinette J

    2018-01-01

    Common vampire bats (Desmodus rotundus) occur throughout much of South America to northern México. Vampire bats have not been documented in recent history in the United States, but have been documented within about 50 km of the U.S. state of Texas. Vampire bats feed regularly on the blood of mammals and can transmit rabies virus to native species and livestock, causing impacts on the health of prey. Thus cattle producers, wildlife management agencies, and other stakeholders have expressed concerns about whether vampire bats might spread into the southern United States. On the other hand, concerns about vampire-borne rabies can also result in wanton destruction at bat roosts in areas occupied by vampire bats, but also in areas not known to be occupied by this species. This can in turn negatively affect some bat roosts, populations, and species that are of conservation concern, including vampire bats. To better understand the current and possible future distribution of vampire bats in North America and help mitigate future cattle management problems, we used 7,094 vampire bat occurrence records from North America and species distribution modeling (SDM) to map the potential distribution of vampire bats in North America under current and future climate change scenarios. We analysed and mapped the potential distribution of this species using 5 approaches to species distribution modeling: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy. We then projected these models into 17 "worst-case" future climate scenarios for year 2070 to generate hypotheses about how the vampire bat distribution in North America might change in the future. Of the variables used in this analysis, minimum temperature of the coldest month had the highest variable importance using all 5 SDM approaches. 
These results suggest two potential near-future routes of vampire bat dispersal into the U.S., one via southern Texas, and a second into southern Florida. Some of our SDM models support the hypothesis that suitable habitat for vampire bats may currently exist in parts of the México-U.S. borderlands, including extreme southern portions of Texas, as well as in southern Florida. However, this analysis also suggests that extensive expansion into the south-eastern and south-western U.S. over the coming ~60 years appears unlikely.

  4. Nonparametric regression applied to quantitative structure-activity relationships

    PubMed

    Constans; Hirst

    2000-03-01

    Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
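    The Nadaraya-Watson regressor is simply a kernel-weighted local mean of the responses; a minimal numpy sketch on synthetic one-descriptor data (not the dopamine beta-hydroxylase set):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    """Gaussian-kernel Nadaraya-Watson regressor: a locally weighted mean."""
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

# Synthetic descriptor/activity pairs (illustration only).
rng = np.random.default_rng(4)
x = rng.uniform(-3, 3, 150)
y = np.tanh(x) + rng.normal(0, 0.05, 150)

xq = np.linspace(-2.5, 2.5, 50)
yq = nadaraya_watson(x, y, xq, bandwidth=0.3)
```

    The bandwidth plays the role smoothing parameters play for splines: too small and the estimate chases noise, too large and it flattens real structure, which is why low-density multivariate designs push the paper toward additive models.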

  5. Novel Analog For Muscle Deconditioning

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Bloomberg, Jacob

    2010-01-01

    Existing models of muscle deconditioning are cumbersome and expensive (e.g., bed rest). We propose a new model utilizing a weighted suit to manipulate strength, power, or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and post-flight astronaut performance data using the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (the most difficult task) showed thresholds of: leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, knee extension (KE) isokinetic/BW of 6 Nm/kg and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of strength/BW has promise as an appropriate analog for spaceflight-induced loss of muscle function for predicting occupational task performance and establishing operationally relevant exercise targets.
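    The threshold-finding idea above can be sketched as a split-line (piecewise linear) regression: grid-search the breakpoint that minimizes the sum of squared errors. The strength/time numbers below are made up for illustration, not the study's data:

```python
import numpy as np

def fit_split_line(x, y, candidates):
    """Continuous piecewise-linear ('split-line') regression with one breakpoint.
    Basis [1, x, max(0, x - c)]; the breakpoint c minimizing the SSE is the threshold."""
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((X @ beta - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best[1], best[2]

# synthetic data: task time worsens below a strength/BW threshold of 10 (hypothetical)
rng = np.random.default_rng(1)
strength = np.linspace(4.0, 30.0, 120)
time = np.where(strength < 10.0, 40.0 - 2.0 * strength, 20.0)
time = time + 0.3 * rng.standard_normal(120)
c_hat, beta = fit_split_line(strength, time, np.linspace(6.0, 20.0, 60))
```

    The recovered breakpoint `c_hat` plays the role of the performance threshold.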

  6. Observed Decrease of North American Winter Temperature Variability

    NASA Astrophysics Data System (ADS)

    Rhines, A. N.; Tingley, M.; McKinnon, K. A.; Huybers, P. J.

    2015-12-01

    There is considerable interest in determining whether temperature variability has changed in recent decades. Model ensembles project that extratropical land temperature variance will detectably decrease by 2070. We use quantile regression of station observations to show that decreasing variability is already robustly detectable for North American winter during 1979--2014. Pointwise trends from GHCND stations are mapped into a continuous spatial field using thin-plate spline regression, resolving small-scales while providing uncertainties accounting for spatial covariance and varying station density. We find that variability of daily temperatures, as measured by the difference between the 95th and 5th percentiles, has decreased markedly in winter for both daily minima and maxima. Composites indicate that the reduced spread of winter temperatures primarily results from Arctic amplification decreasing the meridional temperature gradient. Greater observed warming in the 5th relative to the 95th percentile stems from asymmetric effects of advection during cold versus warm days; cold air advection is generally from northerly regions that have experienced greater warming than western or southwestern regions that are generally sourced during warm days.
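    Mapping pointwise station trends into a continuous field with a thin-plate spline, as described above, can be sketched with SciPy's RBF interpolator. The station coordinates and trend values here are synthetic stand-ins, not GHCND data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical station coordinates and per-station trend values
rng = np.random.default_rng(2)
stations = rng.uniform(0.0, 10.0, size=(80, 2))
trend = np.sin(stations[:, 0] / 3.0) + 0.05 * rng.standard_normal(80)

# thin-plate spline surface; smoothing > 0 trades exact interpolation for smoothness
tps = RBFInterpolator(stations, trend, kernel='thin_plate_spline', smoothing=1e-3)

# evaluate on a regular grid to obtain the continuous spatial field
gx, gy = np.meshgrid(np.linspace(0.0, 10.0, 25), np.linspace(0.0, 10.0, 25))
field = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(25, 25)
```

    In practice the smoothing parameter would be tuned to station density and measurement noise.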

  7. Sieve estimation of Cox models with latent structures.

    PubMed

    Cao, Yongxiu; Huang, Jian; Liu, Yanyan; Zhao, Xingqiu

    2016-12-01

    This article considers sieve estimation in the Cox model with an unknown regression structure based on right-censored data. We propose a semiparametric pursuit method to simultaneously identify and estimate linear and nonparametric covariate effects based on B-spline expansions through a penalized group selection method with concave penalties. We show that the estimators of the linear effects and the nonparametric component are consistent. Furthermore, we establish the asymptotic normality of the estimator of the linear effects. To compute the proposed estimators, we develop a modified blockwise majorization descent algorithm that is efficient and easy to implement. Simulation studies demonstrate that the proposed method performs well in finite sample situations. We also use the primary biliary cirrhosis data to illustrate its application. © 2016, The International Biometric Society.
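    The core building block of the sieve approach above, approximating an unknown smooth covariate effect by a B-spline expansion, can be sketched outside the Cox-model machinery as a simple least-squares fit (the data and knot layout below are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import BSpline

# approximate an unknown smooth covariate effect g(z) by a B-spline expansion
rng = np.random.default_rng(9)
z = np.sort(rng.uniform(0.0, 1.0, 200))
g = np.sin(3.0 * z)                                  # "true" nonparametric effect
y = g + 0.05 * rng.standard_normal(200)

k = 3                                                # cubic B-splines
inner = np.linspace(0.0, 1.0, 7)
t = np.r_[[0.0] * k, inner, [1.0] * k]               # clamped knot vector
n = len(t) - k - 1                                   # number of basis functions
D = np.column_stack([BSpline(t, np.eye(n)[i], k)(z) for i in range(n)])
theta, *_ = np.linalg.lstsq(D, y, rcond=None)        # least-squares sieve estimate
g_hat = D @ theta
```

    In the paper's setting this expansion is embedded in the partial likelihood with a group penalty; here plain least squares stands in for that step.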

  8. Mandibular transformations in prepubertal patients following treatment for craniofacial microsomia: thin-plate spline analysis.

    PubMed

    Hay, A D; Singh, G D

    2000-01-01

    To analyze correction of mandibular deformity using an inverted L osteotomy and autogenous bone graft in patients exhibiting unilateral craniofacial microsomia (CFM), thin-plate spline analysis was undertaken. Preoperative, early postoperative, and approximately 3.5-year postoperative posteroanterior cephalographs of 15 children (age 10+/-3 years) with CFM were scanned, and eight homologous mandibular landmarks digitized. Average mandibular geometries, scaled to an equivalent size, were generated using Procrustes superimposition. Results indicated that the mean pre- and postoperative mandibular configurations differed statistically (P<0.05). Thin-plate spline analysis indicated that the total spline (Cartesian transformation grid) of the pre- to early postoperative configuration showed mandibular body elongation on the treated side and inferior symphyseal displacement. The affine component of the total spline revealed a clockwise rotation of the preoperative configuration, whereas the nonaffine component was responsible for ramus, body, and symphyseal displacements. The transformation grid for the early and late postoperative comparison showed bilateral ramus elongation. A superior symphyseal displacement contrasted with its earlier inferior displacement; the affine component had translocated the symphyseal landmarks towards the midline. The nonaffine component demonstrated bilateral ramus lengthening, and partial warps suggested that these elongations were slightly greater on the nontreated side. The affine component of the pre- and late postoperative comparison also demonstrated a clockwise rotation. The nonaffine component produced the bilateral ramus elongations, with the nontreated side ramus lengthening slightly more than the treated side. It is concluded that an inverted L osteotomy improves mandibular morphology significantly in CFM patients and permits continued bilateral ramus growth. Copyright 2000 Wiley-Liss, Inc.
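    The Procrustes superimposition step that precedes the thin-plate spline analysis above removes translation, scale, and rotation so that only shape differences remain. A minimal sketch with hypothetical 8-landmark configurations (not the study's cephalometric data):

```python
import numpy as np
from scipy.spatial import procrustes

# two hypothetical 8-landmark configurations (x, y); shape_b is a rotated,
# scaled, translated copy of shape_a plus a small amount of true shape change
rng = np.random.default_rng(3)
shape_a = rng.uniform(0.0, 1.0, size=(8, 2))
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
shape_b = 2.5 * shape_a @ R.T + np.array([3.0, -1.0])
shape_b = shape_b + 0.01 * rng.standard_normal((8, 2))

# Procrustes superimposition removes translation, scale, and rotation,
# so the remaining disparity reflects shape difference only
mtx_a, mtx_b, disparity = procrustes(shape_a, shape_b)
```

    The near-zero disparity confirms the two configurations differ mostly by a similarity transform, which is exactly what the superimposition is meant to factor out.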

  9. Conditional adaptive Bayesian spectral analysis of nonstationary biomedical time series.

    PubMed

    Bruce, Scott A; Hall, Martica H; Buysse, Daniel J; Krafty, Robert T

    2018-03-01

    Many studies of biomedical time series signals aim to measure the association between frequency-domain properties of time series and clinical and behavioral covariates. However, the time-varying dynamics of these associations are largely ignored due to a lack of methods that can assess the changing nature of the relationship through time. This article introduces a method for the simultaneous and automatic analysis of the association between the time-varying power spectrum and covariates, which we refer to as conditional adaptive Bayesian spectrum analysis (CABS). The procedure adaptively partitions the grid of time and covariate values into an unknown number of approximately stationary blocks and nonparametrically estimates local spectra within blocks through penalized splines. CABS is formulated in a fully Bayesian framework, in which the number and locations of partition points are random, and fit using reversible jump Markov chain Monte Carlo techniques. Estimation and inference averaged over the distribution of partitions allows for the accurate analysis of spectra with both smooth and abrupt changes. The proposed methodology is used to analyze the association between the time-varying spectrum of heart rate variability and self-reported sleep quality in a study of older adults serving as the primary caregiver for their ill spouse. © 2017, The International Biometric Society.
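    The penalized-spline smoothing used within blocks by CABS can be sketched in its simplest form as a P-spline: a B-spline basis with a second-order difference penalty on the coefficients. The signal, knot count, and penalty weight below are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import BSpline

# P-spline sketch: B-spline basis plus a second-order difference penalty
rng = np.random.default_rng(10)
x = np.linspace(0.0, 1.0, 300)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(300)

k = 3
inner = np.linspace(0.0, 1.0, 20)
t = np.r_[[0.0] * k, inner, [1.0] * k]
n = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n)[i], k)(x) for i in range(n)])

D2 = np.diff(np.eye(n), n=2, axis=0)      # second-difference penalty matrix
lam = 1.0                                 # roughness-penalty weight (arbitrary)
coef = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
smooth = B @ coef
```

    In the full method this smoother is applied to local periodograms within each adaptively chosen block, with the Bayesian machinery handling block locations.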

  10. Rotation and scale invariant shape context registration for remote sensing images with background variations

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Zhang, Shumei; Cao, Shixiang

    2015-01-01

    Multitemporal remote sensing images generally suffer from background variations, which significantly disrupt traditional region feature and descriptor abstracts, especially between pre- and post-disaster images, making registration by local features unreliable. Because shapes hold relatively stable information, a rotation and scale invariant shape context based on multiscale edge features is proposed. A multiscale morphological operator is adapted to detect edges of shapes, and an equivalent difference of Gaussian scale space is built to detect local scale invariant feature points along the detected edges. Then, a rotation invariant shape context with improved distance discrimination serves as a feature descriptor. For a distance shape context, a self-adaptive threshold (SAT) distance division coordinate system is proposed, which improves the discriminative property of the feature descriptor in mid-to-long pixel distances from the central point while maintaining it in shorter ones. To achieve rotation invariance, the magnitude of the Fourier transform in one dimension is applied to calculate the angle shape context. Finally, the residual error is evaluated after obtaining the thin-plate spline transformation between reference and sensed images. Experimental results demonstrate the robustness, efficiency, and accuracy of this automatic algorithm.

  11. Corrective 111 In Capromab Pendetide SPECT Image Reconstruction Methods for Improved Detection of Recurrent Prostate Cancer

    DTIC Science & Technology

    2006-06-01

    Lalush, and B.M.W. Tsui, Modeling Respiratory Mechanics in the MCAT and 3D Spline-Based MCAT Phantoms. IEEE Trans Nucl Sci., 2001. 38. Segars, W.P., D.S. Lalush, and B.M.W. Tsui. Modeling Respiratory Mechanics in the MCAT and Spline-Based MCAT Phantoms. in Conference Record of the 1999 IEEE Nuclear

  12. Gravity anomaly map of Mars and Moon and analysis of Venus gravity field: New analysis procedures

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The technique of harmonic splines allows direct estimation of a complete planetary gravity field (geoid, gravity, and gravity gradients) everywhere over the planet's surface. Harmonic spline results of Venus are presented as a series of maps at spacecraft and constant altitudes. Global (except for polar regions) and local relations of gravity to topography are described.

  13. Spline smoothing of histograms by linear programming

    NASA Technical Reports Server (NTRS)

    Bennett, J. O.

    1972-01-01

    An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
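    The scheme above can be sketched as a linear program: fit B-spline coefficients to histogram heights under a Chebyshev (max-deviation) criterion, with nonnegative coefficients and a unit-area equality constraint. The sample, bin count, and knot layout are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

rng = np.random.default_rng(4)
sample = rng.normal(0.0, 1.0, 500)
heights, edges = np.histogram(sample, bins=20, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

k = 3                                               # cubic B-splines
inner = np.linspace(edges[0], edges[-1], 8)
knots = np.r_[[inner[0]] * k, inner, [inner[-1]] * k]
n = len(knots) - k - 1

# design matrix B[j, i] = B_i(centers[j]) and the integral of each basis function
B = np.column_stack([BSpline(knots, np.eye(n)[i], k)(centers) for i in range(n)])
areas = (knots[k + 1:] - knots[:n]) / (k + 1)

# LP (Chebyshev fit): minimize t s.t. |B c - h| <= t, c >= 0, areas . c = 1;
# c >= 0 makes the spline nonnegative, areas . c = 1 makes it integrate to one
obj = np.r_[np.zeros(n), 1.0]
A_ub = np.block([[B, -np.ones((len(centers), 1))],
                 [-B, -np.ones((len(centers), 1))]])
b_ub = np.r_[heights, -heights]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              A_eq=np.r_[areas, 0.0][None, :], b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)])
coef = res.x[:n]
```

    Nonnegative coefficients suffice for a nonnegative B-spline curve, and the closed-form basis integrals make the unit-area constraint a single linear equality.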

  14. Alignment of large image series using cubic B-splines tessellation: application to transmission electron microscopy data.

    PubMed

    Dauguet, Julien; Bock, Davi; Reid, R Clay; Warfield, Simon K

    2007-01-01

    3D reconstruction from serial 2D microscopy images depends on non-linear alignment of serial sections. For some structures, such as the neuronal circuitry of the brain, very large images at very high resolution are necessary to permit reconstruction. These very large images prevent the direct use of classical registration methods. We propose in this work a method to deal with the non-linear alignment of arbitrarily large 2D images using the finite support properties of cubic B-splines. After initial affine alignment, each large image is split into a grid of smaller overlapping sub-images, which are individually registered using cubic B-splines transformations. Inside the overlapping regions between neighboring sub-images, the coefficients of the knots controlling the B-splines deformations are blended, to create a virtual large grid of knots for the whole image. The sub-images are resampled individually, using the new coefficients, and assembled together into a final large aligned image. We evaluated the method on a series of large transmission electron microscopy images and our results indicate significant improvements compared to both manual and affine alignment.

  15. A Locally Modal B-Spline Based Full-Vector Finite-Element Method with PML for Nonlinear and Lossy Plasmonic Waveguide

    NASA Astrophysics Data System (ADS)

    Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan

    2016-09-01

    In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, wave propagation in various linear, nonlinear, lossless, and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our developed numerical approach is not only computationally more accurate and efficient than conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.

  16. Thin-plate spline analysis of the effects of face mask treatment in children with maxillary retrognathism.

    PubMed

    Chang, Jenny Zwei-Chieng; Liu, Pao-Hsin; Chen, Yi-Jane; Yao, Jane Chung-Chen; Chang, Hong-Po; Chang, Chih-Han; Chang, Frank Hsin-Fu

    2006-02-01

    Face mask therapy is indicated for growing patients who suffer from maxillary retrognathia. Most previous studies used conventional cephalometric analysis to evaluate the effects of face mask treatment. Cephalometric analysis has been shown to be insufficient for complex craniofacial configurations. The purpose of this study was to investigate changes in the craniofacial structure of children with maxillary retrognathism following face mask treatment by means of thin-plate spline analysis. Thirty children with skeletal Class III malocclusions who had been treated with face masks were compared with a group of 30 untreated gender-matched, age-matched, observation period-matched, and craniofacial configuration-matched subjects. Average geometries, scaled to an equivalent size, were generated by means of Procrustes analysis. Thin-plate spline analysis was then performed for localization of the shape changes. Face mask treatment induced a forward displacement of the maxilla, a counterclockwise rotation of the palatal plane, a horizontal compression of the anterior border of the symphysis and the condylar region, and a downward deformation of the menton. The cranial base exhibited a counterclockwise deformation as a whole. We conclude that thin-plate spline analysis is a valuable supplement to conventional cephalometric analysis.

  17. Noise correction on LANDSAT images using a spline-like algorithm

    NASA Technical Reports Server (NTRS)

    Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.

    1985-01-01

    Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost) the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the necessity of solving a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
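    The idea of repairing lost scan lines by interpolating vertically through the valid neighbors can be sketched with a cubic spline per column. The smooth synthetic "scene", image size, and striped rows are assumptions for illustration (the paper's weighted-spline variant also allows imposed first derivatives, which this sketch omits):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# synthetic smooth scene standing in for a LANDSAT band
r, c = np.meshgrid(np.arange(32), np.arange(32), indexing='ij')
image = np.sin(r / 6.0) * np.cos(c / 9.0)

bad_rows = np.array([10, 11, 20])                 # striped (lost) scan lines
good_rows = np.setdiff1d(np.arange(32), bad_rows)

repaired = image.copy()
for col in range(image.shape[1]):
    # interpolate vertically through the valid rows of this column
    cs = CubicSpline(good_rows, image[good_rows, col])
    repaired[bad_rows, col] = cs(bad_rows)
```

    Unlike simple vertical averaging, the spline handles runs of two or more adjacent lost lines gracefully.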

  18. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  19. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical co-predictors and neighborhood-based interpolation techniques were evaluated both in isolation and in combination. Linear multiple regression (LMR) and geographically weighted regression (GWR) methods were tested, including step-wise selection of covariables, alongside inverse distance weighting (IDW), kriging, and 3D thin-plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
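    Of the neighbor-based techniques compared above, IDW is simple enough to sketch directly: each query point takes a distance-weighted average of station values. The station layout and linear "rainfall field" are hypothetical:

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights ~ 1/d^power, exact at the data points."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# synthetic "stations" on a plane with a simple linear rainfall field
rng = np.random.default_rng(6)
pts = rng.uniform(0.0, 1.0, size=(50, 2))
vals = pts[:, 0] + pts[:, 1]
est = idw(pts, vals, np.array([[0.5, 0.5]]))
```

    The power exponent controls locality, which is consistent with the study's finding that IDW favors local events while TPS favors large-scale ones.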

  20. Reverse engineering physical models employing a sensor integration between 3D stereo detection and contact digitization

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Lin, Grier C. I.

    1997-12-01

    A vision-driven automatic digitization process for free-form surface reconstruction has been developed, with a coordinate measurement machine (CMM) equipped with a touch-triggered probe and a CCD camera, for reverse engineering physical models. The process integrates 3D stereo detection, data filtering, Delaunay triangulation, and adaptive surface digitization into a single process of surface reconstruction. By using this innovative approach, surface reconstruction can be implemented automatically and accurately. Least-squares B-spline surface models with controlled digitization accuracy can be generated for further application in product design and manufacturing processes. One industrial application indicates that this approach is feasible and that the processing time required in the reverse engineering process can be reduced by more than 85%.
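    The final least-squares B-spline surface step can be sketched with SciPy's smoothing bivariate spline, fitting scattered digitized points. The point cloud and smoothing factor here are hypothetical, not the CMM data:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# scattered "digitized" points from a hypothetical free-form surface
rng = np.random.default_rng(12)
x = rng.uniform(0.0, 1.0, 400)
y = rng.uniform(0.0, 1.0, 400)
z = np.sin(np.pi * x) * np.cos(np.pi * y) + 0.01 * rng.standard_normal(400)

# least-squares bicubic B-spline surface; s bounds the sum of squared residuals
surf = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=0.1)
z_hat = surf.ev(x, y)
```

    The smoothing factor `s` is how the digitization accuracy is "controlled": it caps the total squared residual of the fitted surface.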

  1. Method for Manufacturing Bulk Metallic Glass-Based Strain Wave Gear Components

    NASA Technical Reports Server (NTRS)

    Hofmann, Douglas C. (Inventor); Wilcox, Brian H. (Inventor)

    2017-01-01

    Systems and methods in accordance with embodiments of the invention implement bulk metallic glass-based strain wave gears and strain wave gear components. In one embodiment, a method of fabricating a strain wave gear includes: shaping a BMG-based material using a mold in conjunction with one of a thermoplastic forming technique and a casting technique; where the BMG-based material is shaped into one of: a wave generator plug, an inner race, an outer race, a rolling element, a flexspline, a flexspline without a set of gear teeth, a circular spline, a circular spline without a set of gear teeth, a set of gear teeth to be incorporated within a flexspline, and a set of gear teeth to be incorporated within a circular spline.

  2. Data approximation using a blending type spline construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

    Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
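    The blending idea above, local functions joined by smooth basis functions, can be illustrated in 1-D. Note the C^1 Hermite step below is a simple stand-in for the expo-rational blending functions of actual GERBS, which are C^k for all k:

```python
import numpy as np

def blend(f_left, f_right, x, x0, x1):
    """Blend two local functions over [x0, x1] with the C^1 Hermite step
    3s^2 - 2s^3 (a simple stand-in for GERBS expo-rational blending functions)."""
    s = np.clip((x - x0) / (x1 - x0), 0.0, 1.0)
    w = 3 * s ** 2 - 2 * s ** 3
    return (1.0 - w) * f_left(x) + w * f_right(x)

# two local functions fitted on neighboring subsets, blended across [0.8, 1.2]
x = np.linspace(0.0, 2.0, 201)
y = blend(lambda v: v, lambda v: 2.0 - v, x, 0.8, 1.2)
```

    Outside the blending interval each local function is reproduced exactly, which is what lets chosen data points be interpolated to preserve features.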

  3. Inversion of the strain-life and strain-stress relationships for use in metal fatigue analysis

    NASA Technical Reports Server (NTRS)

    Manson, S. S.

    1979-01-01

    The paper presents closed-form solutions (collocation method and spline-function method) for the constants of the cyclic fatigue life equation so that they can be easily incorporated into cumulative damage analysis. The collocation method involves conformity with the experimental curve at specific life values. The spline-function method is such that the basic life relation is expressed as a two-part function, one applicable at strains above the transition strain (strain at intersection of elastic and plastic lines), the other below. An illustrative example is treated by both methods. It is shown that while the collocation representation has the advantage of simplicity of form, the spline-function representation can be made more accurate over a wider life range, and is simpler to use.

  4. Analysis of harmonic spline gravity models for Venus and Mars

    NASA Technical Reports Server (NTRS)

    Bowin, Carl

    1986-01-01

    Methodology utilizing harmonic splines for determining the true gravity field from Line-Of-Sight (LOS) acceleration data from planetary spacecraft missions was tested. As is well known, the LOS data incorporate errors in the zero reference level that appear to be inherent in the processing procedure used to obtain the LOS vectors. The proposed method offers a solution to this problem. The harmonic spline program was converted from the VAX 11/780 to the Ridge 32C computer. The problem with the matrix inversion routine that improved inversion of the data matrices used in the Optimum Estimate program for global Earth studies was solved. The problem of obtaining a successful matrix inversion for a single rev supplemented by data for the two adjacent revs still remains.

  5. Hierarchical and successive approximate registration of the non-rigid medical image based on thin-plate splines

    NASA Astrophysics Data System (ADS)

    Hu, Jinyan; Li, Li; Yang, Yunfeng

    2017-06-01

    The hierarchical and successive approximate registration method of non-rigid medical image based on the thin-plate splines is proposed in the paper. There are two major novelties in the proposed method. First, the hierarchical registration based on Wavelet transform is used. The approximate image of Wavelet transform is selected as the registered object. Second, the successive approximation registration method is used to accomplish the non-rigid medical images registration, i.e. the local regions of the couple images are registered roughly based on the thin-plate splines, then, the current rough registration result is selected as the object to be registered in the following registration procedure. Experiments show that the proposed method is effective in the registration process of the non-rigid medical images.

  6. Association of Brain-Derived Neurotrophic Factor and Vitamin D with Depression and Obesity: A Population-Based Study.

    PubMed

    Goltz, Annemarie; Janowitz, Deborah; Hannemann, Anke; Nauck, Matthias; Hoffmann, Johanna; Seyfart, Tom; Völzke, Henry; Terock, Jan; Grabe, Hans Jörgen

    2018-06-19

    Depression and obesity are widespread and closely linked. Brain-derived neurotrophic factor (BDNF) and vitamin D are both assumed to be associated with depression and obesity. Little is known about the interplay between vitamin D and BDNF. We explored the putative associations and interactions between serum BDNF and vitamin D levels with depressive symptoms and abdominal obesity in a large population-based cohort. Data were obtained from the population-based Study of Health in Pomerania (SHIP)-Trend (n = 3,926). The associations of serum BDNF and vitamin D levels with depressive symptoms (measured using the Patient Health Questionnaire) were assessed with binary and multinomial logistic regression models. The associations of serum BDNF and vitamin D levels with obesity (measured by the waist-to-hip ratio [WHR]) were assessed with binary logistic and linear regression models with restricted cubic splines. Logistic regression models revealed inverse associations of vitamin D with depression (OR = 0.966; 95% CI 0.951-0.981) and obesity (OR = 0.976; 95% CI 0.967-0.985). No linear association of serum BDNF with depression or obesity was found. However, linear regression models revealed a U-shaped association of BDNF with WHR (p < 0.001). Vitamin D was inversely associated with depression and obesity. BDNF was associated with abdominal obesity, but not with depression. At the population level, our results support the relevant roles of vitamin D and BDNF in mental and physical health-related outcomes. © 2018 S. Karger AG, Basel.
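    The restricted cubic spline device used above to detect the U-shaped BDNF-WHR association can be sketched with Harrell's basis: truncated cubics constrained to be linear beyond the boundary knots. The BDNF/WHR values below are synthetic stand-ins for the SHIP data, and plain least squares stands in for the study's regression models:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted (natural) cubic spline basis in Harrell's parameterization:
    columns [x, s_1(x), ..., s_{k-2}(x)], linear beyond the boundary knots."""
    kn = np.asarray(knots, float)
    t_last, t_prev = kn[-1], kn[-2]
    cube = lambda z, t: np.maximum(z - t, 0.0) ** 3
    cols = [x]
    for j in range(len(kn) - 2):
        s = (cube(x, kn[j])
             - cube(x, t_prev) * (t_last - kn[j]) / (t_last - t_prev)
             + cube(x, t_last) * (t_prev - kn[j]) / (t_last - t_prev))
        cols.append(s)
    return np.column_stack(cols)

# synthetic U-shaped association (BDNF vs. WHR stand-ins, not SHIP data)
rng = np.random.default_rng(7)
bdnf = rng.uniform(-2.0, 2.0, 400)
whr = bdnf ** 2 + 0.1 * rng.standard_normal(400)

knots = np.quantile(bdnf, [0.05, 0.35, 0.65, 0.95])
X = np.column_stack([np.ones_like(bdnf), rcs_basis(bdnf, knots)])
beta, *_ = np.linalg.lstsq(X, whr, rcond=None)
fitted = X @ beta
```

    Because the basis is nonlinear in the predictor, the fitted curve can bend into the U shape a single linear term would miss.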

  7. Development Program for Field-Repairable/Expendable Main Rotor Blades

    DTIC Science & Technology

    1976-09-01

    honeycomb aft core, and extruded aluminum alloy trailing-edge spline (Reference 2), and it represents the most cost-effective approach to a repairable ...materials lend themselves to relatively inexpensive fabrication techniques, the questionable torsional stiffness of composite spars eliminated them...values of the fatigue strength of aluminum, the spline and aft doublers are predicted to have a negative margin of safety for infinite life. The

  8. Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations.

    PubMed

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko

    2015-06-01

    Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. 
The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  9. Feed intake of sheep as affected by body weight, breed, sex, and feed composition.

    PubMed

    Lewis, R M; Emmans, G C

    2010-02-01

The hypotheses tested were that genetic size-scaling for mature BW (A, kg) would reduce variation in intake between kinds of sheep and that quadratic polynomials on u = BW/A with zero intercept would provide good descriptions of the relationship between scaled intake (SI, g/A^0.73 per day) and degree of maturity in BW (u) across feeds of differing quality. Both sexes of Suffolk sheep from 2 experimental lines (n = 225) and from 3 breed types (Suffolk, Scottish Blackface, and their cross; n = 149) were recorded weekly for ad libitum feed intake and BW; intake was recorded from weaning until, in some cases, near maturity. Six diets of different quality were fed ad libitum. The relationship between intake and BW on a given feed varied considerably between kinds of sheep. Much, but not all, of that variation was removed by genetic size-scaling. In males, the maximum value of SI was greater than in females (P = 0.07) and was greater in Suffolk than in Scottish Blackface, with the cross intermediate (P = 0.025); there was no difference between the 2 Suffolk lines used (P = 0.106). The quadratic polynomial model, through the origin, was compared with a split-line (spline) regression for describing how SI varied with u. For the spline model, the intercept was not different from zero in any case (P > 0.05). The value of u at which SI achieved its maximum (u*) and that maximum value (SI*) were calculated. Both models fit the data well; the quadratic was preferred because it predicted that SI* would be achieved within the range of the long-run data, as was observed. On a high-quality feed, for the spline regression, u* varied little around 0.434 (SD = 0.020) for the 10 different kinds of sheep used. For the quadratic, the mean value of 0.643 (SD = 0.066) was more variable, but there were no consistent effects of kind of sheep. The values of u* and SI* estimated using the quadratic model varied among the 6 feeds: 0.643 and 78.5 on high quality; 0.760 and 79.6 on medium protein content; 0.859 and 73.3 on low protein content; 0.756 and 112 on a low energy content feed; 0.937 and 107 on ryegrass; and 1 (forced, as the fitted value of 1.11 was infeasible) and 135 on lucerne. The value of u* tended to increase as feed digestibility decreased. We conclude that genetic size-scaling of intake is useful and that a quadratic polynomial with zero intercept provides a good description of the relationship between SI and u for different kinds of sheep on feeds of different quality. Up to u ≈ 0.45, intake was directly proportional to BW.
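The quadratic-through-origin model above has a closed-form peak: for SI = b1·u + b2·u², the maximum occurs at u* = -b1/(2·b2). A minimal sketch of fitting such a zero-intercept quadratic by least squares; the coefficient values and synthetic data are illustrative, not the paper's:

```python
import numpy as np

# Sketch of the quadratic-through-origin model SI = b1*u + b2*u^2.
# The coefficients and synthetic data are illustrative assumptions,
# chosen so that the true peak lies near u* = 0.642.
rng = np.random.default_rng(0)
u = rng.uniform(0.2, 1.0, 200)           # degree of maturity in BW
b1_true, b2_true = 244.0, -190.0         # true u* = -b1/(2*b2) ~ 0.642
si = b1_true * u + b2_true * u**2 + rng.normal(0, 2.0, u.size)

# Least squares with a zero intercept: design matrix has no constant column.
X = np.column_stack([u, u**2])
(b1, b2), *_ = np.linalg.lstsq(X, si, rcond=None)

u_star = -b1 / (2.0 * b2)                # maturity at which scaled intake peaks
si_star = b1 * u_star + b2 * u_star**2   # the peak scaled intake itself
```

Forcing the intercept to zero simply means omitting the constant column from the design matrix, which encodes the biological constraint that intake is zero at zero body weight.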

  10. Sinogram restoration for ultra-low-dose x-ray multi-slice helical CT by nonparametric regression

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Siddiqui, Khan; Zhu, Bin; Tao, Yang; Siegel, Eliot

    2007-03-01

During the last decade, x-ray computed tomography (CT) has been applied to screen large asymptomatic smoking and nonsmoking populations for early lung cancer detection. Because a larger population will be involved in such screening exams, more and more attention has been paid to low-dose, even ultra-low-dose, x-ray CT. However, reducing CT radiation exposure increases the noise level in the sinogram, thereby degrading the quality of reconstructed CT images and causing more streak artifacts near the apices of the lung. Thus, reducing the noise levels and streak artifacts in low-dose CT images has become a meaningful topic. Since multi-slice helical CT has replaced conventional stop-and-shoot CT in many clinical applications, this research mainly focused on the noise reduction issue in multi-slice helical CT. The experimental data were provided by a Siemens SOMATOM Sensation 16-slice helical CT scanner and included both conventional CT data acquired under a 120 kVp, 119 mA protocol and ultra-low-dose CT data acquired under a 120 kVp, 10 mA protocol; all other settings were the same as those of the conventional CT. In this paper, a nonparametric smoothing method with thin-plate smoothing splines and a roughness penalty was proposed to restore the ultra-low-dose CT raw data. Each projection frame was first divided into blocks, and the 2D data in each block were then fitted to a thin-plate smoothing-spline surface by minimizing a roughness-penalized least squares objective function. By doing so, the noise in each ultra-low-dose CT projection was reduced by leveraging the information contained not only within each individual projection profile but also among nearby profiles. Finally, the restored ultra-low-dose projection data were fed into the standard filtered back projection (FBP) algorithm to reconstruct CT images. The restored results, together with a comparison between the proposed approach and a traditional method, are given in the results and discussion section and show the effectiveness of the proposed thin-plate-based nonparametric regression method.
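The block-wise restoration step can be sketched with SciPy's thin-plate-spline radial basis interpolator, whose `smoothing` parameter plays the role of the roughness penalty. Block size, noise level, and penalty weight here are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Minimal sketch of block-wise thin-plate-spline smoothing of a noisy
# 2D data block (stand-in for one block of a projection frame).
rng = np.random.default_rng(1)
block = np.cos(np.linspace(0, np.pi, 16))[:, None] * np.ones((1, 16))
noisy = block + rng.normal(0, 0.3, block.shape)

# Grid coordinates of the block's pixels, flattened to (n_points, 2).
yy, xx = np.mgrid[0:16, 0:16]
pts = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)

# smoothing > 0 turns exact interpolation into roughness-penalized fitting.
tps = RBFInterpolator(pts, noisy.ravel(),
                      kernel='thin_plate_spline', smoothing=50.0)
restored = tps(pts).reshape(block.shape)
```

In the paper's setting this fit-and-evaluate step would be repeated per block of each projection frame, with the smoothing weight controlling the trade-off between noise suppression and fidelity to the raw counts.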

  11. Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.

    PubMed

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2013-03-01

This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, outstanding accuracy was achieved with the cubic B-splines. This accuracy was significantly better (p < 0.001) than that obtained using the best solution proposed in our previous work. When applied to align real image sequences with an unknown transformation involved, the alignment based on cubic B-splines also achieved better results than our previous methodology (p < 0.001). The consequences of the temporal alignment for the dynamic center of pressure (COP) displacement were also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement in related acquired plantar pressure image sequences.

  12. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

A numerical method based on B-spline polynomials was developed to study incompressible flows in cylindrical geometries. A B-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e., smoothness and boundedness. Because B-splines have compact support, it is also possible to remove B-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic (so-called time-evolving flow) greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near-wall region are physical; regions of high radial velocity were detected and appear to be related to high-speed streaks in the boundary layer. Budgets of the Reynolds stress transport equations showed close similarity with those of channel flow. However, contrary to channel flow, the log layer of pipe flow is not homogeneous at the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.

  13. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and a Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness.
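The recommended trapezoidal variant can be sketched in two steps, here on a hypothetical one-parameter decay model dx/dt = -θx rather than the paper's HIV model; the model, noise level, and smoothing parameter are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Step 0: simulate noisy observations of the hypothetical ODE dx/dt = -theta*x.
theta_true = 0.5
t = np.linspace(0.0, 5.0, 60)
rng = np.random.default_rng(2)
x_obs = np.exp(-theta_true * t) + rng.normal(0, 0.01, t.size)

# Step 1: smooth the state trajectory with a (penalized) smoothing spline.
x_hat = UnivariateSpline(t, x_obs, s=60 * 0.01**2)(t)

# Step 2: trapezoidal rule  x_{i+1} - x_i = (h/2) * (-theta) * (x_i + x_{i+1}),
# which is linear in theta, so theta follows from one-dimensional least squares.
h = t[1] - t[0]
dx = np.diff(x_hat)
xbar = (h / 2.0) * (x_hat[:-1] + x_hat[1:])
theta_hat = -np.dot(xbar, dx) / np.dot(xbar, xbar)
```

The appeal of the two-step scheme is visible here: no ODE solver is ever run inside the estimation loop, because the smoothed states stand in for the solution.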

  14. Do factors related to combustion-based sources explain ...

    EPA Pesticide Factsheets

Introduction: Spatial heterogeneity of effect estimates in associations between PM2.5 and total non-accidental mortality (TNA) in the United States (US) is an issue in epidemiology. This study uses rate ratios generated from the Multi-City/Multi-Pollutant study (1999-2005) for 313 core-based statistical areas (CBSA) and their metropolitan divisions (MD) to examine combustion-based sources of heterogeneity. Methods: For CBSA/MDs, area-specific log rate ratios (betas) were derived from a model adjusting for time, an interaction with age group, day of week, and natural splines of current temperature, current dew point, and unconstrained temperature at lags 1, 2, and 3. We assessed the heterogeneity in the betas by linear regression with inverse variance weights, using average NO2, SO2, and CO, which may act as combustion source proxies, and these pollutants' correlations with PM2.5. Results: We found that the weighted mean PM2.5 association (a 0.96 percent increase in total non-accidental mortality for a 10 µg/m3 increment in PM2.5) increased by 0.26 (95% confidence interval 0.08, 0.44) for an interquartile change (0.2) in the correlation of SO2 and PM2.5, but the betas showed less dependence on the annual averages of SO2 or NO2. Spline analyses suggest departures from linearity, particularly in a model that examined correlations between PM2.5 and CO. Conclusions: We conclude that correlations between SO2 and PM2.5 as an indicator of combustion sources explain some hete
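The inverse-variance-weighted heterogeneity regression in the Methods can be sketched as ordinary weighted least squares. The synthetic betas, variances, and the `corr_so2_pm` covariate below are hypothetical stand-ins, with coefficients loosely scaled to the percent-increase results quoted above:

```python
import numpy as np

# Hypothetical area-level data: mortality associations expressed as percent
# increase per 10 ug/m3 PM2.5, their estimation variances, and each area's
# SO2-PM2.5 correlation (all synthetic).
rng = np.random.default_rng(3)
n = 313
corr_so2_pm = rng.uniform(0.0, 0.8, n)
var_beta = rng.uniform(0.05, 0.15, n)
beta = 0.7 + 1.3 * corr_so2_pm + rng.normal(0, np.sqrt(var_beta))

# Weighted least squares: solve (X' W X) b = X' W y with W = diag(1/var),
# so precisely estimated areas get more weight.
X = np.column_stack([np.ones(n), corr_so2_pm])
w = 1.0 / var_beta
XtWX = X.T @ (X * w[:, None])
XtWy = X.T @ (w * beta)
intercept, slope = np.linalg.solve(XtWX, XtWy)

# An interquartile change of 0.2 in the correlation then shifts the
# expected association by 0.2 * slope.
iqr_effect = 0.2 * slope
```

Weighting by inverse variance is what distinguishes this from an ordinary regression of the betas: it accounts for the fact that each area's log rate ratio is itself an estimate with its own uncertainty.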

  15. Dose-Response Association Between Physical Activity and Incident Hypertension: A Systematic Review and Meta-Analysis of Cohort Studies.

    PubMed

    Liu, Xuejiao; Zhang, Dongdong; Liu, Yu; Sun, Xizhuo; Han, Chengyi; Wang, Bingyuan; Ren, Yongcheng; Zhou, Junmei; Zhao, Yang; Shi, Yuanyuan; Hu, Dongsheng; Zhang, Ming

    2017-05-01

Despite the inverse association between physical activity (PA) and incident hypertension, a comprehensive assessment of the quantitative dose-response association between PA and hypertension has not been reported. We performed a meta-analysis, including dose-response analysis, to quantitatively evaluate this association. We searched the PubMed and Embase databases for articles published up to November 1, 2016. Random-effects generalized least squares regression models were used to assess the quantitative association between PA and hypertension risk across studies. Restricted cubic splines were used to model the dose-response association. We identified 22 articles (29 studies) investigating the risk of hypertension with leisure-time PA or total PA, including 330 222 individuals and 67 698 incident cases of hypertension. The risk of hypertension was reduced by 6% (relative risk, 0.94; 95% confidence interval, 0.92-0.96) with each 10 metabolic equivalent of task (MET) h/wk increment of leisure-time PA. We found no evidence of a nonlinear dose-response association of PA and hypertension (P for nonlinearity = 0.094 for leisure-time PA and 0.771 for total PA). With the linear cubic spline model, compared with inactive individuals, those who met the guideline-recommended minimum level of moderate PA (10 MET h/wk) had a 6% reduced risk of hypertension (relative risk, 0.94; 95% confidence interval, 0.92-0.97). This meta-analysis suggests that additional benefits for hypertension prevention occur as the amount of PA increases.

  16. Reduced rank regression via adaptive nuclear norm penalization

    PubMed Central

    Chen, Kun; Dong, Hongbo; Chan, Kung-Sik

    2014-01-01

We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of, and prediction/estimation performance bounds for, the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
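The closed-form solution described above, an adaptively soft-thresholded SVD, can be sketched as follows. The weight choice w_i = σ_i^(-γ) and the tuning values are illustrative, not the paper's data-driven choices:

```python
import numpy as np

# Sketch of adaptive soft-thresholding of singular values: weights
# decrease with the singular value, so small (noise) singular values
# are penalized more heavily than large (signal) ones.
def adaptive_svt(Y, lam=1.0, gamma=2.0):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = s ** (-gamma)                        # heavier penalty on small sigma_i
    s_shrunk = np.maximum(s - lam * w, 0.0)  # adaptive soft-thresholding
    return (U * s_shrunk) @ Vt

# Rank-3 signal plus small Gaussian noise: the adaptive threshold should
# zero out the noise directions while barely shrinking the signal ones.
rng = np.random.default_rng(4)
L = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 20))
Y = L + 0.05 * rng.normal(size=(30, 20))
L_hat = adaptive_svt(Y, lam=0.5, gamma=2.0)
rank_hat = np.linalg.matrix_rank(L_hat)
```

Because the threshold lam·σ^(-γ) is negligible for large singular values, the estimator behaves almost like a hard rank truncation on strong signals while still shrinking weak directions continuously, which is what keeps the solution path continuous.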

  17. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation-diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set, and if that is impossible, it subdivides the data set and reconsiders the subsets. After accepting a subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy, even in cases with large tolerances.
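One step of the data-reduction idea, fitting a single cubic Bezier segment to a run of points and checking it against a tolerance, can be sketched as below. The endpoint-interpolating, chord-length-parameterized least squares fit is an illustrative simplification, not the paper's exact algorithm:

```python
import numpy as np

# Fit one cubic Bezier segment to a run of 2D points: end control points
# fixed at the data endpoints, interior control points by least squares,
# and a max-deviation check against the tolerance.
def fit_cubic_bezier(pts, tol):
    pts = np.asarray(pts, float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]                                    # chord-length parameters
    B = np.column_stack([(1 - t)**3, 3 * t * (1 - t)**2,
                         3 * t**2 * (1 - t), t**3])  # Bernstein basis
    # Move the known endpoint terms to the right-hand side, then solve
    # for the two interior control points.
    rhs = pts - np.outer(B[:, 0], pts[0]) - np.outer(B[:, 3], pts[-1])
    interior, *_ = np.linalg.lstsq(B[:, 1:3], rhs, rcond=None)
    ctrl = np.vstack([pts[0], interior, pts[-1]])
    err = np.linalg.norm(B @ ctrl - pts, axis=1).max()
    return ctrl, err, err <= tol

# Points sampled from a smooth arc should be fit well by one segment.
s = np.linspace(0, np.pi / 2, 25)
data = np.column_stack([np.cos(s), np.sin(s)])
ctrl, err, ok = fit_cubic_bezier(data, tol=1e-2)
```

In the full data-reduction loop, a failed tolerance check would trigger the subdivision step: split the run of points and retry the fit on each part.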

  18. Tensorial Basis Spline Collocation Method for Poisson's Equation

    NASA Astrophysics Data System (ADS)

    Plagne, Laurent; Berthou, Jean-Yves

    2000-01-01

This paper aims to describe the tensorial basis spline collocation method (TBSCM) applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method, based on a tensorial decomposition of the differential operator, is shown to be competitive with both the iterative BSCM and FFT-based methods. We emphasize the O(h^4) and O(h^6) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed-memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: as an example, a performance of 27 Gflops is obtained when solving Poisson's equation on a 256^3 non-uniform 3D Cartesian mesh using 128 T3E-750 processors. This represents 215 Mflops per processor.

  19. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.

  20. A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid

    NASA Astrophysics Data System (ADS)

    Sulaimanov, Z. M.; Shumilov, B. M.

    2017-10-01

For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in the form of the solution to a tridiagonal system of linear algebraic equations for the coefficients. Hand computations are used to investigate the application of this algorithm to numerical differentiation. The results are illustrated by solving a prediction problem.
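The core linear-algebra step, a tridiagonal system for the coefficients, can be solved in O(n) with a banded solver. The matrix below is a generic diagonally dominant tridiagonal example, not the spline-specific matrix from the paper:

```python
import numpy as np
from scipy.linalg import solve_banded

# Generic diagonally dominant tridiagonal system A x = rhs, solved in O(n).
n = 8
lower = np.full(n, 1.0)   # sub-diagonal
diag = np.full(n, 4.0)    # main diagonal
upper = np.full(n, 1.0)   # super-diagonal
rhs = np.arange(1.0, n + 1.0)

# solve_banded expects the bands stacked row-wise as (upper, diag, lower),
# with the unused corner entries padded (their values are ignored).
ab = np.vstack([np.r_[0.0, upper[:-1]], diag, np.r_[lower[1:], 0.0]])
coeffs = solve_banded((1, 1), ab, rhs)
```

A banded solver avoids forming the full n-by-n matrix, which is exactly what makes the wavelet expansion cheap even for long coefficient vectors.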
