NASA Technical Reports Server (NTRS)
Rogers, David
1991-01-01
G/SPLINES are a hybrid of Friedman's Multivariable Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.
Technology Transfer Automated Retrieval System (TEKTRAN)
Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS) is a nonparametric method that estimates complex nonlinear relationships by a seri...
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and react to their changes. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model is a nonparametric, adaptive regression method well suited to problems with high dimensions and several variables; the semi-parametric technique is based on smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio, and risk) as influential for predicting stock price with the MARS model. After fitting the semi-parametric splines, only 4 accounting variables (dividends, net EPS, EPS forecast, and P/E ratio) were selected as effective in forecasting stock prices.
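A smoothing spline of the kind used in the semi-parametric part of this study can be sketched with SciPy's `UnivariateSpline`, where the parameter `s` bounds the residual sum of squares and thereby controls smoothness. The data below are a synthetic stand-in for illustration, not the study's accounting variables.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 80)
y_true = np.log1p(x)                       # underlying smooth signal
y = y_true + rng.normal(0.0, 0.1, x.size)  # noisy observations

# s bounds sum((y - spline(x))**2); matching it to the expected noise
# energy (n * sigma**2) is a common starting point.
spl = UnivariateSpline(x, y, s=x.size * 0.1 ** 2)
y_hat = spl(x)
```

Raising `s` forces a smoother curve with fewer knots; `s=0` would interpolate the noise exactly.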
NASA Astrophysics Data System (ADS)
Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza
2014-10-01
The aim of this research work is to build a regression model of particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (northern Spain) at a local scale. MARS is a nonparametric regression algorithm with the ability to approximate the relationship between inputs and outputs and to express that relationship mathematically. In this context, hazardous air pollutants or toxic air contaminants are substances that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, experimental data on nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) were collected over 3 years (2006-2008) and used to create a highly nonlinear MARS-based model of PM10 in the Oviedo urban nucleus. One main objective of this model is to obtain a preliminary estimate of the dependence between PM10 and the other main pollutants in the Oviedo urban area at a local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish the limit values of the main pollutants in the atmosphere in order to protect human health. Firstly, this MARS regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of
[Multivariate Adaptive Regression Splines (MARS), an alternative for the analysis of time series].
Vanegas, Jairo; Vásquez, Fabián
2016-12-19
Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model, incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: selecting relevant variables, transforming the predictor variables, processing missing values and preventing overshooting using a self-test. It is also able to predict, taking into account structural factors that might influence the outcome variable, thereby generating hypothetical models. The end result could identify relevant cut-off points in data series. It is rarely used in health, so it is proposed as a tool for the evaluation of relevant public health indicators. For demonstrative purposes, data series regarding the mortality of children under 5 years of age in Costa Rica were used, comprising the period 1978-2008.
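The piecewise-linear transformations MARS builds from can be illustrated with a short NumPy sketch: a greedy scan over candidate knots for a single mirrored hinge pair on synthetic data with a known kink. This is a toy version of the forward pass only, not Friedman's full forward/backward algorithm with generalized cross-validation pruning.

```python
import numpy as np

# Synthetic data with a known kink at x = 1: the MARS hinge functions
# max(0, x - t) and max(0, t - x) can represent exactly this shape.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3.0, 3.0, 200))
y = np.where(x < 1.0, 0.5 * x, 0.5 * x + 2.0 * (x - 1.0))

best_sse, best_knot = np.inf, None
for knot in x[10:-10]:  # candidate knots at interior data points
    h_plus = np.maximum(0.0, x - knot)
    h_minus = np.maximum(0.0, knot - x)
    X = np.column_stack([np.ones_like(x), x, h_plus, h_minus])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
    sse = float(np.sum((X @ beta - y) ** 2))
    if sse < best_sse:
        best_sse, best_knot = sse, float(knot)
```

The scan recovers a knot next to the true kink; full MARS repeats this search over all variables and knots, adding terms (and products of terms) greedily before pruning.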
Nieto, P J García; Antón, J C Álvarez; Vilán, J A Vilán; García-Gonzalo, E
2015-05-01
The aim of this research work is to build a regression model of air quality by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (northern Spain) at a local scale. To accomplish the objective of this study, the experimental data set made up of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3), and dust (PM10) was collected over 3 years (2006-2008). The US National Ambient Air Quality Standards (NAAQS) establish the limit values of the main pollutants in the atmosphere in order to protect human health. Firstly, this MARS regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of these numerical calculations using the MARS technique, the conclusions of this research work are presented.
1991-09-01
[Fragment of acknowledgments and references] GRAFSTAT from IBM Research; thanks to Dr. Peter Welch for supplying GRAFSTAT, and to P.A.W. Lewis for support and confidence. References include: "Multivariate Adaptive Regression Splines", Annals of Statistics, v. 19, no. 2, pp. 1-142, 1991; Gelb, A., Applied Optimal Estimation, M.I.T. Press, Cambridge.
Structural break detection method based on the Adaptive Regression Splines technique
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Zimroz, Radosław
2017-04-01
For many real data sets, long-term observation consists of different processes that coexist or occur one after the other. Those processes often exhibit different statistical properties, so the observed data should be segmented before further analysis. This problem arises in many applications, and new segmentation techniques have therefore appeared in the literature in recent years. In this paper we propose a new method of time series segmentation, i.e., extraction from the analysed vector of observations of homogeneous parts with similar behaviour. The method is based on the absolute deviation about the median of the signal and extends previously proposed techniques that are also based on simple statistics. We introduce a structural break point detection method based on the Adaptive Regression Splines technique, a form of regression analysis. Moreover, we propose a statistical test for the hypothesis that the data exhibit behaviour related to different regimes. First, we apply the methodology to simulated signals with different distributions in order to show the effectiveness of the new technique. Next, in the application part, we analyse a real data set representing the vibration signal from a heavy-duty crusher used in a mineral processing plant.
NASA Astrophysics Data System (ADS)
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth differ in structure and performance. Local models divide the data into separate subsets and fit distinct models on each subset, while global models perform modelling using all the available data. Non-parametric models are data driven and usually do not have a fixed model structure before the modelling process, whereas parametric models have a fixed structure and are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model, multivariate adaptive regression splines (MARS), and a global parametric model, an artificial neural network (ANN), to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to the central business district, the number of agricultural cells in a 7 × 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
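The area-under-the-ROC-curve figures quoted above can be computed without plotting the curve: AUC equals the probability that a randomly chosen positive (here, urban) cell is scored above a randomly chosen negative one, with ties counted half. A minimal computation with made-up scores, not the paper's data:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie), i.e. the normalized
    Mann-Whitney U statistic, computed over all positive/negative pairs."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.7], [0.2, 0.3, 0.1]))  # perfect separation -> 1.0
print(auc([0.6, 0.4], [0.6, 0.4]))            # indistinguishable -> 0.5
```

The pairwise form is O(n·m) and fine for illustration; production code would use a rank-based formula instead.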
Quirós, Elia; Felicísimo, Ángel M.; Cuartero, Aurora
2009-01-01
This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test. PMID:22291550
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant cha...
Ghasemi, Jahan B; Zolfonoun, Ehsan
2013-11-01
A new multicomponent analysis method, based on principal component analysis-multivariate adaptive regression splines (PC-MARS), is proposed for the determination of dialkyltin compounds. In Tween-20 micellar media, dimethyl- and dibutyltin react with morin to give fluorescent complexes with maximum emission peaks at 527 and 520 nm, respectively. Before building the MARS models, the spectrofluorimetric matrix data were subjected to principal component analysis and decomposed to PC scores as starting points for the MARS algorithm. The algorithm classifies the calibration data into several groups, in each of which a regression line or hyperplane is fitted. The performance of the proposed method was tested in terms of root mean square errors of prediction (RMSEP), using synthetic solutions. The results show the strong potential of PC-MARS, as a multivariate calibration method, to be applied to spectral data for multicomponent determinations. The effects of different experimental parameters on the performance of the method were studied and discussed. The prediction capability of the proposed method was compared with that of a GC-MS method for the determination of dimethyltin and/or dibutyltin.
NASA Astrophysics Data System (ADS)
Emamgolizadeh, S.; Bateni, S. M.; Shahsavani, D.; Ashrafi, T.; Ghorbani, H.
2015-10-01
The soil cation exchange capacity (CEC) is one of the main soil chemical properties and is required in various fields such as environmental and agricultural engineering as well as soil science. In situ measurement of CEC is time consuming and costly. Hence, numerous studies have used traditional regression-based techniques to estimate CEC from more easily measurable soil parameters (e.g., soil texture, organic matter (OM), and pH). However, these models may not adequately capture the complex and highly nonlinear relationship between CEC and its influential soil variables. In this study, Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS) were employed to estimate CEC from more readily measurable soil physical and chemical variables (e.g., OM, clay, and pH) by developing functional relations. The GEP- and MARS-based functional relations were tested at two field sites in Iran. Results showed that GEP and MARS can provide reliable estimates of CEC. Also, it was found that the MARS model (with a root-mean-square error (RMSE) of 0.318 Cmol+ kg-1 and a coefficient of determination (R2) of 0.864) generated slightly better results than the GEP model (with an RMSE of 0.270 Cmol+ kg-1 and R2 of 0.807). The performance of the GEP and MARS models was compared with two existing approaches, namely artificial neural network (ANN) and multiple linear regression (MLR). The comparison indicated that MARS and GEP outperformed the MLR model, but they did not perform as well as ANN. Finally, a sensitivity analysis was conducted to determine the most and the least influential variables affecting CEC. It was found that OM and pH have the most and least significant effects on CEC, respectively.
Xu, A; Zhang, Y; Ran, T; Liu, H; Lu, S; Xu, J; Xiong, X; Jiang, Y; Lu, T; Chen, Y
2015-01-01
Bruton's tyrosine kinase (BTK) plays a crucial role in B-cell activation and development, and has emerged as a new molecular target for the treatment of autoimmune diseases and B-cell malignancies. In this study, two- and three-dimensional quantitative structure-activity relationship (2D- and 3D-QSAR) analyses were performed on a series of pyridine- and pyrimidine-based BTK inhibitors by means of genetic algorithm optimized multivariate adaptive regression spline (GA-MARS) and comparative molecular similarity index analysis (CoMSIA) methods. Here, we propose a modified MARS algorithm to develop 2D-QSAR models. The top ranked models showed satisfactory statistical results (2D-QSAR: Q² = 0.884, r² = 0.929, r²pred = 0.878; 3D-QSAR: q² = 0.616, r² = 0.987, r²pred = 0.905). Key descriptors selected by 2D-QSAR were in good agreement with the conclusions of 3D-QSAR, and the 3D-CoMSIA contour maps facilitated interpretation of the structure-activity relationship. A new molecular database was generated by molecular fragment replacement (MFR) and further evaluated with GA-MARS and CoMSIA prediction. Twenty-five pyridine and pyrimidine derivatives as novel potential BTK inhibitors were finally selected for further study. These results also demonstrated that our method can be a very efficient tool for the discovery of novel potent BTK inhibitors.
Modeling of topographic effects on Antarctic sea ice using multivariate adaptive regression splines
NASA Technical Reports Server (NTRS)
De Veaux, Richard D.; Gordon, Arnold L.; Comiso, Joey C.; Bacherer, Nadine E.
1993-01-01
The role of seafloor topography in the spatial variations of the southern ocean sea ice cover as observed (every other day) by the Nimbus 7 scanning multichannel microwave radiometer satellite in the years 1980, 1983, and 1984 is studied. Bottom bathymetry can affect sea ice surface characteristics because of the basically barotropic circulation of the ocean south of the Antarctic Circumpolar current. The main statistical tool used to quantify this effect is a local nonparametric regression model of sea ice concentration as a function of the depth and its first two derivatives in both meridional and zonal directions. First, we model the relationship of bathymetry to sea ice concentration in two study areas, one over the Maud Rise and the other over the Ross Sea shelf region. The multiple correlation coefficient is found to average 44% in the Maud Rise study area and 62% in the Ross Sea study area over the years 1980, 1983, and 1984. Second, a strategy of dividing the entire Antarctic region into an overlapping mosaic of small areas, or windows, is considered. Keeping the windows small reduces the correlation of bathymetry with other factors such as wind, sea temperature, and distance to the continent. We find that although the form of the model varies from window to window due to the changing role of other relevant environmental variables, we are left with a spatially consistent ordering of the relative importance of the topographic predictors. For a set of three representative days in the Austral winter of 1980, the analysis shows that an average of 54% of the spatial variation in sea ice concentration over the entire ice cover can be attributed to topographic variables. The results thus support the hypothesis that there is a sea ice to bottom bathymetry link. However, this should not undermine the considerable influence of wind, current, and temperature, which affect the ice distribution directly and are partly responsible for the observed bathymetric effects.
NASA Astrophysics Data System (ADS)
Deo, Ravinesh C.; Kisi, Ozgur; Singh, Vijay P.
2017-02-01
Drought forecasting using standardized metrics of rainfall is a core task in hydrology and water resources management. The Standardized Precipitation Index (SPI) is a rainfall-based metric that caters for the different time-scales at which drought occurs and, due to its standardization, is well suited for forecasting drought at different periods in climatically diverse regions. This study advances drought modelling using multivariate adaptive regression splines (MARS), least square support vector machine (LSSVM), and M5Tree models by forecasting SPI in eastern Australia. The MARS model incorporated rainfall as a mandatory predictor, with month (periodicity), Southern Oscillation Index, Pacific Decadal Oscillation Index, Indian Ocean Dipole, ENSO Modoki, and Nino 3.0, 3.4 and 4.0 data added gradually. Performance was evaluated with root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (r2). The best MARS model required different input combinations, where rainfall, sea surface temperature and periodicity were used for all stations, but the ENSO Modoki and Pacific Decadal Oscillation indices were not required for Bathurst, Collarenebri and Yamba, and the Southern Oscillation Index was not required for Collarenebri. Inclusion of periodicity increased the r2 value by 0.5-8.1% and reduced RMSE by 3.0-178.5%. Comparisons showed that MARS superseded the performance of the other counterparts for three out of five stations, with lower MAE by 15.0-73.9% and 7.3-42.2%, respectively. For the other stations, M5Tree was better than MARS/LSSVM, with lower MAE by 13.8-13.4% and 25.7-52.2%, respectively, and for Bathurst, LSSVM yielded a more accurate result. For droughts identified by SPI ≤ - 0.5, accurate forecasts were attained by MARS/M5Tree for Bathurst, Yamba and Peak Hill, whereas for Collarenebri and Barraba, M5Tree was better than LSSVM/MARS. Seasonal analysis revealed disparate results where MARS/M5Tree was better than LSSVM. The results highlight the
Interactive natural image segmentation via spline regression.
Xiang, Shiming; Nie, Feiping; Zhang, Chunxia; Zhang, Changshui
2009-07-01
This paper presents an interactive algorithm for the segmentation of natural images. The task is formulated as a problem of spline regression, in which the spline is derived in a Sobolev space and has the form of a combination of linear and Green's functions. Besides its nonlinear representation capability, one advantage of this spline is that, once it has been constructed, no parameters need to be tuned to the data. We define this spline on the user-specified foreground and background pixels, and solve its parameters (the combination coefficients of the functions) from a group of linear equations. To speed up spline construction, the K-means clustering algorithm is employed to cluster the user-specified pixels. By taking the cluster centers as representatives, the spline can be easily constructed. The foreground object is finally cut out from its background via spline interpolation. The computational complexity of the proposed algorithm is linear in the number of pixels to be segmented. Experiments on diverse natural images, with comparisons to existing algorithms, illustrate the validity of our method.
NASA Astrophysics Data System (ADS)
Conoscenti, Christian; Ciaccio, Marilena; Caraballo-Arias, Nathalie Almaru; Gómez-Gutiérrez, Álvaro; Rotigliano, Edoardo; Agnesi, Valerio
2015-08-01
In this paper, terrain susceptibility to earth-flow occurrence was evaluated by using geographic information systems (GIS) and two statistical methods: logistic regression (LR) and multivariate adaptive regression splines (MARS). LR has already been demonstrated to provide reliable predictions of earth-flow occurrence, whereas MARS, as far as we know, has never been used to generate earth-flow susceptibility models. The experiment was carried out in a basin of western Sicily (Italy), which extends over 51 km² and is severely affected by earth-flows. In total, we mapped 1376 earth-flows, covering an area of 4.59 km². To explore the effect of pre-failure topography on earth-flow spatial distribution, we reconstructed the topography prior to landslide occurrence. This was achieved by preparing a digital terrain model (DTM) in which the altitude of areas hosting landslides was interpolated from the adjacent undisturbed land surface using the topo-to-raster algorithm. This DTM was used to extract 15 morphological and hydrological variables that, in addition to outcropping lithology, were employed as explanatory variables of earth-flow spatial distribution. The predictive skill of the earth-flow susceptibility models and the robustness of the procedure were tested by preparing five datasets, each including a different subset of landslides and stable areas. The accuracy of the predictive models was evaluated by drawing receiver operating characteristic (ROC) curves and by calculating the area under the ROC curve (AUC). The results demonstrate that the overall accuracy of the LR and MARS earth-flow susceptibility models is from excellent to outstanding. However, AUC values of the validation datasets attest to a higher predictive power of the MARS models (AUC between 0.881 and 0.912) with respect to the LR models (AUC between 0.823 and 0.870). The adopted procedure proved to be resistant to overfitting and stable when changes of the learning and validation samples are
A Spline Regression Model for Latent Variables
ERIC Educational Resources Information Center
Harring, Jeffrey R.
2014-01-01
Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…
Marginal longitudinal semiparametric regression via penalized splines
Kadiri, M. Al; Carroll, R.J.; Wand, M.P.
2010-01-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been proposed, a relative simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models. PMID:21037941
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Parmar, Kulwinder Singh
2016-03-01
This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) models in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi, on the Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the LSSVM and M5Tree models was decreased by 1.47% and 19.1%, respectively, by using the MARS model. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels by using the AMM, TKN and WT parameters as inputs.
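Model comparisons of the kind reported above quote percentage decreases in RMSE relative to a baseline model; the computation is generic and easy to reproduce. The numbers below are illustrative only, not the Yamuna River data.

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Illustrative observations and two competing model outputs.
y_obs = [3.0, 5.0, 2.5, 7.0]
y_base = [2.5, 5.0, 4.0, 8.0]   # hypothetical baseline model
y_mars = [3.0, 4.5, 2.0, 7.5]   # hypothetical improved model

# Percentage decrease in RMSE relative to the baseline,
# the style of comparison quoted in the abstract above.
decrease = 100.0 * (rmse(y_obs, y_base) - rmse(y_obs, y_mars)) / rmse(y_obs, y_base)
```

Note that a percentage decrease is always computed relative to the larger baseline error, which is why it cannot exceed 100% when stated this way.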
Butte, Nancy F.; Wong, William W.; Adolph, Anne L.; Puyau, Maurice R.; Vohra, Firoz A.; Zakeri, Issa F.
2010-01-01
Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant characteristics, heart rate (HR), and accelerometer counts (AC) for prediction of minute-by-minute EE, and hence 24-h total EE (TEE), against a 7-d doubly labeled water (DLW) method in children and adolescents. Our secondary aim was to demonstrate the utility of CSTS and MARS to predict awake EE, sleep EE, and activity EE (AEE) from 7-d HR and AC records, because these shorter periods are not verifiable by DLW, which provides an estimate of the individual's mean TEE over a 7-d interval. CSTS and MARS models were validated in 60 normal-weight and overweight participants (ages 5–18 y). The Actiheart monitor was used to simultaneously measure HR and AC. For prediction of TEE, mean absolute errors were 10.7 ± 307 kcal/d and 18.7 ± 252 kcal/d for CSTS and MARS models, respectively, relative to DLW. Corresponding root mean square error values were 305 and 251 kcal/d for CSTS and MARS models, respectively. Bland-Altman plots indicated that the predicted values were in good agreement with the DLW-derived TEE values. Validation of CSTS and MARS models based on participant characteristics, HR monitoring, and accelerometry for the prediction of minute-by-minute EE, and hence 24-h TEE, against the DLW method indicated no systematic bias and acceptable limits of agreement for pediatric groups and individuals under free-living conditions. PMID:20573939
Multivariate Adaptive Regression Splines (Preprint)
1990-08-01
…situations, but as with the previous examples, the variance of the ratio (GCV/PSE) dominates this small bias. 4.5. Portuguese Olive Oil. For this example MARS is applied to data from analytical chemistry. The observations consist of 417 samples of olive oil from Portugal (Forina, et al., 1983). On … extent to which olive oil from northeastern Portugal (Douro Valley - 90 samples) differed from that of the rest of Portugal (327 samples). One way to …
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
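A penalized-spline fit of the general kind discussed above can be sketched with a truncated-power basis and a single ridge penalty λ on the knot coefficients. This is a Ruppert-Wand-style simplification for illustration, not the paper's P-spline functional-coefficient estimator.

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam):
    """Fit f(x) = b0 + b1*x + sum_k u_k * max(0, x - knot_k) with a
    ridge penalty lam on the knot coefficients u_k; the polynomial part
    is left unpenalized, so lam only shrinks the wiggly component."""
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(0.0, x - k) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))  # penalize knot terms only
    beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ beta

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 120)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)
knots = np.linspace(0.05, 0.95, 20)

fit_smooth = penalized_spline_fit(x, y, knots, lam=1.0)
fit_rough = penalized_spline_fit(x, y, knots, lam=1e-8)
```

Larger λ trades training fit for smoothness, which is exactly the role of the smoothing parameter selected by MCV, GCV, EBBS, or REML in the paper.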
Balshi, M. S.; McGuire, A.D.; Duffy, P.; Flannigan, M.; Walsh, J.; Melillo, J.
2009-01-01
Fire is a common disturbance in the North American boreal forest that influences ecosystem structure and function. The temporal and spatial dynamics of fire are likely to be altered as climate continues to change. In this study, we ask the question: how will area burned in boreal North America by wildfire respond to future changes in climate? To evaluate this question, we developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was substantially more predictable in the western portion of boreal North America than in eastern Canada. Burned area was also not very predictable in areas of substantial topographic relief and in areas along the transition between boreal forest and tundra. At the scale of Alaska and western Canada, the empirical fire models explain on the order of 82% of the variation in annual area burned for the period 1960-2002. July temperature was the most frequently occurring predictor across all models, but the fuel moisture codes for the months June through August (as a group) entered the models as the most important predictors of annual area burned. To predict changes in the temporal and spatial dynamics of fire under future climate, the empirical fire models used output from the Canadian Climate Center CGCM2 global climate model to predict annual area burned through the year 2100 across Alaska and western Canada. Relative to 1991-2000, the results suggest that average area burned per decade will double by 2041-2050 and will increase on the order of 3.5-5.5 times by the last decade of the 21st century. To improve the ability to better predict wildfire across Alaska and Canada, future research should focus on incorporating additional effects of long-term and successional
Semi-supervised classification via local spline regression.
Xiang, Shiming; Nie, Feiping; Zhang, Changshui
2010-11-01
This paper presents local spline regression for semi-supervised classification. The core idea in our approach is to introduce splines developed in Sobolev space to map the data points directly to class labels. The spline is composed of polynomials and Green's functions. It is smooth, nonlinear, and able to interpolate the scattered data points with high accuracy. Specifically, in each neighborhood, an optimal spline is estimated via regularized least squares regression. With this spline, each of the neighboring data points is mapped to a class label. Then, the regularized loss is evaluated and further formulated in terms of the class label vector. Finally, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on the labeled and unlabeled data. To achieve the goal of semi-supervised classification, an objective function is constructed by combining the global loss of the local spline regressions with the squared errors of the class labels of the labeled data. In this way, a transductive classification algorithm is developed in which a globally optimal classification can be finally obtained. In the semi-supervised learning setting, the proposed algorithm is analyzed and situated within the Laplacian regularization framework. Comparative classification experiments on many public data sets and applications to interactive image segmentation and image matting illustrate the validity of our method.
Semisupervised feature selection via spline regression for video semantic recognition.
Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang
2015-02-01
To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. Provided the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms to better identify the relevant video features, which are discriminative to target classes, by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework of video semantic recognition by semisupervised feature selection via spline regression (S²FS²R). Two scatter matrices are combined to capture both the discriminative information and the local geometry structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos and a spline scatter output from a local spline regression encoding data distribution. An ℓ2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To efficiently solve S²FS²R, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, namely video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S²FS²R achieves better performance compared with the state-of-the-art methods.
Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines
NASA Astrophysics Data System (ADS)
Kuter, S.; Akyürek, Z.; Weber, G.-W.
2016-10-01
Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely-sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between the temporal and spatial resolution of satellite imagery. Soft or subpixel classification of low or moderate resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied on Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine-learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on the state-of-the-art nonparametric spline regression method, namely, Multivariate Adaptive Regression Splines (MARS). MARS models were trained by using MODIS top of atmospheric reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than the ANN.
Simple adaptive cubic spline interpolation of fluorescence decay functions
NASA Astrophysics Data System (ADS)
Kuśba, J.; Czuper, A.
2007-05-01
A simple method for adaptive cubic spline interpolation of fluorescence decay functions is proposed. In the first step of the method, the interpolated function is integrated using a known adaptive algorithm based on Newton-Cotes quadratures. It is shown that, in this step, application of Simpson's rule provides the smallest number of calls of the interpolated function. In the second step of the method, a typical cubic spline approximation is used to find values of the interpolated function between the points evaluated in the first step.
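A sketch of the two-step scheme this record describes, under assumed details: adaptive Simpson quadrature whose evaluation abscissae are recorded, then a standard cubic spline through those points. The bi-exponential decay and tolerances are hypothetical stand-ins.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def adaptive_simpson_nodes(f, a, b, tol=1e-6):
    # Adaptive Simpson integration that memoizes every abscissa at which
    # f is called; the recorded nodes cluster where f varies rapidly.
    nodes = {}
    def ev(x):
        if x not in nodes:
            nodes[x] = f(x)
        return nodes[x]
    def simpson(a, fa, b, fb):
        m = 0.5 * (a + b)
        fm = ev(m)
        return m, fm, (b - a) / 6.0 * (fa + 4.0 * fm + fb)
    def rec(a, fa, b, fb, whole, m, fm, tol):
        lm, flm, left = simpson(a, fa, m, fm)
        rm, frm, right = simpson(m, fm, b, fb)
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right
        return (rec(a, fa, m, fm, left, lm, flm, tol / 2.0)
                + rec(m, fm, b, fb, right, rm, frm, tol / 2.0))
    fa, fb = ev(a), ev(b)
    m, fm, whole = simpson(a, fa, b, fb)
    integral = rec(a, fa, b, fb, whole, m, fm, tol)
    xs = np.array(sorted(nodes))
    return xs, np.array([nodes[x] for x in xs]), integral

# Hypothetical bi-exponential fluorescence decay.
decay = lambda t: 0.7 * np.exp(-t / 0.5) + 0.3 * np.exp(-t / 3.0)
xs, ys, integral = adaptive_simpson_nodes(decay, 0.0, 10.0, tol=1e-8)
spl = CubicSpline(xs, ys)                 # step 2: spline through the nodes
t = np.linspace(0.0, 10.0, 1001)
max_err = float(np.max(np.abs(spl(t) - decay(t))))
```

The adaptive integrator concentrates nodes near t = 0 where the decay is steep, so the subsequent spline is accurate everywhere without a uniform fine grid.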
[Application of spline-based Cox regression on analyzing data from follow-up studies].
Dong, Ying; Yu, Jin-ming; Hu, Da-yi
2012-09-01
With R, this study applied spline-based Cox regression to analyze data from follow-up studies in which the two basic assumptions of Cox proportional hazards regression were not satisfied. Results showed that most of the continuous covariates contributed nonlinearly to mortality risk, while the effects of three covariates were time-dependent. After considering multiple covariates in the spline-based Cox regression, when the ankle brachial index (ABI) decreased by 0.1, the hazard ratio (HR) for all-cause death was 1.071. The spline-based Cox regression method can be applied to analyze data from follow-up studies when the assumptions of Cox proportional hazards regression are violated.
Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA
Lin, Chen-Yen; Bondell, Howard; Zhang, Hao Helen; Zou, Hui
2014-01-01
Quantile regression provides a more thorough view of the effect of covariates on a response. Nonparametric quantile regression has become a viable alternative to avoid restrictive parametric assumptions. The problem of variable selection for quantile regression is challenging, since important variables can influence various quantiles in different ways. We tackle the problem via regularization in the context of smoothing spline ANOVA models. The proposed sparse nonparametric quantile regression (SNQR) can identify important variables and provide flexible estimates for quantiles. Our numerical study suggests the promising performance of the new procedure in variable selection and function estimation. Supplementary materials for this article are available online. PMID:24554792
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S
2014-07-01
In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield.
Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression
NASA Astrophysics Data System (ADS)
Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.
2015-12-01
Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information on planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of a changing climate, it is possible that the assumption of stationarity is no longer valid and the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-spline quantile regression, which allows modeling of non-stationary data that depend on covariates. Such covariates may have linear or nonlinear dependence. A Markov Chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimation of the proposed model at each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities. Keywords: Quantile regression, B-spline functions, MCMC, Streamflow, Climate indices, non-stationarity.
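The record above estimates B-spline quantile curves by Bayesian MCMC; as a simpler frequentist illustration of the same regression structure, the sketch below shows how B-spline quantile regression reduces to a linear program via the pinball loss. All data, knot counts, and quantile levels are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

def bspline_design(x, knots, degree=3):
    # Design matrix of all B-spline basis functions (open knot vector).
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
    n_basis = len(t) - degree - 1
    return BSpline(t, np.eye(n_basis), degree)(x)

def fit_quantile_spline(x, y, tau, n_knots=6, degree=3):
    # Quantile regression is a linear program: minimize the pinball loss
    #   sum_i tau*u_i + (1-tau)*v_i   s.t.   y_i = (B beta)_i + u_i - v_i,
    # with u, v >= 0 and the spline coefficients beta free.
    knots = np.linspace(x.min(), x.max(), n_knots)
    B = bspline_design(x, knots, degree)
    n, p = B.shape
    c = np.r_[np.zeros(p), tau * np.ones(n), (1.0 - tau) * np.ones(n)]
    A_eq = np.hstack([B, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    beta = res.x[:p]
    return lambda xq: bspline_design(np.asarray(xq, dtype=float), knots, degree) @ beta

# Synthetic heteroscedastic data: the spread grows with x, so the 0.9 and
# 0.1 quantile curves diverge (a stand-in for non-stationary extremes).
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 10.0, 400))
y = np.sin(x) + (0.2 + 0.1 * x) * rng.normal(size=400)
q90 = fit_quantile_spline(x, y, 0.90)
q10 = fit_quantile_spline(x, y, 0.10)
coverage = float(np.mean(y <= q90(x)))
gap = float(np.mean(q90(x) - q10(x)))
```

By LP optimality, the in-sample fraction of points below the fitted tau-curve is close to tau (within the number of basis functions divided by n).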
Zhou, Zhongxing; Zhu, Qingzhen; Zhao, Huijuan; Zhang, Lixin; Ma, Wenjuan; Gao, Feng
2014-04-01
To develop an effective curve-fitting algorithm with a regularization term for measuring the modulation transfer function (MTF) of digital radiographic imaging systems, a C-spline regression technique based upon the monotonicity and convex/concave shape restrictions of the edge spread function (ESF) was proposed for ESF estimation in this study and compared with representative prior methods. Two types of oversampling techniques and four subsequent curve-fitting algorithms, including the C-spline regression technique, were considered for ESF estimation. A simulated edge image with a known MTF was used to determine the accuracy of the algorithms. Experimental edge images from two digital radiography systems were used for statistical evaluation of each curve-fitting algorithm's MTF measurement uncertainty. The simulation results show that the C-spline regression algorithm obtained the minimum MTF measurement error (an average error of 0.12% ± 0.11% and 0.18% ± 0.17% for the two types of oversampling techniques, respectively, up to the cutoff frequency) among all curve-fitting algorithms. In the case of experimental edge images, the C-spline regression algorithm obtained the best MTF measurement uncertainty among the four curve-fitting algorithms for both the Pixarray-100 digital specimen radiography system and the Hologic full-field digital mammography system. Comparisons among MTF estimates using the four curve-fitting algorithms revealed that the proposed C-spline regression technique outperformed the other algorithms in MTF measurement accuracy and uncertainty.
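The shape-restricted C-spline fit itself is beyond a short sketch, but the downstream computation that record describes, going from an (already estimated) ESF to an MTF, is standard and can be shown compactly. The Gaussian edge and all numeric values below are assumptions chosen because they have a known analytic answer.

```python
import numpy as np
from scipy.special import erf

def mtf_from_esf(esf, dx):
    # Differentiate the edge spread function to get the line spread
    # function (LSF), then take the normalized magnitude of its Fourier
    # transform: MTF(f) = |FFT(LSF)(f)| / |FFT(LSF)(0)|.
    lsf = np.gradient(esf, dx)
    otf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, otf / otf[0]

# Synthetic Gaussian-blurred edge with a known analytic answer:
# ESF(x) = Phi(x / sigma)  =>  MTF(f) = exp(-2 * pi^2 * sigma^2 * f^2)
sigma, dx = 0.5, 0.05
x = np.arange(-20.0, 20.0, dx)
esf = 0.5 * (1.0 + erf(x / (sigma * np.sqrt(2.0))))
freqs, mtf = mtf_from_esf(esf, dx)
expected = np.exp(-2.0 * np.pi**2 * sigma**2 * freqs**2)
err = float(np.max(np.abs(mtf - expected)[freqs < 1.0]))
```

In practice the ESF would come from an oversampled slanted-edge image and a shape-constrained fit; the finite-difference step is the main error source here.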
A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.
Li, Chin-Shang; Tu, Wanzhu
2007-05-01
In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-loglinear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed model performs well, and that a misspecified parametric model has much reduced power. An example is given.
Mookprom, S; Boonkum, W; Kunhareang, S; Siripanya, S; Duangjinda, M
2017-02-01
The objective of this research is to investigate appropriate random regression models with various covariance functions, for the genetic evaluation of test-day egg production. Data included 7,884 monthly egg production records from 657 Thai native chickens (Pradu Hang Dam) that were obtained during the first to sixth generation and were born during 2007 to 2014 at the Research and Development Network Center for Animal Breeding (Native Chickens), Khon Kaen University. Average annual and monthly egg productions were 117 ± 41 and 10.20 ± 6.40 eggs, respectively. Nine random regression models were analyzed using the Wilmink function (WM), Koops and Grossman function (KG), Legendre polynomials functions with second, third, and fourth orders (LG2, LG3, LG4), and spline functions with 4, 5, 6, and 8 knots (SP4, SP5, SP6, and SP8). All covariance functions were nested within the same additive genetic and permanent environmental random effects, and the variance components were estimated by Restricted Maximum Likelihood (REML). In model comparisons, mean square error (MSE) and the coefficient of determination (R²) were used to assess the goodness of fit; and the correlation between observed and predicted values [Formula: see text] was used to calculate the cross-validated predictive abilities. We found that the covariance functions of SP5, SP6, and SP8 proved appropriate for the genetic evaluation of the egg production curves for Thai native chickens. The estimated heritability of monthly egg production ranged from 0.07 to 0.39, and the highest heritability was found during the first to third months of egg production. In conclusion, the spline functions within monthly egg production can be applied to breeding programs for the improvement of both egg number and persistence of egg production.
Adaptive solution of the biharmonic problem with shortly supported cubic spline-wavelets
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2012-09-01
In our contribution, we design a cubic spline-wavelet basis on the interval. The basis functions have small support and the wavelets have vanishing moments. We show that stiffness matrices arising from discretization of the two-dimensional biharmonic problem using the constructed wavelet basis have uniformly bounded condition numbers and that these condition numbers are very small. We compare the quantitative behavior of the adaptive wavelet method using the constructed basis with that of other cubic spline-wavelet bases, and show the superiority of our construction.
NASA Astrophysics Data System (ADS)
Susanti, D.; Hartini, E.; Permana, A.
2017-01-01
Growing competition between companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One thing that can be done to build such a plan is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportional to the number of cars needed. To obtain accurate forecasts, one of the methods that can be used is Adaptive Spline Threshold Autoregression (ASTAR). This discussion therefore focuses on the use of the ASTAR method in forecasting the volume of car sales at PT. Srikandi Diamond Motors using time series data. In this research, the forecasts produced by the ASTAR method are approximately correct.
Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D; Szpiro, Adam A
2016-11-01
Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals.
Adaptive Predistortion Using Cubic Spline Nonlinearity Based Hammerstein Modeling
NASA Astrophysics Data System (ADS)
Wu, Xiaofang; Shi, Jianghong
In this paper, a new Hammerstein predistorter model for power amplifier (PA) linearization is proposed. The key feature of the model is that cubic splines, instead of conventional high-order polynomials, are utilized as the static nonlinearities, because splines can represent hard nonlinearities accurately while circumventing the numerical instability problem. Furthermore, according to the amplifier's AM/AM and AM/PM characteristics, real-valued cubic spline functions are utilized to compensate the nonlinear distortion of the amplifier, while the following finite impulse response (FIR) filters eliminate the memory effects of the amplifier. In addition, the identification algorithm of the Hammerstein predistorter is discussed. The predistorter is implemented on the indirect learning architecture, and the separable nonlinear least squares (SNLS) Levenberg-Marquardt algorithm is adopted because the separation method reduces the dimension of the nonlinear search space and thus greatly simplifies the identification procedure. However, the convergence performance of the iterative SNLS algorithm is sensitive to the initial estimate. Therefore, an effective normalization strategy is presented to solve this problem. Simulation experiments were carried out on a single-carrier WCDMA signal. Results show that, compared to conventional polynomial predistorters, the proposed Hammerstein predistorter has higher linearization performance when the PA is near saturation and comparable linearization performance when the PA is mildly nonlinear. Furthermore, the proposed predistorter is numerically more stable in all input back-off cases. The results also demonstrate the validity of the convergence scheme.
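The Hammerstein structure in this record, a memoryless spline nonlinearity followed by a linear FIR filter, can be sketched in a few lines. The tanh AM/AM curve, knot grid, and FIR taps are illustrative assumptions, not the paper's identified model, and the identification (SNLS) step is omitted.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def hammerstein(x, knots, values, fir):
    # Hammerstein cascade: a memoryless cubic-spline nonlinearity (e.g., an
    # AM/AM characteristic) followed by an FIR filter for memory effects.
    nonlinearity = CubicSpline(knots, values)
    z = nonlinearity(x)                       # static nonlinear stage
    return np.convolve(z, fir)[: len(x)]      # linear dynamic stage

# Emulate a mildly saturating amplifier: spline knots sampled from tanh,
# an assumed stand-in for a measured AM/AM curve.
knots = np.linspace(-3.0, 3.0, 61)
values = np.tanh(knots)
fir = np.array([0.9, 0.08, 0.02])             # illustrative memory taps

rng = np.random.default_rng(3)
x = rng.uniform(-2.5, 2.5, 256)
y = hammerstein(x, knots, values, fir)
reference = np.convolve(np.tanh(x), fir)[:256]
err = float(np.max(np.abs(y - reference)))
```

With knots spaced 0.1 apart, the spline reproduces the smooth saturation curve to well below any practical distortion level; a predistorter would use the inverse characteristic in the same structure.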
Algebraic grid adaptation method using non-uniform rational B-spline surface modeling
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, B. K.
1992-01-01
An algebraic adaptive grid system based on the equidistribution law and utilizing a Non-Uniform Rational B-Spline (NURBS) surface for redistribution is presented. A weight function utilizing a properly weighted boolean sum of various flow field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.
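Setting the NURBS surface machinery aside, the underlying equidistribution law is easy to show in one dimension: grid points are placed so that each cell carries an equal share of the weight integral. The weight function below, concentrated near a hypothetical flow feature, is illustrative only.

```python
import numpy as np

def equidistribute(x, w, n):
    # Redistribute n grid points so each cell carries an equal share of
    # the weight integral (1-D equidistribution), by inverting the
    # cumulative trapezoidal weight with linear interpolation.
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    return np.interp(np.linspace(0.0, W[-1], n), W, x)

# Weight concentrated near a feature at x = 0.5 (e.g., a steep gradient).
x = np.linspace(0.0, 1.0, 1001)
w = 1.0 + 50.0 * np.exp(-((x - 0.5) / 0.05) ** 2)
xi = equidistribute(x, w, 41)
spacing_near = float(np.min(np.diff(xi)))
spacing_uniform = 1.0 / 40.0
```

The redistributed grid clusters points where the weight (and hence the flow-field activity) is largest while keeping the endpoints fixed.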
Spherical DCB-spline surfaces with hierarchical and adaptive knot insertion.
Cao, Juan; Li, Xin; Chen, Zhonggui; Qin, Hong
2012-08-01
This paper develops a novel surface fitting scheme for automatically reconstructing a genus-0 object into a continuous parametric spline surface. A key contribution for making such a fitting method both practical and accurate is our spherical generalization of the Delaunay configuration B-spline (DCB-spline), a new non-tensor-product spline. In this framework, we efficiently compute Delaunay configurations on the sphere by the union of two planar Delaunay configurations. Also, we develop a hierarchical and adaptive method that progressively improves the fitting quality by new knot-insertion strategies guided by surface geometry and fitting error. Within our framework, a genus-0 model can be converted to a single spherical spline representation whose root mean square error is tightly bounded within a user-specified tolerance. The reconstructed continuous representation has many attractive properties such as global smoothness and no auxiliary knots. We conduct several experiments to demonstrate the efficacy of our new approach for reverse engineering and shape modeling.
Sánchez, J P; Misztal, I; Aguilar, I; Bertrand, J K
2008-02-01
The objective of this study was to examine the feasibility of using random regression-spline (RR-spline) models for fitting growth traits in a multibreed beef cattle population. To meet the objective, the results from the RR-spline model were compared with the widely used multitrait (MT) model when both were fit to a data set (1.8 million records and 1.1 million animals) provided by the American Gelbvieh Association. The effect of prior information on the EBV of sires was also investigated. In both RR-spline and MT models, the following effects were considered: individual direct and maternal additive genetic effects, contemporary group, age of the animal at measurement, direct and maternal heterosis, and direct and maternal additive genetic mean effect of the breed. Additionally, the RR-spline model included an individual direct permanent environmental effect. When both MT and RR-spline models were applied to a data set containing records for weaning weight (WWT) and yearling weight (YWT) within specified age ranges, the rankings of bulls' direct EBV (as measured via Pearson correlations) provided by both models were comparable, with slightly greater differences in the reranking of bulls observed for YWT evaluations (≥0.99 for BWT and WWT and ≥0.98 for YWT); also, some bulls dropped from the top 100 list when these lists were compared across methods. For maternal effects, the estimated correlations were slightly smaller, particularly for YWT; again, some drops from the top 100 animals were observed. As in regular MT multibreed genetic evaluations, the heterosis effects and the additive genetic effects of the breed could not be estimated from field data, because there were not enough contemporary groups with the proper composition of purebred and crossbred animals; thus, prior information based on literature values had to be included. The inclusion of prior information had a negligible effect in the overall ranking for bulls with greater than 20 birth weight
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
1993-01-01
A split spline screw type payload fastener assembly, including three identical male and female type split spline sections, is discussed. The male spline sections are formed on the head of a male type spline driver. Each of the split male type spline sections has an outwardly projecting load-bearing segment including a convex upper surface which is adapted to engage a complementary concave surface of a female spline receptor in the form of a hollow bolt head. Additionally, the male spline section also includes a horizontal spline releasing segment and a spline tightening segment below each load-bearing segment. The spline tightening segment consists of a vertical web of constant thickness. The web has at least one flat vertical wall surface which is designed to contact a generally flat vertically extending wall surface tab of the bolt head. Mutual interlocking and unlocking of the male and female splines result upon clockwise and counterclockwise turning of the driver element.
Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors
Woodard, Dawn B.; Crainiceanu, Ciprian; Ruppert, David
2013-01-01
We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape varies stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes. Software and technical appendices are provided in online supplemental materials. PMID:24293988
Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G
2008-09-01
A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
Huo, Yuankai; Aboud, Katherine; Kang, Hakmook; Cutting, Laurie E; Landman, Bennett A
2016-10-01
Understanding brain volumetry is essential to understand neurodevelopment and disease. Historically, age-related changes have been studied in detail for specific age ranges (e.g., early childhood, teen, young adults, elderly, etc.) or more sparsely sampled for wider considerations of lifetime aging. Recent advancements in data sharing and robust processing have made available considerable quantities of brain images from normal, healthy volunteers. However, existing analysis approaches have had difficulty addressing (1) complex volumetric development in large cohorts across the lifetime (e.g., beyond cubic age trends), (2) accounting for confound effects, and (3) maintaining an analysis framework consistent with the general linear model (GLM) approach pervasive in neuroscience. To address these challenges, we propose to use covariate-adjusted restricted cubic spline (C-RCS) regression within a multi-site cross-sectional framework. This model allows for flexible consideration of non-linear age-associated patterns while accounting for traditional covariates and interaction effects. As a demonstration of this approach on lifetime brain aging, we derive normative volumetric trajectories and 95% confidence intervals from 5111 healthy patients from 64 sites while accounting for confounding sex, intracranial volume and field strength effects. The volumetric results are shown to be consistent with traditional studies that have explored more limited age ranges using single-site analyses. This work represents the first integration of C-RCS with neuroimaging and the derivation of structural covariance networks (SCNs) from a large study of multi-site, cross-sectional data.
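C-RCS adds covariate adjustment on top of a restricted cubic spline basis. The basis itself, in the truncated-cubic form popularized by Harrell, is short enough to sketch; the unadjusted OLS fit on synthetic age data below is an illustration, not the paper's multi-site model, and all knots and coefficients are assumptions.

```python
import numpy as np

def rcs_basis(x, knots):
    # Restricted (natural) cubic spline basis: columns [x, C_1, ..., C_{k-2}],
    # each C_j built from truncated cubics so that the fitted curve is
    # linear beyond the boundary knots.
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    cube = lambda u: np.maximum(u, 0.0) ** 3
    d = t[-1] - t[-2]
    cols = [x]
    for tj in t[:-2]:
        cols.append(cube(x - tj)
                    - cube(x - t[-2]) * (t[-1] - tj) / d
                    + cube(x - t[-1]) * (t[-2] - tj) / d)
    return np.column_stack(cols)

# Fit a nonlinear age trend with OLS on the spline basis plus an intercept.
rng = np.random.default_rng(4)
age = rng.uniform(10.0, 90.0, 500)
volume = 100.0 - 0.005 * (age - 40.0) ** 2 + rng.normal(0.0, 1.0, 500)
knots = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
B = np.column_stack([np.ones(len(age)), rcs_basis(age, knots)])
coef, *_ = np.linalg.lstsq(B, volume, rcond=None)
r2 = 1.0 - np.sum((volume - B @ coef) ** 2) / np.sum((volume - volume.mean()) ** 2)
```

The "restricted" part is what tames extrapolation at the age extremes: every basis column is exactly linear outside the outer knots, which can be checked numerically.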
Alhotan, R A; Vedenov, D V; Pesti, G M
2016-10-04
The use of non-linear regression models in the analysis of biological data has led to advances in poultry nutrition. Spline or broken-line nonlinear regression models are commonly used to estimate nutritional requirements. One particular application of broken-line models is estimating the maximum safe level (MSL) of feed ingredients beyond which the ingredients become toxic, resulting in reduced performance. The objectives of this study were to evaluate the effectiveness of broken-line models (broken-line linear or BLL; and broken-line quadratic or BLQ) in estimating the MSL; to identify the most efficient design of feeding trials by finding the optimal number of ingredient levels and replications; and to re-estimate the MSL of various test ingredients reported in the nutrition literature for comparison purposes. The Maximum Ingredient level Optimization Workbook (MIOW) was developed to simulate a series of experiments and estimate the MSL and the corresponding descriptive statistics (SD, SE, CI, and R²). The results showed that the broken-line models provided good estimates of the MSL (small SE and high R²) with the BLL model producing higher MSL values as compared to the BLQ model. Increasing the number of experimental replications or ingredient levels (independently of each other) reduced the SE of the MSL with diminishing returns. The SE of the MSL was reduced with increasing the size (total pens) of the simulated experiments by increasing either the number of replications or levels or both. The evaluation of MSLs reported in the existing literature revealed that the multiple range procedure used to determine the MSL in several reports can both overestimate and underestimate the MSL compared to the results obtained by the broken-line models. The results suggest that the broken-line linear models can be used in lieu of the multiple range test to estimate the MSL of feed ingredients along with the corresponding descriptive statistics, such as the SE of the
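A broken-line linear (BLL) fit of the kind this record simulates can be sketched with an ordinary nonlinear least-squares call. The simulated feeding trial below (levels, replicates, plateau, slope, noise) is entirely illustrative and is not the MIOW workbook's procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line_linear(x, plateau, slope, msl):
    # BLL model: performance is constant up to the breakpoint (the maximum
    # safe level, MSL) and declines linearly beyond it.
    return plateau + slope * np.maximum(0.0, x - msl)

# Simulated feeding trial: 6 ingredient levels, 6 replicate pens per level.
rng = np.random.default_rng(5)
levels = np.repeat(np.arange(0.0, 11.0, 2.0), 6)
true_msl = 5.0
gain = broken_line_linear(levels, 100.0, -4.0, true_msl) \
       + rng.normal(0.0, 1.5, levels.size)
popt, pcov = curve_fit(broken_line_linear, levels, gain, p0=[95.0, -2.0, 4.0])
msl_hat, msl_se = float(popt[2]), float(np.sqrt(pcov[2, 2]))
```

The SE of the breakpoint comes directly from the fit's covariance matrix, which is the quantity whose dependence on replicate and level counts the study explores.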
Inference in Adaptive Regression via the Kac-Rice Formula
2014-05-15
Inference in Adaptive Regression via the Kac-Rice Formula. Jonathan Taylor, Joshua Loftus, Ryan J. Tibshirani, Department of Statistics, Stanford... general adaptive regression setting. Our approach uses the Kac-Rice formula (as described in Adler & Taylor 2007) applied to the problem of maximizing a...
A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline
NASA Astrophysics Data System (ADS)
Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong
2015-11-01
The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals are easily corrupted by a fluorescent background, so in this paper we present a baseline correction algorithm to suppress that background. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called a baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use the B-spline as the fitting algorithm because its low order and smoothness effectively avoid both under-fitting and over-fitting. In addition, we present an automatic adaptive knot generation method to replace traditional uniform knots. The algorithm achieves the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. Two real Raman spectra (parathion-methyl and colza oil) were also processed, and their baselines were successfully corrected by the proposed method.
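A minimal sketch of the cyclic approximation idea, using SciPy's least-squares B-spline with fixed knots (the paper's adaptive knot generation is omitted here; knot positions are supplied by the caller, which is an assumption of this sketch):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def bspline_baseline(x, y, knots, n_iter=20):
    # Cyclic approximation: repeatedly fit a cubic B-spline, then pull the
    # working signal down to min(signal, fit) so Raman peaks progressively
    # stop attracting the fit and only the slowly varying baseline remains.
    work = y.copy()
    for _ in range(n_iter):
        spl = LSQUnivariateSpline(x, work, knots, k=3)  # interior knots
        base = spl(x)
        work = np.minimum(work, base)
    return base
```

On a synthetic spectrum (linear baseline plus one narrow peak), the iteration converges close to the true baseline while ignoring the peak.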
Brown, C; Adcock, A; Azevedo, S; Liebman, J; Bond, E
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
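The GCV-based smoothing-level selection can be illustrated with a discrete relative of the cubic smoothing spline, the Whittaker smoother, for which the smoother matrix is explicit. This is a sketch under simplifying assumptions (uniform samples, a single channel), not the Hutchinson-deHoog algorithm the authors adapted.

```python
import numpy as np

def whittaker_gcv(y, lambdas):
    # Discrete smoothing-spline analogue (Whittaker smoother): minimize
    # ||y - f||^2 + lam * ||D2 f||^2, with f_hat = S y for an explicit
    # smoother matrix S, choosing lam by the GCV score
    # GCV(lam) = n * RSS / (n - tr(S))^2.
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)  # second-difference penalty operator
    best = None
    for lam in lambdas:
        S = np.linalg.inv(np.eye(n) + lam * D.T @ D)
        f = S @ y
        rss = np.sum((y - f) ** 2)
        gcv = n * rss / (n - np.trace(S)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, f)
    return best  # (GCV score, chosen lambda, smoothed signal)
```

The grid of candidate lambdas stands in for the continuous optimization a production implementation would use.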
Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa
2013-01-01
Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
NASA Astrophysics Data System (ADS)
Brown, Charles G., Jr.; Adcock, Aaron B.; Azevedo, Stephen G.; Liebman, Judith A.; Bond, Essex J.
2011-03-01
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
Johnson, Richard Wayne
2003-05-01
The application of collocation methods using spline basis functions to solve differential model equations has been in use for a few decades. However, the application of spline collocation to the solution of the nonlinear, coupled, partial differential equations (in primitive variables) that define the motion of fluids has only recently received much attention. The issues that affect the effectiveness and accuracy of B-spline collocation for solving differential equations include which points to use for collocation, what degree B-spline to use and what level of continuity to maintain. Success using higher degree B-spline curves having higher continuity at the knots, as opposed to more traditional approaches using orthogonal collocation, have recently been investigated along with collocation at the Greville points for linear (1D) and rectangular (2D) geometries. The development of automatic knot insertion techniques to provide sufficient accuracy for B-spline collocation has been underway. The present article reviews recent progress for the application of B-spline collocation to fluid motion equations as well as new work in developing a novel adaptive knot insertion algorithm for a 1D convection-diffusion model equation.
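For the 1D setting, B-spline collocation at the Greville points can be sketched on a model problem: solve -u'' = 1 on [0,1] with u(0) = u(1) = 0, whose exact solution x(1-x)/2 lies in the cubic spline space, so the collocation solution reproduces it exactly. This is a stand-in for the convection-diffusion model equation of the article, chosen so correctness is checkable; the knot vector and number of knots are arbitrary choices for illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                       # cubic B-splines
interior = np.linspace(0, 1, 9)
t = np.r_[[0.0] * k, interior, [1.0] * k]   # open (clamped) knot vector
n = len(t) - k - 1                          # number of basis functions
# Greville abscissae: averages of k consecutive interior knots
greville = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])

A = np.zeros((n, n))
b = np.ones(n)
for j in range(n):
    c = np.zeros(n)
    c[j] = 1.0
    Bj = BSpline(t, c, k)                   # j-th basis function
    A[:, j] = -Bj.derivative(2)(greville)   # collocate -u'' at Greville pts
    A[0, j] = Bj(0.0)                       # replace end rows with the
    A[-1, j] = Bj(1.0)                      # boundary conditions
b[0] = b[-1] = 0.0
coef = np.linalg.solve(A, b)
u = BSpline(t, coef, k)                     # collocation solution
```

Because the exact solution is a quadratic polynomial, `u(0.5)` equals 1/8 to machine precision, which is a convenient self-check on the setup.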
NASA Astrophysics Data System (ADS)
Zhang, X.; Liang, S.; Wang, G.
2015-12-01
Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: direct surface measurements are accurate but provide sparse spatial coverage, whereas the global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally for calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, with the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables, to train the thin-plate spline model. We evaluated the performance of the approach using cross-validation at both daily and monthly time scales over China. We also evaluated the estimated ISR against independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicate that the thin-plate smoothing spline method can be used effectively to calibrate satellite-derived ISR products with ground measurements, achieving better accuracy.
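The local "calibration" idea can be sketched with SciPy's `RBFInterpolator` in thin-plate-spline mode: fit the station residuals (ground minus satellite ISR) as a smooth function of location and elevation, then add the interpolated correction back to the gridded product. This is an assumed simplification of the paper's method, which trains on the product and elevation as explanatory variables; all names here are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def calibrate(stations_xyz, ground_isr, satellite_isr_at_stations,
              grid_xyz, satellite_isr_grid, smoothing=1.0):
    # Thin-plate spline over (lon, lat, elevation) fitted to station residuals;
    # the smoothing parameter trades exact interpolation for robustness.
    resid = ground_isr - satellite_isr_at_stations
    tps = RBFInterpolator(stations_xyz, resid,
                          kernel='thin_plate_spline', smoothing=smoothing)
    return satellite_isr_grid + tps(grid_xyz)
```

With a constant satellite bias and `smoothing=0`, the correction is recovered exactly because a constant lies in the spline's polynomial tail.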
Hypnotizability as a Function of Repression, Adaptive Regression, and Mood
ERIC Educational Resources Information Center
Silver, Maurice Joseph
1974-01-01
Forty male undergraduates were assessed in a personality assessment session and a hypnosis session. The personality traits studied were repressive style and adaptive regression, while the transitory variable was mood prior to hypnosis. Hypnotizability was a significant interactive function of repressive style and mood, but not of adaptive…
Adaptive support vector regression for UAV flight control.
Shin, Jongho; Jin Kim, H; Kim, Youdan
2011-01-01
This paper explores an application of support vector regression for adaptive control of an unmanned aerial vehicle (UAV). Unlike neural networks, support vector regression (SVR) generates global solutions, because SVR basically solves quadratic programming (QP) problems. With this advantage, the input-output feedback-linearized inverse dynamic model and the compensation term for the inversion error are identified off-line, which we call I-SVR (inversion SVR) and C-SVR (compensation SVR), respectively. In order to compensate for the inversion error and the unexpected uncertainty, an online adaptation algorithm for the C-SVR is proposed. Then, the stability of the overall error dynamics is analyzed by the uniformly ultimately bounded property in the nonlinear system theory. In order to validate the effectiveness of the proposed adaptive controller, numerical simulations are performed on the UAV model.
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. Rank consistency and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
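The adaptively soft-thresholded SVD at the heart of the method can be sketched directly. The weight choice w_i = d_i^(-gamma), where d_i is the i-th singular value, is one common option satisfying the decreasing-weight restriction; it is an assumption of this sketch, not necessarily the authors' exact choice.

```python
import numpy as np

def adaptive_svt(Y, lam, gamma=2.0):
    # Adaptive singular value thresholding: shrink each singular value d_i by
    # lam * d_i^(-gamma), so small singular values are penalized much harder
    # than large ones; values driven below zero are set to zero (rank cut).
    U, d, Vt = np.linalg.svd(Y, full_matrices=False)
    w = np.where(d > 0, d, np.inf) ** (-gamma)   # zero singulars get weight 0
    d_shrunk = np.maximum(d - lam * w, 0.0)
    return U @ np.diag(d_shrunk) @ Vt
```

For a matrix with singular values 5 and 0.5 and lam = 1, the large value is barely shrunk (to 4.96) while the small one is thresholded to zero, yielding a rank-one approximation.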
The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...
Michna, Agata; Braselmann, Herbert; Selmansberger, Martin; Dietz, Anne; Hess, Julia; Gomolka, Maria; Hornhardt, Sabine; Blüthgen, Nils; Zitzelsberger, Horst; Unger, Kristian
2016-01-01
Gene expression time-course experiments make it possible to study the dynamics of transcriptomic changes in cells exposed to different stimuli. However, most approaches for the reconstruction of gene association networks (GANs) do not propose prior-selection approaches tailored to time-course transcriptome data. Here, we present a workflow for the identification of GANs from time-course data using prior selection of genes differentially expressed over time, identified by natural cubic spline regression modeling (NCSRM). The workflow comprises three major steps: 1) the identification of differentially expressed genes from time-course expression data by employing NCSRM, 2) the use of regularized dynamic partial correlation, as implemented in GeneNet, to infer GANs from differentially expressed genes, and 3) the identification and functional characterization of the key nodes in the reconstructed networks. The approach was applied to a time-resolved transcriptome data set of radiation-perturbed cell culture models of non-tumor cells with normal and increased radiation sensitivity. NCSRM detected significantly more genes than another commonly used method for time-course transcriptome analysis (BETR). While most genes detected with BETR were also detected with NCSRM, the false-detection rate of NCSRM was low (3%). The GANs reconstructed from genes detected with NCSRM showed a better overlap with the interactome network Reactome than GANs derived from BETR-detected genes. After exposure to 1 Gy, the normally sensitive cells showed only a sparse response compared to cells with increased sensitivity, which exhibited a strong response mainly of genes related to the senescence pathway. After exposure to 10 Gy, the response of the normally sensitive cells was mainly associated with senescence, and that of cells with increased sensitivity with apoptosis. We discuss these results in a clinical context and underline the impact of senescence-associated pathways in acute radiation response of normal
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail
2015-04-01
The research deals with the adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high-dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high-dimensional data [1,3]. In the present research, Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all N possible models [in the case of the wind fields, N = 2^13 - 1 = 8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (a strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high-dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data: Theory, Applications and Software. EPFL Press
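The GRNN/Nadaraya-Watson estimator with an anisotropic Gaussian kernel, the core of the adaptive variant described above, can be sketched as follows; one bandwidth per input dimension means an irrelevant feature can effectively be switched off by driving its bandwidth large.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    # Nadaraya-Watson / GRNN prediction: a kernel-weighted average of the
    # training targets, with an anisotropic Gaussian kernel whose per-feature
    # bandwidths are given by the vector sigma.
    d = (X_query[:, None, :] - X_train[None, :, :]) / sigma  # scaled diffs
    w = np.exp(-0.5 * np.sum(d * d, axis=2))                 # kernel weights
    return (w @ y_train) / np.sum(w, axis=1)
```

With a very small bandwidth, predictions at the training points collapse onto the training targets, which is a quick sanity check.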
Adaptive Modeling: An Approach for Incorporating Nonlinearity in Regression Analyses.
Knafl, George J; Barakat, Lamia P; Hanlon, Alexandra L; Hardie, Thomas; Knafl, Kathleen A; Li, Yimei; Deatrick, Janet A
2017-02-01
Although regression relationships commonly are treated as linear, this often is not the case. An adaptive approach is described for identifying nonlinear relationships based on power transforms of predictor (or independent) variables and for assessing whether or not relationships are distinctly nonlinear. It is also possible to model adaptively both means and variances of continuous outcome (or dependent) variables and to adaptively power transform positive-valued continuous outcomes, along with their predictors. Example analyses are provided of data from parents in a nursing study on emotional-health-related quality of life for childhood brain tumor survivors as a function of the effort to manage the survivors' condition. These analyses demonstrate that relationships, including moderation relationships, can be distinctly nonlinear, that conclusions about means can be affected by accounting for non-constant variances, and that outcome transformation along with predictor transformation can provide distinct improvements and can resolve skewness problems. © 2017 Wiley Periodicals, Inc.
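Adaptive power transformation of a predictor can be illustrated by the simplest possible version: a grid search over candidate powers, scored by the residual sum of squares of a linear fit. The cited adaptive approach uses likelihood-based model selection and also transforms outcomes and variances; this sketch covers only the predictor-transform idea, and its names and candidate grid are assumptions.

```python
def best_power(x, y, powers=(-1.0, -0.5, 0.5, 1.0, 2.0, 3.0)):
    # Score each candidate power p by the residual sum of squares of the
    # simple linear fit y ~ a + b * x^p, and return the best-scoring p.
    def rss_linear(z, y):
        n = len(z)
        zbar = sum(z) / n
        ybar = sum(y) / n
        szz = sum((zi - zbar) ** 2 for zi in z)
        b = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y)) / szz
        a = ybar - b * zbar
        return sum((yi - (a + b * zi)) ** 2 for yi, zi in zip(y, z))
    return min(powers, key=lambda p: rss_linear([xi ** p for xi in x], y))
```

For data generated by y = x², the search recovers the power 2 since the transformed fit is exact.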
Dissociating conflict adaptation from feature integration: a multiple regression approach.
Notebaert, Wim; Verguts, Tom
2007-10-01
Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on feature repetition and/or integration effects (e.g., B. Hommel, R. W. Proctor, & K.-P. Vu, 2004; U. Mayr, E. Awh, & P. Laurey, 2003). Previous attempts to dissociate feature integration from conflict adaptation focused on a particular subset of the data in which feature transitions were held constant (J. G. Kerns et al., 2004) or in which congruency transitions were held constant (C. Akcay & E. Hazeltine, in press), but this has a number of disadvantages. In this article, the authors present a multiple regression solution for this problem and discuss its possibilities and pitfalls.
Adaptive local linear regression with application to printer color management.
Gupta, Maya R; Garcia, Eric K; Chin, Erika
2008-06-01
Local learning methods, such as local linear regression and nearest neighbor classifiers, base estimates on nearby training samples, or neighbors. Usually, the number of neighbors used in estimation is fixed to be a global "optimal" value, chosen by cross validation. This paper proposes adapting the number of neighbors used for estimation to the local geometry of the data, without need for cross validation. The term enclosing neighborhood is introduced to describe a set of neighbors whose convex hull contains the test point when possible. It is proven that enclosing neighborhoods yield bounded estimation variance under some assumptions. Three such enclosing neighborhood definitions are presented: natural neighbors, natural neighbors inclusive, and enclosing k-NN. The effectiveness of these neighborhood definitions with local linear regression is tested for estimating lookup tables for color management. Significant improvements in error metrics are shown, indicating that enclosing neighborhoods may be a promising adaptive neighborhood definition for other local learning tasks as well, depending on the density of training samples.
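In one dimension, an enclosing neighborhood reduces to neighbors that bracket the query point, which makes the adaptive-k idea easy to sketch; the paper's definitions (natural neighbors, enclosing k-NN) are for multivariate data, so this is a deliberately simplified illustration.

```python
def enclosing_local_linear(x_train, y_train, x0, k0=2):
    # Grow the neighborhood from the k0 nearest points until it encloses the
    # query x0 (neighbors on both sides), then fit a local least-squares line.
    order = sorted(range(len(x_train)), key=lambda i: abs(x_train[i] - x0))
    k = k0
    while k < len(x_train):
        xs = [x_train[i] for i in order[:k]]
        if min(xs) <= x0 <= max(xs):     # query enclosed by the neighborhood
            break
        k += 1
    idx = order[:k]
    xs = [x_train[i] for i in idx]
    ys = [y_train[i] for i in idx]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((xi - xbar) ** 2 for xi in xs)
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys)) / sxx
         if sxx else 0.0)
    a = ybar - b * xbar
    return a + b * x0
```

On exactly linear data the local fit reproduces the line, so predicting between two training points gives the interpolated value.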
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to improve significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
Regression Splines with Longitudinal Data
Technology Transfer Automated Retrieval System (TEKTRAN)
In many clinical trial studies, patients are observed and/or measured on multiple occasions. To account for the longitudinal nature of the data, a mixed model analysis implemented using SAS PROC MIXED is commonly used. It is typical to make comparisons between dose or treatment groups, possibly cont...
NASA Technical Reports Server (NTRS)
Zhang, Zhimin; Tomlinson, John; Martin, Clyde
1994-01-01
In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including the polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential, and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate the properties of different spline approximations.
Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael
2000-04-11
A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising, e.g., from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-splines are used to approximate the object. If a "proper" collection of B-splines is chosen, so that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, the intuition being that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is adequate to represent the object over spatially smooth areas. The pruning part is a greedy search algorithm to find
NASA Astrophysics Data System (ADS)
Ng, Pin T.; Maechler, Martin
2015-05-01
COBS (COnstrained B-Splines), written in R, creates constrained regression smoothing splines via linear programming and sparse matrices. The method has two important features: the number and location of knots for the spline fit are established using the likelihood-based Akaike Information Criterion (rather than a heuristic procedure); and fits can be made for quantiles (e.g. 25% and 75% as well as the usual 50%) in the response variable, which is valuable when the scatter is asymmetrical or non-Gaussian. This code is useful, for example, for estimating cluster ages when there is a wide spread in stellar ages at a chosen absorption, as a standard regression line does not give an effective measure of this relationship.
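The linear-programming core of quantile fitting can be sketched for a plain linear model; COBS adds B-spline bases, shape constraints, and AIC-based knot selection on top of this. The formulation is the standard one: minimize sum of tau·u_i + (1-tau)·v_i subject to y_i = x_i'b + u_i - v_i with u, v >= 0.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_fit(X, y, tau):
    # LP for the tau-th regression quantile. Variables are [b+, b-, u, v]:
    # the coefficient vector b = b+ - b- (split to keep variables >= 0),
    # u and v are the positive and negative parts of the residuals.
    n, p = X.shape
    c = np.r_[np.zeros(2 * p), tau * np.ones(n), (1 - tau) * np.ones(n)]
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * p + 2 * n), method='highs')
    return res.x[:p] - res.x[p:2 * p]
```

Median regression (tau = 0.5) is robust to a single gross outlier: with four collinear points and one outlier, the fit passes through the collinear points.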
Hashem, Seyed Yashar Bani; Zin, Nor Azan Mat; Yatim, Noor Faezah Mohd; Ibrahim, Norlinah Mohamed
2014-01-01
Many input devices are available for interacting with computers, but the computer mouse is still the most popular device for interaction. People who suffer from involuntary tremor have difficulty using the mouse in the normal way. The target participants of this research were individuals who suffer from Parkinson's disease. Tremor in the limbs makes accurate mouse movements impossible or difficult without assistive technologies to help. This study explores a new assistive technique, adaptive path smoothing via B-spline (APSS), to enhance mouse control based on the user's tremor level and type. APSS uses mean filtering and a B-spline to provide a smoothed mouse trajectory. Seven participants who have unwanted tremor evaluated APSS. Results show that APSS is very promising and greatly increases their control of the computer mouse. Results of the user acceptance test also show that users perceived APSS as easy to use. They also believe it to be a useful tool and intend to use it once it is available. Future studies could explore the possibility of integrating APSS with an assistive pointing technique, such as the Bubble cursor or the Sticky target technique, to provide an all-in-one solution for motor-disabled users.
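The mean-filtering stage of an APSS-style smoother can be sketched as a sliding-window average over the pointer trajectory. The window size is fixed here, whereas APSS adapts it to the user's tremor level and type, and pairs it with a B-spline fit; both simplifications are assumptions of this sketch.

```python
def mean_filter(points, window=3):
    # Smooth a pointer trajectory: each output point is the average of the
    # raw (x, y) samples inside a sliding window centered on it; the window
    # is truncated at the start and end of the trajectory.
    half = window // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out
```

On an already straight trajectory the filter is the identity away from the endpoints, which is a quick check that it only removes jitter, not motion.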
Adaptation of a weighted regression approach to evaluate water quality trends in anestuary
To improve the description of long-term changes in water quality, a weighted regression approach developed to describe trends in pollutant transport in rivers was adapted to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach allows...
Adaptation of a Weighted Regression Approach to Evaluate Water Quality Trends in an Estuary
To improve the description of long-term changes in water quality, we adapted a weighted regression approach to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach, originally developed to resolve pollutant transport trends in rivers...
Keith, Scott W; Allison, David B
2014-09-29
This paper details the design, evaluation, and implementation of a framework for detecting and modeling nonlinearity between a binary outcome and a continuous predictor variable adjusted for covariates in complex samples. The framework provides familiar-looking parameterizations of output in terms of linear slope coefficients and odds ratios. Estimation methods focus on maximum likelihood optimization of piecewise linear free-knot splines formulated as B-splines. Correctly specifying the optimal number and positions of the knots improves the model, but is marked by computational intensity and numerical instability. Our inference methods utilize both parametric and nonparametric bootstrapping. Unlike other nonlinear modeling packages, this framework is designed to incorporate multistage survey sample designs common to nationally representative datasets. We illustrate the approach and evaluate its performance in specifying the correct number of knots under various conditions with an example using body mass index (BMI; kg/m²) and the complex multi-stage sampling design from the Third National Health and Nutrition Examination Survey to simulate binary mortality outcomes data having realistic nonlinear sample-weighted risk associations with BMI. BMI and mortality data provide a particularly apt example and area of application since BMI is commonly recorded in large health surveys with complex designs, often categorized for modeling, and nonlinearly related to mortality. When complex sample design considerations were ignored, our method was generally similar to or more accurate than two common model selection procedures, Schwarz's Bayesian Information Criterion (BIC) and Akaike's Information Criterion (AIC), in terms of correctly selecting the correct number of knots. Our approach provided accurate knot selections when complex sampling weights were incorporated, while AIC and BIC were not effective under these conditions.
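The knot-number selection problem can be illustrated with a BIC grid search over piecewise-linear splines with quantile-placed knots. This sketch ignores the paper's free-knot optimization, survey weights, and bootstrap inference; the basis, knot placement, and BIC formula are conventional choices, not the authors' implementation.

```python
import numpy as np

def select_knots_bic(x, y, max_knots=5):
    # For each candidate number of interior knots m, place knots at quantiles
    # of x, fit a piecewise-linear (hinge) spline by least squares, and score
    # the fit with BIC = n*log(RSS/n) + p*log(n); return the best m and knots.
    n = len(x)
    best = None
    for m in range(max_knots + 1):
        qs = np.linspace(0, 1, m + 2)[1:-1]
        knots = np.quantile(x, qs) if m else np.array([])
        X = np.column_stack([np.ones(n), x] +
                            [np.maximum(x - k, 0) for k in knots])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        bic = n * np.log(rss / n) + X.shape[1] * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, m, knots)
    return best[1], best[2]
```

For data generated from a one-knot piecewise-linear truth plus small high-frequency noise, BIC selects a single knot: extra knots reduce the residual sum of squares only negligibly and are outweighed by the log(n) penalty.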
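The survey-weighted machinery above is specific to the paper, but its core idea — a piecewise-linear spline basis inside a maximum-likelihood logistic regression — can be sketched in a few lines of NumPy. The knot position, the V-shaped BMI-like risk curve, and all data below are invented for illustration; this is not the authors' implementation and ignores survey weights entirely.

```python
import numpy as np

def hinge_basis(x, knots):
    """Piecewise-linear spline basis: intercept, x, and one hinge
    max(0, x - k) per knot (a linear spline in truncated-power form)."""
    cols = [np.ones_like(x), x] + [np.maximum(0.0, x - k) for k in knots]
    return np.column_stack(cols)

def fit_logistic_irls(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via iteratively
    reweighted least squares (Newton's method)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = p * (1.0 - p) + 1e-9              # Newton weights
        z = X @ beta + (y - p) / w            # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

rng = np.random.default_rng(0)
x = rng.uniform(15, 45, 2000)                 # BMI-like predictor (synthetic)
logit = 0.15 * np.abs(x - 25.0) - 2.0         # V-shaped true risk, bend at 25
y = (rng.uniform(size=x.size) < 1 / (1 + np.exp(-logit))).astype(float)

B = hinge_basis(x, knots=[25.0])
beta = fit_logistic_irls(B, y)
# slope below the knot is beta[1]; above the knot it is beta[1] + beta[2]
print(beta[1] < 0 < beta[1] + beta[2])
```

With a single knot placed at the true bend, the fitted slopes recover the sign change in the log-odds; the paper's harder problem is choosing the number and positions of such knots automatically.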
Technology Transfer Automated Retrieval System (TEKTRAN)
Free-living measurements of 24-h total energy expenditure (TEE) and activity energy expenditure (AEE) are required to better understand the metabolic, physiological, behavioral, and environmental factors affecting energy balance and contributing to the global epidemic of childhood obesity. The spec...
Pax6 in Collembola: Adaptive Evolution of Eye Regression
Hou, Ya-Nan; Li, Sheng; Luan, Yun-Xia
2016-01-01
Unlike the compound eyes in insects, collembolan eyes are comparatively simple: some species have eyes with different numbers of ocelli (1 + 1 to 8 + 8), and some species have no apparent eye structures. Pax6 is a universal master control gene for eye morphogenesis. In this study, full-length Pax6 cDNAs, Fc-Pax6 and Cd-Pax6, were cloned from an eyeless collembolan (Folsomia candida, soil-dwelling) and an eyed one (Ceratophysella denticulata, surface-dwelling), respectively. Their phylogenetic positions are between the two Pax6 paralogs in insects, eyeless (ey) and twin of eyeless (toy), and their protein sequences are more similar to Ey than to Toy. Both Fc-Pax6 and Cd-Pax6 could induce ectopic eyes in Drosophila, while Fc-Pax6 exhibited much weaker transactivation ability than Cd-Pax6. The C-terminus of collembolan Pax6 is indispensable for its transactivation ability, and determines the differences of transactivation ability between Fc-Pax6 and Cd-Pax6. One of the possible reasons is that Fc-Pax6 accumulated more mutations at some key functional sites of the C-terminus under a lower selection pressure on eye development due to the dark habitats of F. candida. The composite data provide the first molecular evidence for the monophyletic origin of collembolan eyes, and indicate that the eye degeneration of collembolans is caused by adaptive evolution. PMID:26856893
I-spline Smoothing for Calibrating Predictive Models.
Wu, Yuan; Jiang, Xiaoqian; Kim, Jihoon; Ohno-Machado, Lucila
2012-01-01
We proposed the I-spline Smoothing approach for calibrating predictive models by solving a nonlinear monotone regression problem. We took advantage of I-spline properties to obtain globally optimal solutions while keeping the computational cost low. Numerical studies based on three datasets showed empirical evidence that I-spline Smoothing improves calibration (1.6x, 1.4x, and 1.4x on the three datasets relative to the average of the competitors: Binning, Platt Scaling, Isotonic Regression, Monotone Spline Smoothing, and Smooth Isotonic Regression) without deterioration of discrimination.
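As a rough illustration of monotone calibration, here is the classical pool-adjacent-violators (PAV) algorithm for isotonic regression — one of the competitors named above, not the I-spline method itself. The toy scores and labels are invented.

```python
import numpy as np

def pav(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    y = np.asarray(y, dtype=float)
    vals, wts, blocks = [], [], []       # block means, weights, (start, end)
    for i, v in enumerate(y):
        vals.append(v); wts.append(1.0); blocks.append((i, i))
        # merge backwards while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2, b2 = vals.pop(), wts.pop(), blocks.pop()
            vals[-1] = (wts[-1] * vals[-1] + w2 * v2) / (wts[-1] + w2)
            wts[-1] += w2
            blocks[-1] = (blocks[-1][0], b2[1])
    fit = np.empty_like(y)
    for v, (s, e) in zip(vals, blocks):
        fit[s:e + 1] = v
    return fit

# outcomes ordered by increasing model score; PAV gives calibrated probabilities
labels = np.array([0.0, 1.0, 0.0, 1.0, 1.0])
cal = pav(labels)
print(cal)   # [0.  0.5 0.5 1.  1. ]
```

The violating pair (1, 0) is pooled into a single block with mean 0.5, producing a monotone step function; a monotone spline fit would smooth these steps.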
Shape Preserving Spline Interpolation
NASA Technical Reports Server (NTRS)
Gregory, J. A.
1985-01-01
A rational spline solution to the problem of shape preserving interpolation is discussed. The rational spline is represented in terms of first derivative values at the knots and provides an alternative to the spline-under-tension. The idea of making the shape control parameters dependent on the first derivative unknowns is then explored. The monotonic or convex shape of the interpolation data can then be preserved automatically through the solution of the resulting non-linear consistency equations of the spline.
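The rational spline with derivative-dependent shape parameters is not reproduced here, but a widely used alternative shape-preserving scheme — piecewise-cubic Hermite interpolation with limited knot derivatives — can be sketched as follows. The harmonic-mean slope rule (Fritsch-Butland style) and the sample data are illustrative choices, not the paper's method.

```python
import numpy as np

def monotone_cubic(x, y, xq):
    """Shape-preserving piecewise-cubic Hermite interpolation with
    harmonic-mean slope limiting at interior knots."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    d = np.diff(y) / h                       # secant slopes
    m = np.empty_like(y)                     # limited derivatives at knots
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, len(x) - 1):
        m[i] = 0.0 if d[i-1] * d[i] <= 0 else 2*d[i-1]*d[i] / (d[i-1] + d[i])
    xq = np.atleast_1d(np.asarray(xq, float))
    out = np.empty_like(xq)
    for j, t in enumerate(xq):
        i = np.clip(np.searchsorted(x, t, side='right') - 1, 0, len(x) - 2)
        s = (t - x[i]) / h[i]
        h00 = (1 + 2*s) * (1 - s)**2; h10 = s * (1 - s)**2   # Hermite basis
        h01 = s * s * (3 - 2*s);      h11 = s * s * (s - 1)
        out[j] = h00*y[i] + h10*h[i]*m[i] + h01*y[i+1] + h11*h[i]*m[i+1]
    return out

xk = np.array([0.0, 1.0, 2.0, 4.0])
yk = np.array([0.0, 0.1, 0.9, 1.0])          # monotone data with sharp bends
f = monotone_cubic(xk, yk, np.linspace(0.0, 4.0, 201))
print(np.all(np.diff(f) >= -1e-12))          # no overshoot between knots
```

An ordinary cubic spline would overshoot at the sharp bends; limiting the knot derivatives preserves the monotone shape of the data automatically, which is the same design goal the rational spline achieves through its consistency equations.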
McCartin, B.J.
1996-12-31
Herein, we discuss a generalization of the classical cubic spline known in the literature as the exponential spline. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain "tension" parameters. We first outline the theoretical underpinnings of the exponential spline. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. We next discuss the numerical computation of the exponential spline. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape-preserving approximant is developed. A sequence of selected curve-fitting examples is presented which clearly demonstrates the advantages of exponential splines over cubic splines. We conclude with a consideration of the broad spectrum of possible uses of exponential splines in applications. Our primary emphasis is on computational fluid dynamics, although the imaginative reader will recognize the wider generality of the techniques developed.
Riaz, Nadeem; Shanker, Piyush; Wiersma, Rodney; Gudmundsson, Olafur; Mao, Weihua; Widrow, Bernard; Xing, Lei
2009-10-07
Intra-fraction tumor tracking methods can improve radiation delivery during radiotherapy sessions. Image acquisition for tumor tracking and subsequent adjustment of the treatment beam with gating or beam tracking introduces time latency and necessitates predicting the future position of the tumor. This study evaluates the use of multi-dimensional linear adaptive filters and support vector regression to predict the motion of lung tumors tracked at 30 Hz. We expand on the prior work of other groups who have looked at adaptive filters by using a general framework of a multiple-input single-output (MISO) adaptive system that uses multiple correlated signals to predict the motion of a tumor. We compare the performance of these two novel methods to conventional methods like linear regression and single-input, single-output adaptive filters. At 400 ms latency the average root-mean-square-errors (RMSEs) for the 14 treatment sessions studied using no prediction, linear regression, single-output adaptive filter, MISO and support vector regression are 2.58, 1.60, 1.58, 1.71 and 1.26 mm, respectively. At 1 s, the RMSEs are 4.40, 2.61, 3.34, 2.66 and 1.93 mm, respectively. We find that support vector regression most accurately predicts the future tumor position of the methods studied and can provide a RMSE of less than 2 mm at 1 s latency. Also, a multi-dimensional adaptive filter framework provides improved performance over single-dimension adaptive filters. Work is underway to combine these two frameworks to improve performance.
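A minimal sketch of the single-input adaptive-filter baseline discussed above, using a normalized LMS update on a synthetic breathing-like trace sampled at a notional 30 Hz (all signal parameters are invented; the MISO and support-vector variants are not shown). For simplicity the filter weights are updated as soon as each target sample becomes available.

```python
import numpy as np

def lms_predict(sig, order=16, mu=0.5, horizon=12):
    """Normalized-LMS adaptive FIR filter predicting sig[t + horizon]
    from the most recent `order` samples."""
    w = np.zeros(order)
    preds = np.zeros_like(sig)
    for t in range(order, len(sig) - horizon):
        u = sig[t - order:t][::-1]              # newest sample first
        preds[t + horizon] = w @ u              # prediction with current weights
        err = sig[t + horizon] - w @ u
        w += mu * err * u / (u @ u + 1e-9)      # normalized LMS update
    return preds

rng = np.random.default_rng(1)
t = np.arange(3000)
breath = np.sin(2*np.pi*t/120) + 0.05*rng.standard_normal(t.size)  # ~4 s period
p = lms_predict(breath, order=16, mu=0.5, horizon=12)              # 400 ms ahead

i = np.arange(1500, 2988)                       # evaluate after convergence
rmse_pred = np.sqrt(np.mean((p[i] - breath[i])**2))
rmse_last = np.sqrt(np.mean((breath[i - 12] - breath[i])**2))      # "no prediction"
print(rmse_pred < rmse_last)
```

As in the study, the adaptive predictor beats the no-prediction baseline (using the last observed position) by a wide margin on quasi-periodic motion.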
Indhumathi, C; Cai, Y Y; Guan, Y Q; Opas, M; Zheng, J
2012-01-01
Confocal laser scanning microscopy has become one of the most powerful tools to visualize and analyze the dynamic behavior of cellular molecules. Photobleaching of fluorochromes is a major problem with confocal image acquisition that leads to intensity attenuation. The photobleaching effect can be reduced by optimizing the collection efficiency of the confocal image by fast z-scanning. However, such images suffer from distortions, particularly in the z dimension, which cause the voxel spacing in the x, y, and z directions to differ from that of the original image stacks. As a result, reliable segmentation and feature extraction of these images may be difficult or even impossible. Image interpolation is especially needed to correct the undersampling artifact in the axial plane of three-dimensional images generated by a confocal microscope and obtain cubic voxels. In this work, we present an adaptive cubic B-spline-based interpolation with the aid of lookup tables, deriving adaptive weights based on local gradients for the sampling nodes in the interpolation formulae. The proposed method thus enhances the axial resolution of confocal images by improving the accuracy of the interpolated value while greatly reducing computational cost. Numerical experimental results confirm the effectiveness of the proposed interpolation approach and demonstrate its superiority in terms of both accuracy and speed compared to other interpolation algorithms.
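The adaptive, gradient-weighted scheme itself is not reproduced here, but the plain (non-adaptive) cubic B-spline kernel it builds on can be sketched as follows; note that without the usual prefiltering step this resampling smooths the profile rather than interpolating it exactly. The data and upsampling factor are illustrative.

```python
import numpy as np

def bspline3(t):
    """Uniform cubic B-spline kernel (support [-2, 2], partition of unity)."""
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = (4 - 6*t[m1]**2 + 3*t[m1]**3) / 6
    out[m2] = (2 - t[m2])**3 / 6
    return out

def upsample_z(column, factor):
    """Resample one axial (z) intensity profile onto a finer grid by
    weighting every sample with the cubic B-spline kernel."""
    n = len(column)
    zq = np.linspace(0, n - 1, (n - 1) * factor + 1)  # fine z positions
    idx = np.arange(n)
    W = bspline3(zq[:, None] - idx[None, :])          # kernel weight table
    return W @ column

profile = np.array([0., 1., 4., 9., 16., 25.])        # toy z-profile
fine = upsample_z(profile, 4)
print(len(fine))   # 21
```

At an integer position z = k the result is (s[k-1] + 4 s[k] + s[k+1]) / 6 — the smoothing behavior noted above; exact interpolation would first pass the samples through the B-spline prefilter. Precomputing the weight table W is the lookup-table idea the abstract mentions.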
Algamal, Zakariya Yahya; Lee, Muhammad Hisyam
2015-12-01
Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, this weight may not be preferable for two reasons: first, the elastic net estimator is biased in selecting genes; second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and to encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly consistent in selecting genes compared to the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification.
NASA Astrophysics Data System (ADS)
Vranish, John M.
1993-06-01
A captured nut member is located within a tool interface assembly and is actuated by a spline screw member driven by a robot end effector. The nut member lowers and rises depending upon the directional rotation of the coupling assembly. The captured nut member further includes two winged segments which project outwardly in diametrically opposite directions so as to engage and disengage a clamping surface in the form of a chamfered notch respectively provided on the upper surface of a pair of parallel forwardly extending arm members of a bifurcated tool stowage holster which is adapted to hold and store a robotic tool including its end effector interface when not in use. A forward and backward motion of the robot end effector operates to insert and remove the tool from the holster.
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
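A bare-bones sketch of kernel PLS smoothing in the spirit described above: extract a few score vectors from a centered RBF Gram matrix and project the response onto them. The kernel width, component count, and test signal are invented for illustration, and the locally-based extension incorporating prior shape knowledge is omitted.

```python
import numpy as np

def kernel_pls_smooth(x, y, n_comp=6, width=0.3):
    """Kernel PLS smoother: extract n_comp orthonormal score vectors
    from a centered RBF Gram matrix and regress y onto them."""
    x = np.asarray(x, float)
    D = x[:, None] - x[None, :]
    K = np.exp(-D**2 / (2 * width**2))          # RBF Gram matrix
    n = len(y)
    C = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    K = C @ K @ C
    r = y - y.mean()
    T = []
    for _ in range(n_comp):
        t = K @ r
        t /= np.linalg.norm(t)                  # normalized score vector
        T.append(t)
        P = np.eye(n) - np.outer(t, t)          # deflate K and the residual
        K = P @ K @ P
        r = r - t * (t @ r)
    T = np.column_stack(T)
    return y.mean() + T @ (T.T @ (y - y.mean()))

rng = np.random.default_rng(4)
xs = np.linspace(0.0, 1.0, 100)
truth = np.sin(2 * np.pi * xs)
ys = truth + 0.3 * rng.standard_normal(xs.size)
fit = kernel_pls_smooth(xs, ys)
err_raw = np.sqrt(np.mean((ys - truth)**2))
err_fit = np.sqrt(np.mean((fit - truth)**2))
print(err_fit < err_raw)
```

The number of extracted components plays the role of the smoothing parameter: few components give a heavily smoothed curve, many components chase the noise.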
NASA Technical Reports Server (NTRS)
Schiess, James R.; Kerr, Patricia A.; Smith, Olivia C.
1988-01-01
Smooth curves drawn among plotted data easily. Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to flexible, smooth representation of experimental data. "Tension" denotes mathematical analog of mechanical tension in spline or other mechanical curve-fitting tool, and "spline" denotes mathematical generalization of tool. Program differs from usual spline under tension, allows user to specify different values of tension between adjacent pairs of knots rather than constant tension over entire range of data. Subroutines use automatic adjustment scheme that varies tension parameter for each interval until maximum deviation of spline from line joining knots less than or equal to amount specified by user. Procedure frees user from drudgery of adjusting individual tension parameters while still giving control over local behavior of spline.
Guo, Yi; Errichello, Robert
2013-08-29
An analytical model is developed to evaluate the design of a spline coupling. For a given torque and shaft misalignment, the model calculates the number of teeth in contact, tooth loads, stiffnesses, stresses, and safety factors. The analytic model provides essential spline coupling design and modeling information and could be easily integrated into gearbox design and simulation tools.
Shi, Ming; Shen, Weiming; Wang, Hong-Qiang; Chong, Yanwen
2016-12-01
Inferring gene regulatory networks (GRNs) from microarray expression data is an important but challenging issue in systems biology. In this study, the authors propose a Bayesian information criterion (BIC)-guided sparse regression approach for GRN reconstruction. This approach can adaptively model GRNs by optimising the l1-norm regularisation of sparse regression based on a modified version of BIC. The use of the regularisation strategy ensures the inferred GRNs to be as sparse as natural, while the modified BIC allows incorporating prior knowledge on expression regulation and thus avoids the overestimation of expression regulators as usual. Especially, the proposed method provides a clear interpretation of combinatorial regulations of gene expression by optimally extracting regulation coordination for a given target gene. Experimental results on both simulation data and real-world microarray data demonstrate the competent performance of discovering regulatory relationships in GRN reconstruction.
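The modified BIC and prior-knowledge terms are specific to the paper, but the underlying recipe — sweep the l1 penalty of a sparse regression and keep the fit that minimizes a BIC score — can be sketched as follows. The toy design matrix, the plain (unmodified) BIC formula, and all dimensions are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Lasso via cyclic coordinate descent (soft-thresholding)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X**2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]     # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def bic_select(X, y, lams):
    """Pick the l1 penalty minimizing BIC = n*log(RSS/n) + k*log(n)."""
    n = len(y)
    best = (np.inf, None, None)
    for lam in lams:
        b = lasso_cd(X, y, lam)
        rss = ((y - X @ b)**2).sum()
        k = np.count_nonzero(b)                      # model size
        bic = n * np.log(rss / n + 1e-12) + k * np.log(n)
        if bic < best[0]:
            best = (bic, lam, b)
    return best

rng = np.random.default_rng(2)
n, p = 120, 20                                       # "target gene" vs 20 candidates
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)                       # unit-norm columns
true = np.zeros(p); true[[0, 3, 7]] = [2.0, -1.5, 1.0]
y = X @ true + 0.05 * rng.standard_normal(n)
bic, lam, beta = bic_select(X, y, lams=np.logspace(-3, 0, 15))
support = sorted(np.flatnonzero(np.abs(beta) > 1e-3))
print(support)
```

The BIC-selected penalty keeps the regression "as sparse as natural": the three true regulators survive while most spurious candidates are thresholded to exactly zero.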
A multiresolution analysis for tensor-product splines using weighted spline wavelets
NASA Astrophysics Data System (ADS)
Kapl, Mario; Jüttler, Bert
2009-09-01
We construct biorthogonal spline wavelets for periodic splines which extend the notion of "lazy" wavelets for linear functions (where the wavelets are simply a subset of the scaling functions) to splines of higher degree. We then use the lifting scheme in order to improve the approximation properties with respect to a norm induced by a weighted inner product with a piecewise constant weight function. Using the lifted wavelets we define a multiresolution analysis of tensor-product spline functions and apply it, as a model problem, to the compression of black-and-white images. This demonstrates that the use of a weight function allows the norm to be adapted to the specific problem.
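A minimal sketch of the lazy-split-plus-lifting idea for linear splines on periodic data. The coefficients below are the standard linear-wavelet predict/update steps, not the paper's weighted lifted construction, and the test signal is invented.

```python
import numpy as np

def lwt_linear(s):
    """One level of the linear-spline lifting transform on periodic data:
    lazy split, predict odds from even neighbors, update evens."""
    even, odd = s[0::2].astype(float), s[1::2].astype(float)
    # predict: each odd sample from the average of its even neighbors
    d = odd - 0.5 * (even + np.roll(even, -1))
    # update: lift evens so the coarse signal preserves the mean
    a = even + 0.25 * (d + np.roll(d, 1))
    return a, d

def ilwt_linear(a, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = a - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    out = np.empty(2 * len(a))
    out[0::2], out[1::2] = even, odd
    return out

sig = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))  # periodic signal
a, d = lwt_linear(sig)
rec = ilwt_linear(a, d)
print(np.allclose(rec, sig))   # True: lifting steps are exactly invertible
```

For a smooth signal the detail coefficients d are tiny (they measure deviation from linear prediction), which is what makes thresholding them effective for compression; replacing the fixed 0.5/0.25 coefficients by weight-dependent ones is the lifting adaptation the abstract describes.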
NASA Astrophysics Data System (ADS)
Strichartz, Robert S.; Usher, Michael
2000-09-01
A general theory of piecewise multiharmonic splines is constructed for a class of fractals (post-critically finite) that includes the familiar Sierpinski gasket, based on Kigami's theory of Laplacians on these fractals. The spline spaces are the analogues of the spaces of C^j piecewise polynomials of degree 2j + 1 on an interval, with nodes at dyadic rational points. We give explicit algorithms for effectively computing multiharmonic functions (solutions of Δ^(j+1)u = 0) and for constructing bases for the spline spaces (for general fractals we need to assume that j is odd), and also for computing inner products of these functions. This enables us to give a finite element method for the approximate solution of fractal differential equations. We give the analogue of Simpson's method for numerical integration on the Sierpinski gasket. We use splines to approximate functions vanishing on the boundary by functions vanishing in a neighbourhood of the boundary.
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate structure between PDD and Analysis-of-Variance, PDD is able to provide simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. In order to address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
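A toy version of the regression-computed decomposition: fit an additive surrogate of orthonormal (Legendre) polynomial terms by least squares, then read variance shares straight off the squared coefficients. The three-input test function, degree, and sample size are invented, and the adaptive sparse term selection is omitted.

```python
import numpy as np

def legendre_basis(u, order):
    """Orthonormal Legendre polynomials (degrees 1..order) under the
    uniform measure on [-1, 1]."""
    P = [np.ones_like(u), u]
    for k in range(1, order):
        P.append(((2*k + 1) * u * P[k] - k * P[k-1]) / (k + 1))
    # E[P_k^2] = 1/(2k+1), so scale by sqrt(2k+1) to orthonormalize
    return np.column_stack([P[k] * np.sqrt(2*k + 1) for k in range(1, order + 1)])

rng = np.random.default_rng(5)
n, order = 400, 3
X = rng.uniform(-1, 1, (n, 3))                  # three uniform random inputs
y = 1.0 + 2.0*X[:, 0] + 0.5*X[:, 1]**2 + 0.05*rng.standard_normal(n)

# additive surrogate: intercept plus degree-1..3 Legendre terms per input
A = np.column_stack([np.ones(n)] + [legendre_basis(X[:, i], order) for i in range(3)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# with orthonormal terms, each input's variance share is the sum of its
# squared coefficients (first-order Sobol'-style indices)
var_parts = coef[1:].reshape(3, order)
S = (var_parts**2).sum(axis=1)
S /= S.sum()
print(np.argmax(S))   # 0: the linear term on input 0 dominates the variance
```

This is the ANOVA link the abstract exploits: orthonormality turns a single cheap least-squares solve into a full variance decomposition, with no extra sampling.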
An adaptive online learning approach for Support Vector Regression: Online-SVR-FID
NASA Astrophysics Data System (ADS)
Liu, Jie; Zio, Enrico
2016-08-01
Support Vector Regression (SVR) is a popular supervised data-driven approach for building empirical models from available data. Like all data-driven methods, under non-stationary environmental and operational conditions it needs to be provided with adaptive learning capabilities, which might become computationally burdensome with large datasets accumulating dynamically. In this paper, a cost-efficient online adaptive learning approach is proposed for SVR by combining Feature Vector Selection (FVS) and Incremental and Decremental Learning. The proposed approach adaptively modifies the model only when different pattern drifts are detected according to proposed criteria. Two tolerance parameters are introduced in the approach to control the computational complexity, reduce the influence of the intrinsic noise in the data and avoid the overfitting problem of SVR. Comparisons of the prediction results are made with other online learning approaches (e.g. NORMA, SOGA, KRLS, Incremental Learning) on several artificial datasets and a real case study concerning time series prediction based on data recorded on a component of a nuclear power generation system. The performance indicators MSE and MARE computed on the test dataset demonstrate the efficiency of the proposed online learning method.
Smoothing spline primordial power spectrum reconstruction
Sealfon, Carolyn; Verde, Licia; Jimenez, Raul
2005-11-15
We reconstruct the shape of the primordial power spectrum (PPS) using a smoothing spline. Our adapted smoothing spline technique provides a complementary method to existing efforts to search for smooth features in the PPS, such as a running spectral index. With this technique we find no significant indication with Wilkinson Microwave Anisotropy Probe first-year data that the PPS deviates from a Harrison-Zeldovich spectrum and no evidence for loss of power on large scales. We also examine the effect on the cosmological parameters of the additional PPS freedom. Smooth variations in the PPS are not significantly degenerate with other cosmological parameters, but the spline reconstruction greatly increases the errors on the optical depth and baryon fraction.
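A discrete stand-in for the smoothing-spline reconstruction above is the Whittaker-Henderson smoother, which penalizes squared second differences exactly as a smoothing spline penalizes curvature. The synthetic "spectrum" shape and penalty weight below are illustrative, not WMAP data.

```python
import numpy as np

def whittaker(y, lam):
    """Discrete smoothing-spline analogue: minimize
    ||y - f||^2 + lam * ||D2 f||^2, with D2 the second-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)          # (n-2, n) second differences
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(3)
k = np.linspace(0.0, 1.0, 200)                 # notional wavenumber axis
true = 1.0 + 0.2 * np.sin(3 * np.pi * k)       # smooth underlying "spectrum"
noisy = true + 0.1 * rng.standard_normal(k.size)
smooth = whittaker(noisy, lam=100.0)

err_raw = np.sqrt(np.mean((noisy - true)**2))
err_smooth = np.sqrt(np.mean((smooth - true)**2))
print(err_smooth < err_raw)
```

The penalty weight lam plays the role of the spline's smoothing parameter: it controls how strongly the reconstruction is pulled toward a featureless (here, linear-trend) spectrum, which is precisely the freedom whose degeneracy with the cosmological parameters the paper examines.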
Clinical Trials: Spline Modeling is Wonderful for Nonlinear Effects.
Cleophas, Ton J
2016-01-01
Traditionally, nonlinear relationships like the smooth shapes of airplanes, boats, and motor cars were constructed from scale models using stretched thin wooden strips, otherwise called splines. In the past decades, mechanical spline methods have been replaced with their mathematical counterparts. The objective of the study was to study whether spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial and also to study whether it can detect patterns in a trial that are relevant but go unobserved with simpler regression models. A clinical trial assessing the effect of quantity of care on quality of care was used as an example. Spline curves consisting of 4 or 5 cubic functions were applied. SPSS statistical software was used for analysis. The spline curves of our data outperformed the traditional curves because (1) unlike the traditional curves, they did not miss the top quality of care given in either subgroup, (2) unlike the traditional curves, they, rightly, did not produce sinusoidal patterns, and (3) unlike the traditional curves, they provided a virtually 100% match of the original values. We conclude that (1) spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial; (2) spline modeling can detect patterns in a trial that are relevant but may go unobserved with simpler regression models; (3) in clinical research, spline modeling has great potential given the presence of many nonlinear effects in this field of research and given its sophisticated mathematical refinement to fit any nonlinear effect in the most accurate way; and (4) spline modeling should make it possible to improve predictions from clinical research for the benefit of health decisions and health care. We hope that this brief introduction to spline modeling will stimulate clinical investigators to start using this wonderful method.
NASA Astrophysics Data System (ADS)
Chelariu, Romeu; Suditu, Gabriel Dan; Mareci, Daniel; Bolat, Georgiana; Cimpoesu, Nicanor; Leon, Florin; Curteanu, Silvia
2015-04-01
The aim of this study is to investigate the electrochemical behavior of some dental metallic materials in artificial saliva for different pH (5.6 and 3.4), NaF content (500 ppm, 1000 ppm, and 2000 ppm), and with albumin protein addition (0.6 wt.%) for pH 3.4. The corrosion resistance of the alloys was quantitatively evaluated by polarization resistance, estimated by the electrochemical impedance spectroscopy method. An adaptive k-nearest-neighbor regression method was applied for evaluating the corrosion resistance of the alloys by simulation, as a function of the operating conditions. The predictions provided by the model are useful for experimental practice, as they can replace or, at least, help to plan the experiments. The accurate results obtained prove that the developed model is reliable and efficient.
NASA Astrophysics Data System (ADS)
Tai, Shen-Chuan; Chen, Peng-Yu; Chao, Chian-Yen
2016-07-01
The Consultative Committee for Space Data Systems proposed an efficient image compression standard capable of lossless compression (CCSDS-ICS). CCSDS-ICS is the most widely utilized standard for satellite communications. However, the original CCSDS-ICS is weak in terms of error resilience, with even a single incorrect bit possibly causing numerous missing pixels. A restoration algorithm based on the neighborhood similar pixel interpolator is proposed to fill in missing pixels. The linear regression model is used to generate the reference image from other panchromatic or multispectral images. Furthermore, an adaptive search window is utilized to sieve out similar pixels from the pixels in the search region defined in the neighborhood similar pixel interpolator. The experimental results show that the proposed methods are capable of reconstructing missing regions with good visual quality.
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1994-01-01
Scientific data often contains random errors that make plotting and curve-fitting difficult. The Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to a flexible, smooth representation of experimental data. The user sets the conditions for each consecutive pair of knots (knots are user-defined divisions in the data set): to apply no tension; to apply fixed tension; or to determine tension with a tension adjustment algorithm. The user also selects the number of knots, the knot abscissas, and the allowed maximum deviations from line segments. The selection of these quantities depends on the actual data and on the requirements of a particular application. This program differs from the usual spline under tension in that it allows the user to specify different tension values between each adjacent pair of knots rather than a constant tension over the entire data range. The subroutines use an automatic adjustment scheme that varies the tension parameter for each interval until the maximum deviation of the spline from the line joining the knots is less than or equal to a user-specified amount. This procedure frees the user from the drudgery of adjusting individual tension parameters while still giving control over the local behavior of the spline. The Rational Spline program was written completely in FORTRAN for implementation on a CYBER 850 operating under NOS. It has a central memory requirement of approximately 1500 words. The program was released in 1988.
NASA Astrophysics Data System (ADS)
Börger, Klaus; Schmidt, Michael; Dettmering, Denise; Limberger, Marco; Erdogan, Eren; Seitz, Florian; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte; Mrotzek, Niclas
2016-04-01
Today, the observations of space geodetic techniques are usually available with a rather low latency which applies to space missions observing the solar terrestrial environment, too. Therefore, we can use all these measurements in near real-time to compute and to provide ionosphere information, e.g. the vertical total electron content (VTEC). GSSAC and BGIC support a project aiming at a service for providing ionosphere information. This project is called OPTIMAP, meaning "Operational Tool for Ionosphere Mapping and Prediction"; the scientific work is mainly done by the German Geodetic Research Institute of the Technical University Munich (DGFI-TUM) and the Institute for Astrophysics of the University of Goettingen (IAG). The OPTIMAP strategy for providing ionosphere target quantities of high quality, such as VTEC or the electron density, includes mathematical approaches and tools allowing for the model adaptation to the real observational scenario as a significant improvement w.r.t. the traditional well-established methods. For example, OPTIMAP combines different observation types such as GNSS (GPS, GLONASS), Satellite Altimetry (Jason-2), DORIS as well as radio-occultation measurements (FORMOSAT#3/COSMIC). All these observations run into a Kalman-filter to compute global ionosphere maps, i.e. VTEC, for the current instant of time and as a forecast for a couple of subsequent days. Mathematically, the global VTEC is set up as a series expansion in terms of two-dimensional basis functions defined as tensor products of trigonometric B-splines for longitude and polynomial B-splines for latitude. Compared to the classical spherical harmonics, B-splines have a localizing character and, therefore, can handle an inhomogeneous data distribution properly. Finally, B-splines enable a so-called multi-resolution-representation (MRR) enabling the combination of global and regional modelling approaches. In addition to the geodetic measurements, Sun observations are pre
NASA Astrophysics Data System (ADS)
Zheng, Zhihui; Gao, Lei; Xiao, Liping; Zhou, Bin; Gao, Shibo
2015-12-01
Our purpose is to develop a detection algorithm capable of searching for generic interest objects in real time without large training sets and long-time training stages. Instead of the classical sliding window object detection paradigm, we employ an objectness measure to produce a small set of candidate windows efficiently using Binarized Normed Gradients and a Laplacian of Gaussian-like filter. We then extract Locally Adaptive Regression Kernels (LARKs), which measure the likeness of a pixel to its surroundings, as descriptors from both a model image and the candidate windows. Using a matrix cosine similarity measure, the algorithm yields a scalar resemblance map, indicating the likelihood of similarity between the model and the candidate windows. By employing nonparametric significance tests and non-maxima suppression, we detect the presence of objects similar to the given model. Experiments show that the proposed detection paradigm can automatically detect the presence, the number, as well as the location of objects similar to the given model. The high quality and efficiency of our method make it suitable for real time multi-category object detection applications.
Bayesian B-spline mapping for dynamic quantitative traits.
Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong
2012-04-01
Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expression in the RR framework, B-splines have proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on maximum likelihood; (2) for a simulated dataset whose complicated growth curve is generated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for a dataset simulated using Legendre polynomials, Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.
Wavelets based on Hermite cubic splines
NASA Astrophysics Data System (ADS)
Cvejnová, Daniela; Černá, Dana; Finěk, Václav
2016-06-01
In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines were proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse in the sense that the number of nonzero elements in any column is bounded by a constant. Then, a matrix-vector multiplication in adaptive wavelet methods can be performed exactly with linear complexity for any second order differential equation with constant coefficients. In this contribution, we briefly review these constructions and propose a new wavelet which leads to improved Riesz constants. The new wavelets have four vanishing moments.
Technology Transfer Automated Retrieval System (TEKTRAN)
Prediction equations of energy expenditure (EE) using accelerometers and miniaturized heart rate (HR) monitors have been developed in older children and adults but not in preschool-aged children. Because the relationships between accelerometer counts (ACs), HR, and EE are confounded by growth and ma...
Spline screw payload fastening system
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
1993-01-01
A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorally located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotating about it, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the...
Mathematical research on spline functions
NASA Technical Reports Server (NTRS)
Horner, J. M.
1973-01-01
One approach to spline functions is to grossly estimate the integrand in J and exactly solve the resulting problem. If the integrand in J is approximated by Y'' squared, the resulting problem lends itself to exact solution: the familiar cubic spline. Another approach is to investigate various approximations to the integrand in J and attempt to solve the resulting problems. The results are described.
Theory, computation, and application of exponential splines
NASA Technical Reports Server (NTRS)
Mccartin, B. J.
1981-01-01
A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape-preserving approximant is developed. A sequence of selected curve-fitting examples is presented which clearly demonstrates the advantages of exponential splines over cubic splines.
Quadratic spline subroutine package
Rasmussen, Lowell A.
1982-01-01
A continuous piecewise quadratic function with a continuous first derivative is devised for approximating a single-valued, but unknown, function represented by a set of discrete points. The quadratic is proposed as a treatment intermediate between using the angular (but reliable, easily constructed and manipulated) piecewise linear function and using the smoother (but occasionally erratic) cubic spline. Neither iteration nor the solution of a system of simultaneous equations is necessary for determining the coefficients. Several properties of the quadratic function are given. A set of five short FORTRAN subroutines is provided for generating the coefficients (QSC), finding function values and derivatives (QSY), integrating (QSI), finding extrema (QSE), and computing arc length and the curvature-squared integral (QSK). (USGS)
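The no-linear-system property claimed above is easy to see in a short sketch (plain Python of my own construction, not a translation of the USGS FORTRAN): continuity of the value and the first derivative determines each quadratic piece from the previous one.

```python
def quad_spline_coeffs(x, y, m0=0.0):
    """Per-interval (a, b, c) with s(t) = a + b*(t - x[i]) + c*(t - x[i])**2.

    Continuity of s and s' fixes each interval once the initial slope m0
    is chosen; no simultaneous equations are needed."""
    coeffs, m = [], m0
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        c = (y[i + 1] - y[i] - m * h) / h ** 2
        coeffs.append((y[i], m, c))
        m = m + 2.0 * c * h          # slope carried to the next knot
    return coeffs

def quad_spline_eval(x, coeffs, t):
    i = max(j for j in range(len(coeffs)) if x[j] <= t) if t > x[0] else 0
    a, b, c = coeffs[i]
    d = t - x[i]
    return a + b * d + c * d * d

x = [0.0, 1.0, 2.0, 3.0]
y = [t * t for t in x]                   # samples of f(t) = t**2
cs = quad_spline_coeffs(x, y, m0=0.0)    # exact slope of t**2 at t = 0
```

With the exact starting slope, the spline reproduces the underlying quadratic on every interval, which illustrates why the construction needs neither iteration nor a simultaneous solve.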
NASA Astrophysics Data System (ADS)
Ijima, Yusuke; Nose, Takashi; Tachibana, Makoto; Kobayashi, Takao
In this paper, we propose a rapid model adaptation technique for emotional speech recognition which enables us to extract paralinguistic information as well as linguistic information contained in speech signals. This technique is based on style estimation and style adaptation using a multiple-regression HMM (MRHMM). In the MRHMM, the mean parameters of the output probability density function are controlled by a low-dimensional parameter vector, called a style vector, which corresponds to a set of the explanatory variables of the multiple regression. The recognition process consists of two stages. In the first stage, the style vector that represents the emotional expression category and the intensity of its expressiveness for the input speech is estimated on a sentence-by-sentence basis. Next, the acoustic models are adapted using the estimated style vector, and then standard HMM-based speech recognition is performed in the second stage. We assess the performance of the proposed technique in the recognition of simulated emotional speech uttered by both professional narrators and non-professional speakers.
Super-Drizzle: Applications of Adaptive Kernel Regression in Astronomical Imaging
2006-01-01
and reconstruction. In contrast to the parametric methods, which rely on a specific model of the signal of interest, non-parametric methods rely on... the data itself to dictate the structure of the model, in which case this implicit model is referred to as a regression function. We promote the use... takes advantage of a generic model that is appropriate for reconstructing images contaminated with different noise models, including additive Gaussian
Self-Aligning, Spline-Locking Fastener
NASA Technical Reports Server (NTRS)
Vranish, John M.
1992-01-01
Self-aligning, spline-locking fastener is two-part mechanism operated by robot, using one tool and simple movements. Spline nut on spring-loaded screw passes through mating spline fitting. Operator turns screw until vertical driving surfaces on spline nut rest against corresponding surfaces of spline fitting. Nut rides upward, drawing pieces together. Used to join two parts of structure, to couple vehicles, or to mount payload in vehicle.
A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression
Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin
2012-01-01
To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on nonlinear least-squares support vector regression (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time; the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves a near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, as compared to other recently proposed robust beamforming techniques.
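The regression core of such an approach can be sketched apart from the array-processing details (steering vectors, sidelobe control, and the sparsity/novelty criterion are omitted; the kernel width and regularization below are arbitrary illustrative values). LS-SVR with a Gaussian kernel reduces to one linear solve of (K + I/gamma) alpha = y:

```python
import math

def gauss_kernel(a, b, sigma):
    return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvr_fit(xs, ys, sigma, gamma):
    # LS-SVR dual: (K + I/gamma) alpha = y, one dual weight per sample.
    n = len(xs)
    K = [[gauss_kernel(xs[i], xs[j], sigma) + (1.0 / gamma if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def lssvr_predict(xs, alpha, sigma, t):
    return sum(a * gauss_kernel(xi, t, sigma) for a, xi in zip(alpha, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(v) for v in xs]
alpha = lssvr_fit(xs, ys, sigma=0.5, gamma=100.0)
pred = lssvr_predict(xs, alpha, 0.5, 1.0)
```

A defining property of the LS-SVR solution is that the training residual at each sample equals alpha_i/gamma, which follows directly from the dual system above.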
Astronomical Methods for Nonparametric Regression
NASA Astrophysics Data System (ADS)
Steinhardt, Charles L.; Jermyn, Adam
2017-01-01
I will discuss commonly used techniques for nonparametric regression in astronomy. We find that several of them, particularly running averages and running medians, are generically biased, asymmetric between dependent and independent variables, and perform poorly in recovering the underlying function, even when errors are present only in one variable. We then examine less-commonly used techniques such as Multivariate Adaptive Regression Splines (MARS) and Boosted Trees and find them superior in bias, asymmetry, and variance both theoretically and in practice under a wide range of numerical benchmarks. In this context the chief advantage of the common techniques is runtime, which even for large datasets is now measured in microseconds compared with milliseconds for the more statistically robust techniques. This points to a tradeoff between bias, variance, and computational resources which in recent years has shifted heavily in favor of the more advanced methods, primarily driven by Moore's Law. Along these lines, we also propose a new algorithm which has better overall statistical properties than all techniques examined thus far, at the cost of significantly worse runtime, in addition to providing guidance on choosing the nonparametric regression technique most suitable to any specific problem. We then examine the more general problem of errors in both variables and provide a new algorithm which performs well in most cases and lacks the clear asymmetry of existing non-parametric methods, which fail to account for errors in both variables.
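The bias of running averages referred to above is easy to exhibit numerically. The following toy check (my own, not from the paper) uses noiseless samples of a convex function, where a centered running mean, and here even the running median, overshoot the true minimum:

```python
def running_mean(ys, half):
    # Centered moving average, window truncated at the boundaries.
    return [sum(ys[max(0, i - half): i + half + 1]) /
            len(ys[max(0, i - half): i + half + 1]) for i in range(len(ys))]

def running_median(ys, half):
    out = []
    for i in range(len(ys)):
        w = sorted(ys[max(0, i - half): i + half + 1])
        out.append(w[len(w) // 2])
    return out

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [v * v for v in xs]          # noiseless convex data, y = x**2
rm = running_mean(ys, half=1)
rmed = running_median(ys, half=1)
bias_at_zero = rm[2] - ys[2]      # true value at x = 0 is 0
```

Even with no noise at all, both estimators return a strictly positive value at the minimum of the convex function, which is the generic bias the abstract describes.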
Density Deconvolution With EPI Splines
2015-09-01
to, three main categories: signal processing, image processing, and probability density estimation. Epi-spline technology has also been used in... refers to the unknown nature of the input signal. One major application of blind deconvolution algorithms is in image processing. In this field, an... literature, historical medical data, and a scenario in uncertainty quantification in fluid dynamics. Results show that deconvolution via epi-splines is
Yoo, Yun Joo; Sun, Lei; Poirier, Julia G.; Paterson, Andrew D.
2016-01-01
ABSTRACT By jointly analyzing multiple variants within a gene, instead of one at a time, gene‐based multiple regression can improve power, robustness, and interpretation in genetic association analysis. We investigate multiple linear combination (MLC) test statistics for analysis of common variants under realistic trait models with linkage disequilibrium (LD) based on HapMap Asian haplotypes. MLC is a directional test that exploits LD structure in a gene to construct clusters of closely correlated variants recoded such that the majority of pairwise correlations are positive. It combines variant effects within the same cluster linearly, and aggregates cluster‐specific effects in a quadratic sum of squares and cross‐products, producing a test statistic with reduced degrees of freedom (df) equal to the number of clusters. By simulation studies of 1000 genes from across the genome, we demonstrate that MLC is a well‐powered and robust choice among existing methods across a broad range of gene structures. Compared to minimum P‐value, variance‐component, and principal‐component methods, the mean power of MLC is never much lower than that of other methods, and can be higher, particularly with multiple causal variants. Moreover, the variation in gene‐specific MLC test size and power across 1000 genes is less than that of other methods, suggesting it is a complementary approach for discovery in genome‐wide analysis. The cluster construction of the MLC test statistics helps reveal within‐gene LD structure, allowing interpretation of clustered variants as haplotypic effects, while multiple regression helps to distinguish direct and indirect associations. PMID:27885705
NASA Astrophysics Data System (ADS)
Gupta, Kinjal Dhar; Vilalta, Ricardo; Asadourian, Vicken; Macri, Lucas
2014-05-01
We describe an approach to automate the classification of Cepheid variable stars into two subtypes according to their pulsation mode. Automating such classification is relevant to obtain a precise determination of distances to nearby galaxies, which in addition helps reduce the uncertainty in the current expansion of the universe. One main difficulty lies in the compatibility of models trained using different galaxy datasets; a model trained using a training dataset may be ineffectual on a testing set. A solution to such difficulty is to adapt predictive models across domains; this is necessary when the training and testing sets do not follow the same distribution. The gist of our methodology is to train a predictive model on a nearby galaxy (e.g., Large Magellanic Cloud), followed by a model-adaptation step to make the model operable on other nearby galaxies. We follow a parametric approach to density estimation by modeling the training data (anchor galaxy) using a mixture of linear models. We then use maximum likelihood to compute the right amount of variable displacement, until the testing data closely overlaps the training data. At that point, the model can be directly used in the testing data (target galaxy).
Examination of the Circle Spline Routine
NASA Technical Reports Server (NTRS)
Dolin, R. M.; Jaeger, D. L.
1985-01-01
The Circle Spline routine is currently being used for generating both two and three dimensional spline curves. It was developed for use in ESCHER, a mesh generating routine written to provide a computationally simple and efficient method for building meshes along curved surfaces. Circle Spline is a parametric linear blending spline. Because many computerized machining operations involve circular shapes, the Circle Spline is well suited for both the design and manufacturing processes and shows promise as an alternative to the spline methods currently supported by the Initial Graphics Exchange Specification (IGES).
Borowsky, Richard
2013-07-11
The forces driving the evolutionary loss or simplification of traits such as vision and pigmentation in cave animals are still debated. Three alternative hypotheses are direct selection against the trait, genetic drift, and indirect selection due to antagonistic pleiotropy. Recent work establishes that Astyanax cavefish exhibit vibration attraction behavior (VAB), a presumed behavioral adaptation to finding food in the dark not exhibited by surface fish. Genetic analysis revealed two regions in the genome with quantitative trait loci (QTL) for both VAB and eye size. These observations were interpreted as genetic evidence that selection for VAB indirectly drove eye regression through antagonistic pleiotropy and, further, that this is a general mechanism to account for regressive evolution. These conclusions are unsupported by the data; the analysis fails to establish pleiotropy and ignores the numerous other QTL that map to, and potentially interact, in the same regions. It is likely that all three forces drive evolutionary change. We will be able to distinguish among them in individual cases only when we have identified the causative alleles and characterized their effects.
Numerical Methods Using B-Splines
NASA Technical Reports Server (NTRS)
Shariff, Karim; Merriam, Marshal (Technical Monitor)
1997-01-01
The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.
L1 Splines with Locally Computed Coefficients
2013-01-01
Fang. Univariate Cubic L1 Interpolating Splines: Analytical Results for Linearity, Convexity and Oscillation on 5-Point Windows, Algorithms, (07 2010). 0. doi: 10.3390/a3030276 07/21/2011 2.00 Lu Yu, Qingwei Jin, John E. Lavery, Shu-Cherng Fang. Univariate Cubic L1 Interpolating Splines: Spline ... Qingwei Jin, Lu Yu, John E. Lavery, Shu-Cherng Fang. Univariate cubic L1 interpolating splines based on the first derivative and on 5-point windows
Weighted cubic and biharmonic splines
NASA Astrophysics Data System (ADS)
Kvasov, Boris; Kim, Tae-Wan
2017-01-01
In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with the successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate the main features of this original approach.
Cubic spline functions for curve fitting
NASA Technical Reports Server (NTRS)
Young, J. D.
1972-01-01
FORTRAN cubic spline routine mathematically fits curve through given ordered set of points so that fitted curve nearly approximates curve generated by passing an infinitely thin spline through the set of points. Generalized formulation includes trigonometric, hyperbolic, and damped cubic spline fits of third order.
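The idea behind such a routine can be sketched in modern terms (plain Python rather than FORTRAN, and a natural cubic spline rather than the routine's generalized fits; this is an illustration, not a port of the NASA code):

```python
def natural_cubic_spline(x, y):
    """Knot second derivatives M of the natural cubic spline (M[0] = M[-1] = 0)."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    # Tridiagonal system for interior second derivatives (Thomas algorithm).
    a, b, c, d = [], [], [], []
    for i in range(1, n - 1):
        a.append(h[i - 1])
        b.append(2.0 * (h[i - 1] + h[i]))
        c.append(h[i])
        d.append(6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1]))
    for i in range(1, len(b)):                    # forward elimination
        f = a[i] / b[i - 1]
        b[i] -= f * c[i - 1]
        d[i] -= f * d[i - 1]
    m = [0.0] * n
    for i in range(len(b) - 1, -1, -1):           # back substitution
        m[i + 1] = (d[i] - c[i] * m[i + 2]) / b[i]
    return m

def spline_eval(x, y, m, t):
    i = 0
    while i < len(x) - 2 and t > x[i + 1]:
        i += 1
    h = x[i + 1] - x[i]
    A, B = (x[i + 1] - t) / h, (t - x[i]) / h
    return (A * y[i] + B * y[i + 1]
            + ((A ** 3 - A) * m[i] + (B ** 3 - B) * m[i + 1]) * h * h / 6.0)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 0.0, 2.0]
m = natural_cubic_spline(xs, ys)
```

The spline interpolates every knot, and for straight-line data all second derivatives vanish, so the fit degenerates to the line itself.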
Hermite cubic spline multi-wavelets on the cube
NASA Astrophysics Data System (ADS)
Cvejnová, Daniela; Černá, Dana; Finěk, Václav
2015-11-01
In 2000, W. Dahmen et al. proposed a construction of Hermite cubic spline multi-wavelets adapted to the interval [0, 1]. Later, several simpler constructions of wavelet bases based on Hermite cubic splines were proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse in the sense that the number of non-zero elements in each column is bounded by a constant. Then, a matrix-vector multiplication in adaptive wavelet methods can be performed exactly with linear complexity for any second order differential equation with constant coefficients. In this contribution, we briefly review these constructions, use an anisotropic tensor product to obtain bases on the cube [0, 1]3, and compare their condition numbers.
Strappini, Francesca; Gilboa, Elad; Pitzalis, Sabrina; Kay, Kendrick; McAvoy, Mark; Nehorai, Arye; Snyder, Abraham Z
2017-03-01
Temporal and spatial filtering of fMRI data is often used to improve statistical power. However, conventional methods, such as smoothing with fixed-width Gaussian filters, remove fine-scale structure in the data, necessitating a tradeoff between sensitivity and specificity. Specifically, smoothing may increase sensitivity (reduce noise and increase statistical power) but at the cost of specificity, in that fine-scale structure in neural activity patterns is lost. Here, we propose an alternative smoothing method based on Gaussian process (GP) regression for single-subject fMRI experiments. This method adapts the level of smoothing on a voxel-by-voxel basis according to the characteristics of the local neural activity patterns. GP-based fMRI analysis has been heretofore impractical owing to computational demands. Here, we demonstrate a new implementation of GP that makes it possible to handle the massive data dimensionality of the typical fMRI experiment. We demonstrate how GP can be used as a drop-in replacement for conventional preprocessing steps for temporal and spatial smoothing in a standard fMRI pipeline. We present simulated and experimental results that show the increased sensitivity and specificity compared to conventional smoothing strategies. Hum Brain Mapp 38:1438-1459, 2017. © 2016 Wiley Periodicals, Inc.
Herndon, Nic; Caragea, Doina
2016-01-01
Supervised classifiers are highly dependent on abundant labeled training data. Alternatives for addressing the lack of labeled data include: labeling data (but this is costly and time consuming); training classifiers with abundant data from another domain (however, the classification accuracy usually decreases as the distance between domains increases); or complementing the limited labeled data with abundant unlabeled data from the same domain and learning semi-supervised classifiers (but the unlabeled data can mislead the classifier). A better alternative is to use both the abundant labeled data from a source domain, the limited labeled data and optionally the unlabeled data from the target domain to train classifiers in a domain adaptation setting. We propose two such classifiers, based on logistic regression, and evaluate them for the task of splice site prediction – a difficult and essential step in gene prediction. Our classifiers achieved high accuracy, with highest areas under the precision-recall curve between 50.83% and 82.61%. PMID:26849871
Semiparametric regression during 2003–2007*
Ruppert, David; Wand, M.P.; Carroll, Raymond J.
2010-01-01
Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800
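The low-rank penalized-spline idea mentioned above can be shown in miniature (my own toy construction with illustrative knots and penalty; the review covers far more general formulations): a truncated-line basis with a ridge penalty on the knot coefficients only, leaving the linear part unpenalized.

```python
def basis(t, knots):
    # Truncated-line basis: intercept, slope, and one hinge per knot.
    return [1.0, t] + [max(0.0, t - k) for k in knots]

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_penalized_spline(xs, ys, knots, lam):
    p = 2 + len(knots)
    B = [basis(t, knots) for t in xs]
    BtB = [[sum(B[r][i] * B[r][j] for r in range(len(xs))) for j in range(p)]
           for i in range(p)]
    Bty = [sum(B[r][i] * ys[r] for r in range(len(xs))) for i in range(p)]
    for i in range(2, p):          # penalize only the spline (knot) terms
        BtB[i][i] += lam
    return solve(BtB, Bty)

xs = [i / 10.0 for i in range(11)]
ys = [2.0 * t + 1.0 for t in xs]                 # exactly linear data
beta = fit_penalized_spline(xs, ys, [0.3, 0.6], lam=0.1)
pred = sum(b * f for b, f in zip(beta, basis(0.45, [0.3, 0.6])))
```

Because the penalty shrinks only the hinge terms, linear data are reproduced exactly with zero knot coefficients; treating those penalized coefficients as random effects is precisely the mixed-model connection the abstract mentions.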
CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.
Wilke, Marko; Altaye, Mekibib; Holland, Scott K
2017-01-01
Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.
Spline screw multiple rotations mechanism
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
1993-01-01
A system for coupling two bodies together and for transmitting torque from one body to another with mechanical timing and sequencing is reported. The mechanical timing and sequencing is handled so that the following criteria are met: (1) the bodies are handled in a safe manner and nothing floats loose in space, (2) electrical connectors are engaged as long as possible so that the internal processes can be monitored throughout by sensors, and (3) electrical and mechanical power and signals are coupled. The first body has a splined driver for providing the input torque. The second body has a threaded drive member capable of rotation and limited translation. The embedded drive member will mate with and fasten to the splined driver. The second body has an embedded bevel gear member capable of rotation and limited translation. This bevel gear member is coaxial with the threaded drive member. A compression spring provides a preload on the rotating threaded member, and a thrust bearing is used for limiting the translation of the bevel gear member so that when the bevel gear member reaches the upward limit of its translation the two bodies are fully coupled and the bevel gear member then rotates due to the input torque transmitted from the splined driver through the threaded drive member to the bevel gear member. An output bevel gear with an attached output drive shaft is embedded in the second body and meshes with the threaded rotating bevel gear member to transmit the input torque to the output drive shaft.
Regularization of B-Spline Objects.
Xu, Guoliang; Bajaj, Chandrajit
2011-01-01
By a d-dimensional B-spline object (denoted as ), we mean a B-spline curve (d = 1), a B-spline surface (d = 2) or a B-spline volume (d = 3). By regularization of a B-spline object we mean the process of relocating the control points of such that they approximate an isometric map of its definition domain in certain directions and is shape preserving. In this paper we develop an efficient regularization method for , d = 1, 2, 3 based on solving weak form L(2)-gradient flows constructed from the minimization of certain regularizing energy functionals. These flows are integrated via the finite element method using B-spline basis functions. Our experimental results demonstrate that our new regularization method is very effective.
Stochastic dynamic models and Chebyshev splines
Fan, Ruzong; Zhu, Bin; Wang, Yuedong
2015-01-01
In this article, we establish a connection between a stochastic dynamic model (SDM) driven by a linear stochastic differential equation (SDE) and a Chebyshev spline, which enables researchers to borrow strength across fields both theoretically and numerically. We construct a differential operator for the penalty function and develop a reproducing kernel Hilbert space (RKHS) induced by the SDM and the Chebyshev spline. The general form of the linear SDE allows us to extend the well-known connection between an integrated Brownian motion and a polynomial spline to a connection between more complex diffusion processes and Chebyshev splines. One interesting special case is the connection between an integrated Ornstein–Uhlenbeck process and an exponential spline. We use two real data sets to illustrate the integrated Ornstein–Uhlenbeck process model and the exponential spline model and show that their estimates are almost identical. PMID:26045632
Improved Spline Coupling For Robotic Docking
NASA Technical Reports Server (NTRS)
Vranish, John M.
1995-01-01
Robotic docking mechanism like one described in "Self-Aligning Mechanical and Electrical Coupling" (GSC-13430) improved. Spline coupling redesigned to reduce stresses, enhancing performance and safety of mechanism. Does not involve significant increase in size. Convex spherical surfaces on spline driver mate with concave spherical surfaces on undersides of splines in receptacle. Spherical surfaces distribute load stresses better and tolerate misalignments better than flat and otherwise shaped surfaces.
Monotone and convex quadratic spline interpolation
NASA Technical Reports Server (NTRS)
Lam, Maria H.
1990-01-01
A method for producing interpolants that preserve the monotonicity and convexity of discrete data is described. It utilizes the quadratic spline proposed by Schumaker (1983) which was subsequently characterized by De Vore and Yan (1986). The selection of first order derivatives at the given data points is essential to this spline. An observation made by De Vore and Yan is generalized, and an improved method to select these derivatives is proposed. The resulting spline is completely local, efficient, and simple to implement.
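As a rough illustration of the shape-preservation idea above, the following sketch compares a monotone interpolant against an ordinary cubic spline on monotone data. Note this uses SciPy's PCHIP scheme (a monotone cubic Hermite method), not Schumaker's quadratic spline described in the abstract.

```python
# Monotone interpolation sketch. PCHIP preserves the monotonicity of
# the data; an unconstrained cubic spline may overshoot between knots.
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 2.0, 2.1])   # monotone increasing data

pchip = PchipInterpolator(x, y)            # shape-preserving
cubic = CubicSpline(x, y)                  # no shape constraint

xs = np.linspace(0.0, 4.0, 401)
print("PCHIP monotone:", bool(np.all(np.diff(pchip(xs)) >= -1e-12)))
print("Cubic monotone:", bool(np.all(np.diff(cubic(xs)) >= -1e-12)))
```

The key design point, as in the abstract, is the choice of first-order derivatives at the data points: PCHIP limits them so no interval can introduce a spurious extremum.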
Algorithms for spline and other approximations to functions and data
NASA Astrophysics Data System (ADS)
Phillips, G. M.; Taylor, P. J.
1992-12-01
A succinct introduction to splines, explaining how and why B-splines are used as a basis and how cubic and quadratic splines may be constructed, is followed by a brief account of Hermite interpolation and Padé approximations.
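A minimal sketch of B-splines used as a basis, as introduced above. It assumes SciPy 1.8 or later for `BSpline.design_matrix`; the knot vector and domain are arbitrary choices for illustration.

```python
# Evaluate a clamped cubic B-spline basis on [0, 4] and verify the
# partition-of-unity property: the basis functions sum to 1 everywhere
# on the domain.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                        # cubic
t = np.r_[[0.0] * 4, [1.0, 2.0, 3.0], [4.0] * 4]   # clamped knot vector
xs = np.linspace(0.0, 4.0, 101)

B = BSpline.design_matrix(xs, t, k).toarray()      # 101 x 7 basis matrix
print(B.shape)
print(np.allclose(B.sum(axis=1), 1.0))
```

Any piecewise-cubic curve on this knot grid is then a linear combination `B @ coef`, which is what makes B-splines convenient both for interpolation and for least-squares fitting.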
High-frequency health data and spline functions.
Martín-Rodríguez, Gloria; Murillo-Fort, Carlos
2005-03-30
Seasonal variations are highly relevant for health service organization. In general, short run movements of medical magnitudes are important features for managers in this field to make adequate decisions. Thus, the analysis of the seasonal pattern in high-frequency health data is an appealing task. The aim of this paper is to propose procedures that allow the analysis of the seasonal component in this kind of data by means of spline functions embedded into a structural model. In the proposed method, useful adaptations of the traditional spline formulation are developed, and the resulting procedures are capable of capturing periodic variations, whether deterministic or stochastic, in a parsimonious way. Finally, these methodological tools are applied to a series of daily emergency service demand in order to capture simultaneous seasonal variations in which periods are different.
Spline-based procedures for dose-finding studies with active control
Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim
2015-01-01
In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
Flexible coiled spline securely joins mating cylinders
NASA Technical Reports Server (NTRS)
Coppernol, R. W.
1966-01-01
Mating cylindrical members are joined by spline to form an integral structure. The spline is made of tightly coiled, high tensile-strength steel spiral wire that fits a groove between the mating members. It provides a continuous bearing surface for axial thrust between the members.
Multicategorical Spline Model for Item Response Theory.
ERIC Educational Resources Information Center
Abrahamowicz, Michal; Ramsay, James O.
1992-01-01
A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)
Radial spline assembly for antifriction bearings
NASA Technical Reports Server (NTRS)
Moore, Jerry H. (Inventor)
1993-01-01
An outer race carrier is constructed for receiving an outer race of an antifriction bearing assembly. The carrier in turn is slidably fitted in an opening of a support wall to accommodate slight axial movements of a shaft. A plurality of longitudinal splines on the carrier are disposed to be fitted into matching slots in the opening. A deadband gap is provided between sides of the splines and slots, with a radial gap at ends of the splines and slots and a gap between the splines and slots sized larger than the deadband gap. With this construction, operational distortions (slope) of the support wall are accommodated by the larger radial gaps while the deadband gaps maintain a relatively high spring rate of the housing. Additionally, side loads applied to the shaft are distributed between sides of the splines and slots, distributing such loads over a larger surface area than a race carrier of the prior art.
B-spline Method in Fluid Dynamics
NASA Technical Reports Server (NTRS)
Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)
2001-01-01
B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulation, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
NASA Astrophysics Data System (ADS)
Samhouri, M.; Al-Ghandoor, A.; Fouad, R. H.
2009-08-01
In this study two techniques, for modeling electricity consumption of the Jordanian industrial sector, are presented: (i) multivariate linear regression and (ii) neuro-fuzzy models. Electricity consumption is modeled as a function of different variables such as number of establishments, number of employees, electricity tariff, prevailing fuel prices, production outputs, capacity utilizations, and structural effects. It was found that industrial production and capacity utilization are the most important variables that have significant effect on future electrical power demand. The results showed that both the multivariate linear regression and neuro-fuzzy models are generally comparable and can be used adequately to simulate industrial electricity consumption. However, a comparison based on the square root of the average squared error of the data suggests that the neuro-fuzzy model performs slightly better for future prediction of electricity consumption than the multivariate linear regression model. Such results are in full agreement with similar work, using different methods, for other countries.
Material approximation of data smoothing and spline curves inspired by slime mould.
Jones, Jeff; Adamatzky, Andrew
2014-09-01
The giant single-celled slime mould Physarum polycephalum is known to approximate a number of network problems via growth and adaptation of its protoplasmic transport network and can serve as an inspiration towards unconventional, material-based computation. In Physarum, predictable morphological adaptation is prevented by its adhesion to the underlying substrate. We investigate what possible computations could be achieved if these limitations were removed and the organism was free to completely adapt its morphology in response to changing stimuli. Using a particle model of Physarum displaying emergent morphological adaptation behaviour, we demonstrate how a minimal approach to collective material computation may be used to transform and summarise properties of spatially represented datasets. We find that the virtual material relaxes more strongly to high-frequency changes in data, which can be used for the smoothing (or filtering) of data by approximating moving average and low-pass filters in 1D datasets. The relaxation and minimisation properties of the model enable the spatial computation of B-spline curves (approximating splines) in 2D datasets. Both clamped and unclamped spline curves of open and closed shapes can be represented, and the degree of spline curvature corresponds to the relaxation time of the material. The material computation of spline curves also includes novel quasi-mechanical properties, including unwinding of the shape between control points and a preferential adhesion to longer, straighter paths. Interpolating splines could not directly be approximated due to the formation and evolution of Steiner points at narrow vertices, but were approximated after rectilinear pre-processing of the source data. This pre-processing was further simplified by transforming the original data to contain the material inside the polyline. These exemplary results expand the repertoire of spatially represented unconventional computing devices by demonstrating a
Rational-spline approximation with automatic tension adjustment
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Kerr, P. A.
1984-01-01
An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.
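The two limiting cases of the tension parameter described above can be illustrated directly: zero tension behaves like a cubic spline, while very large tension approaches linear interpolation. The sketch below only evaluates those two limits; it does not implement the rational spline itself or the automatic tension-adjustment algorithm of the paper.

```python
# Limiting behaviour of a tension-controlled spline: cubic spline
# (tension -> 0) versus piecewise-linear interpolation (tension -> inf).
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, 1.0])      # oscillating toy data

cubic = CubicSpline(x, y)               # zero-tension limit
linear = lambda xq: np.interp(xq, x, y) # infinite-tension limit

xq = 1.5
print("cubic: ", float(cubic(xq)))
print("linear:", float(linear(xq)))     # midpoint of the segment: 0.5
```

A rational spline with per-interval tension interpolates between these two behaviours, which is why an automatic adjustment scheme can trade smoothness against overshoot interval by interval.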
NASA Astrophysics Data System (ADS)
Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine
2016-04-01
Scenarios of surface weather required for impact studies have to be unbiased and adapted to the space and time scales of the considered hydro-systems. Hence, surface weather scenarios obtained from global climate models and/or numerical weather prediction models are not really appropriate. Outputs of these models have to be post-processed, which is often carried out using Statistical Downscaling Methods (SDMs). Among those SDMs, approaches based on regression are often applied. For a given station, a regression link can be established between a set of large scale atmospheric predictors and the surface weather variable. These links are then used for the prediction of the latter. However, physical processes generating surface weather vary in time. This is well known for precipitation for instance. The most relevant predictors and the regression link are also likely to vary in time. A better prediction skill is thus classically obtained with a seasonal stratification of the data. Another strategy is to identify the most relevant predictor set and establish the regression link from dates that are similar - or analog - to the target date. In practice, these dates can be selected using an analog model. In this study, we explore the possibility of improving the local performance of an analog model - where the analogy is applied to the geopotential heights 1000 and 500 hPa - using additional local scale predictors for the probabilistic prediction of the Safran precipitation over France. For each prediction day, the prediction is obtained from two GLM regression models - for both the occurrence and the quantity of precipitation - for which predictors and parameters are estimated from the analog dates. Firstly, the resulting combined model noticeably increases the prediction performance by adapting the downscaling link for each prediction day. Secondly, the selected predictors for a given prediction depend on the large scale situation and on the
Oubida, Regis W.; Gantulga, Dashzeveg; Zhang, Man; Zhou, Lecong; Bawa, Rajesh; Holliday, Jason A.
2015-01-01
Local adaptation to climate in temperate forest trees involves the integration of multiple physiological, morphological, and phenological traits. Latitudinal clines are frequently observed for these traits, but environmental constraints also track longitude and altitude. We combined extensive phenotyping of 12 candidate adaptive traits, multivariate regression trees, quantitative genetics, and a genome-wide panel of SNP markers to better understand the interplay among geography, climate, and adaptation to abiotic factors in Populus trichocarpa. Heritabilities were low to moderate (0.13–0.32) and population differentiation for many traits exceeded the 99th percentile of the genome-wide distribution of FST, suggesting local adaptation. When climate variables were taken as predictors and the 12 traits as response variables in a multivariate regression tree analysis, evapotranspiration (Eref) explained the most variation, with subsequent splits related to mean temperature of the warmest month, frost-free period (FFP), and mean annual precipitation (MAP). These groupings matched the splits obtained using geographic variables as predictors relatively well: the northernmost groups (short FFP and low Eref) had the lowest growth, and lowest cold injury index; the southern British Columbia group (low Eref and intermediate temperatures) had average growth and cold injury index; the group from the coast of California and Oregon (high Eref and FFP) had the highest growth performance and the highest cold injury index; and the southernmost, high-altitude group (with high Eref and low FFP) performed poorly, had high cold injury index, and lower water use efficiency. Taken together, these results suggest that variation in both temperature and water availability across the range shapes multivariate adaptive traits in poplar. PMID:25870603
Biomechanical Analysis with Cubic Spline Functions
ERIC Educational Resources Information Center
McLaughlin, Thomas M.; And Others
1977-01-01
Results of experimentation suggest that the cubic spline is a convenient and consistent method for providing an accurate description of displacement-time data and for obtaining the corresponding time derivatives. (MJB)
Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year
NASA Astrophysics Data System (ADS)
Kamaruddin, Halim Shukri; Ismail, Noriszura
2014-06-01
Nonparametric regression uses the data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines are combined with a difference roughness penalty on the coefficients, originally for one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance company calculations, in which the accuracy of models and forecasts is the main concern of the industry. Here the original idea of P-splines is extended to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. This extension constructs the best fitted surface and provides sensible predictions of the underlying mortality rates in the Malaysian case.
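A minimal one-dimensional P-spline sketch in the spirit of Eilers and Marx (1996): a B-spline basis with a second-order difference penalty on the coefficients, fitted by penalized least squares. The paper above extends this idea to two dimensions; the toy data, knot count, and smoothing parameter here are illustrative choices, and `BSpline.design_matrix` requires SciPy 1.8 or later.

```python
# P-spline fit: minimize ||y - B c||^2 + lam * ||D c||^2, where B is a
# B-spline basis matrix and D is a 2nd-order difference matrix.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

k = 3                                            # cubic B-splines
t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 21), [1.0] * k]
B = BSpline.design_matrix(x, t, k).toarray()     # 200 x 23

D = np.diff(np.eye(B.shape[1]), n=2, axis=0)     # difference penalty
lam = 1.0                                        # smoothing parameter
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef

rmse = np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))
print("RMSE vs. true curve:", rmse)
```

The penalty acts on adjacent coefficients rather than on the function itself, which is what lets P-splines use a generous number of knots while controlling roughness through `lam` alone.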
Liu, Shujie; Kawamoto, Taisuke; Morita, Osamu; Yoshinari, Kouichi; Honda, Hiroshi
2017-03-01
Chemical exposure often results in liver hypertrophy in animal tests, characterized by increased liver weight, hepatocellular hypertrophy, and/or cell proliferation. While most of these changes are considered adaptive responses, there is concern that they may be associated with carcinogenesis. In this study, we have employed a toxicogenomic approach using a logistic ridge regression model to identify genes responsible for liver hypertrophy and hypertrophic hepatocarcinogenesis and to develop a predictive model for assessing hypertrophy-inducing compounds. Logistic regression models have previously been used in the quantification of epidemiological risk factors. DNA microarray data from the Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System were used to identify hypertrophy-related genes that are expressed differently in hypertrophy induced by carcinogens and non-carcinogens. Data were collected for 134 chemicals (72 non-hypertrophy-inducing chemicals, 27 hypertrophy-inducing non-carcinogenic chemicals, and 15 hypertrophy-inducing carcinogenic compounds). After applying logistic ridge regression analysis, 35 genes for liver hypertrophy (e.g., Acot1 and Abcc3) and 13 genes for hypertrophic hepatocarcinogenesis (e.g., Asns and Gpx2) were selected. The predictive models built using these genes were 94.8% and 82.7% accurate, respectively. Pathway analysis of the genes indicates that, aside from a xenobiotic metabolism-related pathway as an adaptive response for liver hypertrophy, amino acid biosynthesis and oxidative responses appear to be involved in hypertrophic hepatocarcinogenesis. Early detection and toxicogenomic characterization of liver hypertrophy using our models may be useful for predicting carcinogenesis. In addition, the identified genes provide novel insight into discrimination between adverse hypertrophy associated with carcinogenesis and adaptive hypertrophy in risk assessment.
Guo, Ying; Bowman, F DuBois
2008-04-01
For functional neuroimaging studies that involve experimental stimuli measuring dose levels, e.g. of an anesthetic agent, typical statistical techniques include correlation analysis, analysis of variance or polynomial regression models. These standard approaches have limitations: correlation analysis only provides a crude estimate of the linear relationship between dose levels and brain activity; ANOVA is designed to accommodate a few specified dose levels; polynomial regression models have limited capacity to model varying patterns of association between dose levels and measured activity across the brain. These shortcomings prompt the need to develop methods that more effectively capture dose-dependent neural processing responses. We propose a class of mixed effects spline models that analyze the dose-dependent effect using either regression or smoothing splines. Our method offers flexible accommodation of different response patterns across various brain regions, controls for potential confounding factors, and accounts for subject variability in brain function. The estimates from the mixed effects spline model can be readily incorporated into secondary analyses, for instance, targeting spatial classifications of brain regions according to their modeled response profiles. The proposed spline models are also extended to incorporate interaction effects between the dose-dependent response function and other factors. We illustrate our proposed statistical methodology using data from a PET study of the effect of ethanol on brain function. A simulation study is conducted to compare the performance of the proposed mixed effects spline models and a polynomial regression model. Results show that the proposed spline models more accurately capture varying response patterns across voxels, especially at voxels with complex response shapes. Finally, the proposed spline models can be used in more general settings as a flexible modeling tool for investigating the effects of any
Twelfth degree spline with application to quadrature.
Mohammed, P O; Hamasalh, F K
2016-01-01
In this paper the existence and uniqueness of a twelfth-degree spline are proved, with application to quadrature. This formula is in the class of splines of degree 12 and continuity order [Formula: see text] that matches the derivatives up to order 6 at the knots of a uniform partition. Some mistakes in the literature are pointed out and corrected. Numerical examples are given to illustrate the applicability and efficiency of the new method.
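Spline-based quadrature in general works by fitting a spline to sampled function values and integrating the spline exactly. The sketch below uses an ordinary cubic spline, not the twelfth-degree scheme of the paper, purely to illustrate the principle.

```python
# Spline quadrature sketch: integrate a cubic spline fitted to samples
# of sin(x) on [0, pi]; the exact integral is 2.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, np.pi, 11)
y = np.sin(x)

spl = CubicSpline(x, y)
approx = spl.integrate(0.0, np.pi)
print("approx:", approx)
print("error: ", abs(approx - 2.0))
```

Higher-degree splines that match more derivatives at the knots, as in the paper, reduce this error further for smooth integrands at the cost of a larger linear system.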
NASA Astrophysics Data System (ADS)
Peeters, J.; Arroud, G.; Ribbens, B.; Dirckx, J. J. J.; Steenackers, G.
2015-12-01
In non-destructive evaluation, the use of finite element models to evaluate structural behavior and to optimize the experimental setup can complement the inspector's experience. A new adaptive response surface methodology, especially adapted for thermal problems, is used to update the experimental setup parameters in a finite element model to the state of the test sample measured by pulsed thermography. Polyvinyl chloride (PVC) test samples are used to examine the results for thermal insulator models. A comparison of the achieved results is made by changing the target values from experimental pulsed thermography data to a fixed validation model. Several optimizers are compared and discussed with the focus on speed and accuracy. A more than twentyfold increase in time efficiency and an accuracy of over 99.5% are achieved by the choice of the correct parameter sets and optimizer. Proper parameter set selection criteria are defined and the influence of the choice of the optimization algorithm and parameter set on the accuracy and convergence time are investigated.
Yoshizawa, Masato; O'Quin, Kelly E; Jeffery, William R
2013-07-11
Vibration attraction behavior (VAB) is the swimming of fish toward an oscillating object, a behavior that is likely adaptive because it increases foraging efficiency in darkness. VAB is seen in a small proportion of Astyanax surface-dwelling populations (surface fish) but is pronounced in cave-dwelling populations (cavefish). In a recent study, we identified two quantitative trait loci for VAB on Astyanax linkage groups 2 and 17. We also demonstrated that a small population of superficial neuromast sensors located within the eye orbit (EO SN) facilitate VAB, and two quantitative trait loci (QTL) were identified for EO SN that were congruent with those for VAB. Finally, we showed that both VAB and EO SN are negatively correlated with eye size, and that two (of several) QTL for eye size overlap VAB and EO SN QTLs. From these results, we concluded that the adaptive evolution of VAB and EO SN has contributed to the indirect loss of eyes in cavefish, either as a result of pleiotropy or tight physical linkage of the mutations underlying these traits. In a subsequent commentary, Borowsky argues that there is poor experimental support for our conclusions. Specifically, Borowsky states that: (1) linkage groups (LGs) 2 and 17 harbor QTL for many traits and, therefore, no evidence exists for an exclusive interaction among the overlapping VAB, EO SN and eye size QTL; (2) some of the QTL we identified are too broad (>20 cM) to support the hypothesis of correlated evolution due to pleiotropy or hitchhiking; and (3) VAB is unnecessary to explain the indirect evolution of eye-loss since the negative polarity of numerous eye QTL is consistent with direct selection against eyes. Borowsky further argues that (4) it is difficult to envision an evolutionary scenario whereby VAB and EO SN drive eye loss, since the eyes must first be reduced in order to increase the number of EO SN and, therefore, VAB. In this response, we explain why the evidence of one trait influencing eye reduction
Serag, Ahmed; Aljabar, Paul; Ball, Gareth; Counsell, Serena J; Boardman, James P; Rutherford, Mary A; Edwards, A David; Hajnal, Joseph V; Rueckert, Daniel
2012-02-01
Medical imaging has shown that, during early development, the brain undergoes more changes in size, shape and appearance than at any other time in life. A better understanding of brain development requires a spatio-temporal atlas that characterizes the dynamic changes during this period. In this paper we present an approach for constructing a 4D atlas of the developing brain, between 28 and 44 weeks post-menstrual age at time of scan, using T1 and T2 weighted MR images from 204 premature neonates. The method used for the creation of the average 4D atlas utilizes non-rigid registration between all pairs of images to eliminate bias in the atlas toward any of the original images. In addition, kernel regression is used to produce age-dependent anatomical templates. A novelty in our approach is the use of a time-varying kernel width, to overcome the variations in the distribution of subjects at different ages. This leads to an atlas that retains a consistent level of detail at every time-point. Comparisons between the resulting atlas and atlases constructed using affine and non-rigid registration are presented. The resulting 4D atlas has greater anatomic definition than currently available 4D atlases created using various affine and non-rigid registration approaches, an important factor in improving registrations between the atlas and individual subjects. Also, the resulting 4D atlas can serve as a good representative of the population of interest as it reflects both global and local changes. The atlas is publicly available at www.brain-development.org.
A Bayesian-optimized spline representation of the electrocardiogram.
Guilak, F G; McNames, J
2013-11-01
We introduce an implementation of a novel spline framework for parametrically representing electrocardiogram (ECG) waveforms. This implementation enables a flexible means to study ECG structure in large databases. Our algorithm allows researchers to identify key points in the waveform and optimally locate them in long-term recordings with minimal manual effort, thereby permitting analysis of trends in the points themselves or in metrics derived from their locations. In the work described here we estimate the location of a number of commonly-used characteristic points of the ECG signal, defined as the onsets, peaks, and offsets of the P, QRS, T, and R' waves. The algorithm applies Bayesian optimization to a linear spline representation of the ECG waveform. The location of the knots, which are the endpoints of the piecewise linear segments used in the spline representation of the signal, serves as the estimate of the waveform's characteristic points. We obtained prior information of knot times, amplitudes, and curvature from a large manually-annotated training dataset and used the priors to optimize a Bayesian figure of merit based on estimated knot locations. In cases where morphologies vary or are subject to noise, the algorithm relies more heavily on the estimated priors for its estimate of knot locations. We compared optimized knot locations from our algorithm to two sets of manual annotations on a prospective test data set comprising 200 beats from 20 subjects not in the training set. Mean errors of characteristic point locations were less than four milliseconds, and standard deviations of errors compared favorably against reference values. This framework can easily be adapted to include additional points of interest in the ECG signal or for other biomedical detection problems on quasi-periodic signals.
Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia
2015-01-01
We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show different convergence rate of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work even in the independent data case. PMID:26283801
On the B-splines effective completeness
NASA Astrophysics Data System (ADS)
Argenti, Luca; Colle, Renato
2009-09-01
Effective completeness of B-splines, defined as the capability of approaching completeness without compromising the positive definite character of the corresponding superposition matrix, is investigated. A general result on the limiting spectrum of B-spline superposition matrices has been obtained for a large class of knot grids. The result has been tested on finite-dimensional cases using both constant and random knot spacings (uniform distribution in [0,1]). The eigenvalue distribution for random spacings is found not to exhibit any large deviation from that for constant spacings. As an example of a system that takes great advantage of a non-uniform knot grid, we have computed a few hundred hydrogen Rydberg states, obtaining accuracy comparable to machine accuracy. These results give solid ground to the recognized efficiency and accuracy of B-spline sets used in atomic physics calculations.
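A small sketch of the quantity studied in this abstract: the B-spline superposition (Gram) matrix S_ij = ∫ B_i(x) B_j(x) dx on [0, 1] for a clamped cubic basis, checked for positive definiteness. The knot grid below is an arbitrary illustrative choice, not the paper's setup.

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import BSpline

k = 3                                         # cubic B-splines
breakpoints = np.linspace(0.0, 1.0, 8)        # uniform interior grid
t = np.r_[[0.0] * k, breakpoints, [1.0] * k]  # clamped knot vector
n = len(t) - k - 1                            # number of basis functions

def basis(i):
    # i-th B-spline basis function on the knot vector t.
    coeffs = np.zeros(n)
    coeffs[i] = 1.0
    return BSpline(t, coeffs, k, extrapolate=False)

S = np.zeros((n, n))
for i in range(n):
    bi = basis(i)
    for j in range(i, n):
        bj = basis(j)
        S[i, j] = quad(lambda x: bi(x) * bj(x), 0.0, 1.0, limit=200)[0]
        S[j, i] = S[i, j]                     # Gram matrix is symmetric

eigenvalues = np.linalg.eigvalsh(S)           # all positive if basis independent
```

With linearly independent basis functions the smallest eigenvalue stays positive; the paper studies how this behaves as the basis approaches completeness.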
A smoothing algorithm using cubic spline functions
NASA Technical Reports Server (NTRS)
Smith, R. E., Jr.; Price, J. M.; Howser, L. M.
1974-01-01
Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
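A sketch of the "explicit variable" case described above: a least-squares cubic spline fit to noisy single-valued data, with interior knots chosen by hand (the report manipulates these junctions interactively). The data below are synthetic.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2 * np.pi, 100)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)     # noisy samples

interior_knots = np.linspace(0.5, 2 * np.pi - 0.5, 6) # hand-picked junctions
fit = LSQUnivariateSpline(x, y, interior_knots, k=3)  # least-squares cubic fit
max_error = np.max(np.abs(fit(x) - np.sin(x)))
```

The cubic spline gives continuous first and second derivatives, which is the advantage the report cites over simple polynomial smoothing.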
Schwarz and multilevel methods for quadratic spline collocation
Christara, C.C.; Smith, B.
1994-12-31
Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic partial differential equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for certain spline degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.
Spline-Screw Multiple-Rotation Mechanism
NASA Technical Reports Server (NTRS)
Vranish, John M.
1994-01-01
Mechanism functions like combined robotic gripper and nut runner. Spline-screw multiple-rotation mechanism related to spline-screw payload-fastening system described in (GSC-13454). Incorporated as subsystem in alternative version of system. Mechanism functions like combination of robotic gripper and nut runner; provides both secure grip and rotary actuation of other parts of system. Used in system in which no need to make or break electrical connections to payload during robotic installation or removal of payload. More complicated version needed to make and break electrical connections. Mechanism mounted in payload.
General spline filters for discontinuous Galerkin solutions
Peters, Jörg
2015-01-01
The discontinuous Galerkin (dG) method outputs a sequence of polynomial pieces. Post-processing the sequence by Smoothness-Increasing Accuracy-Conserving (SIAC) convolution not only increases the smoothness of the sequence but can also improve its accuracy and yield superconvergence. SIAC convolution is considered optimal if the SIAC kernels, in the form of a linear combination of B-splines of degree d, reproduce polynomials of degree 2d. This paper derives simple formulas for computing the optimal SIAC spline coefficients for the general case including non-uniform knots. PMID:26594090
Procedure for converting a Wilson-Fowler spline to a cubic B-spline with double knots
Fritsch, F.N.
1987-10-14
The Wilson-Fowler spline (WF-spline) has been used by the DOE Weapons Complex for over twenty years to represent point-defined smooth curves. Most modern CADCAM systems use parametric B-spline curves (or, more recently, rational B-splines) for this same purpose. The WF-spline is a parametric piecewise cubic curve. It has been shown that a WF-spline can be reparametrized so that its components are C¹ piecewise cubic functions (functions that are cubic polynomials on each parameter interval, joined so that the function and first derivative are continuous). The purpose of these notes is to show explicitly how to convert a given WF-spline to a cubic B-spline with double knots. 7 refs.
Multivariate Epi-splines and Evolving Function Identification Problems
2015-04-15
Johannes O. Royset; Roger J-B Wets (Operations Research Department). ... fitting, and estimation. The paper develops piecewise polynomial functions, called epi-splines, that approximate any lsc function to an arbitrary level of accuracy. Epi-splines provide the foundation for the solution of a rich class of function identification problems that incorporate general
Geha, Makram J.; Keown, Jeffrey F.; Van Vleck, L. Dale
2011-01-01
Milk yield records (305d, 2X, actual milk yield) of 123,639 registered first lactation Holstein cows were used to compare linear regression (y = β₀ + β₁X + e), quadratic regression (y = β₀ + β₁X + β₂X² + e), cubic regression (y = β₀ + β₁X + β₂X² + β₃X³ + e) and fixed factor models, with cubic-spline interpolation models, for estimating the effects of inbreeding on milk yield. Ten animal models, all with herd-year-season of calving as fixed effect, were compared using the Akaike corrected-Information Criterion (AICc). The cubic-spline interpolation model with seven knots had the lowest AICc, whereas for all those labeled as "traditional", AICc was higher than the best model. Results from fitting inbreeding using a cubic-spline with seven knots were compared to results from fitting inbreeding as a linear covariate or as a fixed factor with seven levels. Estimates of inbreeding effects were not significantly different between the cubic-spline model and the fixed factor model, but were significantly different from the linear regression model. Milk yield decreased significantly at inbreeding levels greater than 9%. Variance component estimates were similar for the three models. Ranking of the top 100 sires with daughter records remained unaffected by the model used. PMID:21931517
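A sketch of the model-comparison step: fit linear, quadratic, and cubic regressions (the abstract's y = β₀ + β₁X + ... + e models) to data and score them with the corrected Akaike criterion (AICc). The data-generating model below is invented for illustration; the study's actual comparison also includes spline and fixed-factor models.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 10.0, 300)
y = 2.0 + 0.5 * x - 0.08 * x**2 + rng.normal(0.0, 0.5, x.size)  # quadratic truth

def aicc(x, y, degree):
    # AICc for an ordinary least-squares polynomial fit of the given degree.
    n = y.size
    k = degree + 2                    # polynomial coefficients + error variance
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    rss = float(resid @ resid)
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

scores = {d: aicc(x, y, d) for d in (1, 2, 3)}   # lower AICc is better
best_degree = min(scores, key=scores.get)
```

On data with genuine curvature, the linear model is penalized by its large residual sum of squares, while the AICc correction discourages needlessly high degrees.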
Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias
2015-01-01
The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure; i.e., the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows one to incorporate prior knowledge without the need to construct specialized bases and penalties, letting the researcher choose the spline basis and penalty according to prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared on an artificial example and on analyses of fixation durations during reading.
Shaft Coupler With Friction and Spline Clutches
NASA Technical Reports Server (NTRS)
Thebert, Glenn W.
1987-01-01
Coupling, developed for rotor of lift/cruise aircraft, employs two clutches for smooth transmission of power from gas-turbine engine to rotor. Prior to ascent, coupling applies friction-type transition clutch that accelerates rotor shaft to speeds matching those of engine shaft. Once shafts synchronized, spline coupling engaged and friction clutch released to provide positive mechanical drive.
Single authentication: exposing weighted splining artifacts
NASA Astrophysics Data System (ADS)
Ciptasari, Rimba W.
2016-05-01
A common form of manipulation is to combine parts of one image fragment with another, different image, either to remove or to blend objects. Inspired by this situation, we propose a single authentication technique for detecting traces of the weighted-average splining technique. In this paper, we assume that an image composite could be created by joining two images so that the edge between them is imperceptible. The weighted-average technique is constructed from overlapped images, making it possible to compute the gray-level value of points within a transition zone. This approach works on the assumption that although the splining process leaves the transition zone looking smooth, it may nevertheless alter the underlying statistics of the image; in other words, it introduces specific correlations into the image. The proposed idea for identifying these correlations is to generate an original model of both weighting functions, left and right, as references for their synthetic models. The overall authentication process is divided into two main stages: pixel predictive coding and weighting-function estimation. In the former stage, the set of intensity pairs {Il, Ir} is computed by exploiting a pixel extrapolation technique. The least-squares estimation method is then employed to yield the weighting coefficients. We show the efficacy of the proposed scheme in revealing splining artifacts. We believe that this is the first work that exposes the image splining artifact as evidence of digital tampering.
Spline smoothing of histograms by linear programming
NASA Technical Reports Server (NTRS)
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
An algorithm for surface smoothing with rational splines
NASA Technical Reports Server (NTRS)
Schiess, James R.
1987-01-01
Discussed is an algorithm for smoothing surfaces with spline functions containing tension parameters. The bivariate spline functions used are tensor products of univariate rational-spline functions. A distinct tension parameter corresponds to each rectangular strip defined by a pair of consecutive spline knots along either axis. Equations are derived for writing the bivariate rational spline in terms of functions and derivatives at the knots. Estimates of these values are obtained via weighted least squares subject to continuity constraints at the knots. The algorithm is illustrated on a set of terrain elevation data.
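A sketch of surface smoothing on gridded data. scipy's smoothing bicubic tensor-product spline stands in for the report's rational splines with tension; per-strip tension parameters are not available in this class, so only the tensor-product least-squares idea is illustrated, on synthetic "terrain".

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 30)
y = np.linspace(0.0, 1.0, 30)
X, Y = np.meshgrid(x, y, indexing="ij")
truth = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # synthetic elevations
noisy = truth + 0.05 * rng.standard_normal(truth.shape)

# Smoothing factor s ≈ (number of points) * (noise variance).
surface = RectBivariateSpline(x, y, noisy, kx=3, ky=3, s=30 * 30 * 0.05**2)
fitted = surface(x, y)                                  # smoothed surface grid
rms_error = np.sqrt(np.mean((fitted - truth) ** 2))
```

The bicubic tensor product keeps value and derivative continuity across the knot grid, which is what makes the smoothed surface usable for terrain work.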
Refinable C¹ spline elements for irregular quad layout.
Nguyen, Thien; Peters, Jörg
2016-03-01
Building on a result of U. Reif on removable singularities, we construct C¹ bi-3 splines that may include irregular points where fewer or more than four tensor-product patches meet. The resulting space complements PHT splines, is refinable, and the refined spaces are nested, preserving, for example, surfaces constructed from the splines. As in the regular case, each quadrilateral has four degrees of freedom, each associated with one spline, and the splines are linearly independent. Examples of use for surface construction and isogeometric analysis are provided.
NASA Astrophysics Data System (ADS)
Drzewiecki, Wojciech
2016-12-01
In this work, nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results showed that for sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, the obtained results suggest the Cubist algorithm for Landsat-based mapping of imperviousness at single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of the individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time-point assessments. For imperviousness change assessment, the ensembles always outperformed single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
Pierson, Jeffery L; Small, Scott R; Rodriguez, Jose A; Kang, Michael N; Glassman, Andrew H
2015-07-01
Design parameters affecting the initial mechanical stability of tapered, splined modular titanium stems (TSMTSs) are not well understood. Furthermore, there is considerable variability in contemporary designs. We asked whether spline geometry and stem taper angle could be optimized in TSMTSs to improve mechanical stability, resisting axial subsidence and increasing torsional stability. Initial stability was quantified with stems of varied taper angle and spline geometry implanted in a foam model replicating 2 cm of diaphyseal engagement. An increased taper angle combined with a broad spline geometry exhibited significantly greater axial stability (+21% to +269%) than other design combinations. Neither taper angle nor spline geometry significantly altered initial torsional stability.
Interaction Models for Functional Regression
USSET, JOSEPH; STAICU, ANA-MARIA; MAITY, ARNAB
2015-01-01
A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data. PMID:26744549
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model intended only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, the variance explained and the complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models.
Spline-Screw Payload-Fastening System
NASA Technical Reports Server (NTRS)
Vranish, John M.
1994-01-01
Payload handed off securely between robot and vehicle or structure. Spline-screw payload-fastening system includes mating female and male connector mechanisms. Clockwise (or counter-clockwise) rotation of splined male driver on robotic end effector causes connection between robot and payload to tighten (or loosen) and simultaneously causes connection between payload and structure to loosen (or tighten). Includes mechanisms like those described in "Tool-Changing Mechanism for Robot" (GSC-13435) and "Self-Aligning Mechanical and Electrical Coupling" (GSC-13430). Designed for use in outer space, also useful on Earth in applications needed for secure handling and secure mounting of equipment modules during storage, transport, and/or operation. Particularly useful in machine or robotic applications.
Representing flexible endoscope shapes with hermite splines
NASA Astrophysics Data System (ADS)
Chen, Elvis C. S.; Fowler, Sharyle A.; Hookey, Lawrence C.; Ellis, Randy E.
2010-02-01
Navigation of a flexible endoscope is a challenging surgical task: the shape of the endoscope's end effector, interacting with surrounding tissues, determines the surgical path along which the endoscope is pushed. We present a navigational system that visualizes the shape of the flexible endoscope tube to assist gastrointestinal surgeons in performing Natural Orifice Translumenal Endoscopic Surgery (NOTES). The system used an electromagnetic positional tracker, a catheter embedded with multiple electromagnetic sensors, and a graphical user interface for visualization. Hermite splines were used to interpolate the position and direction outputs of the endoscope sensors. We conducted NOTES experiments on live swine involving 6 gastrointestinal and 6 general surgeons. Participants who used the device first were 14.2% faster than when not using the device. Participants who used the device second were 33.6% faster than in their first session. The trend suggests that spline-based visualization is a promising adjunct during NOTES procedures.
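A sketch of the interpolation idea: a Hermite spline matches both the position and the direction (tangent) reported at each sensor, so a smooth shape can be drawn through sparse readings. The four "sensor" readings below are synthetic, not catheter data.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

s = np.array([0.0, 1.0, 2.0, 3.0])    # arc length at each sensor
position = np.sin(s)                  # one coordinate of the sensor position
tangent = np.cos(s)                   # measured direction at each sensor

# Hermite spline: interpolates both positions and tangents at the sensors.
curve = CubicHermiteSpline(s, position, tangent)
fine = np.linspace(0.0, 3.0, 50)
shape = curve(fine)                   # reconstructed shape between sensors
max_error = np.max(np.abs(shape - np.sin(fine)))
```

Matching tangents is what distinguishes this from ordinary cubic interpolation: the rendered tube direction agrees with the sensor direction at every sensor.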
Spline-based semiparametric projected generalized estimating equation method for panel count data.
Hua, Lei; Zhang, Ying
2012-07-01
We propose to analyze panel count data using a spline-based semiparametric projected generalized estimating equation (GEE) method with the proportional mean model E{N(t)|Z} = Λ₀(t)exp(β₀ᵀZ). The natural logarithm of the baseline mean function, log Λ₀(t), is approximated by a monotone cubic B-spline function. The estimates of regression parameters and spline coefficients are obtained by projecting the GEE estimates into the feasible domain using weighted isotonic regression (IR). The proposed method avoids assuming any parametric structure for the baseline mean function or any stochastic model for the underlying counting process. Selection of a working covariance matrix that accounts for overdispersion improves the estimation efficiency and leads to less biased variance estimates. Simulation studies are conducted using different working covariance matrices in the GEE to investigate the finite-sample performance of the proposed method, to compare estimation efficiency, and to explore the performance of different variance estimates in the presence of overdispersion. Finally, the proposed method is applied to a real data set from a bladder tumor clinical trial.
Curvilinear bicubic spline fit interpolation scheme
NASA Technical Reports Server (NTRS)
Chi, C.
1973-01-01
Modification of the rectangular bicubic spline fit interpolation scheme so as to make it suitable for use with a polar grid pattern. In the proposed modified scheme the interpolation function is expressed in terms of the radial length and the arc length, and the shape of the patch, which is a wedge or a truncated wedge, is taken into account implicitly. Examples are presented in which the proposed interpolation scheme was used to reproduce the equations of a hemisphere.
Fatigue crack growth monitoring of idealized gearbox spline component using acoustic emission
NASA Astrophysics Data System (ADS)
Zhang, Lu; Ozevin, Didem; Hardman, William; Kessler, Seth; Timmons, Alan
2016-04-01
The spline component of a gearbox structure is a non-redundant element that requires early detection of flaws to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting active flaws; however, the method suffers from the influence of background noise and from location- and sensor-based pattern recognition methods. It is important to identify the source mechanism and adapt it to different test conditions and sensors. In this paper, the fatigue crack growth of a notched and flattened gearbox spline component is monitored using the AE method in a laboratory environment. The test sample has the major details of the spline component on a flattened geometry. The AE data are collected continuously, together with strain gauges strategically positioned on the structure. The fatigue test characteristics are a frequency of 4 Hz and a ratio of minimum to maximum loading of 0.1 in the tensile regime. It is observed that a significant amount of continuous emission is released from the notch tip due to the formation of plastic deformation and slow crack growth. The frequency spectra of continuous emissions and burst emissions are compared to understand the difference between sudden and gradual crack growth. The predicted crack growth rate is compared with the AE data using the cumulative AE events at the notch tip. The source mechanism of sudden crack growth is obtained by solving the inverse mathematical problem from output signal to input signal.
Classifier calibration using splined empirical probabilities in clinical risk prediction.
Gaudoin, René; Montana, Giovanni; Jones, Simon; Aylin, Paul; Bottle, Alex
2015-06-01
The aims of supervised machine learning (ML) applications fall into three broad categories: classification, ranking, and calibration/probability estimation. Many ML methods and evaluation techniques relate to the first two. Nevertheless, there are many applications where having an accurate probability estimate is of great importance. Deriving accurate probabilities from the output of a ML method is therefore an active area of research, resulting in several methods to turn a ranking into class probability estimates. In this manuscript we present a method, splined empirical probabilities, based on the receiver operating characteristic (ROC) to complement existing algorithms such as isotonic regression. Unlike most other methods it works with a cumulative quantity, the ROC curve, and as such can be tagged onto an ROC analysis with minor effort. On a diverse set of measures of the quality of probability estimates (Hosmer-Lemeshow, Kullback-Leibler divergence, differences in the cumulative distribution function) using simulated and real health care data, our approach compares favourably with the standard calibration method, the pool adjacent violators algorithm used to perform isotonic regression.
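A sketch of the standard baseline the abstract compares against: isotonic regression (the pool adjacent violators algorithm) turning raw classifier scores into monotone probability estimates. The scores and labels below are synthetic; the splined-empirical-probabilities method itself is the paper's contribution and is not reproduced here.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(5)
labels = rng.integers(0, 2, 1000)                     # true binary outcomes
scores = labels * 0.3 + rng.uniform(0.0, 0.7, 1000)   # miscalibrated scores

# Pool adjacent violators: fit a monotone map from score to probability.
iso = IsotonicRegression(out_of_bounds="clip")
probs = iso.fit_transform(scores, labels)             # calibrated estimates
```

The fitted map is nondecreasing in the score, so ranking is preserved while the outputs become usable as probabilities.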
Local Refinement of Analysis-Suitable T-splines
2011-03-01
important alternative to traditional engineering design and analysis methodologies. Isogeometric analysis was introduced in [1] and later described in ... surfaces, although the concepts generalize to arbitrary odd degree. T-splines of arbitrary degree are discussed in [42, 50]. 2. T-spline fundamentals. The fundamental object of interest underlying T-spline technology is the T-mesh, denoted by T. For surfaces, the T-mesh is a mesh of polygonal
On the spline-based wavelet differentiation matrix
NASA Technical Reports Server (NTRS)
Jameson, Leland
1993-01-01
The differentiation matrix for a spline-based wavelet basis is constructed. Given an nth-order spline basis, it is proved that the differentiation matrix is accurate of order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.
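A sketch of spline-based differentiation with periodic boundary conditions: interpolate samples with a cubic spline and evaluate the spline's analytic first derivative, which is what applying a spline differentiation matrix to the sample vector amounts to. The test function is an assumption for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 2 * np.pi, 65)
f = np.sin(x)
f[-1] = f[0]                                  # enforce exact periodicity
spline = CubicSpline(x, f, bc_type="periodic")
dfdx = spline(x, 1)                           # first derivative at the nodes
max_error = np.max(np.abs(dfdx - np.cos(x)))
```

With periodic boundary conditions the derivative is markedly more accurate than a naive centered difference on the same grid, mirroring the superconvergence result in the abstract.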
Density Estimation of Simulation Output Using Exponential EPI-Splines
2013-12-01
a_{k+1,1}, k = 1, 2, ..., N − 1. Pointwise Fisher information: we define the pointwise Fisher information of an exponential epi-spline density h at x to ... are required to obtain meaningful results. All exponential epi-splines are computed under the assumptions of continuity, smoothness, and pointwise Fisher ... Kernel 0.4310 0.3536. In the exponential epi-spline estimates, we include continuity, differentiability, and pointwise Fisher information constraints with
Higher-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1976-01-01
A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower-derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
Logistic regression is originally intended to explain the relationship between the probability of an event and a set of covariables. The model's coefficients can be interpreted via the odds and the odds ratio, which are presented in the introduction of the chapter. When the observations are obtained individually, we speak of binary logistic regression; when they are grouped, the logistic regression is said to be binomial. In our presentation we mainly focus on the binary case. For statistical inference the main tool is the maximum likelihood methodology: we present the Wald, Rao and likelihood-ratio results and their use to compare nested models. The problems we intend to deal with are essentially the same as in multiple linear regression: testing a global effect, testing individual effects, selecting variables to build a model, measuring the fitness of the model, predicting new values, and so on. The methods are demonstrated on data sets using R. Finally, we briefly consider the binomial case and the situation where we are interested in several events, that is, polytomous (multinomial) logistic regression and the particular case of ordinal logistic regression.
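A sketch of the odds-ratio interpretation discussed above: in binary logistic regression, exp(coefficient) is the odds ratio for a one-unit increase in a covariate. The chapter uses R; the Python fit below, with an invented covariate and invented true parameters, shows the same interpretation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 5000
age = rng.normal(50.0, 10.0, n)               # hypothetical covariate
true_intercept, true_slope = -5.0, 0.1        # invented true parameters
p = 1.0 / (1.0 + np.exp(-(true_intercept + true_slope * age)))
event = rng.binomial(1, p)                    # simulated binary outcomes

# Large C makes the sklearn fit approximately unpenalized maximum likelihood.
model = LogisticRegression(C=1e6, max_iter=1000).fit(age.reshape(-1, 1), event)
odds_ratio = float(np.exp(model.coef_[0, 0])) # odds ratio per extra unit of age
```

With a true slope of 0.1, the fitted odds ratio should land near exp(0.1) ≈ 1.11: each extra unit of the covariate multiplies the odds of the event by about that factor.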
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
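The sampling idea can be sketched with SciPy's `CubicSpline` (a stand-in for the paper's B-spline and rational-spline formulations): fit a cubic spline to the empirical quantile function of a sample, then push uniform variates through it.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
sample = np.sort(rng.normal(size=500))

# Empirical quantile function: probability levels -> order statistics.
p = (np.arange(1, sample.size + 1) - 0.5) / sample.size
Q = CubicSpline(p, sample)  # spline representation of the quantile function

# Inverse-transform sampling: uniforms through the fitted quantile spline.
u = rng.uniform(0.01, 0.99, size=1000)
draws = Q(u)
```

The generated draws inherit the distribution of the original sample, here approximately standard normal.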
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Color management with a hammer: the B-spline fitter
NASA Astrophysics Data System (ADS)
Bell, Ian E.; Liu, Bonny H. P.
2003-01-01
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
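The noise-filtering use of a spline fitter can be sketched in one dimension (a simplified, hypothetical stand-in for the authors' 3D color fitter) with SciPy's smoothing spline, where the smoothing factor trades fidelity for guaranteed smoothness:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * x)                       # underlying device response
noisy = clean + rng.normal(scale=0.1, size=x.size)  # measurements with noise

# Smoothing spline: s bounds the residual sum of squares, so the fitter
# acts as a noise filter rather than interpolating every measurement.
spline = UnivariateSpline(x, noisy, s=x.size * 0.1 ** 2)
smooth = spline(x)
```

The smoothed curve is closer to the underlying signal than the raw measurements, which is the filtering property the paper exploits for color data.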
Multiresolution Analysis of UTAT B-spline Curves
NASA Astrophysics Data System (ADS)
Lamnii, A.; Mraoui, H.; Sbibih, D.; Zidna, A.
2011-09-01
In this paper, we describe a multiresolution curve representation based on periodic uniform tension algebraic trigonometric (UTAT) spline wavelets of class ??? and order four. Then we determine the decomposition and the reconstruction vectors corresponding to UTAT-spline spaces. Finally, we give some applications in order to illustrate the efficiency of the proposed approach.
Convexity preserving C2 rational quadratic trigonometric spline
NASA Astrophysics Data System (ADS)
Dube, Mridula; Tiwari, Preeti
2012-09-01
A C2 rational quadratic trigonometric spline interpolation has been studied using two kinds of rational quadratic trigonometric splines. It is shown that under some natural conditions the solution of the problem exists and is unique. The necessary and sufficient conditions that constrain the interpolation curves to be convex in the interpolating interval or subinterval are derived.
Adaptive mesh refinement strategies in isogeometric analysis— A computational comparison
NASA Astrophysics Data System (ADS)
Hennig, Paul; Kästner, Markus; Morgenstern, Philipp; Peterseim, Daniel
2017-04-01
We explain four variants of an adaptive finite element method with cubic splines and compare their performance in simple elliptic model problems. The methods in comparison are Truncated Hierarchical B-splines with two different refinement strategies, T-splines with the refinement strategy introduced by Scott et al. in 2012, and T-splines with an alternative refinement strategy introduced by some of the authors. In four examples, including singular and non-singular problems of linear elasticity and the Poisson problem, the H1-errors of the discrete solutions, the number of degrees of freedom as well as sparsity patterns and condition numbers of the discretized problem are compared.
Chen, Huaihou; Wang, Yuanjia; Paik, Myunghee Cho; Choi, H Alex
2013-10-01
Multilevel functional data is collected in many biomedical studies. For example, in a study of the effect of Nimodipine on patients with subarachnoid hemorrhage (SAH), patients underwent multiple 4-hour treatment cycles. Within each treatment cycle, subjects' vital signs were reported every 10 minutes. This data has a natural multilevel structure with treatment cycles nested within subjects and measurements nested within cycles. Most literature on nonparametric analysis of such multilevel functional data focuses on conditional approaches using functional mixed effects models. However, parameters obtained from the conditional models do not have direct interpretations as population average effects. When population effects are of interest, we may employ marginal regression models. In this work, we propose marginal approaches to fit multilevel functional data through penalized spline generalized estimating equation (penalized spline GEE). The procedure is effective for modeling multilevel correlated generalized outcomes as well as continuous outcomes without suffering from numerical difficulties. We provide a variance estimator robust to misspecification of correlation structure. We investigate the large sample properties of the penalized spline GEE estimator with multilevel continuous data and show that the asymptotics falls into two categories. In the small knots scenario, the estimated mean function is asymptotically efficient when the true correlation function is used and the asymptotic bias does not depend on the working correlation matrix. In the large knots scenario, both the asymptotic bias and variance depend on the working correlation. We propose a new method to select the smoothing parameter for penalized spline GEE based on an estimate of the asymptotic mean squared error (MSE). We conduct extensive simulation studies to examine the properties of the proposed estimator under different correlation structures and sensitivity of the variance estimation to the choice
Reich, Brian J.; Storlie, Curtis B.; Bondell, Howard D.
2009-01-01
With many predictors, choosing an appropriate subset of the covariates is a crucial, and difficult, step in nonparametric regression. We propose a Bayesian nonparametric regression model for curve-fitting and variable selection. We use the smoothing spline ANOVA framework to decompose the regression function into interpretable main effect and interaction functions. Stochastic search variable selection via MCMC sampling is used to search for models that fit the data well. Also, we show that variable selection is highly sensitive to hyperparameter choice and develop a technique to select hyperparameters that control the long-run false positive rate. The method is used to build an emulator for a complex computer model for two-phase fluid flow. PMID:19789732
[Baseline Correction Algorithm for Raman Spectroscopy Based on Non-Uniform B-Spline].
Fan, Xian-guang; Wang, Hai-tao; Wang, Xin; Xu, Ying-jie; Wang, Xiu-fen; Que, Jing
2016-03-01
As one of the necessary steps for data processing of Raman spectroscopy, baseline correction is commonly used to eliminate the interference of fluorescence spectra. The traditional baseline correction algorithm based on polynomial fitting is simple and easy to implement, but its flexibility is poor due to the uncertain fitting order. In this paper, instead of using polynomial fitting, a non-uniform B-spline is proposed to overcome the shortcomings of the traditional method. Building on the advantages of the traditional algorithm, the node vector of the non-uniform B-spline is fixed adaptively using the peak positions of the original Raman spectrum, and then the baseline is fitted with a fixed order. To verify this algorithm, the Raman spectra of parathion-methyl and colza oil are detected, their baselines are corrected using this algorithm, and the results are compared with two other baseline correction algorithms. The experimental results show that baseline correction is improved by using this algorithm with a fixed fitting order and fewer parameters, and there is no over- or under-fitting. Therefore, the non-uniform B-spline is shown to be an effective baseline correction algorithm for Raman spectroscopy.
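The approach can be sketched on synthetic data (the knot positions below are fixed by hand for illustration, whereas the paper derives them adaptively from detected peaks): fit a least-squares spline to the non-peak region of the spectrum and subtract it as the baseline.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Synthetic "spectrum": two Raman peaks riding on a slow fluorescence background.
x = np.linspace(0, 100, 1000)
baseline = 0.02 * x + 5 * np.exp(-x / 60)
peaks = (3 * np.exp(-0.5 * ((x - 30) / 1.5) ** 2)
         + 2 * np.exp(-0.5 * ((x - 70) / 1.5) ** 2))
spectrum = baseline + peaks

# Fit the spline only where no peak is present, with interior knots
# placed away from the peak positions (hand-picked here).
mask = peaks < 0.1
knots = [10, 20, 45, 55, 85]
fit = LSQUnivariateSpline(x[mask], spectrum[mask], knots)(x)

corrected = spectrum - fit   # baseline-corrected spectrum
```

After subtraction the slow background is gone while the peak heights survive essentially unchanged.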
Spline-locking screw fastening strategy
NASA Technical Reports Server (NTRS)
Vranish, John M.
1992-01-01
A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotics or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced space manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.
Spline-Locking Screw Fastening Strategy (SLSFS)
NASA Technical Reports Server (NTRS)
Vranish, John M.
1991-01-01
A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotic or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Fitting multidimensional splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
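The backward-elimination idea for knot selection can be sketched as follows (a simplified stand-in for the report's FORTRAN programs; the stopping rule here is an ad hoc factor-of-two test, not the statistical criterion of the report): repeatedly drop the interior knot whose removal increases the residual sum of squares the least.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 300)
y = np.sin(3 * np.pi * x) + rng.normal(scale=0.05, size=x.size)

def rss(knots):
    """Residual sum of squares of the least-squares cubic B-spline fit."""
    s = LSQUnivariateSpline(x, y, knots)
    return float(np.sum((s(x) - y) ** 2))

# Start with a generous set of interior knots, then eliminate backwards.
knots = list(np.linspace(0.05, 0.95, 19))
while len(knots) > 4:
    errs = [rss(knots[:i] + knots[i + 1:]) for i in range(len(knots))]
    best = int(np.argmin(errs))
    if errs[best] > 2 * rss(knots):   # ad hoc stop: removal hurts too much
        break
    knots.pop(best)
```

The surviving knots concentrate where the underlying curve actually bends, while the fit quality stays close to the noise floor.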
Quartic Box-Spline Reconstruction on the BCC Lattice.
Kim, Minho
2013-02-01
This paper presents an alternative box-spline filter for the body-centered cubic (BCC) lattice: the seven-direction quartic box-spline M7, which has the same approximation order as the eight-direction quintic box-spline M8 but lower polynomial degree and smaller support, and is computationally more efficient. When applied to reconstruction with quasi-interpolation prefilters, M7 shows less aliasing, which is verified quantitatively by integral filter metrics and frequency error kernels. To visualize and analyze distributional aliasing characteristics, each spectrum is evaluated on planes and lines with various orientations.
A Simple and Fast Spline Filtering Algorithm for Surface Metrology.
Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei
2015-01-01
Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement.
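One way to realize a spline-like low-pass filter diagonally in the DCT basis (a sketch of the general technique under the stated penalty, not necessarily the paper's exact algorithm) is to solve a second-difference-penalized least-squares problem, whose solution is a per-coefficient attenuation in the DCT-II domain:

```python
import numpy as np
from scipy.fft import dct, idct

def dct_spline_filter(y, mu):
    """Low-pass a profile by penalizing second differences; the penalized
    least-squares solution is diagonal in the DCT-II basis."""
    n = y.size
    k = np.arange(n)
    lam = 2.0 - 2.0 * np.cos(np.pi * k / n)   # 2nd-difference eigenvalues
    Y = dct(y, norm="ortho")
    Z = Y / (1.0 + mu * lam ** 2)             # attenuate high frequencies
    return idct(Z, norm="ortho")

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 400)
rough = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)
smooth = dct_spline_filter(rough, mu=1e4)
```

No matrix factorization or iteration is needed, which is the convenience the paper's transform-domain formulation is after.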
An Examination of New Paradigms for Spline Approximations.
Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A
2006-01-01
Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation-based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case.
The algorithms for rational spline interpolation of surfaces
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1986-01-01
Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.
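A tensor-product spline surface without tension can be sketched with SciPy's `RectBivariateSpline` (the paper's rational splines add a tension parameter, globally or per rectangular subregion, which SciPy's bicubic surface does not expose):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Coarse "terrain elevation" samples on a rectangular grid.
x = np.linspace(0, 10, 11)
y = np.linspace(0, 10, 11)
Z = np.sin(x)[:, None] * np.cos(y)[None, :]

# Bicubic tensor-product spline surface interpolating the grid values.
surf = RectBivariateSpline(x, y, Z, kx=3, ky=3)

# Evaluate the surface between grid nodes.
z_mid = surf(5.5, 5.5)[0, 0]
```

For this smooth test function the bicubic surface reproduces off-grid values to within a few thousandths, which is the baseline the tension-controlled surfaces are compared against.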
B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms
Bueno, G.; Ruiz, M.; Sanchez, S
2006-10-04
Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce women's mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study the usefulness of B-spline filtering based on a gradient scheme, compared with wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.
B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms
NASA Astrophysics Data System (ADS)
Bueno, G.; Sánchez, S.; Ruiz, M.
2006-10-01
Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce women's mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study the usefulness of B-spline filtering based on a gradient scheme, compared with wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.
A Constrained Spline Estimator of a Hazard Function.
ERIC Educational Resources Information Center
Bloxom, Bruce
1985-01-01
A constrained quadratic spline is proposed as an estimator of the hazard function of a random variable. A maximum penalized likelihood procedure is used to fit the estimator to a sample of psychological response times. (Author/LMO)
Detail view of redwood spline joinery of woodframe section against ...
Detail view of redwood spline joinery of wood-frame section against adobe addition (measuring tape denotes plumb line from center of top board) - First Theatre in California, Southwest corner of Pacific & Scott Streets, Monterey, Monterey County, CA
Construction of spline functions in spreadsheets to smooth experimental data
Technology Transfer Automated Retrieval System (TEKTRAN)
A previous manuscript detailed how spreadsheet software can be programmed to smooth experimental data via cubic splines. This addendum corrects a few errors in the previous manuscript and provides additional necessary programming steps. ...
C1 Hermite shape preserving polynomial splines in R3
NASA Astrophysics Data System (ADS)
Gabrielides, Nikolaos C.
2012-06-01
The C2 variable degree splines [1-3] have been proven to be an efficient tool for solving the curve shape-preserving interpolation problem in two and three dimensions. Based on this representation, the current paper proposes a Hermite interpolation scheme to construct C1 shape-preserving splines of variable degree. After this, a slight modification of the method leads to a C1 shape-preserving Hermite cubic spline. Both methods can easily be developed within a CAD system, since they compute directly (without iterations) the B-spline control polygon. They have been implemented and tested within the DNV Software CAD/CAE system GeniE.
Volumetric T-spline Construction Using Boolean Operations
2013-07-01
... a polycube and a surface geometry was presented to construct trivariate T-splines from input triangular meshes. Mapping, subdivision and pillowing techniques ... parameterization at the shared boundary. All the detected sharp feature information is preserved in this step. Pillowing, smoothing and optimization ... mapping, sharp feature preservation, pillowing and quality improvement, handling irregular nodes, trivariate T-spline construction and Bézier ...
Spline trigonometric bases and their properties
Strelkov, N A
2001-08-31
A family of pairs of biorthonormal systems is constructed such that for each p ∈ (1, ∞) one of these systems is a basis in the space L_p(a,b), while the other is the dual basis in L_q(a,b) (here 1/p + 1/q = 1). The functions in the first system are products of trigonometric and algebraic polynomials; the functions in the second are products of trigonometric polynomials and the derivatives of B-splines. The asymptotic behaviour of the Lebesgue functions of the constructed systems is investigated. In particular, it is shown that the dominant terms of pointwise asymptotic expansions for the Lebesgue functions have everywhere (except at certain singular points) the form (4/π²) ln n (that is, the same as in the case of an orthonormal trigonometric system). Interpolation representations with multiple nodes for entire functions of exponential type σ are obtained. These formulae involve a uniform grid; however, by contrast with Kotel'nikov's theorem, where the mesh of the grid is π/σ and decreases as the type of the entire function increases, in the representations obtained the nodes of interpolation can be kept independent of σ, and their multiplicity increases as the type of the interpolated function increases. One possible application of such representations (particularly, their multidimensional analogues) is an effective construction of asymptotically optimal approximation methods by means of scaling and argument shifts of a fixed function (wavelets, grid projection methods, and so on).
Steganalysis using logistic regression
NASA Astrophysics Data System (ADS)
Lubenko, Ivans; Ker, Andrew D.
2011-02-01
We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, in three image sets.
Maternal MCG Interference Cancellation Using Splined Independent Component Subtraction
Yu, Suhong
2011-01-01
Signal distortion is commonly observed when using independent component analysis (ICA) to remove maternal cardiac interference from the fetal magnetocardiogram. This can be seen even in the most conservative case where only the independent components dominated by maternal interference are subtracted from the raw signal, a procedure we refer to as independent component subtraction (ICS). Distortion occurs when the subspaces of the fetal and maternal signals have appreciable overlap. To overcome this problem, we employed splining to remove the fetal signal from the maternal source component. The maternal source components were downsampled and then interpolated to their original sampling rate using a cubic spline. A key aspect of the splining procedure is that the maternal QRS complexes are downsampled much less than the rest of the maternal signal so that they are not distorted, despite their higher bandwidth. The splined maternal source components were projected back onto the magnetic field measurement space and then subtracted from the raw signal. The method was evaluated using data from 24 subjects. We compared the results of conventional, i.e., unsplined, ICS with our method, splined ICS, using matched filtering as a reference. Correlation and subjective assessment of the P-wave and QRS complex were used to assess the performance. Using ICS, we found that the P-wave was adversely affected in 7 of 24 (29%) subjects, all having correlations less than 0.8. Splined ICS showed negligible distortion and improved the signal fidelity to some extent in all subjects. We also demonstrated that maternal T-wave interference could be problematic when the fetal and maternal heartbeats were synchronous. In these instances, splined ICS was more effective than matched filtering. PMID:21712157
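The uneven-downsampling idea can be sketched on synthetic data (the signal shapes and rates below are invented for illustration): keep every sample near the sharp complex, decimate elsewhere, and spline back to the original rate, so the fast "fetal" residue is attenuated while the sharp "maternal" morphology is reproduced exactly.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A "maternal" source trace: a sharp QRS-like spike plus a slow wave,
# contaminated by low-amplitude high-frequency "fetal" residue.
t = np.linspace(0, 1, 1000)
maternal = np.exp(-0.5 * ((t - 0.5) / 0.01) ** 2) + 0.3 * np.sin(2 * np.pi * t)
fetal = 0.05 * np.sin(2 * np.pi * 25 * t)
trace = maternal + fetal

# Keep all samples near the sharp complex, only every 25th elsewhere,
# then interpolate back to the full grid with a cubic spline.
near_qrs = np.abs(t - 0.5) < 0.05
keep = near_qrs | (np.arange(t.size) % 25 == 0)
splined = CubicSpline(t[keep], trace[keep])(t)
```

Because every sample in the QRS window is an interpolation node, the spline passes through the sharp complex without distortion, which is the key property of the splining step.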
An Areal Isotropic Spline Filter for Surface Metrology.
Zhang, Hao; Tong, Mingsi; Chu, Wei
2015-01-01
This paper deals with the application of the spline filter as an areal filter for surface metrology. A profile (2D) filter is often applied in orthogonal directions to yield an areal filter for a three-dimensional (3D) measurement. Unlike the Gaussian filter, the spline filter presents an anisotropic characteristic when used as an areal filter. This disadvantage hampers the wide application of spline filters for evaluation and analysis of areal surface topography. An approximation method is proposed in this paper to overcome the problem. In this method, a profile high-order spline filter serial is constructed to approximate the filtering characteristic of the Gaussian filter. Then an areal filter with isotropic characteristic is composed by implementing the profile spline filter in the orthogonal directions. It is demonstrated that the constructed areal filter has two important features for surface metrology: an isotropic amplitude characteristic and no end effects. Some examples of applying this method on simulated and practical surfaces are analyzed.
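The isotropy issue can be illustrated with the Gaussian case: applying a 1D Gaussian profile filter along each axis reproduces the rotationally symmetric areal Gaussian exactly, which is precisely the property the profile spline filter lacks (this sketch shows the Gaussian baseline, not the paper's spline-serial approximation).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

rng = np.random.default_rng(4)
img = rng.normal(size=(64, 64))   # stand-in for a measured surface

# Profile filtering applied along each axis in turn...
sep = gaussian_filter1d(gaussian_filter1d(img, sigma=3, axis=0),
                        sigma=3, axis=1)

# ...matches the areal (isotropic) Gaussian filter, because the Gaussian
# is separable AND rotationally symmetric. A 1D spline filter applied the
# same way yields an anisotropic areal response instead.
full = gaussian_filter(img, sigma=3)
```

The two results agree to machine precision, confirming the separability argument.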
Higher Order B-Spline Collocation at the Greville Abscissae
Johnson, Richard Wayne
2005-01-01
Collocation methods are investigated because of their simplicity and inherent efficiency for application to a model problem with similarities to the equations of fluid dynamics. The model problem is a steady, one-dimensional convection-diffusion equation with constant coefficients. The objective of the present research is to compare the efficiency and accuracy of several collocation schemes as applied to the model problem for values of 15 and 50 for the associated Peclet number. The application of standard nodal and orthogonal collocation is compared to the use of the Greville abscissae for the collocation points, in conjunction with cubic and quartic B-splines. The continuity of the B-spline curve solution is varied from C1 continuity for traditional orthogonal collocation of cubic and quartic splines to C2-C3 continuity for cubic and quartic splines employing nodal, orthogonal and Greville point collocation. The application of nodal, one-point orthogonal, and Greville collocation for smoothest quartic B-splines is found to be as accurate as for traditional two-point orthogonal collocation using cubics, while having comparable or better efficiency based on operation count. Greville collocation is more convenient than nodal or 1-point orthogonal collocation because exactly the correct number of collocation points is available.
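The Greville abscissae themselves are simple knot averages, one per B-spline basis function; a minimal sketch:

```python
def greville_abscissae(knots, degree):
    """Greville points: the average of `degree` consecutive interior knots,
    one point per B-spline basis function on the knot vector."""
    k = degree
    n = len(knots) - k - 1   # number of basis functions
    return [sum(knots[i + 1:i + k + 1]) / k for i in range(n)]

# Open (clamped) cubic knot vector on [0, 3].
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
pts = greville_abscissae(knots, 3)
```

For a clamped knot vector the first and last Greville points coincide with the interval endpoints, so collocating there automatically provides exactly one point per degree of freedom.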
A kernel representation for exponential splines with global tension
NASA Astrophysics Data System (ADS)
Barendt, Sven; Fischer, Bernd; Modersitzki, Jan
2009-02-01
Interpolation is a key ingredient in many imaging routines. In this note, we present a thorough evaluation of an interpolation method based on exponential splines in tension. These splines depend on so-called tension parameters, which allow for a tuning of their properties. As it turns out, these interpolants have many nice features which are, however, not borne out in the literature. We intend to close this gap. We present for the first time an analytic representation of their kernel, which enables a space and frequency domain analysis. It is shown that the exponential splines in tension, as a function of the tension parameter, bridge the gap between linear and cubic B-spline interpolation. For example, with a certain tension parameter, one is able to suppress ringing artifacts in the interpolant. On the other hand, the analysis in the frequency domain shows that one obtains a signal reconstruction quality on par with that of cubic B-spline interpolation, which, however, suffers from ringing artifacts. With the ability to offer a trade-off between opposing features of interpolation methods, we advocate the use of the exponential spline in tension from a practical point of view and use the new kernel representation to quantify the trade-off.
Fast simulation of x-ray projections of spline-based surfaces using an append buffer
NASA Astrophysics Data System (ADS)
Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca
2012-10-01
Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, the simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps, from tessellation of the splines to projection onto the detector and drawing, are implemented in OpenCL. For increased performance, we introduced a special append buffer that stores the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate. Projections of size 640 × 480 can be generated within 254 ms. Reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task: even in the absence of noise, they result in errors up to 9 HU on average, although the projection images appear correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically.
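The final "sort, resolve to materials, evaluate absorption" step can be sketched as a Beer-Lambert line integral over the sorted buffer entries for one detector ray. The material names, coefficients, and units below are illustrative assumptions, not values from the paper.

```python
import math

def absorption(intersections, mu, I0=1.0):
    """Beer-Lambert model for one ray: I = I0 * exp(-sum(mu_i * d_i)).

    `intersections`: (depth, material) pairs marking where the ray enters
    each material, as the append buffer would hold them before sorting.
    `mu`: attenuation coefficient per material (1/mm, illustrative).
    """
    order = sorted(intersections)                 # sort buffer entries by depth
    line_integral = 0.0
    for (d0, mat), (d1, _) in zip(order, order[1:]):
        line_integral += mu[mat] * (d1 - d0)      # mu times path length in segment
    return I0 * math.exp(-line_integral)          # transmitted intensity

# Ray crossing 10 mm of soft tissue, then 5 mm of bone, exiting into air
hits = [(0.0, "tissue"), (10.0, "bone"), (15.0, "air")]
mu = {"tissue": 0.02, "bone": 0.05, "air": 0.0}
print(absorption(hits, mu))  # exp(-(0.2 + 0.25)) ≈ 0.6376
```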
Optimization of aspheric multifocal contact lens by spline curve
NASA Astrophysics Data System (ADS)
Lien, Vu. T.; Chen, Chao-Chang A.; Qiu, Yu-Ting
2016-10-01
This paper presents a solution for designing aspheric multifocal contact lenses with various add powers. The multi-aspheric curve on the optical surface profile is replaced by a single freeform spline curve. A cubic spline curve is optimized to remove all unsmooth transitions between the vision correction zones while still satisfying the power distribution of the aspheric multifocal contact lens. The results show that a contact lens using a cubic spline curve can provide not only a smooth lens surface profile but also a smooth power distribution, which is difficult to obtain with an aspheric multifocal contact lens. The proposed contact lens is easily transferred to CAD format for further analysis or manufacture. Results of this study can be further applied to progressive contact lens design.
Bidirectional elastic image registration using B-spline affine transformation.
Gu, Suicheng; Meng, Xin; Sciurba, Frank C; Ma, Hongxia; Leader, Joseph; Kaminski, Naftali; Gur, David; Pu, Jiantao
2014-06-01
A registration scheme termed B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation at each control point instead of the traditional translation. Mathematically, BSAT is a generalized form of both the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes with large deformation, a bidirectional instead of the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a subdivision strategy is used to achieve reasonable efficiency in registration. The performance of the developed scheme was assessed using both a two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy.
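A one-dimensional analogue of the idea, an affine map at each control point blended by partition-of-unity B-spline weights, can be sketched as follows. This is our simplification of BSAT (function names, grid, and evaluation scheme are ours), not the authors' code; with identity affine maps and zero translations the transform reduces to the identity because the weights sum to one.

```python
import numpy as np

def cubic_bspline_weights(u):
    """Uniform cubic B-spline blending weights for local coordinate u in [0, 1)."""
    return np.array([(1 - u) ** 3,
                     3 * u ** 3 - 6 * u ** 2 + 4,
                     -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                     u ** 3]) / 6.0            # the four weights sum to 1

def bsat_1d(x, grid_x, A, t):
    """1D BSAT analogue: control point j carries an affine map A[j]*x + t[j]
    instead of a pure translation t[j]. `grid_x` is a uniform control grid;
    x must lie where four supporting control points exist."""
    h = grid_x[1] - grid_x[0]
    i = int(np.floor((x - grid_x[0]) / h))     # index of the containing cell
    u = (x - grid_x[i]) / h                    # local coordinate in [0, 1)
    w = cubic_bspline_weights(u)
    j = np.arange(i - 1, i + 3)                # 4 supporting control points
    return float(np.sum(w * (A[j] * x + t[j])))

grid = np.linspace(-2, 12, 15)                 # spacing h = 1
A = np.ones(15)                                # identity scaling everywhere
t = np.zeros(15)                               # no translation
print(bsat_1d(5.3, grid, A, t))                # ≈ 5.3: identity transform
```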
Simple spline-function equations for fracture mechanics calculations
NASA Technical Reports Server (NTRS)
Orange, T. W.
1979-01-01
The paper presents simple spline-function equations for fracture mechanics calculations. A spline function is a sequence of piecewise polynomials of degree n greater than 1 whose coefficients are such that the function and its first n-1 derivatives are continuous. Second-degree spline equations are presented for the compact, three-point bend, and crack-line wedge-loaded specimens. Some expressions can be used directly; for example, for a cyclic crack propagation test using a compact specimen, the equation given allows the crack length to be calculated from the slope of the load-displacement curve. For an R-curve test, the equations allow the crack length and stress intensity factor to be calculated from the displacement and the displacement ratio.
Error bounded conic spline approximation for NC code
NASA Astrophysics Data System (ADS)
Shen, Liyong
2012-01-01
Curve fitting is an important preliminary work for data compression and path interpolators in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm consists of three main steps: divide the G01 code into subsets by discrete curvature detection; find a polygonal line segment approximation for each subset within a given error; and finally, fit each polygonal approximation with a conic Bezier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is straightforward and efficient, requiring no global equation system or optimization problem to be solved. It is completed by the selection of the curve's weight. To design curves more suitable for NC, we present an interval for the weight selection, and the resulting error is then computed.
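The discrete curvature detection in the first step can be sketched with the classic three-point circumcircle estimator (our choice of estimator; the paper does not fix this exact formula): the curvature at a G01 point is that of the circle through it and its two neighbors.

```python
import numpy as np

def discrete_curvature(p0, p1, p2):
    """Signed curvature of the circle through three consecutive points:
    kappa = 4 * signed_area / (|p0p1| * |p1p2| * |p0p2|)."""
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p2 - p0)
    # twice the signed triangle area via the cross product
    area2 = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])
    return 2.0 * area2 / (a * b * c)

# Three points on the unit circle -> curvature 1
th = np.array([0.0, 0.1, 0.2])
pts = np.stack([np.cos(th), np.sin(th)], axis=1)
print(discrete_curvature(*pts))  # ≈ 1.0
```

A jump in this quantity along the G01 polyline is a natural place to split the code into subsets before fitting each piece with a conic.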
A cubic spline approximation for problems in fluid mechanics
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Graves, R. A., Jr.
1975-01-01
A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
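The abstract's claim that splines retain accuracy on a nonuniform mesh, including for derivative terms, can be checked with a small SciPy experiment (ours, not from the paper): a cubic spline through samples on a stretched mesh reproduces the first and second derivatives of a smooth function closely.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Nonuniform mesh clustered near x = 0, as a boundary-layer computation might use
x = 1 - np.cos(np.linspace(0, np.pi / 2, 41))      # 41 points in [0, 1]
f = np.exp(-5 * x)

s = CubicSpline(x, f)
xp = np.linspace(0.05, 0.95, 19)
err1 = np.max(np.abs(s(xp, 1) - (-5 * np.exp(-5 * xp))))   # first-derivative error
err2 = np.max(np.abs(s(xp, 2) - (25 * np.exp(-5 * xp))))   # second-derivative error
print(err1, err2)  # both small despite the nonuniform spacing
```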
BOX SPLINE BASED 3D TOMOGRAPHIC RECONSTRUCTION OF DIFFUSION PROPAGATORS FROM MRI DATA.
Ye, Wenxing; Portnoy, Sharon; Entezari, Alireza; Vemuri, Baba C; Blackband, Stephen J
2011-06-09
This paper introduces a tomographic approach for reconstruction of diffusion propagators, P( r ), in a box spline framework. Box splines are chosen as basis functions for high-order approximation of P( r ) from the diffusion signal. Box splines are a generalization of B-splines to multivariate setting that are particularly useful in the context of tomographic reconstruction. The X-Ray or Radon transform of a (tensor-product B-spline or a non-separable) box spline is a box spline - the space of box splines is closed under the Radon transform.We present synthetic and real multi-shell diffusion-weighted MR data experiments that demonstrate the increased accuracy of P( r ) reconstruction as the order of basis functions is increased.
An L1 smoothing spline algorithm with cross validation
NASA Astrophysics Data System (ADS)
Bosworth, Ken W.; Lall, Upmanu
1993-08-01
We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces WM(D). We assume one is given data of the form yi = f(ti) + ɛi, i = 1, ..., N, with {ti} ⊂ D, where the ɛi are errors with E(ɛi) = 0 and f is assumed to be in WM. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, sλ, of the optimization problem: minimize (1/N) ∑ |yi - g(ti)| + λ JM(g), where JM(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, sλ, would be expected to give robust smoothed estimates of f in situations where the ɛi are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing sλ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 taken on the above SQPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
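A discrete 1D analogue of the LAD smoothing problem can be sketched with iteratively reweighted least squares in place of the paper's sequence of quadratic programs; everything below (the discrete second-difference penalty, the IRLS scheme, and all names) is our simplification, shown only to illustrate why the L1 fidelity term is robust to heavy-tailed errors.

```python
import numpy as np

def lad_smooth(y, lam=1.0, iters=50, eps=1e-6):
    """Discrete 1D analogue of an LAD smoothing spline:
    minimize sum_i |y_i - g_i| + lam * ||second differences of g||^2,
    via iteratively reweighted least squares (IRLS)."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)        # second-difference operator
    P = lam * D.T @ D                          # roughness penalty matrix
    g = np.linalg.solve(np.eye(n) + P, y)      # L2-smoothed starting point
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - g), eps)   # IRLS weights for |y - g|
        g = np.linalg.solve(np.diag(w) + P, w * y)
    return g

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(100)
y[10] += 5.0                                   # one heavy-tailed outlier
g = lad_smooth(y)
print(y[10], g[10])  # the L1 fit largely ignores the outlier
```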
Regression-based oxides of nitrogen predictors for three diesel engine technologies.
Chen, Xiaohan; Schmid, Natalia A; Wang, Lijuan; Clark, Nigel N
2010-01-01
Models of diesel engine emissions such as oxides of nitrogen (NO(x)) are valuable when they can predict instantaneous values because they can be incorporated into whole vehicle models, support inventory predictions, and assist in developing superior engine and aftertreatment control strategies. Recent model-year diesel engines using multiple injection strategies, exhaust gas recirculation, and variable geometry turbocharging may have more transient sensitivity and demand more sophisticated modeling than legacy engines. Emissions data from 1992, 1999, and 2004 model-year U.S. truck engines were modeled separately using a linear approach (with transient terms) and multivariate adaptive regression splines (MARS), an adaptive piecewise regression approach that has seen limited prior use for emissions prediction. Six input variables based on torque, speed, power, and their derivatives were used for MARS. Emissions time delay was considered for both models. Manifold air temperature (MAT) and manifold air pressure (MAP) were further used in NO(x) modeling to build a plug-in model. The predictive performance of MARS for instantaneous NO(x) on part of the certification transient test procedure (Federal Test Procedure [FTP]) was lower for the 2004 engine (R2 = 0.949) than for the 1992 (R2 = 0.981) and 1999 (R2 = 0.988) engines. Linear regression performed similarly for the 1992 and 1999 engines but performed poorly (R2 = 0.896) for the 2004 engine. The MARS performance varied substantially when data from different cycles were used. Overall, the MAP and MAT plug-in model trained by MARS was the best, but the performance differences between linear regression and MARS were not substantial.
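The hinge-function basis that MARS builds adaptively can be illustrated with fixed, hand-picked knots. This sketch shows only the reflected-pair basis and a least-squares fit; real MARS selects knots and terms via forward/backward passes, which we omit.

```python
import numpy as np

def hinge_features(x, knots):
    """MARS-style basis: intercept plus reflected hinge pairs
    max(0, x - t) and max(0, t - x) for each knot t."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.stack(cols, axis=1)

# Piecewise-linear target: it lies exactly in the span of the hinge basis
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 400)
y = np.abs(x) + 0.5 * np.maximum(0, x - 1)

X = hinge_features(x, knots=[0.0, 1.0])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
print(np.max(np.abs(pred - y)))  # ~0: the basis recovers the target exactly
```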
How to fly an aircraft with control theory and splines
NASA Technical Reports Server (NTRS)
Karlsson, Anders
1994-01-01
When trying to fly an aircraft as smoothly as possible, it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory in a system that models an aircraft. Computer calculations in Matlab show that it is not possible to obtain sufficiently smooth control signals in this way. This is due to the fact that the splines try to approximate not only the test function but also its derivatives. Perfect tracking is achieved, but the price is very peaky control signals and accelerations.
B-spline based image tracking by detection
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman
2016-05-01
Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-spline based image tracking method is implemented. The novel method models the background and foreground using B-splines, followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
Near minimally normed spline quasi-interpolants on uniform partitions
NASA Astrophysics Data System (ADS)
Barrera, D.; Ibanez, M. J.; Sablonniere, P.; Sbibih, D.
2005-09-01
Spline quasi-interpolants (QIs) are local approximating operators for functions or discrete data. We consider the construction of discrete and integral spline QIs on uniform partitions of the real line having small infinity norms. We call them near minimally normed QIs: they are exact on polynomial spaces and minimize a simple upper bound of their infinity norms. We give precise results for cubic and quintic QIs. Also the QI error is considered, as well as the advantage that these QIs present when approximating functions with isolated discontinuities.
ERIC Educational Resources Information Center
Pedrini, D. T.; Pedrini, Bonnie C.
Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…
Regressive evolution in Astyanax cavefish.
Jeffery, William R
2009-01-01
A diverse group of animals, including members of most major phyla, have adapted to life in the perpetual darkness of caves. These animals are united by the convergence of two regressive phenotypes: loss of eyes and loss of pigmentation. The mechanisms of regressive evolution are poorly understood. The teleost Astyanax mexicanus is of special significance in studies of regressive evolution in cave animals. This species includes an ancestral surface-dwelling form and many conspecific cave-dwelling forms, some of which have evolved their regressive phenotypes independently. Recent advances in Astyanax development and genetics have provided new information about how eyes and pigment are lost during cavefish evolution; namely, they have revealed some of the molecular and cellular mechanisms involved in trait modification, the number and identity of the underlying genes and mutations, the molecular basis of parallel evolution, and the evolutionary forces driving adaptation to the cave environment.
Use of splines in the solution of parabolized Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Lyttle, Ian; Reed, Helen
1996-11-01
A parabolized Navier-Stokes code is written to investigate the three-dimensional nature of boundary layers. The geometry of interest is a sharp cone, of elliptical cross-section, at zero angle-of-attack. The flow of interest is a calorically perfect ideal gas at a free-stream Mach number of 4 and a free-stream Reynolds number of 4 × 10^6 per meter. The use of cubic splines with an adaptive grid scheme is found to induce small errors in pressure. Though large-scale flow features remain unaffected, spurious small-scale features can appear. The nature of these errors is investigated. As the solution is transferred between grids, splined quantities are used to reconstruct other quantities through the ideal gas relations. Non-physical oscillations appear in the reconstructed quantities and contaminate the solution at small scales. This work is supported by the Air Force Office of Scientific Research (F49620-95-1-0033), and by the National Science Foundation Faculty Awards for Women in Science and Engineering (GER-9022523).
A Fourth-Order Spline Collocation Approach for the Solution of a Boundary Layer Problem
NASA Astrophysics Data System (ADS)
Sayfy, Khoury, S.
2011-09-01
A finite element approach, based on cubic B-spline collocation, is presented for the numerical solution of a class of singularly perturbed two-point boundary value problems that possess a boundary layer at one or both end points. Due to the existence of a layer, the problem is handled using an adaptive spline collocation approach constructed over non-uniform Shishkin-like meshes, defined via a carefully selected generating function. To tackle nonlinearity, where it exists, an iterative scheme arising from Newton's method is employed. The rate of convergence is verified to be of fourth order, calculated using the double-mesh principle. The efficiency and applicability of the method are demonstrated by applying it to a number of linear and nonlinear examples. The numerical solutions are compared with both analytical and other existing numerical solutions in the literature. The numerical results confirm that this method is superior when compared with other accessible approaches and yields more accurate solutions.
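A Shishkin-type mesh can be sketched with the textbook piecewise-uniform recipe: half the points are packed inside the layer, up to a transition point depending on the perturbation parameter. The paper's generating function may differ; the constant `sigma` below is the usual order-dependent choice, assumed here for illustration.

```python
import numpy as np

def shishkin_mesh(n, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a layer at x = 0:
    transition point tau = min(1/2, sigma * eps * ln n), with n/2 cells
    inside [0, tau] and the rest on [tau, 1]."""
    tau = min(0.5, sigma * eps * np.log(n))
    left = np.linspace(0.0, tau, n // 2 + 1)
    right = np.linspace(tau, 1.0, n - n // 2 + 1)
    return np.concatenate([left, right[1:]])

mesh = shishkin_mesh(32, eps=1e-3)
print(len(mesh), mesh[0], mesh[-1])  # 33 points from 0.0 to 1.0
```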
Spline collocation method for linear singular hyperbolic systems
NASA Astrophysics Data System (ADS)
Gaidomak, S. V.
2008-07-01
Some classes of singular systems of partial differential equations with variable matrix coefficients and internal hyperbolic structure are considered. The spline collocation method is used to numerically solve such systems. Sufficient conditions for the convergence of the numerical procedure are obtained. Numerical results are presented.
Thin-plate spline quadrature of geodetic integrals
NASA Technical Reports Server (NTRS)
Vangysen, Herman
1989-01-01
Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules, successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.
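An interpolating thin-plate spline of the kind underlying these quadrature rules can be set up via the standard augmented linear system; this 2D scattered-data sketch is ours, not the paper's geodetic formulation. Because the system includes an affine polynomial part, the spline reproduces affine data exactly.

```python
import numpy as np

def tps_fit(xy, f):
    """Interpolating thin-plate spline s(p) = sum_i w_i phi(|p - p_i|) + a0 + a1 x + a2 y,
    phi(r) = r^2 log r (phi(0) = 0), via the standard augmented system."""
    n = len(xy)
    r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(r > 0, r * r * np.log(r), 0.0)
    P = np.hstack([np.ones((n, 1)), xy])                  # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(A, np.concatenate([f, np.zeros(3)]))
    return sol[:n], sol[n:]

def tps_eval(p, xy, w, a):
    r = np.linalg.norm(xy - p, axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        phi = np.where(r > 0, r * r * np.log(r), 0.0)
    return phi @ w + a[0] + a[1] * p[0] + a[2] * p[1]

rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, (20, 2))
vals = pts[:, 0] + 2 * pts[:, 1]          # data sampled from a plane
w, a = tps_fit(pts, vals)
print(tps_eval(np.array([0.3, 0.7]), pts, w, a))  # ≈ 1.7: affine data reproduced
```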
Radial Splines Would Prevent Rotation Of Bearing Race
NASA Technical Reports Server (NTRS)
Kaplan, Ronald M.; Chokshi, Jaisukhlal V.
1993-01-01
Interlocking fine-pitch ribs and grooves formed on otherwise flat mating end faces of housing and outer race of rolling-element bearing to be mounted in housing, according to proposal. Splines bear large torque loads and impose minimal distortion on raceway.
Connecting the Dots Parametrically: An Alternative to Cubic Splines.
ERIC Educational Resources Information Center
Hildebrand, Wilbur J.
1990-01-01
Discusses a method of cubic splines to determine a curve through a series of points and a second method for obtaining parametric equations for a smooth curve that passes through a sequence of points. Procedures for determining the curves and results of each of the methods are compared. (YP)
Analysis of Pairwise Preference Data Using Integrated B-SPLINES.
ERIC Educational Resources Information Center
Winsberg, Suzanne; Ramsay, James O.
1981-01-01
A general method of scaling pairwise preference data is presented that may be used without prior knowledge about the nature of the relationship between an observation and the process giving rise to it. The method involves a monotone transformation and is similar to the B-SPLINE approach. (Author/JKS)
On the Permanence Property in Spherical Spline Interpolation,
1982-11-01
Cubic generalized B-splines for interpolation and nonlinear filtering of images
NASA Astrophysics Data System (ADS)
Tshughuryan, Heghine
1997-04-01
This paper introduces the use of generalized, or parametric, B-splines, namely cubic generalized B-splines, in various signal processing applications. The theory of generalized B-splines is briefly reviewed, and some important properties of generalized B-splines are investigated. Generalized B-splines are shown to serve as a tool for solving the quasioptimal algorithm problem for nonlinear filtering. Finally, experimental results are presented for oscillatory and other signals and images.
Free-form deformation using lower-order B-spline for nonrigid image registration.
Sun, Wei; Niessen, Wiro J; Klein, Stefan
2014-01-01
In traditional free-form deformation (FFD) based registration, a B-spline basis function is commonly utilized to build the transformation model. As the B-spline order increases, the corresponding B-spline function becomes smoother. However, a higher-order B-spline has a larger support region, which means higher computational cost. For a given D-dimensional nth-order B-spline, an mth-order B-spline with m ≤ n has ((m + 1)/(n + 1))^D times the computational complexity. Generally, the third-order B-spline is regarded as keeping a good balance between smoothness and computation time. A lower-order function is seldom used to construct the deformation field for registration since it is less smooth. In this research, we investigated whether lower-order B-spline functions can be utilized for efficient registration, by using a novel stochastic perturbation technique in combination with a postponed smoothing technique that moves to a higher B-spline order. Experiments were performed with 3D lung and brain scans, demonstrating that the lower-order B-spline FFD in combination with the proposed perturbation and postponed smoothing techniques even results in better accuracy and smoothness than the traditional third-order B-spline registration, while substantially reducing computational costs.
Abbas, A A; Guo, X; Tan, W H; Jalab, H A
2014-08-01
In a computerized image analysis environment, the irregularity of a lesion border has been used to differentiate between malignant melanoma and other pigmented skin lesions. Accurate automated detection of the lesion border is a significant step towards accurate classification at a later stage. In this paper, we propose the use of a combined Spline and B-spline method to enhance the quality of dermoscopic images before segmentation. Morphological operations and a median filter were first used to remove noise from the original image during pre-processing. We then adjusted the image RGB values to the optimal color channel (the green channel). The combined Spline and B-spline method was subsequently adopted to enhance the image before segmentation. The lesion segmentation was completed based on a threshold value obtained empirically using the optimal color channel. Finally, morphological operations were utilized to merge the smaller regions with the main lesion region. Improvement in average segmentation accuracy was observed in experiments conducted on 70 dermoscopic images: the average accuracy achieved was 97.21%, with average sensitivity and specificity of 94% and 98.05%, respectively.
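The denoise-threshold-merge stages can be sketched with SciPy's ndimage on a synthetic image. This is a minimal reimplementation of the general idea only: the Spline/B-spline enhancement step is omitted, and the threshold is passed in as a fixed argument rather than derived empirically as in the paper.

```python
import numpy as np
from scipy import ndimage

def segment_lesion(rgb, threshold):
    """Median-filter the green channel, threshold it (lesions are darker),
    close gaps morphologically, and keep the largest connected region."""
    green = ndimage.median_filter(rgb[..., 1], size=3)   # denoise green channel
    mask = green < threshold                             # dark pixels = lesion
    mask = ndimage.binary_closing(mask, iterations=2)    # merge nearby regions
    labels, n = ndimage.label(mask)
    if n > 1:                                            # keep largest component
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (1 + np.argmax(sizes))
    return mask

# Synthetic image: dark disc ("lesion") on a bright background
yy, xx = np.mgrid[0:64, 0:64]
img = np.full((64, 64, 3), 200, dtype=float)
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 60
mask = segment_lesion(img, threshold=128)
print(mask.sum())  # ≈ area of the disc, pi * 15^2 ≈ 707 pixels
```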
ERIC Educational Resources Information Center
Walton, Joseph M.; And Others
1978-01-01
Ridge regression is an approach to the problem of large standard errors of regression estimates of intercorrelated regressors. The effect of ridge regression on the estimated squared multiple correlation coefficient is discussed and illustrated. (JKS)
Numerical solution of the Black-Scholes equation using cubic spline wavelets
NASA Astrophysics Data System (ADS)
Černá, Dana
2016-12-01
The Black-Scholes equation is used in financial mathematics for the computation of market values of options at a given time. We use the θ-scheme for time discretization and an adaptive scheme based on wavelets for discretization on each time level. Advantages of the proposed method are a small number of degrees of freedom, high-order accuracy with respect to the variables representing prices, and a relatively small number of iterations needed to resolve the problem to a desired accuracy. We use several cubic spline wavelet and multi-wavelet bases and discuss their advantages and disadvantages. We also compare isotropic and anisotropic approaches. Numerical experiments are presented for the two-dimensional Black-Scholes equation.
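The θ-scheme time discretization can be sketched with the spatial part simplified to central finite differences (the paper uses adaptive cubic spline wavelets in space instead; grid sizes, boundary handling, and parameters below are our illustrative choices). With θ = 0.5 this is Crank-Nicolson, marched backward from the payoff of a European put.

```python
import numpy as np

# theta-scheme for the Black-Scholes equation V_t + L V = 0 (European put)
r, sigma, K, T, theta = 0.05, 0.2, 100.0, 1.0, 0.5
S = np.linspace(0.0, 300.0, 301)            # price grid, spacing h = 1
h, m = S[1] - S[0], 1000
dt = T / m

# Interior rows of L V = 0.5 sigma^2 S^2 V_SS + r S V_S - r V
n = len(S)
A = np.zeros((n, n))
i = np.arange(1, n - 1)
A[i, i - 1] = 0.5 * sigma**2 * S[i]**2 / h**2 - 0.5 * r * S[i] / h
A[i, i] = -(sigma**2) * S[i]**2 / h**2 - r
A[i, i + 1] = 0.5 * sigma**2 * S[i]**2 / h**2 + 0.5 * r * S[i] / h
# Boundary rows stay zero, so V is frozen at its payoff values there
# (a crude but adequate Dirichlet condition for this illustration).

I = np.eye(n)
B = I + (1 - theta) * dt * A                # explicit part of the theta-scheme
Minv = np.linalg.inv(I - theta * dt * A)    # implicit part, constant in time

V = np.maximum(K - S, 0.0)                  # payoff at maturity
for _ in range(m):                          # march backward in time
    V = Minv @ (B @ V)
print(V[100])  # at-the-money value; the Black-Scholes formula gives ~5.57
```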
Soldea, Octavian; Elber, Gershon; Rivlin, Ehud
2006-02-01
This paper presents a method to globally segment volumetric images into regions that contain convex or concave (elliptic) iso-surfaces, planar or cylindrical (parabolic) iso-surfaces, and volumetric regions with saddle-like (hyperbolic) iso-surfaces, regardless of the value of the iso-surface level. The proposed scheme relies on a novel approach to globally compute, bound, and analyze the Gaussian and mean curvatures of an entire volumetric data set, using a trivariate B-spline volumetric representation. This scheme derives a new differential scalar field for a given volumetric scalar field, which could easily be adapted to other differential properties. Moreover, this scheme can set the basis for more precise and accurate segmentation of data sets targeting the identification of primitive parts. Since the proposed scheme employs piecewise continuous functions, it is precise and insensitive to aliasing.
On the role of exponential splines in image interpolation.
Kirshner, Hagai; Porat, Moshe
2009-10-01
A Sobolev reproducing-kernel Hilbert space approach to image interpolation is introduced. The underlying kernels are exponential functions and are related to stochastic autoregressive image modeling. The corresponding image interpolants can be implemented effectively using compactly-supported exponential B-splines. A tight ℓ2 upper bound on the interpolation error is then derived, suggesting that the proposed exponential functions are optimal in this regard. Experimental results indicate that the proposed interpolation approach with properly-tuned, signal-dependent weights outperforms currently available polynomial B-spline models of comparable order. Furthermore, a unified approach to image interpolation by ideal and nonideal sampling procedures is derived, suggesting that the proposed exponential kernels may have a significant role in image modeling as well. Our conclusion is that the proposed Sobolev-based approach could be instrumental and a preferred alternative in many interpolation tasks.
Spline function approximation for velocimeter Doppler frequency measurement
NASA Technical Reports Server (NTRS)
Savakis, Andreas E.; Stoughton, John W.; Kanetkar, Sharad V.
1989-01-01
A spline function approximation approach for measuring the Doppler spectral peak frequency in a laser Doppler velocimeter system is presented. The processor is designed for signal bursts with mean Doppler shift frequencies up to 100 MHz, input turbulence up to 20 percent, and photon counts as low as 300. The frequency-domain processor uses a bank of digital bandpass filters for the capture of the energy spectrum of each signal burst. The average values of the filter output energies, as a function of normalized frequency, are modeled as deterministic spline functions which are linearly weighted to evaluate the spectral peak location associated with the Doppler shift. The weighting coefficients are chosen to minimize the mean square error. Performance evaluation by simulation yields average errors in estimating mean Doppler frequencies within 0.5 percent for poor signal-to-noise conditions associated with a low photon count of 300 photons/burst.
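A simpler stand-in for the spline-model peak estimator is parabolic interpolation through the largest filter-bank energy and its two neighbors. This is our simplification for illustration, not the paper's MMSE-weighted spline fit; the Gaussian test spectrum below is synthetic.

```python
import numpy as np

def parabolic_peak(energies, freqs):
    """Estimate the spectral-peak frequency from filter-bank energies by
    fitting a parabola through the largest bin and its two neighbors."""
    k = int(np.argmax(energies))
    k = min(max(k, 1), len(energies) - 2)          # keep neighbors in range
    y0, y1, y2 = energies[k - 1], energies[k], energies[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # vertex offset in bins
    return freqs[k] + delta * (freqs[1] - freqs[0])

freqs = np.linspace(0, 100, 51)                    # MHz, 2 MHz bin spacing
true_f = 37.3
energies = np.exp(-0.5 * ((freqs - true_f) / 5) ** 2)
print(parabolic_peak(energies, freqs))             # close to 37.3
```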
Data approximation using a blending type spline construction
Dalmo, Rune; Bratlie, Jostein
2014-11-18
Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation-diminishing properties of Bezier/B-spline curves. It has the following structure: it tries to fit one Bezier segment to the entire data set, and if that is impossible it subdivides the data set and reconsiders the subset. After accepting a subset, the algorithm tries to find the longest run of points within the tolerance and then approximates this run with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the remaining data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
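A minimal Python sketch of this fit-and-subdivide strategy (least-squares cubic Bezier segments with chord-length parameters; an assumption, not the authors' exact run-finding algorithm on rational segments):

```python
import numpy as np

def bernstein(t):
    """Cubic Bernstein basis evaluated at parameter values t."""
    t = np.asarray(t)[:, None]
    return np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t), t ** 3])

def fit_bezier(points, tol):
    """Fit one cubic Bezier; subdivide the data whenever tolerance is violated."""
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]                              # chord-length parameterization
    B = bernstein(t)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    err = np.max(np.linalg.norm(B @ ctrl - points, axis=1))
    if err <= tol or len(points) <= 4:
        return [ctrl]
    mid = len(points) // 2
    # overlap one point so adjacent segments share an endpoint
    return fit_bezier(points[:mid + 1], tol) + fit_bezier(points[mid:], tol)

theta = np.linspace(0, np.pi, 50)
data = np.c_[np.cos(theta), np.sin(theta)]     # half circle, 50 samples
segments = fit_bezier(data, tol=1e-3)
print(len(segments))                           # a few segments replace 50 points
```

The data reduction comes from replacing many samples with a handful of 4-point control polygons.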
High-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1975-01-01
The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to the Burgers equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.
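The accuracy claims can be illustrated on a model problem. The sketch below (an assumption, not the authors' formulation) solves u'' = f on [0, 1] with u(0) = u(1) = 0 two ways: the classical cubic-spline collocation relation, which is only second-order, and a Numerov-type corrected right-hand side that attains the fourth-order accuracy reported for a uniform mesh:

```python
import numpy as np

def solve(n, improved):
    """Solve u'' = f with u(0)=u(1)=0 on n intervals; return max error."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = -np.pi ** 2 * np.sin(np.pi * x)        # exact solution: sin(pi x)
    if improved:
        rhs = (f[:-2] + 10.0 * f[1:-1] + f[2:]) / 12.0   # Numerov-type, O(h^4)
    else:
        rhs = (f[:-2] + 4.0 * f[1:-1] + f[2:]) / 6.0     # spline moment relation, O(h^2)
    m = n - 1
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h ** 2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    return np.max(np.abs(u - np.sin(np.pi * x)))

order_classic = np.log2(solve(16, False) / solve(32, False))
order_improved = np.log2(solve(16, True) / solve(32, True))
print(order_classic, order_improved)
```

Halving the mesh reduces the error by about 4x in the classical scheme and about 16x in the corrected one, i.e. observed orders near 2 and 4.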
Analysis of harmonic spline gravity models for Venus and Mars
NASA Technical Reports Server (NTRS)
Bowin, Carl
1986-01-01
Methodology utilizing harmonic splines for determining the true gravity field from Line-Of-Sight (LOS) acceleration data from planetary spacecraft missions was tested. As is well known, the LOS data incorporate errors in the zero reference level that appear to be inherent in the processing procedure used to obtain the LOS vectors. The proposed method offers a solution to this problem. The harmonic spline program was converted from the VAX 11/780 to the Ridge 32C computer. A problem with the matrix inversion routine was solved, improving the inversion of the data matrices used in the Optimum Estimate program for global Earth studies. The problem of obtaining a successful matrix inversion for a single rev supplemented by data for the two adjacent revs still remains.
Regional Ionosphere Mapping with Kriging and B-spline Methods
NASA Astrophysics Data System (ADS)
Grynyshyna-Poliuga, O.; Stanislawska, I. M.
2013-12-01
This work demonstrates the concept and practical examples of mapping of the regional ionosphere, based on GPS observations from the EGNOS Ranging and Integrity Monitoring Stations (RIMS) network and permanent stations near to them. Interpolation/prediction techniques, such as kriging (KR) and the cubic B-spline, which are suitable for handling multi-scale phenomena and unevenly distributed data, were used to create total electron content (TEC) maps. Their computational efficiency (especially the B-spline) and the ability to handle undersampled data (especially kriging) are particularly attractive. The data sets have been collected into seasonal bins representing the June and December solstices and the equinoxes (March, September). TEC maps have a spatial resolution of 2.5° in latitude and 2.5° in longitude, and a 15-minute temporal resolution. The time series of the TEC maps can be used to derive average monthly maps describing major ionospheric trends as a function of time, season, and spatial location.
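A hedged sketch of the B-spline mapping step on synthetic data (the station locations and the TEC surface below are invented; real input would be RIMS-derived TEC): a smoothing bivariate spline handles unevenly distributed samples and is then evaluated on a regular 2.5-degree grid:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Scattered, noisy TEC-like samples over a hypothetical European region.
rng = np.random.default_rng(1)
lat = rng.uniform(35.0, 60.0, 300)
lon = rng.uniform(-10.0, 30.0, 300)
tec = 20.0 + 5.0 * np.sin(np.radians(lat * 4)) + 2.0 * np.cos(np.radians(lon * 3))
tec += rng.normal(scale=0.3, size=lat.size)             # observation noise

spl = SmoothBivariateSpline(lat, lon, tec, s=len(tec))  # smoothing, not interpolation
grid_lat = np.arange(35.0, 60.1, 2.5)                   # 2.5 deg grid as in the text
grid_lon = np.arange(-10.0, 30.1, 2.5)
tec_map = spl(grid_lat, grid_lon)                       # regular lat x lon TEC map
print(tec_map.shape)
```

The same gridded evaluation, repeated per 15-minute epoch, would give the map time series described above.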
From Data to Assessments and Decisions: Epi-Spline Technology
2014-05-08
studies, additional information derives from a much wider range of sources. Data is never obtained in a vacuum, but rather in a context that provides... examine whether the solutions of the approximate problems are indeed approximations of solutions of the actual problem, study associated convergence... studies of this area, especially in higher dimensions. 4 Background on Epi-Convergence: The examination of epi-splines and the relationship between the
Uncertainty Quantification using Epi-Splines and Soft Information
2012-06-01
prediction of the behavior of constructed models of phenomena in physics, biology, chemistry, ecology, engineered systems, politics, etc. ... Results... soft information is often more qualitative in nature, coming from a human understanding of characteristics of the system output. Engineered systems are... engineering column example illustrates the ability of the epi-spline framework to perform well under a more complex system function where we have
A B-spline Galerkin method for the Dirac equation
NASA Astrophysics Data System (ADS)
Froese Fischer, Charlotte; Zatsarinny, Oleg
2009-06-01
The B-spline Galerkin method is first investigated for the simple eigenvalue problem, y″ = -λ²y, that can also be written as a pair of first-order equations y′ = λz, z′ = -λy. Expanding both y(r) and z(r) in the B basis results in many spurious solutions such as those observed for the Dirac equation. However, when y(r) is expanded in the B basis and z(r) in the dB/dr basis, solutions of the well-behaved second-order differential equation are obtained. From this analysis, we propose a stable method, the (B, B′) basis, for the Dirac equation and evaluate its accuracy by comparing the computed and exact R-matrix for a wide range of nuclear charges Z and angular quantum numbers κ. When splines of the same order are used, many spurious solutions are found, whereas none are found for splines of different order. Excellent agreement is obtained for the R-matrix and energies for bound states for low values of Z. For high Z, accuracy requires the use of a grid with many points near the nucleus. We demonstrate the accuracy of the bound-state wavefunctions by comparing integrals arising in hyperfine interaction matrix elements with exact analytic expressions. We also show that the Thomas-Reiche-Kuhn sum rule is not a good measure of the quality of the solutions obtained by the B-spline Galerkin method whereas the R-matrix is very sensitive to the appearance of pseudo-states.
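The scalar model eigenproblem y″ = -λ²y on [0, π] with y(0) = y(π) = 0 can be reproduced in a few lines. The sketch below uses piecewise-linear hat functions instead of B-splines (a simplifying assumption), but shows the same Galerkin structure: a generalized matrix eigenproblem K c = λ² M c whose low eigenvalues approximate the exact values 1, 4, 9, ...

```python
import numpy as np
from scipy.linalg import eigh

# Galerkin discretization with hat functions on a uniform mesh of [0, pi].
n = 200                        # interior nodes
h = np.pi / (n + 1)
# stiffness (from y' phi') and consistent mass (from y phi) matrices
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
M = (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * h / 6.0
vals = eigh(K, M, eigvals_only=True)
print(vals[:3])                # close to 1, 4, 9; no spurious low modes here
```

The spurious solutions discussed in the abstract arise only when both first-order fields are expanded in the same basis; this well-posed second-order formulation is free of them.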
Registration of multiple image sets with thin-plate spline
NASA Astrophysics Data System (ADS)
He, Liang; Houk, James C.
1994-09-01
A thin-plate spline method for spatial warping was used to register multiple image sets during 3D reconstruction of histological sections. In a neuroanatomical study, the same labeling method was applied to several turtle brains. Each case produced a set of microscopic sections. Spatial warping was employed to map data sets from multiple cases onto a template coordinate system. This technique enabled us to produce an anatomical reconstruction of a neural network that controls limb movement.
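A minimal sketch of thin-plate-spline landmark warping using SciPy's RBFInterpolator (the landmark coordinates below are illustrative; the study used landmarks digitized from histological sections):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Source landmarks (one case) and their corresponding template positions.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = np.array([[0.1, 0.0], [1.0, 0.1], [0.0, 0.9], [1.1, 1.0], [0.55, 0.45]])

# Thin-plate spline: interpolates the landmarks exactly (smoothing=0 default)
# and extends the deformation smoothly to all other section points.
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')
mapped = warp(src)                      # landmarks map onto their targets
print(np.allclose(mapped, dst))
```

Applying `warp` to every labeled point of a case maps that case into the template coordinate system, which is how the multiple image sets are registered.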
Control theory and splines, applied to signature storage
NASA Technical Reports Server (NTRS)
Enqvist, Per
1994-01-01
In this report the problem we study is the interpolation of a set of points in the plane with the use of control theory. We show how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problem. In fact, we see that the important parameters are the two eigenvalues of the control matrix.
Explicit B-spline regularization in diffeomorphic image registration
Tustison, Nicholas J.; Avants, Brian B.
2013-01-01
Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140
ANALYSIS ON CENSORED QUANTILE RESIDUAL LIFE MODEL VIA SPLINE SMOOTHING.
Ma, Yanyuan; Wei, Ying
2012-01-01
We propose a general class of quantile residual life models, where a specific quantile of the residual life time, conditional on an individual having survived up to time t, is a function of certain covariates with their coefficients varying over time. The varying coefficients are assumed to be smooth unspecified functions of t. We propose to estimate the coefficient functions using spline approximation. Incorporating the spline representation directly into a set of unbiased estimating equations, we obtain a one-step estimation procedure, and we show that this leads to a uniformly consistent estimator. To obtain further computational simplification, we propose a two-step estimation approach in which we estimate the coefficients on a series of time points first, and follow this with spline smoothing. We compare the two methods in terms of their asymptotic efficiency and computational complexity. We further develop inference tools to test the significance of the covariate effect on residual life. The finite sample performance of the estimation and testing procedures is further illustrated through numerical experiments. We also apply the methods to a data set from a neurological study.
Fast space-variant elliptical filtering using box splines.
Chaudhury, Kunal Narayan; Munoz-Barrutia, Arrate; Unser, Michael
2010-09-01
The efficient realization of linear space-variant (non-convolution) filters is a challenging computational problem in image processing. In this paper, we demonstrate that it is possible to filter an image with a Gaussian-like elliptic window of varying size, elongation and orientation using a fixed number of computations per pixel. The associated algorithm, which is based upon a family of smooth compactly supported piecewise polynomials, the radially-uniform box splines, is realized using preintegration and local finite-differences. The radially-uniform box splines are constructed through the repeated convolution of a fixed number of box distributions, which have been suitably scaled and distributed radially in a uniform fashion. The attractive features of these box splines are their asymptotic behavior, their simple covariance structure, and their quasi-separability. They converge to Gaussians with the increase of their order, and are used to approximate anisotropic Gaussians of varying covariance simply by controlling the scales of the constituent box distributions. Based upon the second feature, we develop a technique for continuously controlling the size, elongation and orientation of these Gaussian-like functions. Finally, the quasi-separable structure, along with a certain scaling property of box distributions, is used to efficiently realize the associated space-variant elliptical filtering, which requires O(1) computations per pixel irrespective of the shape and size of the filter.
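The central-limit behaviour these box splines exploit is easy to demonstrate: repeated convolution of box (uniform) distributions converges to a Gaussian whose variance is the sum of the box variances (the box width and number of convolutions below are arbitrary choices, not the paper's construction):

```python
import numpy as np

# One discrete box distribution, then three more convolutions: four boxes
# in total, i.e. a sampled cubic-B-spline-like kernel.
box = np.ones(9) / 9.0
kernel = box.copy()
for _ in range(3):
    kernel = np.convolve(kernel, box)

# Compare against a Gaussian of the same (accumulated) variance.
x = np.arange(kernel.size) - kernel.size // 2
var = np.sum(kernel * x ** 2)           # variance adds with each convolution
gauss = np.exp(-x ** 2 / (2 * var))
gauss /= gauss.sum()
print(np.abs(kernel - gauss).max())     # already small after four boxes
```

Each extra convolution tightens the match, which is why higher-order box splines serve as cheap Gaussian-window surrogates.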
Fast and stable evaluation of box-splines via the BB-form
NASA Astrophysics Data System (ADS)
Kim, Minho; Peters, Jörg
2009-04-01
To repeatedly evaluate linear combinations of box-splines in a fast and stable way, in particular along knot planes, the box-spline is converted to and tabulated as piecewise polynomial in BB-form (Bernstein-Bézier-form). We show that the BB-coefficients can be derived and stored as integers plus a rational scale factor and derive a hash table for efficiently accessing the polynomial pieces. This pre-processing, the resulting evaluation algorithm and use in a widely-used ray-tracing package are illustrated for splines based on two trivariate box-splines: the seven-directional box-spline on the Cartesian lattice and the six-directional box-spline on the face-centered cubic lattice.
An improved algorithm of three B-spline curve interpolation and simulation
NASA Astrophysics Data System (ADS)
Zhang, Wanjun; Xu, Dongmei; Meng, Xinhong; Zhang, Feng
2017-03-01
As a key interpolation technique in CNC machine tools, the cubic B-spline curve interpolator has been proposed to overcome the drawbacks of linear and circular interpolators, such as longer interpolation times and cubic B-spline step errors that are not easily controlled. This paper proposes an improved algorithm for cubic B-spline curve interpolation together with its simulation. The cubic B-spline curve interpolation was implemented in MATLAB 7.0 to verify the proposed modified algorithm experimentally. The simulation results show that the algorithm is correct and meets the requirements of cubic B-spline curve interpolation.
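A SciPy analogue of degree-3 B-spline toolpath interpolation (the original work used MATLAB 7.0; the sample points below are invented): the interpolating spline passes exactly through the programmed points and can be sampled densely, as an interpolator would generate setpoints:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# A short planar point sequence standing in for programmed toolpath points.
pts = np.array([[0.0, 1.0, 2.0, 3.0, 4.0],
                [0.0, 0.8, 0.9, 0.2, -0.5]])

tck, u = splprep(pts, s=0, k=3)        # s=0 -> interpolating cubic B-spline
uu = np.linspace(0.0, 1.0, 200)
xx, yy = splev(uu, tck)                # dense setpoints along the curve

# The spline reproduces the programmed points exactly at their parameters.
xi, yi = splev(u, tck)
print(np.allclose(xi, pts[0]) and np.allclose(yi, pts[1]))
```

Step error can then be monitored by comparing successive dense setpoints against the chord the linear interpolator would have used.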
NASA Astrophysics Data System (ADS)
Souto, Nelson; Thuillier, Sandrine; Andrade-Campos, A.
2016-10-01
Nowadays, full-field measurement methods are largely used to acquire the strain field developed by heterogeneous mechanical tests. Recent material parameters identification strategies based on a single heterogeneous test have been proposed considering that an inhomogeneous strain field can lead to a more complete mechanical characterization of the sheet metals. The purpose of this work is the design of a heterogeneous test promoting an enhanced mechanical behavior characterization of thin metallic sheets, under several strain paths and strain amplitudes. To achieve this goal, a design optimization strategy finding the appropriate specimen shape of the heterogeneous test by using either B-Splines or cubic splines was developed. The influence of using approximation or interpolation curves, respectively, was investigated in order to determine the most effective approach for achieving a better shape design. The optimization process is guided by an indicator criterion which evaluates, quantitatively, the strain field information provided by the mechanical test. Moreover, the design of the heterogeneous test is based on the resemblance with the experimental reality, since a rigid tool leading to uniaxial loading path is used for applying the displacement in a similar way as universal standard testing machines. The results obtained reveal that the optimization strategy using B-Splines curve approximation led to a heterogeneous test providing larger strain field information for characterizing the mechanical behavior of sheet metals.
XRA image segmentation using regression
NASA Astrophysics Data System (ADS)
Jin, Jesse S.
1996-04-01
Segmentation is an important step in image analysis, and thresholding is one of the most important approaches. There are several difficulties in segmentation, such as automatically selecting the threshold, dealing with intensity distortion, and removing noise. We have developed an adaptive segmentation scheme by applying the Central Limit Theorem in regression. A Gaussian regression is used to separate the distribution of background from foreground in a single-peak histogram. The separation helps to automatically determine the threshold. A small 3 by 3 window is applied, and the mode of the local histogram is used to overcome noise. Thresholding is based on local weighting, where regression is used again for parameter estimation. A connectivity test is applied to the final results to remove impulse noise. We have applied the algorithm to X-ray angiogram images to extract brain arteries. The algorithm works well for single-peak distributions where there is no valley in the histogram. The regression provides a method to apply knowledge in clustering. Extending regression to multiple-level segmentation needs further investigation.
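A hedged sketch of the histogram-regression idea (synthetic intensities, not angiogram data; the 10%-of-peak mask and the 3-sigma offset are assumptions): since the log of a Gaussian is a parabola, a quadratic regression on the log-counts around the single background peak yields the background mean and spread, from which a threshold follows without any histogram valley:

```python
import numpy as np

# Synthetic single-peak image: a dominant background mode plus sparse
# bright structures (no valley between them in the histogram).
rng = np.random.default_rng(2)
background = rng.normal(60.0, 8.0, 50000)
foreground = rng.normal(140.0, 10.0, 1500)
img = np.clip(np.r_[background, foreground], 0, 255)

counts, edges = np.histogram(img, bins=256, range=(0, 256))
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > counts.max() * 0.1         # regress near the main peak only
a, b, c = np.polyfit(centers[mask], np.log(counts[mask]), 2)
mu = -b / (2 * a)                          # vertex of the fitted parabola
sigma = np.sqrt(-1.0 / (2 * a))            # curvature gives the spread
threshold = mu + 3.0 * sigma               # background cutoff
print(mu, sigma, threshold)
```

Pixels above `threshold` are labeled foreground; the paper refines this with local 3 x 3 windows and a connectivity test.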
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
A box spline calculus for the discretization of computed tomography reconstruction problems.
Entezari, Alireza; Nilchian, Masih; Unser, Michael
2012-08-01
B-splines are attractive basis functions for the continuous-domain representation of biomedical images and volumes. In this paper, we prove that the extended family of box splines are closed under the Radon transform and derive explicit formulae for their transforms. Our results are general; they cover all known brands of compactly-supported box splines (tensor-product B-splines, separable or not) in any number of dimensions. The proposed box spline approach extends to non-Cartesian lattices used for discretizing the image space. In particular, we prove that the 2-D Radon transform of an N-direction box spline is generally a (nonuniform) polynomial spline of degree N-1. The proposed framework allows for a proper discretization of a variety of tomographic reconstruction problems in a box spline basis. It is of relevance for imaging modalities such as X-ray computed tomography and cryo-electron microscopy. We provide experimental results that demonstrate the practical advantages of the box spline formulation for improving the quality and efficiency of tomographic reconstruction algorithms.
Grigorenko, Ya.M.; Kryukov, N.N.; Ivanova, Yu.I.
1995-10-01
Spline functions have come into increasingly wide use recently in the solution of boundary-value problems of the theory of elasticity of plates and shells. This development stems from the advantages offered by spline approximations compared to other methods. Among the most important advantages are the following: (1) the behavior of the spline in the neighborhood of a point has no effect on the behavior of the spline as a whole; (2) spline interpolation converges well compared to polynomial interpolation; (3) algorithms for spline construction are simple and convenient to use. The use of spline functions to solve linear two-dimensional problems on the stress-strain state of shallow shells and plates that are rectangular in plan has proven their efficiency and made it possible to expand the range of problems that can be solved. The approach proposed in these investigations is based on reducing a linear two-dimensional problem to a unidimensional problem by spline approximation in one coordinate direction, followed by solution of the unidimensional problem by the method of discrete orthogonalization in the other coordinate direction. Such an approach makes it possible to account for local and edge effects in the stress state of plates and shells and obtain reliable solutions with complex boundary conditions. In the present study, we take the above approach, employing spline functions to solve linear problems, and use it to also solve geometrically nonlinear problems of the statics of shallow shells and plates with variable parameters.
NASA Technical Reports Server (NTRS)
Stacy, J. E.
1984-01-01
Asymmetric spline surfaces appear useful for the design of high-quality general optical systems (systems without symmetries). A spline influence function defined as the actual surface resulting from a simple perturbation in the spline definition array shows that a subarea is independent of others four or more points away. Optimization methods presented in this paper are used to vary a reflective spline surface near the focal plane of a decentered Schmidt-Cassegrain to reduce rms spot radii by a factor of 3 across the field.
Regressive systemic sclerosis.
Black, C; Dieppe, P; Huskisson, T; Hart, F D
1986-01-01
Systemic sclerosis is a disease which usually progresses or reaches a plateau with persistence of symptoms and signs. Regression is extremely unusual. Four cases of established scleroderma are described in which regression is well documented. The significance of this observation and possible mechanisms of disease regression are discussed. PMID:3718012
Tharrington, Arnold N.
2015-09-09
The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
NASA Astrophysics Data System (ADS)
Bejancu, Aurelian
2006-12-01
This paper considers the problem of interpolation on a semi-plane grid from a space of box-splines on the three-direction mesh. Building on a new treatment of univariate semi-cardinal interpolation for natural cubic splines, the solution is obtained as a Lagrange series with suitable localization and polynomial reproduction properties. It is proved that the extension of the natural boundary conditions to box-spline semi-cardinal interpolation attains half of the approximation order of the cardinal case.
Linsen, L; Pascucci, V; Duchaineau, M A; Hamann, B; Joy, K I
2002-11-19
Multiresolution methods for representing data at multiple levels of detail are widely used for large-scale two- and three-dimensional data sets. We present a four-dimensional multiresolution approach for time-varying volume data. This approach supports a hierarchy with spatial and temporal scalability. The hierarchical data organization is based on 4√2 subdivision. The n√2-subdivision scheme only doubles the overall number of grid points in each subdivision step. This fact leads to fine granularity and high adaptivity, which is especially desirable in the spatial dimensions. For high-quality data approximation on each level of detail, we use quadrilinear B-spline wavelets. We present a linear B-spline wavelet lifting scheme based on n√2 subdivision to obtain narrow masks for the update rules. Narrow masks provide a basis for out-of-core data exploration techniques and view-dependent visualization of sequences of time steps.
Ehrsam, Eric; Kallini, Joseph R.; Lebas, Damien; Modiano, Philippe; Cotten, Hervé
2016-01-01
Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of host immune response. Although 10 to 35 percent of cases of cutaneous melanomas may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis in a patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418
Thin-plate spline analysis of mandibular growth.
Franchi, L; Baccetti, T; McNamara, J A
2001-04-01
The analysis of mandibular growth changes around the pubertal spurt in humans has several important implications for the diagnosis and orthopedic correction of skeletal disharmonies. The purpose of this study was to evaluate mandibular shape and size growth changes around the pubertal spurt in a longitudinal sample of subjects with normal occlusion by means of an appropriate morphometric technique (thin-plate spline analysis). Ten mandibular landmarks were identified on lateral cephalograms of 29 subjects at 6 different developmental phases. The 6 phases corresponded to 6 different maturational stages in cervical vertebrae during accelerative and decelerative phases of the pubertal growth curve of the mandible. Differences in shape between average mandibular configurations at the 6 developmental stages were visualized by means of thin-plate spline analysis and subjected to permutation test. Centroid size was used as the measure of the geometric size of each mandibular specimen. Differences in size at the 6 developmental phases were tested statistically. The results of graphical analysis indicated a statistically significant change in mandibular shape only for the growth interval from stage 3 to stage 4 in cervical vertebral maturation. Significant increases in centroid size were found at all developmental phases, with evidence of a prepubertal minimum and of a pubertal maximum. The existence of a pubertal peak in human mandibular growth, therefore, is confirmed by thin-plate spline analysis. Significant morphological changes in the mandible during the growth interval from stage 3 to stage 4 in cervical vertebral maturation may be described as an upward-forward direction of condylar growth determining an overall "shrinkage" of the mandibular configuration along the measurement of total mandibular length. This biological mechanism is particularly efficient in compensating for major increments in mandibular size at the adolescent spurt.
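Centroid size, the geometric size measure used in this study, is straightforward to compute for a landmark configuration; the landmark coordinates below are illustrative, not cephalometric data:

```python
import numpy as np

# Centroid size: square root of the summed squared distances of all
# landmarks from their centroid (here a 4 x 3 rectangle of 4 landmarks).
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
centroid = landmarks.mean(axis=0)
centroid_size = np.sqrt(((landmarks - centroid) ** 2).sum())
print(centroid_size)   # 5.0 for this configuration
```

Comparing centroid sizes across developmental stages, as the study does, separates pure size change from the shape change captured by the thin-plate spline.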
A numerical optimization approach to generate smoothing spherical splines
NASA Astrophysics Data System (ADS)
Machado, L.; Monteiro, M. Teresa T.
2017-01-01
Approximating data in curved spaces is a common procedure that is extremely required by modern applications arising, for instance, in aerospace and robotics industries. Here, we are particularly interested in finding smoothing cubic splines that best fit given data in the Euclidean sphere. To achieve this aim, a least squares optimization problem based on the minimization of a certain cost functional is formulated. To solve the problem a numerical algorithm is implemented using several routines from MATLAB toolboxes. The proposed algorithm is shown to be easy to implement, very accurate and precise for spherical data chosen randomly.
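A flat one-dimensional analogue of the smoothing fit (the paper works on the Euclidean sphere with MATLAB routines; this sketch only shows the least-squares fidelity versus roughness trade-off controlled by the smoothing parameter):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Noisy samples of a smooth curve.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0 * np.pi, 80)
noisy = np.sin(t) + rng.normal(scale=0.1, size=t.size)

# Smoothing cubic spline: s bounds the residual sum of squares, so choosing
# s ~ n * noise variance balances data fidelity against curvature.
spl = UnivariateSpline(t, noisy, k=3, s=t.size * 0.1 ** 2)
rms = np.sqrt(np.mean((spl(t) - np.sin(t)) ** 2))
print(rms)   # well below the 0.1 noise level
```

On the sphere the same cost-functional idea applies, but the spline segments must respect the curved geometry, which is what the paper's algorithm handles.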
Use of tensor product splines in magnet optimization
Davey, K.R.
1999-05-01
Variational metrics and other direct search techniques have proved useful in magnetic optimization. One technique used in magnetic optimization is to first fit the desired optimization parameter to the data. If this fit is smoothly differentiable, a number of powerful techniques become available for the optimization. The author shows the usefulness of tensor product splines in accomplishing this end. Proper choice of augmented knot placement not only makes the fit very accurate, but allows for differentiation. Thus the gradients required with direct optimization in divariate and trivariate applications are robustly generated.
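The differentiability payoff is easy to see with SciPy's tensor-product splines (the sampled objective below is a stand-in, not magnet data): once the fit is built, gradients come from analytic differentiation of the spline rather than noisy finite differences of the data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Fit a tensor-product cubic spline to a gridded stand-in objective F(x, y).
x = np.linspace(-2.0, 2.0, 41)
y = np.linspace(-2.0, 2.0, 41)
X, Y = np.meshgrid(x, y, indexing='ij')
F = X ** 2 + Y ** 2                     # known field, so gradients are checkable

spl = RectBivariateSpline(x, y, F)
gx = spl(0.5, -1.0, dx=1)[0, 0]         # dF/dx at (0.5, -1.0); exact value 1.0
gy = spl(0.5, -1.0, dy=1)[0, 0]         # dF/dy at (0.5, -1.0); exact value -2.0
print(gx, gy)
```

These spline-supplied gradients are what a direct optimizer would consume at each iterate; the trivariate case works the same way with a 3-D tensor product.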
An Executive System for Modeling with Rational B-Splines
1989-05-01
capabilities; and isophote line calculation. Curve modules interfaced include entering and editing points in the parametric space of a B-spline surface...
Achieving high data reduction with integral cubic B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.
1993-01-01
During geometry processing, tangent directions at the data points are frequently readily available from the computation process that generates the points. It is desirable to utilize this information to improve the accuracy of curve fitting and to improve data reduction. This paper presents a curve fitting method which utilizes both position and tangent direction data. This method produces G^1 non-rational B-spline curves. From the examples, the method demonstrates very good data reduction rates while maintaining high accuracy in both position and tangent direction.
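A sketch of the underlying idea using a Hermite-type fit (SciPy's CubicHermiteSpline, not the paper's G^1 non-rational B-spline method): supplying tangents alongside positions gives high accuracy from very few samples, which is where the data reduction comes from:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Only 6 samples of sin(x), but each carries its tangent (cos(x)).
x = np.linspace(0.0, np.pi, 6)
spline = CubicHermiteSpline(x, np.sin(x), np.cos(x))

dense = np.linspace(0.0, np.pi, 1001)
err = np.max(np.abs(spline(dense) - np.sin(dense)))
print(err)   # sub-millimeter-scale error from a handful of points
```

Matching both position and tangent at the retained points is also what keeps the fitted curve's tangent directions accurate, as the paper requires.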
n-dimensional non uniform rational b-splines for metamodeling
Turner, Cameron J; Crawford, Richard H
2008-01-01
Non Uniform Rational B-splines (NURBs) have unique properties that make them attractive for engineering metamodeling applications. NURBs are known to accurately model many different continuous curve and surface topologies in 1- and 2-variate spaces. However, engineering metamodels of the design space often require hypervariate representations of multidimensional outputs. In essence, design space metamodels are hyperdimensional constructs with a dimensionality determined by their input and output variables. To use NURBs as the basis for a metamodel in a hyperdimensional space, traditional geometric fitting techniques must be adapted to hypervariate and hyperdimensional spaces composed of both continuous and discontinuous variable types. In this paper, we describe the necessary adaptations for the development of a NURBs-based metamodel called a Hyperdimensional Performance Model or HyPerModel. HyPerModels are capable of accurately and reliably modeling nonlinear hyperdimensional objects defined by both continuous and discontinuous variables of a wide variety of topologies, such as those that define typical engineering design spaces. We demonstrate this ability by successfully generating accurate HyPerModels of 10 trial functions, laying the foundation for future work with N-dimensional NURBs in design space applications.
A few remarks on recurrence relations for geometrically continuous piecewise Chebyshevian B-splines
NASA Astrophysics Data System (ADS)
Mazure, Marie-Laurence
2009-08-01
This work complements a recent article (Mazure, J. Comp. Appl. Math. 219(2):457-470, 2008) in which we showed that T. Lyche's recurrence relations for Chebyshevian B-splines (Lyche, Constr. Approx. 1:155-178, 1985) emerge naturally from blossoms and their properties via de Boor-type algorithms. Based on Chebyshevian divided differences, T. Lyche's approach concerned splines with all sections in the same Chebyshev space and with ordinary connections at the knots. Here, we consider geometrically continuous piecewise Chebyshevian splines, namely splines with sections in different Chebyshev spaces and with geometric connections at the knots. In this general framework, we proved in (Mazure, Constr. Approx. 20:603-624, 2004) that the existence of B-spline bases cannot be separated from the existence of blossoms. The present paper further demonstrates the power of blossoms, in which not only B-splines but also their recurrence relations are inherent. We compare this result with the work of G. Mühlbach and Y. Tang (Mühlbach and Tang, Num. Alg. 41:35-78, 2006), who obtained the same recurrence relations via generalised Chebyshevian divided differences, but only under a total positivity assumption on the connection matrices. We illustrate this comparison with splines with four-dimensional sections. The general situation addressed here also highlights the differences in behaviour between B-splines and the functions with smaller and smaller supports involved in the recurrence relations.
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
Entezari, Alireza; Möller, Torsten
2006-01-01
In this article, we propose a box spline and its variants for reconstructing volumetric data sampled on the Cartesian lattice. In particular, we present a tri-variate box spline reconstruction kernel that is superior to tensor product reconstruction schemes in terms of recovering the proper Cartesian spectrum of the underlying function. This box spline produces a C2 reconstruction that can be considered a three-dimensional extension of the well-known Zwart-Powell element in 2D. While its smoothness and approximation power are equivalent to those of the tri-cubic B-spline, we illustrate the superiority of this reconstruction on functions sampled on the Cartesian lattice and contrast it to tensor product B-splines. Our construction is validated through a Fourier domain analysis of the reconstruction behavior of this box spline. Moreover, we present a stable method for evaluation of this box spline by means of a decomposition. Through a convolution, this decomposition reduces the problem to evaluation of a four directional box spline that we previously published in its explicit closed form.
NASA Technical Reports Server (NTRS)
Karray, Fakhreddine; Dwyer, Thomas A. W., III
1990-01-01
A bilinear model of the vibrational dynamics of a deformable maneuvering body is described. Estimates of the deformation state are generated through a low dimensional operator spline interpolator of bilinear systems combined with a feedback linearized based observer. Upper bounds on error estimates are also generated through the operator spline, and potential application to shaping control purposes is highlighted.
B-LUT: Fast and low memory B-spline image interpolation.
Sarrut, David; Vandemeulebroucke, Jef
2010-08-01
We propose a fast alternative to B-splines in image processing based on an approximate calculation using precomputed B-spline weights. During B-spline indirect transformation, these weights are efficiently retrieved in a nearest-neighbor fashion from a look-up table, greatly reducing overall computation time. Depending on the application, calculating a B-spline using a look-up table, called B-LUT, will result in an exact or approximate B-spline calculation. In the latter case, the obtained accuracy can be controlled by the user. The method is applicable to a wide range of B-spline applications and has very low memory requirements compared to other proposed accelerations. The performance of the proposed B-LUTs was compared to conventional B-splines as implemented in the popular ITK toolkit for the general case of image intensity interpolation. Experiments illustrated that highly accurate B-spline approximation can be obtained while computation time is reduced by a factor of 5-6. The B-LUT source code, compatible with the ITK toolkit, has been made freely available to the community.
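The look-up-table idea can be sketched in a few lines: precompute the four cubic B-spline kernel weights for a grid of fractional offsets, then fetch the nearest precomputed row instead of evaluating the kernel per sample. The following numpy sketch is illustrative only, not the ITK-compatible implementation the authors released; the function names and the choice of 1024 bins are assumptions, border samples are clamped, and it evaluates a given coefficient array directly (a full B-spline interpolator would first prefilter the samples into coefficients).

```python
import numpy as np

def bspline3(x):
    """Cubic B-spline kernel beta^3(x)."""
    x = np.abs(x)
    return np.where(x < 1, 2/3 - x**2 + x**3/2,
           np.where(x < 2, (2 - x)**3 / 6, 0.0))

def build_lut(n_bins=1024):
    """Precompute the 4 kernel weights for n_bins fractional offsets in [0, 1)."""
    frac = np.arange(n_bins) / n_bins
    # the four support samples sit at integer offsets -1, 0, 1, 2 around floor(t)
    return np.stack([bspline3(frac - k) for k in (-1, 0, 1, 2)], axis=1)

def interp_lut(coeffs, t, lut):
    """Approximate cubic B-spline evaluation of `coeffs` at positions t."""
    i = np.floor(t).astype(int)
    bins = ((t - i) * len(lut)).astype(int)   # nearest precomputed offset
    w = lut[bins]                             # shape (len(t), 4)
    idx = np.clip(i[:, None] + np.arange(-1, 3), 0, len(coeffs) - 1)
    return (coeffs[idx] * w).sum(axis=1)
```

Because the cubic B-spline kernel sums to one over its four support samples, the sketch reproduces constants exactly, which is a quick sanity check on the table.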
Reconstruction of irregularly-sampled volumetric data in efficient box spline spaces.
Xu, Xie; Alvarado, Alexander Singh; Entezari, Alireza
2012-07-01
We present a variational framework for the reconstruction of irregularly-sampled volumetric data in, nontensor-product, spline spaces. Motivated by the sampling-theoretic advantages of body centered cubic (BCC) lattice, this paper examines the BCC lattice and its associated box spline spaces in a variational setting. We introduce a regularization scheme for box splines that allows us to utilize the BCC lattice in a variational reconstruction framework. We demonstrate that by choosing the BCC lattice over the commonly-used Cartesian lattice, as the shift-invariant representation, one can increase the quality of signal reconstruction. Moreover, the computational cost of the reconstruction process is reduced in the BCC framework due to the smaller bandwidth of the system matrix in the box spline space compared to the corresponding tensor-product B-spline space. The improvements in accuracy are quantified numerically and visualized in our experiments with synthetic as well as real biomedical datasets.
An analytic reconstruction method for PET based on cubic splines
NASA Astrophysics Data System (ADS)
Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.
2014-03-01
PET imaging is an important nuclear medicine modality that measures the in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D, reconstruction method called SRT, Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction to object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
TWO-LEVEL TIME MARCHING SCHEME USING SPLINES FOR SOLVING THE ADVECTION EQUATION. (R826371C004)
A new numerical algorithm using quintic splines is developed and analyzed: quintic spline Taylor-series expansion (QSTSE). QSTSE is an Eulerian flux-based scheme that uses quintic splines to compute space derivatives and Taylor series expansion to march in time. The new scheme...
Gerber, Samuel; Rubel, Oliver; Bremer, Peer -Timo; Pascucci, Valerio; Whitaker, Ross T.
2012-01-19
This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.
Improved Regression Calibration
ERIC Educational Resources Information Center
Skrondal, Anders; Kuha, Jouni
2012-01-01
The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…
Schmid, Matthias; Wickler, Florian; Maloney, Kelly O.; Mitchell, Richard; Fenske, Nora; Mayr, Andreas
2013-01-01
Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established, yet unstable, approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures. PMID:23626706
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
Restoration of noisy blurred images by a smoothing spline filter.
Peyrovian, M J; Sawchuk, A A
1977-12-01
For the restoration of noisy blurred images, a controllable smoothing criterion based on the locally variable statistics and minimization of the second derivative is defined, and the corresponding filter, applicable to both space-variant and space-invariant degradations, is obtained. The output of this filter is a cubic spline function. The parameters of the filter determine the local smoothing window and over-all extent of smoothing, and thus the tradeoff between resolution and smoothing is controllable in a spatially nonstationary manner. The interesting properties of this filter have made it capable of restoring signal-dependent noisy images, and it has been successfully applied for filtering images degraded by film-grain noise. Since the matrices of this filter are banded circulant or Toeplitz, efficient algorithms are used for matrix manipulations.
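The core of such a filter, balancing fidelity to the data against a penalty on the second derivative, can be illustrated with a discrete Whittaker-style smoother. This is a simplified sketch with a single global smoothing parameter `lam`, not the paper's spatially adaptive filter with locally variable statistics; the function name is hypothetical.

```python
import numpy as np

def smooth_spline_filter(y, lam=5.0):
    """Whittaker-style smoother: minimize ||f - y||^2 + lam * ||D2 f||^2."""
    n = len(y)
    D = np.zeros((n - 2, n))            # second-difference operator
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

With `lam = 0` the data pass through unchanged; increasing `lam` trades resolution for smoothing, which is the tradeoff the abstract describes, here controlled globally rather than per-window.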
Multiquadric Spline-Based Interactive Segmentation of Vascular Networks
Meena, Sachin; Surya Prasath, V. B.; Kassim, Yasmin M.; Maude, Richard J.; Glinskii, Olga V.; Glinsky, Vladislav V.; Huxley, Virginia H.; Palaniappan, Kannappan
2016-01-01
Commonly used drawing tools for interactive image segmentation and labeling include active contours or boundaries, scribbles, rectangles and other shapes. Thin vessel shapes in images of vascular networks are difficult to segment using automatic or interactive methods. This paper introduces the novel use of a sparse set of user-defined seed points (supervised labels) for precisely, quickly and robustly segmenting complex biomedical images. A multiquadric spline-based binary classifier is proposed as a unique approach for interactive segmentation using as features color values and the location of seed points. Epifluorescence imagery of the dura mater microvasculature is difficult to segment for quantitative applications due to challenging tissue preparation, imaging conditions, and thin, faint structures. Experimental results based on twenty epifluorescence images are used to illustrate the benefits of using a set of seed points to obtain fast and accurate interactive segmentation compared to four interactive and automatic segmentation approaches. PMID:28227856
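The classifier construction, an RBF interpolant through labeled seed points whose sign gives the segmentation, can be sketched as follows. This is a minimal sketch under simplifying assumptions: the features here are just 2D seed locations (the paper also uses color values), the kernel parameter `c` is a placeholder, and the function names are hypothetical.

```python
import numpy as np

def multiquadric(r, c=1.0):
    """Multiquadric radial basis function."""
    return np.sqrt(r**2 + c**2)

def fit_mq(X, labels, c=1.0):
    """Solve for RBF amplitudes so the interpolant hits the +/-1 seed labels."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(multiquadric(r, c), labels)

def predict_mq(Xtrain, coeffs, Xnew, c=1.0):
    """Classify new points by the sign of the interpolant."""
    r = np.linalg.norm(Xnew[:, None, :] - Xtrain[None, :, :], axis=-1)
    return np.sign(multiquadric(r, c) @ coeffs)
```

Because the multiquadric interpolation matrix is nonsingular for distinct points, the interpolant passes through the seed labels exactly, so seed pixels are always classified consistently with the user's input.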
Biharmonic spline interpolation of GEOS-3 and Seasat altimeter data
NASA Technical Reports Server (NTRS)
Sandwell, David T.
1987-01-01
An algorithm is presented for determining the minimum curvature surface passing through a set of nonuniformly spaced data points. The surface is generated as a linear combination of Green functions for the biharmonic operator at each data point, with the amplitudes of the functions adjusted so that the interpolating surface passes through each point. The function passing through the points can be regarded as a spline to which point forces are applied, defining the minimum curvature between the points. The technique was used to combine the along track slopes of the GEOS-3 and Seasat altimeter data into a consistent geoid height map of the Caribbean area, covering 0.5 million data points in the process. Sample images are provided and new topographic features that are revealed are discussed.
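The method reduces to solving a dense linear system for the Green-function amplitudes. A minimal 2D sketch, using the biharmonic Green function g(r) = r^2 (ln r - 1); the function names are placeholders, and real altimeter applications would need the tension and slope-constraint variants the paper develops.

```python
import numpy as np

def green_biharmonic(r):
    """Green function of the 2-D biharmonic operator, g(r) = r^2 (ln r - 1)."""
    with np.errstate(divide='ignore', invalid='ignore'):
        g = r**2 * (np.log(r) - 1.0)
    return np.where(r == 0.0, 0.0, g)   # g -> 0 as r -> 0

def biharmonic_fit(pts, vals):
    """Amplitudes w so the combination of Green functions hits every data value."""
    r = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    return np.linalg.solve(green_biharmonic(r), vals)

def biharmonic_eval(pts, w, query):
    """Evaluate the minimum-curvature surface at query points."""
    r = np.linalg.norm(query[:, None] - pts[None, :], axis=-1)
    return green_biharmonic(r) @ w
```

The dense solve is O(n^3), which is why the paper's 0.5-million-point application relies on processing the data in overlapping regional blocks rather than one global system.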
George: Gaussian Process regression
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel
2015-11-01
George is a fast and flexible library, implemented in C++ with Python bindings, for Gaussian Process regression useful for accounting for correlated noise in astronomical datasets, including those for transiting exoplanet discovery and characterization and stellar population modeling.
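Behind George's fast solvers is the standard Gaussian Process regression algebra: the posterior mean and variance under a covariance kernel. A plain numpy sketch with a unit-amplitude squared-exponential kernel, not George's actual API; the function names and the 1-D restriction are assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    """Squared-exponential covariance between 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

def gp_predict(x, y, xstar, noise=1e-6, length=1.0):
    """Posterior mean and variance of a zero-mean GP at test inputs xstar."""
    K = rbf_kernel(x, x, length) + noise * np.eye(len(x))
    Ks = rbf_kernel(xstar, x, length)
    mean = Ks @ np.linalg.solve(K, y)
    # prior variance is 1 for this kernel; subtract the explained part
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

The `noise` term on the diagonal is exactly where correlated-noise models enter in practice: replacing the white-noise matrix with a structured covariance is the modeling step George accelerates for large astronomical datasets.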
Understanding poisson regression.
Hayat, Matthew J; Higgins, Melinda
2014-04-01
Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes.
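The log-linear Poisson model described above can be fit by Newton-Raphson (equivalently, iteratively reweighted least squares), exploiting the fact that the Poisson variance equals the mean. A minimal numpy sketch assuming a full-rank design matrix with an intercept column; production analyses would use a statistics package and check for overdispersion as the article advises.

```python
import numpy as np

def poisson_regression(X, y, iters=25):
    """Fit a log-linear Poisson model by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)           # mean = variance under Poisson
        grad = X.T @ (y - mu)           # score vector
        H = X.T @ (X * mu[:, None])     # Fisher information
        beta += np.linalg.solve(H, grad)
    return beta
```

If the residual deviance greatly exceeds the degrees of freedom, the equal mean-variance assumption is violated, and the overdispersion or negative binomial alternatives mentioned above are more appropriate.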
Spline Histogram Method for Reconstruction of Probability Density Functions of Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Docenko, Dmitrijs; Berzins, Karlis
We describe the spline histogram algorithm, which is useful for visualizing the probability density function when setting up a statistical hypothesis for a test. The spline histogram is constructed from discrete data measurements using tensioned cubic spline interpolation of the cumulative distribution function, which is then differentiated and smoothed using the Savitzky-Golay filter. The optimal width of the filter is determined by minimization of the Integrated Square Error function. The current distribution of the TCSplin algorithm, written in f77 with IDL and Gnuplot visualization scripts, is available from www.virac.lv/en/soft.html.
Higher-order numerical solutions using cubic splines. [for partial differential equations
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1975-01-01
A cubic spline collocation procedure has recently been developed for the numerical solution of partial differential equations. In the present paper, this spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a non-uniform mesh and overall fourth-order accuracy for a uniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, will be presented for several model problems.
NASA Astrophysics Data System (ADS)
Curà, Francesca; Mura, Andrea
2013-11-01
Tooth stiffness is a very important parameter in studying both static and dynamic behaviour of spline couplings and gears. Many works concerning tooth stiffness calculation are available in the literature, but experimental results are very rare, especially for spline couplings. In this work, experimental values of spline coupling tooth stiffness have been obtained by means of a special hexapod measuring device. Experimental results have been compared with the corresponding theoretical and numerical ones. The effect of angular misalignments between hub and shaft has also been investigated in the experimental planning.
Comparison of cubic B-spline and Zernike-fitting techniques in complex wavefront reconstruction
Ares, M.; Royo, S.
2006-09-20
We analyze an alternative to classical Zernike fitting based on the cubic B-spline model, and compare the strengths and weaknesses of each representation over a set of different wave fronts that cover a wide range of shape complexity. The results obtained show that a Zernike low-degree polynomial expansion or a cubic B-spline with a low number of breakpoints are the best choices for fitting simple wave fronts, whereas the cubic B-spline approach performs much better when more complex wave fronts are involved. The effect of noise level on the fit quality for the different wave fronts is also studied.
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.; Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.
2008-07-14
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, simultaneously predicts the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting.
NASA Technical Reports Server (NTRS)
Kirkpatrick, J. C.
1976-01-01
A tabulation of selected altitude-correlated values of pressure, density, speed of sound, and coefficient of viscosity for each of six models of the atmosphere is presented in block data format. Interpolation for the desired atmospheric parameters is performed by using cubic spline functions. The recursive relations necessary to compute the cubic spline function coefficients are derived and implemented in subroutine form. Three companion subprograms, which form the preprocessor and processor, are also presented. These subprograms, together with the data element, compose the spline fit atmosphere package. Detailed FLOWGM flow charts and FORTRAN listings of the atmosphere package are presented in the appendix.
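The recursive relations mentioned above reduce, for a natural cubic spline, to a tridiagonal system in the knot second derivatives. The following numpy sketch (not the FORTRAN subroutines of the report; a dense solve stands in for the tridiagonal recursion, and the function names are hypothetical) computes those coefficients and evaluates the piecewise cubic.

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Knot second derivatives M of the natural cubic spline through (x, y)."""
    n, h = len(x), np.diff(x)
    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0            # natural ends: M[0] = M[-1] = 0
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] = [h[i - 1], 2 * (h[i - 1] + h[i]), h[i]]
        b[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    return np.linalg.solve(A, b)

def spline_eval(x, y, M, t):
    """Evaluate the piecewise cubic at points t."""
    i = np.clip(np.searchsorted(x, t) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    u, v = x[i + 1] - t, t - x[i]
    return (M[i] * u**3 + M[i + 1] * v**3) / (6 * h) \
        + (y[i] / h - M[i] * h / 6) * u + (y[i + 1] / h - M[i + 1] * h / 6) * v
```

Interpolating atmospheric tables this way gives continuous first and second derivatives in altitude, which is the advantage over linear table lookup for trajectory work.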
Bicubic B-spline interpolation method for two-dimensional Laplace's equations
NASA Astrophysics Data System (ADS)
Abd Hamid, Nur Nadiah; Majid, Ahmad Abd.; Ismail, Ahmad Izani Md.
2013-04-01
Two-dimensional Laplace's equation is solved using the bicubic B-spline interpolation method. An arbitrary surface with unknown coefficients is generated using the bicubic B-spline surface formula. This surface is presumed to be the solution of the equation. The values of the coefficients are calculated by the spline interpolation technique using the corresponding differential equations and boundary conditions. This method produces an approximated analytical solution for the equation. A numerical example is presented along with a comparison of the results with finite element and isogeometric methods.
Revisiting Regression in Autism: Heller's "Dementia Infantilis"
ERIC Educational Resources Information Center
Westphal, Alexander; Schelinski, Stefanie; Volkmar, Fred; Pelphrey, Kevin
2013-01-01
Theodor Heller first described a severe regression of adaptive function in normally developing children, something he termed dementia infantilis, over 100 years ago. Dementia infantilis is most closely related to the modern diagnosis, childhood disintegrative disorder. We translate Heller's paper, Uber Dementia Infantilis, and discuss…
Bias associated with using the estimated propensity score as a regression covariate.
Hade, Erinn M; Lu, Bo
2014-01-15
The use of propensity score methods to adjust for selection bias in observational studies has become increasingly popular in public health and medical research. A substantial portion of studies using propensity score adjustment treat the propensity score as a conventional regression predictor. Through a Monte Carlo simulation study, Austin and colleagues investigated the bias associated with treatment effect estimation when the propensity score is used as a covariate in nonlinear regression models, such as logistic regression and Cox proportional hazards models. We show that the bias exists even in a linear regression model when the estimated propensity score is used, and we derive the explicit form of the bias. We also conduct an extensive simulation study to compare the performance of such covariate adjustment with propensity score stratification, propensity score matching, the inverse probability of treatment weighting method, and nonparametric functional estimation using splines. The simulation scenarios are designed to reflect real data analysis practice. Instead of specifying a known parametric propensity score model, we generate the data by considering various degrees of overlap of the covariate distributions between treated and control groups. Propensity score matching excels when the treated group is contained within a larger control pool, while the model-based adjustment may have an edge when treated and control groups do not have too much overlap. Overall, adjusting for the propensity score through stratification or matching followed by regression, or using splines, appears to be a good practical strategy.
Kanai, Takayuki; Kadoya, Noriyuki; Ito, Kengo; Onozato, Yusuke; Cho, Sang Yong; Kishi, Kazuma; Dobashi, Suguru; Umezawa, Rei; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi
2014-11-01
Deformable image registration (DIR) is a fundamental technique for adaptive radiotherapy and image-guided radiotherapy. However, further improvement of DIR is still needed. We evaluated the accuracy of B-spline transformation-based DIR implemented in elastix. This registration package is largely based on the Insight Segmentation and Registration Toolkit (ITK), and several new functions were implemented to achieve high DIR accuracy. The purpose of this study was to clarify whether the new functions implemented in elastix are useful for improving DIR accuracy. Thoracic 4D computed tomography images of ten patients with esophageal or lung cancer were studied. Datasets for these patients were provided by DIR-lab (dir-lab.com) and included a coordinate list of anatomical landmarks that had been manually identified. DIR between peak-inhale and peak-exhale images was performed with four types of parameter settings. The first represents the original ITK implementation (Parameter 1). The second employs the new functions of elastix (Parameter 2), and the third was created to verify whether the new functions improve DIR accuracy while keeping the computational time unchanged (Parameter 3). The last partially employs a new function (Parameter 4). Registration errors for these parameter settings were calculated using the manually determined landmark pairs. 3D registration errors with standard deviation over all cases were 1.78 (1.57), 1.28 (1.10), 1.44 (1.09) and 1.36 (1.35) mm for Parameters 1, 2, 3 and 4, respectively, indicating that the new functions are useful for improving DIR accuracy, even while maintaining the computational time, and this B-spline-based DIR could be used clinically to achieve high-accuracy adaptive radiotherapy.
Heterogeneous modeling of medical image data using B-spline functions.
Grove, Olya; Rajab, Khairan; Piegl, Les A.
2012-10-01
Biomedical data visualization and modeling rely predominantly on manual processing and on voxel- and facet-based homogeneous models. Biological structures are naturally heterogeneous, and it is important to incorporate properties such as material composition, size and shape into the modeling process. A method to approximate image density data with a continuous B-spline surface is presented. The proposed approach generates a density point cloud based on medical image data, reproducing the heterogeneity across the image through point densities. The density point cloud is ordered and approximated with a set of B-spline curves. A B-spline surface is lofted through the cross-sectional B-spline curves, preserving the heterogeneity of the point cloud dataset. Preliminary results indicate that the proposed methodology produces a mathematical representation capable of capturing and preserving density variations with high fidelity.
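The paper's surface construction is not reproducible from the abstract, but the machinery underneath it is standard: B-spline basis functions via the Cox-de Boor recursion. A minimal NumPy sketch (knot vector and degree chosen for illustration):

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: the i-th B-spline of degree k over knot
    vector t, evaluated at x (scalar or array)."""
    x = np.asarray(x, dtype=float)
    if k == 0:
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    left = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] > t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, t, x))
    return left + right

# Clamped cubic knot vector on [0, 1]: the degree-3 basis functions
# are nonnegative and sum to one (partition of unity).
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)
xs = np.linspace(0.0, 0.999, 50)
n_basis = len(knots) - 3 - 1  # = 5
B = np.array([bspline_basis(i, 3, knots, xs) for i in range(n_basis)])
print(np.allclose(B.sum(axis=0), 1.0))
```

A curve or lofted surface is then a linear combination of these basis functions with control points as coefficients.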
Quiet Clean Short-haul Experimental Engine (QCSEE). Ball spline pitch change mechanism design report
NASA Technical Reports Server (NTRS)
1978-01-01
Detailed design parameters are presented for a variable-pitch change mechanism. The mechanism is a mechanical system containing a ball screw/spline driving two counteracting master bevel gears that mesh with pinion gears attached to each of the 18 fan blades.
A B-spline method used to calculate added resistance in waves
NASA Astrophysics Data System (ADS)
Zangeneh, Razieh; Ghiasi, Mahmood
2017-03-01
Accurate computation of added resistance in sea waves is of high interest because of its economic effects on ship design and operation. In this paper, a B-spline based method is developed for the computation of added resistance. Under the potential flow assumption, the velocity potential is computed using Green's formula. The Kochin function is applied to compute added resistance using Maruo's far-field method. The body surface is described by a B-spline curve, and the potentials and their normal derivatives are likewise expressed in terms of B-spline basis functions and their derivatives. A collocation approach is applied for the numerical computation, and the integral equations are evaluated by Gauss-Legendre quadrature. Computations are performed for a spheroid and different hull forms, and the results are validated by comparison with experiments. All results obtained with the present method show good agreement with experimental results.
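The Gauss-Legendre quadrature used to evaluate the integral equations is a standard rule available directly in NumPy; a small sketch with a toy integrand (not the paper's integral equations):

```python
import numpy as np

def gauss_legendre_integrate(f, a, b, n=8):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule:
    nodes/weights on [-1, 1] mapped affinely to the target interval."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# An 8-point rule integrates sin over [0, pi] to near machine precision.
val = gauss_legendre_integrate(np.sin, 0.0, np.pi)
print(val)  # ~2.0
```

An n-point rule is exact for polynomials up to degree 2n-1, which is why low point counts suffice for smooth kernels.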
Conformal interpolating algorithm based on B-spline for aspheric ultra-precision machining
NASA Astrophysics Data System (ADS)
Li, Chenggui; Sun, Dan; Wang, Min
2006-02-01
Numerical control machining and on-line compensation are key techniques for ultra-precision machining of aspheric surfaces. In this paper, a conformal cubic B-spline interpolating curve is first applied to fit the characteristic curve of an aspheric surface. The algorithm and its process are also proposed and simulated in Matlab 7.0. To evaluate the performance of the conformal B-spline interpolation, a comparison was made with linear and circular interpolation. The results verify that this method ensures the smoothness of the interpolating spline curve and preserves the original shape characteristics. The surface quality obtained by B-spline interpolation is higher than that obtained by linear or circular-arc interpolation. The algorithm thus helps to increase the surface form precision of the workpiece during ultra-precision machining.
NASA Astrophysics Data System (ADS)
Mautz, R.; Ping, J.; Heki, K.; Schaffrin, B.; Shum, C.; Potts, L.
2005-05-01
Wavelet expansion has been demonstrated to be suitable for the representation of spatial functions. Here we propose so-called B-spline wavelets to represent spatial time-series of GPS-derived global ionosphere maps (GIMs) of the vertical total electron content (TEC) from the Earth's surface to the mean altitude of the GPS satellites, over Japan. The scalar-valued B-spline wavelets can be defined in a two-dimensional, but not necessarily planar, domain. Generated by a sequence of knots, B-splines of different orders can be implemented: order 1 yields the Haar wavelet; order 2, the linear B-spline wavelet; and order 4, the cubic B-spline wavelet. A non-uniform version of these wavelets allows us to handle data on a bounded domain without any edge effects. B-splines are easily extended with great computational efficiency to domains of arbitrary dimension, while preserving their properties. This generalization employs tensor products of B-splines, defined as linear superpositions of products of univariate B-splines in different directions. The data and model may be identical at the locations of the data points if the number of wavelet coefficients is equal to the number of grid points. In addition, data compression is made efficient by eliminating the wavelet coefficients with negligible magnitudes, thereby reducing the observational noise. We applied the developed methodology to the representation of the spatial and temporal variations of GIM from an extremely dense GPS network, the GPS Earth Observation Network (GEONET) in Japan. Since the sampling of the TEC is registered regularly in time, we use a two-dimensional B-spline wavelet representation in space and a one-dimensional spline interpolation in time. Over the Japan region, the B-spline wavelet method can overcome the problem of bias for the spherical harmonic model at the boundary, caused by the non-compact support. The hierarchical decomposition not only allows an inexpensive calculation, but also
[Understanding logistic regression].
El Sanharawi, M; Naudet, F
2013-10-01
Logistic regression is one of the most common multivariate analysis models used in epidemiology. It measures the association between the occurrence of an event (the qualitative dependent variable) and factors liable to influence it (the explanatory variables). The choice of explanatory variables to include in the logistic regression model is based on prior knowledge of the disease pathophysiology and on the statistical association between each variable and the event, as measured by the odds ratio. The main steps of the procedure, its conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables to include and retain in the regression model, in order to avoid omitting important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.
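The odds ratio the abstract refers to can be computed directly from a 2x2 exposure-by-event table; the counts below are hypothetical, and the Woolf standard error gives an approximate confidence interval:

```python
import numpy as np

# Hypothetical 2x2 table: rows = exposed/unexposed, cols = event/no event.
table = np.array([[30, 70],    # exposed:   30 events, 70 non-events
                  [10, 90]])   # unexposed: 10 events, 90 non-events
a, b = table[0]
c, d = table[1]

odds_ratio = (a * d) / (b * c)               # cross-product ratio
log_or_se = np.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of log(OR)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * log_or_se)
print(round(odds_ratio, 3), np.round(ci, 3))
```

In a fitted logistic model, exp(coefficient) for a binary exposure estimates this same quantity while adjusting for the other explanatory variables.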
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphics, computer-aided design, computer-aided manufacturing, and computer numerical control. Recently, demand has arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is identifying the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. In the second, the knots are optimized, for both location and continuity level, by employing a non-linear least squares technique. The B-spline function is then obtained by solving an ordinary least squares problem. The performance of the proposed method is validated on various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. The paper also benchmarks the proposed method against existing methods in the literature. The proposed method is shown to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the method can be applied to fitting any type of curve, ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
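The first step (bisecting splits under an allowable error) can be sketched as follows. This is a guess at the flavor of the method, not the authors' algorithm: each segment is tested against the chord joining its endpoints, and segments that deviate too much are split at their midpoint, yielding coarse knots that cluster near corners and cusps:

```python
import numpy as np

def coarse_knots(x, y, tol):
    """Recursively bisect the data until each segment stays within `tol`
    of the chord joining its endpoints; return splits as coarse knots."""
    def max_chord_error(i, j):
        xs, ys = x[i:j + 1], y[i:j + 1]
        t = (xs - xs[0]) / (xs[-1] - xs[0])
        chord = ys[0] + t * (ys[-1] - ys[0])
        return np.max(np.abs(ys - chord))

    knots = []
    def split(i, j):
        if j - i < 2 or max_chord_error(i, j) <= tol:
            return
        m = (i + j) // 2
        knots.append(x[m])
        split(i, m)
        split(m, j)

    split(0, len(x) - 1)
    return np.array(sorted(knots))

x = np.linspace(0, 1, 201)
y = np.abs(x - 0.3)              # a curve with a corner at x = 0.3
k = coarse_knots(x, y, tol=1e-3)
print(len(k), k.min(), k.max())
```

Linear stretches are never split (their chord error is zero), so the knot budget concentrates where the curve actually needs it; step two would then relocate these knots by non-linear least squares.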
Solid T-spline Construction from Boundary Representations for Genus-Zero Geometry
2011-11-14
The approximation error is kept below a pre-defined threshold. A T-mesh is then obtained by pillowing the subdivision result with one layer on the boundary, and its quality is improved. Templates are implemented to handle remaining cases. Using NURBS (Non-Uniform Rational B-Spline) surfaces or T-splines as a basis for analysis requires models with a volumetric parameterization.
Solution of three-dimensional flow problems using a flux-spline method
NASA Technical Reports Server (NTRS)
Karki, K.; Mongia, H.; Patankar, S.
1989-01-01
This paper reports the application of a flux-spline scheme to three-dimensional fluid flow problems. The performance of this scheme is contrasted with that of the power-law differencing scheme. The numerical results are compared with reference solutions available in the literature. For the problems considered in this study, the flux-spline scheme is significantly more accurate than the power-law scheme.
Bicubic B-spline interpolation method for two-dimensional heat equation
NASA Astrophysics Data System (ADS)
Hamid, Nur Nadiah Abd.; Majid, Ahmad Abd.; Ismail, Ahmad Izani Md.
2015-10-01
The two-dimensional heat equation was solved using a bicubic B-spline interpolation method. An arbitrary surface equation was generated by the bicubic B-spline equation and incorporated into the heat equation after discretizing time with the finite difference method. An under-determined system of linear equations was obtained and solved to produce an approximate analytical solution for the problem. The method was tested on one example.
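For reference, the finite-difference time discretization this method builds on looks like the following. Note this is the standard explicit 5-point scheme in both space and time, not the paper's bicubic B-spline spatial treatment; grid size, source, and step count are illustrative:

```python
import numpy as np

def heat_step(u, alpha, dt, h):
    """One explicit step of u_t = alpha*(u_xx + u_yy) on a square grid
    with zero Dirichlet boundaries (5-point Laplacian)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2
    v = u + dt * alpha * lap
    v[0, :] = v[-1, :] = v[:, 0] = v[:, -1] = 0.0   # boundary condition
    return v

n, alpha = 51, 1.0
h = 1.0 / (n - 1)
dt = 0.2 * h**2 / alpha         # satisfies the 2-D stability bound dt <= h^2/(4*alpha)
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0         # point heat source at the center
for _ in range(200):
    u = heat_step(u, alpha, dt, h)
print(u.max(), u.sum())
```

Because dt respects the stability bound, the solution stays nonnegative and the peak decays as the heat spreads toward the absorbing boundary.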
Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G
2011-06-28
We quantified the potential increase in accuracy of expected breeding values for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age in the Nelore Brazil Program. For the random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II) that included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included contemporary group as a fixed effect and age of dam as a linear and quadratic covariate. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were used to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for the maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in the random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and maternal genetic and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of
Practical Session: Logistic Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
An exercise is proposed to illustrate logistic regression, investigating the different risk factors for the occurrence of coronary heart disease. It was proposed in Chapter 5 of the book by D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science+Business Media, LLC (2010), and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). The example is based on data given in the file evans.txt from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.
Variational B-spline level-set: a linear filtering approach for fast deformable model evolution.
Bernard, Olivier; Friboulet, Denis; Thévenaz, Philippe; Unser, Michael
2009-06-01
In the field of image segmentation, most level-set-based active-contour approaches take advantage of a discrete representation of the associated implicit function. We present in this paper a different formulation where the implicit function is modeled as a continuous parametric function expressed on a B-spline basis. Starting from the active-contour energy functional, we show that this formulation allows us to compute the solution as a restriction of the variational problem on the space spanned by the B-splines. As a consequence, the minimization of the functional is directly obtained in terms of the B-spline coefficients. We also show that each step of this minimization may be expressed through a convolution operation. Because the B-spline functions are separable, this convolution may in turn be performed as a sequence of simple 1-D convolutions, which yields an efficient algorithm. As a further consequence, each step of the level-set evolution may be interpreted as a filtering operation with a B-spline kernel. Such filtering induces an intrinsic smoothing in the algorithm, which can be controlled explicitly via the degree and the scale of the chosen B-spline kernel. We illustrate the behavior of this approach on simulated as well as experimental images from various fields.
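The separability argument at the heart of this abstract, that a 2-D B-spline convolution factorizes into two cheap 1-D passes, can be demonstrated directly. The kernel below is the cubic B-spline sampled at the integers (a standard choice, not taken from the paper):

```python
import numpy as np

# Cubic B-spline sampled at integer offsets, normalized to sum to 1.
b3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def smooth_separable(img, kernel):
    """Filter rows then columns with the same 1-D kernel: equivalent to
    convolving with the separable 2-D kernel outer(kernel, kernel)."""
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
sm = smooth_separable(img, b3)
print(img.std(), sm.std())   # the smoothing sharply reduces the noise variance
```

Raising the B-spline degree or dilating the kernel widens the intrinsic smoothing, which is exactly the scale/degree control the paper exploits for its level-set evolution.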
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem, and Gibbs sampling is usually not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency compared to existing alternatives when using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications with high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
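The building block of the Gibbs step described here is drawing from a doubly truncated normal. A naive rejection sampler suffices to illustrate the distribution (production Gibbs samplers typically use inverse-CDF methods instead, since rejection degrades when the interval carries little mass); parameters below are arbitrary:

```python
import numpy as np

def rtruncnorm(rng, mu, sigma, lo, hi, size):
    """Doubly truncated normal N(mu, sigma^2) restricted to [lo, hi],
    via rejection sampling. Adequate only when [lo, hi] has decent mass."""
    out = np.empty(0)
    while out.size < size:
        draw = rng.normal(mu, sigma, size=2 * size)
        out = np.concatenate([out, draw[(draw >= lo) & (draw <= hi)]])
    return out[:size]

rng = np.random.default_rng(2)
s = rtruncnorm(rng, mu=0.0, sigma=1.0, lo=-1.0, hi=2.0, size=20000)
print(s.min(), s.max(), s.mean())
```

For N(0, 1) truncated to [-1, 2] the theoretical mean is (phi(-1) - phi(2)) / (Phi(2) - Phi(-1)), roughly 0.23, which the sample mean reproduces.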
A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.
Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao
2016-01-01
The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role in medical image processing and is widely applied because of its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is adopted as the similarity metric in the B-spline registration algorithm to improve registration precision. We then design a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of the three most time-consuming steps, namely B-spline interpolation, LSD computation, and the analytic gradient computation of LSD (required because the registration employs the Nonlinear Conjugate Gradient (NCG) optimization method), is efficiently reduced. Experimental results on registration quality and execution efficiency for large amounts of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and the ground truth, and a speedup of 17 times over the single-threaded CPU implementation, owing to the powerful parallel computing ability of the Graphics Processing Unit (GPU).
Registration of sliding objects using direction dependent B-splines decomposition
NASA Astrophysics Data System (ADS)
Delmon, V.; Rit, S.; Pinho, R.; Sarrut, D.
2013-03-01
Sliding motion is a challenge for deformable image registration because it leads to discontinuities in the sought deformation. In this paper, we present a method to handle sliding motion using multiple B-spline transforms. The proposed method decomposes the sought deformation into sliding regions to allow discontinuities at their interfaces, but prevents unrealistic solutions by forcing those interfaces to match. The method was evaluated on 16 lung cancer patients against a single B-spline transform approach and a multi B-spline transforms approach without the sliding constraint at the interface. The target registration error (TRE) was significantly lower with the proposed method (TRE = 1.5 mm) than with the single B-spline approach (TRE = 3.7 mm) and was comparable to the multi B-spline approach without the sliding constraint (TRE = 1.4 mm). The proposed method was also more accurate along region interfaces, with 37% less gaps and overlaps when compared to the multi B-spline transforms without the sliding constraint. This work was presented in part at the 4th International Workshop on Pulmonary Image Analysis during the Medical Image Computing and Computer Assisted Intervention (MICCAI) in Toronto, Canada (2011).
Ridge Regression: A Regression Procedure for Analyzing correlated Independent Variables
ERIC Educational Resources Information Center
Rakow, Ernest A.
1978-01-01
Ridge regression is a technique used to ameliorate the problem of highly correlated independent variables in multiple regression analysis. This paper explains the fundamentals of ridge regression and illustrates its use. (JKS)
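The ridge estimator has the closed form (X'X + lambda*I)^(-1) X'y, which a short NumPy sketch on deliberately collinear toy data makes concrete (the data-generating setup is invented for illustration):

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimate: solve (X'X + lam*I) beta = X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)        # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(size=n)           # true coefficients (1, 1)

ols = ridge(X, y, 0.0)                     # may be wildly unstable here
shrunk = ridge(X, y, 1.0)                  # penalty stabilizes near (1, 1)
print(np.round(ols, 2), np.round(shrunk, 2))
```

With lam = 0 the formula reduces to ordinary least squares; the penalty mainly suppresses the ill-determined direction x1 - x2 while leaving the well-determined sum direction nearly untouched.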
Modern Regression Discontinuity Analysis
ERIC Educational Resources Information Center
Bloom, Howard S.
2012-01-01
This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…
Multiple linear regression analysis
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1980-01-01
Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive…
Sliding control of pointing and tracking with operator spline estimation
NASA Technical Reports Server (NTRS)
Dwyer, Thomas A. W., III; Fakhreddine, Karray; Kim, Jinho
1989-01-01
It is shown how a variable structure control technique can be implemented to achieve precise pointing and good tracking of a deformable structure subject to fast slewing maneuvers. The correction torque applied to the structure is based on estimates of upper bounds on the model errors. For a rapid rotation of the deformable structure, the elastic response can be modeled by oscillators driven by angular acceleration, with stiffness and damping coefficients that also depend on angular velocity and acceleration. By transforming this slew-driven elastic dynamics into bilinear form (by regarding the vector made up of the angular velocity, squared angular velocity, and angular acceleration components, which appear in the coefficients, as the input to the deformation dynamics), an operator spline can be constructed that gives a low-order estimate of the induced disturbance. Moreover, a worst-case error bound between the estimated deformation and the unknown exact deformation is also generated, which can be used where required in the sliding control correction.
Spline modelling electron insert factors using routine measurements.
Biggs, S; Sobolewski, M; Murry, R; Kenny, J
2016-01-01
There are many methods available to predict electron output factors; however, many centres still measure the factors for each irregular electron field. Creating an electron output factor prediction model that approaches measurement accuracy, but uses already available data and is simple to implement, would be advantageous in the clinical setting. This work presents an empirical spline model for output factor prediction that requires only the measured factors for arbitrary insert shapes. Equivalent ellipses of the insert shapes are determined and then parameterised by width and by the ratio of perimeter to area. This takes into account changes in lateral scatter, bremsstrahlung produced in the insert material, and scatter from the edge of the insert. Agreement between prediction and measurement for the 12 MeV validation data had an uncertainty of 0.4% (1 SD). The maximum recorded deviation between measurement and prediction over the range of energies was 1.0%. The validation methodology showed that one may expect an approximate uncertainty of 0.5% (1 SD) when as few as eight data points are used. This level of accuracy, combined with the ease with which the model can be generated, demonstrates its suitability for clinical use. An implementation of this method is freely available for download at https://github.com/SimonBiggs/electronfactors.
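The two shape parameters named here (width, perimeter-to-area ratio of the equivalent ellipse) are easy to compute; a sketch using Ramanujan's closed-form perimeter approximation, with function names and the circle sanity check chosen for illustration rather than taken from the repository:

```python
import numpy as np

def ellipse_perimeter(a, b):
    """Ramanujan's second approximation to the perimeter of an ellipse
    with semi-axes a and b (accurate to a few parts per million)."""
    h = ((a - b) / (a + b)) ** 2
    return np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))

def shape_parameters(a, b):
    """Width and perimeter-to-area ratio of an equivalent ellipse."""
    area = np.pi * a * b
    return 2 * b, ellipse_perimeter(a, b) / area

# Sanity check on a circle (a = b = 4 cm): perimeter/area = 2/r = 0.5.
width, pa = shape_parameters(4.0, 4.0)
print(width, round(pa, 4))
```

The spline model then interpolates measured output factors over this two-dimensional (width, perimeter/area) parameter space.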
[Calculation of radioimmunochemical determinations by "spline approximation" (author's transl)].
Nolte, H; Mühlen, A; Hesch, R D; Pape, J; Warnecke, U; Jüppner, H
1976-06-01
A simplified method, based on the "spline approximation", is reported for the calculation of the standard curves of radioimmunochemical determinations. The mathematical function can be handled with a pocket calculator, making it available to a large number of users. It was shown that, in contrast to the usual procedures, optimal quality control can be achieved in the preparation of the standard curves and in the interpolation of unknown plasma samples. Recalculation of interpolated values from their own standard curve revealed an error of 4.9%, which would normally be an error of interpolation. The new method was compared with two established methods for 8 different radioimmunochemical determinations. The measured values of the standard curve were weighted, with a resulting quality control of these values which, according to their statistical evaluation, was more accurate than that of the other models (Ekins et al. and Yalow et al. (1968), in: Radioisotopes in Medicine: in vitro studies (Hayes, R. L., Goswitz, F. A. & Murphy, B. E. P., eds.), USAEC, Oak Ridge; and Rodbard et al. (1971), in: Competitive Protein Binding Assays (Odell, W. D. & Daughaday, W. H., eds.), Lippincott, Philadelphia and Toronto). In contrast to these models, the described method makes no mathematical or kinetic preconditions with respect to the dose-response relationship. To achieve optimal reaction conditions, experimentally determined reaction data are preferable to model theories.
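The workflow, fit a spline through the standard points, then read unknown samples off the inverted curve, can be sketched with a natural cubic spline. The standard-curve numbers below are hypothetical and the interpolation scheme is generic, not the authors' exact formulation:

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Return a function evaluating the natural cubic spline through (x, y)."""
    n = len(x)
    h = np.diff(x)
    # Solve for second derivatives m with natural BCs (m[0] = m[n-1] = 0).
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    m = np.linalg.solve(A, rhs)

    def ev(xq):
        xq = np.atleast_1d(np.asarray(xq, dtype=float))
        j = np.clip(np.searchsorted(x, xq) - 1, 0, n - 2)
        d = xq - x[j]
        return (y[j]
                + d * ((y[j + 1] - y[j]) / h[j] - h[j] * (2 * m[j] + m[j + 1]) / 6)
                + d**2 * m[j] / 2
                + d**3 * (m[j + 1] - m[j]) / (6 * h[j]))
    return ev

# Hypothetical standard curve: bound fraction falling with log dose.
log_dose = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
response = np.array([0.95, 0.80, 0.50, 0.20, 0.05])
spline = natural_cubic_spline(log_dose, response)

# Interpolate an unknown sample by inverting on a dense grid (the curve
# is monotone decreasing here, so negate it for np.interp).
grid = np.linspace(0, 4, 4001)
vals = spline(grid)
unknown = np.interp(-0.5, -vals, grid)   # sample that measured 0.5
print(round(float(unknown), 3))          # ~2.0, the log dose giving 0.5
```

Unlike the logit-log or mass-action models it was compared against, this construction imposes no functional form on the dose-response relationship, which is the abstract's central point.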
Inversion of ellipsometry data using constrained spline analysis.
Gilliot, Mickaël
2017-02-01
Ellipsometry is a highly sensitive and powerful optical technique of thin film characterization. However, the indirect and nonlinear character of the ellipsometric equations requires numerical extraction of interesting information, such as thicknesses and optical constants of unknown layers. A method is described to perform the inversion of ellipsometric spectra for the simultaneous determination of thickness and optical constants without requiring particular assumptions about the shape of a model dielectric function like in the traditional method of data fitting. The method is based on a Kramers-Kronig consistent description of the imaginary part of the dielectric function using a set of points joined by pieces of third-degree polynomials. Particular connection relations constrain the shape of the constructed curve to a physically meaningful curve avoiding oscillations of natural cubic splines. The connection ordinates conditioning the shape of the dielectric function can be used, together with unknown thickness or roughness, as fitting parameters with no restriction on the material nature. Typical examples are presented concerning metal and semiconductors.
Relative risk regression analysis of epidemiologic data.
Prentice, R L
1985-11-01
Relative risk regression methods are described. These methods provide a unified approach to a range of data analysis problems in environmental risk assessment and in the study of disease risk factors more generally. Relative risk regression methods are most readily viewed as an outgrowth of Cox's regression and life model. They can also be viewed as a regression generalization of more classical epidemiologic procedures, such as that due to Mantel and Haenszel. In the context of an epidemiologic cohort study, relative risk regression methods extend conventional survival data methods and binary response (e.g., logistic) regression models by taking explicit account of the time to disease occurrence while allowing arbitrary baseline disease rates, general censorship, and time-varying risk factors. This latter feature is particularly relevant to many environmental risk assessment problems wherein one wishes to relate disease rates at a particular point in time to aspects of a preceding risk factor history. Relative risk regression methods also adapt readily to time-matched case-control studies and to certain less standard designs. The uses of relative risk regression methods are illustrated and the state of development of these procedures is discussed. It is argued that asymptotic partial likelihood estimation techniques are now well developed in the important special case in which the disease rates of interest have interpretations as counting process intensity functions. Estimation of relative risks processes corresponding to disease rates falling outside this class has, however, received limited attention. The general area of relative risk regression model criticism has, as yet, not been thoroughly studied, though a number of statistical groups are studying such features as tests of fit, residuals, diagnostics and graphical procedures. Most such studies have been restricted to exponential form relative risks as have simulation studies of relative risk estimation
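The "more classical epidemiologic procedures, such as that due to Mantel and Haenszel" mentioned above reduce to a simple weighted cross-product calculation over stratified 2x2 tables; the counts below are hypothetical:

```python
import numpy as np

# Mantel-Haenszel common odds ratio across strata: each stratum is a
# 2x2 table [[a, b], [c, d]] of exposure (rows) by disease (columns).
strata = [
    np.array([[12, 88], [6, 94]]),    # stratum 1 (e.g. younger subjects)
    np.array([[30, 70], [18, 82]]),   # stratum 2 (e.g. older subjects)
]

num = sum(t[0, 0] * t[1, 1] / t.sum() for t in strata)   # sum a*d/n
den = sum(t[0, 1] * t[1, 0] / t.sum() for t in strata)   # sum b*c/n
or_mh = num / den
print(round(or_mh, 3))
```

Relative risk regression generalizes this: instead of a single common ratio across fixed strata, covariate effects on the rate ratio are modeled while the baseline rates stay arbitrary.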
A B-Spline Method for Solving the Navier-Stokes Equations
Johnson, Richard Wayne
2005-01-01
Collocation methods using piecewise polynomials, including B-splines, have been developed to find approximate solutions to both ordinary and partial differential equations. Such methods are elegant in their simplicity and efficient in their application. The spline collocation method is typically more efficient than traditional Galerkin finite element methods, which are used to solve the equations of fluid dynamics. The collocation method avoids integration. Exact formulae are available to find derivatives on spline curves and surfaces. The primary objective of the present work is to determine the requirements for the successful application of B-spline collocation to solve the coupled, steady, 2D, incompressible Navier–Stokes and continuity equations for laminar flow. The successful application of B-spline collocation included the development of an ad hoc method dubbed the Boundary Residual method to deal with the presence of the pressure terms in the Navier–Stokes equations. Historically, other ad hoc methods have been developed to solve the incompressible Navier–Stokes equations, including the artificial compressibility, pressure correction and penalty methods. Convergence studies show that the ad hoc Boundary Residual method is convergent toward an exact (manufactured) solution for the 2D, steady, incompressible Navier–Stokes and continuity equations. C1 cubic and quartic B-spline schemes employing orthogonal collocation and C2 cubic and C3 quartic B-spline schemes with collocation at the Greville points are investigated. The C3 quartic Greville scheme is shown to be the most efficient scheme for a given accuracy, even though the C1 quartic orthogonal scheme is the most accurate for a given partition. Two solution approaches are employed, including a globally-convergent zero-finding Newton's method using an LU decomposition direct solver and the variable-metric minimization method using BFGS update.
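As a toy illustration of the collocation idea (not the paper's 2D Navier-Stokes solver), the following sketch applies cubic B-spline collocation at the Greville points to a two-point boundary-value problem; the model problem, knot grid, and SciPy-based implementation are assumptions for illustration only.

```python
import numpy as np
from scipy.interpolate import BSpline

# Solve u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0 (exact: u = sin(pi x))
# by cubic B-spline collocation at the Greville abscissae.
k = 3
breaks = np.linspace(0.0, 1.0, 33)
t = np.r_[[0.0] * k, breaks, [1.0] * k]      # clamped knot vector
n = len(t) - k - 1                           # number of basis functions

# Greville abscissae: averages of k consecutive knots
greville = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])

A = np.zeros((n, n))
rhs = np.zeros(n)
for j in range(n):
    c = np.zeros(n); c[j] = 1.0
    bj = BSpline(t, c, k)
    A[0, j] = bj(0.0)                        # boundary condition u(0) = 0
    A[-1, j] = bj(1.0)                       # boundary condition u(1) = 0
    A[1:-1, j] = bj.derivative(2)(greville[1:-1])   # collocate the ODE
rhs[1:-1] = -np.pi**2 * np.sin(np.pi * greville[1:-1])

coef = np.linalg.solve(A, rhs)
u = BSpline(t, coef, k)

x = np.linspace(0, 1, 201)
err = np.max(np.abs(u(x) - np.sin(np.pi * x)))
```

Collocation enforces the differential equation pointwise at the interior Greville points and the boundary conditions at the endpoints, so no integration is needed, in contrast to a Galerkin method.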
Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.
2007-02-02
PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data.
Csébfalvi, Balázs
2010-01-01
In this paper, we demonstrate that quasi-interpolation of orders two and four can be efficiently implemented on the Body-Centered Cubic (BCC) lattice by using tensor-product B-splines combined with appropriate discrete prefilters. Unlike the nonseparable box-spline reconstruction previously proposed for the BCC lattice, the prefiltered B-spline reconstruction can utilize the fast trilinear texture-fetching capability of the recent graphics cards. Therefore, it can be applied for rendering BCC-sampled volumetric data interactively. Furthermore, we show that a separable B-spline filter can suppress the postaliasing effect much more isotropically than a nonseparable box-spline filter of the same approximation power. Although prefilters that make the B-splines interpolating on the BCC lattice do not exist, we demonstrate that quasi-interpolating prefiltered linear and cubic B-spline reconstructions can still provide similar or higher image quality than the interpolating linear box-spline and prefiltered quintic box-spline reconstructions, respectively.
Evaluation of the spline reconstruction technique for PET
Kastis, George A.; Kyriakopoulou, Dimitra; Gaitanis, Anastasios; Fernández, Yolanda; Hutton, Brian F.; Fokas, Athanasios S.
2014-04-15
Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an ¹⁸F-FDG injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an ¹⁸F-FDG injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real
Calculating a Stepwise Ridge Regression.
ERIC Educational Resources Information Center
Morris, John D.
1986-01-01
Although methods for using ordinary least squares regression computer programs to calculate a ridge regression are available, the calculation of a stepwise ridge regression requires a special purpose algorithm and computer program. The correct stepwise ridge regression procedure is given, and a parallel FORTRAN computer program is described.…
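The opening claim, that an ordinary least-squares program can compute a ridge regression, rests on the classical data-augmentation trick. A minimal sketch (the example data and shrinkage parameter are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=50)
lam = 2.0

# Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)

# The same estimate from an ordinary least-squares routine: augment the
# design matrix with sqrt(lam)*I rows and the response with zeros.
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(4)])
y_aug = np.concatenate([y, np.zeros(4)])
beta_ols, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
```

The augmented OLS objective ||y_aug - X_aug·b||² equals ||y - Xb||² + λ||b||², so any OLS program returns the ridge estimate exactly; a stepwise version, as the abstract notes, needs more care.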
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
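The major axis described here is the leading eigenvector of the sample covariance matrix, since that direction minimizes the sum of squared orthogonal projections. A small sketch (the helper name and data are illustrative):

```python
import numpy as np

def major_axis(x, y):
    """Slope and intercept of the major axis: the line minimizing the
    sum of squared orthogonal distances, obtained from the leading
    eigenvector of the 2x2 sample covariance matrix."""
    cov = np.cov(x, y)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    v = eigvecs[:, -1]                       # direction of largest variance
    slope = v[1] / v[0]
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

x = np.arange(10.0)
slope, intercept = major_axis(x, 2.0 * x + 1.0)   # exact line y = 2x + 1
```

For data lying exactly on a line, the covariance matrix is rank one and the major axis recovers the line exactly, unlike ordinary regression of y on x only when noise is present.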
Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings
Guo, Y.; Keller, J.; Errichello, R.; Halse, C.
2013-12-01
Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.
Using spline-enhanced ordinary differential equations for PK/PD model development.
Wang, Yi; Eskridge, Kent; Zhang, Shunpu; Wang, Dong
2008-10-01
A spline-enhanced ordinary differential equation (ODE) method is proposed for developing a proper parametric kinetic ODE model and is shown to be a useful approach to PK/PD model development. The new method differs substantially from a previously proposed model development approach using a stochastic differential equation (SDE)-based method. In the SDE-based method, a Gaussian diffusion term is introduced into an ODE to quantify the system noise. In our proposed method, we assume an ODE system with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function vector that is estimated using penalized splines. B(t) is used to construct a quantitative measure of model uncertainty useful for finding the proper model structure for a given data set. By means of two examples with simulated data, we demonstrate that the spline-enhanced ODE method can provide model diagnostics and serve as a basis for systematic model development similar to the SDE-based method. We compare and highlight the differences between the SDE-based and the spline-enhanced ODE methods of model development. We conclude that the spline-enhanced ODE method can be useful for PK/PD modeling since it is based on a relatively uncomplicated estimation algorithm which can be implemented with readily available software, provides numerically stable, robust estimation for many models, is distribution-free and allows for identification and accommodation of model deficiencies due to model misspecification.
A trans-dimensional polynomial-spline parameterization for gradient-based geoacoustic inversion.
Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan
2014-10-01
This paper presents a polynomial spline-based parameterization for trans-dimensional geoacoustic inversion. The parameterization is demonstrated for both simulated and measured data and shown to be an effective method of representing sediment geoacoustic profiles dominated by gradients, as typically occur, for example, in muddy seabeds. Specifically, the spline parameterization is compared using the deviance information criterion (DIC) to the standard stack-of-homogeneous-layers parameterization for the inversion of bottom-loss data measured at a muddy seabed experiment site on the Malta Plateau. The DIC is an information criterion that is well suited to trans-dimensional Bayesian inversion and is introduced to geoacoustics in this paper. Inversion results for both parameterizations are in good agreement with measurements on a sediment core extracted at the site. However, the spline parameterization more accurately resolves the power-law-like structure of the core density profile and provides smaller overall uncertainties in geoacoustic parameters. In addition, the spline parameterization is found to be more parsimonious, and hence preferred, according to the DIC. The trans-dimensional polynomial spline approach is general and applicable to any inverse problem for gradient-based profiles. [Work supported by ONR.]
Practical box splines for reconstruction on the body centered cubic lattice.
Entezari, Alireza; Van De Ville, Dimitri; Möller, Torsten
2008-01-01
We introduce a family of box splines for efficient, accurate and smooth reconstruction of volumetric data sampled on the Body Centered Cubic (BCC) lattice, which is the favorable volumetric sampling pattern due to its optimal spectral sphere packing property. First, we construct a box spline based on the four principal directions of the BCC lattice that allows for a linear C(0) reconstruction. Then, the design is extended for higher degrees of continuity. We derive the explicit piecewise polynomial representation of the C(0) and C(2) box splines that are useful for practical reconstruction applications. We further demonstrate that approximation in the shift-invariant space generated by BCC-lattice shifts of these box splines is twice as efficient as using the tensor-product B-spline solutions on the Cartesian lattice (with comparable smoothness and approximation order, and with the same sampling density). Practical evidence is provided demonstrating that not only is the BCC lattice generally a more accurate sampling pattern, but it also allows for extremely efficient reconstructions that outperform tensor-product Cartesian reconstructions.
Kramer, S.
1996-12-31
In many real-world domains the task of machine learning algorithms is to learn a theory for predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with nondeterminate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems. SRT integrates the statistical method of regression trees into ILP. It constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy.
Ryu, Duchwan; Li, Erning; Mallick, Bani K
2011-06-01
We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves.
Penalized splines for smooth representation of high-dimensional Monte Carlo datasets
NASA Astrophysics Data System (ADS)
Whitehorn, Nathan; van Santen, Jakob; Lafebre, Sven
2013-09-01
Detector response to a high-energy physics process is often estimated by Monte Carlo simulation. For purposes of data analysis, the results of this simulation are typically stored in large multi-dimensional histograms, which can quickly become both too large to easily store and manipulate and numerically problematic due to unfilled bins or interpolation artifacts. We describe here an application of the penalized spline technique (Marx and Eilers, 1996) [1] to efficiently compute B-spline representations of such tables and discuss aspects of the resulting B-spline fits that simplify many common tasks in handling tabulated Monte Carlo data in high-energy physics analysis, in particular their use in maximum-likelihood fitting.
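The penalized spline (P-spline) technique combines a modest B-spline basis with a difference penalty on the coefficients. A one-dimensional sketch, with illustrative data, basis size, and penalty weight:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x)
y = truth + rng.normal(scale=0.3, size=x.size)   # noisy "tabulated" data

# Cubic B-spline basis on a modest knot grid
k = 3
breaks = np.linspace(0, 1, 21)
t = np.r_[[0.0] * k, breaks, [1.0] * k]
n = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n)[j], k)(x) for j in range(n)])

# Second-order difference penalty: minimize ||y - B a||^2 + lam ||D2 a||^2
D2 = np.diff(np.eye(n), n=2, axis=0)
lam = 1.0
a = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
fit = B @ a

rmse_noisy = np.sqrt(np.mean((y - truth) ** 2))
rmse_fit = np.sqrt(np.mean((fit - truth) ** 2))
```

The penalty suppresses the wiggles that an unpenalized fit with many knots would chase, which is what makes the representation both compact and smooth for tabulated Monte Carlo data.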
Cubic spline interpolation of functions with high gradients in boundary layers
NASA Astrophysics Data System (ADS)
Blatov, I. A.; Zadorin, A. I.; Kitaeva, E. V.
2017-01-01
The cubic spline interpolation of grid functions with high-gradient regions is considered. Uniform meshes are proved to be inefficient for this purpose. In the case of widely applied piecewise uniform Shishkin meshes, asymptotically sharp two-sided error estimates are obtained in the class of functions with an exponential boundary layer. It is proved that the error estimates of traditional spline interpolation are not uniform with respect to a small parameter, and the error can increase indefinitely as the small parameter tends to zero, while the number of nodes N is fixed. A modified cubic interpolation spline is proposed, for which O((ln N/N)^4) error estimates that are uniform with respect to the small parameter are obtained.
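The contrast between a uniform mesh and a piecewise uniform Shishkin mesh can be reproduced numerically. A sketch assuming an exponential layer exp(-x/ε) and the common transition point τ = min(1/2, 4ε ln N); note this uses SciPy's standard cubic spline, not the paper's modified spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

eps = 1e-3
f = lambda x: np.exp(-x / eps)       # exponential boundary layer at x = 0
N = 32
x_fine = np.linspace(0, 1, 5001)

# Uniform mesh: the layer falls inside the first interval
xu = np.linspace(0, 1, N + 1)
err_uniform = np.max(np.abs(CubicSpline(xu, f(xu))(x_fine) - f(x_fine)))

# Piecewise uniform Shishkin mesh: half the nodes inside [0, tau]
tau = min(0.5, 4 * eps * np.log(N))
xs = np.concatenate([np.linspace(0, tau, N // 2 + 1),
                     np.linspace(tau, 1, N // 2 + 1)[1:]])
err_shishkin = np.max(np.abs(CubicSpline(xs, f(xs))(x_fine) - f(x_fine)))
```

With the same number of nodes, the Shishkin mesh resolves the layer while the uniform mesh misses it entirely, which is the inefficiency the abstract refers to.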
NASA Technical Reports Server (NTRS)
Wahba, Grace
1987-01-01
A partial spline model is a model for a response as a function of several variables, which is the sum of a smooth function of several variables and a parametric function of the same plus possibly some other variables. Partial spline models in one and several variables, with direct and indirect data, with Gaussian errors and as an extension of GLIM to partially penalized GLIM models are described. Application to the modeling of change of regime in several variables is described. Interaction splines are introduced and described and their potential use for modeling non-linear interactions between variables by semiparametric methods is noted. Reference is made to recent work in efficient computational methods.
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
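The idea of comparing two spline fits on different meshes can be sketched with SciPy's least-squares spline routine; the synthetic signal, noise level, and knot counts below are illustrative, and the difference of the two fits serves only as a crude error indicator, not the paper's calibrated F-error estimate:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0, 4, 400)
truth = np.sin(x) * np.exp(-0.2 * x)
y = truth + rng.normal(scale=0.2, size=x.size)   # noisy measurement

def lsq_fit(n_knots):
    # Least-squares cubic spline with evenly spaced interior knots
    knots = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]
    return LSQUnivariateSpline(x, y, knots)(x)

fit_coarse = lsq_fit(4)
fit_fine = lsq_fit(8)

# Difference between the two mesh sizes as a data-driven error indicator
indicator = np.max(np.abs(fit_coarse - fit_fine))
rmse_fit = np.sqrt(np.mean((fit_coarse - truth) ** 2))
rmse_noise = np.sqrt(np.mean((y - truth) ** 2))
```

The coarse fit averages away most of the noise (small R-error) at the cost of some signal bias (F-error); refining the mesh trades one against the other, which is the balance the paper quantifies.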
Hybrid Gaussian-B-spline basis for the electronic continuum: Photoionization of atomic hydrogen
NASA Astrophysics Data System (ADS)
Marante, Carlos; Argenti, Luca; Martín, Fernando
2014-07-01
As a first step towards meeting the recent demand for new computational tools capable of reproducing molecular-ionization continua in a wide energy range, we introduce a hybrid Gaussian-B-spline basis (GABS) that combines short-range Gaussian functions, compatible with standard quantum-chemistry computational codes, with B splines, a basis appropriate to represent electronic continua. We illustrate the performance of the GABS hybrid basis for the hydrogen atom by solving both the time-independent and the time-dependent Schrödinger equation for a few representative cases. The results are in excellent agreement with those obtained with a purely B-spline basis, with analytical results, when available, and with recent above-threshold ionization spectra from the literature. In the latter case, we report fully differential photoelectron distributions which offer further insight into the process of above-threshold ionization at different wavelengths.
Spline-based distributed system identification with application to large space antennas
NASA Technical Reports Server (NTRS)
Banks, H. T.; Lamm, P. K.; Armstrong, E. S.
1986-01-01
A parameter and state estimation technique for distributed models is demonstrated through the solution of a problem generic to large space antenna system identification. Assuming the position of the reflective surface of the maypole (hoop/column) antenna to be approximated by the static two-dimensional, stretched-membrane partial differential equation with variable-stiffness coefficient functions, a spline-based approximation procedure is described that estimates the shape and stiffness functions from data set observations. For given stiffness functions, the Galerkin projection with linear spline-based functions is applied to project the distributed problem onto a finite-dimensional subspace wherein algebraic equations exist for determining a static shape (state) prediction. The stiffness functions are then parameterized by cubic splines and the parameters estimated by an output error technique. Numerical results are presented for data descriptive of a 100-m-diameter maypole antenna.
NASA Technical Reports Server (NTRS)
Kuhl, Mark R.
1990-01-01
Current navigation requirements depend on a geometric dilution of precision (GDOP) criterion. As long as the GDOP stays below a specific value, navigation requirements are met. The GDOP will exceed the specified value when the measurement geometry becomes too collinear. A new signal processing technique, called Ridge Regression Processing, can reduce the effects of nearly collinear measurement geometry, thereby reducing the inflation of the measurement errors. It is shown that the Ridge signal processor gives a consistently better mean squared error (MSE) in position than the Ordinary Least Mean Squares (OLS) estimator. The applicability of this technique is currently being investigated to improve the following areas: receiver autonomous integrity monitoring (RAIM), coverage requirements, availability requirements, and precision approaches.
Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms
NASA Astrophysics Data System (ADS)
Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie
2006-02-01
This paper presents a bicubic uniform B-spline wavefront fitting technology to derive the analytical expression for the object wavefront used in Computer-Generated Holograms (CGHs). In many cases, to decrease the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, we must fit the analytical expression for the object wavefront. Zernike polynomials are competent for fitting wavefronts of centrosymmetric optical systems, but not for axisymmetrical optical systems. Although a high-degree polynomial fitting method can achieve higher fitting precision at all fitting nodes, its greatest shortcoming is that any departure from the fitting nodes can result in large fitting errors, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time in coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the characteristic mesh of the bicubic uniform B-spline wavefront, the bicubic uniform B-spline wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four kinds of different analytical expressions for object wavefronts are fitted by bicubic uniform B-splines as well as by high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is a more competitive method to fit the analytical expression for the object wavefront used in an off-axis CGH, for its higher fitting precision and C2 continuity.
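A bicubic spline surface fit of a sampled wavefront can be sketched with SciPy (in place of the paper's MATLAB routines); the wavefront function below is hypothetical, not one of the paper's four cases:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Sample a synthetic "wavefront" on a rectangular grid
x = np.linspace(-1, 1, 25)
y = np.linspace(-1, 1, 25)
X, Y = np.meshgrid(x, y, indexing="ij")
W = 0.5 * (X**2 + Y**2) + 0.1 * X * Y**2     # hypothetical wavefront shape

# Bicubic spline surface (kx = ky = 3); s = 0 forces interpolation
surf = RectBivariateSpline(x, y, W, kx=3, ky=3, s=0)

# Evaluate between the nodes: the C2 surface has no pulsation between
# fitting nodes, unlike a single high-degree polynomial
We = surf(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
err = np.max(np.abs(surf(x, y) - W))
```

Because the bicubic spline is only piecewise cubic, its behavior between nodes stays bounded by the local data, which is exactly the property the abstract contrasts with the pulsation of high-degree polynomial fits.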
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2009-01-01
This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…
Streamflow forecasting using functional regression
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.
2016-07-01
Streamflow, as a natural phenomenon, is continuous in time and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of points (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They allow one to consider the whole process instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.
BSR: B-spline atomic R-matrix codes
NASA Astrophysics Data System (ADS)
Zatsarinny, Oleg
2006-02-01
BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian.
New version program summary
Title of program: BSR
Catalogue identifier: ADWY
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWY
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC
Operating systems under which the new version has been tested: UNIX, Windows XP
Programming language used: FORTRAN 95
Memory required to execute with typical data: Typically 256-512 Mwords. Since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem
No. of bits in a word: 8
No. of processors used: 1
Has the code been vectorized or parallelized?: no
No. of lines in distributed program, including test data, etc.: 69 943
No. of bytes in distributed program, including test data, etc.: 746 450
Peripherals used: scratch disk store; permanent disk store
Distribution format: tar.gz
Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian.
Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput
Dynamic coefficients of axial spline couplings in high-speed rotating machinery
NASA Astrophysics Data System (ADS)
Ku, C. P. Roger; Walton, J. F., Jr.; Lund, J. W.
1994-07-01
This paper provided the first opportunity to quantify the angular stiffness and equivalent viscous damping coefficients of an axial spline coupling used in high-speed turbomachinery. The bending moments and angular deflections transmitted across an axial spline coupling were measured while a nonrotating shaft was excited by an external shaker. A rotordynamics computer program was used to simulate the test conditions and to correlate the angular stiffness and damping coefficients. The effects of external force and frequency were also investigated. The angular stiffness and damping coefficients were used to perform a linear steady-state rotordynamics stability analysis, and the unstable natural frequency was calculated and compared to the experimental measurements.
Trigonometric quadratic B-spline subdomain Galerkin algorithm for the Burgers' equation
NASA Astrophysics Data System (ADS)
Ay, Buket; Dag, Idris; Gorgulu, Melis Zorsahin
2015-12-01
A variant of the subdomain Galerkin method has been set up to find numerical solutions of the Burgers' equation. The approximate function consists of a combination of trigonometric B-splines. Integration of the Burgers' equation has been achieved with the aid of the subdomain Galerkin method based on the trigonometric B-splines as approximate functions. The resulting first-order ordinary differential system has been converted into an iterative algebraic equation by use of the Crank-Nicolson method at two successive time levels. The suggested algorithm is tested on some well-known problems for the Burgers' equation.
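The Crank-Nicolson two-time-level update used here can be illustrated on the diffusive part of the problem. The following stand-in uses finite differences for u_t = ν u_xx rather than the paper's trigonometric B-spline subdomain Galerkin discretization, and all grid parameters are illustrative:

```python
import numpy as np
from scipy.linalg import solve_banded

# Crank-Nicolson for u_t = nu * u_xx on [0,1], u(0,t) = u(1,t) = 0,
# u(x,0) = sin(pi x); exact solution exp(-nu*pi^2*t) * sin(pi x).
nu, N, dt, steps = 1.0, 100, 1e-3, 100
x = np.linspace(0, 1, N + 1)
h = x[1] - x[0]
u = np.sin(np.pi * x)

r = nu * dt / (2 * h * h)
m = N - 1                                 # interior unknowns
# Tridiagonal system (I - r*D2) u^{n+1} = (I + r*D2) u^n in banded form
ab = np.zeros((3, m))
ab[0, 1:] = -r                            # superdiagonal
ab[1, :] = 1 + 2 * r                      # diagonal
ab[2, :-1] = -r                           # subdiagonal

for _ in range(steps):
    b = r * u[:-2] + (1 - 2 * r) * u[1:-1] + r * u[2:]
    u[1:-1] = solve_banded((1, 1), ab, b)

exact = np.exp(-nu * np.pi**2 * steps * dt) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```

Averaging the spatial operator over the two time levels makes the scheme second-order in time and unconditionally stable, which is why each step reduces to one tridiagonal solve.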
Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations
Kim, Sang Dong
1996-12-31
In this talk we discuss the finite element and finite difference technique for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).
Spline-Based Parameter Estimation Techniques for Two-Dimensional Convection and Diffusion Equations.
1986-07-01
A general approximation framework based on bicubic splines is developed for estimating temporally and spatially…
Experimental and theoretical investigation about reaction moments in misaligned splined couplings
NASA Astrophysics Data System (ADS)
Curà, Francesca; Mura, Andrea
2014-04-01
This paper deals with the uneven loads generated when splined couplings work in misaligned conditions. These loads are balanced by the shaft bearings and must be taken into account by designers during the calculation of splined transmission systems. In particular, an experimental investigation of the tilting moment has been carried out by means of a dedicated test rig, in order to better understand this phenomenon. Experimental tests have been conducted to investigate the effect of misalignment angle, transmitted torque and tooth stiffness on the tilting moment. A numerical model has also been developed in order to obtain a preliminary quick estimation of tilting moment values.
Splitting algorithms for the wavelet transform of first-degree splines on nonuniform grids
NASA Astrophysics Data System (ADS)
Shumilov, B. M.
2016-07-01
For first-degree splines with nonuniform knots, a new type of wavelet with a biased support is proposed. Using splitting with respect to the even and odd knots, a new wavelet decomposition algorithm is proposed in the form of the solution of a tridiagonal system of linear algebraic equations for the wavelet coefficients. The application of the proposed implicit scheme to the point prediction of time series is investigated for the first time. Results of numerical experiments on the prediction accuracy and the compression of spline wavelet decompositions are presented.
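For first-degree splines on a uniform grid (a simplification of the paper's nonuniform-knot construction), splitting into even and odd knots amounts to the classical lifting scheme: each odd sample is predicted by the average of its even neighbours, and the residuals are the wavelet details. A sketch:

```python
import numpy as np

def linear_spline_analysis(c):
    """One level of a first-degree-spline wavelet transform via lifting:
    split into even/odd samples and take the odd-sample prediction
    residuals as the detail (wavelet) coefficients."""
    even, odd = c[::2], c[1::2]
    # each odd sample sits between even[i] and even[i+1]
    details = odd - 0.5 * (even[:-1] + even[1:])
    return even, details

def linear_spline_synthesis(even, details):
    """Exact inverse: undo the prediction and re-interleave."""
    odd = details + 0.5 * (even[:-1] + even[1:])
    c = np.empty(even.size + odd.size)
    c[::2], c[1::2] = even, odd
    return c

x = np.linspace(0, 1, 9)
coarse, det = linear_spline_analysis(2.0 * x + 1.0)   # exactly linear data
```

For data lying on a straight line the prediction is exact, so every detail coefficient vanishes; this is the compression property the numerical experiments exploit, and the nonuniform-knot version replaces the simple averaging by a tridiagonal solve.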
NASA Astrophysics Data System (ADS)
Evrendilek, F.; Karakaya, N.
2014-06-01
Continuous time-series measurements of diel dissolved oxygen (DO) through online sensors are vital to better understanding and management of lake ecosystem metabolism, but are prone to noise. Discrete wavelet transforms (DWT) with the orthogonal Symmlet and the semiorthogonal Chui-Wang B-spline were compared in denoising diel, daytime and nighttime dynamics of DO, water temperature, pH, and chlorophyll-a. Predictive efficacies of multiple non-linear regression (MNLR) models of DO dynamics were evaluated with or without DWT denoising of either the response variable alone or all the response and explanatory variables. The combined use of B-spline-based denoising of all the variables and temporally partitioned data improved the predictive power and reduced the errors of the MNLR models more than Symmlet DWT denoising of DO only or of all the variables, with or without the temporal partitioning.
2012-01-01
Background Genomic selection (GS) is emerging as an efficient and cost-effective method for estimating breeding values using molecular markers distributed over the entire genome. In essence, it involves estimating the simultaneous effects of all genes or chromosomal segments and combining the estimates to predict the total genomic breeding value (GEBV). Accurate prediction of GEBVs is a central and recurring challenge in plant and animal breeding. The existence of a bewildering array of approaches for predicting breeding values using markers underscores the importance of identifying approaches able to efficiently and accurately predict breeding values. Here, we comparatively evaluate the predictive performance of six regularized linear regression methods-- ridge regression, ridge regression BLUP, lasso, adaptive lasso, elastic net and adaptive elastic net-- for predicting GEBV using dense SNP markers. Methods We predicted GEBVs for a quantitative trait using a dataset on 3000 progenies of 20 sires and 200 dams and an accompanying genome consisting of five chromosomes with 9990 biallelic SNP-marker loci simulated for the QTL-MAS 2011 workshop. We applied all the six methods that use penalty-based (regularization) shrinkage to handle datasets with far more predictors than observations. The lasso, elastic net and their adaptive extensions further possess the desirable property that they simultaneously select relevant predictive markers and optimally estimate their effects. The regression models were trained with a subset of 2000 phenotyped and genotyped individuals and used to predict GEBVs for the remaining 1000 progenies without phenotypes. Predictive accuracy was assessed using the root mean squared error, the Pearson correlation between predicted GEBVs and (1) the true genomic value (TGV), (2) the true breeding value (TBV) and (3) the simulated phenotypic values based on fivefold cross-validation (CV). Results The elastic net, lasso, adaptive lasso and the
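The penalized regressions compared above are all available off the shelf. A minimal sketch of the GEBV workflow on simulated marker data; the sizes, effect distribution and hyperparameters here are illustrative, not the QTL-MAS 2011 setup:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

# Toy marker data: 300 individuals x 1000 SNP loci coded 0/1/2, 20 causal loci.
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(300, 1000)).astype(float)
beta = np.zeros(1000)
beta[rng.choice(1000, 20, replace=False)] = rng.normal(0.0, 1.0, 20)
y = X @ beta + rng.normal(0.0, 1.0, 300)

X_tr, y_tr = X[:200], y[:200]    # "phenotyped" training individuals
X_te, y_te = X[200:], y[200:]    # candidates whose GEBV we predict

results = {}
for model in (Ridge(alpha=10.0),
              Lasso(alpha=0.1, max_iter=5000),
              ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=5000)):
    gebv = model.fit(X_tr, y_tr).predict(X_te)
    # Accuracy as the correlation between predicted GEBV and realized value.
    results[type(model).__name__] = float(np.corrcoef(gebv, y_te)[0, 1])

for name, acc in results.items():
    print(name, round(acc, 2))
```

The lasso and elastic net additionally zero out most of the 1000 marker effects, which is the simultaneous selection-and-estimation property the abstract mentions.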
Transfer Learning Based on Logistic Regression
NASA Astrophysics Data System (ADS)
Paul, A.; Rottensteiner, F.; Heipke, C.
2015-08-01
In this paper we address the problem of classification of remote sensing images in the framework of transfer learning, with a focus on domain adaptation. The main novel contribution is a method for transductive transfer learning in remote sensing on the basis of logistic regression. Logistic regression is a discriminative probabilistic classifier of low computational complexity which can deal with multiclass problems. This research area deals with methods that solve problems in which labelled training data sets are assumed to be available only for a source domain, while classification is needed in the target domain with different, yet related characteristics. Classification takes place with a model of weight coefficients for hyperplanes which separate features in the transformed feature space. In terms of logistic regression, our domain adaptation method adjusts the model parameters by iterative labelling of the target test data set. These labelled data features are iteratively added to the current training set, which at the beginning contains only source features, while a number of source features are simultaneously deleted from the current training set. Experimental results based on a test series with synthetic and real data constitute a first proof of concept of the proposed method.
M Ali, M. K. E-mail: eutoco@gmail.com; Ruslan, M. H. E-mail: eutoco@gmail.com; Muthuvalu, M. S. E-mail: jumat@ums.edu.my; Wong, J. E-mail: jumat@ums.edu.my; Sulaiman, J. E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md. E-mail: hafidzruslan@eng.ukm.my
2014-06-19
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under Malaysian meteorological conditions. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m{sup 2} and a mass flow rate of about 0.5 kg/s. In general, the drying-rate plots need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is to approximate the data by a CS regression having continuous first and second derivatives; analytical differentiation of the spline regression then yields the instantaneous drying rate directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, evaluated using the coefficient of determination (R{sup 2}) and the root mean square error (RMSE). The Two Term model was found to describe the drying behavior best. In addition, the drying rate smoothed using the CS proved an effective estimator for moisture-time curves, as well as for missing moisture content data of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
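The spline-differentiation idea, fitting a cubic smoothing spline to the moisture data and differentiating it analytically to obtain the instantaneous drying rate, can be sketched with SciPy. The decay curve, noise level and smoothing factor are synthetic stand-ins for the seaweed data:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic moisture-content curve: exponential decay from ~0.93 to ~0.08
# over 4 days with measurement noise (a stand-in for the seaweed data).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 4.0, 40)                    # time in days
m_true = 0.08 + 0.85 * np.exp(-1.2 * t)
m_obs = m_true + rng.normal(0.0, 0.01, t.size)

# Cubic smoothing spline; the smoothing factor s is set to the expected
# sum of squared noise (an illustrative choice, not the paper's).
spl = UnivariateSpline(t, m_obs, k=3, s=t.size * 0.01 ** 2)
rate = spl.derivative()                          # analytic drying rate dm/dt

print(round(float(spl(0.0)), 2))                 # smoothed initial moisture
print(float(rate(0.0)) < 0.0)                    # drying rate is negative
```

The derivative comes from the spline coefficients themselves, so no finite differencing of the noisy raw data is needed.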
Asymmetric Lens Design Using Bicubic Splines: Application to the Color TV Lighthouse.
Vogl, T P; Rigler, A K; Canty, B R
1971-11-01
Any two-dimensional interpolation scheme which has continuous derivatives may be used to represent an optical surface for ray-tracing purposes. We present bicubic splines in their application to the design of asymmetric surfaces. As an example of a problem requiring an asymmetric system, we analyze the design problems of a color TV lighthouse lens.
Plane stress yield function described by 3rd-degree spline curve and its application
NASA Astrophysics Data System (ADS)
Aamaishi, Toshiro; Tsutamori, Hideo; Iizuka, Eiji; Sato, Kentaro; Ogihara, Yuki; Matsui, Yohei
2016-08-01
In this study, a plane stress yield function described by a 3rd-degree spline curve is proposed. This yield function can flexibly represent material anisotropy and can consider the evolution of anisotropy in terms of both r values and stresses. As an application, hole expanding simulation results are shown to discuss the accuracy of the proposed yield function.
Cubic Trigonometric B-spline Galerkin Methods for the Regularized Long Wave Equation
NASA Astrophysics Data System (ADS)
Irk, Dursun; Keskin, Pinar
2016-10-01
A numerical solution of the Regularized Long Wave (RLW) equation is obtained using a Galerkin finite element method, based on the Crank-Nicolson method for the time integration and cubic trigonometric B-spline functions for the spatial discretization. After two different linearization techniques are applied, the proposed algorithms are tested on the problems of propagation of a solitary wave and interaction of two solitary waves.
A B-spline approach to phase unwrapping in tagged cardiac MRI for motion tracking.
Chiang, Patricia; Cai, Yiyu; Mak, Koon Hou; Zheng, Jianmin
2013-05-01
A novel B-spline based approach to phase unwrapping in tagged magnetic resonance images is proposed for cardiac motion tracking. A bicubic B-spline surface is used to model the absolute phase. The phase unwrapping problem is formulated as a mixed integer optimization problem that minimizes the sum of the difference between the spatial gradients of the absolute and wrapped phases, and the difference between the rewrapped and wrapped phases. In contrast to existing techniques for motion tracking, the proposed approach can overcome the limitation of interframe half-tag displacement and increase the robustness of motion tracking. The article further presents a hybrid harmonic phase imaging-B-spline method to take advantage of the harmonic phase imaging method for small motion and the efficiency of the B-spline approach for large motion. The proposed approach has been successfully applied to a full set of cardiac MRI scans in both long- and short-axis slices, with superior performance compared with the harmonic phase imaging and quality-guided path-following methods.
Parameter estimation technique for boundary value problems by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1988-01-01
A parameter-estimation technique for boundary-integral equations of the second kind is developed. The output least-squares identification technique using the spline collocation method is considered. The convergence analysis for the numerical method is discussed. The results are applied to boundary parameter estimations for two-dimensional Laplace and Helmholtz equations.
Weber, J. W.; Hansen, T. A. R.; Sanden, M. C. M. van de; Engeln, R.
2009-12-15
The remote plasma deposition of hydrogenated amorphous carbon (a-C:H) thin films is investigated by in situ spectroscopic ellipsometry (SE). The dielectric function of the a-C:H film is in this paper parametrized by means of B-splines. In contrast with the commonly used Tauc-Lorentz oscillator, B-splines are a purely mathematical description of the dielectric function. We will show that the B-spline parametrization, which requires no prior knowledge about the film or its interaction with light, is a fast and simple-to-apply method that accurately determines thickness, surface roughness, and the dielectric constants of hydrogenated amorphous carbon thin films. Analysis of the deposition process provides us with information about the high deposition rate, the nucleation stage, and the homogeneity in depth of the deposited film. Finally, we show that the B-spline parametrization can serve as a stepping stone to physics-based models, such as the Tauc-Lorentz oscillator.
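The point of the B-spline parametrization above is that it describes the dielectric function without assuming any oscillator model. A minimal sketch on a synthetic smooth curve; the curve is invented and no Tauc-Lorentz physics is implied:

```python
import numpy as np
from scipy.interpolate import splrep, BSpline

# Purely mathematical B-spline description of a smooth "spectrum"
# (a stand-in for a dielectric function; no physical model assumed).
ev = np.linspace(1.0, 5.0, 200)                  # photon energy grid (eV)
eps = 1.0 + 3.0 / (1.0 + (ev - 3.0) ** 2)        # synthetic smooth curve
tck = splrep(ev, eps, k=3, s=0.0)                # cubic B-spline representation
spline = BSpline(*tck)

err = float(np.max(np.abs(spline(ev) - eps)))
print(err < 1e-8)  # interpolating fit reproduces the curve on the grid
```

In an ellipsometry fit the spline coefficients, rather than oscillator parameters, would be the quantities adjusted to match the measured data.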
Test Score Reporting Referenced to Doubly-Moderated Cut Scores Using Splines
ERIC Educational Resources Information Center
Schafer, William D.; Hou, Xiaodong
2011-01-01
This study discusses and presents an example of a use of spline functions to establish and report test scores using a moderated system of any number of cut scores. Our main goals include studying the need for and establishing moderated standards and creating a reporting scale that is referenced to all the standards. Our secondary goals are to make…
Classification by means of B-spline potential functions with applications to remote sensing
NASA Technical Reports Server (NTRS)
Bennett, J. O.; Defigueiredo, R. J. P.; Thompson, J. R.
1974-01-01
A method is presented for using B-splines as potential functions in the estimation of likelihood functions (probability density functions conditioned on pattern classes), or the resulting discriminant functions. The consistency of this technique is discussed. Experimental results of using the likelihood functions in the classification of remotely sensed data are given.
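One simple way to realize the B-spline potential-function idea, smoothing an empirical histogram into a continuous likelihood estimate with a spline, can be sketched as follows. This is an illustrative stand-in, not the authors' exact estimator:

```python
import numpy as np
from scipy.interpolate import BSpline, splrep

# Smooth a normalized histogram of 1-D samples with a cubic B-spline to get
# a continuous density estimate (hypothetical sketch of the general idea).
rng = np.random.default_rng(4)
samples = rng.normal(0.0, 1.0, 5000)
counts, edges = np.histogram(samples, bins=30, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

tck = splrep(centers, counts, k=3, s=0.001)      # smoothing cubic spline
density = BSpline(*tck)

# The smoothed density should peak near the true mode at 0.
grid = np.linspace(-3, 3, 301)
mode = float(grid[np.argmax(density(grid))])
print(abs(mode) < 0.5)
```

In a classifier, one such smoothed density per pattern class would serve as the class-conditional likelihood in a Bayes decision rule.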
A spline-based parameter and state estimation technique for static models of elastic surfaces
NASA Technical Reports Server (NTRS)
Banks, H. T.; Daniel, P. L.; Armstrong, E. S.
1983-01-01
Parameter and state estimation techniques for an elliptic system arising in a developmental model for the antenna surface in the Maypole Hoop/Column antenna are discussed. A computational algorithm based on spline approximations for the state and elastic parameters is given and numerical results obtained using this algorithm are summarized.
Wave Propagation by Way of Exponential B-Spline Galerkin Method
NASA Astrophysics Data System (ADS)
Zorsahin Gorgulu, M.; Dag, I.; Irk, D.
2016-10-01
In this paper, the exponential B-spline Galerkin method is set up for obtaining the numerical solution of the Burgers' equation. Two numerical examples related to shock wave propagation and travelling waves are studied to illustrate the accuracy and efficiency of the method. The obtained results are compared with those of some earlier studies.
Spline energy method and its application in the structural analysis of antenna reflector
NASA Astrophysics Data System (ADS)
Wang, Deman; Wu, Qinbao
A method is proposed for analyzing combined structures consisting of shell and beam (rib) members. The cubic B-spline function is used to interpolate the displacements and the total potential energy of the shell and the ribs. The simultaneous equilibrium equations can be obtained according to the principle of minimum potential energy.
A Unified Representation Scheme for Solid Geometric Objects Using B-splines (extended Abstract)
NASA Technical Reports Server (NTRS)
Bahler, D.
1985-01-01
A geometric representation scheme called the B-spline cylinder, which consists of interpolation between pairs of uniform periodic cubic B-spline curves, is discussed. This approach carries a number of interesting implications. For one, a single relatively simple database schema can be used to represent a reasonably large class of objects, since the spline representation is flexible enough to allow a large domain of representable objects at very little cost in data complexity. The model is thus very storage-efficient. A second feature of such a system is that it reduces to one the number of routines which the system must support to perform a given operation on objects. Third, the scheme enables easy conversion to and from other representations. The formal definition of the cylinder entity is given, its geometric properties are explored, and several operations on such objects are defined. Some general-purpose criteria for evaluating any geometric representation scheme are introduced, and the B-spline cylinder scheme is evaluated according to these criteria.
Optimization and dynamics of protein-protein complexes using B-splines.
Gillilan, Richard E; Lilien, Ryan H
2004-10-01
A moving-grid approach for optimization and dynamics of protein-protein complexes is introduced, which utilizes cubic B-spline interpolation for rapid energy and force evaluation. The method allows for the efficient use of full electrostatic potentials joined smoothly to multipoles at long distance so that multiprotein simulation is possible. Using a recently published benchmark of 58 protein complexes, we examine the performance and quality of the grid approximation, refining cocrystallized complexes to within 0.68 Å RMSD of interface atoms, close to the optimum 0.63 Å produced by the underlying MMFF94 force field. We quantify the theoretical statistical advantage of using minimization in a stochastic search in the case of two rigid bodies, and contrast it with the underlying cost of conjugate gradient minimization using B-splines. The volumes of conjugate gradient minimization basins of attraction in cocrystallized systems are generally orders of magnitude larger than well volumes based on energy thresholds needed to discriminate native from nonnative states; nonetheless, computational cost is significant. Molecular dynamics using B-splines is doubly efficient due to the combined advantages of rapid force evaluation and large simulation step sizes. Large basins localized around the native state and other possible binding sites are identifiable during simulations of protein-protein motion. In addition to providing increased modeling detail, B-splines offer new algorithmic possibilities that should be valuable in refining docking candidates and studying global complex behavior.
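The grid idea, tabulating the potential once and then evaluating energies and forces from the spline, can be sketched in one dimension; a Lennard-Jones pair potential stands in for the full MMFF94 electrostatics:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Tabulate a pair potential on a grid once, then evaluate energy and force
# from the cubic spline (1-D toy, not the MMFF94 force field).
r = np.linspace(0.9, 3.0, 200)
energy = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)   # Lennard-Jones form
spl = CubicSpline(r, energy)
dE = spl.derivative()                                # force is -dE/dr

r0 = 2.0 ** (1.0 / 6.0)                              # analytic LJ minimum
print(abs(float(dE(r0))) < 1e-2)                     # derivative ~ 0 at minimum
print(abs(float(spl(r0)) + 1.0) < 1e-3)              # well depth ~ -1
```

Because the spline is piecewise cubic, both the energy and its gradient are cheap closed-form evaluations, which is what enables the large dynamics step sizes mentioned above.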
Model building in nonproportional hazard regression.
Rodríguez-Girondo, Mar; Kneib, Thomas; Cadarso-Suárez, Carmen; Abu-Assi, Emad
2013-12-30
Recent developments of statistical methods allow for very flexible modeling of covariates affecting survival times via the hazard rate, including the inspection of possible time-dependent associations. Despite their immediate appeal in terms of flexibility, these models typically introduce additional difficulties when a subset of covariates and the corresponding modeling alternatives have to be chosen, that is, for building the most suitable model for given data. This is particularly true when potentially time-varying associations are present. We propose a piecewise exponential representation of the original survival data that links hazard regression with estimation schemes based on the Poisson likelihood, making recent advances in model building for exponential family regression accessible also in the nonproportional hazard regression context. A two-stage stepwise selection approach, an approach based on doubly penalized likelihood, and a componentwise functional gradient descent approach are adapted to the piecewise exponential regression problem. These three techniques are compared via an intensive simulation study. An application to prognosis after discharge for patients who suffered a myocardial infarction supplements the simulation to demonstrate the pros and cons of the approaches in real data analyses.
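The piecewise exponential link works by expanding each subject's follow-up into person-period rows, one per time interval, each carrying an event indicator and an exposure that enters the Poisson model as a log-offset. A minimal sketch of that expansion; the cut points and subjects are invented:

```python
import numpy as np

# Person-period expansion for a piecewise exponential model: split each
# subject's follow-up at fixed cut points (illustrative data, not the study's).
cuts = np.array([0.0, 1.0, 2.0, 3.0])          # interval boundaries (years)
subjects = [(2.5, 1), (0.7, 0), (3.0, 1)]      # (follow-up time, event flag)

rows = []
for time, event in subjects:
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], cuts[j + 1]
        if time <= lo:
            break                               # no exposure in later intervals
        exposure = min(time, hi) - lo           # time at risk in this interval
        died_here = int(event and lo < time <= hi)
        rows.append((j, exposure, died_here))

# Subject 1 (t=2.5, event) yields exposures 1.0, 1.0, 0.5 with the event
# recorded in the last interval.
for row in rows:
    print(row)
```

A Poisson regression of `died_here` on interval indicators and covariates, with `log(exposure)` as offset, then reproduces the piecewise exponential hazard likelihood.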
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint.
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2016-04-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint.
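Quantile regression rests on the pinball (check) loss: for level tau, the constant minimizing the average pinball loss is the empirical tau-quantile, which is what lets the kernelized version estimate conditional quantiles. A minimal numerical check of that property:

```python
import numpy as np

# Pinball (check) loss for quantile level tau.
def pinball(residual, tau):
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

rng = np.random.default_rng(5)
y = rng.standard_normal(2000)
tau = 0.9

# Minimize the average pinball loss over constant predictions on a grid;
# the minimizer should match the empirical 0.9-quantile.
grid = np.linspace(-3, 3, 601)
losses = [float(pinball(y - c, tau).mean()) for c in grid]
best = float(grid[int(np.argmin(losses))])

print(round(best, 2), round(float(np.quantile(y, tau)), 2))
```

In the RKHS setting, the constant is replaced by a kernel expansion and the data sparsity constraint of the paper thresholds that expansion's coefficients.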
Flickner, M; Hafner, J; Rodriguez, E J; Sanz, J C
1996-01-01
Presents a new covariant basis, dubbed the quasi-orthogonal Q-spline basis, for the space of n-degree periodic uniform splines with k knots. This basis is obtained analogously to the B-spline basis by scaling and periodically translating a single spline function of bounded support. The construction hinges on an important theorem involving the asymptotic behavior (in the dimension) of the inverse of banded Toeplitz matrices. The authors show that the Gram matrix for this basis is nearly diagonal, hence, the name "quasi-orthogonal". The new basis is applied to the problem of approximating closed digital curves in 2D images by least-squares fitting. Since the new spline basis is almost orthogonal, the least-squares solution can be approximated by decimating a convolution between a resolution-dependent kernel and the given data. The approximating curve is expressed as a linear combination of the new spline functions and new "control points". Another convolution maps these control points to the classical B-spline control points. A generalization of the result has relevance to the solution of regularized fitting problems.
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, relatively new statistical methods for assessing differential effects, by comparing results to using an interactive term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design.
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2017-03-01
Freeform surfaces like B-splines have proven to be a suitable tool to model laser scanner point clouds and to form the basis for an areal data analysis, for example an areal deformation analysis. A variety of parameters determine the B-spline's appearance, its complexity being mostly determined by the number of control points. Usually, this parameter type is chosen by intuitive trial-and-error procedures. In [10] the problem of finding an alternative to these trial-and-error procedures was addressed for the case of B-spline curves: the task of choosing the optimal number of control points was interpreted as a model selection problem. Two model selection criteria, the Akaike and the Bayesian Information Criterion, were used to identify the B-spline curve with the optimal number of control points from a set of candidate B-spline models. In order to overcome the drawbacks of the information criteria, an alternative approach based on statistical learning theory was developed. The criteria were evaluated by means of simulated data sets. The present paper continues these investigations. Where necessary, the methods proposed in [10] are extended to areal approaches so that they can be used to determine the optimal number of B-spline surface control points. Furthermore, the methods are evaluated by means of real laser scanner data sets rather than simulated ones. The application of these methods to B-spline surfaces reveals the datum problem of such surfaces, meaning that the location and number of control points of two B-spline surfaces are only comparable if they are based on the same parameterization. First investigations to solve this problem are presented.
Metz, C T; Klein, S; Schaap, M; van Walsum, T; Niessen, W J
2011-04-01
A registration method for motion estimation in dynamic medical imaging data is proposed. Registration is performed directly on the dynamic image, thus avoiding a bias towards a specifically chosen reference time point. Both spatial and temporal smoothness of the transformations are taken into account. Optionally, cyclic motion can be imposed, which can be useful for visualization (viewing the segmentation sequentially) or model building purposes. The method is based on a 3D (2D+time) or 4D (3D+time) free-form B-spline deformation model, a similarity metric that minimizes the intensity variances over time and constrained optimization using a stochastic gradient descent method with adaptive step size estimation. The method was quantitatively compared with existing registration techniques on synthetic data and 3D+t computed tomography data of the lungs. This showed subvoxel accuracy while delivering smooth transformations, and high consistency of the registration results. Furthermore, the accuracy of semi-automatic derivation of left ventricular volume curves from 3D+t computed tomography angiography data of the heart was evaluated. On average, the deviation from the curves derived from the manual annotations was approximately 3%. The potential of the method for other imaging modalities was shown on 2D+t ultrasound and 2D+t magnetic resonance images. The software is publicly available as an extension to the registration package elastix.
NASA Astrophysics Data System (ADS)
Xia, Peng; Tahara, Tatsuki; Kakue, Takashi; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu
2013-03-01
To improve the quality of reconstructed images, we apply bicubic interpolation and B-spline interpolation to parallel phase-shifting digital holography for the first time. The effectiveness of bilinear interpolation, bicubic interpolation, and B-spline interpolation in parallel phase-shifting digital holography is shown by a numerical simulation. In the simulation, applying bicubic interpolation and B-spline interpolation decreased the root-mean-square error of the reconstructed image by 12.6% and 11.9%, respectively.
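The gain from higher-order interpolation can be reproduced on a toy image: downsample a smooth field, re-enlarge it with bilinear (order 1) and cubic B-spline (order 3) interpolation, and compare root-mean-square errors. The image and the factor of two are illustrative, not the holography setup:

```python
import numpy as np
from scipy.ndimage import zoom

# Smooth 128x128 test image (a stand-in for a hologram reconstruction).
x = np.linspace(0.0, 2.0 * np.pi, 128)
img = np.sin(x)[:, None] * np.cos(x)[None, :]

small = img[::2, ::2]                         # simulated low-resolution capture
up1 = zoom(small, 2, order=1)[:128, :128]     # bilinear interpolation
up3 = zoom(small, 2, order=3)[:128, :128]     # cubic B-spline interpolation

def rmse(a):
    return float(np.sqrt(np.mean((a - img) ** 2)))

print(rmse(up3) < rmse(up1))                  # cubic has the lower error
```

Both upsamplings see the same 64x64 data; only the interpolation kernel differs, isolating the effect the simulation above measures.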
Error bounds in cascading regressions
Karlinger, M.R.; Troutman, B.M.
1985-01-01
Cascading regressions is a technique for predicting a value of a dependent variable when no paired measurements exist to perform a standard regression analysis. Biases in the coefficients of a cascaded-regression line, as well as the error variance of points about the line, are functions of the correlation coefficient between the dependent and independent variables. Although this correlation cannot be computed because of the lack of paired data, bounds can be placed on errors through the required properties of the correlation coefficient. The potential mean-squared error of a cascaded-regression prediction can be large, as illustrated through an example using geomorphologic data. © 1985 Plenum Publishing Corporation.
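The cascading idea, chaining y ~ z and z ~ x fits from disjoint samples when no paired (x, y) data exist, can be sketched with ordinary least squares; the linear relations and sample sizes are invented:

```python
import numpy as np

# No paired (x, y) data: y is predicted by chaining a z ~ x fit from one
# sample with a y ~ z fit from another (illustrative linear relations).
rng = np.random.default_rng(6)
x = rng.uniform(0.0, 10.0, 500)
z = 2.0 * x + rng.normal(0.0, 1.0, 500)       # true z-x relation
y = 3.0 * z + rng.normal(0.0, 1.0, 500)       # true y-z relation

# Pretend the two relations were estimated from disjoint samples.
b_zx = np.polyfit(x[:250], z[:250], 1)        # z ~ x from the first sample
b_yz = np.polyfit(z[250:], y[250:], 1)        # y ~ z from the second sample

# Cascade: plug the predicted z into the fitted y-z line.
y_hat = np.polyval(b_yz, np.polyval(b_zx, x))
corr = float(np.corrcoef(y_hat, y)[0, 1])
print(round(corr, 2))
```

The error of the cascaded prediction compounds the residual variance of both fitted lines, which is why the paper's bounds on the unobservable x-y correlation matter.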
Churpek, Matthew M; Yuen, Trevor C; Winslow, Christopher; Meltzer, David O; Kattan, Michael W; Edelson, Dana P
2016-01-01
OBJECTIVE: Machine learning methods are flexible prediction algorithms that may be more accurate than conventional regression. We compared the accuracy of different techniques for detecting clinical deterioration on the wards in a large, multicenter database. DESIGN: Observational cohort study. SETTING: Five hospitals, from November 2008 until January 2013. PATIENTS: Hospitalized ward patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Demographic variables, laboratory values, and vital signs were utilized in a discrete-time survival analysis framework to predict the combined outcome of cardiac arrest, intensive care unit transfer, or death. Two logistic regression models (one using linear predictor terms and a second utilizing restricted cubic splines) were compared to several different machine learning methods. The models were derived in the first 60% of the data by date and then validated in the next 40%. For model derivation, each event time window was matched to a non-event window. All models were compared to each other and to the Modified Early Warning Score (MEWS), a commonly cited early warning score, using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patients were admitted, and 424 cardiac arrests, 13,188 intensive care unit transfers, and 2,840 deaths occurred in the study. In the validation dataset, the random forest model was the most accurate model (AUC 0.80 [95% CI 0.80-0.80]). The logistic regression model with spline predictors was more accurate than the model utilizing linear predictors (AUC 0.77 vs 0.74; p<0.01), and all models were more accurate than the MEWS (AUC 0.70 [95% CI 0.70-0.70]). CONCLUSIONS: In this multicenter study, we found that several machine learning methods more accurately predicted clinical deterioration than logistic regression. Use of detection algorithms derived from these techniques may result in improved identification of critically ill patients on the wards.
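The advantage of spline predictors over linear terms in a logistic model is easy to reproduce: a U-shaped risk curve (say, risk high at both extremes of a vital sign) is invisible to a linear term but captured by a spline basis. An illustrative sketch on simulated data, using a simple truncated-power cubic basis rather than the authors' restricted cubic splines:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated U-shaped risk: probability of the event rises at both extremes.
rng = np.random.default_rng(7)
x = rng.uniform(-3.0, 3.0, 4000)
p = 1.0 / (1.0 + np.exp(-(x ** 2 - 2.0)))
yv = rng.binomial(1, p)

def cubic_basis(v, knots=(-1.5, 0.0, 1.5)):
    # Truncated power basis for a cubic spline (illustrative, not restricted).
    cols = [v, v ** 2, v ** 3] + [np.maximum(v - k, 0.0) ** 3 for k in knots]
    return np.column_stack(cols)

lin = LogisticRegression(max_iter=2000).fit(x[:2000, None], yv[:2000])
spl = LogisticRegression(max_iter=2000).fit(cubic_basis(x[:2000]), yv[:2000])

auc_lin = roc_auc_score(yv[2000:], lin.predict_proba(x[2000:, None])[:, 1])
auc_spl = roc_auc_score(yv[2000:], spl.predict_proba(cubic_basis(x[2000:]))[:, 1])
print(auc_spl > auc_lin)
```

The linear model's AUC stays near chance on this symmetric risk curve, while the spline-basis model recovers most of the available discrimination, mirroring the 0.77 vs 0.74 gap reported above in spirit.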
Epidemiology of CKD Regression in Patients under Nephrology Care.
Borrelli, Silvio; Leonardis, Daniela; Minutolo, Roberto; Chiodini, Paolo; De Nicola, Luca; Esposito, Ciro; Mallamaci, Francesca; Zoccali, Carmine; Conte, Giuseppe
2015-01-01
Chronic Kidney Disease (CKD) regression is considered an infrequent renal outcome, limited to early stages, and associated with higher mortality. However, the prevalence, prognosis and clinical correlates of CKD regression remain undefined in the setting of nephrology care. This is a multicenter prospective study in 1418 patients with established CKD (eGFR: 60-15 ml/min/1.73 m²) under nephrology care in 47 outpatient clinics in Italy for at least one year. We defined CKD regressors as having a ΔGFR ≥0 ml/min/1.73 m²/year. ΔGFR was estimated as the absolute difference between eGFR measured at baseline and at the follow-up visit after 18-24 months. Outcomes were End Stage Renal Disease (ESRD) and overall-causes mortality. 391 patients (27.6%) were identified as regressors, as they showed an eGFR increase between the baseline visit in the renal clinic and the follow-up visit. In multivariate regression analyses, regressor status was not associated with CKD stage. Low proteinuria was the main factor associated with CKD regression, accounting per se for 48% of the likelihood of this outcome. Lower systolic blood pressure, higher BMI and absence of autosomal polycystic kidney disease (PKD) were additional predictors of CKD regression. In regressors, ESRD risk was 72% lower (HR: 0.28; 95% CI 0.14-0.57; p<0.0001), while mortality risk did not differ from that in non-regressors (HR: 1.16; 95% CI 0.73-1.83; p = 0.540). Spline models showed that the reduction of ESRD risk associated with positive ΔGFR was attenuated in advanced CKD stages. CKD regression occurs in about one-fourth of patients receiving renal care in nephrology units and correlates with low proteinuria, lower blood pressure and the absence of PKD. This condition portends a better renal prognosis, mostly in earlier CKD stages, with no excess risk of mortality.
Logistic Regression: Concept and Application
ERIC Educational Resources Information Center
Cokluk, Omay
2010-01-01
The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, the sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…
Tenderholt, A.; Hedman, B.; Hodgson, K.O.
2007-01-08
PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data.
NASA Technical Reports Server (NTRS)
Wahba, G.
1982-01-01
Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.
McCurdy, C. William; Horner, Daniel A.; Rescigno, Thomas N.; Martin, Fernando
2004-02-19
Calculations of absolute triple differential and single differential cross sections for helium double photoionization are performed using an implementation of exterior complex scaling in B-splines. Results for cross sections, well-converged in partial waves, are presented and compared with both experiment and earlier theoretical calculations. These calculations establish the practicality and effectiveness of the complex B-spline approach to calculations of double ionization of atomic and molecular systems.
Semi-parametric analysis of dynamic contrast-enhanced MRI using Bayesian P-splines.
Schmid, Volker J; Whitcher, Brandon; Yang, Guang-Zhong
2006-01-01
Current approaches to quantitative analysis of DCE-MRI with non-linear models involve the convolution of an arterial input function (AIF) with the contrast agent concentration at a voxel or regional level. Full quantification provides meaningful biological parameters but is complicated by issues related to convergence, (de-)convolution of the AIF, and goodness of fit. To overcome these problems, this paper presents a penalized spline smoothing approach to model the data in a semi-parametric way. With this method, the AIF is convolved with a set of B-splines to produce the design matrix, and modeling of the resulting deconvolved biological parameters is obtained in a way that is similar to the parametric models. Further kinetic parameters are obtained by fitting a non-linear model to the estimated response function, and detailed validation of the method, with both simulated and in vivo data, is presented.
The design and characterization of wideband spline-profiled feedhorns for Advanced ACTPol
NASA Astrophysics Data System (ADS)
Simon, Sara M.; Austermann, Jason; Beall, James A.; Choi, Steve K.; Coughlin, Kevin P.; Duff, Shannon M.; Gallardo, Patricio A.; Henderson, Shawn W.; Hills, Felicity B.; Ho, Shuay-Pwu Patty; Hubmayr, Johannes; Josaitis, Alec; Koopman, Brian J.; McMahon, Jeff J.; Nati, Federico; Newburgh, Laura; Niemack, Michael D.; Salatino, Maria; Schillaci, Alessandro; Schmitt, Benjamin L.; Staggs, Suzanne T.; Vavagiakis, Eve M.; Ward, Jonathan; Wollack, Edward J.
2016-07-01
Advanced ACTPol (AdvACT) is an upgraded camera for the Atacama Cosmology Telescope (ACT) that will measure the cosmic microwave background in temperature and polarization over a wide range of angular scales and five frequency bands from 28-230 GHz. AdvACT will employ four arrays of feedhorn-coupled, polarization-sensitive multichroic detectors. To accommodate the higher pixel packing densities necessary to achieve AdvACT's sensitivity goals, we have developed and optimized wideband spline-profiled feedhorns for the AdvACT multichroic arrays that maximize coupling efficiency while carefully controlling polarization systematics. We present the design, fabrication, and testing of wideband spline-profiled feedhorns for the multichroic arrays of AdvACT.
B-spline parameterization of spatial response in a monolithic scintillation camera
NASA Astrophysics Data System (ADS)
Solovov, V.; Morozov, A.; Chepel, V.; Domingos, V.; Martins, R.
2016-09-01
A framework for parameterization of the light response functions (LRFs) in a scintillation camera is presented. It is based on approximation of the measured or simulated photosensor response with weighted sums of uniform cubic B-splines or their tensor products. The LRFs represented in this way are smooth, computationally inexpensive to evaluate and require much less computer memory than non-parametric alternatives. The parameters are found in a straightforward way by the linear least squares method. Several techniques that allow to reduce the storage and processing power requirements were developed. A software library for fitting simulated and measured light response with spline functions was developed and integrated into an open source software package ANTS2 designed for simulation and data processing for Anger camera type detectors.
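The fitting step described above, a linear least squares fit of a measured response to a weighted sum of uniform cubic B-splines, can be sketched in a few lines. This is an illustrative one-dimensional reconstruction on synthetic data, not code from ANTS2; the knot layout, test response, and noise level are assumptions.

```python
# Hedged sketch: represent a 1-D "light response" as a weighted sum of
# cubic B-splines, with the weights found by linear least squares.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)                  # sensor coordinate (synthetic)
response = np.exp(-0.5 * (x - 4.0) ** 2) + 0.02 * rng.normal(size=x.size)

k = 3                                            # cubic splines
inner = np.linspace(0.0, 10.0, 12)               # uniform knots (assumed layout)
t = np.concatenate(([inner[0]] * k, inner, [inner[-1]] * k))
n_coef = len(t) - k - 1

# Design matrix: column j is the j-th B-spline evaluated at the sample points.
B = BSpline.design_matrix(x, t, k).toarray()

# Linear least squares for the spline weights.
coef, *_ = np.linalg.lstsq(B, response, rcond=None)
lrf = BSpline(t, coef, k)                        # smooth, cheap-to-evaluate fit

rms = np.sqrt(np.mean((lrf(x) - response) ** 2))
```

Because the model is linear in the coefficients, no iterative optimizer is needed, which is what makes this parameterization inexpensive compared with non-parametric alternatives.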
Mat Zin, Shazalina; Abbas, Muhammad; Majid, Ahmad Abd; Ismail, Ahmad Izani Md
2014-01-01
The generalized nonlinear Klein-Gordon equation plays an important role in quantum mechanics. In this paper, a new three-time-level implicit approach based on cubic trigonometric B-splines is presented for the approximate solution of this equation with Dirichlet boundary conditions. The usual finite difference approach is used to discretize the time derivative, while the cubic trigonometric B-spline is applied as an interpolating function in the space dimension. Several examples are discussed to exhibit the feasibility and capability of the approach. The absolute errors and L∞ error norms are also computed at different times to assess the performance of the proposed approach, and the results were found to be in good agreement with known solutions and with existing schemes in the literature.
Two-dimensional mesh embedding for Galerkin B-spline methods
NASA Technical Reports Server (NTRS)
Shariff, Karim; Moser, Robert D.
1995-01-01
A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.
Gu, Renliang; Dogandžić, Aleksandar
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.
Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco
2015-04-20
Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which exploits the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Spline based least squares integration for two-dimensional shape or wavefront reconstruction
NASA Astrophysics Data System (ADS)
Huang, Lei; Xue, Junpeng; Gao, Bo; Zuo, Chao; Idir, Mourad
2017-04-01
In this work, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes over a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as the final result. Numerical simulations verify that the proposed method has smaller algorithmic errors than two other existing methods used for comparison; its performance is better at the boundaries in particular. The influence of noise is studied by adding white Gaussian noise to the slope data. Experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
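The core idea, fitting slope measurements with piecewise polynomials and then integrating the fit analytically, can be illustrated in one dimension (the paper treats two-dimensional slope data). The test function and sampling grid below are assumptions for the sketch.

```python
# Minimal 1-D illustration: fit measured slopes with a cubic spline, then use
# the spline's analytical antiderivative to recover the height profile
# (up to an integration constant, fixed here so height(0) = 0).
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 2.0 * np.pi, 100)
slope = np.cos(x)                     # measured slope of the true height sin(x)

sp = CubicSpline(x, slope)            # piecewise-polynomial fit of the slopes
height = sp.antiderivative()(x)       # analytic integration of each polynomial piece

err = np.max(np.abs(height - np.sin(x)))
```

Integrating the fitted polynomials exactly, instead of summing finite differences, is what gives spline-based integration its accuracy advantage near boundaries.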
Convergence of a Fourier-spline representation for the full-turn map generator
Warnock, R.L.; Ellison, J.A.
1997-04-01
Single-turn data from a symplectic tracking code can be used to construct a canonical generator for a full-turn symplectic map. This construction has been carried out numerically in canonical polar coordinates, the generator being obtained as a Fourier series in angle coordinates with coefficients that are spline functions of action coordinates. Here the authors provide a mathematical basis for the procedure, finding sufficient conditions for the existence of the generator and convergence of the Fourier-spline expansion. The analysis gives insight concerning analytic properties of the generator, showing that in general there are branch points as a function of angle and inverse square root singularities at the origin as a function of action.
Goh, Joan; Hj. M. Ali, Norhashidah
2015-01-01
Over the last few decades, cubic splines have been widely used to approximate differential equations due to their ability to produce highly accurate solutions. In this paper, the numerical solution of a two-dimensional elliptic partial differential equation is treated by a specific cubic spline approximation in the x-direction and finite difference in the y-direction. A four point explicit group (EG) iterative scheme with an acceleration tool is then applied to the obtained system. The formulation and implementation of the method for solving physical problems are presented in detail. The computational complexity is also discussed, and the comparative results are tabulated to illustrate the efficiency of the proposed method. PMID:26182211
Surface evaluation with Ronchi test by using Malacara formula, genetic algorithms, and cubic splines
NASA Astrophysics Data System (ADS)
Cordero-Dávila, Alberto; González-García, Jorge
2010-08-01
In the manufacturing process of an optical surface with rotational symmetry, the ideal ronchigram is simulated and compared with the experimental ronchigram. From this comparison the technician, based on their experience, estimates the error on the surface. Quantitatively, the error on the surface can be described by a polynomial e(ρ²), and the coefficients can be estimated from data of the ronchigrams (real and ideal) by solving a system of nonlinear differential equations related to the Malacara formula of the transversal aberration. To avoid the problems inherent in the use of polynomials, it is proposed to describe the errors on the surface by means of cubic splines. The coefficients of each spline are estimated from a discrete set of errors (ρi, ei), and these are evaluated by means of genetic algorithms to reproduce the experimental ronchigram starting from the ideal one.
NASA Astrophysics Data System (ADS)
Mohammadi, Reza
2014-03-01
In this study, the exponential spline scheme is implemented to find a numerical solution of the nonlinear Schrödinger equations with constant and variable coefficients. The method is based on the Crank-Nicolson formulation for time integration and exponential spline functions for space integration. The error analysis, existence, stability, uniqueness and convergence properties of the method are investigated using the energy method. We show that the method is unconditionally stable and accurate of orders O(k + kh + h²) and O(k + kh + h⁴). The method is tested on three examples using the cubic nonlinear Schrödinger equation with constant and variable coefficients and the Gross-Pitaevskii equation. The computed results are compared wherever possible with those already available in the literature. The results show that the derived method is easily implemented and approximates the exact solution very well.
Spline model of the high latitude scintillation based on in situ satellite data
NASA Astrophysics Data System (ADS)
Priyadarshi, S.; Wernik, A. W.
2013-12-01
We present a spline model for high latitude ionospheric scintillation using satellite in situ measurements made by the Dynamic Explorer 2 (DE 2) satellite. DE 2 measurements give observations only along the satellite orbit, but our interpolation model fills the gaps between the satellite orbits. This analytical model is based on products of cubic B-splines, with coefficients determined by a least squares fit to the binned data and constrained to make the fit periodic in 24 hours of geomagnetic local time and in 360 degrees of invariant longitude, and dependent on geomagnetic indices and solar radio flux. Discussion of our results clearly shows the seasonal and diurnal behavior of ionospheric parameters important in scintillation modeling for different geophysical and solar activity conditions. We also show that results obtained from our analytical model match observations obtained from in situ measurements.
Numerical method using cubic B-spline for a strongly coupled reaction-diffusion system.
Abbas, Muhammad; Majid, Ahmad Abd; Md Ismail, Ahmad Izani; Rashid, Abdur
2014-01-01
In this paper, a numerical method for the solution of a strongly coupled reaction-diffusion system, with suitable initial and Neumann boundary conditions, by using cubic B-spline collocation scheme on a uniform grid is presented. The scheme is based on the usual finite difference scheme to discretize the time derivative while cubic B-spline is used as an interpolation function in the space dimension. The scheme is shown to be unconditionally stable using the von Neumann method. The accuracy of the proposed scheme is demonstrated by applying it on a test problem. The performance of this scheme is shown by computing L∞ and L2 error norms for different time levels. The numerical results are found to be in good agreement with known exact solutions.
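A much-simplified sketch of this kind of scheme, with the heat equation u_t = u_xx standing in for the coupled reaction-diffusion system: finite differences (Crank-Nicolson) in time combined with cubic B-spline collocation in space at the Greville abscissae, with homogeneous Dirichlet rather than Neumann boundary conditions for brevity. All parameter choices are illustrative, not taken from the paper.

```python
# Hedged sketch: cubic B-spline collocation in space, Crank-Nicolson in time,
# for u_t = u_xx on [0, 1] with u(0, t) = u(1, t) = 0 and u(x, 0) = sin(pi x).
import numpy as np
from scipy.interpolate import BSpline

k = 3
inner = np.linspace(0.0, 1.0, 21)                  # spatial knots (assumed grid)
t = np.concatenate(([0.0] * k, inner, [1.0] * k))  # clamped knot vector
n = len(t) - k - 1                                 # number of basis splines

# Greville abscissae: one collocation point per basis function.
xg = np.array([t[j + 1:j + k + 1].mean() for j in range(n)])

# Collocation matrices: values and second derivatives of each basis spline.
B = np.zeros((n, n))
D2 = np.zeros((n, n))
for j in range(n):
    c = np.zeros(n); c[j] = 1.0
    bj = BSpline(t, c, k)
    B[:, j] = bj(xg)
    D2[:, j] = bj.derivative(2)(xg)

dt, steps = 1e-3, 100
A = B - 0.5 * dt * D2                              # implicit (left) operator
M = B + 0.5 * dt * D2                              # explicit (right) operator
A[0], A[-1] = B[0], B[-1]                          # boundary rows enforce u = 0

coef = np.linalg.solve(B, np.sin(np.pi * xg))      # interpolate initial condition
for _ in range(steps):
    rhs = M @ coef
    rhs[0] = rhs[-1] = 0.0                         # Dirichlet values each step
    coef = np.linalg.solve(A, rhs)

# Compare with the exact solution u = exp(-pi^2 t) sin(pi x) at t = 0.1.
exact = np.exp(-np.pi ** 2 * dt * steps) * np.sin(np.pi * xg)
err = np.max(np.abs(B @ coef - exact))
```

As in the paper's scheme, the time discretization here is unconditionally stable, so the step size is limited by accuracy rather than stability.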
Ruberti, M.; Averbukh, V.; Decleva, P.
2014-10-28
We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for the Gaussian basis calculations, such as Cooper minima and high-energy tails, are found to be reproduced by the B-spline ADC in a very good agreement with the experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.
Shulga, Dmytro; Morozov, Oleksii; Hunziker, Patrick
2016-12-19
Optical Diffusion Tomography (ODT) is a modern non-invasive medical imaging modality which requires mathematical modelling of near-infrared light propagation in tissue. Solving the ODT forward problem equation accurately and efficiently is crucial. Typically, the forward problem is represented by a diffusion PDE and is solved using the Finite Element Method (FEM) on a mesh, which is often unstructured. Tensor B-spline signal processing has the attractive features of excellent interpolation and approximation properties, multiscale properties, and fast algorithms, and does not require meshing. This paper introduces Tensor B-spline methodology with arbitrary spline degree tailored to solve the ODT forward problem in an accurate and efficient manner. We show that our Tensor B-spline formulation induces efficient and highly parallelizable computational algorithms. Exploitation of B-spline properties for integration over irregular domains proved valuable. The Tensor B-spline solver was tested on standard problems and on synthetic medical data and compared to FEM, including state-of-the-art ODT forward solvers. Results show that 1) a significantly higher accuracy can be achieved with the same number of nodes, 2) fewer nodes are required to achieve a prespecified accuracy, 3) the algorithm converges in significantly fewer iterations to a given error. These findings support the value of Tensor B-spline methodology for high-performance ODT implementations. This may translate into advances in ODT imaging for biomedical research and clinical application.
Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method
NASA Technical Reports Server (NTRS)
Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.
1997-01-01
A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.
Analysis of myocardial motion using generalized spline models and tagged magnetic resonance images
NASA Astrophysics Data System (ADS)
Chen, Fang; Rose, Stephen E.; Wilson, Stephen J.; Veidt, Martin; Bennett, Cameron J.; Doddrell, David M.
2000-06-01
Heart wall motion abnormalities are the very sensitive indicators of common heart diseases, such as myocardial infarction and ischemia. Regional strain analysis is especially important in diagnosing local abnormalities and mechanical changes in the myocardium. In this work, we present a complete method for the analysis of cardiac motion and the evaluation of regional strain in the left ventricular wall. The method is based on the generalized spline models and tagged magnetic resonance images (MRI) of the left ventricle. The whole method combines dynamical tracking of tag deformation, simulating cardiac movement and accurately computing the regional strain distribution. More specifically, the analysis of cardiac motion is performed in three stages. Firstly, material points within the myocardium are tracked over time using a semi-automated snake-based tag tracking algorithm developed for this purpose. This procedure is repeated in three orthogonal axes so as to generate a set of one-dimensional sample measurements of the displacement field. The 3D-displacement field is then reconstructed from this sample set by using a generalized vector spline model. The spline reconstruction of the displacement field is explicitly expressed as a linear combination of a spline kernel function associated with each sample point and a polynomial term. Finally, the strain tensor (linear or nonlinear) with three direct components and three shear components is calculated by applying a differential operator directly to the displacement function. The proposed method is computationally effective and easy to perform on tagged MR images. The preliminary study has shown potential advantages of using this method for the analysis of myocardial motion and the quantification of regional strain.
Eye-Ball Rebuilding Using Splines with a View to Refractive Surgery Simulation
2001-07-01
Lamard, Mathieu
In ophthalmology, refractive surgery has experienced an important expansion for about fifteen years. It allows surgeons to correct different refractive errors…
A spline-based parameter estimation technique for static models of elastic structures
NASA Technical Reports Server (NTRS)
Dutt, P.; Ta'asan, S.
1989-01-01
The problem of identifying the spatially varying coefficient of elasticity using an observed solution to the forward problem is considered. Under appropriate conditions this problem can be treated as a first order hyperbolic equation in the unknown coefficient. Some continuous dependence results are developed for this problem and a spline-based technique is proposed for approximating the unknown coefficient, based on these results. The convergence of the numerical scheme is established and error estimates obtained.
Extracting dimensional geometric parameters from B-spline surface models of aircraft
NASA Technical Reports Server (NTRS)
Jayaram, U.; Myklebust, Arvid; Gelhausen, P.
1992-01-01
Research that creates techniques to automatically obtain dimensional geometric parameters from the nonuniform B-spline surface description of an object is presented. These techniques have been implemented successfully in the aircraft design software, ACSYNT, a computer-aided design system for conceptual aircraft design created at Virginia Tech and NASA Ames. The techniques created and implemented in this research are also of significance to general-purpose design.
A higher order non-polynomial spline method for fractional sub-diffusion problems
NASA Astrophysics Data System (ADS)
Li, Xuhao; Wong, Patricia J. Y.
2017-01-01
In this paper we shall develop a numerical scheme for a fractional sub-diffusion problem using parametric quintic splines. The solvability, convergence and stability of the scheme will be established, and it is shown that the convergence order is higher than that of some earlier work. We also present some numerical examples to illustrate the efficiency of the numerical scheme as well as to compare with other methods.
Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions
Jerome Blair
2008-05-15
An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
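The algorithm itself is not reproduced in the abstract; as a minimal, hypothetical sketch of the underlying idea (least-squares fitting of a cubic spline to noisy samples, with the smoothing factor standing in for the adaptive knot selection), using SciPy:

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
signal = np.sin(2 * np.pi * 3 * t)              # stand-in for the unknown waveform
noisy = signal + rng.normal(0.0, 0.2, t.size)   # oscilloscope-style noise

# Least-squares cubic-spline fit; the smoothing factor s controls how many
# knots FITPACK places (larger s -> fewer knots -> heavier smoothing).
sigma = 0.2
tck = splrep(t, noisy, k=3, s=t.size * sigma**2)
smoothed = splev(t, tck)

rms = lambda e: float(np.sqrt(np.mean(e**2)))
print(rms(noisy - signal), rms(smoothed - signal))
```

With the smoothing factor tied to the noise variance as above, the spline fit recovers the signal substantially better than the raw samples.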
Orthogonal cubic spline collocation method for the extended Fisher-Kolmogorov equation
NASA Astrophysics Data System (ADS)
Danumjaya, P.; Pani, Amiya K.
2005-02-01
A second-order splitting combined with orthogonal cubic spline collocation method is formulated and analysed for the extended Fisher-Kolmogorov equation. With the help of Lyapunov functional, a bound in maximum norm is derived for the semidiscrete solution. Optimal error estimates are established for the semidiscrete case. Finally, using the monomial basis functions we present the numerical results in which the integration in time is performed using RADAU 5 software library.
Isogeometric Divergence-conforming B-splines for the Darcy-Stokes-Brinkman Equations
2012-01-01
Keywords: Generalized Stokes equations, B-splines, Isogeometric analysis, Divergence-conforming discretizations. Only fragments of this entry survive: a bibliography item (G. Stadler, M. Gurnis, C. Burstedde, L. C. Wilcox, L. Alisic, and O. Ghattas, The dynamics of plate tectonics and mantle flow: from local to global) and an introduction passage noting that the no-slip boundary condition is enforced weakly using Nitsche's method.
B-spline soliton solution of the fifth order KdV type equations
NASA Astrophysics Data System (ADS)
Zahra, W. K.; Ouf, W. A.; El-Azab, M. S.
2013-10-01
In this paper, we develop a numerical solution based on sextic B-spline collocation method for solving the generalized fifth-order nonlinear evolution equations. Applying Von-Neumann stability analysis, the proposed technique is shown to be unconditionally stable. The accuracy of the presented method is demonstrated by a test problem. The numerical results are found to be in good agreement with the exact solution.
B-spline image model for energy minimization-based optical flow estimation.
Le Besnerais, Guy; Champagnat, Frédéric
2006-10-01
Robust estimation of the optical flow is addressed through a multiresolution energy minimization. It involves repeated evaluation of spatial and temporal gradients of image intensity, which usually rely on bilinear interpolation and image filtering. We propose to base both computations on a single pyramidal cubic B-spline model of image intensity. We show empirically improvements in convergence speed and estimation error and validate the resulting algorithm on real test sequences.
Rank regression: an alternative regression approach for data with outliers.
Chen, Tian; Tang, Wan; Lu, Ying; Tu, Xin
2014-10-01
Linear regression models are widely used in mental health and related health services research. However, the classic linear regression analysis assumes that the data are normally distributed, an assumption that is not met by the data obtained in many studies. One method of dealing with this problem is to use semi-parametric models, which do not require that the data be normally distributed. But semi-parametric models are quite sensitive to outlying observations, so the generated estimates are unreliable when study data includes outliers. In this situation, some researchers trim the extreme values prior to conducting the analysis, but the ad-hoc rules used for data trimming are based on subjective criteria so different methods of adjustment can yield different results. Rank regression provides a more objective approach to dealing with non-normal data that includes outliers. This paper uses simulated and real data to illustrate this useful regression approach for dealing with outliers and compares it to the results generated using classical regression models and semi-parametric regression models.
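Rank regression itself is not spelled out in the abstract; the Theil-Sen estimator, a simple rank-based robust slope (the median of all pairwise slopes), illustrates why such methods resist outliers where ordinary least squares does not. All data below are made up:

```python
from statistics import median

def ols_slope(x, y):
    # Classical least-squares slope: sensitive to outliers.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def theil_sen_slope(x, y):
    # Median of all pairwise slopes: a few gross outliers cannot move it far.
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))]
    return median(slopes)

x = list(range(10))
y = [2 * xi + 1 for xi in x]     # true line: slope 2, intercept 1
y[-1] = 100                      # a single gross outlier

print(ols_slope(x, y))           # dragged far from the true slope 2
print(theil_sen_slope(x, y))     # stays at 2.0
```

No trimming rule is needed: the median of pairwise slopes ignores the outlier automatically.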
Adaptive Local Linear Regression with Application to Printer Color Management
2008-01-01
... digital images has recently led to increased consumer demand for accurate color reproduction. Given a CIELAB color one would like to reproduce, the color management problem is to determine what RGB color one must send the printer to minimize the error between the desired CIELAB color and the CIELAB ... values formed the test samples. This process guaranteed that the CIELAB test samples were in the gamut for each printer, but each printer had a ...
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the lm R command and get coefficient estimates, the standard error of the error, R2, residuals ... In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
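A Python analogue of the session's lm fit (coefficients, residual standard error, R²) can be sketched as follows; the height data are invented for illustration, not Galton's:

```python
import numpy as np

# Toy parent/child heights loosely in the spirit of Galton's data (made up).
x = np.array([64.0, 65.5, 67.0, 68.0, 69.5, 70.0, 71.5, 72.0])
y = np.array([65.0, 66.0, 67.5, 67.0, 68.5, 69.0, 70.0, 70.5])

n = x.size
X = np.column_stack([np.ones(n), x])          # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # beta = (intercept, slope)
resid = y - X @ beta
rse = np.sqrt(resid @ resid / (n - 2))        # residual standard error
r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean())**2)
print(beta, rse, r2)
```

These are the same quantities lm's summary reports: coefficient estimates, residual standard error on n-2 degrees of freedom, and R².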
Kamensky, David; Hsu, Ming-Chen; Yu, Yue; Evans, John A; Sacks, Michael S; Hughes, Thomas J R
2017-02-01
This paper uses a divergence-conforming B-spline fluid discretization to address the long-standing issue of poor mass conservation in immersed methods for computational fluid-structure interaction (FSI) that represent the influence of the structure as a forcing term in the fluid subproblem. We focus, in particular, on the immersogeometric method developed in our earlier work, analyze its convergence for linear model problems, then apply it to FSI analysis of heart valves, using divergence-conforming B-splines to discretize the fluid subproblem. Poor mass conservation can manifest as effective leakage of fluid through thin solid barriers. This leakage disrupts the qualitative behavior of FSI systems such as heart valves, which exist specifically to block flow. Divergence-conforming discretizations can enforce mass conservation exactly, avoiding this problem. To demonstrate the practical utility of immersogeometric FSI analysis with divergence-conforming B-splines, we use the methods described in this paper to construct and evaluate a computational model of an in vitro experiment that pumps water through an artificial valve.
Comparing tongue shapes from ultrasound imaging using smoothing spline analysis of variance.
Davidson, Lisa
2006-07-01
Ultrasound imaging of the tongue is increasingly common in speech production research. However, there has been little standardization regarding the quantification and statistical analysis of ultrasound data. In linguistic studies, researchers may want to determine whether the tongue shape for an articulation under two different conditions (e.g., consonants in word-final versus word-medial position) is the same or different. This paper demonstrates how the smoothing spline ANOVA (SS ANOVA) can be applied to the comparison of tongue curves [Gu, Smoothing Spline ANOVA Models (Springer, New York, 2002)]. The SS ANOVA is a technique for determining whether or not there are significant differences between the smoothing splines that are the best fits for two data sets being compared. If the interaction term of the SS ANOVA model is statistically significant, then the groups have different shapes. Since the interaction may be significant even if only a small section of the curves is different (i.e., the tongue root is the same, but the tip of one group is raised), Bayesian confidence intervals are used to determine which sections of the curves are statistically different. SS ANOVAs are illustrated with some data comparing obstruents produced in word-final and word-medial coda position.
Validation and comparison of geostatistical and spline models for spatial stream networks.
Rushworth, A M; Peterson, E E; Ver Hoef, J M; Bowman, A W
2015-08-01
Scientists need appropriate spatial-statistical models to account for the unique features of stream network data. Recent advances provide a growing methodological toolbox for modelling these data, but general-purpose statistical software has only recently emerged, with little information about when to use different approaches. We implemented a simulation study to evaluate and validate geostatistical models that use continuous distances, and penalised spline models that use a finite discrete approximation for stream networks. Data were simulated from the geostatistical model, with performance measured by empirical prediction and fixed effects estimation. We found that both models were comparable in terms of squared error, with a slight advantage for the geostatistical models. Generally, both methods were unbiased and had valid confidence intervals. The most marked differences were found for confidence intervals on fixed-effect parameter estimates, where, for small sample sizes, the spline models underestimated variance. However, the penalised spline models were always more computationally efficient, which may be important for real-time prediction and estimation. Thus, decisions about which method to use must be influenced by the size and format of the data set, in addition to the characteristics of the environmental process and the modelling goals. ©2015 The Authors. Environmetrics published by John Wiley & Sons, Ltd.
A nonrational B-spline profiled horn with high displacement amplification for ultrasonic welding.
Nguyen, Huu-Tu; Nguyen, Hai-Dang; Uan, Jun-Yen; Wang, Dung-An
2014-12-01
A new horn with high displacement amplification for ultrasonic welding is developed. The profile of the horn is a nonrational B-spline curve with an open uniform knot vector. The ultrasonic actuation of the horn exploits the first longitudinal displacement mode of the horn. The horn is designed by an optimization scheme and finite element analyses. Performances of the proposed horn have been evaluated by experiments. The displacement amplification of the proposed horn is 41.4% and 8.6% higher than that of the traditional catenoidal horn and a Bézier-profile horn, respectively, with the same length and end surface diameters. The developed horn has a lower displacement amplification than the nonuniform rational B-spline profiled horn but a much smoother stress distribution. The developed horn, the catenoidal horn, and the Bézier horn are fabricated and used for ultrasonic welding of lap-shear specimens. The bonding strength of the joints welded by the open uniform nonrational B-spline (OUNBS) horn is the highest among the three horns for the various welding parameters considered. The locations of the failure mode and the distribution of voids in the specimens are investigated to explain the reason for the high bonding strength achieved by the OUNBS horn.
On developing B-spline registration algorithms for multi-core processors
NASA Astrophysics Data System (ADS)
Shackleford, J. A.; Kandasamy, N.; Sharp, G. C.
2010-11-01
Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.
Algebraic grid generation using tensor product B-splines. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Saunders, B. V.
1985-01-01
Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This work develops an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping, and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid, is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
Noise correction on LANDSAT images using a spline-like algorithm
NASA Technical Reports Server (NTRS)
Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.
1985-01-01
Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, that of using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost) the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the necessity of solving a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
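The paper's exact weighted-spline weights are not given in the abstract; a four-point cubic Lagrange estimate, which likewise uses only the four neighbouring lines per interval and needs no linear system, sketches the idea of reconstructing a lost scanline:

```python
def interp_missing(p_m2, p_m1, p_p1, p_p2):
    """Cubic Lagrange estimate of a lost line from the two lines above and
    the two lines below it (nodes at -2, -1, +1, +2; weights -1, 4, 4, -1 over 6)."""
    return (-p_m2 + 4.0 * p_m1 + 4.0 * p_p1 - p_p2) / 6.0

def repair_row(img, k):
    """Reconstruct row k of an image (list of rows) column by column."""
    return [interp_missing(img[k-2][j], img[k-1][j], img[k+1][j], img[k+2][j])
            for j in range(len(img[0]))]

# A synthetic image whose columns vary cubically with the row index r:
img = [[r**3 + 2 * r * c for c in range(5)] for r in range(6)]
print(repair_row(img, 3))   # reconstructs row 3: [27.0, 33.0, 39.0, 45.0, 51.0]
print(img[3])
```

Because the estimate is a cubic interpolant, it reproduces any cubic intensity trend exactly, unlike a simple vertical average.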
A Cubic B-Spline Approach for Inter-Transformation Between Potential Field and Gradient Data
NASA Astrophysics Data System (ADS)
Wang, B.; Gao, S. S.
2008-12-01
Traditionally, algorithms involving Fast Fourier Transforms (FFT) are used to calculate gradients from field data and vice versa. Because the popular FFT differentiation algorithms are prone to noise, expensive field campaigns are increasingly utilized to obtain gradient data. In areas with both field and gradient data, transformation facilitates comparison. In areas with only one kind of data, transformation facilitates interpretation by transforming the measured data into another form of data. We advance unified formulae for interpolation, differentiation and integration using cubic B-splines, and propose new space-domain approaches for 2D and 3D transformations from potential field data to potential-field gradient data and vice versa. We also advance spline-based continuation techniques. In the spline-based algorithms, the spacing can be either regular or irregular. Analyses using synthetic and real gravity and magnetic data show that the new algorithms have higher accuracy and are more noise-tolerant, and thus provide better insights into the nature of the sources, than the traditional FFT techniques.
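As a hedged illustration of the space-domain spline idea (SciPy's cubic splines, not the authors' formulae), sampled field values can be differentiated and integrated through a single spline representation:

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

x = np.linspace(0.0, 2.0 * np.pi, 200)
f = np.sin(x)                      # stand-in for a profile of gridded field data

spl = InterpolatedUnivariateSpline(x, f, k=3)   # cubic spline through samples
df = spl.derivative()(x)           # spline-based "gradient" of the field
F = spl.integral(0.0, np.pi)       # spline-based integral over [0, pi]

print(float(np.max(np.abs(df - np.cos(x)))))   # small differentiation error
print(F)                                        # close to the exact value 2
```

The same spline coefficients serve interpolation, differentiation and integration, which is the unification the abstract describes.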
Multiple Regression and Its Discontents
ERIC Educational Resources Information Center
Snell, Joel C.; Marsh, Mitchell
2012-01-01
Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches.
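The estimating-equation idea can be sketched for a scalar ODE dy/dt = -θy. Here the state values are taken as exact (in the actual method they would come from a penalized-spline fit to noisy data), and θ is recovered by regressing on the trapezoidal discretization:

```python
import numpy as np

# Trapezoidal estimating equation for dy/dt = -theta * y:
#   y_{i+1} - y_i = -theta * h * (y_i + y_{i+1}) / 2
theta_true = 0.5
t = np.linspace(0.0, 5.0, 501)
y = np.exp(-theta_true * t)        # exact states; normally a spline estimate

h = t[1] - t[0]
dy = y[1:] - y[:-1]                        # "response" in the regression
z = -h * (y[:-1] + y[1:]) / 2.0            # regressor built from the states
theta_hat = float((z @ dy) / (z @ z))      # one-parameter least squares

print(theta_hat)                           # close to theta_true = 0.5
```

No ODE solver is run inside the estimation loop; the discretization formula itself becomes the regression model, which is the source of the method's computational savings.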
Wrong Signs in Regression Coefficients
NASA Technical Reports Server (NTRS)
McGee, Holly
1999-01-01
When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
Chen, Sheng; Hong, Xia; Khalaf, Emad F; Alsaadi, Fuad E; Harris, Chris J
2016-09-23
The complex-valued (CV) B-spline neural network approach offers a highly effective means for identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems, while imposing a similar complexity. This paper reviews the optimality of the CV B-spline neural network approach. Advantages of the B-spline neural network approach as compared with the polynomial-based modeling approach are extensively discussed, and the effectiveness of the CV neural network-based approach is demonstrated in a real-world application. More specifically, we evaluate the comparative performance of the CV B-spline and polynomial-based approaches for the nonlinear iterative frequency-domain decision feedback equalization (NIFDDFE) of single-carrier Hammerstein channels. Our results confirm the superior performance of the CV B-spline-based NIFDDFE over its CV polynomial-based counterpart.
NASA Astrophysics Data System (ADS)
Shen, Xiang; Liu, Bin; Li, Qing-Quan
2017-03-01
The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates
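A minimal sketch of the core step, fitting a thin-plate spline correction surface to residuals observed at ground control points, can be written with SciPy; all coordinates and residuals below are synthetic, and this is not the authors' processing pipeline:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Hypothetical GCP image coordinates and the RPC residuals observed there.
gcp_xy = rng.uniform(0.0, 1000.0, size=(21, 2))
bias = np.sin(gcp_xy[:, 0] / 300.0) + 0.001 * gcp_xy[:, 1]   # smooth distortion
resid = np.column_stack([bias, 0.5 * bias])                  # (dx, dy) at each GCP

# Thin-plate spline correction surface fitted to the GCP residuals;
# smoothing=0 (the default) makes it interpolate the GCPs exactly.
tps = RBFInterpolator(gcp_xy, resid, kernel='thin_plate_spline')
corr = tps(gcp_xy)
print(float(np.max(np.abs(corr - resid))))   # ~0: the GCPs are reproduced
```

Evaluating `tps` at arbitrary image coordinates then gives the bias correction to apply to the vendor RPC projection, which is the flexibility the abstract credits to the thin-plate spline over affine or quadratic models.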
Binder, H; Sauerbrei, W
2010-03-30
When global techniques, based on fractional polynomials (FPs), are employed for modeling potentially nonlinear effects of several continuous covariates on a response, accessible model equations are obtained. However, local features might be missed. Therefore, a procedure is introduced, which systematically checks model fits, obtained by the multivariable fractional polynomial (MFP) approach, for overlooked local features. Statistically significant local polynomials are then parsimoniously added. This approach, called MFP + L, is seen to result in an effective control of the Type I error with respect to the addition of local components in a small simulation study with univariate and multivariable settings. Prediction performance is compared with that of a penalized regression spline technique. In a setting unfavorable for FPs, the latter outperforms the MFP approach, if there is much information in the data. However, the addition of local features reduces this performance difference. There is only a small detrimental effect in settings where the MFP approach performs better. In an application example with children's respiratory health data, fits from the spline-based approach indicate many local features, but MFP + L adds only few significant features, which seem to have good support in the data. The proposed approach may be expected to be superior in settings with local features, but retains the good properties of the MFP approach in a large number of settings where global functions are sufficient.
Embedded Sensors for Measuring Surface Regression
NASA Technical Reports Server (NTRS)
Gramer, Daniel J.; Taagen, Thomas J.; Vermaak, Anton G.
2006-01-01
The development and evaluation of new hybrid and solid rocket motors requires accurate characterization of the propellant surface regression as a function of key operational parameters. These characteristics establish the propellant flow rate and are prime design drivers affecting the propulsion system geometry, size, and overall performance. There is a similar need for the development of advanced ablative materials, and the use of conventional ablatives exposed to new operational environments. The Miniature Surface Regression Sensor (MSRS) was developed to serve these applications. It is designed to be cast or embedded in the material of interest and regresses along with it. During this process, the resistance of the sensor is related to its instantaneous length, allowing the real-time thickness of the host material to be established. The time derivative of this data reveals the instantaneous surface regression rate. The MSRS could also be adapted to perform similar measurements for a variety of other host materials when it is desired to monitor thicknesses and/or regression rate for purposes of safety, operational control, or research. For example, the sensor could be used to monitor the thicknesses of brake linings or racecar tires and indicate when they need to be replaced. At the time of this reporting, over 200 of these sensors have been installed into a variety of host materials. An MSRS can be made in either of two configurations, denoted ladder and continuous (see Figure 1). A ladder MSRS includes two highly electrically conductive legs, across which narrow strips of electrically resistive material are placed at small increments of length. These strips resemble the rungs of a ladder and are electrically equivalent to many tiny resistors connected in parallel. A substrate material provides structural support for the legs and rungs. The instantaneous sensor resistance is read by an external signal conditioner via wires attached to the conductive legs on the
Recursive bias estimation for high dimensional regression smoothers
Hengartner, Nicolas W; Cornillon, Pierre - Andre; Matzner - Lober, Eric
2009-01-01
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to a biased smoother. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of it obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-Boosting. For the multivariate thin plate spline smoother, we prove that our procedure adapts to the correct and unknown order of smoothness for estimating an unknown function m belonging to the Sobolev space H(ν), where ν > d/2. We apply our method to simulated and real data and show that it compares favorably with existing procedures.
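One iteration of the residual-smoothing bias correction can be sketched with a Nadaraya-Watson kernel smoother standing in for the paper's smoothers (which include thin-plate splines); noise-free data are used so the improvement shown is purely a bias reduction:

```python
import numpy as np

def nw_smooth(x, y, bandwidth):
    # Nadaraya-Watson kernel smoother with a Gaussian kernel.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

x = np.linspace(0.0, 2.0 * np.pi, 100)
f = np.sin(x)                                   # target function

fit1 = nw_smooth(x, f, 0.5)                     # deliberately over-smoothed: biased
fit2 = fit1 + nw_smooth(x, f - fit1, 0.5)       # add back the smoothed residuals

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
print(rmse(fit1 - f), rmse(fit2 - f))           # the second error is smaller
```

The correction step is the classical "twicing" idea: the residuals of an over-smoothed fit contain mostly bias, so smoothing them estimates that bias, and further iterations correspond to the L2-Boosting connection drawn in the abstract.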
An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.
2015-04-01
Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume
Bignardi, A B; El Faro, L; Torres Júnior, R A A; Cardoso, V L; Machado, P F; Albuquerque, L G
2011-10-31
We analyzed 152,145 test-day records from 7317 first lactations of Holstein cows recorded from 1995 to 2003. Our objective was to model variations in test-day milk yield during the first lactation of Holstein cows by random regression model (RRM), using various functions in order to obtain adequate and parsimonious models for the estimation of genetic parameters. Test-day milk yields were grouped into weekly classes of days in milk, ranging from 1 to 44 weeks. The contemporary groups were defined as herd-test-day. The analyses were performed using a single-trait RRM, including the direct additive, permanent environmental and residual random effects. In addition, contemporary group and linear and quadratic effects of the age of cow at calving were included as fixed effects. The mean trend of milk yield was modeled with a fourth-order orthogonal Legendre polynomial. The additive genetic and permanent environmental covariance functions were estimated by random regression on two parametric functions, Ali and Schaeffer and Wilmink, and on B-spline functions of days in milk. The covariance components and the genetic parameters were estimated by the restricted maximum likelihood method. Results from RRM parametric and B-spline functions were compared to RRM on Legendre polynomials and with a multi-trait analysis, using the same data set. Heritability estimates presented similar trends during mid-lactation (13 to 31 weeks) and between week 37 and the end of lactation, for all RRM. Heritabilities obtained by multi-trait analysis were of a lower magnitude than those estimated by RRM. The RRMs with a higher number of parameters were more useful to describe the genetic variation of test-day milk yield throughout the lactation. RRM using B-spline and Legendre polynomials as base functions appears to be the most adequate to describe the covariance structure of the data.
Survival Data and Regression Models
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right censored data and develop two types of regression models. The first one concerns the so-called accelerated failure time models (AFT), which are parametric models where a function of a parameter depends linearly on the covariables. The second one is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology; although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.
An adaptive guidance law for single stage to low earth orbit
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.
1989-01-01
An adaptive guidance algorithm based on a cubic spline representation of the ascent profile and imposition of a dynamic pressure constraint is studied for a single stage to low earth orbit. The flight path is divided into initial and terminal phases. In the initial phase, fully adaptive, and in the terminal phase, semi-adaptive, guidance schemes are used. The cubic spline parameters are determined by gradient optimization for maximum payload to orbit. In the terminal phase, a linear quadratic regulator is used to derive the optimal feedback gains to keep the vehicle close to the nominal path. The computational aspects of the guidance algorithm are examined and criteria are developed to ensure stability and convergence.
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy change from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A
2016-10-01
Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models.
Cactus: An Introduction to Regression
ERIC Educational Resources Information Center
Hyde, Hartley
2008-01-01
When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…
Multiple Regression: A Leisurely Primer.
ERIC Educational Resources Information Center
Daniel, Larry G.; Onwuegbuzie, Anthony J.
Multiple regression is a useful statistical technique when the researcher is considering situations in which variables of interest are theorized to be multiply caused. It may also be useful in those situations in which the researcher is interested in studies of predictability of phenomena of interest. This paper provides an introduction to…
Weighting Regressions by Propensity Scores
ERIC Educational Resources Information Center
Freedman, David A.; Berk, Richard A.
2008-01-01
Regressions can be weighted by propensity scores in order to reduce bias. However, weighting is likely to increase random error in the estimates, and to bias the estimated standard errors downward, even when selection mechanisms are well understood. Moreover, in some cases, weighting will increase the bias in estimated causal parameters. If…
Quantile Regression with Censored Data
ERIC Educational Resources Information Center
Lin, Guixian
2009-01-01
The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitations due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…
Zhu, Dian-ming; Jin, Wan-xiang; Luo, Xiao-sen; Liu, Ying; Shen, Zhong-hua; Lu, Jian; Ni, Xiao-wu
2008-08-01
Because of its low content and weak fluorescence intensity, usually presenting shoulder peaks, it is often hard to locate protoporphyrin IX and identify its fluorescence intensity in human blood serum. A biorthogonal spline wavelet may work for the identification of this weak signal. By superimposing the protoporphyrin IX fluorescence signal on the background blood serum spectrum, a series of varied fluorescence spectra can be obtained. The protoporphyrin IX fluorescence signal is separated from the blood serum background, and the fluorescence spectrum can be divided into corresponding discrete approximate signals (a1-a7) and discrete detail signals (d1-d7) by seven-level decomposition with the biorthogonal spline wavelet bior 5.5. The signal frequency shows a gradual decrease with increasing decomposition level. The protoporphyrin IX fluorescence peak emerges at the 7th decomposition. The signal peak shifts about 2.5 mm downwards as the signal intensity decreases, whereas the signal peak from the wavelet filter remains where it was. As the synchronization between signal intensity and signal peak disappears, it is usually hard to determine the fluorescence intensity and peak location. However, the signal from the wavelet filter can ignore this effect and identify protoporphyrin IX in human blood serum with the help of the biorthogonal spline wavelet. As the linear alternation of the wavelet and the discrete detail signals maintains their inherent linear relations, qualitative and quantitative analysis can be carried out for the precise content and quantity of protoporphyrin IX in blood serum, which provides a feasible method for applying blood serum fluorescence spectra to early tumor diagnosis.
Modeling nonlinear relationships in ERP data using mixed-effects regression with R examples.
Tremblay, Antoine; Newman, Aaron J
2015-01-01
In the analysis of psychological and psychophysiological data, the relationship between two variables is often assumed to be a straight line. This may be due to the prevalence of the general linear model in data analysis in these fields, which makes this assumption implicitly. However, there are many problems for which this assumption does not hold. In this paper, we show that, in the analysis of event-related potential (ERP) data, the assumption of linearity comes at a cost and may significantly affect the inferences drawn from the data. We demonstrate why the assumption of linearity should be relaxed and how to model nonlinear relationships between ERP amplitudes and predictor variables within the familiar framework of generalized linear models, using regression splines and mixed-effects modeling.
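As a minimal illustration of the regression-spline idea described in this abstract (not the authors' R/mixed-effects pipeline; the data, knots, and truncated-power basis below are invented for the sketch), a cubic regression spline can capture a nonlinearity that a straight-line model misses:

```python
import numpy as np

# Illustrative nonlinear relation between a predictor and an "amplitude".
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)

# Cubic regression spline via a truncated-power basis with interior knots.
knots = np.array([0.25, 0.5, 0.75])
X = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                    [np.clip(x - k, 0, None)**3 for k in knots])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta

# The spline tracks the curvature that a straight line cannot.
rss_spline = np.sum((y - yhat)**2)
rss_line = np.sum((y - np.polyval(np.polyfit(x, y, 1), x))**2)
print(rss_spline < rss_line)
```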
Fault detection and diagnosis for singular stochastic systems via B-spline expansions.
Hu, Zhuohuan; Han, Zhengzhi; Tian, Zuohua
2009-10-01
This paper deals with the problem of fault detection and diagnosis (FDD) for singular stochastic systems. The outputs of singular stochastic systems are described by probability density functions (PDFs) based on square root B-spline expansions. Then, two non-linear observers are designed for the FDD. The conditions of stability of the correlative error estimation systems are given by using linear matrix inequalities (LMIs). Finally, the simulation results are presented to indicate that the approach can detect faults and estimate the size of faults.
NASA Technical Reports Server (NTRS)
Benton, E. R.; Kohl, Benjamin C.
1986-01-01
An optimum truncation level, N, in a spherical-harmonic analysis of the geomagnetic main field at the core-mantle boundary is determined by harmonic-spline analysis. Specifically, that value of N is found at which the two analyses are closest in a well defined sense and, for that value of N, the 'closeness' of two models is determined. Depending slightly on the definition of closeness, optimum N is found to be either 10 or 11. For those values the two analyses give remarkably similar results, showing that the conveniences of spherical harmonics can be retained with little penalty.
NASA Technical Reports Server (NTRS)
Anuta, P. E.
1975-01-01
Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.
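A minimal sketch of the polynomial distortion modelling described above (the control points and coefficients are synthetic, not the original scanner-registration code): a second-order bivariate polynomial mapping between distorted and reference coordinates is recovered from control points by least squares.

```python
import numpy as np

# Hypothetical control points in the distorted image (x, y) and synthetic
# "true" mapping coefficients used only to generate the example data.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(20, 2))
true_cu = np.array([1.2, 0.9, 0.05, 1e-3, -2e-3, 5e-4])
true_cv = np.array([-3.0, 0.02, 1.1, -1e-3, 1e-3, 2e-4])

def design(xy):
    """Second-order bivariate polynomial basis: 1, x, y, x^2, xy, y^2."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

A = design(xy)
u = A @ true_cu          # reference coordinates implied by the true mapping
v = A @ true_cv

# Least-squares fit of the distortion polynomials from the control points.
cu, *_ = np.linalg.lstsq(A, u, rcond=None)
cv, *_ = np.linalg.lstsq(A, v, rcond=None)
print(np.allclose(cu, true_cu))  # noise-free data, so the fit recovers the mapping
```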
NASA Technical Reports Server (NTRS)
Mier Muth, A. M.; Willsky, A. S.
1978-01-01
In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.
A computational model of rat cerebral blood flow using non-uniform rational B-splines.
Pushkin, Sergey V; Podoprigora, Guennady I; Comas, Laurent; Boulahdour, Hatem; Cardot, Jean-Claude; Baud, Michel; Nartsissov, Yaroslav R; Blagosklonov, Oleg
2007-01-01
Non-Uniform Rational B-splines (NURBS) surfaces can be used for a computer simulation of shapes. Some anatomical models of human or animal structures have been recently developed on that basis. We used positron-emission tomography (PET) and computed tomography (CT) data for NURBS modeling of anatomical structures and isotope uptake in the rat brain. Our simplified model of the rat cerebral blood flow is the first step in a larger project aiming at a simulation of PET scans in small animals, followed by its validation in vivo.
Fully relativistic B-spline R-matrix calculations for electron collisions with mercury
NASA Astrophysics Data System (ADS)
Zatsarinny, Oleg; Bartschat, Klaus
2009-05-01
We have applied our recently developed fully relativistic Dirac B-spline R-matrix (DBSR) code [1] to calculate electron scattering from mercury atoms. Results from a 36-state close-coupling calculation are compared with numerous experimental benchmark data for angle-integrated and angle-differential cross sections, as well as spin-asymmetry, spin-polarization, and electron-impact coherence parameters. We generally obtain significant improvement in the agreement between experiment and theory compared to previous distorted-wave and close-coupling attempts. [1] O. Zatsarinny and K. Bartschat, Phys. Rev. A 77, 062701 (2008).
Accuracy enhancement of digital image correlation with B-spline interpolation
NASA Astrophysics Data System (ADS)
Luu, Long; Wang, Zhaoyang; Vo, Minh; Hoang, Thang; Ma, Jun
2011-08-01
The interpolation algorithm plays an essential role in the digital image correlation (DIC) technique for shape, deformation, and motion measurements with subpixel accuracies. At present, little effort has been made to improve the interpolation methods used in DIC. In this Letter, a family of recursive interpolation schemes based on B-spline representation and its inverse gradient weighting version is employed to enhance the accuracy of DIC analysis. Theories are introduced, and simulation results are presented to illustrate the effectiveness of the method as compared with the common bicubic interpolation.
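The recursive scheme of this Letter is not reproduced here, but the basic role of cubic B-spline interpolation in subpixel sampling can be sketched with SciPy's `map_coordinates` (`order=3` corresponds to a cubic B-spline with prefiltering; the image and subpixel shift are illustrative):

```python
import numpy as np
from scipy import ndimage

# Synthetic speckle-like image (values illustrative only).
rng = np.random.default_rng(1)
img = ndimage.gaussian_filter(rng.random((64, 64)), sigma=2)

# Sample the image at positions shifted by (0.3, 0.7) pixels using
# cubic B-spline interpolation, as a DIC correlation step would.
yy, xx = np.mgrid[5:59, 5:59].astype(float)
coords = np.array([yy + 0.3, xx + 0.7])
sub = ndimage.map_coordinates(img, coords, order=3, mode='nearest')
print(sub.shape)  # (54, 54)
```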
Uniform B-Spline Curve Interpolation with Prescribed Tangent and Curvature Vectors.
Okaniwa, Shoichi; Nasri, Ahmad; Lin, Hongwei; Abbas, Abdulwahed; Kineri, Yuki; Maekawa, Takashi
2012-09-01
This paper presents a geometric algorithm for the generation of uniform cubic B-spline curves interpolating a sequence of data points under tangent and curvature vectors constraints. To satisfy these constraints, knot insertion is used to generate additional control points which are progressively repositioned using corresponding geometric rules. Compared to existing schemes, our approach is capable of handling plane as well as space curves, has local control, and avoids the solution of the typical linear system. The effectiveness of the proposed algorithm is illustrated through several comparative examples. Applications of the method in NC machining and shape design are also outlined.
On spline and polynomial interpolation of low earth orbiter data: GRACE example
NASA Astrophysics Data System (ADS)
Uz, Metehan; Ustun, Aydin
2016-04-01
GRACE satellites, which are equipped with specific science instruments such as the K/Ka-band ranging system, have orbited the earth since 17 March 2002. In this study the kinematic and reduced-dynamic orbits of GRACE-A/B were determined at 10 second intervals using Bernese 5.2 GNSS software for May 2010, and the daily orbit solutions were validated against the GRACE science orbit, GNV1B. The RMS values of the kinematic and reduced-dynamic orbit validations were about 2.5 and 1.5 cm, respectively. Throughout the time period of interest, data gaps were encountered in the kinematic orbits due to lack of GPS measurements and satellite manoeuvres. Thus, least squares polynomial and cubic spline approaches (natural, not-a-knot and clamped) were tested to interpolate both small data gaps and a 5 second interval on the precise orbits. The latter is necessary, for example, for data densification in order to use the K/Ka-band observations. The coordinates interpolated to 5 second intervals were also validated against the GNV1B orbits. The validation results show that the spline approaches delivered approximately 1 cm RMS values and are better than least squares polynomial interpolation. When data gaps occur in a daily orbit, the spline validation results became worse depending on the size of the gaps. Hence, the daily orbits were fragmented into small arcs of 30, 40 or 50 knots to evaluate the effect of least squares polynomial interpolation on data gaps. From randomly selected daily arc sets belonging to different times, 5, 10, 15 and 20 knots were removed independently. While 30-knot arcs were evaluated with a fifth-degree polynomial, a sixth-degree polynomial was employed to interpolate artificial gaps over 40- and 50-knot arcs. The differences between interpolated and removed coordinates were tested against the GNV1B validation RMS result, 2.5 cm. With 95% confidence level, data gaps up to 5 and 10 knots can
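As a hedged sketch of the cubic-spline densification described above (not the Bernese 5.2 workflow; the 10 s series below is a crude circular-orbit stand-in for one ECEF coordinate), SciPy's `CubicSpline` supports the same `natural`, `not-a-knot` and `clamped` end conditions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative 10 s samples of one orbit coordinate over a 10-minute arc.
t10 = np.arange(0.0, 600.0, 10.0)
x10 = 6.8e6 * np.cos(2 * np.pi * t10 / 5400.0)  # stand-in for a GRACE-like orbit

# Natural cubic spline; 'not-a-knot' and 'clamped' are the other options tested.
cs = CubicSpline(t10, x10, bc_type='natural')

# Densify from 10 s to 5 s sampling, e.g. for use with K/Ka-band observations.
t5 = np.arange(0.0, 590.0 + 1e-9, 5.0)
x5 = cs(t5)

# At the original epochs the spline reproduces the data exactly.
print(np.allclose(cs(t10), x10))
```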
Deng, Shirong; Liu, Li; Zhao, Xingqiu
2015-09-01
This article discusses the statistical analysis of panel count data when the underlying recurrent event process and observation process may be correlated. For the recurrent event process, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates. For inference on the model parameters, a monotone spline-based least squares estimation approach is developed, and the resulting estimators are consistent and asymptotically normal. In particular, our new approach does not rely on the model specification of the observation process. The proposed inference procedure performs well through simulation studies, and it is illustrated by the analysis of bladder tumor data.
Transfer coefficients for evaporation of a system with a Lennard-Jones long-range spline potential.
Ge, Jialin; Kjelstrup, S; Bedeaux, D; Simon, J M; Rousseau, B
2007-06-01
Surface transfer coefficients are determined by nonequilibrium molecular dynamics simulations for a Lennard-Jones fluid with a long-range spline potential. In earlier work [A. Røsjorde, J. Colloid Interface Sci. 240, 355 (2001); J. Xu, ibid. 299, 452 (2006)], using a short-range Lennard-Jones spline potential, it was found that the resistivity coefficients to heat and mass transfer agreed rather well with the values predicted by kinetic theory. For the long-range Lennard-Jones spline potential considered in this paper we find significant discrepancies from the values predicted by kinetic theory. In particular the coupling coefficient, and as a consequence the heat of transfer on the vapor side of the surface are much larger. Thermodynamic data for the liquid-vapor equilibrium confirmed the law of corresponding states for the surface, when it is described as an autonomous system. The importance of these findings for modelling phase transitions is discussed.
Regression Verification Using Impact Summaries
NASA Technical Reports Server (NTRS)
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program
Extended cubic B-spline method for solving a linear system of second-order boundary value problems.
Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md
2016-01-01
A method based on extended cubic B-spline is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters is carried out and the truncation error is calculated. This method is tested on three examples. The examples suggest that this method produces comparable or more accurate results than cubic B-spline and some other methods.
NASA Technical Reports Server (NTRS)
Ku, C.-P. Roger; Walton, James F., Jr.; Lund, Jorgen W.
1994-01-01
This paper provided an opportunity to quantify the angular stiffness and equivalent viscous damping coefficients of an axial spline coupling used in high-speed turbomachinery. A unique test methodology and data reduction procedures were developed. The bending moments and angular deflections transmitted across an axial spline coupling were measured while a nonrotating shaft was excited by an external shaker. A rotor dynamics computer program was used to simulate the test conditions and to correlate the angular stiffness and damping coefficients. In addition, sensitivity analyses were performed to show that the accuracy of the dynamic coefficients does not rely on the accuracy of the data reduction procedures.
Railroad inspection based on ACFM employing a non-uniform B-spline approach
NASA Astrophysics Data System (ADS)
Chacón Muñoz, J. M.; García Márquez, F. P.; Papaelias, M.
2013-11-01
The stresses sustained by rails have increased in recent years due to the use of higher train speeds and heavier axle loads. For this reason surface and near-surface defects generated by Rolling Contact Fatigue (RCF) have become particularly significant, as they can cause unexpected structural failure of the rail, resulting in severe derailments. The accident that took place in Hatfield, UK (2000), is an example of a derailment caused by the structural failure of a rail section due to RCF. Early detection of RCF rail defects is therefore of paramount importance to the rail industry. The performance of existing ultrasonic and magnetic flux leakage techniques in detecting rail surface-breaking defects, such as head checks and gauge corner cracking, is inadequate during high-speed inspection, while eddy current sensors suffer from lift-off effects. The results obtained through rail inspection experiments under simulated conditions using Alternating Current Field Measurement (ACFM) probes suggest that this technique can be applied for the accurate and reliable detection of surface-breaking defects at high inspection speeds. This paper presents the B-spline approach used for accurately filtering noise from the raw ACFM signal obtained during high-speed tests to improve the reliability of the measurements. A non-uniform B-spline approximation is employed to calculate the exact positions and the dimensions of the defects. This method generates a smooth approximation of the ACFM data points related to the rail surface-breaking defect.
DBSR_HF: A B-spline Dirac-Hartree-Fock program
NASA Astrophysics Data System (ADS)
Zatsarinny, Oleg; Froese Fischer, Charlotte
2016-05-01
A B-spline version of a general Dirac-Hartree-Fock program is described. The usual differential equations are replaced by a set of generalized eigenvalue problems of the form (Ha -εa B) Pa = 0, where Ha and B are the Hamiltonian and overlap matrices, respectively, and Pa is the two-component relativistic orbit in the B-spline basis. A default universal grid allows for flexible adjustment to different nuclear models. When two orthogonal orbitals are both varied, the energy must also be stationary with respect to orthonormal transformations. At such a stationary point the off-diagonal Lagrange multipliers may be eliminated through projection operators. The self-consistent field procedure exhibits excellent convergence. Several atomic states can be considered simultaneously, including some configuration-interaction calculations. The program provides several options for the treatment of Breit interaction and QED corrections. The information about atoms up to Z = 104 is stored by the program. Along with a simple interface through command-line arguments, this information allows the user to run the program with minimal initial preparations.
Friedline, Terri; Masa, Rainier D; Chowa, Gina A N
2015-01-01
The natural log and categorical transformations commonly applied to wealth for meeting the statistical assumptions of research may not always be appropriate for adjusting for skewness given wealth's unique properties. Finding and applying appropriate transformations is becoming increasingly important as researchers consider wealth as a predictor of well-being. We present an alternative transformation-the inverse hyperbolic sine (IHS)-for simultaneously dealing with skewness and accounting for wealth's unique properties. Using the relationship between household wealth and youth's math achievement as an example, we apply the IHS transformation to wealth data from US and Ghanaian households. We also explore non-linearity and accumulation thresholds by combining IHS transformed wealth with splines. IHS transformed wealth relates to youth's math achievement similarly when compared to categorical and natural log transformations, indicating that it is a viable alternative to other transformations commonly used in research. Non-linear relationships and accumulation thresholds emerge that predict youth's math achievement when splines are incorporated. In US households, accumulating debt relates to decreases in math achievement whereas accumulating assets relates to increases in math achievement. In Ghanaian households, accumulating assets between the 25th and 50th percentiles relates to increases in youth's math achievement.
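The IHS transformation itself is a one-liner; a minimal sketch follows (the `theta` scale parameter and the sample wealth values are illustrative, not from the study). Unlike the natural log, it is defined for zero and negative wealth (debt) and behaves logarithmically for large magnitudes:

```python
import numpy as np

def ihs(w, theta=1.0):
    """Inverse hyperbolic sine transform of wealth w.
    Defined for zero and negative values; theta is a scale parameter."""
    return np.arcsinh(theta * w) / theta

# Illustrative household wealth values, including debt and zero wealth.
wealth = np.array([-5000.0, 0.0, 100.0, 1e6])
t = ihs(wealth)

# With theta = 1 the transform inverts exactly via sinh.
print(np.allclose(np.sinh(t), wealth))
```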
NASA Astrophysics Data System (ADS)
Marghany, Maged
2014-06-01
A critical challenge in urban areas is slums. In fact, they are considered a source of crime and disease due to poor-quality housing, unsanitary conditions, poor infrastructure and insecure occupancy. The poor in dense urban slums are the most vulnerable to infection due to (i) inadequate and restricted access to safety, drinking water and sufficient quantities of water for personal hygiene; (ii) the lack of removal and treatment of excreta; and (iii) the lack of removal of solid waste. This study aims to investigate the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) reconstruction of urban slums in developing countries such as Egypt. The main objective of this work is to apply a 3-D automatic detection algorithm to ENVISAT ASAR and Google Earth images acquired over Cairo, Egypt, using a fuzzy B-spline algorithm. The results show that the fuzzy algorithm is the best indicator for chaotic urban slums, as it can discriminate them from their surrounding environment. The combination of fuzzy and B-spline algorithms was then used to reconstruct the 3-D urban slum. The results show that urban slums, the road network, and infrastructure are perfectly discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.
Intensity Conserving Spline Interpolation (ICSI): A New Tool for Spectroscopic Analysis
NASA Astrophysics Data System (ADS)
Klimchuk, James A.; Patsourakos, Spiros; Tripathi, Durgesh
2015-04-01
Spectroscopy is an extremely powerful tool for diagnosing astrophysical and other plasmas. For example, the shapes of line profiles provide valuable information on the distribution of velocities along an optically thin line-of-sight and across the finite area of a resolution element. A number of recent studies have measured the asymmetries of line profiles in order to detect faint high-speed upflows, perhaps associated with coronal nanoflares or perhaps associated with chromospheric nanoflares and type II spicules. Over most of the Sun, these asymmetries are very subtle, so great care must be taken. A common technique is to perform a spline fit of the points in the profile in order to extract information at a spectral resolution higher than that of the original data. However, a fundamental problem is that the fits do not conserve intensity. We have therefore developed an iterative procedure called Intensity Conserving Spline Interpolation that does preserve the observed intensity within each wavelength bin. It improves the measurement of line asymmetries and can also help with the determination of line blends.
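A hedged sketch of the intensity-conserving idea (a simplified fixed-point iteration, not the authors' published ICSI code; the bins and line profile are synthetic): the node values of an interpolating spline are adjusted until the spline's average over each wavelength bin reproduces the observed bin intensity.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Observed line profile: intensity per wavelength bin (illustrative Gaussian).
edges = np.linspace(-3.0, 3.0, 13)              # 12 coarse wavelength bins
centers = 0.5 * (edges[:-1] + edges[1:])
obs = np.exp(-centers**2)                        # assumed bin-averaged intensities

def bin_means(spline, edges, n=50):
    """Average the spline over each bin by fine sub-sampling."""
    means = np.empty(len(edges) - 1)
    for k in range(len(means)):
        xs = np.linspace(edges[k], edges[k + 1], n)
        means[k] = spline(xs).mean()
    return means

# Iteratively correct the node values so the spline conserves the observed
# intensity within each bin (a plain interpolating spline does not).
nodes = obs.copy()
for _ in range(20):
    sp = CubicSpline(centers, nodes)
    resid = obs - bin_means(sp, edges)
    nodes += resid

print(np.abs(resid).max() < 1e-6)  # bin intensities now conserved
```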
Quantification of the spatial strain distribution of scoliosis using a thin-plate spline method.
Kiriyama, Yoshimori; Watanabe, Kota; Matsumoto, Morio; Toyama, Yoshiaki; Nagura, Takeo
2014-01-03
The objective of this study was to quantify the three-dimensional spatial strain distribution of a scoliotic spine by nonhomogeneous transformation without using a statistically averaged reference spine. The shape of the scoliotic spine was determined from computed tomography images from a female patient with adolescent idiopathic scoliosis. The shape of the scoliotic spine was enclosed in a rectangular grid, and symmetrized using a thin-plate spline method according to the node positions of the grid. The node positions of the grid were determined by numerical optimization to satisfy symmetry. The obtained symmetric spinal shape was enclosed within a new rectangular grid and distorted back to the original scoliotic shape using a thin-plate spline method. The distorted grid was compared to the rectangular grid that surrounded the symmetrical spine. Cobb's angle was reduced from 35° in the scoliotic spine to 7° in the symmetrized spine, and the scoliotic shape was almost fully symmetrized. The scoliotic spine showed a complex Green-Lagrange strain distribution in three dimensions. The vertical and transverse compressive/tensile strains in the frontal plane were consistent with the major scoliotic deformation. The compressive, tensile and shear strains on the convex side of the apical vertebra were opposite to those on the concave side. These results indicate that the proposed method can be used to quantify the three-dimensional spatial strain distribution of a scoliotic spine, and may be useful in quantifying the deformity of scoliosis.
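A minimal thin-plate-spline warping sketch (using SciPy's `RBFInterpolator` with the `thin_plate_spline` kernel rather than the authors' implementation; the landmarks and deformation below are synthetic stand-ins for grid nodes around a spine):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Illustrative 2-D landmarks: source grid nodes and smoothly deformed targets.
rng = np.random.default_rng(2)
src = rng.uniform(0, 10, size=(15, 2))
dst = src + 0.3 * np.sin(src)            # a smooth synthetic deformation

# Thin-plate spline mapping src -> dst (2-D values give both coordinates).
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# With zero smoothing the mapping interpolates the landmarks exactly...
print(np.allclose(tps(src), dst))
# ...and can warp any other grid point:
print(tps(np.array([[5.0, 5.0]])).shape)  # (1, 2)
```

Strains could then be obtained by differentiating such a mapping over the grid, which is the role the thin-plate spline plays in the study above.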
Fitting Cox Models with Doubly Censored Data Using Spline-Based Sieve Marginal Likelihood.
Li, Zhiguo; Owzar, Kouros
2016-06-01
In some applications, the failure time of interest is the time from an originating event to a failure event, while both event times are interval censored. We propose fitting Cox proportional hazards models to this type of data using a spline-based sieve maximum marginal likelihood, in which the time to the originating event is integrated out of the empirical likelihood function of the failure time of interest. This greatly reduces the complexity of the objective function compared with the fully semiparametric likelihood. The dependence of the time of interest on the time to the originating event is induced by including the latter as a covariate in the proportional hazards model for the failure time of interest. The use of splines results in a higher rate of convergence of the estimator of the baseline hazard function compared with the usual nonparametric estimator. The computation of the estimator is facilitated by a multiple imputation approach. Asymptotic theory is established, and a simulation study is conducted to assess finite sample performance. The method is also applied to a real data set on AIDS incubation time.
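The multiple imputation device mentioned above can be sketched in miniature: when the originating-event time U is only known to lie in an interval, draw it repeatedly from that interval, form the failure time of interest T = V − U for each draw, fit the model, and average across imputations. The sketch below is a heavily simplified stand-in: it uses a uniform imputation scheme and a plain exponential-rate MLE in place of the authors' spline-based sieve Cox fit, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def mi_exponential_rate(U_lo, U_hi, V, n_imp=200, seed=0):
    """Multiple-imputation sketch: the originating-event time U is only known
    to lie in [U_lo, U_hi]; draw it uniformly within the interval (truncated
    so the failure time T = V - U stays positive), compute a simple
    exponential-rate MLE for T, and average over imputations."""
    rng = np.random.default_rng(seed)
    hi = np.minimum(U_hi, V)                # keep the imputed T positive
    rates = []
    for _ in range(n_imp):
        U = rng.uniform(U_lo, hi)           # one imputed origination time per subject
        T = V - U
        rates.append(1.0 / T.mean())        # exponential MLE: rate = 1 / mean(T)
    return float(np.mean(rates))
```

The averaging over imputations is what replaces the intractable integral over the censoring interval in the marginal likelihood.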
[Baseline correction method for Raman spectroscopy based on B-spline fitting].
Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wu, Jing-lin; Liang, Jun; Zuo, Yong
2014-08-01
Baseline drift is a widespread phenomenon in modern spectroscopic instrumentation. It has a strongly negative impact on feature extraction from the spectral signal, so baseline correction is an important part of Raman signal preprocessing. The general approach to eliminating baseline drift is to fit the baseline with some fitting method. The traditional choice is polynomial fitting, but this method is prone to over-fitting and under-fitting, and the fitting order is difficult to determine. In this paper, the traditional method is improved: a B-spline fitting method is used to approach the baseline of the Raman signal through repeated iteration. The advantages of B-splines, namely low order and smoothness, help the method overcome the shortcomings of the polynomial method. In the experiments, Raman signals of malachite green and rhodamine B were measured, and both the proposed method and the traditional method were applied to perform baseline correction. Experimental results showed that the proposed method can eliminate Raman baseline drift effectively without over- or under-fitting, and the same order can be used regardless of whether the baseline drift is large or small. The proposed method therefore provides more accurate and reliable information for further analysis of spectral data.
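The iterative B-spline baseline fit can be sketched as follows (a simplified stand-in for the paper's method, with illustrative names and parameters): fit a low-order B-spline by least squares, clip the signal down to the fit so that peaks are progressively excluded, and repeat until the fit settles onto the baseline.

```python
import numpy as np

def bspline_basis(x, knots, k=3):
    """B-spline basis matrix via the Cox-de Boor recursion (clamped knots)."""
    t = np.r_[[knots[0]] * k, knots, [knots[-1]] * k]
    x = np.asarray(x, float)
    # nudge the right endpoint inward so x == knots[-1] gets full support
    x = np.where(x >= t[-1], t[-1] - 1e-9 * (t[-1] - t[0] + 1), x)
    B = [((x >= t[i]) & (x < t[i + 1])).astype(float) for i in range(len(t) - 1)]
    for d in range(1, k + 1):
        for i in range(len(t) - d - 1):
            left = (x - t[i]) / (t[i + d] - t[i]) * B[i] if t[i + d] > t[i] else 0.0
            right = (t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1]) * B[i + 1] \
                if t[i + d + 1] > t[i + 1] else 0.0
            B[i] = left + right
    return np.column_stack(B[:len(t) - k - 1])

def baseline_bspline(x, y, n_knots=8, n_iter=50, k=3):
    """Iteratively clip the signal to the current spline fit so that
    peaks are excluded and the fit settles onto the baseline."""
    B = bspline_basis(x, np.linspace(x[0], x[-1], n_knots), k)
    z = y.astype(float).copy()
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(B, z, rcond=None)
        fit = B @ coef
        z = np.minimum(z, fit)  # pull points above the fit down to it
    return fit
```

Because the clipping only ever lowers points that sit above the current fit, the narrow Raman peaks lose their influence while the slowly varying baseline is retained.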
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation.
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, taking these factors into account, an optimized B-spline-based recursive filter, combining B-spline transforms with least squares optimization, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, since each volumetric image contains different wave-number ranges, a Gaussian weighting function is constructed to emphasize or suppress particular ranges based on Fourier spectrum analysis. Finally, software implementing the scheme was developed, and a series of validation experiments was carried out. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
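A 1-D version of the underlying B-spline transform can be sketched as follows. For clarity the coefficient (prefilter) system is solved directly as a banded linear system rather than with the optimized recursive filter the paper designs, and the names and boundary choice are illustrative.

```python
import numpy as np

def beta3(t):
    """Cubic B-spline kernel."""
    a = np.abs(t)
    return np.where(a < 1, 2 / 3 - a ** 2 + a ** 3 / 2,
                    np.where(a < 2, (2 - a) ** 3 / 6, 0.0))

def cubic_bspline_coeffs(f):
    """Solve the interpolation condition (c[k-1] + 4 c[k] + c[k+1]) / 6 = f[k].
    A recursive filter can do the same prefiltering in O(n)."""
    n = len(f)
    A = np.diag(np.full(n, 4 / 6)) + np.diag(np.full(n - 1, 1 / 6), 1) \
        + np.diag(np.full(n - 1, 1 / 6), -1)
    A[0, 1] += 1 / 6    # mirror boundary: c[-1] = c[1]
    A[-1, -2] += 1 / 6  # mirror boundary: c[n] = c[n-2]
    return np.linalg.solve(A, np.asarray(f, float))

def interp_cubic(f, xq):
    """Evaluate the interpolating cubic B-spline at fractional positions xq."""
    c = cubic_bspline_coeffs(f)
    n = len(c)
    xq = np.atleast_1d(np.asarray(xq, float))
    out = np.zeros_like(xq)
    for j, xv in enumerate(xq):
        for k in range(int(np.floor(xv)) - 1, int(np.floor(xv)) + 3):
            km = abs(k)                      # mirror indices at the borders
            if km >= n:
                km = 2 * (n - 1) - km
            out[j] += c[km] * beta3(xv - k)
    return out
```

Prefiltering the samples into coefficients is what distinguishes true B-spline interpolation from mere B-spline smoothing: the interpolant passes exactly through the voxel intensities, which is the property sub-voxel matching relies on.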
RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach.
Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado
2012-01-01
In recent years, the need to define color numerically by its coordinates in an n-dimensional space has grown considerably. Colorimetric calibration is fundamental in food processing and other biological disciplines for quantitatively comparing sample colors across workflows involving many devices. Several software packages are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colors in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches: the first based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Square analysis. Moreover, to explore device variability and resolution, two different cameras were used; for each sensor, three consecutive pictures were acquired under four different lighting conditions. According to our results, the Thin-Plate Spline approach achieved very high calibration accuracy, opening up in-field applications of color quantification not only in food science but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, the approach allows the use of low-cost instruments while still returning scientifically sound quantitative data.
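The 3-D warping idea can be sketched as follows: fit a spline that maps the measured RGB triplets of calibration patches exactly onto their reference values, then apply the fitted map to every pixel. This is a hypothetical minimal implementation, not the Matlab code from the paper's Appendix, and it assumes the polyharmonic kernel U(r) = r commonly used for thin-plate-style interpolation in three dimensions.

```python
import numpy as np

def tps3d_fit(measured, reference):
    """Fit a 3-D thin-plate (polyharmonic) spline mapping measured RGB
    triplets onto reference RGB triplets; kernel U(r) = r in 3-D."""
    n = len(measured)
    r = np.linalg.norm(measured[:, None, :] - measured[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), measured])   # affine part: 1, R, G, B
    L = np.zeros((n + 4, n + 4))
    L[:n, :n] = r
    L[:n, n:] = P
    L[n:, :n] = P.T
    Y = np.zeros((n + 4, 3))
    Y[:n] = reference
    params = np.linalg.solve(L, Y)               # RBF weights + affine coefficients
    w, a = params[:n], params[n:]
    def apply(rgb):
        d = np.linalg.norm(rgb[:, None, :] - measured[None, :, :], axis=-1)
        return d @ w + np.hstack([np.ones((len(rgb), 1)), rgb]) @ a
    return apply
```

Because the spline interpolates the calibration patches exactly while the affine part absorbs global shifts, it can correct device nonlinearities that a linear (matrix-based) calibration cannot.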
De la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Lee, Dae-Jin; Arribas-Gil, Ana
2017-02-19
We propose a semiparametric nonlinear mixed-effects model (SNMM) using penalized splines to classify longitudinal data and improve the prediction of a binary outcome. The work is motivated by a study in which different hormone levels were measured during the early stages of pregnancy, and the challenge is using this information to predict normal versus abnormal pregnancy outcomes. The aim of this paper is to compare models and estimation strategies on the basis of alternative formulations of SNMMs depending on the characteristics of the data set under consideration. For our motivating example, we address the classification problem using a particular case of the SNMM in which the parameter space has a finite dimensional component (fixed effects and variance components) and an infinite dimensional component (unknown function) that need to be estimated. The nonparametric component of the model is estimated using penalized splines. For the parametric component, we compare the advantages of using random effects versus direct modeling of the correlation structure of the errors. Numerical studies show that our approach improves over other existing methods for the analysis of this type of data. Furthermore, the results obtained using our method support the idea that explicit modeling of the serial correlation of the error term improves the prediction accuracy with respect to a model with random effects, but independent errors. Copyright © 2017 John Wiley & Sons, Ltd.
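Penalized spline estimation of the nonparametric component can be sketched with a truncated-line basis and a ridge penalty on the knot coefficients, a common low-rank formulation; this is an illustrative stand-in, not the authors' exact basis, penalty, or mixed-model machinery.

```python
import numpy as np

def penalized_spline_fit(x, y, n_knots=10, lam=1.0):
    """Penalized spline: y ~ b0 + b1*x + sum_j u_j (x - k_j)_+ with a
    ridge penalty lam on the knot coefficients u_j only."""
    knots = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]   # interior knots
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)           # truncated lines
    X = np.column_stack([np.ones_like(x), x, Z])
    D = np.diag(np.r_[0.0, 0.0, np.ones(n_knots)])             # leave intercept/slope unpenalized
    beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ beta
```

Penalizing only the knot coefficients is what connects this estimator to mixed models: the u_j can be recast as random effects, with lam playing the role of a variance ratio, which is the same device the abstract contrasts with direct modeling of the error correlation.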
Generalized B-spline subdivision-surface wavelets for geometry compression.
Bertram, Martin; Duchaineau, Mark A; Hamann, Bernd; Joy, Kenneth I
2004-01-01
We present a new construction of lifted biorthogonal wavelets on surfaces of arbitrary two-manifold topology for compression and multiresolution representation. Our method combines three approaches: subdivision surfaces of arbitrary topology, B-spline wavelets, and the lifting scheme for biorthogonal wavelet construction. The simple building blocks of our wavelet transform are local lifting operations performed on polygonal meshes with subdivision hierarchy. Starting with a coarse, irregular polyhedral base mesh, our transform creates a subdivision hierarchy of meshes converging to a smooth limit surface. At every subdivision level, geometric detail can be expanded from wavelet coefficients and added to the surface. We present wavelet constructions for bilinear, bicubic, and biquintic B-spline subdivision. While the bilinear and bicubic constructions perform well in numerical experiments, the biquintic construction turns out to be unstable. For lossless compression, our transform can be computed in integer arithmetic, mapping integer coordinates of control points to integer wavelet coefficients. Our approach provides a highly efficient and progressive representation for complex geometries of arbitrary topology.
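The lifting idea can be illustrated in 1-D with the classic predict-then-update steps of a linear B-spline wavelet: a far simpler setting than the surface construction above, but built from the same local lifting operations. Periodic boundary handling via np.roll is an illustrative choice.

```python
import numpy as np

def lift_forward(s):
    """One level of a linear B-spline lifting wavelet (predict, then update),
    with periodic boundaries."""
    even, odd = s[0::2].astype(float), s[1::2].astype(float)
    odd = odd - 0.5 * (even + np.roll(even, -1))   # predict: details = prediction error
    even = even + 0.25 * (odd + np.roll(odd, 1))   # update: preserve the signal mean
    return even, odd

def lift_inverse(even, odd):
    """Exactly invert lift_forward by undoing the steps in reverse order."""
    even = even - 0.25 * (odd + np.roll(odd, 1))
    odd = odd + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out
```

Because each lifting step is undone exactly by its mirror image, reconstruction is perfect regardless of the filter coefficients, and with integer-rounded steps the same structure yields the lossless integer transform the abstract mentions.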