NASA Technical Reports Server (NTRS)
Rogers, David
1991-01-01
G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm and Holland's genetic algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.
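A compact way to see the G/SPLINES idea is a genetic search over sets of MARS-style hinge basis functions scored by least squares. The sketch below is a minimal, hypothetical reconstruction; population size, operators, and fitness are illustrative choices, not Rogers' published settings:

```python
# Minimal, illustrative sketch of the G/SPLINES idea: a genetic search over
# sets of MARS-style hinge basis functions, each candidate model scored by
# one least-squares solve. All settings here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def random_basis(n_vars):
    """A basis function: a single hinge max(0, s*(x_v - t))."""
    return (rng.integers(n_vars), rng.uniform(0, 1), rng.choice([-1.0, 1.0]))

def design(X, bases):
    """Evaluate the basis set (plus intercept) on the data."""
    cols = [np.ones(len(X))]
    cols += [np.maximum(0.0, s * (X[:, v] - t)) for v, t, s in bases]
    return np.column_stack(cols)

def fitness(X, y, bases):
    """Lack of fit from one least-squares solve (lower is better)."""
    B = design(X, bases)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.mean((y - B @ coef) ** 2)

def genetic_search(X, y, n_bases=6, pop_size=30, n_gen=40):
    pop = [[random_basis(X.shape[1]) for _ in range(n_bases)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda b: fitness(X, y, b))
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            pa, pb = rng.choice(len(survivors), 2, replace=False)
            # crossover: each basis function comes from either parent
            child = [survivors[pa][i] if rng.random() < 0.5 else survivors[pb][i]
                     for i in range(n_bases)]
            if rng.random() < 0.3:                # mutation: new random basis
                child[rng.integers(n_bases)] = random_basis(X.shape[1])
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda b: fitness(X, y, b))

# toy usage on synthetic data
X = rng.uniform(0, 1, (200, 3))
y = np.maximum(0, X[:, 0] - 0.5) + 0.1 * rng.standard_normal(200)
best = genetic_search(X, y)
print(fitness(X, y, best))
```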
Prediction of longitudinal dispersion coefficient using multivariate adaptive regression splines
NASA Astrophysics Data System (ADS)
Haghiabi, Amir Hamzeh
2016-07-01
In this paper, multivariate adaptive regression splines (MARS) was developed as a novel soft-computing technique for predicting the longitudinal dispersion coefficient (D_L) in rivers. An experimental dataset related to D_L was collected from the literature and used to prepare the MARS model. Results of the MARS model were compared with a multi-layer neural network model and empirical formulas. The Gamma test was used to define the most effective parameters on D_L. Performance of the MARS model was assessed by calculating standard error indices. The error indices showed that the MARS model performs well and is more accurate than the multi-layer neural network model and the empirical formulas. Results of the Gamma test and the MARS model showed that flow depth (H) and the ratio of mean velocity to shear velocity (u/u*) were the most effective parameters on D_L.
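As an illustration of the kind of model fitting described, here is a minimal MARS sketch using the open-source py-earth package; the package choice is an assumption (the paper does not name an implementation), and synthetic stand-in data replaces the study's dataset:

```python
# Hedged sketch: fitting MARS to dispersion-style data with py-earth
# (pip install sklearn-contrib-py-earth). Data below are synthetic stand-ins.
import numpy as np
from pyearth import Earth

rng = np.random.default_rng(0)
H = rng.uniform(0.5, 5.0, 300)        # flow depth
u = rng.uniform(0.2, 2.0, 300)        # mean velocity
u_star = rng.uniform(0.02, 0.2, 300)  # shear velocity
X = np.column_stack([H, u, u_star, u / u_star])
D_L = 5.9 * H * u * (u / u_star) ** 0.5 + rng.normal(0, 5, 300)  # toy target

model = Earth(max_degree=2)           # allow pairwise hinge interactions
model.fit(X, D_L)
print(model.summary())                # selected basis functions, GCV score

rmse = np.sqrt(np.mean((D_L - model.predict(X)) ** 2))
print("training RMSE:", rmse)
```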
Technology Transfer Automated Retrieval System (TEKTRAN)
Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS), is a nonparametric method that estimates complex nonlinear relationships by a seri...
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the topics of greatest interest to investors is stock price changes. Investors with long-term goals are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model is an adaptive nonparametric regression method well suited to high-dimensional problems with many variables; the semi-parametric technique is based on smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio, and risk) as influential for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast, and P/E ratio) were selected as effective in forecasting stock prices.
NASA Astrophysics Data System (ADS)
Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza
2014-10-01
The aim of this research work is to build a regression model of particulate matter up to 10 micrometers in size (PM10) using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at local scale. MARS is a nonparametric regression algorithm with the ability to approximate the relationship between the inputs and outputs and to express that relationship mathematically. In this context, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, experimental data on nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) were collected over 3 years (2006-2008) and used to create a highly nonlinear model of PM10 in the Oviedo urban nucleus based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence between the PM10 pollutant and the other pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish limit values for the main pollutants in the atmosphere in order to protect public health. Firstly, this MARS regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of
Gries, J M; Verotta, D
2000-08-01
In a frequently performed pharmacokinetics study, different subjects are given different doses of a drug. After each dose is given, drug concentrations are observed according to the same sampling design. The goal of the experiment is to obtain a representation for the pharmacokinetics of the drug, and to determine whether drug concentrations observed at different times after a dose are linear with respect to dose. The goal of this paper is to obtain a representation for concentration as a function of time and dose, which (a) makes no assumptions about the underlying pharmacokinetics of the drug; (b) takes into account the repeated-measures structure of the data; and (c) detects nonlinearities with respect to dose. To address (a) we use a multivariate adaptive regression splines (MARS) representation, which we recast into a linear mixed-effects model, addressing (b). To detect nonlinearity we describe a general algorithm that obtains nested (mixed-effect) MARS representations. In the pharmacokinetics application, the algorithm obtains representations containing time, and time and dose, respectively, with the property that the basis functions of the first representation are a subset of the second. Standard statistical model selection criteria are used to select representations linear or nonlinear with respect to dose. The method can be applied to a variety of pharmacokinetic (and pharmacodynamic) preclinical and phase I-III trials. Examples of applications of the methodology to real and simulated data are reported.
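The recasting step can be illustrated roughly: hand-picked MARS-style hinge bases in time and dose enter a linear mixed-effects model as fixed effects, with a random intercept per subject. This is a hypothetical sketch on synthetic data, not the authors' nested-representation algorithm:

```python
# Illustrative sketch: MARS-type hinge bases as fixed effects in a linear
# mixed-effects model, random intercept per subject. Synthetic data; the
# basis shown is hand-picked for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_sub, times = 12, np.array([0.5, 1, 2, 4, 8, 12.0])
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_sub), times.size),
    "time": np.tile(times, n_sub),
    "dose": np.repeat(rng.choice([10, 20, 40], n_sub), times.size),
})
df["conc"] = (df["dose"] * df["time"] * np.exp(-0.5 * df["time"])
              + rng.normal(0, 1, len(df)))

def hinge(x, knot):
    return np.maximum(0.0, x - knot)

# fixed-effect design: a small MARS-style basis in time and dose
Xf = pd.DataFrame({
    "intercept": 1.0,
    "t1": hinge(df["time"], 1.0),
    "t2": hinge(df["time"], 4.0),
    "dose": df["dose"].astype(float),
    "dose_t1": df["dose"] * hinge(df["time"], 1.0),  # dose x time nonlinearity
})

model = sm.MixedLM(df["conc"], Xf, groups=df["subject"])
result = model.fit(reml=False)   # ML fit so nested models are comparable
print(result.summary())
# nested linear-in-dose vs nonlinear representations can then be compared
# with likelihood-based selection criteria.
```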
NASA Astrophysics Data System (ADS)
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth differ in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. Global models, on the other hand, perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model, multivariate adaptive regression splines (MARS), and a global parametric model, an artificial neural network (ANN), to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to the central business district, number of agricultural cells in a 7 × 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
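The ROC comparison reduces to a few lines once each model's per-cell change probabilities are in hand; the arrays below are synthetic stand-ins:

```python
# Minimal sketch of the ROC comparison: given each model's predicted change
# probabilities per cell and the observed 1991-2010 change map, compute AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 10_000)                    # 1 = cell became urban
p_mars = np.clip(y * 0.60 + rng.uniform(0, 0.6, y.size), 0, 1)
p_ann = np.clip(y * 0.62 + rng.uniform(0, 0.6, y.size), 0, 1)

print("MARS AUC: %.4f" % roc_auc_score(y, p_mars))
print("ANN  AUC: %.4f" % roc_auc_score(y, p_ann))
```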
Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora
2009-01-01
This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test. PMID:22291550
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant cha...
NASA Astrophysics Data System (ADS)
Ghasemi, Jahan B.; Zolfonoun, Ehsan
2013-11-01
A new multicomponent analysis method based on principal component analysis-multivariate adaptive regression splines (PC-MARS) is proposed for the determination of dialkyltin compounds. In Tween-20 micellar media, dimethyl- and dibutyltin react with morin to give fluorescent complexes with maximum emission peaks at 527 and 520 nm, respectively. Before building the MARS models, the spectrofluorimetric matrix data were subjected to principal component analysis and decomposed into PC scores as starting points for the MARS algorithm. The algorithm classifies the calibration data into several groups, in each of which a regression line or hyperplane is fitted. Performance of the proposed method was tested in terms of root mean square error of prediction (RMSEP), using synthetic solutions. The results show the strong potential of PC-MARS as a multivariate calibration method applicable to spectral data for multicomponent determinations. The effects of different experimental parameters on the performance of the method were studied and discussed. The prediction capability of the proposed method was compared with that of a GC-MS method for the determination of dimethyltin and/or dibutyltin.
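A rough sketch of the PC-MARS pipeline, assuming the py-earth package for the MARS step and synthetic spectra in place of the real calibration set:

```python
# Hedged sketch of PC-MARS: decompose the spectrofluorimetric data matrix
# into principal-component scores, then fit MARS on the scores.
import numpy as np
from sklearn.decomposition import PCA
from pyearth import Earth

rng = np.random.default_rng(2)
conc = rng.uniform(0.1, 2.0, 40)                     # analyte concentrations
wl = np.linspace(480, 580, 200)                      # emission wavelengths
peak = np.exp(-0.5 * ((wl - 527) / 12) ** 2)
spectra = conc[:, None] * peak[None, :] + rng.normal(0, 0.02, (40, 200))

scores = PCA(n_components=5).fit_transform(spectra)  # PC scores as MARS inputs
model = Earth().fit(scores, conc)

rmsep = np.sqrt(np.mean((conc - model.predict(scores)) ** 2))
print("training RMSEP-style error:", rmsep)
```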
NASA Astrophysics Data System (ADS)
Emamgolizadeh, S.; Bateni, S. M.; Shahsavani, D.; Ashrafi, T.; Ghorbani, H.
2015-10-01
The soil cation exchange capacity (CEC) is one of the main soil chemical properties and is required in various fields such as environmental and agricultural engineering as well as soil science. In situ measurement of CEC is time consuming and costly. Hence, numerous studies have used traditional regression-based techniques to estimate CEC from more easily measurable soil parameters (e.g., soil texture, organic matter (OM), and pH). However, these models may not be able to adequately capture the complex and highly nonlinear relationship between CEC and its influential soil variables. In this study, Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS) were employed to estimate CEC from more readily measurable soil physical and chemical variables (e.g., OM, clay, and pH) by developing functional relations. The GEP- and MARS-based functional relations were tested at two field sites in Iran. Results showed that GEP and MARS can provide reliable estimates of CEC. It was also found that the MARS model (with a root-mean-square error (RMSE) of 0.318 cmol+ kg-1 and a correlation coefficient (R2) of 0.864) generated slightly better results than the GEP model (with an RMSE of 0.270 cmol+ kg-1 and an R2 of 0.807). The performance of the GEP and MARS models was compared with two existing approaches, namely artificial neural network (ANN) and multiple linear regression (MLR). The comparison indicated that MARS and GEP outperformed the MLR model, but did not perform as well as ANN. Finally, a sensitivity analysis was conducted to determine the most and least influential variables affecting CEC. It was found that OM and pH have the most and least significant effects on CEC, respectively.
Xu, A; Zhang, Y; Ran, T; Liu, H; Lu, S; Xu, J; Xiong, X; Jiang, Y; Lu, T; Chen, Y
2015-01-01
Bruton's tyrosine kinase (BTK) plays a crucial role in B-cell activation and development, and has emerged as a new molecular target for the treatment of autoimmune diseases and B-cell malignancies. In this study, two- and three-dimensional quantitative structure-activity relationship (2D- and 3D-QSAR) analyses were performed on a series of pyridine- and pyrimidine-based BTK inhibitors by means of genetic algorithm optimized multivariate adaptive regression spline (GA-MARS) and comparative molecular similarity index analysis (CoMSIA) methods. Here, we propose a modified MARS algorithm to develop 2D-QSAR models. The top ranked models showed satisfactory statistical results (2D-QSAR: Q² = 0.884, r² = 0.929, r²pred = 0.878; 3D-QSAR: q² = 0.616, r² = 0.987, r²pred = 0.905). Key descriptors selected by 2D-QSAR were in good agreement with the conclusions of 3D-QSAR, and the 3D-CoMSIA contour maps facilitated interpretation of the structure-activity relationship. A new molecular database was generated by molecular fragment replacement (MFR) and further evaluated with GA-MARS and CoMSIA prediction. Twenty-five pyridine and pyrimidine derivatives were finally selected as novel potential BTK inhibitors for further study. These results also demonstrated that our method can be a very efficient tool for the discovery of novel potent BTK inhibitors.
A Spline Regression Model for Latent Variables
ERIC Educational Resources Information Center
Harring, Jeffrey R.
2014-01-01
Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Parmar, Kulwinder Singh
2016-03-01
This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy, and both performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the LSSVM and M5Tree models was decreased by 1.47% and 19.1%, respectively, by the MARS model. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models can be successfully used in estimating monthly river water pollution levels using the AMM, TKN and WT parameters as inputs.
Garcia Nieto, P J; Sánchez Lasheras, F; de Cos Juez, F J; Alonso Fernández, J R
2011-11-15
There is an increasing need to describe cyanobacteria blooms, since some cyanobacteria produce toxins, termed cyanotoxins, which can be dangerous to humans as well as to other animals and to life in general. It must be remarked that cyanobacteria reproduce explosively under certain conditions, resulting in algae blooms that can become harmful to other species if the cyanobacteria involved produce cyanotoxins. In this research work, the evolution of cyanotoxins in the Trasona reservoir (Principality of Asturias, Northern Spain) was studied successfully using a data-mining methodology based on the multivariate adaptive regression splines (MARS) technique. The results of the present study are two-fold: on one hand, the importance of the different kinds of cyanobacteria for the presence of cyanotoxins in the reservoir is established through the MARS model; on the other hand, a predictive model able to forecast the possible presence of cyanotoxins in the short term was obtained. The agreement of the MARS model with the experimental data confirmed its good performance. Finally, conclusions of this research are presented. PMID:21920665
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Karslioglu, Mahmut Onur
2015-04-01
There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
Butte, Nancy F.; Wong, William W.; Adolph, Anne L.; Puyau, Maurice R.; Vohra, Firoz A.; Zakeri, Issa F.
2010-01-01
Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant characteristics, heart rate (HR), and accelerometer counts (AC) for prediction of minute-by-minute EE, and hence 24-h total EE (TEE), against a 7-d doubly labeled water (DLW) method in children and adolescents. Our secondary aim was to demonstrate the utility of CSTS and MARS to predict awake EE, sleep EE, and activity EE (AEE) from 7-d HR and AC records, because these shorter periods are not verifiable by DLW, which provides an estimate of the individual's mean TEE over a 7-d interval. CSTS and MARS models were validated in 60 normal-weight and overweight participants (ages 5–18 y). The Actiheart monitor was used to simultaneously measure HR and AC. For prediction of TEE, mean absolute errors were 10.7 ± 307 kcal/d and 18.7 ± 252 kcal/d for CSTS and MARS models, respectively, relative to DLW. Corresponding root mean square error values were 305 and 251 kcal/d for CSTS and MARS models, respectively. Bland-Altman plots indicated that the predicted values were in good agreement with the DLW-derived TEE values. Validation of CSTS and MARS models based on participant characteristics, HR monitoring, and accelerometry for the prediction of minute-by-minute EE, and hence 24-h TEE, against the DLW method indicated no systematic bias and acceptable limits of agreement for pediatric groups and individuals under free-living conditions. PMID:20573939
Penalized spline estimation for functional coefficient regression models
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z.
2011-01-01
The functional coefficient regression models assume that the regression coefficients vary with some “threshold” variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called “curse of dimensionality” in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application. PMID:21516260
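For readers unfamiliar with P-splines, a minimal version of the underlying smoother (a rich B-spline basis with a ridge-type difference penalty, solved in one linear system) can be sketched as follows; this is a generic univariate sketch, not the paper's functional-coefficient estimator:

```python
# Minimal P-spline smoother: rich B-spline basis + 2nd-order difference
# penalty on adjacent coefficients, solved as a single ridge-type system.
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    lo, hi = x.min() - 1e-6, x.max() + 1e-6
    # repeated boundary knots give a proper B-spline basis on [lo, hi]
    t = np.r_[[lo] * degree, np.linspace(lo, hi, n_knots), [hi] * degree]
    B = BSpline.design_matrix(x, t, degree).toarray()
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # difference penalty matrix
    coef = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return t, coef

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(200)
t, coef = pspline_fit(x, y, lam=5.0)
yhat = BSpline(t, coef, 3)(x)          # smoothed fit; lam controls smoothness
```

Different functional coefficients receiving different smoothness, as the abstract notes, corresponds to assigning each coefficient curve its own lam in such a system.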
Balshi, M. S.; McGuire, A.D.; Duffy, P.; Flannigan, M.; Walsh, J.; Melillo, J.
2009-01-01
Fire is a common disturbance in the North American boreal forest that influences ecosystem structure and function. The temporal and spatial dynamics of fire are likely to be altered as climate continues to change. In this study, we ask the question: how will area burned in boreal North America by wildfire respond to future changes in climate? To evaluate this question, we developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was substantially more predictable in the western portion of boreal North America than in eastern Canada. Burned area was also not very predictable in areas of substantial topographic relief and in areas along the transition between boreal forest and tundra. At the scale of Alaska and western Canada, the empirical fire models explain on the order of 82% of the variation in annual area burned for the period 1960-2002. July temperature was the most frequently occurring predictor across all models, but the fuel moisture codes for the months June through August (as a group) entered the models as the most important predictors of annual area burned. To predict changes in the temporal and spatial dynamics of fire under future climate, the empirical fire models used output from the Canadian Climate Center CGCM2 global climate model to predict annual area burned through the year 2100 across Alaska and western Canada. Relative to 1991-2000, the results suggest that average area burned per decade will double by 2041-2050 and will increase on the order of 3.5-5.5 times by the last decade of the 21st century. To improve the ability to better predict wildfire across Alaska and Canada, future research should focus on incorporating additional effects of long-term and successional
Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S
2014-07-01
In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield.
Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression
NASA Astrophysics Data System (ADS)
Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.
2015-12-01
Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of a changing climate, it is possible that the assumption of stationarity is no longer valid and the results of conventional analyses become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-spline quantile regression, which allows the modelling of non-stationary data that depend on covariates. Such covariates may have a linear or nonlinear dependence. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimation of the proposed model at each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge at high annual non-exceedance probabilities. Keywords: quantile regression, B-spline functions, MCMC, streamflow, climate indices, non-stationarity.
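The paper estimates the model in a Bayesian MCMC framework; as a rough frequentist stand-in, the basis-plus-quantile idea can be sketched with statsmodels QuantReg and a patsy B-spline basis (synthetic data, hypothetical covariate name):

```python
# Hedged sketch: non-stationary flow quantiles via B-spline quantile
# regression, frequentist QuantReg standing in for the paper's MCMC approach.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({"climate_index": rng.uniform(-2, 2, 60)})
df["peak_flow"] = 100 + 30 * np.sin(df["climate_index"]) + rng.gumbel(0, 15, 60)

# a B-spline basis in the covariate lets the quantile vary nonlinearly with it
mod = smf.quantreg("peak_flow ~ bs(climate_index, df=4)", df)
res = mod.fit(q=0.9)                  # conditional 0.9 quantile of annual maxima
print(res.params)
```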
Adaptive Predistortion Using Cubic Spline Nonlinearity Based Hammerstein Modeling
NASA Astrophysics Data System (ADS)
Wu, Xiaofang; Shi, Jianghong
In this paper, a new Hammerstein predistorter model for power amplifier (PA) linearization is proposed. The key feature of the model is that cubic splines, instead of conventional high-order polynomials, are used as the static nonlinearities, because splines can represent hard nonlinearities accurately while circumventing the numerical instability problem. Furthermore, according to the amplifier's AM/AM and AM/PM characteristics, real-valued cubic spline functions compensate the nonlinear distortion of the amplifier, and the following finite impulse response (FIR) filters eliminate the memory effects of the amplifier. In addition, the identification algorithm of the Hammerstein predistorter is discussed. The predistorter is implemented on the indirect learning architecture, and the separable nonlinear least squares (SNLS) Levenberg-Marquardt algorithm is adopted because the separation method reduces the dimension of the nonlinear search space and thus greatly simplifies the identification procedure. However, the convergence performance of the iterative SNLS algorithm is sensitive to the initial estimate, so an effective normalization strategy is presented to solve this problem. Simulation experiments were carried out on a single-carrier WCDMA signal. Results show that, compared to conventional polynomial predistorters, the proposed Hammerstein predistorter achieves higher linearization performance when the PA is near saturation and comparable linearization performance when the PA is mildly nonlinear. Furthermore, the proposed predistorter is numerically more stable in all input back-off cases. The results also demonstrate the validity of the convergence scheme.
Algebraic grid adaptation method using non-uniform rational B-spline surface modeling
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, B. K.
1992-01-01
An algebraic adaptive grid system based on the equidistribution law and utilizing a Non-Uniform Rational B-Spline (NURBS) surface for redistribution is presented. A weight function utilizing a properly weighted boolean sum of various flow-field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.
Response-adaptive regression for longitudinal data.
Wu, Shuang; Müller, Hans-Georg
2011-09-01
We propose a response-adaptive model for functional linear regression, which is adapted to sparsely sampled longitudinal responses. Our method aims at predicting response trajectories and models the regression relationship by directly conditioning the sparse and irregular observations of the response on the predictor, which can be of scalar, vector, or functional type. This obliterates the need to model the response trajectories, a task that is challenging for sparse longitudinal data and was previously required for functional regression implementations for longitudinal data. The proposed approach turns out to be superior compared to previous functional regression approaches in terms of prediction error. It encompasses a variety of regression settings that are relevant for the functional modeling of longitudinal data in the life sciences. The improved prediction of response trajectories with the proposed response-adaptive approach is illustrated for a longitudinal study of Kiwi weight growth and by an analysis of the dynamic relationship between viral load and CD4 cell counts observed in AIDS clinical trials. PMID:21133880
A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline
NASA Astrophysics Data System (ADS)
Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong
2015-11-01
The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. But Raman signals can be easily corrupted by a fluorescent background, therefore we presented a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal was suppressed by fitting a curve called a baseline using a cyclic approximation method. Instead of the traditional polynomial fitting, we used the B-spline as the fitting algorithm due to its advantages of low-order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also presented an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm can obtain the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We showed that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines were also corrected by the proposed method.
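The cyclic approximation loop can be sketched minimally: fit a smooth spline, clip the signal down to the fit so peaks stop pulling the baseline up, and repeat. The sketch below uses a fixed smoothing factor on synthetic data; the paper's adaptive knot generation is its contribution and is not reproduced here:

```python
# Minimal sketch of cyclic-approximation baseline fitting for Raman spectra:
# repeatedly fit a smoothing spline and pull points above the fit down to it,
# so the narrow Raman peaks stop influencing the broad fluorescent baseline.
import numpy as np
from scipy.interpolate import splrep, splev

def spline_baseline(shift, intensity, s=None, n_iter=30):
    work = intensity.astype(float).copy()
    for _ in range(n_iter):
        tck = splrep(shift, work, s=s if s is not None else len(shift))
        fit = splev(shift, tck)
        work = np.minimum(work, fit)     # clip peaks above the current curve
    return fit

shift = np.linspace(200, 2000, 1000)                 # Raman shift (cm^-1)
rng = np.random.default_rng(3)
baseline = 1e-3 * (shift - 1000) ** 2 / 100 + 50     # synthetic fluorescence
peaks = 80 * np.exp(-0.5 * ((shift - 800) / 5) ** 2)
signal = baseline + peaks + rng.normal(0, 1, shift.size)

corrected = signal - spline_baseline(shift, signal)  # baseline-free spectrum
```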
Brown, C; Adcock, A; Azevedo, S; Liebman, J; Bond, E
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
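A rough modern stand-in for the single-channel core of this procedure, assuming SciPy 1.10+, is make_smoothing_spline, which likewise selects the smoothing level by minimizing the GCV score; the report's multi-channel stitching, redundant-sample handling, and residual tests go beyond this sketch:

```python
# Single-channel cubic smoothing spline with GCV-selected smoothing level,
# a rough stand-in for the report's Hutchinson/de Hoog-based algorithm.
# Requires strictly increasing, unique time samples (SciPy >= 1.10).
import numpy as np
from scipy.interpolate import make_smoothing_spline

t = np.linspace(0.0, 10.0, 500)                 # time samples (one channel)
rng = np.random.default_rng(4)
y = np.exp(-((t - 5) ** 2)) + rng.normal(0, 0.05, t.size)

spl = make_smoothing_spline(t, y)               # lam=None -> chosen by GCV
residuals = y - spl(t)
print("residual std:", residuals.std())         # crude check of the fit
```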
NASA Astrophysics Data System (ADS)
Zhang, X.; Liang, S.; Wang, G.
2015-12-01
Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements are accurate but spatially sparse, whereas the global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, with the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables to train the thin-plate spline model. We evaluated the performance of the approach using cross-validation at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicated that the thin-plate smoothing spline method can be effectively used to calibrate satellite-derived ISR products with ground measurements to achieve better accuracy.
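The calibration idea can be sketched as follows: fit a smoothed thin-plate spline on station coordinates plus elevation with station-reconstructed ISR as the response, then evaluate it on the satellite grid. In the study the GLASS product itself also enters as an explanatory variable; here synthetic stand-in data keeps the sketch self-contained:

```python
# Hedged sketch of local calibration with a smoothed thin-plate spline:
# train on station (lon, lat, elevation) vs reconstructed ISR, then predict
# the calibrated field on the satellite grid. Synthetic stand-in data.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
stations = np.column_stack([rng.uniform(75, 130, 80),   # lon
                            rng.uniform(20, 50, 80),    # lat
                            rng.uniform(0, 4, 80)])     # elevation (km)
isr_station = (200 + 2 * stations[:, 2]
               - 0.5 * (stations[:, 1] - 35) ** 2
               + rng.normal(0, 3, 80))

tps = RBFInterpolator(stations, isr_station,
                      kernel="thin_plate_spline", smoothing=1.0)

grid = np.column_stack([rng.uniform(75, 130, 1000),
                        rng.uniform(20, 50, 1000),
                        rng.uniform(0, 4, 1000)])
isr_calibrated = tps(grid)              # locally calibrated ISR field
```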
Adaptive Confidence Bands for Nonparametric Regression Functions
Cai, T. Tony; Low, Mark; Ma, Zongming
2014-01-01
A new formulation for the construction of adaptive confidence bands in non-parametric function estimation problems is proposed. Confidence bands are constructed which have size that adapts to the smoothness of the function while guaranteeing that both the relative excess mass of the function lying outside the band and the measure of the set of points where the function lies outside the band are small. It is shown that the bands adapt over a maximum range of Lipschitz classes. The adaptive confidence band can be easily implemented in standard statistical software with wavelet support. Numerical performance of the procedure is investigated using both simulated and real datasets. The numerical results agree well with the theoretical analysis. The procedure can be easily modified and used for other nonparametric function estimation models. PMID:26269661
NASA Astrophysics Data System (ADS)
Luo, G. Y.; Osypiw, D.; Irle, M.
2003-05-01
The dynamic behaviour of wood machining processes affects the surface finish quality of machined workpieces. In order to meet the requirements of increased production efficiency and improved product quality, surface quality information is needed for enhanced process control. However, current methods using high price devices or sophisticated designs, may not be suitable for industrial real-time application. This paper presents a novel approach of surface quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments have been performed to extract the feature, which is the correlation between the relevant frequency band(s) of vibration with the change of the amplitude and the surface quality. The graphs of the experimental results demonstrate that the change of the amplitude in the selective frequency bands with variable resolution (linear and non-linear) reflects the quality of surface finish, and the root sum square of wavelet power spectrum is a good indication of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed, maintaining a constant spindle motor speed during cutting. This will lead to higher level control and machining rates while keeping dimensional integrity and surface finish within specification.
The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...
Michna, Agata; Braselmann, Herbert; Selmansberger, Martin; Dietz, Anne; Hess, Julia; Gomolka, Maria; Hornhardt, Sabine; Blüthgen, Nils; Zitzelsberger, Horst; Unger, Kristian
2016-01-01
Gene expression time-course experiments allow the study of the dynamics of transcriptomic changes in cells exposed to different stimuli. However, most approaches for the reconstruction of gene association networks (GANs) do not propose prior-selection approaches tailored to time-course transcriptome data. Here, we present a workflow for the identification of GANs from time-course data using prior selection of genes differentially expressed over time, identified by natural cubic spline regression modeling (NCSRM). The workflow comprises three major steps: 1) identification of differentially expressed genes from time-course expression data by employing NCSRM, 2) use of regularized dynamic partial correlation, as implemented in GeneNet, to infer GANs from the differentially expressed genes, and 3) identification and functional characterization of the key nodes in the reconstructed networks. The approach was applied to a time-resolved transcriptome data set of radiation-perturbed cell culture models of non-tumor cells with normal and increased radiation sensitivity. NCSRM detected significantly more genes than another commonly used method for time-course transcriptome analysis (BETR). While most genes detected with BETR were also detected with NCSRM, the false-detection rate of NCSRM was low (3%). The GANs reconstructed from genes detected with NCSRM showed a better overlap with the Reactome interactome network than GANs derived from BETR-detected genes. After exposure to 1 Gy, the normal sensitive cells showed only a sparse response compared to cells with increased sensitivity, which exhibited a strong response mainly of genes related to the senescence pathway. After exposure to 10 Gy, the response of the normal sensitive cells was mainly associated with senescence and that of cells with increased sensitivity with apoptosis. We discuss these results in a clinical context and underline the impact of senescence-associated pathways in acute radiation response of normal
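Step 1 of the workflow can be illustrated on synthetic data: per gene, regress expression on a natural cubic spline basis of time and F-test against a constant model. The df value and time grid below are assumptions, not the authors' settings:

```python
# Illustrative sketch of the NCSRM step: per-gene natural cubic spline
# regression on time, with an overall F-test for any change over time.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from patsy import dmatrix

rng = np.random.default_rng(5)
time = np.array([0, 1, 2, 4, 8, 12, 24], dtype=float)
expr = pd.DataFrame(rng.normal(0, 1, (100, time.size)))   # 100 genes x 7 times
expr.iloc[0] += np.log1p(time)                            # one responsive gene

# natural cubic spline basis (intercept included by the patsy formula)
X = dmatrix("cr(time, df=3)", {"time": time}, return_type="dataframe")

pvals = {}
for gene, y in expr.iterrows():
    fit = sm.OLS(y.to_numpy(), X).fit()
    pvals[gene] = fit.f_pvalue           # small p-value: expression changes
print(sorted(pvals, key=pvals.get)[:5])  # top candidates for the GAN step
```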
Hypnotizability as a Function of Repression, Adaptive Regression, and Mood
ERIC Educational Resources Information Center
Silver, Maurice Joseph
1974-01-01
Forty male undergraduates were assessed in a personality assessment session and a hypnosis session. The personality traits studied were repressive style and adaptive regression, while the transitory variable was mood prior to hypnosis. Hypnotizability was a significant interactive function of repressive style and mood, but not of adaptive…
Adaptive support vector regression for UAV flight control.
Shin, Jongho; Jin Kim, H; Kim, Youdan
2011-01-01
This paper explores an application of support vector regression for adaptive control of an unmanned aerial vehicle (UAV). Unlike neural networks, support vector regression (SVR) generates global solutions, because SVR basically solves quadratic programming (QP) problems. With this advantage, the input-output feedback-linearized inverse dynamic model and the compensation term for the inversion error are identified off-line, which we call I-SVR (inversion SVR) and C-SVR (compensation SVR), respectively. In order to compensate for the inversion error and the unexpected uncertainty, an online adaptation algorithm for the C-SVR is proposed. Then, the stability of the overall error dynamics is analyzed by the uniformly ultimately bounded property in the nonlinear system theory. In order to validate the effectiveness of the proposed adaptive controller, numerical simulations are performed on the UAV model.
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
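The key computational fact, that the global solution comes from an adaptively soft-thresholded SVD, fits in a few lines. The weight rule below (inverse singular value) and applying the shrinkage directly to the least-squares fit are illustrative simplifications:

```python
# Sketch of adaptive singular-value thresholding: larger singular values are
# thresholded less, shrinking the matrix toward low rank.
import numpy as np

def adaptive_svt(C, lam, gamma=1.0, eps=1e-8):
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    w = 1.0 / (s + eps) ** gamma          # weights decrease with singular value
    s_shrunk = np.maximum(s - lam * w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 8))
B_true = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 6))   # rank 3
Y = X @ B_true + 0.1 * rng.normal(size=(100, 6))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]     # unrestricted estimate
B_hat = adaptive_svt(B_ols, lam=0.5)             # reduced-rank shrinkage
print(np.linalg.matrix_rank(B_hat, tol=1e-6))
```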
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail
2015-04-01
The research deals with the adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high-dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature-selection tasks when working with high-dimensional data [1,3]. In the present research, Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in the case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out using the leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind-field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high-dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
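The estimator underlying GRNN is the Nadaraya-Watson kernel average; a minimal anisotropic version, where the per-feature kernel widths are exactly what adaptive GRNN tunes (e.g., by leave-one-out error) to rank feature relevance, looks like this:

```python
# Minimal anisotropic GRNN / Nadaraya-Watson estimator: a Gaussian-kernel
# weighted average with a separate bandwidth per feature. Wide widths
# effectively switch irrelevant features off, the lever adaptive GRNN tunes.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """Kernel-weighted average of y_train with per-feature widths sigma."""
    d = (X_query[:, None, :] - X_train[None, :, :]) / sigma   # scaled diffs
    w = np.exp(-0.5 * np.sum(d ** 2, axis=2))                 # kernel weights
    return (w @ y_train) / np.maximum(w.sum(axis=1), 1e-12)

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, (300, 13))            # e.g. coords + geo-features
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(300)

sigma = np.full(13, 0.5)
sigma[1:] = 5.0          # wide widths downweight the 12 irrelevant features
print(grnn_predict(X, y, X[:5], sigma))
```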
NASA Technical Reports Server (NTRS)
Zhang, Zhimin; Tomlinson, John; Martin, Clyde
1994-01-01
In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate the properties of different spline approximations.
Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello
2014-11-01
Exposures to bisphenol-A, a weak estrogenic chemical largely used for the production of plastic containers, can affect rodent behaviour. Thus, we examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A while pregnant and/or lactating, by median and linear splines analyses. Subsequently, meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses. Conversely, a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time that they employed to manifest an attitude was analysed, the meta-regression indicated that a borderline-significant increment of anxiogenic-like effects was present at low doses regardless of sex (β = -0.8%, 95% C.I. -1.7/0.1, P = 0.076, at ≤120 μg bisphenol-A), whereas only bisphenol-A males exhibited a significant inhibition of spatial skills (β = 0.7%, 95% C.I. 0.2/1.2, P = 0.004, at ≤100 μg/day). A significant increment of aggressiveness was observed in both sexes (β = 67.9, C.I. 3.4/172.5, P = 0.038, at >4.0 μg). Bisphenol-A treatments also significantly abrogated spatial learning and ability in males (P < 0.001 vs. females). Overall, our study showed that developmental exposures to low doses of bisphenol-A, e.g. ≤120 μg/day, were associated with behavioural aberrations in offspring.
Adaptive sparse polynomial chaos expansion based on least angle regression
NASA Astrophysics Data System (ADS)
Blatman, Géraud; Sudret, Bruno
2011-03-01
Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive, i.e. of Galerkin type, or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate. To address such problems, the paper describes a non-intrusive method that builds a sparse PC expansion. First, an original strategy for truncating the PC expansions, based on hyperbolic index sets, is proposed. Then an adaptive algorithm based on least angle regression (LAR) is devised for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to avoid the overfitting phenomenon. The accuracy of the PC metamodel is checked using an estimate inspired by statistical learning theory, namely the corrected leave-one-out error. As a consequence, a rather small number of PC terms are eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical "full" PC approximation. The convergence of the algorithm is shown on an analytical function. Then the method is illustrated on three stochastic finite element problems. The first model features 10 input random variables, whereas the two others involve an input random field, which is discretized into 38 and 30-500 random variables, respectively.
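The core selection step can be sketched with scikit-learn's lars_path standing in for the paper's LAR loop, on a deliberately small univariate Legendre candidate basis (the method itself uses multivariate, hyperbolically truncated bases):

```python
# Sketch of LAR-based term selection for a sparse polynomial expansion:
# least angle regression orders the candidate basis terms by relevance.
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import lars_path

rng = np.random.default_rng(7)
xi = rng.uniform(-1, 1, 400)                    # standardized input variable
y = 1.0 + 2.0 * legendre.legval(xi, [0, 0, 1]) + 0.05 * rng.standard_normal(400)

# candidate basis: Legendre polynomials P_0 .. P_9 evaluated at the samples
P = np.column_stack([legendre.legval(xi, np.eye(10)[k]) for k in range(10)])

alphas, active, coefs = lars_path(P, y, method="lasso")
print("order in which basis terms enter:", active)
# retaining only the first few entering terms yields a sparse PC-style model
```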
Technology Transfer Automated Retrieval System (TEKTRAN)
Free-living measurements of 24-h total energy expenditure (TEE) and activity energy expenditure (AEE) are required to better understand the metabolic, physiological, behavioral, and environmental factors affecting energy balance and contributing to the global epidemic of childhood obesity. The spec...
Dissociating Conflict Adaptation from Feature Integration: A Multiple Regression Approach
ERIC Educational Resources Information Center
Notebaert, Wim; Verguts, Tom
2007-01-01
Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on…
Adaptation of a weighted regression approach to evaluate water quality trends in an estuary
To improve the description of long-term changes in water quality, a weighted regression approach developed to describe trends in pollutant transport in rivers was adapted to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach allows...
Adaptation of a Weighted Regression Approach to Evaluate Water Quality Trends in an Estuary
To improve the description of long-term changes in water quality, we adapted a weighted regression approach to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach, originally developed to resolve pollutant transport trends in rivers...
Pax6 in Collembola: Adaptive Evolution of Eye Regression.
Hou, Ya-Nan; Li, Sheng; Luan, Yun-Xia
2016-01-01
Unlike the compound eyes in insects, collembolan eyes are comparatively simple: some species have eyes with different numbers of ocelli (1 + 1 to 8 + 8), and some species have no apparent eye structures. Pax6 is a universal master control gene for eye morphogenesis. In this study, full-length Pax6 cDNAs, Fc-Pax6 and Cd-Pax6, were cloned from an eyeless collembolan (Folsomia candida, soil-dwelling) and an eyed one (Ceratophysella denticulata, surface-dwelling), respectively. Their phylogenetic positions are between the two Pax6 paralogs in insects, eyeless (ey) and twin of eyeless (toy), and their protein sequences are more similar to Ey than to Toy. Both Fc-Pax6 and Cd-Pax6 could induce ectopic eyes in Drosophila, while Fc-Pax6 exhibited much weaker transactivation ability than Cd-Pax6. The C-terminus of collembolan Pax6 is indispensable for its transactivation ability, and determines the differences in transactivation ability between Fc-Pax6 and Cd-Pax6. One possible reason is that Fc-Pax6 accumulated more mutations at some key functional sites of the C-terminus under a lower selection pressure on eye development due to the dark habitats of F. candida. The composite data provide the first molecular evidence for the monophyletic origin of collembolan eyes, and indicate that the eye degeneration of collembolans is caused by adaptive evolution. PMID:26856893
Guo, Yi; Errichello, Robert
2013-08-29
An analytical model is developed to evaluate the design of a spline coupling. For a given torque and shaft misalignment, the model calculates the number of teeth in contact, tooth loads, stiffnesses, stresses, and safety factors. The analytic model provides essential spline coupling design and modeling information and could be easily integrated into gearbox design and simulation tools.
Improved Estimation of Earth Rotation Parameters Using the Adaptive Ridge Regression
NASA Astrophysics Data System (ADS)
Huang, Chengli; Jin, Wenjing
1998-05-01
Multicollinearity among regression variables is a common phenomenon in the reduction of astronomical data. The phenomenon of multicollinearity and its diagnostic factors are introduced first. As a remedy, a new method called adaptive ridge regression (ARR), which improves the choice of the departure constant θ in ridge regression, is suggested and applied to a case in which the Earth orientation parameters (EOP) are determined by lunar laser ranging (LLR). A diagnosis using variance inflation factors (VIFs) shows that serious multicollinearity exists among the regression variables. It is shown that the ARR method is effective in reducing the multicollinearity and makes the regression coefficients more stable than ordinary least squares (LS) estimation, especially when the multicollinearity is serious.
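The diagnose-then-stabilize pattern described here can be illustrated with standard tools: variance inflation factors to flag the multicollinearity, then ridge regression to stabilize the coefficients. The paper's adaptive rule for choosing the departure constant θ is not reproduced; a cross-validated ridge stands in for it:

```python
# Sketch: VIF diagnosis of multicollinearity, then ridge as the remedy.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(8)
x1 = rng.normal(size=200)
X = np.column_stack([x1,
                     x1 + 0.01 * rng.normal(size=200),   # nearly collinear
                     rng.normal(size=200)])
y = X @ np.array([1.0, 1.0, 0.5]) + 0.1 * rng.normal(size=200)

for j in range(X.shape[1]):
    print("VIF_%d = %.1f" % (j, variance_inflation_factor(X, j)))  # huge for 0, 1

ridge = RidgeCV(alphas=np.logspace(-4, 2, 25)).fit(X, y)
print("ridge coefficients:", ridge.coef_)    # stabilized despite collinearity
```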
Algamal, Zakariya Yahya; Lee, Muhammad Hisyam
2015-12-01
Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, known as the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to estimate the gene coefficients and perform gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weights; however, this choice may not be preferable for two reasons: first, the elastic net estimator is biased in selecting genes, and second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and to encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly more consistent in selecting genes than the other three competing regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than the other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method for high-dimensional cancer classification.
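A minimal two-step sketch of a weighted (adaptive) elastic-net logistic regression is given below. It does not reproduce the specific initial-weight adjustment of AAElastic; the ridge-based initial fit, the feature-rescaling trick, and all parameter values are assumptions made for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

def adaptive_elastic_net(X, y, l1_ratio=0.5, gamma=1.0):
    # Step 1: initial coefficient estimates from a ridge-penalized logistic fit.
    init = LogisticRegression(penalty="l2", C=1.0, max_iter=5000).fit(X, y)
    w = np.abs(init.coef_.ravel()) ** gamma + 1e-8      # adaptive weights
    # Step 2: scaling column j by w_j makes the L1 part effectively penalize |beta_j| / w_j.
    enet = LogisticRegression(penalty="elasticnet", solver="saga",
                              l1_ratio=l1_ratio, C=1.0, max_iter=5000).fit(X * w, y)
    return enet.coef_.ravel() * w                        # coefficients on the original scale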
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1994-01-01
Scientific data often contain random errors that make plotting and curve-fitting difficult. The Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to a flexible, smooth representation of experimental data. The user sets the conditions for each consecutive pair of knots (knots are user-defined divisions in the data set): to apply no tension, to apply fixed tension, or to determine tension with a tension adjustment algorithm. The user also selects the number of knots, the knot abscissas, and the allowed maximum deviations from line segments. The selection of these quantities depends on the actual data and on the requirements of a particular application. This program differs from the usual spline under tension in that it allows the user to specify different tension values between each adjacent pair of knots rather than a constant tension over the entire data range. The subroutines use an automatic adjustment scheme that varies the tension parameter for each interval until the maximum deviation of the spline from the line joining the knots is less than or equal to a user-specified amount. This procedure frees the user from the drudgery of adjusting individual tension parameters while still giving control over the local behavior of the spline. The Rational Spline program was written entirely in FORTRAN for implementation on a CYBER 850 operating under NOS. It has a central memory requirement of approximately 1500 words. The program was released in 1988.
Clinical Trials: Spline Modeling is Wonderful for Nonlinear Effects.
Cleophas, Ton J
2016-01-01
Traditionally, nonlinear shapes such as the smooth contours of airplanes, boats, and motor cars were constructed from scale models using stretched thin wooden strips, otherwise called splines. In the past decades, mechanical spline methods have been replaced with their mathematical counterparts. The objective of the study was to examine whether spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial, and whether it can detect patterns in a trial that are relevant but go unobserved with simpler regression models. A clinical trial assessing the effect of quantity of care on quality of care was used as an example. Spline curves consisting of 4 or 5 cubic functions were applied. SPSS statistical software was used for the analysis. The spline curves of our data outperformed the traditional curves because (1) unlike the traditional curves, they did not miss the top quality of care given in either subgroup, (2) unlike the traditional curves, they, rightly, did not produce sinusoidal patterns, and (3) unlike the traditional curves, they provided a virtually 100% match of the original values. We conclude that (1) spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial; (2) spline modeling can detect patterns in a trial that are relevant but may go unobserved with simpler regression models; (3) in clinical research, spline modeling has great potential, given the presence of many nonlinear effects in this field of research and given its mathematical refinement to fit nearly any nonlinear effect accurately; and (4) spline modeling should make it possible to improve predictions from clinical research for the benefit of health decisions and health care. We hope that this brief introduction to spline modeling will stimulate clinical investigators to start using this wonderful method.
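For readers unfamiliar with the mechanics, a least-squares cubic spline with a few interior knots (giving the four or five cubic pieces mentioned above) can be fitted as in this hedged sketch; the exposure/outcome data and the knot placement are invented for illustration and do not reproduce the trial analysis.

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
exposure = np.sort(rng.uniform(0.0, 10.0, 200))                    # hypothetical exposure values
outcome = np.sin(exposure) + 0.1 * exposure + rng.normal(0, 0.2, 200)

knots = np.quantile(exposure, [0.2, 0.4, 0.6, 0.8])                # 4 interior knots -> 5 cubic pieces
spline = LSQUnivariateSpline(exposure, outcome, knots, k=3)        # least-squares cubic spline
print(spline([2.0, 5.0, 8.0]))                                     # fitted outcome at chosen exposures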
Interchangeable spline reference guide
Dolin, R.M.
1994-05-01
The WX-Division Integrated Software Tools (WIST) Team evolved from two previous committees: the first was the W78 Solid Modeling Pilot Project's Spline Subcommittee, which later evolved into the WX-Division Spline Committee. The mission of the WIST team is to investigate current CAE engineering processes relating to complex geometry and to develop methods for improving those processes. Specifically, the WIST team is developing technology that allows the Division to use multiple spline representations. We are also updating the contour system (CONSYS) database to take full advantage of the Division's expanding electronic engineering process. Both of these efforts involve developing interfaces to commercial CAE systems and writing new software. The WIST team is comprised of members from WX-11, -12, and -13. This "cross-functional" approach to software development is somewhat new in the Division, so an effort is being made to formalize our processes and assure quality at each phase of development. Chapter one represents a theory manual and is one phase of the formal process. The theory manual is followed by a software requirements document, a specification document, and software verification and validation documents. The purpose of this guide is to present the theory underlying the interchangeable spline technology and its application. Verification and validation test results are also presented as proof of principle.
NASA Astrophysics Data System (ADS)
Börger, Klaus; Schmidt, Michael; Dettmering, Denise; Limberger, Marco; Erdogan, Eren; Seitz, Florian; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte; Mrotzek, Niclas
2016-04-01
Today, the observations of space geodetic techniques are usually available with rather low latency, which also applies to space missions observing the solar-terrestrial environment. Therefore, we can use all these measurements in near real-time to compute and provide ionosphere information, e.g. the vertical total electron content (VTEC). GSSAC and BGIC support a project aiming at a service for providing ionosphere information. This project is called OPTIMAP, meaning "Operational Tool for Ionosphere Mapping and Prediction"; the scientific work is mainly done by the German Geodetic Research Institute of the Technical University of Munich (DGFI-TUM) and the Institute for Astrophysics of the University of Goettingen (IAG). The OPTIMAP strategy for providing high-quality ionosphere target quantities, such as VTEC or the electron density, includes mathematical approaches and tools allowing the model to adapt to the real observational scenario, a significant improvement with respect to the traditional well-established methods. For example, OPTIMAP combines different observation types such as GNSS (GPS, GLONASS), satellite altimetry (Jason-2), DORIS, and radio-occultation measurements (FORMOSAT-3/COSMIC). All these observations feed into a Kalman filter to compute global ionosphere maps, i.e. VTEC, for the current instant of time and as a forecast for a couple of subsequent days. Mathematically, the global VTEC is set up as a series expansion in terms of two-dimensional basis functions defined as tensor products of trigonometric B-splines for longitude and polynomial B-splines for latitude. Compared to the classical spherical harmonics, B-splines have a localizing character and can therefore handle an inhomogeneous data distribution properly. Finally, B-splines enable a so-called multi-resolution representation (MRR), which allows global and regional modelling approaches to be combined. In addition to the geodetic measurements, Sun observations are pre
Locally adaptive regression filter-based infrared focal plane array non-uniformity correction
NASA Astrophysics Data System (ADS)
Li, Jia; Qin, Hanlin; Yan, Xiang; Huang, He; Zhao, Yingjuan; Zhou, Huixin
2015-10-01
Due to the limitations of the manufacturing technology, the response rates to the same infrared radiation intensity in each infrared detector unit are not identical. As a result, the non-uniformity of infrared focal plane array, also known as fixed pattern noise (FPN), is generated. To solve this problem, correcting the non-uniformity in infrared image is a promising approach, and many non-uniformity correction (NUC) methods have been proposed. However, they have some defects such as slow convergence, ghosting and scene degradation. To overcome these defects, a novel non-uniformity correction method based on locally adaptive regression filter is proposed. First, locally adaptive regression method is used to separate the infrared image into base layer containing main scene information and the detail layer containing detailed scene with FPN. Then, the detail layer sequence is filtered by non-linear temporal filter to obtain the non-uniformity. Finally, the high quality infrared image is obtained by subtracting non-uniformity component from original image. The experimental results show that the proposed method can significantly eliminate the ghosting and the scene degradation. The results of correction are superior to the THPF-NUC and NN-NUC in the aspects of subjective visual and objective evaluation index.
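The base/detail decomposition and temporal filtering described above can be caricatured as follows. A Gaussian filter stands in for the locally adaptive regression filter and a running mean stands in for the non-linear temporal filter, so this is a simplified sketch under those assumptions rather than the authors' method.

import numpy as np
from scipy.ndimage import gaussian_filter

def correct_sequence(frames, sigma=3.0):
    fpn_estimate = np.zeros_like(frames[0], dtype=float)
    corrected = []
    for k, frame in enumerate(frames, start=1):
        base = gaussian_filter(frame.astype(float), sigma)    # base layer: main scene content
        detail = frame - base                                 # detail layer: fine detail + FPN
        fpn_estimate += (detail - fpn_estimate) / k           # temporal running mean of the detail layer
        corrected.append(frame - fpn_estimate)                # subtract the estimated non-uniformity
    return corrected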
Spline screw payload fastening system
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
1993-01-01
A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorally located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but are constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the
Spline screw payload fastening system
NASA Astrophysics Data System (ADS)
Vranish, John M.
1992-09-01
A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorally located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but are constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the
Spline screw payload fastening system
NASA Astrophysics Data System (ADS)
Vranish, John M.
1993-09-01
A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorally located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but are constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the
An adaptive online learning approach for Support Vector Regression: Online-SVR-FID
NASA Astrophysics Data System (ADS)
Liu, Jie; Zio, Enrico
2016-08-01
Support Vector Regression (SVR) is a popular supervised data-driven approach for building empirical models from available data. Like all data-driven methods, under non-stationary environmental and operational conditions it needs to be provided with adaptive learning capabilities, which may become computationally burdensome when large datasets accumulate dynamically. In this paper, a cost-efficient online adaptive learning approach is proposed for SVR by combining Feature Vector Selection (FVS) and incremental and decremental learning. The proposed approach adaptively modifies the model only when different pattern drifts are detected according to the proposed criteria. Two tolerance parameters are introduced in the approach to control the computational complexity, reduce the influence of the intrinsic noise in the data, and avoid the overfitting problem of SVR. Comparisons of the prediction results are made with other online learning approaches, e.g. NORMA, SOGA, KRLS and incremental learning, on several artificial datasets and on a real case study concerning time-series prediction based on data recorded on a component of a nuclear power generation system. The performance indicators MSE and MARE computed on the test dataset demonstrate the efficiency of the proposed online learning method.
NASA Astrophysics Data System (ADS)
Chelariu, Romeu; Suditu, Gabriel Dan; Mareci, Daniel; Bolat, Georgiana; Cimpoesu, Nicanor; Leon, Florin; Curteanu, Silvia
2015-04-01
The aim of this study is to investigate the electrochemical behavior of some dental metallic materials in artificial saliva at different pH values (5.6 and 3.4), NaF contents (500 ppm, 1000 ppm, and 2000 ppm), and with albumin protein addition (0.6 wt.%) at pH 3.4. The corrosion resistance of the alloys was quantitatively evaluated by the polarization resistance, estimated by the electrochemical impedance spectroscopy method. An adaptive k-nearest-neighbor regression method was applied to evaluate the corrosion resistance of the alloys by simulation, depending on the operating conditions. The predictions provided by the model are useful for experimental practice, as they can replace or, at least, help to plan the experiments. The accurate results obtained prove that the developed model is reliable and efficient.
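A distance-weighted k-nearest-neighbour regression of polarization resistance on the operating conditions could look like the sketch below. The numerical values are hypothetical, and sklearn's standard KNeighborsRegressor is used as a stand-in for the adaptive variant named in the abstract.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical operating conditions: [pH, NaF (ppm), albumin (wt.%)] and resistances.
X = np.array([[5.6, 500, 0.0], [5.6, 1000, 0.0], [5.6, 2000, 0.0],
              [3.4, 500, 0.0], [3.4, 2000, 0.0], [3.4, 1000, 0.6]])
y = np.array([120.0, 105.0, 90.0, 70.0, 45.0, 55.0])       # hypothetical polarization resistances

model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=3, weights="distance"))
model.fit(X, y)
print(model.predict([[3.4, 1500, 0.3]]))                    # predicted resistance for an untested condition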
Mathematical research on spline functions
NASA Technical Reports Server (NTRS)
Horner, J. M.
1973-01-01
One approach in spline functions is to grossly estimate the integrand in J and exactly solve the resulting problem. If the integrand in J is approximated by the squared second derivative, (y'')^2, the resulting problem lends itself to exact solution: the familiar cubic spline. Another approach is to investigate various approximations to the integrand in J and attempt to solve the resulting problems. The results are described.
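The functional alluded to above is, in the standard notation assumed here (the abstract does not display it), the curvature-like penalty whose minimizer over smooth interpolants of the data is the natural cubic spline:

J[y] \;=\; \int_a^b \bigl(y''(x)\bigr)^{2}\,dx .

Replacing the exact curvature term of the bending-energy integrand by (y'')^2 is precisely what makes the minimization solvable in closed form.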
Technology Transfer Automated Retrieval System (TEKTRAN)
Prediction equations of energy expenditure (EE) using accelerometers and miniaturized heart rate (HR) monitors have been developed in older children and adults but not in preschool-aged children. Because the relationships between accelerometer counts (ACs), HR, and EE are confounded by growth and ma...
Theory, computation, and application of exponential splines
NASA Technical Reports Server (NTRS)
Mccartin, B. J.
1981-01-01
A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape-preserving approximant is developed. A sequence of selected curve-fitting examples is presented which clearly demonstrates the advantages of exponential splines over cubic splines.
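For reference, the per-interval form of the exponential (tension) spline assumed here is the standard one, in which each piece solves s'''' - p_i^2 s'' = 0; the limits p_i → 0 and p_i → ∞ recover the cubic and linear interpolants mentioned in the abstract:

s(x) \;=\; a_i + b_i\,(x - x_i) + c_i \sinh\!\bigl(p_i (x - x_i)\bigr) + d_i \cosh\!\bigl(p_i (x - x_i)\bigr),
\qquad x \in [x_i, x_{i+1}].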
Polynomial order selection in random regression models via penalizing adaptively the likelihood.
Corrales, J D; Munilla, S; Cantet, R J C
2015-08-01
Orthogonal Legendre polynomials (LP) are used to model the shape of additive genetic and permanent environmental effects in random regression models (RRM). Frequently, the Akaike (AIC) and the Bayesian (BIC) information criteria are employed to select LP order. However, it has been theoretically shown that neither AIC nor BIC is simultaneously optimal in terms of consistency and efficiency. Thus, the goal was to introduce a method, 'penalizing adaptively the likelihood' (PAL), as a criterion to select LP order in RRM. Four simulated data sets and real data (60,513 records, 6675 Colombian Holstein cows) were employed. Nested models were fitted to the data, and AIC, BIC and PAL were calculated for all of them. Results showed that PAL and BIC identified with probability of one the true LP order for the additive genetic and permanent environmental effects, but AIC tended to favour over parameterized models. Conversely, when the true model was unknown, PAL selected the best model with higher probability than AIC. In the latter case, BIC never favoured the best model. To summarize, PAL selected a correct model order regardless of whether the 'true' model was within the set of candidates.
NASA Astrophysics Data System (ADS)
Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong
2015-05-01
A new LIBS quantitative analysis method based on adaptive selection of analytical lines and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength, and width at half height. The analytical lines to be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks, and the standard support vector machine.
Examination of the Circle Spline Routine
NASA Technical Reports Server (NTRS)
Dolin, R. M.; Jaeger, D. L.
1985-01-01
The Circle Spline routine is currently being used for generating both two- and three-dimensional spline curves. It was developed for use in ESCHER, a mesh-generating routine written to provide a computationally simple and efficient method for building meshes along curved surfaces. Circle Spline is a parametric linear blending spline. Because many computerized machining operations involve circular shapes, the Circle Spline is well suited for both the design and manufacturing processes and shows promise as an alternative to the spline methods currently supported by the Initial Graphics Exchange Specification (IGES).
Spline interpolation on unbounded domains
NASA Astrophysics Data System (ADS)
Skeel, Robert D.
2016-06-01
Spline interpolation is a splendid tool for multiscale approximation on unbounded domains. In particular, it is well suited for use by the multilevel summation method (MSM) for calculating a sum of pairwise interactions for a large set of particles in linear time. Outlined here is an algorithm for spline interpolation on unbounded domains that is efficient and elegant though not so simple. Further gains in efficiency are possible via quasi-interpolation, which compromises collocation but with minimal loss of accuracy. The MSM, which may also be of value for continuum models, embodies most of the best features of both hierarchical clustering methods (tree methods, fast multipole methods, hierarchical matrix methods) and FFT-based 2-level methods (particle-particle particle-mesh methods, particle-mesh Ewald methods).
A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression
Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin
2012-01-01
To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on a nonlinear least-squares support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time, and the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves a near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, compared to other recently proposed robust beamforming techniques.
Spline screw multiple rotations mechanism
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
1993-01-01
A system for coupling two bodies together and for transmitting torque from one body to another with mechanical timing and sequencing is reported. The mechanical timing and sequencing is handled so that the following criteria are met: (1) the bodies are handled in a safe manner and nothing floats loose in space, (2) electrical connectors are engaged as long as possible so that the internal processes can be monitored throughout by sensors, and (3) electrical and mechanical power and signals are coupled. The first body has a splined driver for providing the input torque. The second body has a threaded drive member capable of rotation and limited translation. The embedded drive member will mate with and fasten to the splined driver. The second body has an embedded bevel gear member capable of rotation and limited translation. This bevel gear member is coaxial with the threaded drive member. A compression spring provides a preload on the rotating threaded member, and a thrust bearing is used for limiting the translation of the bevel gear member so that when the bevel gear member reaches the upward limit of its translation the two bodies are fully coupled and the bevel gear member then rotates due to the input torque transmitted from the splined driver through the threaded drive member to the bevel gear member. An output bevel gear with an attached output drive shaft is embedded in the second body and meshes with the threaded rotating bevel gear member to transmit the input torque to the output drive shaft.
Aickin, Mikel
2009-05-29
Dynamic allocation of participants to treatments in a clinical trial has been an alternative to randomization for nearly 35 years. Design-adaptive allocation is a particularly flexible kind of dynamic allocation. Every investigation of dynamic allocation methods has shown that they improve balance of prognostic factors across treatment groups, but there have been lingering doubts about their influence on the validity of statistical inferences. Here we report the results of a simulation study focused on this and similar issues. Overall, it is found that there are no statistical reasons, in the situations studied, to prefer randomization to design-adaptive allocation. Specifically, there is no evidence of bias, the number of participants wasted by randomization in small studies is not trivial, and when the aim is to place bounds on the prediction of population benefits, randomization is quite substantially less efficient than design-adaptive allocation. A new, adjusted permutation estimate of the standard deviation of the regression estimator under design-adaptive allocation is shown to be an unbiased estimate of the true sampling standard deviation, resolving a long-standing problem with dynamic allocations. These results are shown in situations with varying numbers of balancing factors, different treatment and covariate effects, different covariate distributions, and in the presence of a small number of outliers.
Spline methods for conservation equations
Bottcher, C.; Strayer, M.R.
1991-01-01
We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the Basis Spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs.
Borowsky, Richard
2013-01-01
The forces driving the evolutionary loss or simplification of traits such as vision and pigmentation in cave animals are still debated. Three alternative hypotheses are direct selection against the trait, genetic drift, and indirect selection due to antagonistic pleiotropy. Recent work establishes that Astyanax cavefish exhibit vibration attraction behavior (VAB), a presumed behavioral adaptation to finding food in the dark not exhibited by surface fish. Genetic analysis revealed two regions in the genome with quantitative trait loci (QTL) for both VAB and eye size. These observations were interpreted as genetic evidence that selection for VAB indirectly drove eye regression through antagonistic pleiotropy and, further, that this is a general mechanism to account for regressive evolution. These conclusions are unsupported by the data; the analysis fails to establish pleiotropy and ignores the numerous other QTL that map to, and potentially interact, in the same regions. It is likely that all three forces drive evolutionary change. We will be able to distinguish among them in individual cases only when we have identified the causative alleles and characterized their effects. PMID:23844714
High-frequency health data and spline functions.
Martín-Rodríguez, Gloria; Murillo-Fort, Carlos
2005-03-30
Seasonal variations are highly relevant for health service organization. In general, short-run movements of medical magnitudes are important features for managers in this field to make adequate decisions. Thus, the analysis of the seasonal pattern in high-frequency health data is an appealing task. The aim of this paper is to propose procedures that allow the analysis of the seasonal component in this kind of data by means of spline functions embedded in a structural model. In the proposed method, useful adaptations of the traditional spline formulation are developed, and the resulting procedures are capable of capturing periodic variations, whether deterministic or stochastic, in a parsimonious way. Finally, these methodological tools are applied to a series of daily emergency service demand in order to capture simultaneous seasonal variations with different periods.
Sartori, Massimo; Reggiani, Monica; van den Bogert, Antonie J.; Lloyd, David G.
2011-01-01
We present a robust and computationally inexpensive method to estimate the lengths and three-dimensional moment arms for a large number of musculotendon actuators of the human lower limb. Using a musculoskeletal model of the lower extremity, a set of values was established for the length of each musculotendon actuator for different lower limb generalized coordinates (joint angles). A multidimensional spline function was then used to fit these data. Muscle moment arms were obtained by differentiating the musculotendon length spline function with respect to the generalized coordinate of interest. This new method was then compared to a previously used polynomial regression method. Compared to the polynomial regression method, the multidimensional spline method produced lower errors for estimating musculotendon lengths and moment arms throughout the whole generalized coordinate workspace. The fitting accuracy was also less affected by the number of dependent degrees of freedom and by the amount of experimental data available. The spline method only requires information on musculotendon lengths to estimate both musculotendon lengths and moment arms, thus relaxing data input requirements, whereas the polynomial regression requires different equations to be used for both musculotendon lengths and moment arms. Finally, we used the spline method in conjunction with an electromyography-driven musculoskeletal model to estimate muscle forces under different contractile conditions, which showed that the method is suitable for integration into large-scale neuromusculoskeletal models. PMID:22176708
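A two-degree-of-freedom sketch of the idea is given below: fit musculotendon length over a grid of joint angles and differentiate the fitted spline to obtain moment arms (tendon-excursion method, r = -dL/dq). The grid, the synthetic length surface, and the use of scipy's RectBivariateSpline are assumptions, not the authors' multidimensional spline implementation.

import numpy as np
from scipy.interpolate import RectBivariateSpline

hip = np.linspace(-0.5, 1.5, 21)                          # generalized coordinate 1 (rad)
knee = np.linspace(0.0, 2.0, 21)                          # generalized coordinate 2 (rad)
H, K = np.meshgrid(hip, knee, indexing="ij")
length = 0.40 - 0.05 * np.sin(H) + 0.03 * np.cos(K)       # stand-in for model-computed lengths (m)

spl = RectBivariateSpline(hip, knee, length)              # fit the length surface
q = (0.3, 1.1)
print("musculotendon length:", spl.ev(*q))
print("moment arm about coordinate 1:", -spl.ev(*q, dx=1))   # -dL/dq1
print("moment arm about coordinate 2:", -spl.ev(*q, dy=1))   # -dL/dq2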
Flexible coiled spline securely joins mating cylinders
NASA Technical Reports Server (NTRS)
Coppernol, R. W.
1966-01-01
Mating cylindrical members are joined by spline to form an integral structure. The spline is made of tightly coiled, high tensile-strength steel spiral wire that fits a groove between the mating members. It provides a continuous bearing surface for axial thrust between the members.
Radial spline assembly for antifriction bearings
NASA Technical Reports Server (NTRS)
Moore, Jerry H. (Inventor)
1993-01-01
An outer race carrier is constructed for receiving an outer race of an antifriction bearing assembly. The carrier in turn is slidably fitted in an opening of a support wall to accommodate slight axial movements of a shaft. A plurality of longitudinal splines on the carrier are disposed to be fitted into matching slots in the opening. A deadband gap is provided between sides of the splines and slots, with a radial gap at ends of the splines and slots and a gap between the splines and slots sized larger than the deadband gap. With this construction, operational distortions (slope) of the support wall are accommodated by the larger radial gaps while the deadband gaps maintain a relatively high spring rate of the housing. Additionally, side loads applied to the shaft are distributed between sides of the splines and slots, distributing such loads over a larger surface area than in a race carrier of the prior art.
Spline-based procedures for dose-finding studies with active control
Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim
2015-01-01
In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
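The target-dose idea can be illustrated with a small sketch: interpolate the estimated mean responses over dose with a cubic spline and locate the smallest dose whose predicted response matches the active-control mean. All numbers below are hypothetical, and the interpolating CubicSpline stands in for the paper's spline-based estimator and bootstrap machinery.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

doses = np.array([0.0, 10.0, 25.0, 50.0, 100.0])          # doses of the new drug
means = np.array([0.10, 0.35, 0.55, 0.72, 0.80])          # estimated mean responses per dose
control_mean = 0.60                                       # estimated efficacy of the active control

f = CubicSpline(doses, means)
grid = np.linspace(doses[0], doses[-1], 1001)
first_cross = np.where(np.diff(np.sign(f(grid) - control_mean)))[0][0]   # locate first sign change
d_star = brentq(lambda d: f(d) - control_mean, grid[first_cross], grid[first_cross + 1])
print("estimated target dose d* =", round(d_star, 2))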
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using the L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the L1 norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
Semiparametric regression during 2003–2007*
Ruppert, David; Wand, M.P.; Carroll, Raymond J.
2010-01-01
Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800
Material approximation of data smoothing and spline curves inspired by slime mould.
Jones, Jeff; Adamatzky, Andrew
2014-09-01
The giant single-celled slime mould Physarum polycephalum is known to approximate a number of network problems via growth and adaptation of its protoplasmic transport network and can serve as an inspiration towards unconventional, material-based computation. In Physarum, predictable morphological adaptation is prevented by its adhesion to the underlying substrate. We investigate what possible computations could be achieved if these limitations were removed and the organism was free to completely adapt its morphology in response to changing stimuli. Using a particle model of Physarum displaying emergent morphological adaptation behaviour, we demonstrate how a minimal approach to collective material computation may be used to transform and summarise properties of spatially represented datasets. We find that the virtual material relaxes more strongly to high-frequency changes in data, which can be used for the smoothing (or filtering) of data by approximating moving average and low-pass filters in 1D datasets. The relaxation and minimisation properties of the model enable the spatial computation of B-spline curves (approximating splines) in 2D datasets. Both clamped and unclamped spline curves of open and closed shapes can be represented, and the degree of spline curvature corresponds to the relaxation time of the material. The material computation of spline curves also includes novel quasi-mechanical properties, including unwinding of the shape between control points and a preferential adhesion to longer, straighter paths. Interpolating splines could not directly be approximated due to the formation and evolution of Steiner points at narrow vertices, but were approximated after rectilinear pre-processing of the source data. This pre-processing was further simplified by transforming the original data to contain the material inside the polyline. These exemplary results expand the repertoire of spatially represented unconventional computing devices by demonstrating a
Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year
NASA Astrophysics Data System (ADS)
Kamaruddin, Halim Shukri; Ismail, Noriszura
2014-06-01
Nonparametric regression uses the data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are used for one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, where the accuracy of models and forecasts is the main concern of the industry. The original idea of P-splines is extended here to two-dimensional mortality data. The data, indexed by age of death and year of death, are supplied by the Department of Statistics Malaysia. The extension of this idea constructs the best-fitted surface and provides sensible predictions of the underlying mortality rates for the Malaysian case.
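A one-dimensional P-spline in the spirit of Eilers and Marx, a rich cubic B-spline basis combined with a difference penalty on the coefficients, can be sketched as below; the extension to the age-by-year setting uses tensor products of two such bases. The basis size, penalty order, and smoothing parameter are illustrative choices, not values from the paper.

import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_basis=20, degree=3, pen_order=2, lam=10.0):
    # Open uniform knot vector, padded slightly so all data lie inside the base interval.
    span = x.max() - x.min()
    inner = np.linspace(x.min() - 0.001 * span, x.max() + 0.001 * span, n_basis - degree + 1)
    knots = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]
    B = BSpline(knots, np.eye(n_basis), degree)(x)        # design matrix, shape (len(x), n_basis)
    D = np.diff(np.eye(n_basis), n=pen_order, axis=0)     # difference penalty matrix
    coef = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return coef, B @ coef                                 # coefficients and fitted values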
General spline filters for discontinuous Galerkin solutions
Peters, Jörg
2015-01-01
The discontinuous Galerkin (dG) method outputs a sequence of polynomial pieces. Post-processing the sequence by Smoothness-Increasing Accuracy-Conserving (SIAC) convolution not only increases the smoothness of the sequence but can also improve its accuracy and yield superconvergence. SIAC convolution is considered optimal if the SIAC kernels, in the form of a linear combination of B-splines of degree d, reproduce polynomials of degree 2d. This paper derives simple formulas for computing the optimal SIAC spline coefficients for the general case including non-uniform knots. PMID:26594090
Spline-Screw Multiple-Rotation Mechanism
NASA Technical Reports Server (NTRS)
Vranish, John M.
1994-01-01
Mechanism functions like combined robotic gripper and nut runner. Spline-screw multiple-rotation mechanism related to spline-screw payload-fastening system described in (GSC-13454). Incorporated as subsystem in alternative version of system. Mechanism functions like combination of robotic gripper and nut runner; provides both secure grip and rotary actuation of other parts of system. Used in system in which no need to make or break electrical connections to payload during robotic installation or removal of payload. More complicated version needed to make and break electrical connections. Mechanism mounted in payload.
Schwarz and multilevel methods for quadratic spline collocation
Christara, C.C.; Smith, B.
1994-12-31
Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.
Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias
2015-01-01
The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness can not be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations are given. In this article, we propose an alternative method that allows to incorporate prior knowledge without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than choosing them according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading.
Single authentication: exposing weighted splining artifacts
NASA Astrophysics Data System (ADS)
Ciptasari, Rimba W.
2016-05-01
A common form of manipulation is to splice parts of one image into another, either to remove or to blend objects. Inspired by this situation, we propose a single authentication technique for detecting traces of the weighted-average splining technique. In this paper, we assume that an image composite can be created by joining two images so that the edge between them is imperceptible. The weighted-average technique is constructed from overlapping images so that the gray-level value of points within a transition zone can be computed. This approach works on the assumption that, although the splining process leaves the transition zone looking smooth, it may nevertheless alter the underlying statistics of the image; in other words, it introduces specific correlations into the image. The proposed idea for identifying these correlations is to generate an original model of both weighting functions, the left and right functions, as references for their synthetic models. The overall authentication process is divided into two main stages: pixel predictive coding and weighting function estimation. In the former stage, the set of intensity pairs {Il, Ir} is computed by exploiting a pixel extrapolation technique. The least-squares estimation method is then employed to yield the weighting coefficients. We show the efficacy of the proposed scheme in revealing splining artifacts. We believe that this is the first work that exposes the image splining artifact as evidence of digital tampering.
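The weighted-average splining operation the detector targets amounts to a cross-fade of the two source images over the transition zone, as in this minimal sketch; the linear weighting, the function name, and the variable names are assumptions made for illustration.

import numpy as np

def weighted_average_splice(left_img, right_img, overlap):
    # Join two equal-height images; inside the overlap the gray level is a convex
    # combination of the two sources with complementary (here linear) weights.
    h, wl = left_img.shape
    alpha = np.linspace(1.0, 0.0, overlap)                             # weight of the left image
    zone = alpha * left_img[:, wl - overlap:] + (1.0 - alpha) * right_img[:, :overlap]
    return np.hstack([left_img[:, :wl - overlap], zone, right_img[:, overlap:]])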
Adaptivity Assessment of Regional Semi-Parametric VTEC Modeling to Different Data Distributions
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Onur Karslıoǧlu, Mahmut
2014-05-01
Semi-parametric modelling of Vertical Total Electron Content (VTEC) combines parametric and non-parametric models into a single regression model for estimating the parameters and functions from Global Positioning System (GPS) observations. The parametric part is related to the Differential Code Biases (DCBs), which are fixed unknown parameters of the geometry-free linear combination (the so-called ionospheric observable). The non-parametric component, on the other hand, refers to the spatio-temporal distribution of VTEC, which is estimated by applying the method of Multivariate Adaptive Regression B-Splines (BMARS). The BMARS algorithm builds an adaptive model using tensor products of univariate B-splines derived from the data. The algorithm searches for the best-fitting B-spline basis functions in a scale-by-scale strategy, starting by adding large-scale B-splines to the model and adaptively decreasing the scale to include smaller-scale features through a modified Gram-Schmidt ortho-normalization process. The algorithm is then extended to include the receiver DCBs, so that estimates of the receiver DCBs and the spatio-temporal VTEC distribution can be obtained together in an adaptive semi-parametric model. In this work, the adaptivity of regional semi-parametric modelling of VTEC based on BMARS is assessed for different ground-station and data distribution scenarios. To evaluate the level of adaptivity, the resulting DCBs and VTEC maps from different scenarios are compared not only with each other but also with CODE-distributed GIMs and DCB estimates.
Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia
2015-01-01
We propose a generalized partially linear functional single-index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single-index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large-sample properties of the estimation procedure and show the different convergence rates of the components of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion had not been studied prior to this work, even in the independent data case. PMID:26283801
Oubida, Regis W; Gantulga, Dashzeveg; Zhang, Man; Zhou, Lecong; Bawa, Rajesh; Holliday, Jason A
2015-01-01
Local adaptation to climate in temperate forest trees involves the integration of multiple physiological, morphological, and phenological traits. Latitudinal clines are frequently observed for these traits, but environmental constraints also track longitude and altitude. We combined extensive phenotyping of 12 candidate adaptive traits, multivariate regression trees, quantitative genetics, and a genome-wide panel of SNP markers to better understand the interplay among geography, climate, and adaptation to abiotic factors in Populus trichocarpa. Heritabilities were low to moderate (0.13-0.32) and population differentiation for many traits exceeded the 99th percentile of the genome-wide distribution of FST, suggesting local adaptation. When climate variables were taken as predictors and the 12 traits as response variables in a multivariate regression tree analysis, evapotranspiration (Eref) explained the most variation, with subsequent splits related to mean temperature of the warmest month, frost-free period (FFP), and mean annual precipitation (MAP). These groupings matched the splits obtained using geographic variables as predictors relatively well: the northernmost groups (short FFP and low Eref) had the lowest growth and lowest cold injury index; the southern British Columbia group (low Eref and intermediate temperatures) had average growth and cold injury index; the group from the coast of California and Oregon (high Eref and FFP) had the highest growth performance and the highest cold injury index; and the southernmost, high-altitude group (with high Eref and low FFP) performed poorly, had a high cold injury index, and lower water-use efficiency. Taken together, these results suggest that variation in both temperature and water availability across the range shapes multivariate adaptive traits in poplar. PMID:25870603
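A multi-output regression tree with climate variables as predictors and the trait measurements as responses can be sketched as follows; sklearn's DecisionTreeRegressor is used here only as a stand-in for the multivariate regression tree analysis, and the synthetic data, column meanings, and tree depth are assumptions.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# climate: columns such as [Eref, MWMT, FFP, MAP]; traits: 12 phenotypic measurements per population.
rng = np.random.default_rng(0)
climate = rng.normal(size=(120, 4))                       # hypothetical climate predictors
traits = rng.normal(size=(120, 12))                       # hypothetical multivariate trait responses

tree = DecisionTreeRegressor(max_depth=3).fit(climate, traits)   # multi-output splits on climate
print(tree.feature_importances_)                          # which climate variable explains most variation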
Pierson, Jeffery L; Small, Scott R; Rodriguez, Jose A; Kang, Michael N; Glassman, Andrew H
2015-07-01
Design parameters affecting the initial mechanical stability of tapered, splined modular titanium stems (TSMTSs) are not well understood. Furthermore, there is considerable variability in contemporary designs. We asked whether spline geometry and stem taper angle could be optimized in TSMTS to improve mechanical stability to resist axial subsidence and increase torsional stability. Initial stability was quantified with stems of varied taper angle and spline geometry implanted in a foam model replicating 2 cm of diaphyseal engagement. Increased taper angle and a broad spline geometry exhibited significantly greater axial stability (+21% to +269%) than other design combinations. Neither taper angle nor spline geometry significantly altered initial torsional stability. PMID:25754255
Geha, Makram J.; Keown, Jeffrey F.; Van Vleck, L. Dale
2011-01-01
Milk yield records (305d, 2X, actual milk yield) of 123,639 registered first lactation Holstein cows were used to compare linear regression (y = β0 + β1X + e), quadratic regression (y = β0 + β1X + β2X² + e), cubic regression (y = β0 + β1X + β2X² + β3X³ + e), and fixed factor models with cubic-spline interpolation models for estimating the effects of inbreeding on milk yield. Ten animal models, all with herd-year-season of calving as fixed effect, were compared using the Akaike corrected Information Criterion (AICc). The cubic-spline interpolation model with seven knots had the lowest AICc, whereas AICc for all models labeled as “traditional” was higher than for the best model. Results from fitting inbreeding using a cubic spline with seven knots were compared to results from fitting inbreeding as a linear covariate or as a fixed factor with seven levels. Estimates of inbreeding effects were not significantly different between the cubic-spline model and the fixed factor model, but were significantly different from the linear regression model. Milk yield decreased significantly at inbreeding levels greater than 9%. Variance component estimates were similar for the three models. Ranking of the top 100 sires with daughter records remained unaffected by the model used. PMID:21931517
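A minimal sketch of comparing polynomial regressions against a cubic spline with seven knots by corrected AIC, assuming synthetic inbreeding and milk-yield arrays; the real analysis used mixed animal models with herd-year-season effects, which are omitted here.

```python
# Sketch: AICc comparison of linear/quadratic/cubic regression vs a 7-knot cubic spline.
# 'inbreeding' and 'milk_yield' are synthetic placeholders.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def aicc(rss, n, k):
    # k = number of estimated parameters (coefficients plus residual variance)
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

rng = np.random.default_rng(0)
n = 5000
inbreeding = np.sort(rng.uniform(0, 0.25, n))
milk_yield = 9000 - 4000 * np.maximum(inbreeding - 0.09, 0) + rng.normal(0, 800, n)

for degree in (1, 2, 3):                     # linear, quadratic, cubic regression
    coef = np.polyfit(inbreeding, milk_yield, degree)
    rss = np.sum((milk_yield - np.polyval(coef, inbreeding)) ** 2)
    print(f"degree {degree}: AICc = {aicc(rss, n, degree + 2):.1f}")

knots = np.quantile(inbreeding, np.linspace(0.1, 0.9, 7))     # 7 interior knots
spl = LSQUnivariateSpline(inbreeding, milk_yield, t=knots, k=3)
rss = np.sum((milk_yield - spl(inbreeding)) ** 2)
print(f"cubic spline, 7 knots: AICc = {aicc(rss, n, len(knots) + 4 + 1):.1f}")
```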
Spline-Screw Payload-Fastening System
NASA Technical Reports Server (NTRS)
Vranish, John M.
1994-01-01
Payload handed off securely between robot and vehicle or structure. Spline-screw payload-fastening system includes mating female and male connector mechanisms. Clockwise (or counter-clockwise) rotation of splined male driver on robotic end effector causes connection between robot and payload to tighten (or loosen) and simultaneously causes connection between payload and structure to loosen (or tighten). Includes mechanisms like those described in "Tool-Changing Mechanism for Robot" (GSC-13435) and "Self-Aligning Mechanical and Electrical Coupling" (GSC-13430). Designed for use in outer space, also useful on Earth in applications needed for secure handling and secure mounting of equipment modules during storage, transport, and/or operation. Particularly useful in machine or robotic applications.
Curvilinear bicubic spline fit interpolation scheme
NASA Technical Reports Server (NTRS)
Chi, C.
1973-01-01
Modification of the rectangular bicubic spline fit interpolation scheme so as to make it suitable for use with a polar grid pattern. In the proposed modified scheme the interpolation function is expressed in terms of the radial length and the arc length, and the shape of the patch, which is a wedge or a truncated wedge, is taken into account implicitly. Examples are presented in which the proposed interpolation scheme was used to reproduce the equations of a hemisphere.
The basis spline method and associated techniques
Bottcher, C.; Strayer, M.R.
1989-01-01
We outline the Basis Spline and Collocation methods for the solution of Partial Differential Equations. Particular attention is paid to the theory of errors, and the handling of non-self-adjoint problems which are generated by the collocation method. We discuss applications to Poisson's equation, the Dirac equation, and the calculation of bound and continuum states of atomic and nuclear systems. 12 refs., 6 figs.
Parameter Choices for Approximation by Harmonic Splines
NASA Astrophysics Data System (ADS)
Gutting, Martin
2016-04-01
The approximation by harmonic trial functions allows the construction of the solution of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties regional modeling or the improvement of a global model in a part of the Earth's surface is possible with splines. Fast multipole methods have been developed for some cases of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm consists of a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in reference to the number of points for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation which also allows the treatment of noisy data requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter with and without prior knowledge of the noise level. Thereby, the performance of these methods is considered for different types of noise in a large simulation study. Applications to gravitational field modeling are presented as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
On the spline-based wavelet differentiation matrix
NASA Technical Reports Server (NTRS)
Jameson, Leland
1993-01-01
The differentiation matrix for a spline-based wavelet basis is constructed. Given an n-th order spline basis it is proved that the differentiation matrix is accurate of order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.
Yoshizawa, Masato; O'Quin, Kelly E; Jeffery, William R
2013-01-01
Vibration attraction behavior (VAB) is the swimming of fish toward an oscillating object, a behavior that is likely adaptive because it increases foraging efficiency in darkness. VAB is seen in a small proportion of Astyanax surface-dwelling populations (surface fish) but is pronounced in cave-dwelling populations (cavefish). In a recent study, we identified two quantitative trait loci for VAB on Astyanax linkage groups 2 and 17. We also demonstrated that a small population of superficial neuromast sensors located within the eye orbit (EO SN) facilitate VAB, and two quantitative trait loci (QTL) were identified for EO SN that were congruent with those for VAB. Finally, we showed that both VAB and EO SN are negatively correlated with eye size, and that two (of several) QTL for eye size overlap VAB and EO SN QTLs. From these results, we concluded that the adaptive evolution of VAB and EO SN has contributed to the indirect loss of eyes in cavefish, either as a result of pleiotropy or tight physical linkage of the mutations underlying these traits. In a subsequent commentary, Borowsky argues that there is poor experimental support for our conclusions. Specifically, Borowsky states that: (1) linkage groups (LGs) 2 and 17 harbor QTL for many traits and, therefore, no evidence exists for an exclusive interaction among the overlapping VAB, EO SN and eye size QTL; (2) some of the QTL we identified are too broad (>20 cM) to support the hypothesis of correlated evolution due to pleiotropy or hitchhiking; and (3) VAB is unnecessary to explain the indirect evolution of eye-loss since the negative polarity of numerous eye QTL is consistent with direct selection against eyes. Borowsky further argues that (4) it is difficult to envision an evolutionary scenario whereby VAB and EO SN drive eye loss, since the eyes must first be reduced in order to increase the number of EO SN and, therefore, VAB. In this response, we explain why the evidence of one trait influencing eye reduction
Fatigue crack growth monitoring of idealized gearbox spline component using acoustic emission
NASA Astrophysics Data System (ADS)
Zhang, Lu; Ozevin, Didem; Hardman, William; Kessler, Seth; Timmons, Alan
2016-04-01
The spline component of gearbox structure is a non-redundant element that requires early detection of flaws for preventing catastrophic failures. The acoustic emission (AE) method is a direct way of detecting active flaws; however, the method suffers from the influence of background noise and from location- and sensor-dependent pattern recognition. It is important to identify the source mechanism and adapt it to different test conditions and sensors. In this paper, the fatigue crack growth of a notched and flattened gearbox spline component is monitored using the AE method in a laboratory environment. The test sample has the major details of the spline component on a flattened geometry. The AE data is continuously collected together with strain gauges strategically positioned on the structure. The fatigue test is run at 4 Hz with a ratio of minimum to maximum loading of 0.1 in the tensile regime. It is observed that a significant amount of continuous emissions is released from the notch tip due to the formation of plastic deformation and slow crack growth. The frequency spectra of continuous emissions and burst emissions are compared to understand the difference between sudden and gradual crack growth. The predicted crack growth rate is compared with the AE data using the cumulative AE events at the notch tip. The source mechanism of sudden crack growth is obtained by solving the inverse mathematical problem from output signal to input signal.
Quantitative coronary angiography with deformable spline models.
Klein, A K; Lee, F; Amini, A A
1997-10-01
Although current edge-following schemes can be very efficient in determining coronary boundaries, they may fail when the feature to be followed is disconnected (and the scheme is unable to bridge the discontinuity) or branch points exist where the best path to follow is indeterminate. In this paper, we present new deformable spline algorithms for determining vessel boundaries, and enhancing their centerline features. A bank of even and odd S-Gabor filter pairs of different orientations are convolved with vascular images in order to create an external snake energy field. Each filter pair will give maximum response to the segment of vessel having the same orientation as the filters. The resulting responses across filters of different orientations are combined to create an external energy field for snake optimization. Vessels are represented by B-Spline snakes, and are optimized on filter outputs with dynamic programming. The points of minimal constriction and the percent-diameter stenosis are determined from a computed vessel centerline. The system has been statistically validated using fixed stenosis and flexible-tube phantoms. It has also been validated on 20 coronary lesions with two independent operators, and has been tested for interoperator and intraoperator variability and reproducibility. The system has been found to be specially robust in complex images involving vessel branchings and incomplete contrast filling.
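A minimal sketch of building the kind of orientation-selective external energy field described above, using a generic even/odd Gabor filter bank in place of the paper's S-Gabor pairs; the test image and filter parameters are stand-ins, not the angiographic data or settings of the paper.

```python
# Sketch: external "snake" energy from even/odd Gabor pairs at several orientations,
# keeping the strongest response per pixel. Generic Gabor bank, not the exact S-Gabor filters.
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel
from skimage import data, img_as_float

image = img_as_float(data.camera())              # stand-in for an angiogram
responses = []
for theta in np.linspace(0, np.pi, 8, endpoint=False):
    kern = gabor_kernel(frequency=0.15, theta=theta)
    even = convolve(image, np.real(kern))        # even (symmetric) filter
    odd = convolve(image, np.imag(kern))         # odd (antisymmetric) filter
    responses.append(np.hypot(even, odd))        # local energy for this orientation

energy = -np.max(responses, axis=0)              # snakes minimize energy, so negate
```

Each orientation responds most strongly to vessel segments aligned with it, so the per-pixel maximum approximates the combined external energy used for snake optimization.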
Color management with a hammer: the B-spline fitter
NASA Astrophysics Data System (ADS)
Bell, Ian E.; Liu, Bonny H. P.
2003-01-01
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
Multiresolution Analysis of UTAT B-spline Curves
NASA Astrophysics Data System (ADS)
Lamnii, A.; Mraoui, H.; Sbibih, D.; Zidna, A.
2011-09-01
In this paper, we describe a multiresolution curve representation based on periodic uniform tension algebraic trigonometric (UTAT) spline wavelets of class ??? and order four. Then we determine the decomposition and the reconstruction vectors corresponding to UTAT-spline spaces. Finally, we give some applications in order to illustrate the efficiency of the proposed approach.
Convexity preserving C2 rational quadratic trigonometric spline
NASA Astrophysics Data System (ADS)
Dube, Mridula; Tiwari, Preeti
2012-09-01
A C2 rational quadratic trigonometric spline interpolation has been studied using two kinds of rational quadratic trigonometric splines. It is shown that under some natural conditions the solution of the problem exists and is unique. The necessary and sufficient conditions that constrain the interpolation curves to be convex in the interpolating interval or subintervals are derived.
Spline-locking screw fastening strategy
NASA Technical Reports Server (NTRS)
Vranish, John M.
1992-01-01
A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotics or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced space manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.
Spline-Locking Screw Fastening Strategy (SLSFS)
NASA Technical Reports Server (NTRS)
Vranish, John M.
1991-01-01
A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotic or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
The spline probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Sithiravel, Rajiv; Tharmarasa, Ratnasingham; McDonald, Mike; Pelletier, Michel; Kirubarajan, Thiagalingam
2012-06-01
The Probability Hypothesis Density Filter (PHD) is a multitarget tracker for recursively estimating the number of targets and their state vectors from a set of observations. The PHD filter is capable of working well in scenarios with false alarms and missed detections. Two distinct PHD filter implementations are available in the literature: the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filters. The SMC-PHD filter uses particles to provide target state estimates, which can lead to a high computational load, whereas the GM-PHD filter does not use particles, but restricts to linear Gaussian mixture models. The SMC-PHD filter technique provides only weighted samples at discrete points in the state space instead of a continuous estimate of the probability density function of the system state and thus suffers from the well-known degeneracy problem. This paper proposes a B-Spline based Probability Hypothesis Density (S-PHD) filter, which has the capability to model any arbitrary probability density function. The resulting algorithm can handle linear, non-linear, Gaussian, and non-Gaussian models and the S-PHD filter can also provide continuous estimates of the probability density function of the system state. In addition, by moving the knots dynamically, the S-PHD filter ensures that the splines cover only the region where the probability of the system state is significant, hence the high efficiency of the S-PHD filter is maintained at all times. Also, unlike the SMC-PHD filter, the S-PHD filter is immune to the degeneracy problem due to its continuous nature. The S-PHD filter derivations and simulations are provided in this paper.
A Simple and Fast Spline Filtering Algorithm for Surface Metrology
Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei
2015-01-01
Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement. PMID:26958443
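A minimal sketch of a DCT-based spline filter in the spirit of the algorithm above: the smoothing-spline system (I + α⁴·DᵀD)w = z, with D the second-difference operator, is diagonalized by the DCT under reflective boundaries, so the solve reduces to a per-frequency scaling. The mapping from α to a standardized cutoff wavelength is an assumption left out here.

```python
# Sketch: smoothing-spline low-pass filtering of a surface profile via the DCT.
# alpha is chosen by hand (assumption); the ISO cutoff-wavelength mapping is omitted.
import numpy as np
from scipy.fft import dct, idct

def spline_filter(profile, alpha):
    n = len(profile)
    z = dct(profile, type=2, norm="ortho")
    k = np.arange(n)
    eig = 2.0 - 2.0 * np.cos(np.pi * k / n)        # eigenvalues of the 1-D Laplacian
    w = z / (1.0 + alpha**4 * eig**2)              # attenuate high frequencies
    return idct(w, type=2, norm="ortho")

x = np.linspace(0, 1, 1000)
profile = np.sin(2 * np.pi * 3 * x) + 0.05 * np.random.default_rng(1).normal(size=1000)
smooth = spline_filter(profile, alpha=20.0)
```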
An Examination of New Paradigms for Spline Approximations.
Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A
2006-01-01
Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation-based algorithms for the approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case.
Hill, G.R.
1987-11-10
A power transmission member is described comprising a radially-extending end wall and a cylindrical axially-extending sleeve connected to the end wall and terminating remote from the end wall in an open end. The sleeve has pressure-formed internal and external axially-extending splines formed therein by the intermeshing of teeth of a mandrel on which the sleeve is mounted and teeth of a pair of racks slidable therepast. The splines terminate short of the open sleeve end in an unsplined cylindrical ring-shaped lip portion, which reduces bellmouth of the splines to within about 0.010 inch along their length.
Applications of B-splines in atomic and molecular physics
NASA Astrophysics Data System (ADS)
Bachau, H.; Cormier, E.; Decleva, P.; Hansen, J. E.; Martín, F.
2001-12-01
One of the most significant developments in computational atomic and molecular physics in recent years has been the introduction of B-spline basis sets in calculations of atomic and molecular structure and dynamics. B-splines were introduced in applied mathematics more than 50 years ago, but it has been in the 1990s, with the advent of powerful computers, that the number of applications has grown exponentially. In this review we present the main properties of B-splines and discuss why they are useful to solve different problems in atomic and molecular physics. We provide an extensive reference list of theoretical works that have made use of B-spline basis sets up to 2000. Among these, we have focused on those applications that have led to the discovery of new interesting phenomena and pointed out the reasons behind the success of the approach.
B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms
Bueno, G.; Ruiz, M.; Sanchez, S
2006-10-04
Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce women's mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study the usefulness of B-spline filtering based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection and as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.
B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms
NASA Astrophysics Data System (ADS)
Bueno, G.; Sánchez, S.; Ruiz, M.
2006-10-01
Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce women's mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study the usefulness of B-spline filtering based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection and as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.
ERIC Educational Resources Information Center
Matson, Johnny L.; Kozlowski, Alison M.
2010-01-01
Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…
User's guide for Wilson-Fowler spline software: SPLPKG, WFCMPR, WFAPPX - CADCAM-010
Fletcher, S.K.
1985-02-01
The Wilson-Fowler spline is widely used in computer aided manufacturing, but is not available in all commercial CAD/CAM systems. These three programs provide a capability for generating, comparing, and approximating Wilson-Fowler splines. SPLPKG generates a spline passing through given nodes, and computes a piecewise linear approximation to the spline. WFCMPR computes the difference between two splines with common nodes. WFAPPX computes the difference between a spline and a piecewise linear curve. The programs are in Fortran 77 and are machine independent.
Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.
Kainu, Annette; Timonen, Kirsi
2016-07-01
Background Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; thus, modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have recently been introduced. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using similar polynomial functions as in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, in the derived predicted mean, and in individually calculated z-scores using both values. Results Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup table-produced values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly for larger predicted volumes, but remained within a range of -0.018 to +0.022 litres for FEV1, representing a maximum difference of ±0.4% in the predicted mean. Conclusions Polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.
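A minimal sketch of the core idea above: replacing a lookup table of spline values with a fitted polynomial and checking the maximum approximation error. The table values here are synthetic placeholders, not the published Finnish reference values, and the log-age functional form is only loosely modelled on the GLI2012 equations.

```python
# Sketch: approximate a tabulated Mspline lookup table with a polynomial in log(age).
# Table values are hypothetical placeholders.
import numpy as np

age = np.arange(18, 85)                                       # tabulated ages (years)
mspline_table = 0.5 * np.exp(-0.5 * ((age - 25) / 30) ** 2)   # hypothetical Mspline values

coeffs = np.polyfit(np.log(age), mspline_table, deg=5)        # modest-degree polynomial
mspline_poly = np.polyval(coeffs, np.log(age))

max_abs_err = np.max(np.abs(mspline_poly - mspline_table))
print(f"max |polynomial - lookup| = {max_abs_err:.4g}")
```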
Simple spline-function equations for fracture mechanics calculations
NASA Technical Reports Server (NTRS)
Orange, T. W.
1979-01-01
The paper presents simple spline-function equations for fracture mechanics calculations. A spline function is a sequence of piecewise polynomials of degree n greater than 1 whose coefficients are such that the function and its first n-1 derivatives are continuous. Second-degree spline equations are presented for the compact, three-point bend, and crack-line wedge-loaded specimens. Some expressions can be used directly, so that for a cyclic crack propagation test using a compact specimen, the equation given allows the crack length to be calculated from the slope of the load-displacement curve. For an R-curve test, equations allow the crack length and stress intensity factor to be calculated from the displacement and the displacement ratio.
A cubic spline approximation for problems in fluid mechanics
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Graves, R. A., Jr.
1975-01-01
A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
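A minimal sketch illustrating why spline approximations are attractive on nonuniform meshes: a cubic spline through scattered nodes still gives accurate derivative values. This is only a derivative-accuracy illustration, not the paper's spline-alternating-direction-implicit scheme.

```python
# Sketch: cubic-spline approximation of derivatives on a nonuniform mesh.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.sort(np.random.default_rng(0).uniform(0, 2 * np.pi, 60))   # nonuniform mesh
f = np.sin(x)

spl = CubicSpline(x, f)
df_spline = spl(x, 1)                      # first derivative evaluated from the spline
err = np.max(np.abs(df_spline - np.cos(x)))
print(f"max derivative error on nonuniform mesh: {err:.2e}")
```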
Bidirectional Elastic Image Registration Using B-Spline Affine Transformation
Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao
2014-01-01
A registration scheme termed as B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-Spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes but with large deformation, a bi-directional instead of the traditional unidirectional objective / cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210
Huang, Dong; Cabral, Ricardo; De la Torre, Fernando
2016-02-01
Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features ( X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
An L1 smoothing spline algorithm with cross validation
NASA Astrophysics Data System (ADS)
Bosworth, Ken W.; Lall, Upmanu
1993-08-01
We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces WM(D). We assume one is given data of the form yi = f(ti) + εi, i = 1,...,N, with {ti}i=1,...,N ⊂ D, where the εi are errors with E(εi) = 0 and f is assumed to be in WM. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, sλ, of the optimization problem: minimize (1/N)∑i=1,...,N |yi − g(ti)| + λJM(g), where JM(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, sλ, would be expected to give robust smoothed estimates of f in situations where the εi are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing sλ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily, if a good initial point is available. The "data-driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 taken on the above SQPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
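A minimal sketch of a one-dimensional, discretized analog of the LAD smoothing-spline objective, solved directly as a convex program with CVXPY. The thin-plate seminorm JM is replaced here by a squared second-difference penalty, and the SQP/continuation-in-λ scheme of the paper is replaced by a single solve at a fixed λ, so this illustrates the objective, not the paper's algorithm.

```python
# Sketch: 1-D LAD smoothing-spline analog: (1/N) * sum |y - g| + lambda * ||D2 g||^2.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + rng.standard_t(df=2, size=t.size) * 0.2   # heavy-tailed noise

lam = 5.0
g = cp.Variable(t.size)
objective = cp.sum(cp.abs(y - g)) / t.size + lam * cp.sum_squares(cp.diff(g, 2))
cp.Problem(cp.Minimize(objective)).solve()

s_lambda = g.value    # robust smoothed estimate of f at the data sites
```

The absolute-value loss is what gives robustness to heavy-tailed errors, exactly as the abstract argues for the LAD criterion over least squares.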
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were subsequently queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
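A minimal sketch of generating a Granule-style XML description for one data file with the Python standard library. The element names follow the spirit of a SPASE Granule (resource identifier, parent resource, access URL) but are not guaranteed to match the official schema, and the identifiers and URL are illustrative placeholders; ADAPT's actual routines are not reproduced here.

```python
# Sketch: one Granule-style XML record; element names and values are illustrative only.
import xml.etree.ElementTree as ET

granule = ET.Element("Granule")
ET.SubElement(granule, "ResourceID").text = "spase://EXAMPLE/Granule/SOME_DATASET/20150101"
ET.SubElement(granule, "ParentID").text = "spase://EXAMPLE/NumericalData/SOME_DATASET"
source = ET.SubElement(granule, "Source")
ET.SubElement(source, "URL").text = "https://example.invalid/data/some_dataset_20150101.cdf"

ET.indent(granule)                       # pretty-print (Python 3.9+)
print(ET.tostring(granule, encoding="unicode"))
```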
How to fly an aircraft with control theory and splines
NASA Technical Reports Server (NTRS)
Karlsson, Anders
1994-01-01
When trying to fly an aircraft as smoothly as possible it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory, in a system that tries to model an aircraft. Computer calculations in Matlab show that it is impossible to obtain sufficiently smooth control signals in this way. This is because the splines try to approximate not only the test function but also its derivatives. Perfect tracking is achieved, but at the cost of very peaky control signals and accelerations.
Fast Simulation of X-ray Projections of Spline-based Surfaces using an Append Buffer
Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca
2012-01-01
Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps from tessellation of the splines, projection onto the detector, and drawing are implemented in OpenCL. We introduced a special append buffer for increased performance in order to store the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate. Projections of size 640×480 can be generated within 254 ms. Reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task. Even in the absence of noise, they result in errors up to 9 HU on average, although projection images appear to be correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically. Source code is available at http://conrad.stanford.edu/ PMID:22975431
Cubic spline approximation techniques for parameter estimation in distributed systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Crowley, J. M.; Kunisch, K.
1983-01-01
Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.
Approximation and modeling with ambient B-splines
NASA Astrophysics Data System (ADS)
Lehmann, N.; Maier, L.-B.; Odathuparambil, S.; Reif, U.
2016-06-01
We present a novel technique for solving approximation problems on manifolds in terms of standard tensor product B-splines. This method is easy to implement and provides optimal approximation order. Applications include the representation of smooth surfaces of arbitrary genus.
Radial Splines Would Prevent Rotation Of Bearing Race
NASA Technical Reports Server (NTRS)
Kaplan, Ronald M.; Chokshi, Jaisukhlal V.
1993-01-01
Interlocking fine-pitch ribs and grooves formed on otherwise flat mating end faces of housing and outer race of rolling-element bearing to be mounted in housing, according to proposal. Splines bear large torque loads and impose minimal distortion on raceway.
Abbas, A A; Guo, X; Tan, W H; Jalab, H A
2014-08-01
In a computerized image analysis environment, the irregularity of a lesion border has been used to differentiate between malignant melanoma and other pigmented skin lesions. The accuracy of the automated lesion border detection is a significant step towards accurate classification at a later stage. In this paper, we propose the use of a combined spline and B-spline method in order to enhance the quality of dermoscopic images before segmentation. First, morphological operations and a median filter were used to remove noise from the original image during pre-processing. Then we proceeded to adjust image RGB values to the optimal color channel (green channel). The combined spline and B-spline method was subsequently adopted to enhance the image before segmentation. The lesion segmentation was completed based on a threshold value empirically obtained using the optimal color channel. Finally, morphological operations were utilized to merge the smaller regions with the main lesion region. Improvement in the average segmentation accuracy was observed in experiments conducted on 70 dermoscopic images. The average segmentation accuracy achieved in this paper was 97.21% (the average sensitivity and specificity were 94% and 98.05%, respectively).
A spline-based tool to assess and visualize the calibration of multiclass risk predictions.
Van Hoorde, K; Van Huffel, S; Timmerman, D; Bourne, T; Van Calster, B
2015-04-01
When validating risk models (or probabilistic classifiers), calibration is often overlooked. Calibration refers to the reliability of the predicted risks, i.e. whether the predicted risks correspond to observed probabilities. In medical applications this is important because treatment decisions often rely on the estimated risk of disease. The aim of this paper is to present generic tools to assess the calibration of multiclass risk models. We describe a calibration framework based on a vector spline multinomial logistic regression model. This framework can be used to generate calibration plots and calculate the estimated calibration index (ECI) to quantify lack of calibration. We illustrate these tools in relation to risk models used to characterize ovarian tumors. The outcome of the study is the surgical stage of the tumor when relevant and the final histological outcome, which is divided into five classes: benign, borderline malignant, stage I, stage II-IV, and secondary metastatic cancer. The 5909 patients included in the study are randomly split into equally large training and test sets. We developed and tested models using the following algorithms: logistic regression, support vector machines, k nearest neighbors, random forest, naive Bayes and nearest shrunken centroids. Multiclass calibration plots are interesting as an approach to visualizing the reliability of predicted risks. The ECI is a convenient tool for comparing models, but is less informative and interpretable than calibration plots. In our case study, logistic regression and random forest showed the highest degree of calibration, and the naive Bayes the lowest.
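A minimal sketch of per-class reliability (calibration) checking for a multiclass risk model, using scikit-learn's binned calibration_curve in a one-vs-rest fashion as a simple stand-in for the vector spline multinomial logistic calibration framework and ECI described above; the data are synthetic, not the ovarian tumor cohort.

```python
# Sketch: one-vs-rest calibration curves for a multiclass probabilistic classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.calibration import calibration_curve

X, y = make_classification(n_samples=4000, n_classes=3, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)

for k in range(probs.shape[1]):
    obs, pred = calibration_curve((y_te == k).astype(int), probs[:, k], n_bins=10)
    print(f"class {k}: observed {np.round(obs, 2)} vs predicted {np.round(pred, 2)}")
```

If the model is well calibrated, the observed event rates track the predicted risks bin by bin; the ECI in the paper summarizes the deviation between the two into a single number.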
Akima splines for minimization of breathing interference in aortic rheography data
NASA Astrophysics Data System (ADS)
Tsoy, Maria O.; Stiukhina, Elena S.; Klochkov, Victor A.; Postnov, Dmitry E.
2015-03-01
The elimination of low-frequency breathing noise and motion artifacts is one of the most difficult challenges in preprocessing rheographic signals. Data filtering is the conventional way to separate the useful signal from noise and interference, and linear filtering is usually chosen because of its easy design and implementation. In some cases, however, such techniques are difficult, if not impossible, to apply, because the frequency range of the data overlaps with that of the interference. Specifically, this happens in aortic rheography, where contributions from breathing and pulmonary blood flow are unavoidable. We suggest an alternative approach to breathing-interference reduction, based on adaptive reconstruction of the baseline deviation. Specifically, a computational scheme based on multiple calculations of Akima splines is suggested, implemented in C# and validated using surrogate data. Application of the proposed technique to real data yields better detection of aortic valve opening.
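A minimal sketch of baseline (breathing-interference) removal with an Akima spline, assuming a toy synthetic signal, a hypothetical sampling rate, and anchor knots placed once per cardiac cycle at arbitrary phase; the paper's scheme reconstructs the baseline adaptively with multiple Akima-spline passes, which is not reproduced here.

```python
# Sketch: estimate a slow breathing baseline through sparse anchor samples with an
# Akima spline and subtract it from the rheographic signal. All values are toy choices.
import numpy as np
from scipy.interpolate import Akima1DInterpolator

fs = 250.0                                        # sampling rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)
pulse = 0.5 * np.abs(np.sin(2 * np.pi * 1.2 * t)) ** 8   # toy rheographic pulses
breath = 0.8 * np.sin(2 * np.pi * 0.25 * t)              # breathing interference
signal = pulse + breath

anchors = np.arange(0, len(t), int(fs / 1.2))     # roughly one anchor per cardiac cycle
baseline = Akima1DInterpolator(t[anchors], signal[anchors])(t)
corrected = signal - np.nan_to_num(baseline, nan=0.0)     # edge samples may be NaN-filled
```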
High-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1975-01-01
The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to Burgers' equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.
Tensorial Basis Spline Collocation Method for Poisson's Equation
NASA Astrophysics Data System (ADS)
Plagne, Laurent; Berthou, Jean-Yves
2000-01-01
This paper aims to describe the tensorial basis spline collocation method applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h4) and O(h6) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: As an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 2563 non-uniform 3D Cartesian mesh by using 128 T3E-750 processors. This represents 215 Mflops per processors.
Data approximation using a blending type spline construction
Dalmo, Rune; Bratlie, Jostein
2014-11-18
Generalized expo-rational B-splines (GERBS) is a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
Control theory and splines, applied to signature storage
NASA Technical Reports Server (NTRS)
Enqvist, Per
1994-01-01
In this report the problem we are going to study is the interpolation of a set of points in the plane with the use of control theory. We will discover how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problems. Actually we will see that the important parameters will be the two eigenvalues of the control matrix.
Explicit B-spline regularization in diffeomorphic image registration
Tustison, Nicholas J.; Avants, Brian B.
2013-01-01
Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140
Explicit B-spline regularization in diffeomorphic image registration.
Tustison, Nicholas J; Avants, Brian B
2013-01-01
Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline "flavored" diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools.
Leffondré, Karen; Jager, Kitty J; Boucquemont, Julie; Stel, Vianda S; Heinze, Georg
2014-10-01
Regression models are being used to quantify the effect of an exposure on an outcome, while adjusting for potential confounders. While the type of regression model to be used is determined by the nature of the outcome variable, e.g. linear regression has to be applied for continuous outcome variables, all regression models can handle any kind of exposure variables. However, some fundamentals of representation of the exposure in a regression model and also some potential pitfalls have to be kept in mind in order to obtain meaningful interpretation of results. The objective of this educational paper was to illustrate these fundamentals and pitfalls, using various multiple regression models applied to data from a hypothetical cohort of 3000 patients with chronic kidney disease. In particular, we illustrate how to represent different types of exposure variables (binary, categorical with two or more categories and continuous), and how to interpret the regression coefficients in linear, logistic and Cox models. We also discuss the linearity assumption in these models, and show how wrongly assuming linearity may produce biased results and how flexible modelling using spline functions may provide better estimates.
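A minimal sketch of representing the different exposure types discussed above (binary, categorical, continuous) in a regression formula, including a spline term to relax the linearity assumption. The variable names and the data frame are hypothetical; statsmodels' formula interface (with patsy's bs() spline basis) is used for illustration, and the Cox model case is omitted.

```python
# Sketch: binary, categorical, and continuous exposures in a linear model,
# with a B-spline term for the continuous exposure. All variables are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "egfr": rng.normal(60, 20, 3000),                       # continuous exposure
    "smoker": rng.integers(0, 2, 3000),                     # binary exposure
    "ckd_stage": rng.choice(["1-2", "3", "4-5"], 3000),     # categorical exposure
})
df["sbp"] = 130 - 0.2 * df["egfr"] + 5 * df["smoker"] + rng.normal(0, 10, 3000)

linear = smf.ols("sbp ~ smoker + C(ckd_stage) + egfr", data=df).fit()
flexible = smf.ols("sbp ~ smoker + C(ckd_stage) + bs(egfr, df=4)", data=df).fit()
print(linear.params)
print(flexible.params.filter(like="bs(egfr"))
```

Comparing the linear and spline-based fits is one simple way to check whether wrongly assuming linearity in the continuous exposure would bias the estimated association.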
Numerical solution of differential-algebraic equations using the spline collocation-variation method
NASA Astrophysics Data System (ADS)
Bulatov, M. V.; Rakhvalov, N. P.; Solovarova, L. S.
2013-03-01
Numerical methods for solving initial value problems for differential-algebraic equations are proposed. The approximate solution is represented as a continuous vector spline whose coefficients are found using the collocation conditions stated for a subgrid with the number of collocation points less than the degree of the spline and the minimality condition for the norm of this spline in the corresponding spaces. Numerical results for some model problems are presented.
Flexible regression models over river networks
O’Donnell, David; Rushworth, Alastair; Bowman, Adrian W; Marian Scott, E; Hallard, Mark
2014-01-01
Many statistical models are available for spatial data but the vast majority of these assume that spatial separation can be measured by Euclidean distance. Data which are collected over river networks constitute a notable and commonly occurring exception, where distance must be measured along complex paths and, in addition, account must be taken of the relative flows of water into and out of confluences. Suitable models for this type of data have been constructed based on covariance functions. The aim of the paper is to place the focus on underlying spatial trends by adopting a regression formulation and using methods which allow smooth but flexible patterns. Specifically, kernel methods and penalized splines are investigated, with the latter proving more suitable from both computational and modelling perspectives. In addition to their use in a purely spatial setting, penalized splines also offer a convenient route to the construction of spatiotemporal models, where data are available over time as well as over space. Models which include main effects and spatiotemporal interactions, as well as seasonal terms and interactions, are constructed for data on nitrate pollution in the River Tweed. The results give valuable insight into the changes in water quality in both space and time. PMID:25653460
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
Ren, K
1990-07-01
A new numerical method of determining potentiometric titration end-points is presented. It consists of calculating the coefficients of approximative spline functions describing the experimental data (e.m.f., volume of titrant added). The end-point (the inflection point of the curve) is determined by calculating the zero points of the second derivative of the approximative spline function. This spline function, unlike rational spline functions, is free from oscillations and its course is largely independent of random errors in e.m.f. measurements. The proposed method is useful for direct analysis of titration data and especially as a basis for the construction of microcomputer-controlled automatic titrators. PMID:18964999
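A minimal sketch of the end-point idea above: fit a smoothing spline to (volume, e.m.f.) data and take the zero of its second derivative. The data are synthetic, the quintic smoothing spline and its smoothing factor are assumptions, and the paper's particular approximative-spline construction is not reproduced.

```python
# Sketch: titration end-point as a zero of the second derivative of a smoothing spline.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
volume = np.linspace(0, 20, 80)                                  # mL of titrant added
emf = 200 + 300 / (1 + np.exp(-(volume - 11.3) * 2)) + rng.normal(0, 1.5, volume.size)

spl = UnivariateSpline(volume, emf, k=5, s=len(volume) * 2.0)    # quintic smoothing spline
second = spl.derivative(2)                                       # cubic spline, so roots() works
candidates = second.roots()                                      # zeros of the 2nd derivative

# Keep the root where the titration curve is steepest (the inflection point).
end_point = max(candidates, key=lambda v: abs(spl.derivative(1)(v)))
print(f"estimated end-point: {end_point:.2f} mL")
```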
ERIC Educational Resources Information Center
Pedrini, D. T.; Pedrini, Bonnie C.
Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…
Regressive Evolution in Astyanax Cavefish
Jeffery, William R.
2013-01-01
A diverse group of animals, including members of most major phyla, have adapted to life in the perpetual darkness of caves. These animals are united by the convergence of two regressive phenotypes, loss of eyes and pigmentation. The mechanisms of regressive evolution are poorly understood. The teleost Astyanax mexicanus is of special significance in studies of regressive evolution in cave animals. This species includes an ancestral surface-dwelling form and many conspecific cave-dwelling forms, some of which have evolved their regressive phenotypes independently. Recent advances in Astyanax development and genetics have provided new information about how eyes and pigment are lost during cavefish evolution; namely, they have revealed some of the molecular and cellular mechanisms involved in trait modification, the number and identity of the underlying genes and mutations, the molecular basis of parallel evolution, and the evolutionary forces driving adaptation to the cave environment. PMID:19640230
Use of tensor product splines in magnet optimization
Davey, K.R. )
1999-05-01
Variational metrics and other direct search techniques have proved useful in magnetic optimization. At least one technique used in magnetic optimization is to first fit the desired optimization parameter to the data. If this fit is smoothly differentiable, a number of powerful techniques become available for the optimization. The author shows the usefulness of tensor product splines in accomplishing this end. Proper choice of augmented knot placement not only makes the fit very accurate, but also allows for differentiation. Thus the gradients required for direct optimization in bivariate and trivariate applications are robustly generated.
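A minimal sketch of the approach above: fit a smooth, differentiable tensor-product spline to gridded samples of an objective and evaluate gradients for use in a direct optimizer. The sampled field is a synthetic stand-in for a magnet-design objective, and the knot placement is left to the fitting routine rather than chosen as in the paper.

```python
# Sketch: tensor-product spline fit to gridded data, with gradient evaluation.
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0, 1, 25)
y = np.linspace(0, 1, 25)
X, Y = np.meshgrid(x, y, indexing="ij")
field = np.sin(3 * X) * np.cos(2 * Y)          # hypothetical objective samples

spl = RectBivariateSpline(x, y, field, kx=3, ky=3, s=0)

# Value and gradient at an arbitrary point, as a gradient-based optimizer would need.
pt = (0.37, 0.62)
value = spl.ev(*pt)
grad = np.array([spl.ev(*pt, dx=1), spl.ev(*pt, dy=1)])
print(value, grad)
```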
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyu; Li, Qingbo; Zhang, Guangjun
2013-11-01
In this paper, a modified single-index signal regression (mSISR) method is proposed to construct a nonlinear and practical model with high accuracy. The mSISR method defines the optimal penalty tuning parameter in P-spline signal regression (PSR) as the initial tuning parameter and chooses the number of cycles based on minimizing the root mean squared error of cross-validation (RMSECV). mSISR is superior to single-index signal regression (SISR) in terms of accuracy, computation time and convergence. It can also provide the character of the non-linearity between spectra and responses in a more precise manner than SISR. Two spectral data sets from basic research experiments, including plant chlorophyll nondestructive measurement and human blood glucose noninvasive measurement, are employed to illustrate the advantages of mSISR. The results indicate that the mSISR method (i) obtains the smooth and helpful regression coefficient vector, (ii) explicitly exhibits the type and amount of the non-linearity, (iii) can take advantage of nonlinear features of the signals to improve prediction performance and (iv) has distinct adaptability for the complex spectra model by comparing with other calibration methods. It is validated that mSISR is a promising nonlinear modeling strategy for multivariate calibration.
Systolic algorithms for B-spline patch generation
Megson, G.M.
1991-03-01
This paper describes a systolic array for constructing the blending functions of B-spline curves and surfaces that is 7k times faster than the equivalent sequential computation. The array requires just 5k inner product cell equivalents, where k - 1 is the maximum degree of the blending function polynomials. This array is then used as a basis for a composite systolic architecture for generating single or multiple points on a B-spline curve or surface. The total hardware requirement is bounded by 5 max(k, l) + 3(max(m, n) + 1) inner product cells and O(mn) registers, where m and n are the numbers of control points in the two available directions. The hardware can be reduced to 5 max(k, l) + max(m, n) + 1 if each component of a point is generated by separate passes of data through the array. Equations for the array speed-up are given and likely speed-ups for different sized patches considered.
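As a sequential reference point for the blending functions the systolic array computes, the following NumPy sketch evaluates B-spline basis functions with the Cox-de Boor recursion and combines them with control points; the knot vector and control points are arbitrary examples, not from the paper.

```python
# Sequential (non-systolic) reference: Cox-de Boor recursion for B-spline
# blending functions and evaluation of a curve point from control points.
import numpy as np

def bspline_basis(i, k, t, knots):
    """Value of the degree-k B-spline basis function N_{i,k} at parameter t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Evaluate a cubic (degree 3) B-spline curve defined by 6 control points.
knots = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], dtype=float)
ctrl = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0], [5, 1]], dtype=float)
t = 1.5
point = sum(bspline_basis(i, 3, t, knots) * ctrl[i] for i in range(len(ctrl)))
print(point)
```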
An analytic reconstruction method for PET based on cubic splines
NASA Astrophysics Data System (ADS)
Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.
2014-03-01
PET imaging is an important nuclear medicine modality that measures in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D, reconstruction method called SRT, Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding which restricts reconstruction only within object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
B-spline calculations of oscillator strengths in noble gases.
NASA Astrophysics Data System (ADS)
Zatsarinny, Oleg; Bartschat, Klaus
2006-05-01
The B-spline box-based close-coupling method [1] was applied for extensive calculations of the transition probabilities in the noble gases Ne, Ar, Kr and Xe for energy levels up to n = 12. An individually optimized, term-dependent set of non-orthogonal one-electron radial functions was used to account for the strong term dependence in the valence orbitals. The core-valence correlation was introduced through multi-channel expansions, which include the ns^2np^5, nsnp^6 and ns^2np^4(n+1)l target states. The inner-core correlation was accounted for by employing multi-configuration target states. Energy levels and oscillator strengths for transitions from the np^6 ground-state configuration as well as transitions between excited states were computed in the Breit-Pauli approximation. The inner-core correlation was found to be very important for most of the transitions considered. The good agreement with the available experimental data shows that the B-spline method can be used for accurate calculations of oscillator strengths for states with intermediate n-values, i.e. exactly the region where it is difficult to apply standard MCHF methods. At the same time the accuracy for the low-lying states is close to the accuracy obtained in large-scale MCHF calculations [2]. [1] O. Zatsarinny and C. Froese Fischer, J. Phys. B 35, 4669 (2002). [2] A. Irimia and C. Froese Fischer, J. Phys. B 37, 1659 (2004).
n-dimensional non uniform rational b-splines for metamodeling
Turner, Cameron J; Crawford, Richard H
2008-01-01
Non Uniform Rational B-splines (NURBs) have unique properties that make them attractive for engineering metamodeling applications. NURBs are known to accurately model many different continuous curve and surface topologies in 1- and 2-variate spaces. However, engineering metamodels of the design space often require hypervariate representations of multidimensional outputs. In essence, design space metamodels are hyperdimensional constructs with a dimensionality determined by their input and output variables. To use NURBs as the basis for a metamodel in a hyperdimensional space, traditional geometric fitting techniques must be adapted to hypervariate and hyperdimensional spaces composed of both continuous and discontinuous variable types. In this paper, the authors describe the necessary adaptations for the development of a NURBs-based metamodel called a Hyperdimensional Performance Model or HyPerModel. HyPerModels are capable of accurately and reliably modeling nonlinear hyperdimensional objects defined by both continuous and discontinuous variables of a wide variety of topologies, such as those that define typical engineering design spaces. The authors demonstrate this ability by successfully generating accurate HyPerModels of 10 trial functions, laying the foundation for future work with N-dimensional NURBs in design space applications.
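A hedged sketch of the basic primitive such a metamodel builds on is shown below: evaluating a point on a NURBS (rational B-spline) curve as a weighted, rational combination of control points. The knots, weights, and control points are illustrative, and the SciPy identity-coefficient trick is just one convenient way to obtain the basis values.

```python
# Illustrative NURBS curve evaluation: rational combination of B-spline basis
# values, weights, and control points. Values below are arbitrary examples.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)
ctrl = np.array([[0, 0], [1, 2], [2, 2], [3, 0], [4, 1]], dtype=float)  # 5 control points
weights = np.array([1.0, 0.8, 1.5, 0.8, 1.0])                            # rational weights

n = len(ctrl)
basis = BSpline(knots, np.eye(n), degree)   # evaluating this returns all basis values N_i(u)

def nurbs_point(u):
    N = basis(u)                        # B-spline basis values, shape (n,)
    wN = weights * N
    return (wN @ ctrl) / wN.sum()       # rational (NURBS) combination of control points

print(nurbs_point(0.25))
```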
A Spline Approximating Algorithm for the Rezoning (remapping)of Arbitrary Meshes
NASA Astrophysics Data System (ADS)
Wang, Ruili
2001-06-01
Traditionally, numerical simulation of fluid dynamics has taken the form of Lagrangian or Eulerian methods. Lagrangian methods, in which the computational mesh travels with the fluid, are ideal for the many problems which involve interfaces between materials or free surfaces. However, multidimensional Lagrangian calculations can typically be carried out for only a limited time before severe mesh distortion, or even mesh tangling, destroys the calculation. Eulerian methods, in which the mesh is fixed, are well suited to flows with large deformation, but the sharp resolution of interfaces or free surfaces is lost. Arbitrary Lagrangian-Eulerian (ALE) methods in computational fluid dynamics therefore require the periodic remapping of conserved quantities such as mass, momentum, and energy from one old, distorted mesh to some other arbitrarily defined mesh. This procedure is a type of interpolation which is usually constrained to be conservative and monotone. The report presents remapping algorithms based on spline approximation for numerical simulation codes that use an unstructured or adaptive mesh. The approach applies not only to structured meshes but also to unstructured meshes. The techniques give a more accurate representation of the cell-averaged physical quantities, and the procedures are simple to implement.
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
The approximation of the parameters of ASE motion by the spline-functions.
NASA Astrophysics Data System (ADS)
Tamarov, V. A.; Serebrennikov, A. G.
1984-11-01
Cubic and rational splines have been applied to the approximation of the rectangular coordinates and velocities of artificial Earth satellites. The results are discussed from the point of view of operating speed and the amount of numerical information needed to determine the spline.
NASA Astrophysics Data System (ADS)
Mitra, Jhimli; Marti, Robert; Oliver, Arnau; Llado, Xavier; Vilanova, Joan C.; Meriaudeau, Fabrice
2011-03-01
This paper provides a comparison of spline-based registration methods applied to register interventional Trans Rectal Ultrasound (TRUS) and pre-acquired Magnetic Resonance (MR) prostate images for needle guided prostate biopsy. B-splines and Thin-plate Splines (TPS) are the most prevalent spline-based approaches to achieve deformable registration. Pertaining to the strategic selection of correspondences for the TPS registration, we use an automatic method already proposed in our previous work to generate correspondences in the MR and US prostate images. The method exploits the prostate geometry with the principal components of the segmented prostate as the underlying framework and involves a triangulation approach. The correspondences are generated with successive refinements and Normalized Mutual Information (NMI) is employed to determine the optimal number of correspondences required to achieve TPS registration. B-spline registration with successive grid refinements is consecutively applied for a significant comparison of the impact of the strategically chosen correspondences on the TPS registration against the uniform B-spline control grids. The experimental results are validated on 4 patient datasets. Dice Similarity Coefficient (DSC) is used as a measure of the registration accuracy. Average DSC values of 0.97+/-0.01 and 0.95+/-0.03 are achieved for the TPS and B-spline registrations respectively. B-spline registration is observed to be more computationally expensive than the TPS registration with average execution times of 128.09 +/- 21.7 seconds and 62.83 +/- 32.77 seconds respectively for images with maximum width of 264 pixels and a maximum height of 211 pixels.
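For readers unfamiliar with the thin-plate spline step, the following schematic shows a TPS warp fitted between two corresponding landmark sets and applied to new points; the coordinates are invented and SciPy's radial basis interpolator is used here only as a stand-in for the registration pipeline described above.

```python
# Schematic thin-plate-spline warp between corresponding landmark sets.
# Landmark coordinates are invented; this is not the authors' pipeline.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
mr_pts = rng.uniform(0, 200, size=(12, 2))            # landmarks in the MR image
us_pts = mr_pts + rng.normal(0, 3, size=(12, 2))      # same landmarks in the TRUS image

# TPS mapping from MR coordinates to TRUS coordinates (2-D vector-valued fit).
tps = RBFInterpolator(mr_pts, us_pts, kernel="thin_plate_spline", smoothing=0.0)

# Warp an arbitrary set of MR pixel locations into TRUS space.
query = np.array([[50.0, 60.0], [120.0, 80.0]])
print(tps(query))
```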
Samuels, Marina A; Reed, Matthew P; Arbogast, Kristy B; Seacrist, Thomas
2016-01-01
Designing motor vehicle safety systems requires knowledge of whole body kinematics during dynamic loading for occupants of varying size and age, often obtained from sled tests with postmortem human subjects and human volunteers. Recently, we reported pediatric and adult responses in low-speed (<4 g) automotive-like impacts, noting reductions in maximum excursion with increasing age. Since the time-based trajectory shape is also relevant for restraint design, this study quantified the time-series trajectories using basis splines and developed a statistical model for predicting trajectories as a function of body dimension or age. Previously collected trajectories of the head, spine, and pelvis were modeled using cubic basis splines with eight control points. A principal component analysis was conducted on the control points and related to erect seated height using a linear regression model. The resulting statistical model quantified how trajectories became shorter and flatter with increasing body size, corresponding to the validation data-set. Trajectories were then predicted for erect seated heights corresponding to pediatric and adult anthropomorphic test devices (ATDs), thus generating performance criteria for the ATDs based on human response. This statistical model can be used to predict trajectories for a subject of specified anthropometry and utilized in subject-specific computational models of occupant response.
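A rough sketch of the statistical pipeline described above is given below: each trajectory is represented by an eight-control-point cubic B-spline, principal component analysis is run on the control points, and the leading score is regressed on erect seated height. All data, knot choices, and the single-output simplification are assumptions for illustration only.

```python
# Illustrative pipeline: B-spline control points -> PCA -> regression on seated height.
# Trajectories and seated heights below are synthetic.
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(2)
time = np.linspace(0.0, 1.0, 100)
seated_height = rng.uniform(600.0, 950.0, 20)          # mm, 20 hypothetical subjects

# Synthetic excursion trajectories that shrink with increasing body size.
trajectories = [(1000.0 - 0.5 * h) * np.sin(np.pi * time) + rng.normal(0, 2, time.size)
                for h in seated_height]

# Interior knots chosen so each least-squares fit has exactly 8 coefficients.
k = 3
interior = np.linspace(0.2, 0.8, 4)
knots = np.r_[[time[0]] * (k + 1), interior, [time[-1]] * (k + 1)]
ctrl = np.array([make_lsq_spline(time, y, knots, k=k).c for y in trajectories])  # (20, 8)

# PCA on the control points, then a linear model for the leading score.
centered = ctrl - ctrl.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[0]
slope, intercept = np.polyfit(seated_height, scores, 1)
print("PC1 score vs seated height:", slope, intercept)
```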
Existence and Construction of Simple B-Splines of Class Ck on a Four-Directional Mesh of the Plane
NASA Astrophysics Data System (ADS)
Nouisser, O.; Sbibih, D.
2001-08-01
In this paper we present a study of spaces of splines in Ck(R2) with supports the square S1 and the lozenge ?1 formed respectively by four and eight triangles of the uniform four directional mesh of the plane. Such splines are called S1 and ?1-splines. We first compute the dimension of the space of S1-splines. Then we prove the existence of a unique S1-spline of minimal degree for any fixed k ≥ 0. By using this last result, we also prove the existence of a unique ?1-spline of minimal degree. Finally, we describe algorithms allowing to compute the Bernstein-Bézier coefficients of S1-spline and ?1-spline of minimal degree.
Interpolation by new B-splines on a four directional mesh of the plane
NASA Astrophysics Data System (ADS)
Nouisser, O.; Sbibih, D.
2004-01-01
In this paper we construct new simple and composed B-splines on the uniform four directional mesh of the plane, in order to improve the approximation order of B-splines studied in Sablonniere (in: Program on Spline Functions and the Theory of Wavelets, Proceedings and Lecture Notes, Vol. 17, University of Montreal, 1998, pp. 67-78). If φ is such a simple B-spline, we first determine the space of polynomials with maximal total degree included in , and we prove some results concerning the linear independence of the family . Next, we show that the cardinal interpolation with φ is correct and we study in S(φ) a Lagrange interpolation problem. Finally, we define composed B-splines by repeated convolution of φ with the characteristic functions of a square or a lozenge, and we give some of their properties.
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.; Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.
2008-07-14
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions; especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting.
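The sketch below fits a monotone spline standard curve by penalized, constrained least squares, which is the same flavor of idea as the PCLS step described above; it is not the authors' code, and the simulated assay data, knot layout, and penalty weight are all assumptions.

```python
# Hedged sketch of a monotone spline standard curve via penalized, constrained
# least squares: coefficients are forced nondecreasing, which is sufficient for
# a nondecreasing B-spline fit. Assay data below are simulated.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import lsq_linear

rng = np.random.default_rng(4)
conc = np.linspace(0, 6, 40)                                # log concentration
intensity = 1.0 / (1.0 + np.exp(-(conc - 3.0))) + rng.normal(0, 0.03, conc.size)

k, m = 3, 10                                                # cubic spline, 10 coefficients
interior = np.linspace(conc[0], conc[-1], m - k + 1)[1:-1]
t = np.r_[[conc[0]] * (k + 1), interior, [conc[-1]] * (k + 1)]
B = BSpline(t, np.eye(m), k)(conc)                          # design matrix, shape (40, m)

# Reparametrize c = L d with d[1:] >= 0 (monotone coefficients) and add a
# second-difference roughness penalty of weight lam.
L = np.tril(np.ones((m, m)))
D2 = np.diff(np.eye(m), n=2, axis=0)
lam = 1.0
A = np.vstack([B @ L, np.sqrt(lam) * D2 @ L])
b = np.r_[intensity, np.zeros(m - 2)]
lb = np.r_[-np.inf, np.zeros(m - 1)]
d = lsq_linear(A, b, bounds=(lb, np.inf)).x
coef = L @ d

fitted = BSpline(t, coef, k)
print(fitted(np.array([1.0, 3.0, 5.0])))                    # monotone standard curve values
```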
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
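The following simplified illustration shows backward knot elimination with a B-spline basis, in the spirit of the approach described above; it is not the FORTRAN programs themselves, and the data, starting knot set, and stopping rule are assumptions.

```python
# Simplified backward knot elimination: start with many interior knots and
# repeatedly drop the knot whose removal increases the SSE the least.
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)

k = 3
def fit_sse(interior):
    t = np.r_[[x[0]] * (k + 1), np.sort(interior), [x[-1]] * (k + 1)]
    spl = make_lsq_spline(x, y, t, k=k)
    return spl, np.sum((spl(x) - y) ** 2)

interior = list(np.linspace(1, 9, 15))          # generous starting knot set
spl, sse = fit_sse(interior)
while len(interior) > 1:
    # Try removing each knot; keep the removal that hurts the fit the least.
    trials = [(fit_sse(interior[:i] + interior[i + 1:]), i) for i in range(len(interior))]
    (best_spl, best_sse), idx = min(trials, key=lambda s: s[0][1])
    if best_sse > 1.10 * sse:                    # stop when removal degrades the fit noticeably
        break
    interior.pop(idx)
    spl, sse = best_spl, best_sse

print("knots kept:", len(interior), "final SSE:", round(sse, 4))
```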
NASA Astrophysics Data System (ADS)
Curà, Francesca; Mura, Andrea
2013-11-01
Tooth stiffness is a very important parameter in studying both static and dynamic behaviour of spline couplings and gears. Many works concerning tooth stiffness calculation are available in the literature, but experimental results are very rare, above all considering spline couplings. In this work experimental values of spline coupling tooth stiffness have been obtained by means of a special hexapod measuring device. Experimental results have been compared with the corresponding theoretical and numerical ones. Also the effect of angular misalignments between hub and shaft has been investigated in the experimental planning.
Sensorless Interaction Force Control Based on B-Spline Function for Human-Robot Systems
NASA Astrophysics Data System (ADS)
Mitsantisuk, Chowarit; Katsura, Seiichiro; Ohishi, Kiyoshi
In this paper, to provide precise force sensation to the human operator, a twin direct-drive motor system with a wire rope mechanism has been developed. The human-robot interaction force and the wire rope tension are independently controlled in the acceleration dimension by realizing a dual disturbance observer based on modal space design. In the common mode, it is utilized for control of vibration suppression and wire rope tension. In the differential mode, the pure human external force, with compensation of the friction force, is obtained. This mode is useful for control of the human interaction force. Furthermore, a human-robot system that has the ability to support the human interaction force is also proposed. The interaction force generation based on a B-spline function is applied to automatically adjust the smooth force command corresponding to the adaptive parameters.
To analyze the human movement stroke, a multi-sensor scheme is applied to fuse both the two motor encoders and the acceleration sensor signal by using a Kalman filter. From the experimental results, the ability to design different levels of assistive force makes it well suited to customized training programs due to time and human movement constraints.
Quiet Clean Short-haul Experimental Engine (QCSEE). Ball spline pitch change mechanism design report
NASA Technical Reports Server (NTRS)
1978-01-01
Detailed design parameters are presented for a variable-pitch change mechanism. The mechanism is a mechanical system containing a ball screw/spline driving two counteracting master bevel gears that mesh with pinion gears attached to each of the 18 fan blades.
Bicubic B-spline interpolation method for two-dimensional heat equation
NASA Astrophysics Data System (ADS)
Hamid, Nur Nadiah Abd.; Majid, Ahmad Abd.; Ismail, Ahmad Izani Md.
2015-10-01
The two-dimensional heat equation was solved using the bicubic B-spline interpolation method. An arbitrary surface equation was generated by the bicubic B-spline equation. This equation was incorporated in the heat equation after discretizing the time using the finite difference method. An under-determined system of linear equations was obtained and solved to obtain the approximate analytical solution for the problem. This method was tested on one example.
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
Tharrington, Arnold N.
2015-09-09
The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.
Registration of sliding objects using direction dependent B-splines decomposition
NASA Astrophysics Data System (ADS)
Delmon, V.; Rit, S.; Pinho, R.; Sarrut, D.
2013-03-01
Sliding motion is a challenge for deformable image registration because it leads to discontinuities in the sought deformation. In this paper, we present a method to handle sliding motion using multiple B-spline transforms. The proposed method decomposes the sought deformation into sliding regions to allow discontinuities at their interfaces, but prevents unrealistic solutions by forcing those interfaces to match. The method was evaluated on 16 lung cancer patients against a single B-spline transform approach and a multi B-spline transforms approach without the sliding constraint at the interface. The target registration error (TRE) was significantly lower with the proposed method (TRE = 1.5 mm) than with the single B-spline approach (TRE = 3.7 mm) and was comparable to the multi B-spline approach without the sliding constraint (TRE = 1.4 mm). The proposed method was also more accurate along region interfaces, with 37% fewer gaps and overlaps when compared to the multi B-spline transforms without the sliding constraint. This work was presented in part at the 4th International Workshop on Pulmonary Image Analysis during the Medical Image Computing and Computer Assisted Intervention (MICCAI) in Toronto, Canada (2011).
Penalized Spline: a General Robust Trajectory Model for ZIYUAN-3 Satellite
NASA Astrophysics Data System (ADS)
Pan, H.; Zou, Z.
2016-06-01
Owing to the dynamic imaging system, the trajectory model plays a very important role in the geometric processing of high resolution satellite imagery. However, establishing a trajectory model is difficult when only discrete and noisy data are available. In this manuscript, we propose a general robust trajectory model, the penalized spline model, which can fit trajectory data well and smooth noise. The penalty parameter λ, which controls the trade-off between smoothness and fitting accuracy, can be estimated by generalized cross-validation. Five other trajectory models, including third-order polynomials, Chebyshev polynomials, linear interpolation, Lagrange interpolation and cubic splines, are compared with the penalized spline model. Both the sophisticated ephemeris and the on-board ephemeris are used to compare the orbit models. The penalized spline model can smooth part of the noise, and accuracy decreases as the orbit length increases. The band-to-band misregistration of ZiYuan-3 Dengfeng and Faizabad multispectral images is used to evaluate the proposed method. With the Dengfeng dataset, the third-order polynomials and Chebyshev approximation could not model the oscillation, introducing misregistration of 0.57 pixels in the across-track direction and 0.33 pixels in the along-track direction. With the Faizabad dataset, the linear interpolation, Lagrange interpolation and cubic spline models suffer from noise, introducing larger misregistration than the approximation models. Experimental results suggest the penalized spline model can model the oscillation and smooth noise.
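A compact numerical illustration of a penalized spline smoother with the penalty weight chosen by generalized cross-validation is given below; the "ephemeris" samples, basis size, and candidate λ grid are synthetic stand-ins, not the ZiYuan-3 processing chain.

```python
# P-spline smoother (B-spline basis + second-difference penalty) with lambda
# chosen by generalized cross-validation. All data below are synthetic.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(5)
time = np.linspace(0, 100, 200)                                            # s along an orbit segment
pos = 7.0e6 + 50.0 * np.sin(0.3 * time) + rng.normal(0, 5.0, time.size)   # noisy coordinate, m

k, m = 3, 30
interior = np.linspace(time[0], time[-1], m - k + 1)[1:-1]
t = np.r_[[time[0]] * (k + 1), interior, [time[-1]] * (k + 1)]
B = BSpline(t, np.eye(m), k)(time)
D = np.diff(np.eye(m), n=2, axis=0)                                        # second-order difference penalty

def gcv(lam):
    H = B @ np.linalg.solve(B.T @ B + lam * D.T @ D, B.T)                  # hat matrix
    resid = pos - H @ pos
    n = len(pos)
    return n * np.sum(resid ** 2) / (n - np.trace(H)) ** 2

lams = 10.0 ** np.arange(-4, 6)
best = min(lams, key=gcv)
coef = np.linalg.solve(B.T @ B + best * D.T @ D, B.T @ pos)
smooth = BSpline(t, coef, k)
print("selected lambda:", best, "fitted value at t=50:", smooth(50.0))
```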
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter update and/or increasing the MCMC sample size along iterations. Model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate its use through the analysis of bladder tumor data.
Ehrsam, Eric; Kallini, Joseph R.; Lebas, Damien; Modiano, Philippe; Cotten, Hervé
2016-01-01
Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of host immune response. Although 10 to 35 percent of cases of cutaneous melanomas may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis on a patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418
Evaluation of the spline reconstruction technique for PET
Kastis, George A. Kyriakopoulou, Dimitra; Gaitanis, Anastasios; Fernández, Yolanda; Hutton, Brian F.; Fokas, Athanasios S.
2014-04-15
Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an 18F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an 18F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real
Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.
2007-02-02
PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k^3-weighted EXAFS data.
Improved Regression Calibration
ERIC Educational Resources Information Center
Skrondal, Anders; Kuha, Jouni
2012-01-01
The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…
Prediction in Multiple Regression.
ERIC Educational Resources Information Center
Osborne, Jason W.
2000-01-01
Presents the concept of prediction via multiple regression (MR) and discusses the assumptions underlying multiple regression analyses. Also discusses shrinkage, cross-validation, and double cross-validation of prediction equations and describes how to calculate confidence intervals around individual predictions. (SLD)
Gerber, Samuel; Rubel, Oliver; Bremer, Peer-Timo; Pascucci, Valerio; Whitaker, Ross T.
2012-01-19
This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.
Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings
Guo, Y.; Keller, J.; Errichello, R.; Halse, C.
2013-12-01
Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.
NASA Astrophysics Data System (ADS)
Chen, Enguo; Zhuang, Zhenfeng; Cai, Jin; Liu, Yan; Yu, Feihong
2012-10-01
This paper presents a segment and spline synthesis optimization method (SSS method) for the freeform total-internal-reflection (TIR) lens design. Before the optimization starts, a series of discrete control points are used to describe the TIR lens profile. In order to realize initial optimization, the segment method is applied to optimize a linear-segmented TIR lens. The final optimization is further achieved by the spline optimization method, after which the cubic-spline-modeling TIR lens with the characteristic of low cost and easy fabrication could satisfy the target illumination requirements. The detailed design principle and optimization process of the SSS method are both analyzed and compared in the paper. Complementing each other, the synthesis of the segment and spline optimization method could realize the prescribed design and greatly improve the design efficiency for designers. As an example, the specially designed polymethyl methacrylate (PMMA) freeform TIR lens used for LED general lighting could demonstrate the effectiveness of this method. The uniformity of the lens significantly increases from 67% to 88% after the segment and spline method, respectively. High light output efficiency (LOE) of 99.3% is available within the target illumination area for the final lens system. It is believed that the SSS method could be applied to design other freeform illumination optics.
BSR: B-spline atomic R-matrix codes
NASA Astrophysics Data System (ADS)
Zatsarinny, Oleg
2006-02-01
BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summaryTitle of program: BSR Catalogue identifier: ADWY Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWY Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC Operating systems under which the new version has been tested: UNIX, Windows XP Programming language used: FORTRAN 95 Memory required to execute with typical data: Typically 256-512 Mwords. Since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem No. of bits in a word: 8 No. of processors used: 1 Has the code been vectorized or parallelized?: no No. of lines in distributed program, including test data, etc.: 69 943 No. of bytes in distributed program, including test data, etc.: 746 450 Peripherals used: scratch disk store; permanent disk store Distribution format: tar.gz Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput
Regression problems for magnitudes
NASA Astrophysics Data System (ADS)
Castellaro, S.; Mulargia, F.; Kagan, Y. Y.
2006-06-01
Least-squares linear regression is so popular that it is sometimes applied without checking whether its basic requirements are satisfied. In particular, in studying earthquake phenomena, the conditions (a) that the uncertainty on the independent variable is at least one order of magnitude smaller than the one on the dependent variable, (b) that both data and uncertainties are normally distributed and (c) that residuals are constant are at times disregarded. This may easily lead to wrong results. As an alternative to least squares, when the ratio between errors on the independent and the dependent variable can be estimated, orthogonal regression can be applied. We test the performance of orthogonal regression in its general form against Gaussian and non-Gaussian data and error distributions and compare it with standard least-square regression. General orthogonal regression is found to be superior or equal to the standard least squares in all the cases investigated and its use is recommended. We also compare the performance of orthogonal regression versus standard regression when, as often happens in the literature, the ratio between errors on the independent and the dependent variables cannot be estimated and is arbitrarily set to 1. We apply these results to magnitude scale conversion, which is a common problem in seismology, with important implications in seismic hazard evaluation, and analyse it through specific tests. Our analysis concludes that the commonly used standard regression may induce systematic errors in magnitude conversion as high as 0.3-0.4, and, even more importantly, this can introduce apparent catalogue incompleteness, as well as a heavy bias in estimates of the slope of the frequency-magnitude distributions. All this can be avoided by using the general orthogonal regression in magnitude conversions.
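A small, purely illustrative comparison of ordinary least squares against general orthogonal regression is sketched below using SciPy's ODR module; the magnitude data, error levels, and linear model are simulated and are not the catalogues analysed in the paper.

```python
# Illustrative comparison of OLS and orthogonal (errors-in-both-variables)
# regression for a magnitude-conversion relation. Data are simulated.
import numpy as np
from scipy import odr

rng = np.random.default_rng(6)
true_m = rng.uniform(3.0, 7.0, 300)                        # "true" magnitudes
mb = true_m + rng.normal(0, 0.2, true_m.size)              # both scales observed with error
ms = 1.1 * true_m - 0.6 + rng.normal(0, 0.2, true_m.size)

# Ordinary least squares treats mb as error-free and tends to flatten the slope.
ols_slope, ols_intercept = np.polyfit(mb, ms, 1)

# Orthogonal regression uses the error estimates on both variables (here 1:1).
model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(mb, ms, sx=0.2, sy=0.2)
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()

print("OLS slope:", round(ols_slope, 3), "orthogonal-regression slope:", round(fit.beta[0], 3))
```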
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
Towards a More General Type of Univariate Constrained Interpolation with Fractal Splines
NASA Astrophysics Data System (ADS)
Chand, A. K. B.; Viswanathan, P.; Reddy, K. M.
2015-09-01
Recently, in [Electron. Trans. Numer. Anal. 41 (2014) 420-442] the authors introduced a new class of rational cubic fractal interpolation functions with linear denominators via fractal perturbation of traditional nonrecursive rational cubic splines and investigated their basic shape preserving properties. The main goal of the current paper is to embark on univariate constrained fractal interpolation that is more general than what was considered so far. To this end, we propose some strategies for selecting the parameters of the rational fractal spline so that the interpolating curves lie strictly above or below a prescribed linear or a quadratic spline function. The approximation property of the proposed rational cubic fractal spline is established by using the Peano kernel theorem. The paper also provides an illustration of the background theory, supported by examples.
Modeling of complex-valued wiener systems using B-spline neural network.
Hong, Xia; Chen, Sheng
2011-05-01
In this brief, a new complex-valued B-spline neural network is introduced in order to model the complex-valued Wiener system using observational input/output data. The complex-valued nonlinear static function in the Wiener system is represented using the tensor product from two univariate B-spline neural networks, using the real and imaginary parts of the system input. Following the use of a simple least squares parameter initialization scheme, the Gauss-Newton algorithm is applied for the parameter estimation, which incorporates the De Boor algorithm, including both the B-spline curve and the first-order derivatives recursion. Numerical examples, including a nonlinear high-power amplifier model in communication systems, are used to demonstrate the efficacy of the proposed approaches. PMID:21550875
Multivariate Regression with Calibration*
Liu, Han; Wang, Lie; Zhao, Tuo
2014-01-01
We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models. Compared to existing methods, CMR calibrates the regularization for each regression task with respect to its noise level so that it is simultaneously tuning insensitive and achieves an improved finite-sample performance. Computationally, we develop an efficient smoothed proximal gradient algorithm which has a worst-case iteration complexity O(1/ε), where ε is a pre-specified numerical accuracy. Theoretically, we prove that CMR achieves the optimal rate of convergence in parameter estimation. We illustrate the usefulness of CMR by thorough numerical simulations and show that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR on a brain activity prediction problem and find that CMR is as competitive as the handcrafted model created by human experts. PMID:25620861
Inference in dynamic systems using B-splines and quasilinearized ODE penalties.
Frasso, Gianluca; Jaeger, Jonathan; Lambert, Philippe
2016-05-01
Nonlinear (systems of) ordinary differential equations (ODEs) are common tools in the analysis of complex one-dimensional dynamic systems. We propose a smoothing approach regularized by a quasilinearized ODE-based penalty. Within the quasilinearized spline-based framework, the estimation reduces to a conditionally linear problem for the optimization of the spline coefficients. Furthermore, standard ODE compliance parameter(s) selection criteria are applicable. We evaluate the performances of the proposed strategy through simulated and real data examples. Simulation studies suggest that the proposed procedure ensures more accurate estimates than standard nonlinear least squares approaches when the state (initial and/or boundary) conditions are not known. PMID:26602190
A compressed primal-dual method for generating bivariate cubic L1 splines
NASA Astrophysics Data System (ADS)
Wang, Yong; Fang, Shu-Cherng; Lavery, John E.
2007-04-01
In this paper, we develop a compressed version of the primal-dual interior point method for generating bivariate cubic L1 splines. Discretization of the underlying optimization model, which is a nonsmooth convex programming problem, leads to an overdetermined linear system that can be handled by interior point methods. Taking advantage of the special matrix structure of the cubic L1 spline problem, we design a compressed primal-dual interior point algorithm. Computational experiments indicate that this compressed primal-dual method is robust and is much faster than the ordinary (uncompressed) primal-dual interior point algorithm.
Trigonometric quadratic B-spline subdomain Galerkin algorithm for the Burgers' equation
NASA Astrophysics Data System (ADS)
Ay, Buket; Dag, Idris; Gorgulu, Melis Zorsahin
2015-12-01
A variant of the subdomain Galerkin method has been set up to find numerical solutions of the Burgers' equation. The approximate function consists of a combination of trigonometric B-splines. Integration of the Burgers' equation has been achieved with the aid of the subdomain Galerkin method based on the trigonometric B-splines as approximate functions. The resulting first-order ordinary differential system has been converted into an iterative algebraic equation by use of the Crank-Nicolson method at two successive time levels. The suggested algorithm is tested on some well-known problems for the Burgers' equation.
Splines and the Galerkin method for solving the integral equations of scattering theory
NASA Astrophysics Data System (ADS)
Brannigan, M.; Eyre, D.
1983-06-01
This paper investigates the Galerkin method with cubic B-spline approximants to solve singular integral equations that arise in scattering theory. We stress the relationship between the Galerkin and collocation methods. The error bound for cubic spline approximants has a convergence rate of O(h^4), where h is the mesh spacing. We test the utility of the Galerkin method by solving both two- and three-body problems. We demonstrate, by solving the Amado-Lovelace equation for a system of three identical bosons, that our numerical treatment of the scattering problem is both efficient and accurate for small linear systems.
Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations
Kim, Sang Dong
1996-12-31
In this talk we discuss the finite element and finite difference technique for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points one obtains the matrix A_N (h = 1/N).
Metamorphic geodesic regression.
Hong, Yi; Joshi, Sarang; Sanchez, Mar; Styner, Martin; Niethammer, Marc
2012-01-01
We propose a metamorphic geodesic regression approach approximating spatial transformations for image time-series while simultaneously accounting for intensity changes. Such changes occur for example in magnetic resonance imaging (MRI) studies of the developing brain due to myelination. To simplify computations we propose an approximate metamorphic geodesic regression formulation that only requires pairwise computations of image metamorphoses. The approximated solution is an appropriately weighted average of initial momenta. To obtain initial momenta reliably, we develop a shooting method for image metamorphosis.
Revisiting Regression in Autism: Heller's "Dementia Infantilis"
ERIC Educational Resources Information Center
Westphal, Alexander; Schelinski, Stefanie; Volkmar, Fred; Pelphrey, Kevin
2013-01-01
Theodor Heller first described a severe regression of adaptive function in normally developing children, something he termed dementia infantilis, over 100 years ago. Dementia infantilis is most closely related to the modern diagnosis, childhood disintegrative disorder. We translate Heller's paper, Uber Dementia Infantilis, and discuss…
Tarpey, Thaddeus; Petkova, Eva
2010-07-01
Finite mixture models have come to play a very prominent role in modelling data. The finite mixture model is predicated on the assumption that distinct latent groups exist in the population. The finite mixture model therefore is based on a categorical latent variable that distinguishes the different groups. Often in practice distinct sub-populations do not actually exist. For example, disease severity (e.g. depression) may vary continuously and therefore, a distinction of diseased and not-diseased may not be based on the existence of distinct sub-populations. Thus, what is needed is a generalization of the finite mixture's discrete latent predictor to a continuous latent predictor. We cast the finite mixture model as a regression model with a latent Bernoulli predictor. A latent regression model is proposed by replacing the discrete Bernoulli predictor by a continuous latent predictor with a beta distribution. Motivation for the latent regression model arises from applications where distinct latent classes do not exist, but instead individuals vary according to a continuous latent variable. The shapes of the beta density are very flexible and can approximate the discrete Bernoulli distribution. Examples and a simulation are provided to illustrate the latent regression model. In particular, the latent regression model is used to model placebo effect among drug treated subjects in a depression study. PMID:20625443
Semiparametric Regression Pursuit.
Huang, Jian; Wei, Fengrong; Ma, Shuangge
2012-10-01
The semiparametric partially linear model allows flexible modeling of covariate effects on the response variable in regression. It combines the flexibility of nonparametric regression and the parsimony of linear regression. The most important assumption in the existing methods for estimation in this model is that it is known a priori which covariates have a linear effect and which do not. However, in applied work, this is rarely known in advance. We consider the problem of estimation in the partially linear models without assuming a priori which covariates have linear effects. We propose a semiparametric regression pursuit method for identifying the covariates with a linear effect. Our proposed method is a penalized regression approach using a group minimax concave penalty. Under suitable conditions we show that the proposed approach is model-pursuit consistent, meaning that it can correctly determine which covariates have a linear effect and which do not with high probability. The performance of the proposed method is evaluated using simulation studies, which support our theoretical results. A real data example is used to illustrate the application of the proposed method. PMID:23559831
[Understanding logistic regression].
El Sanharawi, M; Naudet, F
2013-10-01
Logistic regression is one of the most common multivariate analysis models utilized in epidemiology. It allows the measurement of the association between the occurrence of an event (qualitative dependent variable) and factors susceptible to influence it (explicative variables). The choice of explicative variables that should be included in the logistic regression model is based on prior knowledge of the disease physiopathology and the statistical association between the variable and the event, as measured by the odds ratio. The main steps for the procedure, the conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables that must be included and retained in the regression model in order to avoid the omission of important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.
A configurable B-spline parameterization method for structural optimization of wing boxes
NASA Astrophysics Data System (ADS)
Yu, Alan Tao
2009-12-01
This dissertation presents a synthesis of methods for structural optimization of aircraft wing boxes. The optimization problem considered herein is the minimization of structural weight with respect to component sizes, subject to stress constraints. Different aspects of structural optimization methods representing the current state-of-the-art are discussed, including sequential quadratic programming, sensitivity analysis, parameterization of design variables, constraint handling, and multiple load treatment. Shortcomings of the current techniques are identified and a B-spline parameterization representing the structural sizes is proposed to address them. A new configurable B-spline parameterization method for structural optimization of wing boxes is developed that makes it possible to flexibly explore design spaces. An automatic scheme using different levels of B-spline parameterization configurations is also proposed, along with a constraint aggregation method in order to reduce the computational effort. Numerical results are compared to evaluate the effectiveness of the B-spline approach and the constraint aggregation method. To evaluate the new formulations and explore design spaces, the wing box of an airliner is optimized for the minimum weight subject to stress constraints under multiple load conditions. The new approaches are shown to significantly reduce the computational time required to perform structural optimization and to yield designs that are more realistic than existing methods.
Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines
Tan, Yunhao; Hua, Jing; Qin, Hong
2009-01-01
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines which can represent with accuracy geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because it can unify the geometric and material properties in the simulation. The visualization can be directly computed from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation without interpolation or resampling. We have applied the framework for biomechanic simulation of brain deformations, such as brain shifting during the surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and the real biomechanic experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636
Practical Session: Logistic Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
An exercise is proposed to illustrate the logistic regression. One investigates the different risk factors in the apparition of coronary heart disease. It has been proposed in Chapter 5 of the book of D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.
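In the spirit of the exercise, the sketch below fits a logistic regression for a coronary-heart-disease outcome and reports odds ratios; the risk factors and data are simulated rather than read from the evans.txt file, and the coefficient values are arbitrary illustrations.

```python
# Illustrative logistic-regression fit on simulated risk-factor data
# (not the evans.txt dataset itself).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 600
age = rng.uniform(40, 75, n)
smoker = rng.integers(0, 2, n)
chol = rng.normal(210, 35, n)

# Simulated coronary-heart-disease outcome with a known log-odds structure.
logit = -12.0 + 0.10 * age + 0.8 * smoker + 0.015 * chol
chd = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, smoker, chol]))
fit = sm.Logit(chd, X).fit(disp=False)
print(fit.params)                     # fitted log-odds coefficients
print(np.exp(fit.params[1:]))         # odds ratios for age, smoking status, cholesterol
```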
NASA Astrophysics Data System (ADS)
Erdogan, Eren; Durmaz, Murat; Liang, Wenjing; Kappelsberger, Maria; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian
2015-04-01
This project focuses on the development of a novel near real-time data adaptive filtering framework for global modeling of the vertical total electron content (VTEC). Ionospheric data can be acquired from various space geodetic observation techniques such as GNSS, altimetry, DORIS and radio occultation. The project aims to model the temporal and spatial variations of the ionosphere by a combination of these techniques in an adaptive data assimilation framework, which utilizes appropriate basis functions to represent the VTEC. The measurements naturally have an inhomogeneous distribution in both time and space. Therefore, integrating the aforementioned observation techniques into data adaptive basis selection methods (e.g. Multivariate Adaptive Regression B-Splines) with recursive filtering (e.g. Kalman filtering) to model the daily global ionosphere may deliver important improvements over classical estimation methods. Since ionospheric inverse problems are ill-posed, a suitable regularization procedure might stabilize the solution. In this contribution we present first results related to the selected evaluation procedure. Comparisons are made with respect to applicability, efficiency, accuracy, and numerical effort.
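As a rough illustration of the recursive filtering component, the sketch below performs one Kalman predict/update cycle for basis-function coefficients under a simple random-walk state model. The matrix names, the random-walk assumption and the toy dimensions are illustrative only and do not reproduce the project's actual assimilation scheme.

```python
import numpy as np

def kalman_step(x, P, z, H, R, Q):
    """One predict/update cycle for basis-function coefficients x.

    Random-walk dynamics: x_k = x_{k-1} + w, w ~ N(0, Q).
    Observations: z = H x + v, v ~ N(0, R); each row of H holds the basis
    functions (e.g. B-splines) evaluated at one measurement location.
    """
    P = P + Q                                            # predict
    S = H @ P @ H.T + R                                  # innovation covariance
    K = P @ H.T @ np.linalg.solve(S, np.eye(S.shape[0]))  # Kalman gain
    x = x + K @ (z - H @ x)                              # update state
    P = (np.eye(len(x)) - K @ H) @ P                     # update covariance
    return x, P

# Tiny usage example with 3 coefficients and 2 observations.
x0, P0 = np.zeros(3), np.eye(3)
H = np.array([[0.2, 0.5, 0.3], [0.1, 0.3, 0.6]])
x1, P1 = kalman_step(x0, P0, z=np.array([10.0, 12.0]), H=H,
                     R=0.5 * np.eye(2), Q=0.01 * np.eye(3))
print(x1)
```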
Variable Selection in ROC Regression
2013-01-01
Regression models are introduced into receiver operating characteristic (ROC) analysis to accommodate effects of covariates, such as genes. If many covariates are available, the variable selection issue arises. The traditional induced methodology separately models outcomes of diseased and nondiseased groups; thus, applying variable selection separately to the two models creates barriers to interpretation, due to differences in the selected models. Furthermore, in ROC regression the accuracy of the area under the curve (AUC) should be the focus, rather than the consistency of model selection or prediction performance alone. In this paper, we obtain a single objective function with the group SCAD to select grouped variables, which adapts to popular criteria of model selection, and propose a two-stage framework to apply the focused information criterion (FIC). Some asymptotic properties of the proposed methods are derived. Simulation studies show that the grouped variable selection is superior to separate model selections. Furthermore, the FIC improves the accuracy of the estimated AUC compared with other criteria. PMID:24312135
Trajectory control of an articulated robot with a parallel drive arm based on splines under tension
NASA Astrophysics Data System (ADS)
Yi, Seung-Jong
Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace and on following the specified path, which requires the trajectory planner. In addition, the consideration of joint velocity, acceleration, and jerk trajectories is essential for trajectory planning of industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., certain degree polynomial functions, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are the method selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor-based robot controller and
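Splines under tension are not part of the common scientific Python libraries, but the flavour of spline-based joint-trajectory planning can be sketched with an ordinary clamped cubic spline, from which velocity and acceleration profiles follow by differentiation. The knot times and joint angles below are hypothetical, and the tension variant used in the work above would further suppress overshoot between knots.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical knot times (s) and joint angles (rad) for one joint along the path.
t = np.array([0.0, 0.5, 1.2, 2.0, 3.0])
q = np.array([0.0, 0.4, 1.1, 1.3, 1.5])

# Clamped ends: zero joint velocity at start and stop of the motion.
traj = CubicSpline(t, q, bc_type=((1, 0.0), (1, 0.0)))

ts = np.linspace(0.0, 3.0, 7)
print(traj(ts))        # joint position
print(traj(ts, 1))     # joint velocity
print(traj(ts, 2))     # joint acceleration (jerk would be traj(ts, 3))
```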
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…
Modern Regression Discontinuity Analysis
ERIC Educational Resources Information Center
Bloom, Howard S.
2012-01-01
This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…
Multiple linear regression analysis
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1980-01-01
Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
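The original program is a FORTRAN IV stepwise routine; as a loose modern analogue, the sketch below performs greedy forward selection with ordinary least squares, adding predictors while their p-values stay below a chosen significance level. The threshold and selection rule are simplifications, not a reimplementation of the NOVA 1200 program.

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, alpha=0.05):
    """Greedy forward selection: repeatedly add the predictor with the
    smallest p-value until no remaining predictor is significant at alpha."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = []
        for j in remaining:
            cols = selected + [j]
            fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals.append(fit.pvalues[-1])    # p-value of the candidate term
        best = int(np.argmin(pvals))
        if pvals[best] >= alpha:
            break
        selected.append(remaining.pop(best))
    return selected
```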
Mechanisms of neuroblastoma regression
Brodeur, Garrett M.; Bagatell, Rochelle
2014-01-01
Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179
NASA Astrophysics Data System (ADS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. In general, the drying-rate curves require more smoothing than the moisture-content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for the moisture-time curves. The idea of this method consists of approximating the data by a CS regression having continuous first and second derivatives. Analytical differentiation of the spline regression permits the determination of the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, and it permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics were fitted with six published exponential thin-layer drying models, and the fits were assessed using the coefficient of determination (R2) and the root mean square error (RMSE). The results showed that the Two-Term model best describes the drying behaviour. In addition, the drying rate smoothed using the CS proves to be an effective estimator for the moisture-time curves, as well as for missing moisture-content data, of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
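The core numerical idea, smoothing the moisture-time curve with a cubic spline regression and differentiating it analytically to obtain the instantaneous drying rate, can be sketched as follows. The time grid, moisture values and smoothing factor are hypothetical, chosen only to mimic the reported 93.4% to 8.2% drying range over 4 days.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical drying data: time (h) and moisture content (% wet basis).
t = np.array([0, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96], float)
m = np.array([93.4, 82.0, 70.5, 60.1, 50.3, 41.8, 34.0, 27.5,
              21.9, 17.4, 13.6, 10.5, 8.2])

# Smoothing cubic spline regression of the moisture-time curve; the
# smoothing factor s trades fidelity against smoothness.
spl = UnivariateSpline(t, m, k=3, s=2.0)

rate = -spl.derivative()(t)      # instantaneous drying rate, % per hour
print(np.round(rate, 3))
```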
Small and large deformation analysis with the p- and B-spline versions of the Finite Cell Method
NASA Astrophysics Data System (ADS)
Schillinger, Dominik; Ruess, Martin; Zander, Nils; Bazilevs, Yuri; Düster, Alexander; Rank, Ernst
2012-10-01
The Finite Cell Method (FCM) is an embedded domain method, which combines the fictitious domain approach with high-order finite elements, adaptive integration, and weak imposition of unfitted Dirichlet boundary conditions. For smooth problems, FCM has been shown to achieve exponential rates of convergence in energy norm, while its structured cell grid guarantees simple mesh generation irrespective of the geometric complexity involved. The present contribution first unhinges the FCM concept from a special high-order basis. Several benchmarks of linear elasticity and a complex proximal femur bone with inhomogeneous material demonstrate that for small deformation analysis, FCM works equally well with basis functions of the p-version of the finite element method or high-order B-splines. Turning to large deformation analysis, it is then illustrated that a straightforward geometrically nonlinear FCM formulation leads to the loss of uniqueness of the deformation map in the fictitious domain. Therefore, a modified FCM formulation is introduced, based on repeated deformation resetting, which assumes for the fictitious domain the deformation-free reference configuration after each Newton iteration. Numerical experiments show that this intervention allows for stable nonlinear FCM analysis, preserving the full range of advantages of linear elastic FCM, in particular exponential rates of convergence. Finally, the weak imposition of unfitted Dirichlet boundary conditions via the penalty method, the robustness of FCM under severe mesh distortion, and the large deformation analysis of a complex voxel-based metal foam are addressed.
Ridge Regression Signal Processing
NASA Technical Reports Server (NTRS)
Kuhl, Mark R.
1990-01-01
The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
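To make the ridge idea concrete, the following sketch contrasts an ordinary least-squares solution with a ridge solution on a deliberately ill-conditioned (poor-geometry) design matrix. The numbers and the ridge parameter are assumptions for illustration and are unrelated to the GPS/RAIM simulations described above.

```python
import numpy as np

def ridge(H, z, lam):
    """Closed-form ridge estimate x = (H'H + lam I)^(-1) H'z."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ z)

# Nearly collinear geometry matrix -> ordinary least squares is ill-conditioned.
H = np.array([[1.0, 1.000], [1.0, 1.001], [1.0, 0.999]])
z = np.array([2.0, 2.001, 1.999])
print(ridge(H, z, 0.0))    # unregularized, numerically unstable solution
print(ridge(H, z, 0.1))    # ridge solution, damped toward smaller norm
```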
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
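As a concrete companion to this teaching discussion, a minimal major-axis (orthogonal regression) fit can be computed from the leading singular vector of the centred data; the sample points below are made up for illustration.

```python
import numpy as np

def major_axis_fit(x, y):
    """Orthogonal regression: line minimizing the sum of squared
    perpendicular distances (first principal axis of the centred data)."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    dx, dy = vt[0]                         # direction of the major axis
    slope = dy / dx
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])
print(major_axis_fit(x, y))
```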
Tenderholt, A.; Hedman, B.; Hodgson, K.O.
2007-01-08
PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data.
Two-dimensional mesh embedding for Galerkin B-spline methods
NASA Technical Reports Server (NTRS)
Shariff, Karim; Moser, Robert D.
1995-01-01
A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.
B-spline parameterization of spatial response in a monolithic scintillation camera
NASA Astrophysics Data System (ADS)
Solovov, V.; Morozov, A.; Chepel, V.; Domingos, V.; Martins, R.
2016-09-01
A framework for parameterization of the light response functions (LRFs) in a scintillation camera is presented. It is based on approximation of the measured or simulated photosensor response with weighted sums of uniform cubic B-splines or their tensor products. The LRFs represented in this way are smooth, computationally inexpensive to evaluate and require much less computer memory than non-parametric alternatives. The parameters are found in a straightforward way by the linear least squares method. Several techniques that reduce the storage and processing power requirements were developed. A software library for fitting simulated and measured light response with spline functions was developed and integrated into ANTS2, an open source software package designed for simulation and data processing for Anger camera type detectors.
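The fitting step described here is ordinary linear least squares in the spline coefficients. A one-dimensional simplification (the framework itself uses tensor products of uniform cubic B-splines) might look like the sketch below; the response shape, noise level and knot placement are assumptions.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Hypothetical 1D light-response samples along one detector axis (mm).
x = np.linspace(-40.0, 40.0, 200)
resp = np.exp(-(x / 18.0) ** 2) + 0.01 * np.random.default_rng(1).normal(size=x.size)

# Cubic B-spline knots: repeated boundary knots plus uniform interior knots.
k = 3
t_int = np.linspace(-30.0, 30.0, 9)
t = np.r_[[x[0]] * (k + 1), t_int, [x[-1]] * (k + 1)]

# The fit itself is a linear least-squares problem in the spline coefficients.
lrf = make_lsq_spline(x, resp, t, k=k)
print(lrf(np.array([-20.0, 0.0, 20.0])))   # smooth, cheap-to-evaluate LRF
```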
B-splines and Hermite-Padé approximants to the exponential function
NASA Astrophysics Data System (ADS)
Sablonnière, Paul
2008-10-01
This paper is the continuation of a work initiated in [P. Sablonnière, An algorithm for the computation of Hermite-Padé approximations to the exponential function: divided differences and Hermite-Padé forms. Numer. Algorithms 33 (2003) 443-452] about the computation of Hermite-Padé forms (HPF) and associated Hermite-Padé approximants (HPA) to the exponential function. We present an alternative algorithm for their computation, based on the representation of HPF in terms of integral remainders with B-splines as Peano kernels. Using the good properties of discrete B-splines, this algorithm gives rise to a great variety of representations of HPF of higher orders in terms of HPF of lower orders, and in particular of classical Padé forms. We give some examples illustrating this algorithm, in particular, another way of constructing quadratic HPF already described by different authors. Finally, we briefly study a family of cubic HPF.
NASA Astrophysics Data System (ADS)
Gu, Renliang; Dogandžić, Aleksandar
2015-03-01
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
Convergence of a Fourier-spline representation for the full-turn map generator
Warnock, R.L.; Ellison, J.A.
1997-04-01
Single-turn data from a symplectic tracking code can be used to construct a canonical generator for a full-turn symplectic map. This construction has been carried out numerically in canonical polar coordinates, the generator being obtained as a Fourier series in angle coordinates with coefficients that are spline functions of action coordinates. Here the authors provide a mathematical basis for the procedure, finding sufficient conditions for the existence of the generator and convergence of the Fourier-spline expansion. The analysis gives insight concerning analytic properties of the generator, showing that in general there are branch points as a function of angle and inverse square root singularities at the origin as a function of action.
Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.
Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco
2015-04-20
Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize the phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which adopts the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced and higher soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.
Optimal rocket thrust profile shaping using third degree spline function interpolation
NASA Technical Reports Server (NTRS)
Johnson, I. L.
1974-01-01
Optimal solid-rocket thrust profiles for the parallel-burn, solid-rocket-assisted space shuttle are investigated. Solid-rocket thrust profiles are simulated by using third-degree spline functions, with the values of the thrust ordinates defined as parameters. The profiles are optimized parametrically, using the Davidon-Fletcher-Powell penalty function method, by minimizing propellant weight subject to state and control inequality constraints and to terminal boundary conditions. This study shows that optimizing a control variable parametrically by using third-degree spline function interpolation allows the control to be shaped so that inequality constraints are strictly adhered to and all corners are eliminated. The absence of corners, which is realistic in nature, makes this method attractive from the viewpoint of solid rocket grain design.
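A minimal sketch of the parameterization idea: treat the thrust ordinates at fixed times as design parameters, interpolate them with a cubic spline to obtain a corner-free thrust history, and integrate that history to get a quantity proportional to propellant weight for an optimizer to minimize. All numbers (burn time, thrust levels, effective exhaust velocity) are hypothetical, and the original study wrapped this inside a Davidon-Fletcher-Powell penalty-function optimization rather than the toy objective shown here.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Design parameters: thrust ordinates (kN, hypothetical) at fixed burn times (s).
t_knots = np.array([0.0, 20.0, 50.0, 80.0, 110.0, 120.0])
thrust = np.array([11500.0, 12200.0, 10400.0, 8300.0, 6100.0, 0.0])

profile = CubicSpline(t_knots, thrust)     # smooth, corner-free thrust history

# Mass flow = thrust / effective exhaust velocity; integrating it over the
# burn gives the propellant mass that the optimizer would try to minimize.
v_eff = 2650.0                             # assumed effective exhaust velocity, m/s
propellant_kg = profile.integrate(0.0, 120.0) * 1e3 / v_eff
print(f"propellant mass ~ {propellant_kg:.0f} kg")
```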
The Design and Characterization of Wideband Spline-profiled Feedhorns for Advanced Actpol
NASA Technical Reports Server (NTRS)
Simon, Sara M.; Austermann, Jason; Beall, James A.; Choi, Steve K.; Coughlin, Kevin P.; Duff, Shannon M.; Gallardo, Patricio A.; Henderson, Shawn W.; Hills, Felicity B.; Ho, Shuay-Pwu Patty; Hubmayr, Johannes; Josaitis, Alec; Koopman, Brian J.; McMahon, Jeff J.; Nati, Federico; Newburgh, Laura; Niemack, Michael D.; Salatino, Maria; Schillaci, Alessandro; Wollack, Edward J.
2016-01-01
Advanced ACTPol (AdvACT) is an upgraded camera for the Atacama Cosmology Telescope (ACT) that will measure the cosmic microwave background in temperature and polarization over a wide range of angular scales and five frequency bands from 28-230 GHz. AdvACT will employ four arrays of feedhorn-coupled, polarization-sensitive multichroic detectors. To accommodate the higher pixel packing densities necessary to achieve AdvACT's sensitivity goals, we have developed and optimized wideband spline-profiled feedhorns for the AdvACT multichroic arrays that maximize coupling efficiency while carefully controlling polarization systematics. We present the design, fabrication, and testing of wideband spline-profiled feedhorns for the multichroic arrays of AdvACT.
A new wavelet-based thin plate element using B-spline wavelet on the interval
NASA Astrophysics Data System (ADS)
Jiawei, Xiang; Xuefeng, Chen; Zhengjia, He; Yinghong, Zhang
2008-01-01
By combining wavelet theory from mathematics with the variational principle of the finite element method, a class of wavelet-based plate elements is constructed. In the construction of the wavelet-based plate element, the element displacement field, represented by the coefficients of wavelet expansions in wavelet space, is transformed into physical degrees of freedom in finite element space via the corresponding two-dimensional C1-type transformation matrix. Then, based on the associated generalized function of potential energy of thin plate bending and vibration problems, the scaling functions of the B-spline wavelet on the interval (BSWI) at different scales are employed directly to form the multi-scale finite element approximation basis, so as to construct the BSWI plate element via the variational principle. The BSWI plate element combines the accuracy of B-spline function approximation with the advantages of wavelet-based elements for structural analysis. Some static and dynamic numerical examples are studied to demonstrate the performance of the present element.
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Vibration of shear-deformable rectangular plates using a spline-function Rayleigh-Ritz approach
NASA Astrophysics Data System (ADS)
Wang, S.; Dawe, D. J.
1993-02-01
The prediction of the natural frequencies of vibration of rectangular plates or orthotropic laminates is described, through the use of B-spline functions as trial functions in a Rayleigh Ritz approach. Through-thickness shear deformation effects are included in the analysis and hence assumptions have to be made for the spatial variation over the plate middle surface of each of the lateral deflection and the two rotation components. Two versions of the spline-function Rayleigh-Ritz approach are described: in one of these the deflection and rotations are represented by functions of the same polynomial order, while in the other a lower-order representation is used for each rotation component in one of the coordinate directions. It is shown in a number of applications that the former version leads to shear-locking behavior while the latter version avoids this behavior and is suitable for the analysis of both thick and thin plates.
Left ventricular motion reconstruction with a prolate spheroidal B-spline model
NASA Astrophysics Data System (ADS)
Li, Jin; Denney, Thomas S., Jr.
2006-02-01
Tagged cardiac magnetic resonance (MR) imaging can non-invasively image deformation of the left ventricular (LV) wall. Three-dimensional (3D) analysis of tag data requires fitting a deformation model to tag lines in the image data. In this paper, we present a 3D myocardial displacement and strain reconstruction method based on a B-spline deformation model defined in prolate spheroidal coordinates, which more closely matches the shape of the LV wall than existing Cartesian or cylindrical coordinate models. The prolate spheroidal B-spline (PSB) deformation model also enforces smoothness across the apex and can compute strain at the apex. The PSB reconstruction algorithm was evaluated on a previously published data set to allow head-to-head comparison of the PSB model with existing LV deformation reconstruction methods. We conclude that the PSB method can accurately reconstruct deformation and strain in the LV wall from tagged MR images and has several advantages relative to existing techniques.
Surface evaluation with Ronchi test by using Malacara formula, genetic algorithms, and cubic splines
NASA Astrophysics Data System (ADS)
Cordero-Dávila, Alberto; González-García, Jorge
2010-08-01
In the manufacturing process of an optical surface with rotational symmetry, the ideal ronchigram is simulated and compared with the experimental ronchigram. From this comparison the technician, based on his or her experience, estimates the error on the surface. Quantitatively, the error on the surface can be described by a polynomial e(ρ²), and its coefficients can be estimated from the data of the ronchigrams (real and ideal) by solving a system of nonlinear differential equations related to the Malacara formula for the transverse aberration. To avoid the problems inherent in the use of polynomials, it is proposed to describe the errors on the surface by means of cubic splines. The coefficients of each spline are estimated from a discrete set of errors (ρi, ei), which are evaluated by means of genetic algorithms so as to reproduce the experimental ronchigram starting from the ideal one.
NASA Technical Reports Server (NTRS)
Wahba, G.
1982-01-01
Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.
Estimation of Some Parameters from Morse-Morse-Spline-Van Der Waals Intermolecular Potential
Coroiu, I.
2007-04-23
Some parameters such as transport cross-sections and isotopic thermal diffusion factor have been calculated from an improved intermolecular potential, Morse-Morse-Spline-van der Waals (MMSV) potential proposed by R.A. Aziz et al. The treatment was completely classical and no corrections for quantum effects were made. The results would be employed for isotope separations of different spherical and quasi-spherical molecules.
Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method
NASA Technical Reports Server (NTRS)
Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.
1997-01-01
A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.
Analysis of myocardial motion using generalized spline models and tagged magnetic resonance images
NASA Astrophysics Data System (ADS)
Chen, Fang; Rose, Stephen E.; Wilson, Stephen J.; Veidt, Martin; Bennett, Cameron J.; Doddrell, David M.
2000-06-01
Heart wall motion abnormalities are the very sensitive indicators of common heart diseases, such as myocardial infarction and ischemia. Regional strain analysis is especially important in diagnosing local abnormalities and mechanical changes in the myocardium. In this work, we present a complete method for the analysis of cardiac motion and the evaluation of regional strain in the left ventricular wall. The method is based on the generalized spline models and tagged magnetic resonance images (MRI) of the left ventricle. The whole method combines dynamical tracking of tag deformation, simulating cardiac movement and accurately computing the regional strain distribution. More specifically, the analysis of cardiac motion is performed in three stages. Firstly, material points within the myocardium are tracked over time using a semi-automated snake-based tag tracking algorithm developed for this purpose. This procedure is repeated in three orthogonal axes so as to generate a set of one-dimensional sample measurements of the displacement field. The 3D-displacement field is then reconstructed from this sample set by using a generalized vector spline model. The spline reconstruction of the displacement field is explicitly expressed as a linear combination of a spline kernel function associated with each sample point and a polynomial term. Finally, the strain tensor (linear or nonlinear) with three direct components and three shear components is calculated by applying a differential operator directly to the displacement function. The proposed method is computationally effective and easy to perform on tagged MR images. The preliminary study has shown potential advantages of using this method for the analysis of myocardial motion and the quantification of regional strain.
Quadratic spline collocation and parareal deferred correction method for parabolic PDEs
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, Yan; Li, Rongjian
2016-06-01
In this paper, we consider a linear parabolic PDE, using optimal quadratic spline collocation (QSC) methods for the space discretization and applying the parareal technique in the time domain. Meanwhile, a deferred correction technique is used to improve the accuracy during the iterations. The error estimation is presented and the stability is analyzed. Numerical experiments, carried out on a parallel computer with 40 CPUs, are presented to demonstrate the effectiveness of the hybrid algorithm.
Cubic spline reflectance estimates using the Viking lander camera multispectral data
NASA Technical Reports Server (NTRS)
Park, S. K.; Huck, F. O.
1976-01-01
A technique was formulated for constructing spectral reflectance estimates from multispectral data obtained with the Viking lander cameras. The output of each channel was expressed as a linear function of the unknown spectral reflectance producing a set of linear equations which were used to determine the coefficients in a representation of the spectral reflectance estimate as a natural cubic spline. The technique was used to produce spectral reflectance estimates for a variety of actual and hypothetical spectral reflectances.
The trigonometric interpolation spline surface and its application in image zooming
NASA Astrophysics Data System (ADS)
Li, Juncheng; Yang, Lian
2015-07-01
The trigonometric polynomial spline surface generated over the space {1, sin t, cos t, sin 2t, cos 2t} is presented in this work. The proposed surface automatically interpolates all the given data points and achieves C2 continuity without solving equation systems. Then, image zooming making use of the proposed surface is investigated. Experimental results show that the proposed surface is effective for dealing with image zooming problems.
Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.
Oliveira, Francisco P M; Tavares, João Manuel R S
2013-03-01
This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, an outstanding accuracy was achieved with the cubic B-splines. This accuracy was significantly better (p < 0.001) than the one obtained using the best solution proposed in our previous work. When applied to align real image sequences with unknown transformation involved, the alignment based on cubic B-splines also achieved superior results than our previous methodology (p < 0.001). The consequences of the temporal alignment on the dynamic center of pressure (COP) displacement was also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that the cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences.
Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions
Jerome Blair
2008-05-15
An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
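A simplified illustration of least-squares spline fitting with explicitly chosen knots is given below: denser knots where the signal varies quickly act like a higher local bandwidth, loosely mirroring the adaptive behaviour described above. The synthetic waveform, noise level and knot placement are assumptions and do not reproduce the algorithm's automatic knot selection or error estimation.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 500)                        # oscilloscope time base
signal = np.exp(-40 * (t - 0.3) ** 2) + 0.2 * np.sin(20 * t)
data = signal + rng.normal(scale=0.05, size=t.size)

# Interior knots: denser where the signal changes quickly, sparser elsewhere.
knots = np.concatenate([np.linspace(0.15, 0.45, 12), np.linspace(0.5, 0.95, 5)])
fit = LSQUnivariateSpline(t, data, knots, k=3)

rms_resid = np.sqrt(np.mean((fit(t) - data) ** 2))
print(f"residual RMS ~ {rms_resid:.3f}")              # rough error estimate
```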
Kramer, S.
1996-12-31
In many real-world domains the task of machine learning algorithms is to learn a theory for predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with nondeterminate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems. SRT integrates the statistical method of regression trees into ILP. It constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy.
Nielsen, J D; Dean, C B
2008-09-01
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
A flexible B-spline model for multiple longitudinal biomarkers and survival.
Brown, Elizabeth R; Ibrahim, Joseph G; DeGruttola, Victor
2005-03-01
Often when jointly modeling longitudinal and survival data, we are interested in a multivariate longitudinal measure that may not fit well by linear models. To overcome this problem, we propose a joint longitudinal and survival model that has a nonparametric model for the longitudinal markers. We use cubic B-splines to specify the longitudinal model and a proportional hazards model to link the longitudinal measures to the hazard. To fit the model, we use a Markov chain Monte Carlo algorithm. We select the number of knots for the cubic B-spline model using the Conditional Predictive Ordinate (CPO) and the Deviance Information Criterion (DIC). The method and model selection approach are validated in a simulation. We apply this method to examine the link between viral load, CD4 count, and time to event in data from an AIDS clinical trial. The cubic B-spline model provides a good fit to the longitudinal data that could not be obtained with simple parametric models. PMID:15737079
Comparing tongue shapes from ultrasound imaging using smoothing spline analysis of variance.
Davidson, Lisa
2006-07-01
Ultrasound imaging of the tongue is increasingly common in speech production research. However, there has been little standardization regarding the quantification and statistical analysis of ultrasound data. In linguistic studies, researchers may want to determine whether the tongue shape for an articulation under two different conditions (e.g., consonants in word-final versus word-medial position) is the same or different. This paper demonstrates how the smoothing spline ANOVA (SS ANOVA) can be applied to the comparison of tongue curves [Gu, Smoothing Spline ANOVA Models (Springer, New York, 2002)]. The SS ANOVA is a technique for determining whether or not there are significant differences between the smoothing splines that are the best fits for two data sets being compared. If the interaction term of the SS ANOVA model is statistically significant, then the groups have different shapes. Since the interaction may be significant even if only a small section of the curves are different (i.e., the tongue root is the same, but the tip of one group is raised), Bayesian confidence intervals are used to determine which sections of the curves are statistically different. SS ANOVAs are illustrated with some data comparing obstruents produced in word-final and word-medial coda position.
A nonrational B-spline profiled horn with high displacement amplification for ultrasonic welding.
Nguyen, Huu-Tu; Nguyen, Hai-Dang; Uan, Jun-Yen; Wang, Dung-An
2014-12-01
A new horn with high displacement amplification for ultrasonic welding is developed. The profile of the horn is a nonrational B-spline curve with an open uniform knot vector. The ultrasonic actuation of the horn exploits the first longitudinal displacement mode of the horn. The horn is designed by an optimization scheme and finite element analyses. Performances of the proposed horn have been evaluated by experiments. The displacement amplification of the proposed horn is 41.4% and 8.6% higher than that of the traditional catenoidal horn and a Bézier-profile horn, respectively, with the same length and end surface diameters. The developed horn has a lower displacement amplification than the nonuniform rational B-spline profiled horn but a much smoother stress distribution. The developed horn, the catenoidal horn, and the Bézier horn are fabricated and used for ultrasonic welding of lap-shear specimens. The bonding strength of the joints welded by the open uniform nonrational B-spline (OUNBS) horn is the highest among the three horns for the various welding parameters considered. The locations of the failure mode and the distribution of the voids of the specimens are investigated to explain the reason of the high bonding strength achieved by the OUNBS horn.
A mixed basis density functional approach for one-dimensional systems with B-splines
NASA Astrophysics Data System (ADS)
Ren, Chung-Yuan; Chang, Yia-Chung; Hsue, Chen-Shiung
2016-05-01
A mixed basis approach based on density functional theory is extended to one-dimensional (1D) systems. The basis functions here are taken to be the localized B-splines for the two finite non-periodic dimensions and the plane waves for the third periodic direction. This approach will significantly reduce the number of the basis and therefore is computationally efficient for the diagonalization of the Kohn-Sham Hamiltonian. For 1D systems, B-spline polynomials are particularly useful and efficient in two-dimensional spatial integrations involved in the calculations because of their absolute localization. Moreover, B-splines are not associated with atomic positions when the geometry structure is optimized, making the geometry optimization easy to implement. With such a basis set we can directly calculate the total energy of the isolated system instead of using the conventional supercell model with artificial vacuum regions among the replicas along the two non-periodic directions. The spurious Coulomb interaction between the charged defect and its repeated images by the supercell approach for charged systems can also be avoided. A rigorous formalism for the long-range Coulomb potential of both neutral and charged 1D systems under the mixed basis scheme will be derived. To test the present method, we apply it to study the infinite carbon-dimer chain, graphene nanoribbon, carbon nanotube and positively-charged carbon-dimer chain. The resulting electronic structures are presented and discussed in detail.
Algebraic grid generation using tensor product B-splines. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Saunders, B. V.
1985-01-01
Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. The development of an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes
Garrett, W.R.
1984-03-06
A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller Belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. A prototype includes of this a bellows seal instead of the floating seal at the upper end of the tool, and a bellows in the side of the lubricant chamber provides volume compensation. A second lubricant chamber is provided below the pressure seal, the lower end of the second chamber being closed by a bellows seal and a further bellows in the side of the second chamber providing volume compensation. Modifications provide hydraulic jars.
Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes
Garrett, W.R.
1981-08-04
A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Preferably the spring means itself is a double acting compression spring means wherein the same spring means is compressed whether the joint is extended or contracted. The damper has a like low spring rate over a considerable range of deflection, both upon extension and contraction of the joint, but a gradually then rapidly increased spring rate upon approaching the travel limits in each direction. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The spring rings make only such line contact with one of the telescoping members as is required for guidance therefrom, and no contact with the other member. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. Magnetic and electrical means are provided to check for the presence and condition of the lubricant. To increase load capacity the spring means is made of a number of components acting in parallel.
NASA Technical Reports Server (NTRS)
Hastings, E. C., Jr.; Shanks, R. E.; Champine, R. A.; Copeland, W. L.; Young, D. C.
1974-01-01
Flight tests have been conducted to evaluate the effectiveness of a wingtip vortex attenuating device, referred to as a spline. Vortex penetrations were made with a PA-28 behind a C-54 aircraft with and without wingtip splines attached and the resultant rolling acceleration was measured and related to the roll acceleration capability of the PA-28. Tests were conducted over a range of separation distances from about 5 nautical miles (n. mi.) to less than 1 n. mi. Preliminary results indicate that, with the splines installed, there was a significant reduction in the vortex induced roll acceleration experienced by the PA-28 probe aircraft, and the distance at which the PA-28 roll control became ineffective was reduced from 2.5 n. mi. to 0.6 n. mi., or less. There was a slight increase in approach noise (approximately 4 db) with the splines installed due primarily to the higher engine power used during approach. Although splines significantly reduced the C-54 rate of climb, the rates available with four engines were acceptable for this test program. Splines did not introduce any noticeable change in the handling qualities of the C-54.
Ruberti, M.; Averbukh, V.; Decleva, P.
2014-10-28
We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for the Gaussian basis calculations, such as Cooper minima and high-energy tails, are found to be reproduced by the B-spline ADC in a very good agreement with the experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.
CSWS-related autistic regression versus autistic regression without CSWS.
Tuchman, Roberto
2009-08-01
Continuous spike-waves during slow-wave sleep (CSWS) and Landau-Kleffner syndrome (LKS) are two clinical epileptic syndromes that are associated with the electroencephalography (EEG) pattern of electrical status epilepticus during slow wave sleep (ESES). Autistic regression occurs in approximately 30% of children with autism and is associated with an epileptiform EEG in approximately 20%. The behavioral phenotypes of CSWS, LKS, and autistic regression overlap. However, the differences in age of regression, degree and type of regression, and frequency of epilepsy and EEG abnormalities suggest that these are distinct phenotypes. CSWS with autistic regression is rare, as is autistic regression associated with ESES. The pathophysiology and as such the treatment implications for children with CSWS and autistic regression are distinct from those with autistic regression without CSWS.
Streamflow forecasting using functional regression
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.
2016-07-01
Streamflow, as a natural phenomenon, is continuous in time, and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of points (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They make it possible to consider the whole process instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach, and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.
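A minimal scalar-on-function sketch of this idea is given below: each precipitation curve is reduced to coefficients on a B-spline basis, and flow volume is regressed on those coefficients by least squares. The basis size, time grid and synthetic data are assumptions; the paper's full functional linear model, which forecasts entire flow curves, is richer than this.

```python
import numpy as np
from scipy.interpolate import BSpline

# Scalar-on-function sketch: daily precipitation curve -> seasonal flow volume.
rng = np.random.default_rng(3)
days = np.linspace(0.0, 90.0, 91)

# Cubic B-spline basis on [0, 90] (boundary knots repeated k+1 times).
k, n_basis = 3, 10
t = np.r_[[0.0] * (k + 1), np.linspace(10.0, 80.0, n_basis - k - 1), [90.0] * (k + 1)]
B = BSpline(t, np.eye(n_basis), k)(days)        # (91, n_basis) basis matrix

# Synthetic precipitation curves and flow volumes, for illustration only.
n = 40
precip = rng.gamma(2.0, 2.0, size=(n, days.size))
scores = precip @ B / days.size                 # basis "scores" of each curve
true_beta = rng.normal(size=n_basis)
volume = scores @ true_beta + rng.normal(scale=0.1, size=n)

beta_hat, *_ = np.linalg.lstsq(scores, volume, rcond=None)
print(np.round(beta_hat - true_beta, 2))        # recovery error of the coefficients
```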
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
Wild bootstrap for quantile regression.
Feng, Xingdong; He, Xuming; Hu, Jianhua
2011-12-01
The existing theory of the wild bootstrap has focused on linear estimators. In this note, we broaden its validity by providing a class of weight distributions that is asymptotically valid for quantile regression estimators. As most weight distributions in the literature lead to biased variance estimates for nonlinear estimators of linear regression, we propose a modification of the wild bootstrap that admits a broader class of weight distributions for quantile regression. A simulation study on median regression is carried out to compare various bootstrap methods. With a simple finite-sample correction, the wild bootstrap is shown to account for general forms of heteroscedasticity in a regression model with fixed design points.
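A minimal sketch of a wild bootstrap for median regression follows; the random-sign weight scheme applied to absolute residuals is only one illustrative choice and not necessarily a member of the admissible class derived in the paper, and the synthetic heteroscedastic data and the use of statsmodels' QuantReg are assumptions.

import numpy as np
import statsmodels.api as sm

# Wild bootstrap for median regression (tau = 0.5) on synthetic data with
# heteroscedastic noise. Weight scheme and replicate count are illustrative.
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + (0.2 + 0.1 * x) * rng.standard_normal(n)   # heteroscedastic errors
X = sm.add_constant(x)

fit = sm.QuantReg(y, X).fit(q=0.5)
abs_resid = np.abs(y - fit.predict(X))

boot_slopes = []
for _ in range(500):
    w = rng.choice([-1.0, 1.0], size=n)          # weights with median zero (illustrative)
    y_star = fit.predict(X) + w * abs_resid      # wild-bootstrap responses
    boot_slopes.append(sm.QuantReg(y_star, X).fit(q=0.5).params[1])

se_slope = np.std(boot_slopes, ddof=1)           # bootstrap standard error of the slope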
Transfer Learning Based on Logistic Regression
NASA Astrophysics Data System (ADS)
Paul, A.; Rottensteiner, F.; Heipke, C.
2015-08-01
In this paper we address the problem of classification of remote sensing images in the framework of transfer learning with a focus on domain adaptation. The main novel contribution is a method for transductive transfer learning in remote sensing on the basis of logistic regression. Logistic regression is a discriminative probabilistic classifier of low computational complexity, which can deal with multiclass problems. This research area deals with methods that solve problems in which labelled training data sets are assumed to be available only for a source domain, while classification is needed in the target domain with different, yet related characteristics. Classification takes place with a model of weight coefficients for hyperplanes which separate features in the transformed feature space. In terms of logistic regression, our domain adaptation method adjusts the model parameters by iterative labelling of the target test data set. These labelled data features are iteratively added to the current training set which, at the beginning, only contains source features, and, simultaneously, a number of source features are deleted from the current training set. Experimental results based on a test series with synthetic and real data constitute a first proof of concept of the proposed method.
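The sketch below illustrates the general iterative self-labelling idea (confidently classified target samples are swapped into the training set while source samples are removed); the data, swap counts and the use of scikit-learn's LogisticRegression are assumptions, not the exact procedure of the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Iterative self-labelling with logistic regression: train on source data,
# then repeatedly add confidently labelled target samples and drop source ones.
rng = np.random.default_rng(2)
Xs = rng.normal(0.0, 1.0, (300, 5)); ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)   # source domain
Xt = rng.normal(0.4, 1.1, (300, 5))                                               # shifted target domain

X_train, y_train = Xs.copy(), ys.copy()
for _ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    confidence = clf.predict_proba(Xt).max(axis=1)
    top = np.argsort(confidence)[-30:]               # most confident target samples
    X_train = np.vstack([X_train[30:], Xt[top]])     # drop some source samples, add target samples
    y_train = np.concatenate([y_train[30:], clf.predict(Xt[top])])

target_labels = clf.predict(Xt)                      # final labelling of the target domain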
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces such as B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
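A minimal sketch of an AIC/BIC-guided choice of the number of control points for a B-spline curve fit, assuming a noisy 1D profile as a stand-in for scanned data and Gaussian-error forms of the criteria:

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Fit B-spline curves with varying numbers of control points and pick the
# number that minimizes AIC or BIC. The profile and candidate grid are assumed.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 500)
y = np.sin(6 * x) + 0.3 * np.exp(-((x - 0.7) / 0.05) ** 2) + rng.normal(0, 0.05, x.size)

def info_criteria(n_ctrl, k=3):
    t = np.linspace(0, 1, n_ctrl - k + 1)[1:-1]          # interior knots giving n_ctrl coefficients
    spl = LSQUnivariateSpline(x, y, t, k=k)
    rss = np.sum((y - spl(x)) ** 2)
    n, p = x.size, n_ctrl
    aic = n * np.log(rss / n) + 2 * p
    bic = n * np.log(rss / n) + p * np.log(n)
    return aic, bic

candidates = range(6, 40)
aics, bics = zip(*(info_criteria(m) for m in candidates))
print("AIC choice:", candidates[int(np.argmin(aics))],
      "BIC choice:", candidates[int(np.argmin(bics))])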
NASA Technical Reports Server (NTRS)
Anuta, P. E.
1975-01-01
Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.
Gutierrez, Juan B; Lai, Ming-Jun; Slavov, George
2015-12-01
We study a time dependent partial differential equation (PDE) which arises from classic models in ecology involving logistic growth with Allee effect by introducing a discrete weak solution. Existence, uniqueness and stability of the discrete weak solutions are discussed. We use bivariate splines to approximate the discrete weak solution of the nonlinear PDE. A computational algorithm is designed to solve this PDE. A convergence analysis of the algorithm is presented. We present some simulations of population development over some irregular domains. Finally, we discuss applications in epidemiology and other ecological problems.
On spline and polynomial interpolation of low earth orbiter data: GRACE example
NASA Astrophysics Data System (ADS)
Uz, Metehan; Ustun, Aydin
2016-04-01
The GRACE satellites, which are equipped with dedicated science instruments such as the K/Ka-band ranging system, have been orbiting the Earth since 17 March 2002. In this study the kinematic and reduced-dynamic orbits of GRACE-A/B were determined at 10 s intervals using the Bernese 5.2 GNSS software for May 2010, and the daily orbit solutions were validated against the GRACE science orbit product, GNV1B. The RMS values of the kinematic and reduced-dynamic orbit validations were about 2.5 and 1.5 cm, respectively. Throughout the period of interest, data gaps were encountered in the kinematic orbits due to missing GPS measurements and satellite manoeuvres. Thus, least-squares polynomial and cubic spline approaches (natural, not-a-knot and clamped) were tested both to bridge small data gaps and to densify the precise orbits to 5 s intervals; the latter is needed, for example, to exploit the K/Ka-band observations. The coordinates interpolated to 5 s intervals were also validated against the GNV1B orbits. The validation results show that the spline approaches deliver RMS values of approximately 1 cm and outperform least-squares polynomial interpolation. When data gaps occur in a daily orbit, the spline validation results degrade with the size of the gap. Hence, the daily orbits were fragmented into small arcs of 30, 40 or 50 knots to evaluate the effect of least-squares polynomial interpolation on data gaps. From randomly selected daily arc sets belonging to different epochs, 5, 10, 15 and 20 knots were removed independently. While the 30-knot arcs were evaluated with a fifth-degree polynomial, a sixth-degree polynomial was employed to interpolate artificial gaps over the 40- and 50-knot arcs. The differences between interpolated and removed coordinates were assessed against the GNV1B validation RMS of 2.5 cm. At the 95% confidence level, data gaps up to 5 and 10 knots can
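The comparison described above can be sketched as follows for a single coordinate component: a natural cubic spline and a least-squares polynomial are fitted to 10 s samples with an artificial gap and evaluated on 5 s epochs; the synthetic signal, gap location and polynomial degree are assumptions.

import numpy as np
from scipy.interpolate import CubicSpline

def truth(s):                                      # smooth stand-in for one orbit coordinate (km)
    return 7000.0 + 5.0 * np.sin(2 * np.pi * s / 5400.0)

t = np.arange(0.0, 600.0, 10.0)                    # 10 s sampling
keep = (t < 200.0) | (t > 260.0)                   # knock out a short data gap
cs = CubicSpline(t[keep], truth(t[keep]), bc_type='natural')
poly = np.polynomial.Polynomial.fit(t[keep], truth(t[keep]), deg=6)

t5 = np.arange(0.0, 595.0, 5.0)                    # 5 s target epochs
rms_spline = np.sqrt(np.mean((cs(t5) - truth(t5)) ** 2))
rms_poly = np.sqrt(np.mean((poly(t5) - truth(t5)) ** 2))
print(rms_spline, rms_poly)                        # spline error stays small; polynomial error grows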
Numerical solution of the controlled Duffing oscillator by semi-orthogonal spline wavelets
NASA Astrophysics Data System (ADS)
Lakestani, M.; Razzaghi, M.; Dehghan, M.
2006-09-01
This paper presents a numerical method for solving the controlled Duffing oscillator. The method can be extended to nonlinear calculus of variations and optimal control problems. The method is based upon compactly supported linear semi-orthogonal B-spline wavelets. The differential and integral expressions which arise in the system dynamics, the performance index and the boundary conditions are converted into some algebraic equations which can be solved for the unknown coefficients. Illustrative examples are included to demonstrate the validity and applicability of the technique.
A counterexample concerning the L_2-projector onto linear spline spaces
NASA Astrophysics Data System (ADS)
Oswald, Peter
2008-03-01
For the L_2-orthogonal projection P_V onto spaces of linear splines over simplicial partitions in polyhedral domains in R^d, d > 1, we show that, in contrast to the one-dimensional case, where ||P_V||_{L_∞→L_∞} ≤ 3 independently of the nature of the partition, in higher dimensions the L_∞-norm of P_V cannot be bounded uniformly with respect to the partition. This fact is folklore among specialists in finite element methods and approximation theory but seemingly has never been formally proved.
Spline Driven: High Accuracy Projectors for Tomographic Reconstruction From Few Projections.
Momey, Fabien; Denis, Loïc; Burnier, Catherine; Thiébaut, Éric; Becker, Jean-Marie; Desbat, Laurent
2015-12-01
Tomographic iterative reconstruction methods need a very thorough modeling of the data. This point becomes critical when the number of available projections is limited. At the core of this issue is the projector design, i.e., the numerical model relating the representation of the object of interest to the projections on the detector. Voxel-driven and ray-driven projection models are widely used for their short execution time in spite of their coarse approximations. The distance-driven model has improved accuracy but makes strong approximations to project voxel basis functions. Cubic voxel basis functions are anisotropic, so accurately modeling their projection is computationally expensive. Both smoother and more isotropic basis functions better represent the continuous functions and provide simpler projectors. These considerations have led to the development of spherically symmetric volume elements, called blobs. Their isotropy apart, blobs are often considered too computationally expensive in practice. In this paper, we consider using separable B-splines as basis functions to represent the object, and we propose to approximate the projection of these basis functions by a 2D separable model. When the degree of the B-splines increases, their isotropy improves and projections can be computed regardless of their orientation. The degree and the sampling of the B-splines can be chosen according to a tradeoff between approximation quality and computational complexity. We quantitatively measure the good accuracy of our model and compare it with other projectors, such as the distance-driven model and the model proposed by Long et al. From the numerical experiments, we demonstrate that our projector with an improved accuracy better preserves the quality of the reconstruction as the number of projections decreases. Our projector with cubic B-splines requires about twice as many operations as a model based on voxel basis functions. Higher accuracy projectors can be used to
Power spectral density estimation by spline smoothing in the frequency domain
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Thompson, J. R.
1972-01-01
An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying FFT techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.
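A minimal sketch of the procedure, assuming an AR(1) test record and an arbitrary smoothing factor: form the naive FFT periodogram and smooth its logarithm with a cubic smoothing spline.

import numpy as np
from scipy.interpolate import UnivariateSpline

# Spline smoothing of a periodogram for a simulated stationary record.
rng = np.random.default_rng(4)
n = 4096
x = np.zeros(n)
for i in range(1, n):                       # simple AR(1) process as test data
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()

freqs = np.fft.rfftfreq(n, d=1.0)[1:]
periodogram = (np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2) / n   # naive spectral estimate

# Smooth the log-periodogram (roughly variance-stabilizing) with a cubic spline.
spl = UnivariateSpline(freqs, np.log(periodogram), k=3, s=2.0 * freqs.size)
psd_estimate = np.exp(spl(freqs))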
Power spectral density estimation by spline smoothing in the frequency domain.
NASA Technical Reports Server (NTRS)
De Figueiredo, R. J. P.; Thompson, J. R.
1972-01-01
An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying Fast Fourier Transform techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing their results with those obtained using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
An adaptive interpolation scheme for molecular potential energy surfaces
NASA Astrophysics Data System (ADS)
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
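A rough sketch of adaptive refinement with polyharmonic (thin-plate) spline interpolation on a 2D model potential follows; the model function, candidate pool and the crude error proxy (disagreement between interpolants built on the full and on a thinned node set) are assumptions, not the local error estimate of the algorithm above.

import numpy as np
from scipy.interpolate import RBFInterpolator

# Adaptive node placement for a thin-plate-spline surrogate of an "expensive"
# 2D model potential. One node is added per iteration at the candidate point
# where two interpolants disagree most.
rng = np.random.default_rng(5)

def expensive_pes(p):                     # model potential standing in for an ab initio call
    x, y = p[..., 0], p[..., 1]
    return np.sin(3 * x) * np.cos(2 * y) + 0.5 * x * y

nodes = rng.uniform(-1, 1, (20, 2))
values = expensive_pes(nodes)
candidates = rng.uniform(-1, 1, (2000, 2))

for _ in range(40):
    full = RBFInterpolator(nodes, values, kernel='thin_plate_spline')
    thin = RBFInterpolator(nodes[::2], values[::2], kernel='thin_plate_spline')
    err = np.abs(full(candidates) - thin(candidates))     # crude local error proxy
    worst = candidates[np.argmax(err)]
    nodes = np.vstack([nodes, worst])
    values = np.append(values, expensive_pes(worst))      # evaluate the expensive function once

surrogate = RBFInterpolator(nodes, values, kernel='thin_plate_spline')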
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
Quantile regression for climate data
NASA Astrophysics Data System (ADS)
Marasinghe, Dilhani Shalika
Quantile regression is a developing statistical tool which is used to explain the relationship between response and predictor variables. This thesis describes two examples in climatology using quantile regression. Our main goal is to estimate derivatives of a conditional mean and/or conditional quantile function. We introduce a method to handle autocorrelation in the framework of quantile regression and apply it to temperature data. We also examine some properties of tornado data, which are non-normally distributed. Even though quantile regression provides a more comprehensive view, when the residuals satisfy the normality and constant-variance assumptions we would prefer least squares regression for the temperature analysis. When the normality and constant-variance assumptions do not hold, quantile regression is the better candidate for estimating the derivative.
NASA Astrophysics Data System (ADS)
Mizuno, Sadao; Morita, Tetuya; Ariura, Yasutsune
For highly accurate and highly efficient tooth surface finishing, a vitrified cubic boron nitride (CBN) wheel or an electro-deposited CBN wheel exhibits superior performance. Honing by involute spline tooth meshing is effective due to the generating motion. However, a lack of wheel rigidity and an inadequate feed motion of the wheel tend to reduce the finishing performance. As a result, it is difficult to keep the tooth surface smooth. In this study, finishing with a vitrified CBN wheel is carried out using a new honing tool. The finishing performance is compared with that obtained using an electro-deposited wheel, and the finishing is carried out by braking an internal spline axis. The influence of different feed methods on the roughness of the finished tooth surface is investigated. The finishing using a vitrified CBN wheel with braking of an internal spline axis shows superior performance.
NASA Astrophysics Data System (ADS)
Dawe, D. J.; Wang, S.
A Rayleigh-Ritz method is presented for predicting the natural frequencies of flat rectangular laminates which can have arbitrary lay-up. The effects of through-thickness shear deformation are included in the analysis. The displacement field utilizes B-spline functions in what has been referred to in earlier work as a B(k,k-1)-spline Rayleigh-Ritz method and the approach is versatile in the specification of boundary conditions. The results of a number of applications are presented in the form of studies showing the convergence of frequency values with increase in the number of spline sections used. The analysis procedure is seen to have good convergence characteristics when dealing with laminates of thin and thick geometry.
Convex Regression with Interpretable Sharp Partitions
Petersen, Ashley; Simon, Noah; Witten, Daniela
2016-01-01
We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set.
Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…
Retro-regression--another important multivariate regression improvement.
Randić, M
2001-01-01
We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA. PMID:11410035
Epidemiology of CKD Regression in Patients under Nephrology Care
Borrelli, Silvio; Leonardis, Daniela; Minutolo, Roberto; Chiodini, Paolo; De Nicola, Luca; Esposito, Ciro; Mallamaci, Francesca; Zoccali, Carmine; Conte, Giuseppe
2015-01-01
Chronic Kidney Disease (CKD) regression is considered an infrequent renal outcome, limited to early stages, and associated with higher mortality. However, the prevalence, prognosis and clinical correlates of CKD regression remain undefined in the setting of nephrology care. This is a multicenter prospective study in 1418 patients with established CKD (eGFR: 60–15 ml/min/1.73 m²) under nephrology care in 47 outpatient clinics in Italy for at least one year. We defined CKD regressors as patients with ΔGFR ≥ 0 ml/min/1.73 m²/year. ΔGFR was estimated as the absolute difference between eGFR measured at baseline and at the follow-up visit after 18–24 months, respectively. Outcomes were End Stage Renal Disease (ESRD) and overall-cause mortality. 391 patients (27.6%) were identified as regressors, as they showed an eGFR increase between the baseline visit in the renal clinic and the follow-up visit. In multivariate regression analyses the regressor status was not associated with CKD stage. Low proteinuria was the main factor associated with CKD regression, accounting per se for 48% of the likelihood of this outcome. Lower systolic blood pressure, higher BMI and absence of autosomal polycystic kidney disease (PKD) were additional predictors of CKD regression. In regressors, ESRD risk was 72% lower (HR: 0.28; 95% CI 0.14–0.57; p<0.0001) while mortality risk did not differ from that in non-regressors (HR: 1.16; 95% CI 0.73–1.83; p = 0.540). Spline models showed that the reduction of ESRD risk associated with positive ΔGFR was attenuated in advanced CKD stages. CKD regression occurs in about one-fourth of patients receiving renal care in nephrology units and correlates with low proteinuria, blood pressure and the absence of PKD. This condition portends a better renal prognosis, mostly in earlier CKD stages, with no excess risk of mortality. PMID:26462071
DBSR_HF: A B-spline Dirac-Hartree-Fock program
NASA Astrophysics Data System (ADS)
Zatsarinny, Oleg; Froese Fischer, Charlotte
2016-05-01
A B-spline version of a general Dirac-Hartree-Fock program is described. The usual differential equations are replaced by a set of generalized eigenvalue problems of the form (H_a − ε_a B) P_a = 0, where H_a and B are the Hamiltonian and overlap matrices, respectively, and P_a is the two-component relativistic orbital in the B-spline basis. A default universal grid allows for flexible adjustment to different nuclear models. When two orthogonal orbitals are both varied, the energy must also be stationary with respect to orthonormal transformations. At such a stationary point the off-diagonal Lagrange multipliers may be eliminated through projection operators. The self-consistent field procedure exhibits excellent convergence. Several atomic states can be considered simultaneously, including some configuration-interaction calculations. The program provides several options for the treatment of the Breit interaction and QED corrections. Information about atoms up to Z = 104 is stored by the program. Along with a simple interface through command-line arguments, this information allows the user to run the program with minimal initial preparation.
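The generalized eigenvalue structure (H_a − ε_a B) P_a = 0 can be illustrated with a much simpler non-relativistic example: a 1D harmonic oscillator expanded in a cubic B-spline basis, with Hamiltonian and overlap matrices assembled by quadrature and solved with a generalized symmetric eigensolver. The grid, box size and spline order below are assumptions.

import numpy as np
from scipy.interpolate import BSpline
from scipy.linalg import eigh

# Generalized eigenproblem H P = eps * B P in a cubic B-spline basis for a
# 1D harmonic oscillator (hbar = m = omega = 1).
k = 3                                             # cubic B-splines
breaks = np.linspace(-10, 10, 60)
t = np.concatenate([[breaks[0]] * k, breaks, [breaks[-1]] * k])
n_basis = len(t) - k - 1

xq = np.linspace(-10, 10, 4000)                   # simple quadrature grid
w = np.gradient(xq)                               # quadrature weights (grid spacing)
basis = np.array([BSpline(t, np.eye(n_basis)[i], k)(xq) for i in range(n_basis)])
dbasis = np.array([BSpline(t, np.eye(n_basis)[i], k).derivative()(xq) for i in range(n_basis)])

V = 0.5 * xq ** 2
H = (0.5 * dbasis * w) @ dbasis.T + (basis * (V * w)) @ basis.T   # kinetic (by parts) + potential
B = (basis * w) @ basis.T                                         # overlap matrix

sl = slice(1, n_basis - 1)                        # drop edge splines: zero boundary conditions
eps = eigh(H[sl, sl], B[sl, sl], eigvals_only=True)
print(eps[:4])                                    # approximately [0.5, 1.5, 2.5, 3.5]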
Defining window-boundaries for genomic analyses using smoothing spline techniques
Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia
2015-04-17
High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.
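A minimal sketch of the window-definition idea, assuming a simulated per-marker statistic and an arbitrary smoothing factor: fit a cubic smoothing spline and take sign changes of its second derivative as window boundaries.

import numpy as np
from scipy.interpolate import UnivariateSpline

# Define genomic windows from the inflection points of a smoothing spline
# fitted to a noisy per-marker statistic (simulated here).
rng = np.random.default_rng(6)
pos = np.arange(5000)                                    # marker index along a chromosome
signal = np.sin(pos / 300.0) + 0.4 * np.sin(pos / 37.0)  # underlying structure
stat = signal + rng.normal(0, 0.5, pos.size)             # noisy per-marker statistic

spl = UnivariateSpline(pos, stat, k=3, s=pos.size * 0.25)
second = spl.derivative(n=2)(pos)
boundaries = pos[1:][np.sign(second[1:]) != np.sign(second[:-1])]   # inflection points

windows = np.split(pos, np.searchsorted(pos, boundaries))
print(len(windows), "windows defined")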
Friedline, Terri; Masa, Rainier D; Chowa, Gina A N
2015-01-01
The natural log and categorical transformations commonly applied to wealth for meeting the statistical assumptions of research may not always be appropriate for adjusting for skewness given wealth's unique properties. Finding and applying appropriate transformations is becoming increasingly important as researchers consider wealth as a predictor of well-being. We present an alternative transformation-the inverse hyperbolic sine (IHS)-for simultaneously dealing with skewness and accounting for wealth's unique properties. Using the relationship between household wealth and youth's math achievement as an example, we apply the IHS transformation to wealth data from US and Ghanaian households. We also explore non-linearity and accumulation thresholds by combining IHS transformed wealth with splines. IHS transformed wealth relates to youth's math achievement similarly when compared to categorical and natural log transformations, indicating that it is a viable alternative to other transformations commonly used in research. Non-linear relationships and accumulation thresholds emerge that predict youth's math achievement when splines are incorporated. In US households, accumulating debt relates to decreases in math achievement whereas accumulating assets relates to increases in math achievement. In Ghanaian households, accumulating assets between the 25th and 50th percentiles relates to increases in youth's math achievement.
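A minimal sketch of the transformation-plus-spline idea, assuming simulated wealth data (including negative values), knots at arbitrary percentiles and a simple OLS outcome model:

import numpy as np
import statsmodels.api as sm

# Inverse hyperbolic sine (IHS) transform of a skewed wealth variable that can
# be zero or negative, combined with a linear spline with percentile knots.
rng = np.random.default_rng(7)
n = 2000
wealth = rng.normal(0, 2, n) ** 3 * 1000              # skewed, includes negative values (debt)
math_score = 0.3 * np.arcsinh(wealth) + rng.normal(0, 1, n)

ihs = np.arcsinh(wealth)                              # IHS transform: log(w + sqrt(w^2 + 1))
knots = np.percentile(ihs, [25, 50, 75])
X = np.column_stack([ihs] + [np.clip(ihs - k, 0, None) for k in knots])   # linear spline basis
fit = sm.OLS(math_score, sm.add_constant(X)).fit()
print(fit.params)                                     # slope changes at each knot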
A spline approach to trial wave functions for variational and diffusion Monte Carlo
NASA Astrophysics Data System (ADS)
Bressanini, Dario; Fabbri, Giordano; Mella, Massimo; Morosi, Gabriele
1999-10-01
We describe how to combine the variational Monte Carlo method with a spline description of the wave function to obtain a powerful and flexible method to optimize electronic and nuclear wave functions. A property of this method is that the optimization is performed "locally": During the optimization, the attention is focused on a region of the wave function at a certain time, with little or no perturbation in far away regions. This allows a fine tuning of the wave function even in cases where there is no experience on how to choose a good functional form and a good basis set. After the optimization, the splines were fitted using more familiar analytical global functions. The flexibility of the method is shown by calculating the electronic wave function for some two and three electron systems, and the nuclear wave function for the helium trimer. For 4He3, using a two-body helium-helium potential, we obtained the best variational function to date, which allows us to estimate the exact energy with a very small variance by a diffusion Monte Carlo simulation.
Railroad inspection based on ACFM employing a non-uniform B-spline approach
NASA Astrophysics Data System (ADS)
Chacón Muñoz, J. M.; García Márquez, F. P.; Papaelias, M.
2013-11-01
The stresses sustained by rails have increased in recent years due to the use of higher train speeds and heavier axle loads. For this reason surface and near-surface defects generated by Rolling Contact Fatigue (RCF) have become particularly significant, as they can cause unexpected structural failure of the rail, resulting in severe derailments. The accident that took place in Hatfield, UK (2000), is an example of a derailment caused by the structural failure of a rail section due to RCF. Early detection of RCF rail defects is therefore of paramount importance to the rail industry. The performance of existing ultrasonic and magnetic flux leakage techniques in detecting rail surface-breaking defects, such as head checks and gauge corner cracking, is inadequate during high-speed inspection, while eddy current sensors suffer from lift-off effects. The results obtained through rail inspection experiments under simulated conditions using Alternating Current Field Measurement (ACFM) probes suggest that this technique can be applied for the accurate and reliable detection of surface-breaking defects at high inspection speeds. This paper presents the B-spline approach used to accurately filter the noise of the raw ACFM signal obtained during high-speed tests, in order to improve the reliability of the measurements. A non-uniform B-spline approximation is employed to calculate the exact positions and the dimensions of the defects. This method generates a smooth approximation similar to the ACFM dataset points related to the rail surface-breaking defect.
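A rough sketch of non-uniform B-spline smoothing of a noisy 1D signal with a localized indication (a stand-in for an ACFM trace over a surface crack) is shown below; the knot placement rule, signal shape and noise level are assumptions.

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Non-uniform B-spline smoothing: sparse knots in quiet regions, dense knots
# around the defect indication, then defect localization from the fit.
rng = np.random.default_rng(8)
x = np.linspace(0, 1, 1200)
signal = 1.0 - 0.3 * np.exp(-((x - 0.55) / 0.02) ** 2)      # defect indication
noisy = signal + rng.normal(0, 0.03, x.size)

knots = np.concatenate([np.linspace(0.05, 0.45, 5),
                        np.linspace(0.48, 0.62, 15),
                        np.linspace(0.65, 0.95, 5)])         # non-uniform interior knots
spl = LSQUnivariateSpline(x, noisy, knots, k=3)

x_defect = x[np.argmin(spl(x))]                              # estimated defect position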
NASA Astrophysics Data System (ADS)
Xu, ShengYong; Wu, JuanJuan; Zhu, Li; Li, WeiHao; Wang, YiTian; Wang, Na
2015-12-01
Visual navigation is a fundamental technique for an intelligent cotton-picking robot. There are many components and much cover in the cotton field, which makes furrow recognition and trajectory extraction difficult. In this paper, a new field navigation path extraction method is presented. Firstly, the color image in RGB color space is pre-processed by the OTSU threshold algorithm and noise filtering. Secondly, the binary image is divided into numerous horizontal spline areas. In each area, connected regions near the image's vertical center line are calculated by the Two-Pass algorithm. The center points of the connected regions are candidate points for the navigation path. Thirdly, a series of navigation points is determined iteratively on the principle of the nearest distance between two candidate points in neighboring splines. Finally, the navigation path equation is fitted to the navigation points using the least squares method. Experiments show that this method is accurate and effective. It is suitable for visual navigation in the complex environment of a cotton field in different phases.
3D range image resampling using B-spline surface fitting
NASA Astrophysics Data System (ADS)
Li, Songtao; Zhao, Dongming
2000-05-01
Many optical range sensors use an Equal Angle Increment (EAI) sampling. This type of sensor uses rotating mirrors with constant angular velocity for radar and triangulation techniques, where the sensor sends and receives modulated coherent light through the mirror. Such an EAI model generates data for surface geometrical description that has to be converted, in many applications, into data which meet the desired Equal Distance Increment orthographic projection model. For an accurate analysis in 3D images, an interpolation scheme is needed to resample the range data into spatially equally-distant sampling data that emulate the Cartesian orthographic projection model. In this paper, a resampling approach using a B-Spline surface fitting is proposed. The first step is to select a new scale for all X, Y, Z directions based on the 3D Cartesian coordinates of range data obtained from the sensor parameters. The size of the new range image and the new coordinates of each point are then computed. The new range value is obtained using a B-Spline surface fitting based on the new Cartesian coordinates. The experiments show that this resampling approach provides a geometrically accurate solution for many industrial applications which deploy the EAI sampling sensors.
Three-dimensional range data interpolation using B-spline surface fitting
NASA Astrophysics Data System (ADS)
Li, Songtao; Zhao, Dongming
2000-05-01
Many optical range sensors use an Equal Angle Increment (EAI) sampling. This type of sensor uses rotating mirrors with a constant angular velocity using radar and triangulation techniques, where the sensor sends and receives the modulated coherent light through the mirror. Such an EAI model generates data for surface geometrical description that has to be converted, in many applications, into data which meet the desired Equal Distance Increment orthographic projection model. For an accurate analysis in 3D images, a 3D interpolation scheme is needed to resample the range data into spatially equally-distant sampling data that emulate the Cartesian orthographic projection model. In this paper, a resampling approach using a B-Spline surface fitting is proposed. The first step is to select a new scale for all X, Y, Z directions based on the 3D Cartesian coordinates of range data obtained from the sensor parameters. The size of the new range image and the new coordinates of each point are then computed according to the actual references of (X, Y, Z) coordinates and the new scale. The new range data are interpolated using a B-Spline surface fitting based on the new Cartesian coordinates. The experiments show that this 3D interpolation approach provides a geometrically accurate solution for many industrial applications which deploy the EAI sampling sensors.
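A minimal sketch of the resampling idea common to the two abstracts above, assuming scattered synthetic range samples and an arbitrary smoothing factor: fit a smoothing B-spline surface to the scattered points and evaluate it on an equally spaced grid.

import numpy as np
from scipy.interpolate import bisplrep, bisplev

# Resample scattered (x, y, z) range samples onto a regular Cartesian grid
# via a B-spline surface fit. The synthetic samples stand in for EAI data.
rng = np.random.default_rng(11)
x = rng.uniform(-1, 1, 1500)
y = rng.uniform(-1, 1, 1500)
z = 0.3 * np.exp(-(x ** 2 + y ** 2) / 0.3) + 0.01 * rng.standard_normal(x.size)

tck = bisplrep(x, y, z, s=1.0)                 # smoothing B-spline surface fit

xg = np.linspace(-0.9, 0.9, 128)               # equally spaced target grid
yg = np.linspace(-0.9, 0.9, 128)
z_grid = bisplev(xg, yg, tck)                  # (128, 128) resampled range image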
NASA Astrophysics Data System (ADS)
Marghany, Maged
2014-06-01
A critical challenge in urban areas is slums. In fact, they are considered a source of crime and disease due to poor-quality housing, unsanitary conditions, poor infrastructure and insecure occupancy. The poor in dense urban slums are the most vulnerable to infection due to (i) inadequate and restricted access to safe drinking water and sufficient quantities of water for personal hygiene; (ii) the lack of removal and treatment of excreta; and (iii) the lack of removal of solid waste. This study aims to investigate the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) reconstruction of urban slums in developing countries such as Egypt. The main objective of this work is to apply a 3-D automatic detection algorithm to urban slums in ENVISAT ASAR and Google Earth images acquired over Cairo, Egypt, using a fuzzy B-spline algorithm. The results show that the fuzzy algorithm is the best indicator for chaotic urban slums, as it can discriminate them from their surrounding environment. The combination of the fuzzy algorithm and B-splines is then used to reconstruct the urban slums in 3-D. The results show that urban slums, the road network, and infrastructure are well discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for the automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.
Ren, K; Ren-Kurc, A
1986-08-01
A new numerical method of determining the position of the inflection point of a potentiometric titration curve is presented. It consists of describing the experimental data (emf, volume data-points) by means of a rational spline function. The co-ordinates of the titration end-point are determined by analysis of the first and second derivatives of the spline function formed. The method also allows analysis of distorted titration curves which cannot be interpreted by Gran's or other computational methods. PMID:18964159
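A minimal sketch of locating an end-point from spline derivatives, with a cubic interpolating spline standing in for the rational spline of the method above and a simulated noise-free titration curve:

import numpy as np
from scipy.interpolate import CubicSpline

# End-point location from the derivatives of a spline fitted to (volume, emf)
# data: maximum of the first derivative, refined by the zero crossing of the
# second derivative.
v = np.linspace(0.0, 20.0, 41)                              # titrant volume, mL
emf = 200.0 + 150.0 * np.tanh((v - 12.3) / 0.8)             # sigmoidal emf curve, mV

spl = CubicSpline(v, emf)
v_fine = np.linspace(v[0], v[-1], 2001)
d1, d2 = spl(v_fine, 1), spl(v_fine, 2)

i = np.argmax(d1)                                           # steepest point of the curve
sign_change = np.where(np.sign(d2[:-1]) != np.sign(d2[1:]))[0]
v_end = v_fine[sign_change[np.argmin(np.abs(sign_change - i))]]
print(round(v_end, 2), "mL")                                # close to 12.3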
Extended cubic B-spline method for solving a linear system of second-order boundary value problems.
Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md
2016-01-01
A method based on extended cubic B-spline is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters are carried out and the truncation error is calculated. This method is tested on three examples. The examples suggest that this method produces comparable or more accurate results than cubic B-spline and some other methods. PMID:27547688
NASA Astrophysics Data System (ADS)
Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui
2015-03-01
A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
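A rough sketch of spline background subtraction on a simulated spectrum follows; the node-selection rule used here (local minima of a lightly smoothed spectrum) is a simple stand-in for the smoothing-and-discriminant procedure described above.

import numpy as np
from scipy.interpolate import CubicSpline

# Cubic spline background subtraction for a simulated EDXRF-like spectrum:
# pick nodes in peak-free regions, interpolate them, and subtract.
rng = np.random.default_rng(9)
e = np.linspace(1, 20, 2000)                                  # energy axis, keV
background = 50 * np.exp(-e / 7.0)
peaks = 300 * np.exp(-((e - 6.4) / 0.08) ** 2) + 180 * np.exp(-((e - 7.06) / 0.08) ** 2)
spectrum = rng.poisson(background + peaks).astype(float)

smooth = np.convolve(spectrum, np.ones(25) / 25, mode='same')            # light smoothing
node_idx = [np.argmin(smooth[i:i + 100]) + i for i in range(0, e.size - 100, 100)]
bg_spline = CubicSpline(e[node_idx], smooth[node_idx])                   # background estimate

net = spectrum - bg_spline(e)                                 # background-subtracted spectrum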
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…
Can luteal regression be reversed?
Telleria, Carlos M
2006-01-01
The corpus luteum is an endocrine gland whose limited lifespan is hormonally programmed. This debate article summarizes findings of our research group that challenge the principle that the end of function of the corpus luteum or luteal regression, once triggered, cannot be reversed. Overturning luteal regression by pharmacological manipulations may be of critical significance in designing strategies to improve fertility efficacy. PMID:17074090
Logistic Regression: Concept and Application
ERIC Educational Resources Information Center
Cokluk, Omay
2010-01-01
The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
Wild bootstrap for quantile regression.
Feng, Xingdong; He, Xuming; Hu, Jianhua
2011-12-01
The existing theory of the wild bootstrap has focused on linear estimators. In this note, we broaden its validity by providing a class of weight distributions that is asymptotically valid for quantile regression estimators. As most weight distributions in the literature lead to biased variance estimates for nonlinear estimators of linear regression, we propose a modification of the wild bootstrap that admits a broader class of weight distributions for quantile regression. A simulation study on median regression is carried out to compare various bootstrap methods. With a simple finite-sample correction, the wild bootstrap is shown to account for general forms of heteroscedasticity in a regression model with fixed design points. PMID:23049133
[Regression grading in gastrointestinal tumors].
Tischoff, I; Tannapfel, A
2012-02-01
Preoperative neoadjuvant chemoradiation therapy is a well-established and essential part of the interdisciplinary treatment of gastrointestinal tumors. Neoadjuvant treatment leads to regressive changes in tumors. To evaluate the histological tumor response, different scoring systems describing regressive changes are used and are known as tumor regression grading. Tumor regression grading is usually based on the presence of residual vital tumor cells in proportion to the total tumor size. Currently, no nationally or internationally accepted grading systems exist. In general, common guidelines should be used in the pathohistological diagnostics of tumors after neoadjuvant therapy. In particular, the standard tumor grading will be replaced by tumor regression grading. Furthermore, tumors after neoadjuvant treatment are marked with the prefix "y" in the TNM classification. PMID:22293790
Fungible weights in logistic regression.
Jones, Jeff A; Waller, Niels G
2016-06-01
In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights.
Spline analysis of Holocene sediment magnetic records: Uncertainty estimates for field modeling
NASA Astrophysics Data System (ADS)
Panovska, S.; Finlay, C. C.; Donadini, F.; Hirt, A. M.
2012-02-01
Sediment and archeomagnetic data spanning the Holocene enable us to reconstruct the evolution of the geomagnetic field on time scales of centuries to millennia. In global field modeling the reliability of data is taken into account by weighting according to uncertainty estimates. Uncertainties in sediment magnetic records arise from (1) imperfections in the paleomagnetic recording processes, (2) coring and (sub) sampling methods, (3) adopted averaging procedures, and (4) uncertainties in the age-depth models. We take a step toward improved uncertainty estimates by performing a comprehensive statistical analysis of the available global database of Holocene magnetic records. Smoothing spline models that capture the robust aspects of individual records are derived. This involves a cross-validation approach, based on an absolute deviation measure of misfit, to determine the smoothing parameter for each spline model, together with the use of a minimum smoothing time derived from the sedimentation rate and assumed lock-in depth. Departures from the spline models provide information concerning the random variability in each record. Temporal resolution analysis reveals that 50% of the records have smoothing times between 80 and 250 years. We also perform comparisons among the sediment magnetic records and archeomagnetic data, as well as with predictions from the global historical and archeomagnetic field models. Combining these approaches, we arrive at individual uncertainty estimates for each sediment record. These range from 2.5° to 11.2° (median: 5.9°; interquartile range: 5.4° to 7.2°) for inclination, 4.1° to 46.9° (median: 13.4°; interquartile range: 11.4° to 18.9°) for relative declination, and 0.59 to 1.32 (median: 0.93; interquartile range: 0.86 to 1.01) for standardized relative paleointensity. These values suggest that uncertainties may have been underestimated in previous studies. No compelling evidence for systematic inclination shallowing is
Viola, Francesco; Coe, Ryan L; Owen, Kevin; Guenther, Drake A; Walker, William F
2008-12-01
Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multi-dimensional displacements/strain components from multi-dimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10^-4 samples in range and 2.2 × 10^-3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10^-3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE is
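A one-dimensional sketch of the underlying spline-based delay estimation idea (not the multi-dimensional MUSE formulation itself): represent the reference signal with a cubic spline and minimize the sum of squared differences to the delayed signal over a continuous shift. The synthetic signals and the optimizer are assumptions.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

# Sub-sample delay estimation in 1D via spline interpolation of the reference.
rng = np.random.default_rng(10)
n = 256
t = np.arange(n, dtype=float)
true_delay = 3.37                                  # samples
ref = np.sin(2 * np.pi * 0.05 * t) * np.exp(-((t - 120) / 40) ** 2)
delayed = np.interp(t - true_delay, t, ref) + rng.normal(0, 0.01, n)

spline_ref = CubicSpline(t, ref)

def cost(tau):
    m = (t - tau >= t[0]) & (t - tau <= t[-1])     # stay inside the spline's support
    return np.sum((spline_ref(t[m] - tau) - delayed[m]) ** 2)

res = minimize_scalar(cost, bounds=(0.0, 10.0), method='bounded')
print(res.x)                                       # close to 3.37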
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on the famous Galton data set on heredity. We use the lm R command and obtain coefficient estimates, the standard error of the error term, R2, residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
Martin, Fernando; Horner, Daniel A.; Vanroose, Wim; Rescigno,Thomas N.; McCurdy, C. William
2005-11-04
We report a fully ab initio implementation of exterior complex scaling in B-splines to evaluate total, singly and triply differential cross sections in double photoionization problems. Results for He and H2 double photoionization are presented and compared with experiment.
NASA Technical Reports Server (NTRS)
1978-01-01
The component testing of a ball spline variable pitch mechanism is described including a whirligig test. The variable pitch actuator successfully completed all planned whirligig tests including a fifty cycle endurance test at actuation rates up to 125 deg per second at up to 102 percent fan speed (3400 rpm).
An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.
2015-04-01
Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume
Multiple Regression and Its Discontents
ERIC Educational Resources Information Center
Snell, Joel C.; Marsh, Mitchell
2012-01-01
Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.
Heyne, Matthias; Derrick, Donald
2015-12-01
Tongue surface measurements from midsagittal ultrasound scans are effectively arcs with deviations representing tongue shape, but smoothing-spline analysis of variance (SSANOVA) assumes variance around a horizontal line. Therefore, calculating SSANOVA average curves of tongue traces in Cartesian coordinates [Davidson, J. Acoust. Soc. Am. 120(1), 407-415 (2006)] creates errors that are compounded at the tongue tip and root, where average tongue shape deviates most from a horizontal line. This paper introduces a method for transforming the data into polar coordinates, similar to the technique by Mielke [J. Acoust. Soc. Am. 137(5), 2858-2869 (2015)], but using the virtual origin of a radial ultrasound transducer as the polar origin, allowing data conversion in a manner that is robust against between-subject and between-session variability.
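A minimal sketch of the coordinate-transformation idea described above, not of SSANOVA itself: a hypothetical tongue trace is re-expressed as radius versus angle about an assumed virtual transducer origin and smoothed with a spline.

```python
# Sketch of the polar-coordinate idea: express a tongue trace as radius vs.
# angle about an (assumed, hypothetical) virtual transducer origin, then
# smooth r(theta) with a spline.  A full SSANOVA comparison is not shown.
import numpy as np
from scipy.interpolate import UnivariateSpline

origin = np.array([0.0, -70.0])           # hypothetical virtual transducer origin (mm)
trace = np.array([[-35, 5], [-25, 14], [-12, 20], [0, 23],
                  [12, 21], [24, 15], [33, 6]], dtype=float)  # x, y in mm

rel = trace - origin
theta = np.arctan2(rel[:, 1], rel[:, 0])  # angle about the origin
r = np.hypot(rel[:, 0], rel[:, 1])        # radius from the origin

order = np.argsort(theta)                 # spline fitting needs increasing angles
smooth_r = UnivariateSpline(theta[order], r[order], k=3, s=1.0)

theta_grid = np.linspace(theta.min(), theta.max(), 100)
curve = origin + np.column_stack([smooth_r(theta_grid) * np.cos(theta_grid),
                                  smooth_r(theta_grid) * np.sin(theta_grid)])
print(curve[:3])                          # smoothed trace back in Cartesian coordinates
```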
NASA Astrophysics Data System (ADS)
Fernandes, Ryan I.; Fairweather, Graeme
2012-08-01
An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.
An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline.
Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming
2016-01-01
The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases.
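A small sketch of the closed-curve ingredient only: a periodic cubic spline drawn through a handful of hypothetical boundary points, so the curve closes on itself. The density-based boundary detection of the abstract is not reproduced.

```python
# Closed cubic spline through hypothetical boundary points, using a periodic
# boundary condition so the curve closes on itself.
import numpy as np
from scipy.interpolate import CubicSpline

# Control points sampled around a roughly elliptical "nucleus" boundary;
# the last point repeats the first so the spline can be made periodic.
pts = np.array([[10, 0], [7, 6], [0, 9], [-7, 6], [-10, 0],
                [-7, -6], [0, -9], [7, -6], [10, 0]], dtype=float)

t = np.linspace(0.0, 1.0, len(pts))          # parameter along the contour
spline = CubicSpline(t, pts, bc_type='periodic')

dense = spline(np.linspace(0.0, 1.0, 200))   # smooth closed boundary curve
print(dense.shape)                            # (200, 2)
```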
FAST TRACK COMMUNICATION: From cardinal spline wavelet bases to highly coherent dictionaries
NASA Astrophysics Data System (ADS)
Andrle, Miroslav; Rebollo-Neira, Laura
2008-05-01
Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation.
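A small sketch of the coherence measure mentioned above: the largest absolute inner product between two distinct, unit-normalized atoms of a dictionary. The spline-wavelet construction itself is not reproduced; a random matrix stands in for the dictionary.

```python
# Coherence of a dictionary: the largest absolute inner product between two
# distinct, unit-normalized atoms (columns).  Any matrix of atoms can be used;
# here a random one stands in for the spline-wavelet dictionary.
import numpy as np

def coherence(D):
    D = D / np.linalg.norm(D, axis=0, keepdims=True)  # normalize each atom
    G = np.abs(D.T @ D)                                # |inner products| between atoms
    np.fill_diagonal(G, 0.0)                           # ignore self-products
    return G.max()

rng = np.random.default_rng(1)
atoms = rng.standard_normal((64, 256))                 # hypothetical redundant dictionary
print(f"coherence = {coherence(atoms):.3f}")
```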
Full-turn symplectic map from a generator in a Fourier-spline basis
Berg, J.S.; Warnock, R.L.; Ruth, R.D.; Forest, E.
1993-04-01
Given an arbitrary symplectic tracking code, one can construct a full-turn symplectic map that approximates the result of the code to high accuracy. The map is defined implicitly by a mixed-variable generating function. The implicit definition is no great drawback in practice, thanks to an efficient use of Newton's method to solve for the explicit map at each iteration. The generator is represented by a Fourier series in angle variables, with coefficients given as B-spline functions of action variables. It is constructed by using results of single-turn tracking from many initial conditions. The method has been applied to a realistic model of the SSC in three degrees of freedom. Orbits can be mapped symplectically for 10^7 turns on an IBM RS6000 model 320 workstation, in a run of about one day.
COLLINARUS: collection of image-derived non-linear attributes for registration using splines
NASA Astrophysics Data System (ADS)
Chappelow, Jonathan; Bloch, B. Nicolas; Rofsky, Neil; Genega, Elizabeth; Lenkinski, Robert; DeWolf, William; Viswanath, Satish; Madabhushi, Anant
2009-02-01
We present a new method for fully automatic non-rigid registration of multimodal imagery, including structural and functional data, that utilizes multiple textural feature images to drive an automated spline-based non-linear image registration procedure. Multimodal image registration is significantly more complicated than registration of images from the same modality or protocol on account of difficulty in quantifying similarity between different structural and functional information, and also due to possible physical deformations resulting from the data acquisition process. The COFEMI technique for feature ensemble selection and combination has been previously demonstrated to improve rigid registration performance over intensity-based MI for images of dissimilar modalities with visible intensity artifacts. Hence, we present here the natural extension of feature ensembles for driving automated non-rigid image registration in our new technique termed Collection of Image-derived Non-linear Attributes for Registration Using Splines (COLLINARUS). Qualitative and quantitative evaluation of the COLLINARUS scheme is performed on several sets of real multimodal prostate images and synthetic multiprotocol brain images. Multimodal (histology and MRI) prostate image registration is performed for 6 clinical data sets comprising a total of 21 groups of in vivo structural (T2-w) MRI, functional dynamic contrast enhanced (DCE) MRI, and ex vivo WMH images with cancer present. Our method determines a non-linear transformation to align WMH with the high resolution in vivo T2-w MRI, followed by mapping of the histopathologic cancer extent onto the T2-w MRI. The cancer extent is then mapped from T2-w MRI onto DCE-MRI using the combined non-rigid and affine transformations determined by the registration. Evaluation of prostate registration is performed by comparison with the 3 time point (3TP) representation of functional DCE data, which provides an independent estimate of cancer
B-spline modal method: a polynomial approach compared to the Fourier modal method.
Walz, Michael; Zebrowski, Thomas; Küchenmeister, Jens; Busch, Kurt
2013-06-17
A detailed analysis of the B-spline Modal Method (BMM) for one- and two-dimensional diffraction gratings and a comparison to the Fourier Modal Method (FMM) is presented. Owing to its intrinsic capability to accurately resolve discontinuities, BMM avoids the notorious problems of FMM that are associated with the Gibbs phenomenon. As a result, BMM facilitates significantly more efficient eigenmode computations. With regard to BMM-based transmission and reflection computations, it is demonstrated that a novel Galerkin approach (in conjunction with a scattering-matrix algorithm) allows for an improved field matching between different layers. This approach is superior to the traditional point-wise field matching. Moreover, only this novel Galerkin approach allows for a competitive extension of BMM to the case of two-dimensional diffraction gratings. These improvements will be very useful for high-accuracy grating computations in general and for the analysis of associated electromagnetic field profiles in particular.
Robust engineering design optimization with non-uniform rational B-splines-based metamodels
NASA Astrophysics Data System (ADS)
Steuben, John C.; Turner, Cameron J.; Crawford, Richard H.
2013-07-01
Non-uniform rational B-splines (NURBs) demonstrate properties that make them attractive as metamodels, or surrogate models, for engineering design purposes. Previous research has resulted in the development of algorithms capable of fitting NURBs-based metamodels to engineering design spaces, and optimizing these models. This article presents an approach to robust optimization that employs NURBs-based metamodels. This robust optimization technique exploits the unique structure of NURBs-based metamodels to derive a simple but effective robustness metric. An algorithm is demonstrated that uses this metric to weigh robustness against optimality, and visualizes the trade-offs between these metamodel properties. This approach is demonstrated with test problems of increasing dimensionality, including several practical design challenges.
Central-force decomposition of spline-based modified embedded atom method potential
NASA Astrophysics Data System (ADS)
Winczewski, S.; Dziedzic, J.; Rybicki, J.
2016-10-01
Central-force decompositions are fundamental to the calculation of stress fields in atomic systems by means of Hardy stress. We derive expressions for a central-force decomposition of the spline-based modified embedded atom method (s-MEAM) potential. The expressions are subsequently simplified to a form that can be readily used in molecular-dynamics simulations, enabling the calculation of the spatial distribution of stress in systems treated with this novel class of empirical potentials. We briefly discuss the properties of the obtained decomposition and highlight further computational techniques that can be expected to benefit from the results of this work. To demonstrate the practicability of the derived expressions, we apply them to calculate stress fields due to an edge dislocation in bcc Mo, comparing their predictions to those of linear elasticity theory.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
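The regularization step can be illustrated with a generic Tikhonov-regularized least-squares solve; a toy linear operator stands in for the reservoir simulator, so this is only a sketch of the well-posedness idea, not of the history-matching algorithm itself.

```python
# Generic Tikhonov regularization sketch: estimate a parameter field m from
# noisy data d = G m + noise by minimizing ||G m - d||^2 + lam * ||m||^2.
# A random linear operator stands in for the (nonlinear) reservoir simulator.
import numpy as np

rng = np.random.default_rng(2)
n_params, n_data = 50, 30
G = rng.standard_normal((n_data, n_params))      # toy forward model (underdetermined)
m_true = np.sin(np.linspace(0, 3 * np.pi, n_params))
d = G @ m_true + 0.05 * rng.standard_normal(n_data)

lam = 1.0                                        # regularization parameter
A = G.T @ G + lam * np.eye(n_params)             # normal equations of the penalized problem
m_hat = np.linalg.solve(A, G.T @ d)

print(f"relative error = {np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true):.3f}")
```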
Federico, Alejandro; Kaufmann, Guillermo H
2005-05-10
We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.
Regression methods for spatial data
NASA Technical Reports Server (NTRS)
Yakowitz, S. J.; Szidarovszky, F.
1982-01-01
The kriging approach, a parametric regression method used by hydrologists and mining engineers, among others, also provides an error estimate for the integral of the regression function. The kriging method is explored and some of its statistical characteristics are described. The Watson method and theory are extended so that the kriging features are displayed. Theoretical and computational comparisons of the kriging and Watson approaches are offered.
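Kriging is closely related to Gaussian-process regression; a minimal sketch of that view, on synthetic one-dimensional "spatial" data with scikit-learn (assumed available), recovers both a prediction and a kriging-style error estimate.

```python
# Kriging viewed as Gaussian-process regression: fit a GP with an RBF kernel
# to synthetic 1-D observations and predict with an accompanying uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 25)).reshape(-1, 1)   # sample locations
y = np.sin(x).ravel() + 0.1 * rng.standard_normal(25)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, y)

x_new = np.linspace(0, 10, 200).reshape(-1, 1)
mean, std = gp.predict(x_new, return_std=True)       # prediction plus error estimate
print(mean[:3], std[:3])
```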
Wrong Signs in Regression Coefficients
NASA Technical Reports Server (NTRS)
McGee, Holly
1999-01-01
When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
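A tiny synthetic demonstration of one cause listed above, multicollinearity: two nearly collinear predictors, both positively related to the response, can still yield one fitted coefficient with the "wrong" sign.

```python
# Multicollinearity can flip a coefficient's sign: x1 and x2 are nearly
# collinear and both positively related to y, yet one fitted coefficient
# may come out negative.
import numpy as np

rng = np.random.default_rng(4)
n = 60
x1 = rng.standard_normal(n)
x2 = x1 + 0.02 * rng.standard_normal(n)           # almost identical to x1
y = 1.0 * x1 + 1.0 * x2 + 0.5 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, b1, b2 =", np.round(beta, 2))    # b1 or b2 may be negative
```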
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
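A minimal wavelet-regression sketch using the PyWavelets package (assumed installed). Unlike the procedure described above, the basis ('db4') and the threshold are fixed here rather than chosen by cross-validation.

```python
# Wavelet regression by coefficient thresholding (PyWavelets assumed installed).
# The basis and threshold are fixed, not cross-validated as in the abstract.
import numpy as np
import pywt

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 512)
signal = np.where(t < 0.5, np.sin(8 * np.pi * t), 0.5)   # piecewise test function
noisy = signal + 0.15 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, 'db4', level=5)
threshold = 0.15 * np.sqrt(2 * np.log(t.size))           # universal threshold, sigma = 0.15
denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode='soft') for c in coeffs[1:]]
estimate = pywt.waverec(denoised, 'db4')[:t.size]

print(f"RMSE before = {np.sqrt(np.mean((noisy - signal) ** 2)):.3f}, "
      f"after = {np.sqrt(np.mean((estimate - signal) ** 2)):.3f}")
```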
NASA Astrophysics Data System (ADS)
Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.
2009-12-01
A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm have been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently as compared to the other interface reconstruction methods available in the literature. Quadratic rate of error reduction with respect to grid size has been observed in all the cases with curved interface shapes; only in situations where the interface geometry is primarily flat, the rate of convergence becomes linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.
Using Spline Functions for the Shape Description of the Surface of Shell Structures
NASA Astrophysics Data System (ADS)
Lenda, Grzegorz
2014-12-01
The assessment of the shape of the covering surface of shell structures is an important issue both from the point of view of safety and of the functionality of the construction. The most numerous group among this type of construction are objects having the shape of a quadric (cooling towers, tanks for gas and liquids, radio-telescope dishes, etc.). The material from observation of these objects (point sets), collected during periodic measurements, is usually converted into a continuous form in the process of approximation using a quadric surface. The resulting models are then applied in the assessment of the deformation of the surface over a given period of time. Such a procedure has, however, some significant limitations. Approximation with quadrics allows the basic dimensions and location of the construction to be determined, but it produces idealized objects that provide no information on local surface deformations. These can only be identified by comparing the model with the point set of observations, and if the periodic measurements are carried out at independent, separate points, the existing deformations cannot be determined directly. The second problem results from the single-equation character of the idealized approximation model: real deformations of the object change its basic parameters, among them the lengths of the semi-axes of the principal quadrics. The third problem appears when the construction is not a quadric and no equation describing its shape is available; choosing the wrong kind of approximation function produces a model with large deviations from the observed points. All the above-mentioned inconveniences can be avoided by applying splines to the description of the shape of the surface of shell structures. The use of functions of this type, however, comes up against other kinds of limitations. This study deals with the above subject, presenting several methods allowing the increase of accuracy and decrease of
Castillo, Victor Manuel
1999-01-01
A collocation method using cubic splines is developed and applied to simulate steady and time-dependent, including turbulent, thermally convecting flows for two-dimensional compressible fluids. The state variables and the fluxes of the conserved quantities are approximated by cubic splines in both spatial directions. This method is shown to be numerically conservative and to have a local truncation error proportional to the fourth power of the grid spacing. A "dual-staggered" Cartesian grid, where energy and momentum are updated on one grid and mass density on the other, is used to discretize the flux form of the compressible Navier-Stokes equations. Each grid line is staggered so that the fluxes, in each direction, are calculated at the grid midpoints. This numerical method is validated by simulating thermally convecting flows, from steady to turbulent, reproducing known results. Once validated, the method is used to investigate many aspects of thermal convection with high numerical accuracy. Simulations demonstrate that multiple steady solutions can coexist at the same Rayleigh number for compressible convection. As a system is driven further from equilibrium, a drop in the time-averaged dimensionless heat flux (and the dimensionless internal entropy production rate) occurs at the transition from laminar-periodic to chaotic flow. This observation is consistent with experiments on real convecting fluids. Near this transition, both harmonic and chaotic solutions may exist for the same Rayleigh number. The chaotic flow loses phase-space information at a greater rate, while the periodic flow transports heat (produces entropy) more effectively. A linear sum of the dimensionless forms of these rates connects the two flow morphologies over the entire range for which they coexist. For simulations of systems with higher Rayleigh numbers, a scaling relation exists relating the dimensionless heat flux to the two-sevenths power of the Rayleigh number, suggesting the
Regression Discontinuity Designs in Epidemiology
Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till
2014-01-01
When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
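A toy sketch of the basic design on synthetic data, not the South African cohort: treatment switches at a threshold of the running variable, and the effect is read off as the jump between two local linear fits evaluated at the cutoff.

```python
# Sharp regression discontinuity sketch on synthetic data: treatment is
# assigned below a threshold of the running variable; the effect is the jump
# between local linear fits evaluated at the cutoff.
import numpy as np

rng = np.random.default_rng(6)
n, cutoff, true_effect = 2000, 200.0, -0.8
running = rng.uniform(100, 300, n)                       # e.g. a baseline score
treated = running < cutoff                                # treatment below the threshold
outcome = 0.01 * running + true_effect * treated + rng.standard_normal(n)

bandwidth = 30.0
left = (running >= cutoff - bandwidth) & (running < cutoff)
right = (running >= cutoff) & (running <= cutoff + bandwidth)

b_left = np.polyfit(running[left], outcome[left], 1)      # linear fit just below
b_right = np.polyfit(running[right], outcome[right], 1)   # linear fit just above
effect = np.polyval(b_left, cutoff) - np.polyval(b_right, cutoff)
print(f"estimated effect at the cutoff = {effect:.2f} (true {true_effect})")
```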
Bignardi, A B; El Faro, L; Torres Júnior, R A A; Cardoso, V L; Machado, P F; Albuquerque, L G
2011-01-01
We analyzed 152,145 test-day records from 7317 first lactations of Holstein cows recorded from 1995 to 2003. Our objective was to model variations in test-day milk yield during the first lactation of Holstein cows by random regression model (RRM), using various functions in order to obtain adequate and parsimonious models for the estimation of genetic parameters. Test-day milk yields were grouped into weekly classes of days in milk, ranging from 1 to 44 weeks. The contemporary groups were defined as herd-test-day. The analyses were performed using a single-trait RRM, including the direct additive, permanent environmental and residual random effects. In addition, contemporary group and linear and quadratic effects of the age of cow at calving were included as fixed effects. The mean trend of milk yield was modeled with a fourth-order orthogonal Legendre polynomial. The additive genetic and permanent environmental covariance functions were estimated by random regression on two parametric functions, Ali and Schaeffer and Wilmink, and on B-spline functions of days in milk. The covariance components and the genetic parameters were estimated by the restricted maximum likelihood method. Results from RRM parametric and B-spline functions were compared to RRM on Legendre polynomials and with a multi-trait analysis, using the same data set. Heritability estimates presented similar trends during mid-lactation (13 to 31 weeks) and between week 37 and the end of lactation, for all RRM. Heritabilities obtained by multi-trait analysis were of a lower magnitude than those estimated by RRM. The RRMs with a higher number of parameters were more useful to describe the genetic variation of test-day milk yield throughout the lactation. RRM using B-spline and Legendre polynomials as base functions appears to be the most adequate to describe the covariance structure of the data.
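One small ingredient of the model above, the fixed mean lactation trend fitted with a fourth-order Legendre polynomial, can be sketched directly; the random-regression genetic analysis (REML, covariance functions) is not reproduced, and the test-day means below are hypothetical.

```python
# Fitting a mean lactation trend over weeks in milk with a fourth-order
# Legendre polynomial (hypothetical test-day means; the genetic random
# regression itself is not reproduced here).
import numpy as np
from numpy.polynomial import Legendre

rng = np.random.default_rng(7)
weeks = np.arange(1, 45, dtype=float)                        # weeks in milk, 1..44
true_curve = 20 + 8 * np.exp(-((weeks - 8) ** 2) / 200) - 0.15 * weeks
yields = true_curve + rng.standard_normal(weeks.size)        # hypothetical mean yields (kg)

mean_curve = Legendre.fit(weeks, yields, deg=4)               # orthogonal basis on [1, 44]
print(np.round(mean_curve(np.array([1.0, 22.0, 44.0])), 2))   # fitted yield at start, mid, end
```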
Adaptive Multilinear Tensor Product Wavelets.
Weiss, Kenneth; Lindstrom, Peter
2016-01-01
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
Recursive bias estimation for high dimensional regression smoothers
Hengartner, Nicolas W; Cornillon, Pierre - Andre; Matzner - Lober, Eric
2009-01-01
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to a biased smoother. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of the bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-boosting. For the multivariate thin-plate spline smoother, we prove that our procedure adapts to the correct and unknown order of smoothness for estimating an unknown function m belonging to H(ν) (a Sobolev space, where ν should be bigger than d/2). We apply our method to simulated and real data and show that our method compares favorably with existing procedures.
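A one-dimensional sketch of the idea only, under the assumption that a spline smoother with a deliberately large smoothing parameter stands in for the over-smoothed multivariate estimator: the residuals are smoothed with the same smoother and added back, repeatedly.

```python
# Iterative bias correction of an over-smoothed estimator: smooth the
# residuals with the same (deliberately over-smoothing) spline and add the
# correction back.  One-dimensional sketch of the idea only.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(8)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(4 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

s_big = 20.0                                      # large smoothing value: low variance, big bias
fit = UnivariateSpline(x, y, s=s_big)(x)          # initial over-smoothed estimate

for _ in range(10):                               # iteratively smooth the residuals
    residual = y - fit
    fit = fit + UnivariateSpline(x, residual, s=s_big)(x)

print(f"final RMSE vs. truth = {np.sqrt(np.mean((fit - np.sin(4 * np.pi * x)) ** 2)):.3f}")
```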
Embedded Sensors for Measuring Surface Regression
NASA Technical Reports Server (NTRS)
Gramer, Daniel J.; Taagen, Thomas J.; Vermaak, Anton G.
2006-01-01
The development and evaluation of new hybrid and solid rocket motors requires accurate characterization of the propellant surface regression as a function of key operational parameters. These characteristics establish the propellant flow rate and are prime design drivers affecting the propulsion system geometry, size, and overall performance. There is a similar need for the development of advanced ablative materials, and the use of conventional ablatives exposed to new operational environments. The Miniature Surface Regression Sensor (MSRS) was developed to serve these applications. It is designed to be cast or embedded in the material of interest and regresses along with it. During this process, the resistance of the sensor is related to its instantaneous length, allowing the real-time thickness of the host material to be established. The time derivative of this data reveals the instantaneous surface regression rate. The MSRS could also be adapted to perform similar measurements for a variety of other host materials when it is desired to monitor thicknesses and/or regression rate for purposes of safety, operational control, or research. For example, the sensor could be used to monitor the thicknesses of brake linings or racecar tires and indicate when they need to be replaced. At the time of this reporting, over 200 of these sensors have been installed into a variety of host materials. An MSRS can be made in either of two configurations, denoted ladder and continuous (see Figure 1). A ladder MSRS includes two highly electrically conductive legs, across which narrow strips of electrically resistive material are placed at small increments of length. These strips resemble the rungs of a ladder and are electrically equivalent to many tiny resistors connected in parallel. A substrate material provides structural support for the legs and rungs. The instantaneous sensor resistance is read by an external signal conditioner via wires attached to the conductive legs on the
Eckhard, Timo; Eckhard, Jia; Valero, Eva M; Nieves, Juan Luis
2014-06-10
In spectral imaging, spatial and spectral information of an image scene are combined. There exist several technologies that allow the acquisition of this kind of data. Depending on the optical components used in the spectral imaging systems, misalignment between image channels can occur. Further, the projection of some systems deviates from that of a perfect optical lens system enough that a distortion of scene content in the images becomes apparent to the observer. Correcting distortion and misalignment can be complicated for spectral image data if they are different at each image channel. In this work, we propose an image registration and distortion correction scheme for spectral image cubes that is based on a free-form deformation model of uniform cubic B-splines with multilevel grid refinement. This scheme is adaptive with respect to image size, degree of misalignment, and degree of distortion, and in that sense is superior to previous approaches. We support our proposed scheme with empirical data from a Bragg-grating-based hyperspectral imager, for which a registration accuracy of approximately one pixel was achieved. PMID:24921143
Interpretation of Standardized Regression Coefficients in Multiple Regression.
ERIC Educational Resources Information Center
Thayer, Jerome D.
The extent to which standardized regression coefficients (beta values) can be used to determine the importance of a variable in an equation was explored. The beta value and the part correlation coefficient--also called the semi-partial correlation coefficient and reported in squared form as the incremental "r squared"--were compared for variables…
Laplace regression with censored data.
Bottai, Matteo; Zhang, Jiajia
2010-08-01
We consider a regression model where the error term is assumed to follow a type of asymmetric Laplace distribution. We explore its use in the estimation of conditional quantiles of a continuous outcome variable given a set of covariates in the presence of random censoring. Censoring may depend on covariates. Estimation of the regression coefficients is carried out by maximizing a non-differentiable likelihood function. In the scenarios considered in a simulation study, the Laplace estimator showed correct coverage and shorter computation time than the alternative methods considered, some of which occasionally failed to converge. We illustrate the use of Laplace regression with an application to survival time in patients with small cell lung cancer.
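Without censoring, maximizing an asymmetric-Laplace likelihood reduces to ordinary quantile regression; a minimal uncensored sketch with statsmodels (assumed installed) fits conditional quantiles on synthetic heteroscedastic data. The censoring mechanism of the abstract is not handled here.

```python
# Uncensored quantile regression sketch (statsmodels): the asymmetric-Laplace
# likelihood of the abstract corresponds to this estimator when there is no
# censoring.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 500
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.standard_normal(n) * (0.5 + 0.1 * x)   # heteroscedastic errors

df = pd.DataFrame({"x": x, "y": y})
median_fit = smf.quantreg("y ~ x", df).fit(q=0.5)    # conditional median
upper_fit = smf.quantreg("y ~ x", df).fit(q=0.9)     # conditional 90th percentile
print(median_fit.params, upper_fit.params, sep="\n")
```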
Survival Data and Regression Models
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right-censored data and develop two types of regression models. The first one concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariables. The second one is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology and, although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.
Interquantile Shrinkage in Regression Models
Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.
2012-01-01
Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546
[Is regression of atherosclerosis possible?].
Thomas, D; Richard, J L; Emmerich, J; Bruckert, E; Delahaye, F
1992-10-01
Experimental studies have shown the regression of atherosclerosis in animals given a cholesterol-rich diet and then given a normal diet or hypolipidemic therapy. Despite favourable results of clinical trials of primary prevention modifying the lipid profile, the concept of atherosclerosis regression in man remains very controversial. The methodological approach is difficult: this is based on angiographic data and requires strict standardisation of angiographic views and reliable quantitative techniques of analysis which are available with image processing. Several methodologically acceptable clinical coronary studies have shown not only stabilisation but also regression of atherosclerotic lesions with reductions of about 25% in total cholesterol levels and of about 40% in LDL cholesterol levels. These reductions were obtained either by drugs as in CLAS (Cholesterol Lowering Atherosclerosis Study), FATS (Familial Atherosclerosis Treatment Study) and SCOR (Specialized Center of Research Intervention Trial), by profound modifications in dietary habits as in the Lifestyle Heart Trial, or by surgery (ileo-caecal bypass) as in POSCH (Program On the Surgical Control of the Hyperlipidemias). On the other hand, trials with non-lipid lowering drugs such as the calcium antagonists (INTACT, MHIS) have not shown significant regression of existing atherosclerotic lesions but only a decrease in the number of new lesions. The clinical benefits of these regression studies are difficult to demonstrate given the limited period of observation, relatively small population numbers and the fact that in some cases the subjects were asymptomatic. The decrease in the number of cardiovascular events therefore seems relatively modest and concerns essentially subjects who were symptomatic initially. The clinical repercussion of studies of prevention involving a single lipid factor is probably partially due to the reduction in progression and anatomical regression of the atherosclerotic plaque
Correlation Weights in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.; Jones, Jeff A.
2010-01-01
A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…
Weighting Regressions by Propensity Scores
ERIC Educational Resources Information Center
Freedman, David A.; Berk, Richard A.
2008-01-01
Regressions can be weighted by propensity scores in order to reduce bias. However, weighting is likely to increase random error in the estimates, and to bias the estimated standard errors downward, even when selection mechanisms are well understood. Moreover, in some cases, weighting will increase the bias in estimated causal parameters. If…
Multiple Regression: A Leisurely Primer.
ERIC Educational Resources Information Center
Daniel, Larry G.; Onwuegbuzie, Anthony J.
Multiple regression is a useful statistical technique when the researcher is considering situations in which variables of interest are theorized to be multiply caused. It may also be useful in those situations in which the researcher is interested in studies of predictability of phenomena of interest. This paper provides an introduction to…
Cactus: An Introduction to Regression
ERIC Educational Resources Information Center
Hyde, Hartley
2008-01-01
When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…
Ridge Regression for Interactive Models.
ERIC Educational Resources Information Center
Tate, Richard L.
1988-01-01
An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are favorable to…
Quantile Regression with Censored Data
ERIC Educational Resources Information Center
Lin, Guixian
2009-01-01
The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…
Logistic regression: a brief primer.
Stoltzfus, Jill C
2011-10-01
Regression techniques are versatile in their application to medical research because they can measure associations, predict outcomes, and control for confounding variable effects. As one such technique, logistic regression is an efficient and powerful way to analyze the effect of a group of independent variables on a binary outcome by quantifying each independent variable's unique contribution. Using components of linear regression reflected in the logit scale, logistic regression iteratively identifies the strongest linear combination of variables with the greatest probability of detecting the observed outcome. Important considerations when conducting logistic regression include selecting independent variables, ensuring that relevant assumptions are met, and choosing an appropriate model building strategy. For independent variable selection, one should be guided by such factors as accepted theory, previous empirical investigations, clinical considerations, and univariate statistical analyses, with acknowledgement of potential confounding variables that should be accounted for. Basic assumptions that must be met for logistic regression include independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers. Additionally, there should be an adequate number of events per independent variable to avoid an overfit model, with commonly recommended minimum "rules of thumb" ranging from 10 to 20 events per covariate. Regarding model building strategies, the three general types are direct/standard, sequential/hierarchical, and stepwise/statistical, with each having a different emphasis and purpose. Before reaching definitive conclusions from the results of any of these methods, one should formally quantify the model's internal validity (i.e., replicability within the same data set) and external validity (i.e., generalizability beyond the current sample). The resulting logistic regression model
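A minimal logistic-regression fit on synthetic data with statsmodels (assumed installed), reporting coefficients on the logit scale and the corresponding odds ratios. Variable selection, assumption checks, and the validation steps discussed above are not shown.

```python
# Minimal logistic-regression sketch (statsmodels): two predictors, a binary
# outcome, coefficients on the logit (log-odds) scale plus odds ratios.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 400
age = rng.uniform(30, 80, n)
dose = rng.uniform(0, 5, n)
logit = -6.0 + 0.07 * age + 0.4 * dose
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # synthetic binary outcome

X = sm.add_constant(np.column_stack([age, dose]))
model = sm.Logit(outcome, X).fit(disp=False)
print(model.params)                     # log-odds per unit of age and dose
print(np.exp(model.params[1:]))         # corresponding odds ratios
```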
Birchler, W.D.; Schilling, S.A.
2001-02-01
The purpose of this report is to demonstrate that modern computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems can be used in the Department of Energy (DOE) Nuclear Weapons Complex (NWC) to design new and remodel old products, fabricate old and new parts, and reproduce legacy data within the inspection uncertainty limits. In this study, two two-dimensional splines are compared with several modern CAD curve-fitting modeling algorithms. The first curve-fitting algorithm is called the Wilson-Fowler Spline (WFS), and the second is called a parametric cubic spline (PCS). Modern CAD systems usually utilize parametric cubic splines, B-splines, or both.
Morrissey, Edward R; Juárez, Miguel A; Denby, Katherine J; Burroughs, Nigel J
2011-10-01
We propose a semiparametric Bayesian model, based on penalized splines, for the recovery of the time-invariant topology of a causal interaction network from longitudinal data. Our motivation is inference of gene regulatory networks from low-resolution microarray time series, where existence of nonlinear interactions is well known. Parenthood relations are mapped by augmenting the model with kinship indicators and providing these with either an overall or gene-wise hierarchical structure. Appropriate specification of the prior is crucial to control the flexibility of the splines, especially under circumstances of scarce data; thus, we provide an informative, proper prior. Substantive improvement in network inference over a linear model is demonstrated using synthetic data drawn from ordinary differential equation models and gene expression from an experimental data set of the Arabidopsis thaliana circadian rhythm.
Regression Verification Using Impact Summaries
NASA Technical Reports Server (NTRS)
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program
NASA Astrophysics Data System (ADS)
Chen, Jian; Tustison, Nicholas J.; Amini, Amir A.
2006-03-01
In this paper, an improved framework for estimation of 3-D left-ventricular deformations from tagged MRI is presented. Contiguous short- and long-axis tagged MR images are collected and are used within a 4-D B-Spline based deformable model to determine 4-D displacements and strains. An initial 4-D B-spline model fitted to sparse tag line data is first constructed by minimizing a 4-D Chamfer distance potential-based energy function for aligning isoparametric planes of the model with tag line locations; subsequently, dense virtual tag lines based on 2-D phase-based displacement estimates and the initial model are created. A final 4-D B-spline model with increased knots is fitted to the virtual tag lines. From the final model, we can extract accurate 3-D myocardial deformation fields and corresponding strain maps which are local measures of non-rigid deformation. Lagrangian strains in simulated data are derived which show improvement over our previous work. The method is also applied to 3-D tagged MRI data collected in a canine.
Ahn, W.; Anderson, K.S.; De, S.
2013-01-01
An interpolating spline-based approach is presented for modeling multi-flexible-body systems in the divide-and-conquer (DCA) scheme. This algorithm uses the floating frame of reference formulation and piecewise spline functions to construct and solve the non-linear equations of motion of the multi-flexible-body system undergoing large rotations and translations. The new approach is compared with the flexible DCA (FDCA) that uses the assumed modes method [1]. The FDCA, in many cases, must resort to sub-structuring to accurately model the deformation of the system. We demonstrate, through numerical examples, that the interpolating spline-based approach is comparable in accuracy and superior in efficiency to the FDCA. The present approach is appropriate for modeling flexible mechanisms with thin 1D bodies undergoing large rotations and translations, including those with irregular shapes. As such, the present approach extends the current capability of the DCA to model deformable systems. The algorithm retains the theoretical logarithmic complexity inherent in the DCA when implemented in parallel. PMID:24124265
SINGH, G. D.; McNAMARA JR, J. A.; LOZANOFF, S.
1997-01-01
This study determines deformations of the midface that contribute to a class III appearance, employing thin-plate spline analysis. A total of 135 lateral cephalographs of prepubertal children of European-American descent with either class III malocclusions or a class I molar occlusion were compared. The cephalographs were traced and checked, and 7 homologous landmarks of the midface were identified and digitised. The data sets were scaled to an equivalent size and subjected to Procrustes analysis. These statistical tests indicated significant differences (P<0.05) between the averaged class I and class III morphologies. Thin-plate spline analysis indicated that both affine and nonaffine transformations contribute towards the total spline for the averaged midfacial configuration. For nonaffine transformations, partial warp 3 had the highest magnitude, indicating the large scale deformations of the midfacial configuration. These deformations affected the palatal landmarks, and were associated with compression of the midfacial complex in the anteroposterior plane predominantly. Partial warp 4 produced some vertical compression of the posterior aspect of the midfacial complex whereas partial warps 1 and 2 indicated localised shape changes of the maxillary alveolus region. Large spatial-scale deformations therefore affect the midfacial complex in an anteroposterior axis, in combination with vertical compression and localised distortions. These deformations may represent a developmental diminution of the palatal complex anteroposteriorly that, allied with vertical shortening of midfacial height posteriorly, results in class III malocclusions with a retrusive midfacial profile. PMID:9449078
NASA Astrophysics Data System (ADS)
Liu, Yutong; Uberti, Mariano; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael D.
2009-02-01
Coregistration of in vivo magnetic resonance imaging (MRI) with histology provides validation of disease biomarker and pathobiology studies. Although thin-plate splines are widely used in such image registration, point landmark selection is error prone and often time-consuming. We present a technique to optimize landmark selection for thin-plate splines and demonstrate its usefulness in warping rodent brain MRI to histological sections. In this technique, contours are drawn on the corresponding MRI slices and images of histological sections. The landmarks are extracted from the contours by equal spacing and then optimized by minimizing a cost function consisting of the landmark displacement and contour curvature. The technique was validated using simulation data and brain MRI-histology coregistration in a murine model of HIV-1 encephalitis. Registration error was quantified by calculating the target registration error (TRE). Without optimization, the TRE of approximately 8 pixels was stable across 20-80 landmarks. The optimized results were more accurate at low landmark numbers (TRE of approximately 2 pixels for 50 landmarks), while the accuracy decreased (TRE of approximately 8 pixels) for larger numbers of landmarks (70-80). The results demonstrate that registration accuracy decreases as the landmark number increases, and that the optimization offers more confidence in MRI-histology registration using thin-plate splines.
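A minimal thin-plate-spline landmark warp in two dimensions, using SciPy's RBFInterpolator with the thin-plate-spline kernel (available in SciPy 1.7 and later); the landmark coordinates are hypothetical and the contour-based landmark optimization described above is not reproduced.

```python
# Thin-plate-spline warp defined by paired landmarks (hypothetical coordinates),
# using scipy.interpolate.RBFInterpolator with the thin-plate-spline kernel
# (SciPy >= 1.7).  The landmark-optimization step of the abstract is not shown.
import numpy as np
from scipy.interpolate import RBFInterpolator

mri_landmarks = np.array([[10, 12], [40, 15], [60, 40], [35, 60], [12, 45]], dtype=float)
hist_landmarks = mri_landmarks + np.array([[2, -1], [1, 2], [-2, 1], [0, 3], [1, 0]])

# One vector-valued interpolator maps MRI space onto histology space.
warp = RBFInterpolator(mri_landmarks, hist_landmarks, kernel='thin_plate_spline')

query = np.array([[30.0, 30.0], [50.0, 20.0]])      # arbitrary MRI points to transfer
print(warp(query))                                   # their estimated histology positions
```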
The Immune Response to Papillomavirus During Infection Persistence and Regression
Hibma, Merilyn H
2012-01-01
Human papillomavirus (HPV) infections cause a significant global health burden, predominantly due to HPV-associated cancers. HPV infects only the epidermal cells of cutaneous and mucosal skin, without penetration into the dermal tissues. Infections may persist for months or years, contributed by an array of viral immune evasion mechanisms. However, in the majority of cases immunity-based regression of HPV lesions does eventually occur. The role of the innate immune response to HPV in persistence and regression of HPV infection is not well understood. Although an initial inflammatory infiltrate may contribute to disease regression, sustained inflammation at the HPV-induced lesions, characterized by macrophage and neutrophil infiltration, has been observed in persistence. Pathogen-associated molecular patterns (PAMPs) are important in innate recognition. The double-stranded DNA and the L1 and L2 capsid components of the HPV virion are potential PAMPs that can trigger signaling through cellular pattern recognition receptors, including toll-like receptors (TLR). TLR expression is increased in regressing HPV disease but is reduced in persistent lesions, suggesting a role for TLR in HPV regression. With regard to the adaptive immune response, a key indicator of regression in humans is infiltration of the lesion with both CD4 and CD8 T cells. In individuals with persistent lesions, CD8 T cell and immune suppressive regulatory T cells (Tregs) infiltrate the infection site. There is no association between persistence or regression and the presence of serum antibodies to the viral capsid antigens of HPV. There is still much to be learned about the immunological events that trigger regression of HPV disease. Understanding the viral and host factors that influence persistence and regression is important for the development of better immunotherapeutic treatments for HPV-associated disease. PMID:23341859
CT segmentation of dental shapes by anatomy-driven reformation imaging and B-spline modelling.
Barone, S; Paoli, A; Razionale, A V
2016-06-01
Dedicated imaging methods are among the most important tools of modern computer-aided medical applications. In the last few years, cone beam computed tomography (CBCT) has gained popularity in digital dentistry for 3D imaging of jawbones and teeth. However, the anatomy of a maxillofacial region complicates the assessment of tooth geometry and anatomical location when using standard orthogonal views of the CT data set. In particular, a tooth is defined by a sub-region, which cannot be easily separated from surrounding tissues by only considering pixel grey-intensity values. For this reason, an image enhancement is usually necessary in order to properly segment tooth geometries. In this paper, an anatomy-driven methodology to reconstruct individual 3D tooth anatomies by processing CBCT data is presented. The main concept is to generate a small set of multi-planar reformation images along significant views for each target tooth, driven by the individual anatomical geometry of a specific patient. The reformation images greatly enhance the clearness of the target tooth contours. A set of meaningful 2D tooth contours is extracted and used to automatically model the overall 3D tooth shape through a B-spline representation. The effectiveness of the methodology has been verified by comparing some anatomy-driven reconstructions of anterior and premolar teeth with those obtained by using standard tooth segmentation tools. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26418417
Four-dimensional B-spline-based motion analysis of tagged cardiac MR images
NASA Astrophysics Data System (ADS)
Ozturk, Cengizhan; McVeigh, Elliot R.
1999-05-01
In recent years, with development of new MRI techniques, noninvasive evaluation of global and regional cardiac function is becoming a reality. One of the methods used for this purpose is MRI tagging. In tagging, spatially encoded magnetic saturation planes, tags, are created within tissues. These act as temporary markers and move with the tissue. In cardiac tagging, tag deformation pattern provides useful qualitative and quantitative information about the functional properties of underlying myocardium. The measured deformation of a single tag plane contains only unidirectional information of the past motion. In order to track the motion of a cardiac material point, this sparse, single dimensional data has to be combined with similar information gathered from other tag sets and all time frames. Previously, several methods have been developed which rely on the specific geometry of the chambers. Here, we employ an image plane based, simple cartesian coordinate system and provide a stepwise method to describe the heart motion using a four-dimensional tensor product of B-splines. The proposed displacement and forward motion fields exhibited sub-pixel accuracy. Since our motion fields are parametric and based on an image plane based coordinate system, trajectories or other derived values (velocity, acceleration, strains...) can be calculated for any desired point on the MRI images. This method is sufficiently general so that the motion of any tagged structure can be tracked.
NASA Astrophysics Data System (ADS)
Martinez, Leslie A.; Narea, Freddy J.; Cedeño, Fernando; Muñoz, Aaron A.; Reigosa, Aldo; Bravo, Kelly
2013-11-01
Noninvasive optical techniques have attracted considerable interest in recent years because they provide a large amount of information on the structure and composition of biological tissues quickly and painlessly. This study classifies the degrees of histological differentiation of neoplastic breast tissue in white adipose tissue samples through numerical parameterization of the diffuse reflection spectra using a Fourier series approximation. The white adipose tissue was irradiated with the MiniScan XEplus spectrophotometer; the samples came from mastectomies of patients aged 38 and 50 with a cancerous lesion in the breast and were provided by the pathologist together with the medical report indicating the histological grade of the tumor. We implemented a parameterization algorithm in which the classification criterion is the modulus of the minimum difference between the numerical approximation coefficients ai and the average approximation coefficients āl obtained for each histological grade. It is confirmed that cubic spline interpolation, at low computational cost, allows the tissues under study to be classified into histological grades with 91% certainty from |ai - āl|.
Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping
2003-05-01
In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise-ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality for better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. As a result, the mean gray-to-white matter CBF ratio from the hemodynamic semi-quantitative analysis is 2.10 +/- 0.34. The ratio evaluated from perfusion MRI is comparable to that obtained with the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.
Spline-based image-to-volume registration for three-dimensional electron microscopy.
Jonić, S; Sorzano, C O S; Thévenaz, P; El-Bez, C; De Carlo, S; Unser, M
2005-07-01
This paper presents an algorithm based on a continuous framework for a posteriori angular and translational assignment in three-dimensional electron microscopy (3DEM) of single particles. Our algorithm can be used advantageously to refine the assignment of standard quantized-parameter methods by registering the images to a reference 3D particle model. We achieve the registration by employing a gradient-based iterative minimization of a least-squares measure of dissimilarity between an image and a projection of the volume in the Fourier transform (FT) domain. We compute the FT of the projection using the central-slice theorem (CST). To compute the gradient accurately, we take advantage of a cubic B-spline model of the data in the frequency domain. To improve the robustness of the algorithm, we weight the cost function in the FT domain and apply a "mixed" strategy for the assignment based on the minimum value of the cost function at registration for several different initializations. We validate our algorithm in a fully controlled simulation environment. We show that the mixed strategy improves the assignment accuracy; on our data, the quality of the angular and translational assignment was better than 2 voxels (i.e., 6.54 angstroms). We also test the performance of our algorithm on real EM data. We conclude that our algorithm outperforms a standard projection-matching refinement in terms of both consistency of 3D reconstructions and speed. PMID:15885434
Quartic B-spline collocation method applied to Korteweg de Vries equation
NASA Astrophysics Data System (ADS)
Zin, Shazalina Mat; Majid, Ahmad Abd; Ismail, Ahmad Izani Md
2014-07-01
The Korteweg de Vries (KdV) equation is known as a mathematical model of shallow water waves. The general form of this equation is u_t + εuu_x + μu_xxx = 0, where u(x,t) describes the elongation of the wave at displacement x and time t. In this work, a one-soliton solution of the KdV equation has been obtained numerically using the quartic B-spline collocation method for the displacement x and a finite difference approach for the time t. Two test problems were identified to be solved. Approximate solutions and errors for these two test problems were obtained for different values of t. In order to assess the accuracy of the method, the L2-norm and L∞-norm were calculated. The mass, energy and momentum of the KdV equation were also calculated. The results obtained show that the present method approximates the solution very well, but as time increases, the L2-norm and L∞-norm also increase.
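The collocation scheme rests on the quartic B-spline basis; the following sketch evaluates that basis with the Cox-de Boor recursion and checks the partition-of-unity property on an illustrative uniform knot vector. It is not the paper's KdV solver, only the basis machinery such a solver builds on:

```python
# Hedged sketch: quartic (degree-4) B-spline basis evaluation via the
# Cox-de Boor recursion, with a partition-of-unity check on the interior of
# an illustrative uniform knot vector.
import numpy as np

def bspline_basis(i, p, knots, x):
    """Value of the i-th B-spline of degree p over `knots`, evaluated at x."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    val = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        val += (x - knots[i]) / denom * bspline_basis(i, p - 1, knots, x)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        val += (knots[i + p + 1] - x) / denom * bspline_basis(i + 1, p - 1, knots, x)
    return val

degree = 4                                    # quartic
knots = np.arange(-4.0, 15.0)                 # uniform knots; interior is [0, 10]
n_basis = len(knots) - degree - 1

for x in (0.3, 2.5, 7.1):
    total = sum(bspline_basis(i, degree, knots, x) for i in range(n_basis))
    print(f"x = {x}: sum of quartic basis functions = {total:.6f}")  # ~1.0
```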
Characteristics method with cubic-spline interpolation for open channel flow computation
NASA Astrophysics Data System (ADS)
Tsai, Tung-Lin; Chiang, Shih-Wei; Yang, Jinn-Chuang
2004-10-01
In the framework of the specified-time-interval scheme, the accuracy of the characteristics method is strongly dependent on the form of the interpolation. Linear interpolation has commonly been used to couple with the characteristics method (LI method) in open channel flow computation. The LI method is easy to implement, but it leads to an inevitable smoothing of the solution. The characteristics method with Hermite cubic interpolation (HP method, originally developed by Holly and Preissmann, 1977) was then proposed to largely reduce the error induced by the LI method. In this paper, cubic-spline interpolation on the space line or on the time line is employed together with the characteristics method (CS method) for unsteady flow computation in open channels. Two hypothetical examples, including gradually and rapidly varied flows, are used to examine the applicability of the CS method as compared with the LI method, the HP method, and analytical solutions. The simulated results show that the CS method is comparable to the HP method and more accurate than the LI method. Without having to handle the additional equations for spatial or temporal derivatives, the CS method is easier to implement and more efficient than the HP method.
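A minimal sketch of the interpolation step underlying the CS method, assuming a uniform grid, an illustrative celerity, and synthetic stage values: nodal values on the previous time line are fitted with a cubic spline and sampled at the foot of a characteristic that falls between grid nodes.

```python
# Hedged sketch of the interpolation step at the heart of the CS method.
# Grid spacing, celerity, and stage values are illustrative only.
import numpy as np
from scipy.interpolate import CubicSpline

dx, dt = 50.0, 10.0                        # grid spacing (m) and time step (s)
x_nodes = np.arange(0.0, 1000.0 + dx, dx)
h_prev = 2.0 + 0.3 * np.sin(2 * np.pi * x_nodes / 1000.0)  # stage at time n

spline = CubicSpline(x_nodes, h_prev)      # cubic spline on the space line

# Characteristic reaching node x_j at time n+1 with celerity c = 4.2 m/s:
x_j, c = 500.0, 4.2
x_foot = x_j - c * dt                      # foot of the characteristic (m)
print("interpolated stage at the characteristic foot:", spline(x_foot))
```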
Merging quantum-chemistry with B-splines to describe molecular photoionization
NASA Astrophysics Data System (ADS)
Argenti, L.; Marante, C.; Klinker, M.; Corral, I.; Gonzalez, J.; Martin, F.
2016-05-01
Theoretical description of observables in attosecond pump-probe experiments requires a good representation of the system's ionization continuum. For polyelectronic atoms and molecules, however, this is still a challenge, due to the complicated short-range structure of correlated electronic wavefunctions. Whereas quantum chemistry packages (QCP) implementing sophisticated methods to compute bound electronic molecular states are well established, comparable tools for the continuum are not yet widely available. To tackle this problem, we have developed a new approach that, by means of a hybrid Gaussian-B-spline basis, interfaces existing QCPs with close-coupling scattering methods. To illustrate the viability of this approach, we report results for the multichannel ionization of the helium atom and of the hydrogen molecule that are in excellent agreement with existing accurate benchmarks. These findings, together with the flexibility of QCPs, make this approach a good candidate for the theoretical study of the ionization of poly-electronic systems. FP7/ERC Grant XCHEM 290853.
NASA Astrophysics Data System (ADS)
Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.
2016-03-01
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
Quantile Regression With Measurement Error
Wei, Ying; Carroll, Raymond J.
2010-01-01
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802
Precision and Recall for Regression
NASA Astrophysics Data System (ADS)
Torgo, Luis; Ribeiro, Rita
Cost sensitive prediction is a key task in many real world applications. Most existing research in this area deals with classification problems. This paper addresses a related regression problem: the prediction of rare extreme values of a continuous variable. These values are often regarded as outliers and removed from posterior analysis. However, for many applications (e.g. in finance, meteorology, biology, etc.) these are the key values that we want to accurately predict. Any learning method obtains models by optimizing some preference criteria. In this paper we propose new evaluation criteria that are more adequate for these applications. We describe a generalization for regression of the concepts of precision and recall often used in classification. Using these new evaluation metrics we are able to focus the evaluation of predictive models on the cases that really matter for these applications. Our experiments indicate the advantages of the use of these new measures when comparing predictive models in the context of our target applications.
On distributed wavefront reconstruction for large-scale adaptive optics systems.
de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel
2016-05-01
The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.
Unification of regression-based methods for the analysis of natural selection.
Morrissey, Michael B; Sakrejda, Krzysztof
2013-07-01
Regression analyses are central to characterization of the form and strength of natural selection in nature. Two common analyses that are currently used to characterize selection are (1) least squares-based approximation of the individual relative fitness surface for the purpose of obtaining quantitatively useful selection gradients, and (2) spline-based estimation of (absolute) fitness functions to obtain flexible inference of the shape of functions by which fitness and phenotype are related. These two sets of methodologies are often implemented in parallel to provide complementary inferences of the form of natural selection. We unify these two analyses, providing a method whereby selection gradients can be obtained for a given observed distribution of phenotype and characterization of a function relating phenotype to fitness. The method allows quantitatively useful selection gradients to be obtained from analyses of selection that adequately model nonnormal distributions of fitness, and provides unification of the two previously separate regression-based fitness analyses. We demonstrate the method by calculating directional and quadratic selection gradients associated with a smooth regression-based generalized additive model of the relationship between neonatal survival and the phenotypic traits of gestation length and birth mass in humans.
NASA Astrophysics Data System (ADS)
Erdogan, Eren; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte
2015-04-01
In May 2014 DGFI-TUM (the former DGFI) and the German Space Situational Awareness Centre (GSSAC) started to develop an OPerational Tool for Ionospheric Mapping And Prediction (OPTIMAP); since November 2014 the Institute of Astrophysics at the University of Göttingen (IAG) joined the group as the third partner. This project aims at the computation and prediction of maps of the vertical total electron content (VTEC) and the electron density distribution of the ionosphere on a global scale from various space-geodetic observation techniques, such as GNSS and satellite altimetry, as well as Sun observations. In this contribution we present first results, i.e. a near-real-time processing framework for generating VTEC maps by assimilating GNSS (GPS, GLONASS) based ionospheric data into a two-dimensional global B-spline approach. To be more specific, the spatial variations of VTEC are modelled by trigonometric B-spline functions in longitude and by endpoint-interpolating polynomial B-spline functions in latitude, respectively. Since B-spline functions are compactly supported and highly localizing, our approach can handle large data gaps appropriately and thus provides a better approximation of data with heterogeneous density and quality compared to the commonly used spherical harmonics. The presented method models temporal variations of VTEC inside a Kalman filter. The unknown parameters of the filter state vector are composed of the B-spline coefficients as well as the satellite and receiver DCBs. To approximate the temporal variation of these state vector components as part of the filter, the dynamical model has to be set up. The current implementation of the filter allows selecting between a random walk process, a Gauss-Markov process and a dynamic process driven by an empirical ionosphere model, e.g. the International Reference Ionosphere (IRI). For running the model, ionospheric input data are acquired from terrestrial GNSS networks through online archive systems.
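The sketch below illustrates, under strong simplifying assumptions (a four-coefficient state, made-up basis values and noise levels, no DCB states), the random-walk prediction and update steps of such a Kalman filter:

```python
# Hedged sketch of a random-walk dynamic model for a small vector of B-spline
# coefficients. Matrices and noise levels are illustrative; an operational
# filter also carries DCB states and uses real GNSS geometry in H.
import numpy as np

n = 4                                    # number of B-spline coefficients
x = np.zeros(n)                          # state estimate
P = np.eye(n) * 10.0                     # state covariance
Q = np.eye(n) * 0.01                     # random-walk process noise
R = np.eye(2) * 0.5                      # measurement noise (2 obs per epoch)
H = np.array([[0.2, 0.5, 0.3, 0.0],      # B-spline basis values along two
              [0.0, 0.1, 0.6, 0.3]])     # slant paths (made up for the sketch)
z = np.array([12.3, 15.1])               # TECU-like observations

# Prediction (random walk: state unchanged, uncertainty grows)
x_pred = x
P_pred = P + Q

# Update
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
x = x_pred + K @ (z - H @ x_pred)
P = (np.eye(n) - K @ H) @ P_pred
print("updated coefficients:", x)
```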
HIGH RESOLUTION FOURIER ANALYSIS WITH AUTO-REGRESSIVE LINEAR PREDICTION
Barton, J.; Shirley, D.A.
1984-04-01
Auto-regressive linear prediction is adapted to double the resolution of Angle-Resolved Photoemission Extended Fine Structure (ARPEFS) Fourier transforms. Even with the optimal taper (weighting function), the commonly used taper-and-transform Fourier method has limited resolution: it assumes the signal is zero beyond the limits of the measurement. By seeking the Fourier spectrum of an infinite extent oscillation consistent with the measurements but otherwise having maximum entropy, the errors caused by finite data range can be reduced. Our procedure developed to implement this concept adapts auto-regressive linear prediction to extrapolate the signal in an effective and controllable manner. Difficulties encountered when processing actual ARPEFS data are discussed. A key feature of this approach is the ability to convert improved measurements (signal-to-noise or point density) into improved Fourier resolution.
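The following sketch illustrates the general idea with synthetic data, assuming a plain least-squares AR fit rather than the specific procedure developed in the paper: the measured span is extrapolated with the fitted AR model before the Fourier transform, reducing truncation effects.

```python
# Hedged sketch of auto-regressive extrapolation before a Fourier transform.
# The damped cosine is a synthetic stand-in for ARPEFS-like data; the AR
# order and extension length are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, order, n_extra = 200, 20, 200
t = np.arange(n)
signal = np.cos(0.35 * t) * np.exp(-t / 400.0) + 0.02 * rng.standard_normal(n)

# Least-squares fit of an AR(order) model: s[k] ~ sum_j a[j] * s[k-1-j]
A = np.column_stack([signal[order - 1 - j : n - 1 - j] for j in range(order)])
b = signal[order:]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

extended = list(signal)
for _ in range(n_extra):                  # extrapolate forward with the AR model
    extended.append(np.dot(coeffs, extended[-1 : -order - 1 : -1]))
extended = np.asarray(extended)

spectrum = np.abs(np.fft.rfft(extended * np.hanning(len(extended))))
print("spectral peak bin:", int(np.argmax(spectrum)))
```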
Regression analysis of cytopathological data
Whittemore, A.S.; McLarty, J.W.; Fortson, N.; Anderson, K.
1982-12-01
Epithelial cells from the human body are frequently labelled according to one of several ordered levels of abnormality, ranging from normal to malignant. The label of the most abnormal cell in a specimen determines the score for the specimen. This paper presents a model for the regression of specimen scores against continuous and discrete variables, as in host exposure to carcinogens. Application to data and tests for adequacy of model fit are illustrated using sputum specimens obtained from a cohort of former asbestos workers.
Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data
NASA Astrophysics Data System (ADS)
Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej
2016-04-01
GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions - 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM-rt1 maps are lower by one order of magnitude compared to the IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
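A minimal sketch of thin-plate-spline fitting to scattered TEC-like data, assuming synthetic station locations and values and using scipy's RBFInterpolator as a stand-in for the UWM-rt1 implementation; the smoothing parameter trades data fidelity against curvature, as in the minimized functional described above.

```python
# Hedged sketch: thin-plate-spline surface fitted to scattered, noisy
# TEC-like observations and evaluated on a regular grid patch.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform([40.0, 10.0], [60.0, 30.0], size=(80, 2))   # lat, lon (deg)
tec = (10.0 + 0.4 * (pts[:, 0] - 50.0) + 0.2 * np.sin(pts[:, 1] / 3.0)
       + 0.3 * rng.standard_normal(80))                        # TECU-like values

tps = RBFInterpolator(pts, tec, kernel="thin_plate_spline", smoothing=1.0)

# Evaluate the fitted surface on a 0.2 x 0.2 degree grid patch
lat, lon = np.meshgrid(np.arange(49.0, 51.0, 0.2), np.arange(19.0, 21.0, 0.2))
grid = np.column_stack([lat.ravel(), lon.ravel()])
print("TEC map patch shape:", tps(grid).reshape(lat.shape).shape)
```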
Mass preserving nonrigid registration of CT lung images using cubic B-spline.
Yin, Youbing; Hoffman, Eric A; Lin, Ching-Long
2009-09-01
The authors propose a nonrigid image registration approach to align two computed-tomography (CT)-derived lung datasets acquired during breath-holds at two inspiratory levels when the image distortion between the two volumes is large. The goal is to derive a three-dimensional warping function that can be used in association with computational fluid dynamics studies. In contrast to the sum of squared intensity difference (SSD), a new similarity criterion, the sum of squared tissue volume difference (SSTVD), is introduced to take into account changes in reconstructed Hounsfield units (scaled attenuation coefficient, HU) with inflation. This new criterion aims to minimize the local tissue volume difference within the lungs between matched regions, thus preserving the tissue mass of the lungs if the tissue density is assumed to be relatively constant. The local tissue volume difference is contributed by two factors: Change in the regional volume due to the deformation and change in the fractional tissue content in a region due to inflation. The change in the regional volume is calculated from the Jacobian value derived from the warping function and the change in the fractional tissue content is estimated from reconstructed HU based on quantitative CT measures. A composite of multilevel B-spline is adopted to deform images and a sufficient condition is imposed to ensure a one-to-one mapping even for a registration pair with large volume difference. Parameters of the transformation model are optimized by a limited-memory quasi-Newton minimization approach in a multiresolution framework. To evaluate the effectiveness of the new similarity measure, the authors performed registrations for six lung volume pairs. Over 100 annotated landmarks located at vessel bifurcations were generated using a semiautomatic system. The results show that the SSTVD method yields smaller average landmark errors than the SSD method across all six registration pairs.
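A minimal sketch of the regional volume-change ingredient described above, assuming a synthetic displacement field on a small voxel grid with unit spacing: the Jacobian determinant of the warp is computed by finite differences, giving the local volume ratio that enters the SSTVD criterion.

```python
# Hedged sketch: Jacobian determinant of a warping function phi(v) = v + u(v)
# computed from a synthetic displacement field via central differences.
import numpy as np

nz, ny, nx = 16, 16, 16
z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
# Synthetic smooth displacement components (uz, uy, ux) in voxel units
uz = 0.05 * np.sin(2 * np.pi * z / nz)
uy = 0.03 * np.cos(2 * np.pi * y / ny)
ux = 0.04 * np.sin(2 * np.pi * x / nx)

def jacobian_determinant(uz, uy, ux):
    """det of d(phi)/d(voxel), approximated with np.gradient."""
    grads = [np.gradient(u) for u in (uz, uy, ux)]  # each: [d/dz, d/dy, d/dx]
    J = np.empty(uz.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = (i == j) + grads[i][j]
    return np.linalg.det(J)

jac = jacobian_determinant(uz, uy, ux)
print("mean local volume ratio:", jac.mean())   # ~1 for a near-identity warp
```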
Ray-tracing method for creeping waves on arbitrarily shaped nonuniform rational B-splines surfaces.
Chen, Xi; He, Si-Yuan; Yu, Ding-Feng; Yin, Hong-Cheng; Hu, Wei-Dong; Zhu, Guo-Qiang
2013-04-01
An accurate creeping ray-tracing algorithm is presented in this paper to determine the tracks of creeping waves (or creeping rays) on arbitrarily shaped free-form parametric surfaces [nonuniform rational B-splines (NURBS) surfaces]. The main challenge in calculating the surface diffracted fields on NURBS surfaces is due to the difficulty in determining the geodesic paths along which the creeping rays propagate. On one single parametric surface patch, the geodesic paths need to be computed by solving the geodesic equations numerically. Furthermore, realistic objects are generally modeled as the union of several connected NURBS patches. Due to the discontinuity of the parameter between the patches, it is more complicated to compute geodesic paths on several connected patches than on one single patch. Thus, a creeping ray-tracing algorithm is presented in this paper to compute the geodesic paths of creeping rays on the complex objects that are modeled as the combination of several NURBS surface patches. In the algorithm, the creeping ray tracing on each surface patch is performed by solving the geodesic equations with a Runge-Kutta method. When the creeping ray propagates from one patch to another, a transition method is developed to handle the transition of the creeping ray tracing across the border between the patches. This creeping ray-tracing algorithm can meet practical requirements because it can be applied to the objects with complex shapes. The algorithm can also extend the applicability of NURBS for electromagnetic and optical applications. The validity and usefulness of the algorithm can be verified from the numerical results.
Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel
2010-10-22
A background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry (LC-FTIR) is proposed. The developed approach applies univariate background correction to each variable (i.e. each wave number) individually. Spectra measured in the region before and after each peak cluster are used as knots to model the variation of the eluent absorption intensity with time using cubic smoothing splines (CSS) functions. The new approach has been successfully tested on simulated as well as on real data sets obtained from injections of standard mixtures of polyethylene glycols with four different molecular weights in methanol:water, 2-propanol:water and ethanol:water gradients ranging from 30 to 90, 10 to 25 and from 10 to 40% (v/v) of organic modifier, respectively. Calibration lines showed high linearity with coefficients of determination higher than 0.98 and limits of detection between 0.4 and 1.4, 0.9 and 1.8, and 1.1 and 2.7 mgmL⁻¹ in methanol:water, 2-propanol:water and ethanol:water, respectively. Furthermore the method performance has been compared with a univariate background correction approach based on the use of a reference spectra matrix (UBC-RSM) to discuss the potential as well as pitfalls and drawbacks of the proposed approach. This method works without previous variable selection and provides minimal user-interaction, thus increasing drastically the feasibility of on-line coupling of gradient LC-FTIR.
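A minimal per-wavenumber sketch of the background-correction idea, assuming a synthetic chromatogram and using scipy's UnivariateSpline as a stand-in for the CSS functions of the paper: eluent absorbance outside the peak cluster serves as knots for a smoothing spline that is then subtracted across the cluster.

```python
# Hedged sketch: smoothing-spline background correction for one wavenumber
# channel of a synthetic LC-FTIR chromatogram.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0.0, 20.0, 400)                      # retention time (min)
background = 0.02 * t + 0.05 * np.sin(t / 4.0)       # drifting eluent signal
peak = 0.8 * np.exp(-0.5 * ((t - 10.0) / 0.4) ** 2)  # analyte peak cluster
signal = background + peak

in_cluster = (t > 8.0) & (t < 12.0)                  # region flagged as peaks
knots_t, knots_a = t[~in_cluster], signal[~in_cluster]

bg_model = UnivariateSpline(knots_t, knots_a, s=0.01)  # smoothing spline
corrected = signal - bg_model(t)
print("max residual background outside the cluster:",
      float(np.abs(corrected[~in_cluster]).max()))
```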
3D shape recovery of a newborn skull using thin-plate splines.
Lapeer, R J; Prager, R W
2000-01-01
The objective of this paper is to construct a mesh-model of a newborn skull for finite element analysis to study its deformation when subjected to the forces present during labour. The current state of medical imaging technology has reached a level which allows accurate visualisation and shape recovery of biological organs and body-parts. However, a sufficiently large set of medical images cannot always be obtained, often because of practical or ethical reasons, and the requirement to recover the shape of the biological object of interest has to be met by other means. Such is the case for a newborn skull. A method to recover the three-dimensional (3D) shape from (minimum) two orthogonal atlas images of the object of interest and a homologous object is described. This method is based on matching landmarks and curves on the orthogonal images of the object of interest with corresponding landmarks and curves on the homologous or 'master'-object which is fully defined in 3D space. On the basis of this set of corresponding landmarks, a thin-plate spline function can be derived to warp from the 'master'-object space to the 'slave'-object space. This method is applied to recover the 3D shape of a newborn skull. Images from orthogonal view-planes are obtained from an atlas. The homologous object is an adult skull, obtained from CT-images made available by the Visible Human Project. After shape recovery, a mesh-model of the newborn skull is generated.
Multiatlas Segmentation as Nonparametric Regression
Awate, Suyash P.; Whitaker, Ross T.
2015-01-01
This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator’s convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528
Nonlinear adaptive networks: A little theory, a few applications
Jones, R.D.; Qian, S.; Barnes, C.W.; Bisset, K.R.; Bruce, G.M.; Lee, K.; Lee, L.A.; Mead, W.C.; O'Rourke, M.K.; Thode, L.E. ); Lee, Y.C.; Flake, G.W. Maryland Univ., College Park, MD ); Poli, I.J. Bologna Univ. )
1990-01-01
We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice Lagoon, sonar transient detection, control of nonlinear processes, balancing a double inverted pendulum and design advice for free electron lasers. 26 refs., 23 figs.
Prediction and control of chaotic processes using nonlinear adaptive networks
Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.
1990-01-01
We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.
Practical Session: Multiple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Three exercises are proposed to illustrate simple linear regression. The first one investigates the influence of several factors on atmospheric pollution. It was proposed by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 cities in the U.S. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 were proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).
Genetic analyses of stillbirth in relation to litter size using random regression models.
Chen, C Y; Misztal, I; Tsuruta, S; Herring, W O; Holl, J; Culbertson, M
2010-12-01
Estimates of genetic parameters for number of stillborns (NSB) in relation to litter size (LS) were obtained with random regression models (RRM). Data were collected from 4 purebred Duroc nucleus farms between 2004 and 2008. Two data sets with 6,575 litters for the first parity (P1) and 6,259 litters for the second to fifth parity (P2-5) with a total of 8,217 and 5,066 animals in the pedigree were analyzed separately. Number of stillborns was studied as a trait on sow level. Fixed effects were contemporary groups (farm-year-season) and fixed cubic regression coefficients on LS with Legendre polynomials. Models for P2-5 included the fixed effect of parity. Random effects were additive genetic effects for both data sets with permanent environmental effects included for P2-5. Random effects modeled with Legendre polynomials (RRM-L), linear splines (RRM-S), and degree 0 B-splines (RRM-BS) with regressions on LS were used. For P1, the order of polynomial, the number of knots, and the number of intervals used for respective models were quadratic, 3, and 3, respectively. For P2-5, the same parameters were linear, 2, and 2, respectively. Heterogeneous residual variances were considered in the models. For P1, estimates of heritability were 12 to 15%, 5 to 6%, and 6 to 7% in LS 5, 9, and 13, respectively. For P2-5, estimates were 15 to 17%, 4 to 5%, and 4 to 6% in LS 6, 9, and 12, respectively. For P1, average estimates of genetic correlations between LS 5 to 9, 5 to 13, and 9 to 13 were 0.53, -0.29, and 0.65, respectively. For P2-5, same estimates averaged for RRM-L and RRM-S were 0.75, -0.21, and 0.50, respectively. For RRM-BS with 2 intervals, the correlation was 0.66 between LS 5 to 7 and 8 to 13. Parameters obtained by 3 RRM revealed the nonlinear relationship between additive genetic effect of NSB and the environmental deviation of LS. The negative correlations between the 2 extreme LS might possibly indicate different genetic bases on incidence of stillbirth.
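As a hedged illustration of how such fixed regressions on litter size can be assembled (with invented litter sizes and an assumed rescaling range, not the study's data), a cubic Legendre design matrix looks like this:

```python
# Hedged sketch: litter size is rescaled to [-1, 1] and a cubic Legendre
# polynomial design matrix is built, as in the fixed-regression part of a
# random regression model. Values are invented for illustration.
import numpy as np
from numpy.polynomial import legendre

litter_size = np.array([5, 7, 9, 11, 13], dtype=float)
ls_min, ls_max = 4.0, 14.0                                    # assumed range
u = 2.0 * (litter_size - ls_min) / (ls_max - ls_min) - 1.0    # map to [-1, 1]

# Columns: P0(u), P1(u), P2(u), P3(u) -- cubic Legendre covariates
design = np.column_stack([legendre.legval(u, np.eye(4)[k]) for k in range(4)])
print(np.round(design, 3))
```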
Sim, K S; Kiani, M A; Nia, M E; Tso, C P
2014-01-01
A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to present better results when compared with two existing techniques: nearest neighbourhood and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. PMID:24164248
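A simplified, hedged sketch of a Savitzky-Golay-based signal/noise split for one synthetic scan line follows; it is not the authors' algorithm (their interpolation scheme and scan-rate handling are not reproduced), only a minimal illustration of estimating an SNR after smoothing.

```python
# Hedged sketch: Savitzky-Golay smoothing of a noisy scan line; the residual
# is taken as noise and a simple SNR estimate is formed. The scan line is
# synthetic and the filter settings are illustrative choices.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 512)
clean = 100 + 40 * np.exp(-((x - 0.5) / 0.1) ** 2)      # edge-like feature
scan = clean + 5.0 * rng.standard_normal(x.size)         # noisy scan line

smoothed = savgol_filter(scan, window_length=31, polyorder=3)
noise = scan - smoothed

snr_db = 10.0 * np.log10(np.var(smoothed) / np.var(noise))
print(f"estimated SNR: {snr_db:.1f} dB")
```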
Distributed lag and spline modeling for predicting energy expenditure from accelerometry in youth
Chen, Kong Y.; Acra, Sari A.; Buchowski, Maciej S.
2010-01-01
Movement sensing using accelerometers is commonly used for the measurement of physical activity (PA) and estimating energy expenditure (EE) under free-living conditions. The major limitation of this approach is lack of accuracy and precision in estimating EE, especially in low-intensity activities. Thus the objective of this study was to investigate benefits of a distributed lag spline (DLS) modeling approach for the prediction of total daily EE (TEE) and EE in sedentary (1.0–1.5 metabolic equivalents; MET), light (1.5–3.0 MET), and moderate/vigorous (≥3.0 MET) intensity activities in 10- to 17-year-old youth (n = 76). We also explored feasibility of the DLS modeling approach to predict physical activity EE (PAEE) and METs. Movement was measured by Actigraph accelerometers placed on the hip, wrist, and ankle. With whole-room indirect calorimeter as the reference standard, prediction models (Hip, Wrist, Ankle, Hip+Wrist, Hip+Wrist+Ankle) for TEE, PAEE, and MET were developed and validated using the fivefold cross-validation method. The TEE predictions by these DLS models were not significantly different from the room calorimeter measurements (all P > 0.05). The Hip+Wrist+Ankle predicted TEE better than other models and reduced prediction errors in moderate/vigorous PA for TEE, MET, and PAEE (all P < 0.001). The Hip+Wrist reduced prediction errors for the PAEE and MET at sedentary PA (P = 0.020 and 0.021) compared with the Hip. Models that included Wrist correctly classified time spent at light PA better than other models. The means and standard deviations of the prediction errors for the Hip+Wrist+Ankle and Hip were 0.4 ± 144.0 and 1.5 ± 164.7 kcal for the TEE, 0.0 ± 84.2 and 1.3 ± 104.7 kcal for the PAEE, and −1.1 ± 97.6 and −0.1 ± 108.6 MET min for the MET models. We conclude that the DLS approach for accelerometer data improves detailed EE prediction in youth. PMID:19959770
Random regression models using different functions to model milk flow in dairy cows.
Laureano, M M M; Bignardi, A B; El Faro, L; Cardoso, V L; Tonhati, H; Albuquerque, L G
2014-01-01
We analyzed 75,555 test-day milk flow records from 2175 primiparous Holstein cows that calved between 1997 and 2005. Milk flow was obtained by dividing the mean milk yield (kg) of the 3 daily milkings by the total milking time (min) and was expressed as kg/min. Milk flow was grouped into 43 weekly classes. The analyses were performed using a single-trait random regression model that included direct additive genetic, permanent environmental, and residual random effects. In addition, the contemporary group and linear and quadratic effects of cow age at calving were included as fixed effects. A fourth-order orthogonal Legendre polynomial of days in milk was used to model the mean trend in milk flow. The additive genetic and permanent environmental covariance functions were estimated using random regression on Legendre polynomials and B-spline functions of days in milk. The model using a third-order Legendre polynomial for additive genetic effects and a sixth-order polynomial for permanent environmental effects, which contained 7 residual classes, proved to be the most adequate to describe variations in milk flow, and was also the most parsimonious. The heritability of milk flow estimated by the most parsimonious model was of moderate to high magnitude.
Flexible regression models for rate differences, risk differences and relative risks.
Donoghoe, Mark W; Marschner, Ian C
2015-05-01
Generalized additive models (GAMs) based on the binomial and Poisson distributions can be used to provide flexible semi-parametric modelling of binary and count outcomes. When used with the canonical link function, these GAMs provide semi-parametrically adjusted odds ratios and rate ratios. For adjustment of other effect measures, including rate differences, risk differences and relative risks, non-canonical link functions must be used together with a constrained parameter space. However, the algorithms used to fit these models typically rely on a form of the iteratively reweighted least squares algorithm, which can be numerically unstable when a constrained non-canonical model is used. We describe an application of a combinatorial EM algorithm to fit identity link Poisson, identity link binomial and log link binomial GAMs in order to estimate semi-parametrically adjusted rate differences, risk differences and relative risks. Using smooth regression functions based on B-splines, the method provides stable convergence to the maximum likelihood estimates, and it ensures that the estimates always remain within the parameter space. It is also straightforward to apply a monotonicity constraint to the smooth regression functions. We illustrate the method using data from a clinical trial in heart attack patients. PMID:25781711
Developmental Regression in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Rogers, Sally J.
2004-01-01
The occurrence of developmental regression in autism is one of the more puzzling features of this disorder. Although several studies have documented the validity of parental reports of regression using home videos, accumulating data suggest that most children who demonstrate regression also demonstrated previous, subtle, developmental differences.…
Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Regression Analysis by Example. 5th Edition
ERIC Educational Resources Information Center
Chatterjee, Samprit; Hadi, Ali S.
2012-01-01
Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…
Bayesian Unimodal Density Regression for Causal Inference
ERIC Educational Resources Information Center
Karabatsos, George; Walker, Stephen G.
2011-01-01
Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
Estimating equivalence with quantile regression
Cade, B.S.
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
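A minimal sketch of the quantile-regression building block, assuming simulated two-group data and statsmodels' QuantReg; the equivalence region against which the resulting interval would be judged is a subject-matter choice and is not encoded here.

```python
# Hedged sketch: two-group comparison at the 90th percentile with quantile
# regression, returning the point estimate and confidence interval that an
# equivalence judgement would be based on. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
group = np.repeat([0, 1], n)                       # 0 = control, 1 = treatment
conc = np.where(group == 0,
                rng.lognormal(mean=1.0, sigma=0.6, size=2 * n),
                rng.lognormal(mean=1.1, sigma=0.8, size=2 * n))
df = pd.DataFrame({"conc": conc, "group": group})

fit = smf.quantreg("conc ~ group", df).fit(q=0.9)  # 90th-percentile difference
print("difference at q=0.9:", fit.params["group"])
print("confidence interval:", fit.conf_int().loc["group"].values)
```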
Insulin resistance: regression and clustering.
Yoon, Sangho; Assimes, Themistocles L; Quertermous, Thomas; Hsiao, Chin-Fu; Chuang, Lee-Ming; Hwu, Chii-Min; Rajaratnam, Bala; Olshen, Richard A
2014-01-01
In this paper we try to define insulin resistance (IR) precisely for a group of Chinese women. Our definition deliberately does not depend upon body mass index (BMI) or age, although in other studies, with particular random effects models quite different from models used here, BMI accounts for a large part of the variability in IR. We accomplish our goal through application of Gauss mixture vector quantization (GMVQ), a technique for clustering that was developed for application to lossy data compression. Defining data come from measurements that play major roles in medical practice. A precise statement of what the data are is in Section 1. Their family structures are described in detail. They concern levels of lipids and the results of an oral glucose tolerance test (OGTT). We apply GMVQ to residuals obtained from regressions of outcomes of an OGTT and lipids on functions of age and BMI that are inferred from the data. A bootstrap procedure developed for our family data supplemented by insights from other approaches leads us to believe that two clusters are appropriate for defining IR precisely. One cluster consists of women who are IR, and the other of women who seem not to be. Genes and other features are used to predict cluster membership. We argue that prediction with "main effects" is not satisfactory, but prediction that includes interactions may be. PMID:24887437
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582
Fully Regressive Melanoma: A Case Without Metastasis.
Ehrsam, Eric; Kallini, Joseph R; Lebas, Damien; Khachemoune, Amor; Modiano, Philippe; Cotten, Hervé
2016-08-01
Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of host immune response. Although 10 to 35 percent of cases of cutaneous melanomas may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis on a patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418
NASA Technical Reports Server (NTRS)
Croom, D. R.; Dunham, R. E., Jr.
1975-01-01
The effectiveness of a forward-located spoiler, a spline, and span load alteration due to a flap configuration change as trailing-vortex-hazard alleviation methods was investigated. For the transport aircraft model in the normal approach configuration, the results indicate that either a forward-located spoiler or a spline is effective in reducing the trailing-vortex hazard. The results also indicate that large changes in span loading, due to retraction of the outboard flap, may be an effective method of reducing the trailing-vortex hazard.
Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas
2014-01-01
Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match
Developmental regression in autism spectrum disorder
Al Backer, Nouf Backer
2015-01-01
The occurrence of developmental regression in autism spectrum disorder (ASD) is one of the most puzzling phenomena of this disorder. Little is known about the nature and mechanism of developmental regression in ASD. About one-third of young children with ASD lose some skills during the preschool period, usually speech, but sometimes also nonverbal communication, social, or play skills. There is a lot of evidence suggesting that most children who demonstrate regression also had previous, subtle, developmental differences. It is difficult to predict the prognosis of autistic children with developmental regression. It seems that the earlier development of social, language, and attachment behaviors followed by regression does not predict the later recovery of skills or better developmental outcomes. The underlying mechanisms that lead to regression in autism are unknown. The role of subclinical epilepsy in the developmental regression of children with autism remains unclear. PMID:27493417
Hunt, R.L.
1983-12-27
An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the adapter frame.
Area-to-point regression kriging for pan-sharpening
NASA Astrophysics Data System (ADS)
Wang, Qunming; Shi, Wenzhong; Atkinson, Peter M.
2016-04-01
Pan-sharpening is a technique to combine the fine spatial resolution panchromatic (PAN) band with the coarse spatial resolution multispectral bands of the same satellite to create a fine spatial resolution multispectral image. In this paper, area-to-point regression kriging (ATPRK) is proposed for pan-sharpening. ATPRK considers the PAN band as the covariate. Moreover, ATPRK is extended with a local approach, called adaptive ATPRK (AATPRK), which fits a regression model using a local, non-stationary scheme such that the regression coefficients change across the image. The two geostatistical approaches, ATPRK and AATPRK, were compared to the 13 state-of-the-art pan-sharpening approaches summarized in Vivone et al. (2015) in experiments on three separate datasets. ATPRK and AATPRK produced more accurate pan-sharpened images than the 13 benchmark algorithms in all three experiments. Unlike the benchmark algorithms, the two geostatistical solutions precisely preserved the spectral properties of the original coarse data. Furthermore, ATPRK can be enhanced by a local scheme in AATPRK, in cases where the residuals from a global regression model are such that their spatial character varies locally.
LRGS: Linear Regression by Gibbs Sampling
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2016-02-01
LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
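A much simpler relative of LRGS, shown here as a hedged sketch: a basic Gibbs sampler for normal linear regression without measurement errors or intrinsic scatter, alternating draws of the coefficients and the variance from their full conditionals.

```python
# Hedged sketch: Gibbs sampler for ordinary normal linear regression with a
# flat prior on beta and a ~1/sigma^2 prior on the variance. This is far
# simpler than LRGS, but shows the alternating conditional draws the method
# is built on. Data are simulated.
import numpy as np

rng = np.random.default_rng(42)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.5])
y = X @ beta_true + 0.7 * rng.normal(size=n)

n_iter, p = 2000, X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
sigma2 = 1.0
draws = np.empty((n_iter, p))

for it in range(n_iter):
    # beta | sigma2, y  ~  N(beta_hat, sigma2 * (X'X)^-1)
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    # sigma2 | beta, y  ~  Inverse-Gamma(n/2, RSS/2)
    rss = np.sum((y - X @ beta) ** 2)
    sigma2 = 1.0 / rng.gamma(shape=n / 2.0, scale=2.0 / rss)
    draws[it] = beta

print("posterior means (after burn-in):", draws[1000:].mean(axis=0))
```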
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
Quantile regression applied to spectral distance decay
Rocchini, D.; Cade, B.S.
2008-01-01
Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
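A hedged sketch of the comparison described above, using statsmodels on synthetic, zero-inflated similarity data rather than the remote-sensing data of the letter: the upper-quantile slope is typically steeper in magnitude than the OLS slope.

```python
# Sketch comparing an OLS fit with an upper-quantile fit, in the spirit of the
# letter; the data here are synthetic, not the remote-sensing data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
spectral_distance = rng.uniform(0, 1, 300)
# similarity decays with distance, with many noisy low/zero values
similarity = np.clip(0.9 - 0.6 * spectral_distance
                     - rng.exponential(0.3, 300), 0, 1)
df = pd.DataFrame({"dist": spectral_distance, "sim": similarity})

ols_fit = smf.ols("sim ~ dist", df).fit()
q90_fit = smf.quantreg("sim ~ dist", df).fit(q=0.90)

print("OLS decay rate:      ", ols_fit.params["dist"])
print("90th-quantile decay: ", q90_fit.params["dist"])
```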
Regression Calibration with Heteroscedastic Error Variance
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses’ Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice. PMID:22848187
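The following sketch shows the basic regression-calibration correction for a simple linear model with homoscedastic error, using a validation subsample to estimate the calibration slope; the paper's estimator extends this idea to heteroscedastic error variance and to logistic and Cox models, so treat the code as an illustration of the underlying principle only.

```python
# Minimal sketch of regression calibration with validation data: correct a
# slope estimated on the error-prone covariate W by the calibration slope of
# X on W.  Illustrative only; the paper's estimator additionally handles
# heteroscedastic error variance and logistic/Cox outcome models.
import numpy as np

rng = np.random.default_rng(3)
n, beta_true = 2000, 0.8
x = rng.normal(size=n)                      # true exposure (gold standard)
w = x + rng.normal(scale=0.7, size=n)       # error-prone surrogate
y = beta_true * x + rng.normal(size=n)

# Naive (attenuated) slope from the surrogate.
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

# Calibration slope from a validation subsample where both X and W are seen.
val = slice(0, 500)
lam = np.cov(w[val], x[val])[0, 1] / np.var(w[val])

beta_corrected = beta_naive / lam
print(beta_naive, beta_corrected)           # roughly 0.54 vs roughly 0.8
```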
Process modeling with the regression network.
van der Walt, T; Barnard, E; van Deventer, J
1995-01-01
A new connectionist network topology called the regression network is proposed. The structural and underlying mathematical features of the regression network are investigated. Emphasis is placed on the intricacies of the optimization process for the regression network and some measures to alleviate these difficulties of optimization are proposed and investigated. The ability of the regression network algorithm to perform either nonparametric or parametric optimization, as well as a combination of both, is also highlighted. It is further shown how the regression network can be used to model systems which are poorly understood on the basis of sparse data. A semi-empirical regression network model is developed for a metallurgical processing operation (a hydrocyclone classifier) by building mechanistic knowledge into the connectionist structure of the regression network model. Poorly understood aspects of the process are provided for by use of nonparametric regions within the structure of the semi-empirical connectionist model. The performance of the regression network model is compared to the corresponding generalization performance results obtained by some other nonparametric regression techniques.
Hybrid fuzzy regression with trapezoidal fuzzy data
NASA Astrophysics Data System (ADS)
Razzaghnia, T.; Danesh, S.; Maleki, A.
2011-12-01
This research deals with a method for hybrid fuzzy least-squares regression. The extension of symmetric triangular fuzzy coefficients to asymmetric trapezoidal fuzzy coefficients is considered as an effective measure for removing unnecessary fuzziness from the linear fuzzy model. First, a trapezoidal fuzzy variable is applied to derive a bivariate regression model. Next, normal equations are formulated to solve for the four parts of the hybrid regression coefficients. The model is also extended to multiple regression analysis. Finally, the method is compared with Y.-H.O. Chang's model.
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we assume that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant and is equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R(2)) indicates the importance of the independent variables in the outcome.
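A minimal numerical illustration of Y = a + bx with ordinary least squares; the variable names (age, systolic blood pressure) and the data are invented for the example.

```python
# Small illustration of Y = a + b*x with ordinary least squares, matching the
# notation of the abstract (a = intercept, b = slope).  Synthetic data.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(20, 80, 50)                # e.g. age (made-up values)
y = 120 + 0.5 * x + rng.normal(0, 5, 50)   # e.g. systolic blood pressure

b, a = np.polyfit(x, y, 1)                 # slope first, then intercept
r2 = np.corrcoef(x, y)[0, 1] ** 2          # coefficient of determination

print(f"Y = {a:.1f} + {b:.2f} x,  R^2 = {r2:.2f}")
```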
Geodesic least squares regression on information manifolds
Verdoolaege, Geert
2014-12-05
We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.
ERIC Educational Resources Information Center
Bulcock, J. W.
The problem of model estimation when the data are collinear was examined. Though the ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
Xu, S; Liu, B
2015-06-15
Purpose: Three deformable image registration (DIR) algorithms are utilized to perform deformable dose accumulation for head and neck tomotherapy treatment, and the differences in the accumulated doses are evaluated. Methods: Daily MVCT data for 10 patients with pathologically proven nasopharyngeal cancers were analyzed. The data were acquired using tomotherapy (TomoTherapy, Accuray) at the PLA General Hospital. The prescription dose to the primary target was 70 Gy in 33 fractions. Three DIR methods (B-spline, Diffeomorphic Demons and MIMvista) were used to propagate parotid structures from the planning CTs to the daily CTs and to accumulate fractionated dose on the planning CTs. The mean accumulated doses of the parotids were quantitatively compared, and the uncertainties of the propagated parotid contours were evaluated using the Dice similarity index (DSI). Results: The planned mean dose of the ipsilateral parotids (32.42±3.13 Gy) was slightly higher than that of the contralateral parotids (31.38±3.19 Gy) in the 10 patients. The differences between the accumulated mean doses of the ipsilateral parotids obtained with the B-spline, Demons and MIMvista deformation algorithms (36.40±5.78 Gy, 34.08±6.72 Gy and 33.72±2.63 Gy) were statistically significant (B-spline vs Demons, p < 0.0001; B-spline vs MIMvista, p = 0.002). The differences between those of the contralateral parotids with the B-spline, Demons and MIMvista algorithms (34.08±4.82 Gy, 32.42±4.80 Gy and 33.92±4.65 Gy) were also significant (B-spline vs Demons, p = 0.009; B-spline vs MIMvista, p = 0.074). For the DSI analysis, the scores of the B-spline, Demons and MIMvista DIRs were 0.90, 0.89 and 0.76. Conclusion: Shrinkage of parotid volumes results in a dose increase to the parotid glands in adaptive head and neck radiotherapy. The accumulated doses of the parotids show significant differences between the DIR algorithms applied between kVCT and MVCT. Therefore, the volume-based criterion (i.e. DSI) as a quantitative evaluation of
ERIC Educational Resources Information Center
Harrell, William
1999-01-01
Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)
Anstis, Stuart
2013-01-01
It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards. This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only these contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical component of the plaid disappears and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique to study the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.
NASA Astrophysics Data System (ADS)
Fokas, A. S.; Hauk, O.; Michel, V.
2012-03-01
The basic inverse problems for the functional imaging techniques of electroencephalography (EEG) and magnetoencephalography (MEG) consist in estimating the neuronal current in the brain from the measurement of the electric potential on the scalp and of the magnetic field outside the head. Here we present a rigorous derivation of the relevant formulae for a three-shell spherical model in the case of independent as well as simultaneous MEG and EEG measurements. Furthermore, we introduce an explicit and stable technique for the numerical implementation of these formulae via splines. Numerical examples are presented using the locations and the normal unit vectors of the real 102 magnetometers and 70 electrodes of the Elekta Neuromag® system. These results may have useful implications for the interpretation of the reconstructions obtained via the existing approaches.
Chang, Nai-Fu; Chiang, Cheng-Yi; Chen, Tung-Chien; Chen, Liang-Gee
2011-01-01
On-chip implementation of the Hilbert-Huang transform (HHT) has great impact on the analysis of non-linear and non-stationary biomedical signals on wearable or implantable sensors for real-time applications. Cubic spline interpolation (CSI) consumes the most computation in HHT, and is the key component of an HHT processor. Traditionally, CSI in HHT is performed after the collection of a large window of signals, and the long latency violates the real-time requirement of these applications. In this work, we propose to keep processing the incoming signals on-line with small and overlapped data windows without sacrificing the interpolation accuracy. 58% of the multiplications and 73% of the divisions in CSI are saved by reusing data between the overlapping windows. PMID:22255972
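The sketch below conveys the windowed idea in plain Python with SciPy's CubicSpline: a long signal is interpolated in small, overlapped windows, only the central part of each window is kept, and the result is compared with a single full-window spline. The window and overlap sizes are illustrative and are not taken from the paper, which targets a hardware implementation.

```python
# Sketch of overlapped-window cubic spline interpolation: interpolate a long
# signal in small overlapped windows, keep only the central part of each
# window, and compare with a single full-window spline.  Window and overlap
# sizes are illustrative, not taken from the paper.
import numpy as np
from scipy.interpolate import CubicSpline

t = np.arange(0, 200)                       # sample instants
x = np.sin(0.07 * t) + 0.3 * np.sin(0.31 * t)
t_fine = np.linspace(0, 199, 1990)

full = CubicSpline(t, x)(t_fine)

win, overlap = 40, 10
piecewise = np.empty_like(t_fine)
for start in range(0, 200 - win + 1, win - 2 * overlap):
    seg = slice(start, start + win)
    cs = CubicSpline(t[seg], x[seg])
    # evaluate only the central region, away from the window edges
    lo = 0 if start == 0 else start + overlap
    hi = 200 if start + win >= 200 else start + win - overlap
    mask = (t_fine >= lo) & (t_fine < hi)
    piecewise[mask] = cs(t_fine[mask])

print(np.max(np.abs(piecewise - full)))     # small discrepancy at window joins
```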
NASA Astrophysics Data System (ADS)
Delogu, A.; Furini, F.
1991-09-01
Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphical techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code evaluating the RCS of arbitrarily shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities, are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
Riester, K A; Peduzzi, P; Holford, T R; Ellison, R T; Donta, S T
1997-11-01
Stress gastritis is a serious problem in the intensive care unit population. The recent discovery of the causal nature of Helicobacter pylori (H. pylori) in the development of gastric ulcers led us to examine its relationship with stress gastritis. We investigated this relationship in 874 veterans admitted to intensive care units who were tested for the presence of H. pylori and followed for 6 weeks for the development of stress gastritis. We fit spline models to assess functional relationships and used the logistic model to determine the association between H. pylori and stress gastritis. The predictive ability of the model was assessed with receiver operating characteristic (ROC) curve analysis and validated with the bootstrapping technique. Increased anti-H. pylori immunoglobulin A concentrations were found to be an important predictor of stress gastritis independent of other known risk factors.
Low temperature-induced circulating triiodothyronine accelerates seasonal testicular regression.
Ikegami, Keisuke; Atsumi, Yusuke; Yorinaga, Eriko; Ono, Hiroko; Murayama, Itaru; Nakane, Yusuke; Ota, Wataru; Arai, Natsumi; Tega, Akinori; Iigo, Masayuki; Darras, Veerle M; Tsutsui, Kazuyoshi; Hayashi, Yoshitaka; Yoshida, Shosei; Yoshimura, Takashi
2015-02-01
In temperate zones, animals restrict breeding to specific seasons to maximize the survival of their offspring. Birds have evolved highly sophisticated mechanisms of seasonal regulation, and their testicular mass can change 100-fold within a few weeks. Recent studies on Japanese quail revealed that seasonal gonadal development is regulated by central thyroid hormone activation within the hypothalamus, depending on the photoperiodic changes. By contrast, the mechanisms underlying seasonal testicular regression remain unclear. Here we show the effects of short day and low temperature on testicular regression in quail. Low temperature stimulus accelerated short day-induced testicular regression by shutting down the hypothalamus-pituitary-gonadal axis and inducing meiotic arrest and germ cell apoptosis. Induction of T3 coincided with the climax of testicular regression. Temporal gene expression analysis over the course of apoptosis revealed the suppression of LH response genes and activation of T3 response genes involved in amphibian metamorphosis within the testis. Daily ip administration of T3 mimicked the effects of low temperature stimulus on germ cell apoptosis and testicular mass. Although type 2 deiodinase, a thyroid hormone-activating enzyme, in the brown adipose tissue generates circulating T3 under low-temperature conditions in mammals, there is no distinct brown adipose tissue in birds. In birds, type 2 deiodinase is induced by low temperature exclusively in the liver, which appears to be caused by increased food consumption. We conclude that birds use low temperature-induced circulating T3 not only for adaptive thermoregulation but also to trigger apoptosis to accelerate seasonal testicular regression.
NASA Astrophysics Data System (ADS)
Tan, Maxine; Li, Zheng; Moore, Kathleen; Thai, Theresa; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
Ovarian cancer is the second most common cancer amongst gynecologic malignancies, and has the highest death rate. Since the majority of ovarian cancer patients (>75%) are diagnosed in the advanced stage with tumor metastasis, chemotherapy is often required after surgery to remove the primary ovarian tumors. In order to quickly assess patient response to the chemotherapy in the clinical trials, two sets of CT examinations are taken pre- and post-therapy (e.g., after 6 weeks). Treatment efficacy is then evaluated based on Response Evaluation Criteria in Solid Tumors (RECIST) guideline, whereby tumor size is measured by the longest diameter on one CT image slice and only a subset of selected tumors are tracked. However, this criterion cannot fully represent the volumetric changes of the tumors and might miss potentially problematic unmarked tumors. Thus, we developed a new CAD approach to measure and analyze volumetric tumor growth/shrinkage using a cubic B-spline deformable image registration method. In this initial study, on 14 sets of pre- and post-treatment CT scans, we registered the two consecutive scans using cubic B-spline registration in a multiresolution (from coarse to fine) framework. We used Mattes mutual information metric as the similarity criterion and the L-BFGS-B optimizer. The results show that our method can quantify volumetric changes in the tumors more accurately than RECIST, and also detect (highlight) potentially problematic regions that were not originally targeted by radiologists. Despite the encouraging results of this preliminary study, further validation of scheme performance is required using large and diverse datasets in future.
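A hedged sketch of the registration step, using SimpleITK rather than the authors' pipeline: a cubic B-spline transform is optimized with Mattes mutual information and the L-BFGS-B optimizer in a coarse-to-fine multiresolution scheme. File names, mesh size and optimizer settings are placeholders.

```python
# Sketch of multiresolution cubic B-spline registration with Mattes mutual
# information and L-BFGS-B, using SimpleITK.  File names and parameter values
# are placeholders; this is not the authors' implementation.
import SimpleITK as sitk

fixed = sitk.ReadImage("ct_pre_treatment.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("ct_post_treatment.nii.gz", sitk.sitkFloat32)

# B-spline control-point grid over the fixed-image domain.
tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                         numberOfIterations=100)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(tx, True)

# Coarse-to-fine multiresolution pyramid.
reg.SetShrinkFactorsPerLevel([4, 2, 1])
reg.SetSmoothingSigmasPerLevel([2, 1, 0])

out_tx = reg.Execute(fixed, moving)
warped = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0)
sitk.WriteImage(warped, "ct_post_registered.nii.gz")
```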
Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.
2012-04-15
Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well.
Suppression Situations in Multiple Linear Regression
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…
Deriving the Regression Equation without Using Calculus
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
2004-01-01
Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…
A Practical Guide to Regression Discontinuity
ERIC Educational Resources Information Center
Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard
2012-01-01
Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…
Dealing with Outliers: Robust, Resistant Regression
ERIC Educational Resources Information Center
Glasser, Leslie
2007-01-01
Least-squares linear regression is the best of statistics and it is the worst of statistics. The reasons for this paradoxical claim, arising from possible inapplicability of the method and the excessive influence of "outliers", are discussed and substitute regression methods based on median selection, which is both robust and resistant, are…
Cross-Validation, Shrinkage, and Multiple Regression.
ERIC Educational Resources Information Center
Hynes, Kevin
One aspect of multiple regression--the shrinkage of the multiple correlation coefficient on cross-validation is reviewed. The paper consists of four sections. In section one, the distinction between a fixed and a random multiple regression model is made explicit. In section two, the cross-validation paradigm and an explanation for the occurrence…
Application and Interpretation of Hierarchical Multiple Regression.
Jeong, Younhee; Jung, Mi Jung
2016-01-01
The authors reported the association between motivation and self-management behavior of individuals with chronic low back pain after adjusting for control variables using hierarchical multiple regression. This article describes the details of the hierarchical regression applied to the actual data used in that article, including how to test assumptions, run the statistical tests, and report the results. PMID:27648796
Regression Analysis: Legal Applications in Institutional Research
ERIC Educational Resources Information Center
Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.
2008-01-01
This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…
A Simulation Investigation of Principal Component Regression.
ERIC Educational Resources Information Center
Allen, David E.
Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…
Incremental Net Effects in Multiple Regression
ERIC Educational Resources Information Center
Lipovetsky, Stan; Conklin, Michael
2005-01-01
A regular problem in regression analysis is estimating the comparative importance of the predictors in the model. This work considers the 'net effects', or shares of the predictors in the coefficient of the multiple determination, which is a widely used characteristic of the quality of a regression model. Estimation of the net effects can be a…
Illustration of Regression towards the Means
ERIC Educational Resources Information Center
Govindaraju, K.; Haslett, S. J.
2008-01-01
This article presents a procedure for generating a sequence of data sets which will yield exactly the same fitted simple linear regression equation y = a + bx. Unless rescaled, the generated data sets will have progressively smaller variability for the two variables, and the associated response and covariate will "regress" towards their…
Regression Analysis and the Sociological Imagination
ERIC Educational Resources Information Center
De Maio, Fernando
2014-01-01
Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.
Three-Dimensional Modeling in Linear Regression.
ERIC Educational Resources Information Center
Herman, James D.
Linear regression examines the relationship between one or more independent (predictor) variables and a dependent variable. By using a particular formula, regression determines the weights needed to minimize the error term for a given set of predictors. With one predictor variable, the relationship between the predictor and the dependent variable…
NASA Astrophysics Data System (ADS)
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lends some insight into factors that may or may not be important to successful adaptation.
Nodule Regression in Adults With Nodular Gastritis
Kim, Ji Wan; Lee, Sun-Young; Kim, Jeong Hwan; Sung, In-Kyung; Park, Hyung Seok; Shim, Chan-Sup; Han, Hye Seung
2015-01-01
Background Nodular gastritis (NG) is associated with the presence of Helicobacter pylori infection, but there are controversies on nodule regression in adults. The aim of this study was to analyze the factors that are related to the nodule regression in adults diagnosed as NG. Methods Adult population who were diagnosed as NG with H. pylori infection during esophagogastroduodenoscopy (EGD) at our center were included. Changes in the size and location of the nodules, status of H. pylori infection, upper gastrointestinal (UGI) symptom, EGD and pathology findings were analyzed between the initial and follow-up tests. Results Of the 117 NG patients, 66.7% (12/18) of the eradicated NG patients showed nodule regression after H. pylori eradication, whereas 9.9% (9/99) of the non-eradicated NG patients showed spontaneous nodule regression without H. pylori eradication (P < 0.001). Nodule regression was more frequent in NG patients with antral nodule location (P = 0.010), small-sized nodules (P = 0.029), H. pylori eradication (P < 0.001), UGI symptom (P = 0.007), and a long-term follow-up period (P = 0.030). On the logistic regression analysis, nodule regression was inversely correlated with the persistent H. pylori infection on the follow-up test (odds ratio (OR): 0.020, 95% confidence interval (CI): 0.003 - 0.137, P < 0.001) and short-term follow-up period < 30.5 months (OR: 0.140, 95% CI: 0.028 - 0.700, P = 0.017). Conclusions In adults with NG, H. pylori eradication is the most significant factor associated with nodule regression. Long-term follow-up period is also correlated with nodule regression, but is less significant than H. pylori eradication. Our findings suggest that H. pylori eradication should be considered to promote nodule regression in NG patients with H. pylori infection.
Generalized Multilevel Function-on-Scalar Regression and Principal Component Analysis
Goldsmith, Jeff; Zipunnikov, Vadim; Schrack, Jennifer
2015-01-01
Summary This manuscript considers regression models for generalized, multilevel functional responses: functions are generalized in that they follow an exponential family distribution and multilevel in that they are clustered within groups or subjects. This data structure is increasingly common across scientific domains and is exemplified by our motivating example, in which binary curves indicating physical activity or inactivity are observed for nearly six hundred subjects over five days. We use a generalized linear model to incorporate scalar covariates into the mean structure, and decompose subject-specific and subject-day-specific deviations using multilevel functional principal components analysis. Thus, functional fixed effects are estimated while accounting for within-function and within-subject correlations, and major directions of variability within and between subjects are identified. Fixed effect coefficient functions and principal component basis functions are estimated using penalized splines; model parameters are estimated in a Bayesian framework using Stan, a programming language that implements a Hamiltonian Monte Carlo sampler. Simulations designed to mimic the application have good estimation and inferential properties with reasonable computation times for moderate datasets, in both cross-sectional and multilevel scenarios; code is publicly available. In the application we identify effects of age and BMI on the time-specific change in probability of being active over a twenty-four hour period; in addition, the principal components analysis identifies the patterns of activity that distinguish subjects and days within subjects. PMID:25620473
The basis function regression in pharmaceutical analysis. Theory and example of application.
Komsta, Lukasz; Skibiński, Robert; Paryło, Marta; Dudek, Karolina
2008-08-01
The BFR (Basis Function Regression) is an interesting alternative to common techniques (such as PCR or PLS) in chemometrics. It is based on projecting the spectral information onto a number of equally spaced spline bases and then obtaining the integrals of the resulting curves. Existing references show that in certain cases it can reduce RMSEP values by almost a factor of two. As this technique is neither widely used in chemometrics nor yet applied in pharmaceutical analysis, it is desirable to present its theoretical considerations and implementation (with example MATLAB/Octave code). As an illustrative example we present a chemometric model for content recognition of a tablet (12 possible compounds in binary or ternary combinations) from the UV spectrum of its methanolic extract. The BFR technique gave the lowest prediction error, and the estimators obtained have a clearer interpretation than in the case of PCR, PLS and the other techniques used for comparison. In our opinion this technique should be considered in any chemometric approach as it often shows better performance. PMID:18450403
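A simplified Python sketch of the BFR idea (not the authors' MATLAB/Octave code): each spectrum is projected onto a small set of equally spaced cubic B-spline basis functions, and the analyte content is regressed on the resulting scores. The integrals mentioned in the abstract are approximated here by discrete sums over the wavelength grid, and the spectra are synthetic.

```python
# Simplified sketch of basis function regression: project each spectrum onto
# equally spaced cubic B-spline basis functions and regress the analyte
# content on the resulting scores.  Synthetic spectra; not the authors' code.
import numpy as np
from scipy.interpolate import splev

wl = np.linspace(200, 400, 201)                  # wavelength grid (nm)
k, n_basis = 3, 12                               # cubic splines, 12 basis functions
inner = np.linspace(wl[0], wl[-1], n_basis - k + 1)
knots = np.concatenate([[wl[0]] * k, inner, [wl[-1]] * k])   # open knot vector

# Design matrix of B-spline basis functions evaluated on the wavelength grid.
basis = np.column_stack([
    splev(wl, (knots, np.eye(n_basis)[i], k)) for i in range(n_basis)
])

rng = np.random.default_rng(5)
conc = rng.uniform(0, 1, 40)                     # analyte content of 40 samples
pure = np.exp(-0.5 * ((wl - 280) / 15) ** 2)     # pure-component absorption band
spectra = np.outer(conc, pure) + rng.normal(0, 0.01, (40, wl.size))

scores = spectra @ basis                         # projection onto the spline basis
coef, *_ = np.linalg.lstsq(scores, conc, rcond=None)
print(np.sqrt(np.mean((scores @ coef - conc) ** 2)))   # calibration RMSE
```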
Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression
Peng, Limin; Xu, Jinfeng; Kutner, Nancy
2013-01-01
Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515
Regression modeling of ground-water flow
Cooley, R.L.; Naff, R.L.
1985-01-01
Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)
Technology Transfer Automated Retrieval System (TEKTRAN)
In precision agriculture regression has been used widely to quantify the relationship between soil attributes and other environmental variables. However, spatial correlation existing in soil samples usually makes the regression model suboptimal. In this study, a regression-kriging method was attemp...
NASA Astrophysics Data System (ADS)
Darnah
2016-04-01
Poisson regression is used when the response variable is count data based on the Poisson distribution. The Poisson distribution assumes equidispersion (variance equal to the mean). In practice, count data are often over- or under-dispersed, so Poisson regression is inappropriate: it may underestimate the standard errors and overstate the significance of the regression parameters, and consequently give misleading inference about them. This paper suggests the generalized Poisson regression model to handle over- and under-dispersion in the Poisson regression model. The Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Based on the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. The Poisson regression model exhibits overdispersion, so generalized Poisson regression is used instead. The best generalized Poisson regression model shows that the influential factor is the percentage of families who do not have a healthy house. The model is interpreted as follows: each additional 1 percentage point of families without a healthy house adds one filariasis patient.
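The sketch below contrasts Poisson and generalized Poisson fits on synthetic overdispersed counts (not the East Java filariasis data); it assumes a statsmodels version that provides the GeneralizedPoisson model.

```python
# Sketch contrasting Poisson and generalized Poisson regression on
# overdispersed counts.  Synthetic data, not the East Java filariasis data;
# assumes a statsmodels version that includes GeneralizedPoisson.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import Poisson, GeneralizedPoisson

rng = np.random.default_rng(6)
n = 500
x = rng.uniform(0, 1, n)                     # e.g. share of families without a healthy house
mu = np.exp(0.2 + 1.5 * x)
# negative binomial draws give overdispersion relative to Poisson
y = rng.negative_binomial(n=2, p=2 / (2 + mu))

X = sm.add_constant(x)
pois = Poisson(y, X).fit(disp=False)
gpois = GeneralizedPoisson(y, X).fit(disp=False)

print("Poisson coef:", pois.params, " AIC:", pois.aic)
print("GenPois coef:", gpois.params, " AIC:", gpois.aic)   # usually lower AIC
```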
Regression of altitude-produced cardiac hypertrophy.
NASA Technical Reports Server (NTRS)
Sizemore, D. A.; Mcintyre, T. W.; Van Liere, E. J.; Wilson, M. F.
1973-01-01
The rate of regression of cardiac hypertrophy with time has been determined in adult male albino rats. The hypertrophy was induced by intermittent exposure to simulated high altitude. The percentage hypertrophy was much greater (46%) in the right ventricle than in the left (16%). The regression could be adequately fitted to a single exponential function with a half-time of 6.73 ± 0.71 days (90% CI). There was no significant difference in the rates of regression for the two ventricles.
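A small sketch of how such a half-time can be recovered by fitting a single exponential to regression-of-hypertrophy data; the data points below are invented for illustration, and only the reported half-time of 6.73 ± 0.71 days comes from the abstract.

```python
# Sketch of fitting a single-exponential decay and recovering the half-time.
# The data points are made up; the paper's reported value is 6.73 ± 0.71 days.
import numpy as np
from scipy.optimize import curve_fit

days = np.array([0, 2, 4, 7, 10, 14, 21, 28], dtype=float)
pct_hypertrophy = np.array([46, 38, 31, 22, 16, 11, 5, 3], dtype=float)

def single_exp(t, h0, k):
    return h0 * np.exp(-k * t)

(h0, k), _ = curve_fit(single_exp, days, pct_hypertrophy, p0=(45, 0.1))
half_time = np.log(2) / k
print(f"half-time ~ {half_time:.2f} days")
```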
Miralles, Aurélien; Hipsley, Christy A.; Erens, Jesse; Gehara, Marcelo; Rakotoarison, Andolalao; Glaw, Frank; Müller, Johannes; Vences, Miguel
2015-01-01
Scincine lizards in Madagascar form an endemic clade of about 60 species exhibiting a variety of ecomorphological adaptations. Several subclades have adapted to burrowing and convergently regressed their limbs and eyes, resulting in a variety of partial and completely limbless morphologies among extant taxa. However, patterns of limb regression in these taxa have not been studied in detail. Here we fill this gap in knowledge by providing a phylogenetic analysis of DNA sequences of three mitochondrial and four nuclear gene fragments in an extended sampling of Malagasy skinks, and microtomographic analyses of osteology of various burrowing taxa adapted to sand substrate. Based on our data we propose to (i) consider Sirenoscincus Sakata & Hikida, 2003, as junior synonym of Voeltzkowia Boettger, 1893; (ii) resurrect the genus name Grandidierina Mocquard, 1894, for four species previously included in Voeltzkowia; and (iii) consider Androngo Brygoo, 1982, as junior synonym of Pygomeles Grandidier, 1867. By supporting the clade consisting of the limbless Voeltzkowia mira and the forelimb-only taxa V. mobydick and V. yamagishii, our data indicate that full regression of limbs and eyes occurred in parallel twice in the genus Voeltzkowia (as hitherto defined) that we consider as a sand-swimming ecomorph: in the Voeltzkowia clade sensu stricto the regression first affected the hindlimbs and subsequently the forelimbs, whereas the Grandidierina clade first regressed the forelimbs and subsequently the hindlimbs following the pattern prevalent in squamates. Timetree reconstructions for the Malagasy Scincidae contain a substantial amount of uncertainty due to the absence of suitable primary fossil calibrations. However, our preliminary reconstructions suggest rapid limb regression in Malagasy scincids with an estimated maximal duration of 6 MYr for a complete regression in Paracontias, and 4 and 8 MYr respectively for complete regression of forelimbs in Grandidierina and
ERIC Educational Resources Information Center
Exceptional Parent, 1987
1987-01-01
Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)
REGRESSION ESTIMATES FOR TOPOLOGICAL-HYDROGRAPH INPUT.
Karlinger, Michael R.; Guertin, D. Phillip; Troutman, Brent M.
1988-01-01
Physiographic, hydrologic, and rainfall data from 18 small drainage basins in semiarid, central Wyoming were used to calibrate topological, unit-hydrograph models for celerity, the average rate of travel of a flood wave through the basin. The data set consisted of basin characteristics and hydrologic data for the 18 basins and rainfall data for 68 storms. Calibrated values of celerity and peak discharges subsequently were regressed as a function of the basin characteristics and excess rainfall volume. Predicted values obtained in this way can be used as input for estimating hydrographs in ungaged basins. The regression models included ordinary least-squares and seemingly unrelated regression. This latter regression model jointly estimated the celerity and peak discharge.
TWSVR: Regression via Twin Support Vector Machine.
Khemchandani, Reshma; Goyal, Keshav; Chandra, Suresh
2016-02-01
Taking motivation from Twin Support Vector Machine (TWSVM) formulation, Peng (2010) attempted to propose Twin Support Vector Regression (TSVR) where the regressor is obtained via solving a pair of quadratic programming problems (QPPs). In this paper we argue that TSVR formulation is not in the true spirit of TWSVM. Further, taking motivation from Bi and Bennett (2003), we propose an alternative approach to find a formulation for Twin Support Vector Regression (TWSVR) which is in the true spirit of TWSVM. We show that our proposed TWSVR can be derived from TWSVM for an appropriately constructed classification problem. To check the efficacy of our proposed TWSVR we compare its performance with TSVR and classical Support Vector Regression (SVR) on various regression datasets.
Some Simple Computational Formulas for Multiple Regression
ERIC Educational Resources Information Center
Aiken, Lewis R., Jr.
1974-01-01
Short-cut formulas are presented for direct computation of the beta weights, the standard errors of the beta weights, and the multiple correlation coefficient for multiple regression problems involving three independent variables and one dependent variable. (Author)
The Geometry of Enhancement in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.
2011-01-01
In linear multiple regression, "enhancement" is said to occur when R² = b′r > r′r, where b is a p × 1 vector of standardized regression coefficients and r is a p × 1 vector of correlations between a criterion y and a set of standardized regressors, x. When p = 1 then b ≅ r and enhancement cannot…
There is No Quantum Regression Theorem
Ford, G.W.; O'Connell, R.F.
1996-07-01
The Onsager regression hypothesis states that the regression of fluctuations is governed by macroscopic equations describing the approach to equilibrium. It is here asserted that this hypothesis fails in the quantum case. This is shown first by explicit calculation for the example of quantum Brownian motion of an oscillator and then in general from the fluctuation-dissipation theorem. It is asserted that the correct generalization of the Onsager hypothesis is the fluctuation-dissipation theorem. © 1996 The American Physical Society.
Synthesizing regression results: a factored likelihood method.
Wu, Meng-Jia; Becker, Betsy Jane
2013-06-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported in the regression studies to calculate synthesized standardized slopes. It uses available correlations to estimate missing ones through a series of regressions, allowing us to synthesize correlations among variables as if each included study contained all the same variables. Great accuracy and stability of this method under fixed-effects models were found through Monte Carlo simulation. An example was provided to demonstrate the steps for calculating the synthesized slopes through sweep operators. By rearranging the predictors in the included regression models or omitting a relatively small number of correlations from those models, we can easily apply the factored likelihood method to many situations involving synthesis of linear models. Limitations and other possible methods for synthesizing more complicated models are discussed. Copyright © 2012 John Wiley & Sons, Ltd. PMID:26053653
Post-processing through linear regression
NASA Astrophysics Data System (ADS)
van Schaeybroeck, B.; Vannitsem, S.
2011-03-01
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) that yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
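The simplest of these schemes, OLS post-processing, amounts to learning a linear correction from past forecast-observation pairs and applying it to new forecasts, as in the hedged sketch below on synthetic data; the other variants (TDTR, TLS, GM, EVMOS) modify how the regression itself is estimated.

```python
# Minimal sketch of OLS post-processing: learn a linear correction from past
# forecast/observation pairs and apply it to new forecasts.  Synthetic data;
# the paper's comparison also covers TDTR, TLS, GM and EVMOS variants.
import numpy as np

rng = np.random.default_rng(7)
n_train, n_test = 300, 100
truth = rng.normal(size=n_train + n_test)
forecast = 0.7 * truth + 0.5 + rng.normal(scale=0.3, size=truth.size)  # biased model

X = np.column_stack([np.ones(n_train), forecast[:n_train]])
coef, *_ = np.linalg.lstsq(X, truth[:n_train], rcond=None)

corrected = coef[0] + coef[1] * forecast[n_train:]
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("raw RMSE:      ", rmse(forecast[n_train:], truth[n_train:]))
print("corrected RMSE:", rmse(corrected, truth[n_train:]))
```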
An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.
ERIC Educational Resources Information Center
Bockenholt, Ulf
1999-01-01
Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining INAR(1) representation with a random effects approach.…
NASA Astrophysics Data System (ADS)
Gutierrez, R. R.; Abad, J. D.; Parsons, D. R.
2011-12-01
The quantification of the variability of bedform geometry is necessary for scientific and practical purposes. For the former purpose, it is necessary for modeling bed roughness, cross-strata sets, vertical sorting, sediment transport rates, the transition between two-dimensional and three-dimensional dunes, velocity pulsations, flow over bedforms, the interaction between flow over bedforms and groundwater, and the transport of contaminants. For practical purposes, the study of the variability of bedforms is important to predict floods and flow resistance, to predict uplifting of manmade structures beneath river beds, to track future changes of bedforms and biota following dam removal, to estimate the relationship between bedform characteristics and biota, and in river restoration, among others. Currently there is no standard nomenclature and procedure to separate bedform features such as sand waves, dunes and ripples, which are commonly present in large rivers. Likewise, there is no standard definition of the scope of the different scales of such bedform features. The present study proposes a standardization of the nomenclature and symbolic representation of bedform features and elaborates on the combined application of a robust spline filter and continuous wavelet transforms to separate the morphodynamic features. A fully automated robust spline procedure for uniformly sampled datasets is used. The algorithm, based on a penalized least squares method, allows fast smoothing of uniformly sampled data by means of the discrete cosine transform. The wavelet transforms, which overcome some limitations of the Fourier transforms, are applied to identify the spectrum of bedform wavelengths. The proposed separation method is applied to a 370-m-wide and 1.028-km-long swath of bed morphology data from the Parana River, one of the world's largest rivers, located in Argentina. After the separation is carried out, the descriptors (e.g. wavelength, slope, and amplitude for both
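A one-dimensional sketch of the penalized least-squares smoothing via the discrete cosine transform that the robust spline filter is based on; the smoothing parameter is fixed by hand here rather than selected automatically, and the bed profile is synthetic.

```python
# One-dimensional sketch of penalized least-squares smoothing via the discrete
# cosine transform (the idea behind the fully automated robust spline filter);
# the smoothing parameter s is fixed here rather than chosen automatically.
import numpy as np
from scipy.fft import dct, idct

def dct_smooth(y, s):
    n = y.size
    # eigenvalues of the 1-D second-difference operator in the DCT basis
    lam = -2.0 + 2.0 * np.cos(np.arange(n) * np.pi / n)
    gamma = 1.0 / (1.0 + s * lam ** 2)
    return idct(gamma * dct(y, norm="ortho"), norm="ortho")

rng = np.random.default_rng(8)
x = np.linspace(0, 10, 500)
bed = np.sin(x) + 0.2 * np.sin(7 * x)           # dune-like + ripple-like scales
noisy = bed + rng.normal(scale=0.1, size=x.size)

smooth = dct_smooth(noisy, s=1e5)               # keeps the dune scale, damps ripples/noise
print(np.sqrt(np.mean((smooth - np.sin(x)) ** 2)))   # residual vs the dune-scale signal
```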
Correlation studies for B-spline modeled F2 Chapman parameters obtained from FORMOSAT-3/COSMIC data
NASA Astrophysics Data System (ADS)
Limberger, M.; Liang, W.; Schmidt, M.; Dettmering, D.; Hernández-Pajares, M.; Hugentobler, U.
2014-12-01
The determination of ionospheric key quantities such as the maximum electron density of the F2 layer NmF2, the corresponding F2 peak height hmF2 and the F2 scale height HF2 are of high relevance in 4-D ionosphere modeling to provide information on the vertical structure of the electron density (Ne). The Ne distribution with respect to height can, for instance, be modeled by the commonly accepted F2 Chapman layer. An adequate and observation driven description of the vertical Ne variation can be obtained from electron density profiles (EDPs) derived by ionospheric radio occultation measurements between GPS and low Earth orbiter (LEO) satellites. For these purposes, the six FORMOSAT-3/COSMIC (F3/C) satellites provide an excellent opportunity to collect EDPs that cover most of the ionospheric region, in particular the F2 layer. For the contents of this paper, F3/C EDPs have been exploited to determine NmF2, hmF2 and HF2 within a regional modeling approach. As mathematical base functions, endpoint-interpolating polynomial B-splines are considered to model the key parameters with respect to longitude, latitude and time. The description of deterministic processes and the verification of this modeling approach have been published previously in Limberger et al. (2013), whereas this paper should be considered as an extension dealing with related correlation studies, a topic to which less attention has been paid in the literature. Relations between the B-spline series coefficients regarding specific key parameters as well as dependencies between the three F2 Chapman key parameters are in the main focus. Dependencies are interpreted from the post-derived correlation matrices as a result of (1) a simulated scenario without data gaps by taking dense, homogenously distributed profiles into account and (2) two real data scenarios on 1 July 2008 and 1 July 2012 including sparsely, inhomogeneously distributed F3/C EDPs. Moderate correlations between hmF2 and HF2 as well as inverse
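For reference, the F2 Chapman layer referred to above expresses the electron density as a function of the three key parameters; the sketch below evaluates it for illustrative values of NmF2, hmF2 and HF2, which are not taken from the paper.

```python
# Sketch of the F2 Chapman layer used to describe the vertical electron
# density profile, parameterized by the three key quantities of the abstract
# (NmF2, hmF2, HF2).  Parameter values are illustrative only.
import numpy as np

def chapman_f2(h_km, nmf2, hmf2, hf2):
    """Electron density at height h for an F2 Chapman layer."""
    z = (h_km - hmf2) / hf2
    return nmf2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

heights = np.linspace(150, 800, 14)
ne = chapman_f2(heights, nmf2=1.0e12, hmf2=300.0, hf2=60.0)   # el/m^3, km, km
for h, n in zip(heights, ne):
    print(f"{h:6.0f} km   {n:10.3e} el/m^3")
```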
Pandithevan, Ponnusamy
2015-02-01
In tissue engineering, the successful modeling of a scaffold for the replacement of damaged body parts depends mainly on the external geometry and internal architecture, in order to avoid adverse effects such as pain and an inability to transfer the load to the surrounding bone. Due to their flexibility in controlling process parameters, layered manufacturing processes are widely used for the fabrication of bone tissue engineering scaffolds from a given computer-aided design model. This article presents a squared distance minimization approach for weight optimization of non-uniform rational B-spline curves and surfaces to modify the geometry so that it exactly fits the defect region automatically, and thus to fabricate a scaffold specific to subject and site. The study showed that although the errors in the B-spline curve and surface were reduced more by the squared distance method than by the point distance and tangent distance methods, the errors could be reduced further in the rational B-spline curve and surface, as the optimal weights can change the shape to that desired for the defect site. In order to measure the efficacy of the present approach, the results were compared with the point distance method and the tangent distance method in optimizing the non-rational and rational B-spline curve and surface fitting for the defect site. The optimized geometry was then used to construct the scaffold in a fused deposition modeling system as an example. The results revealed that the squared distance-based weight optimization of the rational curve and surface makes the defect-specific geometry fit the defect region better than the other methods used.
NASA Technical Reports Server (NTRS)
2005-01-01
The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next-generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such, they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g., distributed vectored, hybrid and electric drive propulsion concepts).
NASA Astrophysics Data System (ADS)
Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan
2016-09-01
In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, wave propagation in various linear, nonlinear, lossless and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only more accurate and computationally efficient than the conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.
An empirical evaluation of spatial regression models
NASA Astrophysics Data System (ADS)
Gao, Xiaolu; Asami, Yasushi; Chung, Chang-Jo F.
2006-10-01
Conventional statistical methods are often ineffective for evaluating spatial regression models. One reason is that spatial regression models usually have more parameters or smaller sample sizes than a simple model, so their degrees of freedom are reduced. Thus, it is often impractical to evaluate them with traditional tests. Another reason, theoretically associated with statistical methods, is that statistical criteria depend crucially on assumptions such as normality, independence, and homogeneity. This may create problems because the assumptions themselves are open to question. In view of these problems, this paper proposes an alternative empirical evaluation method. To illustrate the idea, a few hedonic regression models for a house and land price data set are evaluated, including a simple, ordinary linear regression model and three spatial models. Their performance as to how well the price of the house and land can be predicted is examined. With a cross-validation technique, the price at each sample point is predicted with a model estimated from all samples except the one concerned. Then, empirical criteria are established whereby the predicted prices are compared with the real, observed prices. The proposed method provides objective guidance for the selection of a suitable model specification for a data set. Moreover, the method can be seen as an alternative way to test the significance of the spatial relationships considered in spatial regression models.
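The cross-validation scheme is straightforward to reproduce. A minimal sketch with scikit-learn, using a plain linear model and synthetic data in place of the hedonic models and the house and land price data set:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))             # hypothetical property attributes
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=50)

    errors = []
    for train, test in LeaveOneOut().split(X):
        # predict each sample with a model fitted to all other samples
        model = LinearRegression().fit(X[train], y[train])
        errors.append(y[test][0] - model.predict(X[test])[0])
    print("RMSE:", np.sqrt(np.mean(np.square(errors))))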
Mental chronometry with simple linear regression.
Chen, J Y
1997-10-01
Typically, mental chronometry is performed by introducing an independent variable postulated to affect selectively some stage of a presumed multistage process. However, the effect could be a global one that spreads proportionally over all stages of the process. Currently, there is no method to test this possibility, although simple linear regression might serve the purpose. In the present study, the regression approach was tested with tasks (memory scanning and mental rotation) that, according to the dominant theories, involve a selective effect, and with a task (word superiority effect) that involves a global effect. The results indicate that (1) the manipulation of the size of a memory set or of angular disparity affects the intercept of the regression function that relates the times for memory scanning with different set sizes or for mental rotation with different angular disparities, and (2) the manipulation of context affects the slope of the regression function that relates the times for detecting a target character under word and nonword conditions. These results ratify the regression approach as a useful method for doing mental chronometry. PMID:9347535
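The logic of the test is that an additive, stage-specific manipulation shifts the intercept of the function relating the two conditions' times, whereas a global proportional effect changes its slope. A minimal sketch with SciPy, using hypothetical reaction times:

    import numpy as np
    from scipy.stats import linregress

    set_size = np.array([1, 2, 4, 6])           # memory-set sizes
    rt_a = np.array([430., 470., 550., 630.])   # condition A times [ms]
    rt_b = rt_a + 50.0                          # additive (selective) effect

    fit = linregress(rt_a, rt_b)                # regress condition B on A
    # slope ~ 1 with intercept ~ +50 ms indicates a selective effect;
    # a global effect would instead give slope > 1 with intercept ~ 0
    print(fit.slope, fit.intercept)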
Hierarchical regression for analyses of multiple outcomes.
Richardson, David B; Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R; Chu, Haitao
2015-09-01
In cohort mortality studies, there often is interest in associations between an exposure of primary interest and mortality due to a range of different causes. A standard approach to such analyses involves fitting a separate regression model for each type of outcome. However, the statistical precision of some estimated associations may be poor because of sparse data. In this paper, we describe a hierarchical regression model for estimation of parameters describing outcome-specific relative rate functions and associated credible intervals. The proposed model uses background stratification to provide flexible control for the outcome-specific associations of potential confounders, and it employs a hierarchical "shrinkage" approach to stabilize estimates of an exposure's associations with mortality due to different causes of death. The approach is illustrated in analyses of cancer mortality in 2 cohorts: a cohort of dioxin-exposed US chemical workers and a cohort of radiation-exposed Japanese atomic bomb survivors. Compared with standard regression estimates of associations, hierarchical regression yielded estimates with improved precision that tended to have less extreme values. The hierarchical regression approach also allowed the fitting of models with effect-measure modification. The proposed hierarchical approach can yield estimates of association that are more precise than conventional estimates when one wishes to estimate associations with multiple outcomes. PMID:26232395
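As a simplified illustration of the shrinkage idea (precision-weighted pooling toward a common mean, not the authors' full background-stratified hierarchical model), with hypothetical outcome-specific estimates:

    import numpy as np

    beta = np.array([0.8, -0.1, 2.5, 0.4])   # outcome-specific log-rate estimates
    se   = np.array([0.3, 0.2, 1.5, 0.25])   # their standard errors
    mu, tau2 = beta.mean(), 0.25             # assumed prior mean and variance

    # shrunken estimates: precision-weighted average of estimate and prior mean
    w = (1 / se**2) / (1 / se**2 + 1 / tau2)
    beta_shrunk = w * beta + (1 - w) * mu
    print(beta_shrunk)   # imprecise, extreme estimates move most toward mu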
MULTILINEAR TENSOR REGRESSION FOR LONGITUDINAL RELATIONAL DATA
Hoff, Peter D.
2016-01-01
A fundamental aspect of relational data, such as from a social network, is the possibility of dependence among the relations. In particular, the relations between members of one pair of nodes may have an effect on the relations between members of another pair. This article develops a type of regression model to estimate such effects in the context of longitudinal and multivariate relational data, or other data that can be represented in the form of a tensor. The model is based on a general multilinear tensor regression model, a special case of which is a tensor autoregression model in which the tensor of relations at one time point is parsimoniously regressed on relations from previous time points. This is done via a separable, or Kronecker-structured, regression parameter along with a separable covariance model. In the context of an analysis of longitudinal multivariate relational data, it is shown how the multilinear tensor regression model can represent patterns that often appear in relational and network data, such as reciprocity and transitivity. PMID:27458495
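A minimal sketch of the separable idea for a matrix-valued time series, Y_t ≈ A Y_{t-1} B' + E_t, fitted by alternating least squares; the dimensions and data are synthetic, and the paper's model is more general than this bilinear special case:

    import numpy as np

    rng = np.random.default_rng(1)
    T, m, n = 200, 4, 3
    A_true = 0.3 * rng.normal(size=(m, m))
    B_true = 0.3 * rng.normal(size=(n, n))
    Y = np.zeros((T, m, n))
    for t in range(1, T):                    # matrix autoregression with noise
        Y[t] = A_true @ Y[t-1] @ B_true.T + 0.1 * rng.normal(size=(m, n))

    A, B = np.eye(m), np.eye(n)              # A, B identified only up to scale
    for _ in range(50):                      # alternating least squares
        X = Y[:-1] @ B.T                     # X_t = Y_{t-1} B'
        A = sum(Y[t+1] @ X[t].T for t in range(T-1)) @ \
            np.linalg.pinv(sum(X[t] @ X[t].T for t in range(T-1)))
        Z = np.einsum('ij,tjk->tik', A, Y[:-1])        # Z_t = A Y_{t-1}
        B = sum(Y[t+1].T @ Z[t] for t in range(T-1)) @ \
            np.linalg.pinv(sum(Z[t].T @ Z[t] for t in range(T-1)))

    pred = np.einsum('ij,tjk,lk->til', A, Y[:-1], B)   # A Y_{t-1} B'
    print(np.linalg.norm(pred - Y[1:]) / np.linalg.norm(Y[1:]))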
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
Zhang, Zhikun; Zhou, Yingshun; Wang, Hongning; Zeng, Fanya; Yang, Xin; Zhang, Yi; Zhang, Anyun
2016-01-01
Infectious bronchitis virus (IBV) is a highly variable virus with a large number of genotypes. During 2011-2012, nineteen wild IBV strains were isolated in China. Sequence analysis showed that these isolates were divided into five sub-clusters: A2-like, CKCHLDL08I-like, SAIBK-like, KM91-like and TW97/4-like. Phylogenetic analysis based on the 1118 sequences available online suggested that all IBVs can be classified into six clusters. The prevalent strains, including all the isolates, were in cluster VI, with a genetic distance of 0.194-0.259 to Mass-type vaccines. In addition, we introduced the smoothing spline clustering (SSC) method to estimate the highly variable sites for some sub-clusters. The results showed that the highly variable sites differ among sub-clusters; the N-terminal sequences of the 4/91-like, TW97/4-like and Arkansas-like sub-clusters are more variable than those of the others. This is the first time that the SSC method has been used for the evolutionary study of IBV.
Dougherty, Geoff; Johnson, Michael J
2008-03-01
The clinical recognition of abnormal vascular tortuosity is important in the diagnosis of many diseases. Metrics based on three-dimensional (3D) curvature, using approximating polynomial spline-fitting to "data balls" centered along the mid-line of the vessel, minimize digitization errors and give tortuosity values largely independent of the resolution of the imaging system. We applied two of these metrics to a number of clinical vascular systems, using both 2D and 3D datasets. Using abdominal aortograms of low tortuosity, we established their validity by their strong correlation with the ranking of an expert panel of three vascular surgeons. The values of the Spearman rank correlation coefficient between our rankings, using a data ball radius of one-quarter of the local vessel radius, and the average ranking of the expert panel were 0.96 (with a 95% confidence interval of [0.91, 0.99]) for the mean curvature and 0.98 ([0.94, 0.99]) for the root-mean-square (RMS) curvature. These confidence intervals indicate that our automated analysis is producing rankings whose reliability is similar to that of a human expert, and is significantly better than that achieved with existing algorithms. The metrics provided good discrimination between vessels of different tortuosity for both 2D and 3D datasets, and produced values sufficiently discriminating to assess the relative utility of arteries for endoluminal repair of aneurysms. PMID:17419088
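A minimal sketch of spline-based curvature estimation for a 3D mid-line, in which a plain parametric smoothing spline stands in for the authors' "data ball" fitting and the mid-line is a hypothetical helix:

    import numpy as np
    from scipy.interpolate import splprep, splev

    t = np.linspace(0, 4*np.pi, 200)
    pts = np.c_[np.cos(t), np.sin(t), 0.1*t]   # hypothetical vessel mid-line

    tck, u = splprep(pts.T, s=1e-4)            # parametric cubic smoothing spline
    d1 = np.array(splev(u, tck, der=1)).T      # r'(u)
    d2 = np.array(splev(u, tck, der=2)).T      # r''(u)
    # parameterisation-invariant curvature: |r' x r''| / |r'|^3
    kappa = np.linalg.norm(np.cross(d1, d2), axis=1) / np.linalg.norm(d1, axis=1)**3

    mean_curv = kappa.mean()                   # unweighted over samples
    rms_curv = np.sqrt((kappa**2).mean())
    print(mean_curv, rms_curv)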
NASA Astrophysics Data System (ADS)
Askari, H.; Esmailzadeh, E.; Barari, A.
2015-09-01
A novel procedure for the nonlinear vibration analysis of curved beams is presented. The Non-Uniform Rational B-Spline (NURBS) is combined with the Euler-Bernoulli beam theory to define the curvature of the structure. The governing equation of motion and a general frequency formula, expressed in terms of the NURBS variables and applicable to any type of curvature, are developed. The Galerkin procedure is implemented to obtain the nonlinear ordinary differential equation of the curved system, and the multiple time scales method is utilized to find the corresponding frequency responses. As a case study, the nonlinear vibration of carbon nanotubes with different shapes of curvature is investigated. The effect of the oscillation amplitude and the waviness on the natural frequency of the curved nanotube is evaluated, and the primary resonance of the system with respect to variations of different parameters is discussed. For comparison with molecular dynamics simulation, the natural frequencies evaluated with the proposed approach are compared with those reported in the literature for several types of carbon nanotubes.
Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong
2015-07-15
The self-controlling feedback control method requires an external periodic oscillator with a special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series, and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system's current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to the expected state. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to reach the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.
Oguro, Sota; Tokuda, Junichi; Elhawary, Haytham; Haker, Steven; Kikinis, Ron; Tempany, Clare M.C.; Hata, Nobuhiko
2009-01-01
Purpose: To apply an intensity-based nonrigid registration algorithm to MRI-guided prostate brachytherapy clinical data and to assess its accuracy. Materials and Methods: A nonrigid registration of preoperative MRI to intraoperative MRI images was carried out in 16 cases using a Basis-Spline algorithm in a retrospective manner. The registration was assessed qualitatively by experts’ visual inspection and quantitatively by measuring the Dice similarity coefficient (DSC) for the total gland (TG), central gland (CG), and peripheral zone (PZ), the mutual information (MI) metric, and the fiducial registration error (FRE) between corresponding anatomical landmarks, for both the nonrigid and a rigid registration method. Results: All 16 cases were successfully registered in less than 5 min. After the nonrigid registration, DSC values for TG, CG, and PZ were 0.91, 0.89, and 0.79, respectively; the MI metric was −0.19 ± 0.07 and the FRE was 2.3 ± 1.8 mm. All the metrics were significantly better than in the case of rigid registration, as determined by one-sided t-tests. Conclusion: The intensity-based nonrigid registration method using clinical data was demonstrated to be feasible and showed statistically improved metrics when compared to rigid registration alone. The method is a valuable tool to integrate pre- and intraoperative images for brachytherapy. PMID:19856437
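The Dice similarity coefficient used in the assessment is simple to compute from binary label masks; a minimal sketch with hypothetical masks:

    import numpy as np

    def dice(a, b):
        """DSC = 2|A intersect B| / (|A| + |B|) for boolean masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    fixed  = np.zeros((64, 64), bool); fixed[16:48, 16:48] = True
    moving = np.zeros((64, 64), bool); moving[20:52, 18:50] = True
    print(dice(fixed, moving))   # 1.0 = perfect overlap, 0.0 = none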
Siefert, Andrew W.; Icenogle, David A.; Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Rossignac, Jarek; Lerakis, Stamatios; Yoganathan, Ajit P.
2013-01-01
Patient-specific models of the heart’s mitral valve (MV) exhibit potential for surgical planning. While advances in 3D echocardiography (3DE) have provided adequate resolution to extract MV leaflet geometry, no study has quantitatively assessed the accuracy of the modeled leaflets against a ground-truth standard for temporal frames beyond systolic closure or for differing valvular dysfunctions. The accuracy of a 3DE-based segmentation methodology based on J-splines was assessed for porcine MVs with known 4D leaflet coordinates within a pulsatile simulator during closure, peak closure, and opening for a control, a prolapsed, and a billowing MV model. For all time points, the mean distance errors between the segmented models and ground-truth data were 0.40±0.32 mm, 0.52±0.51 mm, and 0.74±0.69 mm for the control, flail, and billowing models, respectively. For all models and temporal frames, 95% of the distance errors were below 1.64 mm. When applied to a patient data set, segmentation was able to confirm a regurgitant orifice and post-operative improvements in coaptation. This study provides an experimental platform for assessing the accuracy of an MV segmentation methodology at phases beyond systolic closure and for differing MV dysfunctions. The results demonstrate the accuracy of an MV segmentation methodology for the development of future surgical planning tools. PMID:23460042
Kananenka, Alexei A; Welden, Alicia Rae; Lan, Tran Nguyen; Gull, Emanuel; Zgid, Dominika
2016-05-10
The popular, stable, robust, and computationally inexpensive cubic spline interpolation algorithm is adopted and used for finite temperature Green's function calculations of realistic systems. We demonstrate that with appropriate modifications the temperature dependence can be preserved while the Green's function grid size can be reduced by about 2 orders of magnitude by replacing the standard Matsubara frequency grid with a sparser grid and a set of interpolation coefficients. We benchmarked the accuracy of our algorithm as a function of a single parameter sensitive to the shape of the Green's function. Through numerous examples, we confirmed that our algorithm can be utilized in a systematically improvable, controlled, and black-box manner and highly accurate one- and two-body energies and one-particle density matrices can be obtained using only around 5% of the original grid points. Additionally, we established that to improve accuracy by an order of magnitude, the number of grid points needs to be doubled, whereas for the Matsubara frequency grid, an order of magnitude more grid points must be used. This suggests that realistic calculations with large basis sets that were previously out of reach because they required enormous grid sizes may now become feasible. PMID:27049642
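A minimal sketch of the grid-compression idea with SciPy's cubic spline, where a smooth toy function stands in for a Green's function sampled on a Matsubara grid:

    import numpy as np
    from scipy.interpolate import CubicSpline

    dense = np.linspace(0.0, 10.0, 2000)          # "full" grid
    g = lambda x: 1.0 / (1.0 + x**2)              # smooth stand-in for G(iw_n)

    sparse = np.linspace(0.0, 10.0, 100)          # ~5% of the points
    spline = CubicSpline(sparse, g(sparse))

    err = np.max(np.abs(spline(dense) - g(dense)))
    print(err)   # the sparse grid plus spline recovers the dense grid accurately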
Zhan, Choujun; Situ, Wuchao; Yeung, Lam Fat; Tsang, Peter Wai-Ming; Yang, Genke
2014-01-01
The inverse problem of identifying unknown parameters of dynamical biological systems of known structure, modelled by ordinary differential equations or delay differential equations, from experimental data is treated in this paper. A two-stage approach is adopted: first, by combining spline theory and Nonlinear Programming (NLP), the parameter estimation problem is formulated as an optimization problem with only algebraic constraints; then, a new differential evolution (DE) algorithm is proposed to find a feasible solution. The approach is designed to handle problems of realistic size with noisy observation data. Three cases are studied to evaluate the performance of the proposed algorithm: two are based on benchmark models with a priori determined structure and parameters; the other is a particular biological system with unknown model structure, for which only a set of observation data is available and a nominal model is adopted for the identification. All the test systems were successfully identified using a reasonable amount of experimental data within an acceptable computation time. Experimental evaluation reveals that the proposed method is capable of fast estimation of the unknown parameters with good precision.
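A minimal sketch of parameter estimation by differential evolution on a toy ODE; for simplicity it uses direct numerical integration rather than the paper's spline/NLP collocation stage, and the model and bounds are hypothetical:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    def logistic(t, y, r, K):                 # toy ODE: dy/dt = r y (1 - y/K)
        return r * y * (1 - y / K)

    t_obs = np.linspace(0, 10, 25)
    true = solve_ivp(logistic, (0, 10), [0.1], args=(0.8, 5.0), t_eval=t_obs).y[0]
    y_obs = true + np.random.default_rng(2).normal(scale=0.05, size=t_obs.size)

    def sse(theta):                           # objective over theta = (r, K)
        r, K = theta
        y = solve_ivp(logistic, (0, 10), [0.1], args=(r, K), t_eval=t_obs).y[0]
        return np.sum((y - y_obs)**2)

    fit = differential_evolution(sse, bounds=[(0.1, 2.0), (1.0, 10.0)], seed=0)
    print(fit.x)   # recovered (r, K), close to the true (0.8, 5.0)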
NASA Astrophysics Data System (ADS)
Oliver, Todd; Ulerich, Rhys; Topalian, Victor; Malaya, Nick; Moser, Robert
2013-11-01
A discretization of the Navier-Stokes equations appropriate for efficient DNS of compressible, reacting, wall-bounded flows is developed and applied. The spatial discretization uses a Fourier-Galerkin/B-spline collocation approach. Because of the algebraic complexity of the constitutive models involved, a flux-based approach is used where the viscous terms are evaluated using repeated application of the first derivative operator. In such an approach, a filter is required to achieve appropriate dissipation at high wavenumbers. We formulate a new filter source operator based on the viscous operator. Temporal discretization is achieved using the SMR91 hybrid implicit/explicit scheme. The linear implicit operator is chosen to eliminate wall-normal acoustics from the CFL constraint while also decoupling the species equations from the remaining flow equations, which minimizes the cost of the required linear algebra. Results will be shown for a mildly supersonic, multispecies boundary layer case inspired by the flow over the ablating surface of a space capsule entering Earth's atmosphere. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].
Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki
2015-09-04
Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate elderly individuals' ambulatory ability in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate motor function in walk tests such as the TUG. The system tracks both legs and measures their trajectories. However, the legs might be close to each other, and one leg might be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than during straight walking and the moving direction changes rapidly. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull-Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chances of false tracking. In addition, we verify the measurement accuracy of the leg trajectory compared to a three-dimensional motion analysis system (VICON).
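The interpolation step is easy to reproduce: a uniform Catmull-Rom segment needs only the last two positions before the occlusion and the first two after it. A minimal sketch with hypothetical 2D leg positions:

    import numpy as np

    def catmull_rom(p0, p1, p2, p3, s):
        """Interpolate between p1 and p2; s in [0, 1], uniform parameterisation."""
        s = np.asarray(s)[:, None]
        return 0.5 * ((2*p1) + (-p0 + p2)*s + (2*p0 - 5*p1 + 4*p2 - p3)*s**2
                      + (-p0 + 3*p1 - 3*p2 + p3)*s**3)

    # last two positions before the occlusion and first two after it
    p0, p1, p2, p3 = map(np.array, ([0., 0.], [0.3, 0.1], [0.9, 0.5], [1.2, 0.7]))
    gap = catmull_rom(p0, p1, p2, p3, np.linspace(0, 1, 8))
    print(gap)   # filled-in trajectory between p1 and p2 during the occlusion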
Gandola, Emanuele; Antonioli, Manuela; Traficante, Alessio; Franceschini, Simone; Scardi, Michele; Congestri, Roberta
2016-05-01
Toxigenic cyanobacteria are one of the main health risks associated with water resources worldwide, as their toxins can affect humans and fauna exposed via drinking water, aquaculture and recreation. Microscopy monitoring of cyanobacteria in water bodies and massive growth systems is a routine operation for cell abundance and growth estimation. Here we present ACQUA (Automated Cyanobacterial Quantification Algorithm), a new fully automated image analysis method designed for filamentous genera in bright-field microscopy. A pre-processing algorithm has been developed to highlight filaments of interest against background signals due to other phytoplankton and dust. A spline-fitting algorithm has been designed to recombine interrupted and crossing filaments in order to perform accurate morphometric analysis and to extract the surface pattern information of the highlighted objects. In addition, 17 specific pattern indicators have been developed and used as input data for a machine-learning algorithm dedicated to distinguishing among five widespread toxic or potentially toxic filamentous genera in freshwater: Aphanizomenon, Cylindrospermopsis, Dolichospermum, Limnothrix and Planktothrix. The method was validated using freshwater samples from three Italian volcanic lakes, comparing automated vs. manual results. ACQUA proved to be a fast and accurate tool to rapidly assess freshwater quality and to characterize cyanobacterial assemblages in aquatic environments. PMID:27012737
Uncertainty quantification in DIC with Kriging regression
NASA Astrophysics Data System (ADS)
Wang, Dezhi; DiazDelaO, F. A.; Wang, Weizhuo; Lin, Xiaoshan; Patterson, Eann A.; Mottershead, John E.
2016-03-01
A Kriging regression model is developed as a post-processing technique for the treatment of measurement uncertainty in classical subset-based Digital Image Correlation (DIC). Regression is achieved by regularising the sample-point correlation matrix using a local, subset-based, assessment of the measurement error with assumed statistical normality and based on the Sum of Squared Differences (SSD) criterion. This leads to a Kriging-regression model in the form of a Gaussian process representing uncertainty on the Kriging estimate of the measured displacement field. The method is demonstrated using numerical and experimental examples. Kriging estimates of displacement fields are shown to be in excellent agreement with 'true' values for the numerical cases and in the experimental example uncertainty quantification is carried out using the Gaussian random process that forms part of the Kriging model. The root mean square error (RMSE) on the estimated displacements is produced and standard deviations on local strain estimates are determined.
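A minimal sketch of the underlying idea, Gaussian-process (Kriging) regression regularised by a per-point noise term, using scikit-learn; the per-point variances stand in for the subset-based SSD error assessment and the data are synthetic:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)
    X = np.linspace(0, 1, 30)[:, None]               # pixel coordinate
    noise_var = 1e-4 + 1e-3 * rng.random(30)         # local error estimates
    y = np.sin(4 * X[:, 0]) + rng.normal(scale=np.sqrt(noise_var))

    # alpha adds the per-point noise variance to the kernel diagonal
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=noise_var).fit(X, y)
    mean, std = gp.predict(np.linspace(0, 1, 200)[:, None], return_std=True)
    # 'std' quantifies the uncertainty on the Kriging displacement estimate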
A tutorial on Bayesian Normal linear regression
NASA Astrophysics Data System (ADS)
Klauenberg, Katy; Wübbeler, Gerd; Mickan, Bodo; Harris, Peter; Elster, Clemens
2015-12-01
Regression is a common task in metrology and often applied to calibrate instruments, evaluate inter-laboratory comparisons or determine fundamental constants, for example. Yet, a regression model cannot be uniquely formulated as a measurement function, and consequently the Guide to the Expression of Uncertainty in Measurement (GUM) and its supplements are not applicable directly. Bayesian inference, however, is well suited to regression tasks, and has the advantage of accounting for additional a priori information, which typically robustifies analyses. Furthermore, it is anticipated that future revisions of the GUM shall also embrace the Bayesian view. Guidance on Bayesian inference for regression tasks is largely lacking in metrology. For linear regression models with Gaussian measurement errors this tutorial gives explicit guidance. Divided into three steps, the tutorial first illustrates how a priori knowledge, which is available from previous experiments, can be translated into prior distributions from a specific class. These prior distributions have the advantage of yielding analytical, closed form results, thus avoiding the need to apply numerical methods such as Markov Chain Monte Carlo. Secondly, formulas for the posterior results are given, explained and illustrated, and software implementations are provided. In the third step, Bayesian tools are used to assess the assumptions behind the suggested approach. These three steps (prior elicitation, posterior calculation, and robustness to prior uncertainty and model adequacy) are critical to Bayesian inference. The general guidance given here for Normal linear regression tasks is accompanied by a simple, but real-world, metrological example. The calibration of a flow device serves as a running example and illustrates the three steps. It is shown that prior knowledge from previous calibrations of the same sonic nozzle enables robust predictions even for extrapolations.
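For the conjugate prior class described, the posterior is available in closed form. A minimal sketch with hypothetical prior values and synthetic calibration data (not the sonic-nozzle example):

    import numpy as np

    rng = np.random.default_rng(4)
    X = np.c_[np.ones(20), rng.uniform(0, 1, 20)]   # design: intercept + input
    beta_true = np.array([0.98, 0.05])
    y = X @ beta_true + rng.normal(scale=0.01, size=20)

    # Normal-inverse-Gamma prior: beta | s2 ~ N(m0, s2*inv(L0)), s2 ~ IG(a0, b0)
    m0, L0, a0, b0 = np.zeros(2), np.eye(2) * 1e-2, 1.0, 1e-4

    Ln = L0 + X.T @ X                                # posterior precision
    mn = np.linalg.solve(Ln, L0 @ m0 + X.T @ y)      # posterior mean
    an = a0 + len(y) / 2
    bn = b0 + 0.5 * (y @ y + m0 @ L0 @ m0 - mn @ Ln @ mn)
    print(mn, bn / (an - 1))                         # E[beta|y] and E[s2|y]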
Bremer, P.-T.
2014-08-26
ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.
MLREG, stepwise multiple linear regression program
Carder, J.H.
1981-09-01
This program is written in FORTRAN for an IBM computer and performs multiple linear regressions according to a stepwise procedure. The program transforms and combines old variables into new variables and prints input and transformed data, sums, raw sums of squares, residual sums of squares, means and standard deviations, correlation coefficients, regression results at each step, ANOVA at each step, and predicted response results at each step. This package contains an EXEC used to execute the program, sample input data and output listing, source listing, documentation, and card decks containing the EXEC, sample input, and FORTRAN source.
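The stepwise idea itself is easy to restate in a modern language. A minimal forward-selection sketch in Python, not a translation of the FORTRAN program, and without the F-test-based stopping rule a full stepwise procedure would use:

    import numpy as np

    def forward_stepwise(X, y, max_vars=3):
        """Greedily add the column that most reduces the residual sum of squares."""
        selected, remaining = [], list(range(X.shape[1]))
        while remaining and len(selected) < max_vars:
            rss = {}
            for j in remaining:
                A = np.c_[np.ones(len(y)), X[:, selected + [j]]]
                beta, *_ = np.linalg.lstsq(A, y, rcond=None)
                rss[j] = np.sum((y - A @ beta)**2)
            best = min(rss, key=rss.get)
            selected.append(best); remaining.remove(best)
        return selected

    rng = np.random.default_rng(5)
    X = rng.normal(size=(100, 6))
    y = 2*X[:, 1] - 3*X[:, 4] + rng.normal(size=100)
    print(forward_stepwise(X, y))   # picks columns 1 and 4 first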
Salience Assignment for Multiple-Instance Regression
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran
2007-01-01
We present a Multiple-Instance Learning (MIL) algorithm for determining the salience of each item in each bag with respect to the bag's real-valued label. We use an alternating-projections constrained optimization approach to simultaneously learn a regression model and estimate all salience values. We evaluate this algorithm on a significant real-world problem, crop yield modeling, and demonstrate that it provides more extensive, intuitive, and stable salience models than Primary-Instance Regression, which selects a single relevant item from each bag.
Spontaneous regression of a conjunctival naevus.
Haldar, Shreya; Leyland, Martin
2016-01-01
Conjunctival naevi are one of the most common lesions affecting the conjunctiva. While benign in the vast majority of cases, the risk of malignant transformation necessitates regular follow-up. They are well known to increase in size; however, we present the first photo-documented case of spontaneous regression of a conjunctival naevus. In most cases, surgical excision is performed because of the clinician's concerns over malignancy; a substantial proportion of patients also request excision themselves. Highlighting the potential for regression of the lesion is important to ensure patients make an informed decision when contemplating such surgery. PMID:27581234
Removing Malmquist bias from linear regressions
NASA Technical Reports Server (NTRS)
Verter, Frances
1993-01-01
Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Sources that can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but it distorts the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. A demonstration of the corrected linear regression, which is computed in four steps, is presented.
Multicollinearity in cross-sectional regressions
NASA Astrophysics Data System (ADS)
Lauridsen, Jørgen; Mur, Jesùs
2006-10-01
The paper examines the robustness of results from cross-sectional regression, paying attention to the impact of multicollinearity. It is well known that the reliability of estimators (least-squares or maximum-likelihood) deteriorates as the linear relationships between the regressors become more acute. We address the issue in a spatial context, looking closely into the behaviour shown, under several unfavourable conditions, by the most prominent misspecification tests when collinear variables are added to the regression. A Monte Carlo simulation is performed. The conclusions point to the fact that these statistics react in different ways to the problems posed.
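Multicollinearity of this kind is commonly screened with variance inflation factors; a minimal sketch with synthetic regressors:

    import numpy as np

    def vif(X):
        """VIF_j = 1 / (1 - R_j^2), regressing column j on the other columns."""
        out = []
        for j in range(X.shape[1]):
            others = np.c_[np.ones(len(X)), np.delete(X, j, axis=1)]
            beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
            resid = X[:, j] - others @ beta
            r2 = 1 - resid.var() / X[:, j].var()
            out.append(1.0 / (1.0 - r2))
        return np.array(out)

    rng = np.random.default_rng(6)
    x1 = rng.normal(size=200)
    x2 = x1 + 0.05 * rng.normal(size=200)     # nearly collinear with x1
    x3 = rng.normal(size=200)
    print(vif(np.c_[x1, x2, x3]))             # large VIFs flag x1 and x2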
Spontaneous hypnotic age regression: case report.
Spiegel, D; Rosenfeld, A
1984-12-01
Age regression--reliving the past as though it were occurring in the present, with age appropriate vocabulary, mental content, and affect--can occur with instruction in highly hypnotizable individuals, but has rarely been reported to occur spontaneously, especially as a primary symptom. The psychiatric presentation and treatment of a 16-year-old girl with spontaneous age regressions accessible and controllable with hypnosis and psychotherapy are described. Areas of overlap and divergence between this patient's symptoms and those found in patients with hysterical fugue and multiple personality syndrome are also discussed.
Prediction by linear regression on a quantum computer
NASA Astrophysics Data System (ADS)
Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco
2016-08-01
We give an algorithm for prediction on a quantum computer which is based on a linear regression model with least-squares optimization. In contrast to related previous contributions suffering from the problem of reading out the optimal parameters of the fit, our scheme focuses on the machine-learning task of guessing the output corresponding to a new input given examples of data points. Furthermore, we adapt the algorithm to process nonsparse data matrices that can be represented by low-rank approximations, and significantly improve the dependency on its condition number. The prediction result can be accessed through a single-qubit measurement or used for further quantum information processing routines. The algorithm's runtime is logarithmic in the dimension of the input space provided the data is given as quantum information as an input to the routine.
Parameter estimation of general regression neural network using Bayesian approach
NASA Astrophysics Data System (ADS)
Choir, Achmad Syahrul; Prasetyo, Rindang Bangun; Ulama, Brodjol Sutijo Suprih; Iriawan, Nur; Fitriasari, Kartika; Dokhi, Mohammad
2016-02-01
The General Regression Neural Network (GRNN) has been applied to a large number of forecasting/prediction problems. Generally, there are two types of GRNN: the GRNN based on kernel density, and the Mixture-Based GRNN (MBGRNN), which is based on an adaptive mixture model. The main problem in GRNN modeling lies in how its parameters are estimated. In this paper, we propose a Bayesian approach and its computation using Markov Chain Monte Carlo (MCMC) algorithms for estimating the MBGRNN parameters. The method is applied in a simulation study, in which its performance is measured using MAPE, MAE and RMSE. The application of the Bayesian method to estimate MBGRNN parameters using MCMC is straightforward, but it needs many iterations to achieve convergence.
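A minimal sketch of the kernel-density form of GRNN (a Nadaraya-Watson estimator with a Gaussian kernel); the bandwidth is fixed by hand here, whereas estimating such parameters, e.g. by the Bayesian MCMC approach proposed above, is the hard part:

    import numpy as np

    def grnn_predict(X_train, y_train, X_new, sigma=0.5):
        """GRNN prediction: kernel-weighted average of training targets."""
        d2 = ((X_new[:, None, :] - X_train[None, :, :])**2).sum(-1)
        w = np.exp(-d2 / (2 * sigma**2))
        return (w @ y_train) / w.sum(axis=1)

    rng = np.random.default_rng(7)
    X = rng.uniform(-3, 3, size=(100, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
    Xq = np.linspace(-3, 3, 9)[:, None]
    print(grnn_predict(X, y, Xq))   # approximates sin on the query grid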
Logistic regression when binary predictor variables are highly correlated.
Barker, L; Brown, C
Standard logistic regression can produce estimates having large mean square error when predictor variables are multicollinear. Ridge regression and principal components regression can reduce the impact of multicollinearity in ordinary least squares regression. Generalizations of these, applicable in the logistic regression framework, are alternatives to standard logistic regression. It is shown that estimates obtained via ridge and principal components logistic regression can have smaller mean square error than estimates obtained through standard logistic regression. Recommendations for choosing among standard, ridge and principal components logistic regression are developed. Published in 2001 by John Wiley & Sons, Ltd.
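A minimal sketch contrasting near-unpenalised and ridge logistic regression on highly correlated binary predictors, using scikit-learn and synthetic data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(8)
    x1 = rng.binomial(1, 0.5, 500)
    x2 = np.where(rng.random(500) < 0.95, x1, 1 - x1)   # nearly collinear binary
    X = np.c_[x1, x2]
    logit = -1.0 + 1.0 * x1 + 1.0 * x2
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    mle   = LogisticRegression(C=1e6).fit(X, y)   # ~unpenalised fit
    ridge = LogisticRegression(C=0.1).fit(X, y)   # L2 (ridge) penalty
    # ridge coefficients are shrunk and more stable under multicollinearity
    print(mle.coef_, ridge.coef_)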
Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk
Czarnota, Jenna; Gennings, Chris; Wheeler, David C
2015-01-01
In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
ConvexLAR: An Extension of Least Angle Regression*
Xiao, Wei; Zhou, Hua
2016-01-01
The least angle regression (LAR) was proposed by Efron, Hastie, Johnstone and Tibshirani (2004) for continuous model selection in linear regression. It is motivated by a geometric argument and tracks a path along which the predictors enter successively and the active predictors always maintain the same absolute correlation (angle) with the residual vector. Although it gains popularity quickly, its extensions seem rare compared to the penalty methods. In this expository article, we show that the powerful geometric idea of LAR can be generalized in a fruitful way. We propose a ConvexLAR algorithm that works for any convex loss function and naturally extends to group selection and data adaptive variable selection. After simple modification it also yields new exact path algorithms for certain penalty methods such as a convex loss function with lasso or group lasso penalty. Variable selection in recurrent event and panel count data analysis, Ada-Boost, and Gaussian graphical model is reconsidered from the ConvexLAR angle. PMID:27114697
Regional flow duration curves: Geostatistical techniques versus multivariate regression
Pugliese, Alessio; Farmer, William H.; Castellarin, Attilio; Archfield, Stacey A.; Vogel, Richard M.
2016-01-01
A period-of-record flow duration curve (FDC) represents the relationship between the magnitude and frequency of daily streamflows. Prediction of FDCs is of great importance for locations characterized by sparse or missing streamflow observations. We present a detailed comparison of two methods which are capable of predicting an FDC at ungauged basins: (1) an adaptation of the geostatistical method, Top-kriging, employing a linear weighted average of dimensionless empirical FDCs, standardised with a reference streamflow value; and (2) regional multiple linear regression of streamflow quantiles, perhaps the most common method for the prediction of FDCs at ungauged sites. In particular, Top-kriging relies on a metric for expressing the similarity between catchments computed as the negative deviation of the FDC from a reference streamflow value, which we termed total negative deviation (TND). Comparisons of these two methods are made in 182 largely unregulated river catchments in the southeastern U.S. using a three-fold cross-validation algorithm. Our results reveal that the two methods perform similarly throughout flow-regimes, with average Nash-Sutcliffe Efficiencies of 0.566 and 0.662 (0.883 and 0.829 on log-transformed quantiles) for the geostatistical and the linear regression models, respectively. The differences in the reproduction of FDCs occurred mostly for low flows with exceedance probability (i.e. duration) above 0.98.
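An empirical period-of-record FDC is obtained by plotting the sorted daily streamflows against exceedance probability; a minimal sketch using Weibull plotting positions and synthetic flows:

    import numpy as np

    q = np.random.default_rng(9).lognormal(mean=2.0, sigma=1.0, size=3650)
    q_sorted = np.sort(q)[::-1]                  # daily flows, descending
    n = q_sorted.size
    duration = np.arange(1, n + 1) / (n + 1)     # Weibull exceedance probability

    # e.g. the quantile exceeded 98% of the time (the low-flow end noted above)
    print(np.interp(0.98, duration, q_sorted))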
Bootstrap inference longitudinal semiparametric regression model
NASA Astrophysics Data System (ADS)
Pane, Rahmawati; Otok, Bambang Widjanarko; Zain, Ismaini; Budiantara, I. Nyoman
2016-02-01
Semiparametric regression contains two components, i.e. a parametric and a nonparametric component. The longitudinal semiparametric regression model is represented by $y_{ti} = \mu(x_{ti}, z_{ti}) + \varepsilon_{ti}$, where $\mu(x_{ti}, z_{ti}) = x_{ti}'\beta + g(z_{ti})$ and $y_{ti}$ is the response variable, assumed to have a linear relationship with the predictor variables $x_{ti} = (x_{ti1}, x_{ti2}, \ldots, x_{tir})'$. The random errors $\varepsilon_{ti}$, $i = 1, \ldots, n$, $t = 1, \ldots, T$, are normally distributed with zero mean and variance $\sigma^2$, and $g(z_{ti})$ is the nonparametric component. The results of this study show that the PLS approach for longitudinal semiparametric regression models yields the estimators $\hat{\beta} = [X'H(\lambda)X]^{-1}X'H(\lambda)y$ and $\hat{g}_{\lambda}(z) = M(\lambda)y$. The results also show that the bootstrap is valid for the longitudinal semiparametric regression model with $\hat{g}_{\lambda}^{(b)}(z)$ as the nonparametric component estimator.
Assessing risk factors for periodontitis using regression
NASA Astrophysics Data System (ADS)
Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa
2013-10-01
Multivariate statistical analysis is indispensable for assessing the associations and interactions between different factors and the risk of periodontitis. Among other techniques, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we assess the relevance, as risk factors for periodontitis, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking Status and Plaque Index. A multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL); the regression coefficients are obtained along with the p-values from the corresponding significance tests. The case (individual) classification adopted in the logistic model was the extent of destruction of periodontal tissues, defined as an Attachment Loss greater than or equal to 4 mm in at least 25% (AL≥4mm/≥25%) of the sites surveyed. The association measures include Odds Ratios together with the corresponding 95% confidence intervals.
Nodular fasciitis with degeneration and regression.
Yanagisawa, Akihiro; Okada, Hideki
2008-07-01
Nodular fasciitis is a benign reactive proliferation that is frequently misdiagnosed as a sarcoma. This article describes a case of nodular fasciitis of 6-month duration located in the cheek, which degenerated and spontaneously regressed after biopsy. The nodule was fixed to the zygoma but was free from the overlying skin. The mass was 3.0 cm in diameter and demonstrated high signal intensity on T2-weighted magnetic resonance imaging. A small part of the lesion was biopsied. Pathological and immunohistochemical examinations identified the nodule as nodular fasciitis with myxoid histology. One month after the biopsy, the mass showed decreased signal intensity on T2-weighted images and measured 2.2 cm in size. The signal on T2-weighted images showed time-dependent decreases, and the mass continued to reduce in size throughout the follow-up period. The lesion presented as hypointense to the surrounding muscles on T2-weighted images and was 0.4 cm in size at 2 years of follow-up. This case demonstrates that nodular fasciitis with myxoid histology can change to that with fibrous appearance gradually with time, thus bringing about spontaneous regression. Degeneration may be involved in the spontaneous regression of nodular fasciitis with myxoid appearance. The mechanism of regression, unclarified at present, should be further studied. PMID:18650753
A New Sample Size Formula for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…
Prediction of dynamical systems by symbolic regression.
Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K; Noack, Bernd R
2016-07-01
We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast. PMID:27575130
Assumptions of Multiple Regression: Correcting Two Misconceptions
ERIC Educational Resources Information Center
Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason
2013-01-01
In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…
Commonality Analysis for the Regression Case.
ERIC Educational Resources Information Center
Murthy, Kavita
Commonality analysis is a procedure for decomposing the coefficient of determination (R superscript 2) in multiple regression analyses into the percent of variance in the dependent variable associated with each independent variable uniquely, and the proportion of explained variance associated with the common effects of predictors in various…
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs developed according to two general types of exponential models for conducting nonlinear exponential regression analysis are described. Least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. Program is written in FORTRAN 5 for the Univac 1108 computer.
Multiple Regression Analysis and Automatic Interaction Detection.
ERIC Educational Resources Information Center
Koplyay, Janos B.
The Automatic Interaction Detector (AID) is discussed as to its usefulness in multiple regression analysis. The algorithm of AID-4 is a reversal of the model building process; it starts with the ultimate restricted model, namely, the whole group as a unit. By a unique splitting process maximizing the between sum of squares for the categories of…
Regression Segmentation for M³ Spinal Images.
Wang, Zhijie; Zhen, Xiantong; Tay, KengYeow; Osman, Said; Romano, Walter; Li, Shuo
2015-08-01
Clinical routine often requires analyzing spinal images of multiple anatomic structures, in multiple anatomic planes, from multiple imaging modalities (M³). Unfortunately, existing methods for segmenting spinal images are still limited to one specific structure, in one specific plane, or from one specific modality (S³). In this paper, we propose a novel approach, Regression Segmentation, that is for the first time able to segment M³ spinal images in one single unified framework. This approach innovatively formulates the segmentation task as a boundary regression problem: modeling a highly nonlinear mapping function from substantially diverse M³ images directly to desired object boundaries. Leveraging advances in sparse kernel machines, regression segmentation is fulfilled by a multi-dimensional support vector regressor (MSVR) which operates in an implicit, high-dimensional feature space where M³ diversity and specificity can be systematically categorized, extracted, and handled. The proposed regression segmentation approach was thoroughly tested on images from 113 clinical subjects including both disc and vertebral structures, in both sagittal and axial planes, and from both MRI and CT modalities. The overall result reaches a high Dice similarity index (DSI) of 0.912 and a low boundary distance (BD) of 0.928 mm. With our unified and extendable framework, an efficient clinical tool for M³ spinal image segmentation can be readily achieved and will substantially benefit the diagnosis and treatment of spinal diseases.
Using Regression Analysis: A Guided Tour.
ERIC Educational Resources Information Center
Shelton, Fred Ames
1987-01-01
Discusses the use and interpretation of multiple regression analysis with computer programs and presents a flow chart of the process. A general explanation of the flow chart is provided, followed by an example showing the development of a linear equation which could be used in estimating manufacturing overhead cost. (Author/LRW)
Genetic Programming Transforms in Linear Regression Situations
NASA Astrophysics Data System (ADS)
Castillo, Flor; Kordon, Arthur; Villa, Carlos
The chapter summarizes the use of Genetic Programming (GP) in Multiple Linear Regression (MLR) to address multicollinearity and Lack of Fit (LOF). The basis of the proposed method is applying appropriate input transforms (model respecification) that deal with these issues while preserving the information content of the original variables. The transforms are selected from symbolic regression models with an optimal trade-off between accuracy of prediction and expressional complexity, generated by multiobjective Pareto-front GP. The chapter includes a comparative study of the GP-generated transforms with Ridge Regression, a variant of ordinary Multiple Linear Regression, which has been a useful and commonly employed approach for reducing multicollinearity. The advantages of GP-generated model respecification are clearly defined and demonstrated. Some recommendations for transform selection are given as well. The application benefits of the proposed approach are illustrated with a real industrial application in one of the broadest empirical modeling areas in manufacturing - robust inferential sensors. The chapter contributes to increasing the awareness of the potential of GP in statistical model building by MLR.
The M Word: Multicollinearity in Multiple Regression.
ERIC Educational Resources Information Center
Morrow-Howell, Nancy
1994-01-01
Notes that existence of substantial correlation between two or more independent variables creates problems of multicollinearity in multiple regression. Discusses multicollinearity problem in social work research in which independent variables are usually intercorrelated. Clarifies problems created by multicollinearity, explains detection of…
Design Coding and Interpretation in Multiple Regression.
ERIC Educational Resources Information Center
Lunneborg, Clifford E.
The multiple regression or general linear model (GLM) is a parameter estimation and hypothesis testing model which encompasses and approaches the more familiar fixed effects analysis of variance (ANOVA). The transition from ANOVA to GLM is accomplished, roughly, by coding treatment level or group membership to produce a set of predictor or…
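The ANOVA-to-GLM transition described in that entry can be sketched with dummy coding of group membership: fitting an ordinary regression to the coded design recovers the group means. The toy data below are illustrative, not the paper's.

```python
# Dummy-coded one-way design fit as a regression (GLM view of fixed-effects ANOVA).
import numpy as np

groups = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
y = np.array([4.0, 5.0, 6.0, 8.0, 9.0, 10.0, 1.0, 2.0, 3.0])

# Intercept plus indicators for groups 1 and 2; group 0 serves as the reference.
X = np.column_stack([
    np.ones_like(y),
    (groups == 1).astype(float),
    (groups == 2).astype(float),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("reference-group mean:", beta[0])     # mean of group 0
print("group contrasts:     ", beta[1:])    # group 1 and 2 means minus group 0
```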
Predicting Social Trust with Binary Logistic Regression
ERIC Educational Resources Information Center
Adwere-Boamah, Joseph; Hufstedler, Shirley
2015-01-01
This study used binary logistic regression to predict social trust with five demographic variables from a national sample of adult individuals who participated in The General Social Survey (GSS) in 2012. The five predictor variables were respondents' highest degree earned, race, sex, general happiness and the importance of personally assisting…
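A hedged sketch of binary logistic regression with a handful of demographic-style predictors follows; the data and codings are synthetic stand-ins, not the GSS sample used in the study.

```python
# Binary logistic regression on synthetic demographic-style predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 500
degree = rng.integers(0, 5, size=n)        # e.g., highest degree earned (coded)
happiness = rng.integers(1, 4, size=n)     # e.g., general happiness (coded)
sex = rng.integers(0, 2, size=n)

# Synthetic outcome: trust becomes more likely with degree and happiness.
logit = -2.0 + 0.5 * degree + 0.6 * happiness
trust = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([degree, happiness, sex])
model = LogisticRegression().fit(X, trust)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("P(trust) for degree=4, happiness=3, sex=1:",
      model.predict_proba([[4, 3, 1]])[0, 1])
```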
Code System to Calculate Correlation & Regression Coefficients.
1999-11-23
Version 00 PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model.
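The standardized regression coefficient half of that calculation can be sketched directly: standardize the model inputs and output, then regress. The inputs below are synthetic; the partial correlation coefficients are not shown.

```python
# Standardized regression coefficients (SRC) for a sensitivity-analysis-style fit.
import numpy as np

rng = np.random.default_rng(5)
n = 300
x1 = rng.normal(10.0, 2.0, n)
x2 = rng.normal(0.0, 50.0, n)
y = 3.0 * x1 + 0.1 * x2 + rng.normal(0.0, 1.0, n)

def standardize(a):
    return (a - a.mean()) / a.std(ddof=1)

Z = np.column_stack([standardize(x1), standardize(x2)])
src, *_ = np.linalg.lstsq(Z, standardize(y), rcond=None)
print("standardized regression coefficients:", src)   # scale-free importances
```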
Adaptive Walking in Alzheimer's Disease
Orcioli-Silva, Diego; Simieli, Lucas; Barbieri, Fabio Augusto; Stella, Florindo; Gobbi, Lilian Teresa Bucken
2012-01-01
The aim of this study is to analyze dual-task effects on free and adaptive gait in Alzheimer's disease (AD) patients. Nineteen elders with AD participated in the study. An experienced neuropsychiatrist established the degree of AD in the sample. To determine dual-task effects on free and adaptive gait, patients performed five trials for each experimental condition: free and adaptive gait with and without a dual task (regressive countdown). Spatial and temporal parameters were collected through an optoelectronic tridimensional system. The central stride was analyzed in free gait, and the steps immediately before (approaching phase) and during obstacle crossing were analyzed in adaptive gait. Results indicated that AD patients walked more slowly during both adaptive and free gait, using conservative strategies when confronted either with an obstacle or a secondary task. Furthermore, patients sought stability to perform the tasks, particularly in adaptive gait with a dual task, using anticipatory and online adjustments. Therefore, increased task complexity raises cognitive load and the risk of falls for AD patients. PMID:22991684
NASA Technical Reports Server (NTRS)
Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)
2007-01-01
An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.