Sample records for multivariate spline interpolation

  1. [A correction method of baseline drift of discrete spectrum of NIR].

    PubMed

    Hu, Ai-Qin; Yuan, Hong-Fu; Song, Chun-Feng; Li, Xiao-Yu

    2014-10-01

In the present paper, a new method for correcting the baseline drift of discrete spectra is proposed, combining cubic spline interpolation with the first-order derivative. A fitting spectrum is constructed by cubic spline interpolation, using the data points of the discrete spectrum as interpolation nodes. Because the fitting spectrum is differentiable, its first-order derivative can be computed to obtain a derivative spectrum. The values at the original discrete wavelengths are then extracted from the derivative spectrum to form the first-derivative spectrum of the discrete spectrum, thereby correcting its baseline drift. The effectiveness of the new method is demonstrated by comparing the performance of multivariate models built on the original spectra, on directly differentiated spectra, and on spectra pretreated by the new method. The results show that the negative effects of baseline drift on the performance of multivariate models built from discrete spectra can be effectively eliminated by the new method.
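
The spline-then-differentiate idea above can be sketched with SciPy on a made-up spectrum (the wavelengths, band shape, and linear drift below are illustrative assumptions, not the paper's data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical discrete NIR spectrum: a Gaussian band on a linear baseline drift.
wavelengths = np.linspace(1000.0, 1100.0, 51)            # nm, interpolation nodes
band = np.exp(-0.5 * ((wavelengths - 1050.0) / 5.0) ** 2)
drift = 0.002 * (wavelengths - 1000.0)                   # slowly varying baseline
spectrum = band + drift

# Fit a cubic spline through the discrete points; the fit is differentiable,
# so its first derivative can be evaluated back at the original wavelengths.
spline = CubicSpline(wavelengths, spectrum)
derivative_spectrum = spline.derivative(1)(wavelengths)

# A linear drift contributes only a constant offset to the derivative,
# so the band shape survives while the wavelength-dependent drift is removed.
print(derivative_spectrum.shape)
```

Since differentiation turns any linear drift into a constant, the derivative spectrum is drift-free up to an additive offset.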

  2. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.
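
As a rough stand-in for the comparison above (a synthetic smooth surface instead of terrain elevation data, and SciPy's tensor-product splines without tension parameters), one can contrast bicubic and bilinear interpolation directly:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic "terrain" surface (a stand-in for elevation data) on a coarse grid.
x = np.linspace(0.0, 1.0, 11)
y = np.linspace(0.0, 1.0, 11)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)

# Tensor products of univariate splines: bicubic (kx=ky=3) vs bilinear (kx=ky=1).
bicubic = RectBivariateSpline(x, y, Z, kx=3, ky=3)
bilinear = RectBivariateSpline(x, y, Z, kx=1, ky=1)

# Evaluate both on a finer interior grid and compare against the true surface.
xf = np.linspace(0.05, 0.95, 50)
yf = np.linspace(0.05, 0.95, 50)
Xf, Yf = np.meshgrid(xf, yf, indexing="ij")
truth = np.sin(2 * np.pi * Xf) * np.cos(2 * np.pi * Yf)
err_cubic = np.max(np.abs(bicubic(xf, yf) - truth))
err_linear = np.max(np.abs(bilinear(xf, yf) - truth))
print(err_cubic, err_linear)
```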

  3. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. The conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Because the membership functions act as interpolation kernels, their choice determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained on performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
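
The linear-kernel special case of such a hypercube interpolator reduces to ordinary multilinear interpolation; a minimal 2-D sketch (illustrative only, not the paper's implementation) writes bilinear interpolation as a conjunction, i.e. product, of triangular membership functions:

```python
import numpy as np

def hat(u):
    # Triangular membership functions for the two cell corners along one axis.
    return np.array([1.0 - u, u])

def interp_cell(corners, u, v):
    # corners[i, j] holds the value at corner (i, j) of the unit square;
    # the weight of each corner is the product (conjunction) of memberships.
    w = np.outer(hat(u), hat(v))
    return np.sum(w * corners)

corners = np.array([[0.0, 1.0],
                    [2.0, 3.0]])
print(interp_cell(corners, 0.5, 0.5))  # → 1.5
```

Swapping the hat functions for cubic, spline or Lanczos kernels changes the interpolation characteristics without changing this structure.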

  4. Quasi interpolation with Voronoi splines.

    PubMed

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

  5. Monotonicity preserving splines using rational cubic Timmer interpolation

    NASA Astrophysics Data System (ADS)

    Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md

    2017-08-01

In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subjected to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
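
SciPy has no rational Timmer spline, but the problem it addresses can be seen by comparing an ordinary cubic spline, which overshoots on steep monotone data, with a shape-preserving interpolant (PCHIP, used here purely as a stand-in):

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone data with a sharp rise; a plain cubic spline overshoots here,
# while a shape-preserving interpolant keeps the curve monotone.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])

xs = np.linspace(0.0, 4.0, 401)
plain = CubicSpline(x, y)(xs)          # not shape-preserving
monotone = PchipInterpolator(x, y)(xs) # monotonicity-preserving

print(np.all(np.diff(plain) >= 0), np.all(np.diff(monotone) >= 0))
```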

  6. Quadratic trigonometric B-spline for image interpolation using GA

    PubMed Central

    Abbas, Samreen; Irshad, Misbah

    2017-01-01

In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address the problems of two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, the Genetic Algorithm (GA). The GA is employed to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcome of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation. PMID:28640906
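
The role of the spline kernel in image interpolation quality can be illustrated with SciPy's plain B-spline interpolation (no GA-tuned control parameters) on a synthetic test image, scoring the result with PSNR:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical test image: a smooth 2-D sinusoid with known values everywhere,
# so interpolation error can be measured exactly.  This compares linear vs
# cubic B-spline interpolation, not the paper's trigonometric B-spline.
n = 32
def pattern(a, b):
    return np.sin(2 * np.pi * a / n) * np.sin(2 * np.pi * b / n)

i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
image = pattern(i, j)

# Query interior half-pixel positions, where interpolation error is largest.
ci, cj = np.meshgrid(np.arange(6.5, n - 7, 1.0),
                     np.arange(6.5, n - 7, 1.0), indexing="ij")
coords = np.vstack([ci.ravel(), cj.ravel()])
truth = pattern(ci, cj).ravel()

def psnr(ref, est):
    # Peak signal-to-noise ratio for signals with peak amplitude 1.
    return 10 * np.log10(1.0 / np.mean((ref - est) ** 2))

psnr_linear = psnr(truth, map_coordinates(image, coords, order=1))
psnr_cubic = psnr(truth, map_coordinates(image, coords, order=3))
print(psnr_linear, psnr_cubic)
```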

  7. Quadratic trigonometric B-spline for image interpolation using GA.

    PubMed

    Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah

    2017-01-01

In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address the problems of two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, the Genetic Algorithm (GA). The GA is employed to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcome of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation.

  8. Illumination estimation via thin-plate spline interpolation.

    PubMed

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

Thin-plate spline interpolation is used to interpolate the chromaticity of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and their associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and that the proposed training-set pruning significantly decreases the computation.
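
A minimal sketch of thin-plate spline interpolation over a scattered training set, using SciPy's RBFInterpolator with toy 2-D features standing in for image thumbnails (the feature construction and the smooth chromaticity relation are assumptions):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Toy stand-in for the training set: 2-D image features (not real thumbnails)
# with an assumed smooth relation to illumination chromaticity.
features = rng.uniform(0.0, 1.0, size=(50, 2))
chromaticity = 0.3 + 0.2 * features[:, 0] - 0.1 * features[:, 1] ** 2

# Thin-plate spline interpolation over the nonuniformly sampled input space.
tps = RBFInterpolator(features, chromaticity, kernel="thin_plate_spline")

# Query the interpolated function for a new "image".
query = np.array([[0.4, 0.6]])
estimate = tps(query)[0]
print(estimate)
```

With zero smoothing the thin-plate spline passes exactly through the training data, which is the interpolation behavior described above.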

  9. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  10. Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.

    PubMed

    de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph

    2008-01-01

The robust algorithm OPED for the reconstruction of images from Radon data has recently been developed. It reconstructs an image from parallel data within a special scanning geometry that requires no rebinning, only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy in the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error. The spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most advisable method. The reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than those from linear interpolation and have the largest MTF at all frequencies. Parametric splines proved advantageous only for small sinograms (below 50 fan views).
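
The gap-filling comparison can be mimicked on a single made-up sinogram row (a smooth 1-D profile with every other cell empty; not OPED data), scoring the fill with the NMSE used in the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy stand-in for one sinogram row: a smooth profile in which every other
# sample is an "empty cell" to be filled from the known cells.
t = np.linspace(0.0, np.pi, 101)
profile = np.sin(t) ** 2 + 0.5 * np.cos(3 * t)

known = t[::2]          # available cells
missing = t[1::2]       # empty cells to fill
truth = np.sin(missing) ** 2 + 0.5 * np.cos(3 * missing)

filled_linear = np.interp(missing, known, profile[::2])
filled_cubic = CubicSpline(known, profile[::2])(missing)

def nmse(ref, est):
    # Normalized mean square error, as used to score the reconstructions.
    return np.mean((ref - est) ** 2) / np.mean(ref ** 2)

print(nmse(truth, filled_linear), nmse(truth, filled_cubic))
```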

  11. Multivariate Hermite interpolation on scattered point sets using tensor-product expo-rational B-splines

    NASA Astrophysics Data System (ADS)

Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter

    2011-12-01

At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in R^n, n ∈ N, combined with a smooth convex partition of unity for several general types of partitions of these domains, were proposed in [1]. All of these constructions were based on a new type of B-splines proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these bases are well known; the new part is their combined use to derive a new basis which enjoys all of the above interpolation and partition-of-unity properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases; here we put the main emphasis on the case where these bases consist of tensor-product B-splines. This choice provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction is a far-reaching extension of tensor-product constructions. We provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we also consider the replacement of ERBS with simpler generalized ERBS (GERBS) [2], namely, their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS).
One advantage of using BFBS instead of ERBS is the simplified computation, since BFBS are piecewise polynomial, which ERBS are not. One disadvantage of using BFBS in the place of ERBS in this construction is that the necessary selection of the degree of BFBS imposes constraints on the maximal possible multiplicity of the Hermite interpolation.
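
As background, the defining property of Hermite interpolation (matching both values and first derivatives at the nodes) can be shown in the simplest 1-D cubic case with SciPy; this is of course far simpler than the scattered multivariate ERBS construction:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hermite data: function values and first derivatives at each node.
x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
dy = np.cos(x)

# The Hermite spline reproduces both y and dy exactly at the nodes.
h = CubicHermiteSpline(x, y, dy)
print(h(x), h.derivative()(x))
```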

  12. Interpolation by new B-splines on a four directional mesh of the plane

    NASA Astrophysics Data System (ADS)

    Nouisser, O.; Sbibih, D.

    2004-01-01

    In this paper we construct new simple and composed B-splines on the uniform four directional mesh of the plane, in order to improve the approximation order of B-splines studied in Sablonniere (in: Program on Spline Functions and the Theory of Wavelets, Proceedings and Lecture Notes, Vol. 17, University of Montreal, 1998, pp. 67-78). If φ is such a simple B-spline, we first determine the space of polynomials with maximal total degree included in , and we prove some results concerning the linear independence of the family . Next, we show that the cardinal interpolation with φ is correct and we study in S(φ) a Lagrange interpolation problem. Finally, we define composed B-splines by repeated convolution of φ with the characteristic functions of a square or a lozenge, and we give some of their properties.

  13. [An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].

    PubMed

    Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu

    2016-04-01

The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline curve fitting is then applied to the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the curve fitted by the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient achieved by the presented algorithm was 0.972.

  14. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods; however, few of them have been applied in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its characteristic smoothness. A background correction simulation experiment indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods acquire larger SBR values than before correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu over those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.

  15. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.

  16. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads

    2015-03-01

Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields need to be computed at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative showed that Wendland SVF based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in the amygdala) and B-spline free-form deformation (p<0.05 in the amygdala and cortical gray matter).
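
For reference, one commonly used member of the Wendland family is the compactly supported C^2 kernel φ(r) = (1 − r)^4 (4r + 1) for r < 1; the specific kernel variant and scaling used in the paper are not reproduced here:

```python
import numpy as np

def wendland_c2(r):
    # Wendland C^2 kernel for dimensions d <= 3: compactly supported,
    # piecewise polynomial, and exactly zero beyond r = 1.
    r = np.abs(r)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

r = np.linspace(0.0, 1.5, 7)
print(wendland_c2(r))
```

Compact support keeps the interpolation system sparse, which is part of what makes the kernel as cheap to evaluate as a B-spline.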

  17. Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Henebry, G. M.

    2010-12-01

In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during a seven-year period (2000-2008). Two univariate methods (inverse distance weighting, and splines with regularization and tension) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh, spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through the Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely because of the relatively dense point measurements and a weak correlation between rainfall and the covariates at daily scales in this region. Inverse distance weighting produced better results than the spline. For days with extreme or high rainfall, spatially and quantitatively, the correlation between observed and interpolated estimates was high (r2 ~ 0.6, RMSE ~ 10 mm), although for low-rainfall days the correlations were poor (r2 ~ 0.1, RMSE ~ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties into the subsequent hydrometeorological analysis.
Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
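
Of the four methods, inverse distance weighting is simple enough to sketch directly (with made-up gauge coordinates and rainfall amounts, not the Bangladesh network data):

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    # Inverse-distance-weighted estimate at a query location.
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                 # query coincides with a gauge
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

gauges = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain = np.array([10.0, 20.0, 30.0, 40.0])       # mm on one day
print(idw(gauges, rain, np.array([0.5, 0.5])))  # equidistant -> plain mean
```

In practice a search radius would restrict the sum to neighboring gauges, which is one of the tuning choices noted above.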

  18. Color management with a hammer: the B-spline fitter

    NASA Astrophysics Data System (ADS)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
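
The noise-filtering role of a spline fitter can be sketched in 1-D with SciPy's smoothing splines (synthetic noisy measurements rather than color data; the smoothing level s follows a rule-of-thumb assumption, s ≈ n·σ²):

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
clean = np.sin(2 * np.pi * x)
noisy = clean + rng.normal(0.0, 0.05, x.size)   # simulated measurement noise

# A smoothing B-spline fit acts as a filter: it does not chase the noise,
# leaving a model with guaranteed smoothness.
tck = splrep(x, noisy, s=50 * 0.05 ** 2)
smoothed = splev(x, tck)

rms_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rms_smooth = np.sqrt(np.mean((smoothed - clean) ** 2))
print(rms_noisy, rms_smooth)
```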

  19. Validating the Kinematic Wave Approach for Rapid Soil Erosion Assessment and Improved BMP Site Selection to Enhance Training Land Sustainability

    DTIC Science & Technology

    2014-02-01

installation based on a Euclidean distance allocation and assigned that installation's threshold values. The second approach used a thin-plate spline ...installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the...the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND

  20. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms with a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software is developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.

  21. Radial Basis Function Based Quadrature over Smooth Surfaces

    DTIC Science & Technology

    2016-03-24

Radial basis functions φ(r), piecewise smooth (conditionally positive definite): MN monomial |r|^(2m+1); TPS thin-plate spline |r|^(2m) ln|r|; infinitely smooth...smooth surfaces using polynomial interpolants, while [27] couples thin-plate spline interpolation (see Table 1) with Green's integral formula [29

  22. Student Support for Research in Hierarchical Control and Trajectory Planning

    NASA Technical Reports Server (NTRS)

    Martin, Clyde F.

    1999-01-01

    Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.

  23. SU-F-T-315: Comparative Studies of Planar Dose with Different Spatial Resolution for Head and Neck IMRT QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, T; Koo, T

Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, the IMRT quality assurance (QA) beams were generated using the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with larger than 1 mm detector-to-detector distance were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest-neighbor interpolation, respectively. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For fluence maps of the same resolution, the cubic spline and bicubic interpolations are almost equally the best methods, while nearest-neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65% and 82.23±0.48% for the bilinear, cubic spline, bicubic, and nearest-neighbor interpolations, respectively. For 7 mm distance fluence maps, γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97% and 71.93±4.92% for the bilinear, cubic spline, bicubic, and nearest-neighbor interpolations, respectively.
Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution be used as an IMRT QA tool, and that the measured fluence maps be interpolated using cubic spline or bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea, funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).

  24. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    NASA Astrophysics Data System (ADS)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

The paper presents a combined approach to noise mapping and visualization of industrial facilities' sound pollution using a forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for the range computation of sanitary zones based on a ray-tracing algorithm. Computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.

  25. Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps

    NASA Astrophysics Data System (ADS)

    Gundogdu, Ismail Bulent

    2017-01-01

    Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps which are constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting may not be adequate. Multivariate geostatistical methods can be more significant, especially when studying secondary variables, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, the inverse square distance and ordinary co-Kriging (OCK) have been used and compared to each other. Also elevation, slope, and aspect data for each station have been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.

  6. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
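The three schemes compared can be sketched as follows (not the authors' code; the significant mass changes and times below are synthetic):

```python
import numpy as np
from scipy.interpolate import interp1d, CubicSpline

# Hypothetical "significant" lysimeter mass changes at irregular times.
t_sig = np.array([0.0, 10.0, 25.0, 40.0, 60.0])        # minutes
mass = np.array([100.0, 100.2, 100.1, 100.6, 100.5])   # kg

t = np.linspace(0.0, 60.0, 61)                         # 1 min output grid
step = interp1d(t_sig, mass, kind='previous')(t)       # step scheme
linear = interp1d(t_sig, mass, kind='linear')(t)       # linear scheme
spline = CubicSpline(t_sig, mass)(t)                   # spline scheme

# P and ET follow from the time derivative of the mass curve; only the
# spline scheme is continuously differentiable at every output time.
spline_rate = CubicSpline(t_sig, mass).derivative()(t)
```

The step scheme yields zero flux between the significant changes, which is why it produces unrealistic rates at high output resolution.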

  7. Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases

    NASA Astrophysics Data System (ADS)

    Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Laksa˚, Arne; Bang, Børre

    2011-12-01

Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) that are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging, yet approximate, numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation at three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and against the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.

  8. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are introduced first, but their interpolation effects need further improvement. In analyzing and discussing ordinary interpolation, many asymmetrical-kernel interpolation methods are proposed; compared with symmetrical-kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. Through experiments on image scaling, rotation and self-registration, the interpolation methods discussed in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, the symmetrical cubic-kernel interpolations demonstrate a clear advantage on the whole, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, judged by the total error of image self-registration the symmetrical interpolations show a certain superiority, but considering processing efficiency, the asymmetrical interpolations are better.
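A simple, generic illustration of the interpolation orders involved (not the paper's methods): scaling a toy image with nearest-neighbour, linear and cubic B-spline interpolation via `scipy.ndimage.zoom`:

```python
import numpy as np
from scipy import ndimage

image = np.arange(64, dtype=float).reshape(8, 8)   # toy stand-in for an image

nearest = ndimage.zoom(image, 2, order=0)          # nearest neighbour
linear = ndimage.zoom(image, 2, order=1)           # (bi)linear
bspline = ndimage.zoom(image, 2, order=3)          # cubic B-spline
```

The spline order trades computation for smoothness, which mirrors the paper's finding that cubic B-spline interpolation performs well but is time-consuming.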

  9. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    PubMed

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

  10. Signal-to-noise ratio estimation on SEM images using cubic spline interpolation with Savitzky-Golay smoothing.

    PubMed

    Sim, K S; Kiani, M A; Nia, M E; Tso, C P

    2014-01-01

A new technique based on cubic spline interpolation with Savitzky-Golay noise-reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to present better results than two existing techniques: nearest-neighbour and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without introducing corruption or increasing scanning time. © 2013 The Authors. Journal of Microscopy © 2013 Royal Microscopical Society.
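A hedged sketch of the general idea (the profile, filter parameters, and SNR definition here are illustrative assumptions, not the authors' exact procedure): smooth a noisy line profile with a Savitzky-Golay filter, pass it through a cubic spline as the noise-free estimate, and compute SNR from the residual:

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
x = np.arange(256, dtype=float)
clean = np.sin(2 * np.pi * x / 64.0)             # stand-in for an SEM profile
noisy = clean + rng.normal(0.0, 0.1, x.size)

# Savitzky-Golay smoothing, then a cubic spline through the smoothed samples
# serves as the noise-free signal estimate (in the real method the spline is
# evaluated between selected samples).
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
profile = CubicSpline(x, smoothed)(x)

noise = noisy - profile
snr_db = 10.0 * np.log10(profile.var() / noise.var())
```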

  11. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    PubMed

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

Appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, including inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis functions: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram models: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences among the interpolation methods and the uncertainties of the interpolation results are discussed; several suggestions for improving the interpolation accuracy are then proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which influences the registration results. Localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, accurate and robust registration method and have obtained satisfactory registration results.

  13. Enhancement of panoramic image resolution based on swift interpolation of Bezier surface

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Yang, Guo-guang; Bai, Jian

    2007-01-01

A panoramic annular lens projects the entire 360-degree view around the optical axis onto an annular plane by way of flat-cylinder perspective. Owing to its infinite depth of field and the linear mapping relationship between object and image, the panoramic imaging system plays an important role in robot vision, surveillance and virtual reality applications. An annular image needs to be unwrapped into a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it takes too much time to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images that takes the characteristics of panoramic images into account. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is reduced by 78% compared with cubic spline interpolation.

  14. Noise and drift analysis of non-equally spaced timing data

    NASA Technical Reports Server (NTRS)

    Vernotte, F.; Zalamansky, G.; Lantz, E.

    1994-01-01

    Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
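The resampling step described can be sketched as follows; the timing data here are synthetic (a linear drift plus white noise), not pulsar observations, and the simple non-overlapping Allan variance below stands in for the multivariance system:

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0.0, 100.0, 40))        # irregular epochs
x_obs = 1e-6 * t_obs + rng.normal(0.0, 1e-7, 40)    # drift + white timing noise

# Interpolate onto an equally spaced grid so variance analysis is possible.
t_even = np.linspace(t_obs[0], t_obs[-1], 200)
x_lin = interp1d(t_obs, x_obs)(t_even)              # linear interpolation
x_spl = CubicSpline(t_obs, x_obs)(t_even)           # cubic spline

def allan_variance(x, tau0):
    """Non-overlapping Allan variance of time-error data at lag tau0."""
    y = np.diff(x) / tau0                           # fractional frequencies
    return 0.5 * np.mean(np.diff(y) ** 2)

tau0 = t_even[1] - t_even[0]
avar = allan_variance(x_spl, tau0)
```

Comparing `avar` computed from `x_lin` and `x_spl` against the known injected noise is exactly the kind of check the paper performs to verify that interpolation neither loses nor fabricates information.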

  15. An Unconditionally Monotone C 2 Quartic Spline Method with Nonoscillation Derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jin; Nelson, Karl E.

Here, a one-dimensional monotone interpolation method based on interface reconstruction with partial volumes in slope space, utilizing the Hermite cubic spline, is proposed. The new method is only quartic, yet it is C2 and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in slope space. An extension of this method to two dimensions is also discussed.

  16. An Unconditionally Monotone C 2 Quartic Spline Method with Nonoscillation Derivatives

    DOE PAGES

    Yao, Jin; Nelson, Karl E.

    2018-01-24

Here, a one-dimensional monotone interpolation method based on interface reconstruction with partial volumes in slope space, utilizing the Hermite cubic spline, is proposed. The new method is only quartic, yet it is C2 and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in slope space. An extension of this method to two dimensions is also discussed.

  17. Self-organizing map analysis using multivariate data from theophylline powders predicted by a thin-plate spline interpolation.

    PubMed

    Yasuda, Akihito; Onuki, Yoshinori; Kikuchi, Shingo; Takayama, Kozo

    2010-11-01

    The quality by design concept in pharmaceutical formulation development requires establishment of a science-based rationale and a design space. We integrated thin-plate spline (TPS) interpolation and Kohonen's self-organizing map (SOM) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline powders were prepared based on the standard formulation. The angle of repose, compressibility, cohesion, and dispersibility were measured as the response variables. These responses were predicted quantitatively on the basis of a nonlinear TPS. A large amount of data on these powders was generated and classified into several clusters using an SOM. The experimental values of the responses were predicted with high accuracy, and the data generated for the powders could be classified into several distinctive clusters. The SOM feature map allowed us to analyze the global and local correlations between causal factors and powder characteristics. For instance, the quantities of microcrystalline cellulose (MCC) and magnesium stearate (Mg-St) were classified distinctly into each cluster, indicating that the quantities of MCC and Mg-St were crucial for determining the powder characteristics. This technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline powder formulations. © 2010 Wiley-Liss, Inc. and the American Pharmacists Association

  18. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

    NASA Astrophysics Data System (ADS)

    Chen, Liangji; Guo, Guangsong; Li, Huiying

    2017-07-01

NURBS (Non-Uniform Rational B-Spline) curves are widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our CNC (Computer Numerical Control) system under development. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. From the feedrate and a Taylor series expansion, servo-control signals for the 5 axes are obtained for each interpolation cycle. Then, the procedure for generating NC (Numerical Control) code with the presented method is introduced, together with the method of integrating the interpolator into our CNC system. The servo-controlling structure of the CNC system is also introduced. The illustration indicates that the proposed method can enhance machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.
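The first-order Taylor feedrate step commonly used in such interpolators is u_{k+1} = u_k + F*T/|C'(u_k)|. A sketch on a plain cubic B-spline path (the curve, feedrate F and cycle time T below are assumptions, and this omits the rational weights of full NURBS):

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative clamped cubic B-spline tool path, not the paper's geometry.
knots = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], dtype=float)
ctrl = np.array([[0, 0], [1, 2], [2, -1], [4, 1], [5, 0], [6, 2]], dtype=float)
curve = BSpline(knots, ctrl, 3)
dcurve = curve.derivative()

F = 2.0      # commanded feedrate, mm/s (assumed)
T = 0.001    # interpolation cycle, s (assumed)

# First-order Taylor update of the curve parameter per servo cycle.
u, us = 0.0, [0.0]
while u < 3.0:
    u = min(u + F * T / np.linalg.norm(dcurve(u)), 3.0)
    us.append(u)
points = curve(np.array(us))   # setpoints, one per interpolation cycle
```

Each chord between consecutive setpoints has length close to F*T, which is what keeps the actual feedrate near the commanded one.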

  19. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    NASA Technical Reports Server (NTRS)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions that humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
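The curves shaped by such control points are Bezier curves, which can be evaluated with de Casteljau's algorithm; the 3-D control points below are arbitrary examples, not generated by the paper's method:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve with control points `ctrl` at parameter t."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A cubic Bezier segment in 3-D Euclidean space (x, y, altitude).
ctrl = [[0.0, 0.0, 0.0], [1.0, 2.0, 0.5], [3.0, 2.0, 1.0], [4.0, 0.0, 1.5]]
start = de_casteljau(ctrl, 0.0)   # equals the first control point
end = de_casteljau(ctrl, 1.0)     # equals the last control point
mid = de_casteljau(ctrl, 0.5)
```

The endpoints of each segment coincide with waypoints, while the interior control points bend the curve to respect vehicle-dynamics limits.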

  20. Application of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.

    1990-01-01

    A simple procedure was developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation was employed for the grid distributions.

  1. Applications of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.; Smith, Robert E.

    1990-01-01

    A simple procedure has been developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation has been employed for the grid distributions.

  2. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    PubMed

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower-sampling-frequency data. This situation may lead to an unreliable and vulnerable identity-authentication process in high-security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved higher matching percentages, by up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
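The two enhancement schemes compared can be sketched as follows; the waveform and the sampling rates here are synthetic assumptions, not taken from the cited databases:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

fs_low, fs_high = 128, 512                      # Hz (assumed rates)
t_low = np.arange(0, 1, 1 / fs_low)
# Synthetic stand-in for one second of a low-rate ECG recording.
ecg = np.sin(2 * np.pi * 1.2 * t_low) * np.exp(-((t_low - 0.5) ** 2) / 0.01)

# Upsample to the higher rate with each interpolation scheme.
t_high = np.arange(0, t_low[-1], 1 / fs_high)
ecg_pchip = PchipInterpolator(t_low, ecg)(t_high)   # shape-preserving (PCHIP)
ecg_spline = CubicSpline(t_low, ecg)(t_high)        # smoother (SPLINE)
```

PCHIP preserves monotonicity between samples and avoids overshoot around sharp QRS-like deflections, whereas the cubic spline is smoother (C2) but can ring; this is the practical trade-off behind the two techniques.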

  3. Theory, computation, and application of exponential splines

    NASA Technical Reports Server (NTRS)

    Mccartin, B. J.

    1981-01-01

A generalization of the semiclassical cubic spline, known in the literature as the exponential spline, is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined; this development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed, and a variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm is developed for the selection of tension parameters that produce a shape-preserving approximant. Finally, a sequence of selected curve-fitting examples is presented that clearly demonstrates the advantages of exponential splines over cubic splines.

  4. Self-organizing map analysis using multivariate data from theophylline tablets predicted by a thin-plate spline interpolation.

    PubMed

    Yasuda, Akihito; Onuki, Yoshinori; Obata, Yasuko; Yamamoto, Rie; Takayama, Kozo

    2013-01-01

    The "quality by design" concept in pharmaceutical formulation development requires the establishment of a science-based rationale and a design space. We integrated thin-plate spline (TPS) interpolation and Kohonen's self-organizing map (SOM) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline tablets were prepared based on a standard formulation. The tensile strength, disintegration time, and stability of these variables were measured as response variables. These responses were predicted quantitatively based on nonlinear TPS. A large amount of data on these tablets was generated and classified into several clusters using an SOM. The experimental values of the responses were predicted with high accuracy, and the data generated for the tablets were classified into several distinct clusters. The SOM feature map allowed us to analyze the global and local correlations between causal factors and tablet characteristics. The results of this study suggest that increasing the proportion of microcrystalline cellulose (MCC) improved the tensile strength and the stability of tensile strength of these theophylline tablets. In addition, the proportion of MCC has an optimum value for disintegration time and stability of disintegration. Increasing the proportion of magnesium stearate extended disintegration time. Increasing the compression force improved tensile strength, but degraded the stability of disintegration. This technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline tablet formulations.

  5. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.

  6. Landmark-based elastic registration using approximating thin-plate splines.

    PubMed

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.

  7. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
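The performance metric described, the Fourier transform of the interpolation kernel, is easy to compute numerically. For linear interpolation the kernel is the triangle function, whose transform is sinc squared; a self-contained check:

```python
import numpy as np

def kernel_ft(kernel, x, f):
    """Numerically approximate the Fourier transform of a kernel on grid x."""
    dx = x[1] - x[0]
    return np.array([(kernel(x) * np.exp(-2j * np.pi * fi * x)).sum() * dx
                     for fi in f])

# Triangle kernel of linear interpolation, supported on [-1, 1].
tri = lambda x: np.maximum(1.0 - np.abs(x), 0.0)

x = np.linspace(-1.0, 1.0, 4001)
f = np.array([0.0, 0.25, 0.5])
H = kernel_ft(tri, x, f).real
# Analytically H(f) = sinc(f)^2, so H(0) = 1 and H(0.5) = (2/pi)^2 ≈ 0.405.
```

The roll-off of H below 1 at the half-sample frequency quantifies the attenuation (and, by its sidelobes, the aliasing) that makes linear interpolation inferior to windowed-sinc and B-spline kernels.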

  8. Cubic spline interpolation with overlapped window and data reuse for on-line Hilbert Huang transform biomedical microprocessor.

    PubMed

    Chang, Nai-Fu; Chiang, Cheng-Yi; Chen, Tung-Chien; Chen, Liang-Gee

    2011-01-01

On-chip implementation of the Hilbert-Huang transform (HHT) has great impact on the analysis of non-linear and non-stationary biomedical signals on wearable or implantable sensors for real-time applications. Cubic spline interpolation (CSI) consumes the most computation in the HHT and is the key component of an HHT processor. Traditionally, CSI in the HHT is performed after the collection of a large window of signals, and the long latency violates the real-time requirement of the applications. In this work, we propose to keep processing the incoming signals on-line with small, overlapped data windows without sacrificing interpolation accuracy. After data reuse between the data windows, 58% of the multiplications and 73% of the divisions in CSI are saved.
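A conceptual sketch of the overlapped-window idea (not the chip architecture, and without the hardware-level data reuse): fit a cubic spline per small window, with extra overlap samples as boundary context, and keep only each window's own span:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def windowed_csi(t, y, factor=4, window=16, overlap=4):
    """Upsample y by `factor` using cubic splines on overlapped windows."""
    pieces = []
    step = window - overlap
    for start in range(0, len(t) - 1, step):
        lo = max(0, start - overlap)          # overlap provides boundary context
        hi = min(len(t), start + window)
        cs = CubicSpline(t[lo:hi], y[lo:hi])
        stop = min(start + step, len(t) - 1)
        t_fine = np.linspace(t[start], t[stop], (stop - start) * factor,
                             endpoint=False)
        pieces.append(cs(t_fine))             # keep only this window's span
    return np.concatenate(pieces)

t = np.arange(33, dtype=float)
fine = windowed_csi(t, np.sin(t / 5.0))
```

Because each window starts producing output as soon as its few samples arrive, latency stays small, which is the property the on-line HHT processor needs.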

  9. Neural networks for function approximation in nonlinear control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.; Stengel, Robert F.

    1990-01-01

    Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.

  10. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
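A hypothetical sketch of the B-spline compensation idea the paper favours: fit the positioning error measured at calibration points with a cubic B-spline and subtract the predicted error from commanded positions. The error curve and positions below are invented:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

pos = np.linspace(0.0, 300.0, 13)                  # calibration positions, mm
err = 1e-3 * np.sin(pos / 40.0) + 2e-6 * pos       # "measured" error, mm

# Cubic B-spline model of the geometric error along the axis.
comp = make_interp_spline(pos, err, k=3)

target = 123.4
corrected = target - comp(target)                  # compensated command
```

Compared with low-order polynomial fits, the B-spline follows local error features without global oscillation, which is consistent with the paper's finding that B-spline interpolation compensates best.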

  11. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    PubMed

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
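The interpolation-versus-smoothing distinction drawn above can be illustrated with SciPy's thin plate spline radial basis function: zero smoothing honours every (noisy) count exactly, while a positive smoothing parameter relaxes the fit and suppresses sampling noise. The sampling sites, density gradient and noise level below are invented for illustration and the smoothing value is arbitrary.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Hypothetical retinal sampling sites with a shallow density gradient
# plus counting noise (cells / mm^2).
pts = rng.uniform(0.0, 1.0, size=(100, 2))
counts = 1000.0 + 500.0 * pts[:, 0] + rng.normal(0.0, 50.0, 100)

# smoothing=0 interpolates (honours every count, noise included);
# smoothing>0 relaxes the fit and suppresses sampling noise.
tps_interp = RBFInterpolator(pts, counts, kernel='thin_plate_spline', smoothing=0.0)
tps_smooth = RBFInterpolator(pts, counts, kernel='thin_plate_spline', smoothing=1e3)

resid_interp = np.max(np.abs(tps_interp(pts) - counts))
resid_smooth = np.max(np.abs(tps_smooth(pts) - counts))
```

The interpolant reproduces the observed counts almost exactly (including their noise), whereas the smoother deliberately does not — which is exactly the trade-off the abstract describes.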

  12. Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines

    Treesearch

    Julio L. Guardado; William T. Sommers

    1977-01-01

    The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...

  13. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    NASA Technical Reports Server (NTRS)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  14. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies of up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
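Of the GIS methods compared above, inverse distance weighting is simple enough to sketch directly. The sample coordinates and depth values below are hypothetical, and the exponent `power=2` is a common default rather than the study's setting.

```python
import numpy as np

def idw(sample_xy, sample_vals, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted prediction at each query point."""
    # Pairwise distances: (n_query, n_sample)
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps keeps on-sample queries finite
    return (w @ sample_vals) / w.sum(axis=1)

# Hypothetical depth samples (m) at four surveyed points of the reach.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depth = np.array([0.2, 0.4, 0.3, 0.5])

pred = idw(xy, depth, np.array([[0.5, 0.5], [0.0, 0.0]]))
```

At the centre, all four samples are equidistant so the prediction is their mean (0.35 m); at a sample site the enormous weight on the zero-distance sample returns that sample's own value.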

  15. Motion artifact detection and correction in functional near-infrared spectroscopy: a new hybrid method based on spline interpolation method and Savitzky-Golay filtering.

    PubMed

    Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A

    2018-01-01

Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent at removing motion-induced sharp spikes, baseline shifts in the signal remain after this type of filtering. Methods such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error ([Formula: see text]), Pearson's correlation ([Formula: see text]), and the area under the receiver operator characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
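A toy version of the two-stage idea can be sketched as follows, with a crude re-levelling step standing in for the paper's spline-interpolation/tPCA baseline correction, followed by Savitzky-Golay filtering for the residual high-frequency content. The signal, the shift location and the filter settings are all invented.

```python
import numpy as np
from scipy.signal import savgol_filter

def hybrid_correct(signal, shift_idx, sg_window=11, sg_order=3):
    """Toy two-stage correction: re-level baseline shifts, then SG-smooth."""
    corrected = signal.astype(float).copy()
    # Stage 1 (stand-in for the paper's spline/tPCA step): remove the jump
    # at each detected shift by re-levelling everything after it.
    for i in shift_idx:
        corrected[i:] -= corrected[i] - corrected[i - 1]
    # Stage 2: Savitzky-Golay filtering of residual high-frequency spikes.
    return savgol_filter(corrected, sg_window, sg_order)

t = np.linspace(0.0, 1.0, 101)
clean = np.sin(2 * np.pi * t)          # hypothetical hemodynamic signal
contaminated = clean.copy()
contaminated[50:] += 5.0               # motion-induced baseline shift
recovered = hybrid_correct(contaminated, shift_idx=[50])
```

The re-levelling removes the large step while the SG stage leaves the slow hemodynamic component essentially intact, which is the division of labour the hybrid method exploits.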

  16. Improved computer-aided detection of small polyps in CT colonography using interpolation for curvature estimation

    PubMed Central

    Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.

    2011-01-01

Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. These erroneous curvatures reduce the performance of polyp detection. This paper presents an analysis of interpolation’s effect on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolation included more accurate curvature values for simulated data and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline and cubic B-spline interpolation all significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029

  17. Preprocessor with spline interpolation for converting stereolithography into cutter location source data

    NASA Astrophysics Data System (ADS)

    Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo

    2017-06-01

The authors previously developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot, without the need for any robot language. In this paper, a preprocessor for generating cutter location source data (CLS data) from stereolithography data (STL data) is first proposed for robotic machining. The preprocessor enables the machining robot to be controlled directly from STL data, without any commercially provided CAM system. The STL format describes a curved surface geometry as a triangular mesh. The preprocessor allows machining robots to be controlled along a zigzag or spiral path calculated directly from the STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.

  18. Investigations into the shape-preserving interpolants using symbolic computation

    NASA Technical Reports Server (NTRS)

    Lam, Maria

    1988-01-01

Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they match monotone or convex data. Most methods of investigating this problem utilize quadratic splines or Hermite polynomials, and a similar approach is adopted in this investigation. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values to be assigned to those points. Schemes for choosing derivatives were examined. Along the way, fitting the given data points with a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.
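The failure mode described above is easy to reproduce with SciPy: an unconstrained cubic spline overshoots monotone data, while a monotonicity-preserving Hermite scheme (PCHIP — a standard shape-preserving interpolant, not the quadratic splines investigated in the report) does not. The data set is invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone data with a flat start and finish (invented example).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 0.1, 1.0, 1.0])

xs = np.linspace(0.0, 4.0, 401)
plain = CubicSpline(x, y)(xs)          # standard interpolant
shape = PchipInterpolator(x, y)(xs)    # monotonicity-preserving Hermite

# A monotone interpolant of these data must stay within [0, 1].
overshoot_plain = plain.min() < -1e-9 or plain.max() > 1.0 + 1e-9
overshoot_shape = shape.min() < -1e-9 or shape.max() > 1.0 + 1e-9
```

Both interpolants match the data exactly at the nodes; the difference lies entirely in the derivative values assigned there, which is precisely the design freedom the abstract identifies as "the key to the problem".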

  19. Tomography for two-dimensional gas temperature distribution based on TDLAS

    NASA Astrophysics Data System (ADS)

    Luo, Can; Wang, Yunchu; Xing, Fei

    2018-03-01

Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the combustion gas temperature distribution. The effects of the number of rays, the number of grids, and the spacing of rays on the temperature reconstruction results for parallel rays are investigated. The reconstruction quality improves with the number of rays, but the improvement levels off once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual ray method combined with the reconstruction algorithms is tested; compared with the original method, it effectively improves the accuracy of the reconstruction results. Linear interpolation and cubic spline interpolation are used to improve the calculation accuracy of the virtual ray absorption values; according to the calculation results, cubic spline interpolation is the better of the two. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate these conclusions.
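The linear-versus-cubic comparison for virtual-ray values can be sketched on a smooth absorbance profile. The Gaussian profile and the sampling densities are assumptions; the outcome matches the abstract's finding that cubic spline interpolation is more accurate.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical smooth absorbance profile along a line of sight.
profile = lambda s: np.exp(-((s - 0.5) ** 2) / 0.02)

s_meas = np.linspace(0.0, 1.0, 21)   # coarse "real ray" positions
s_virt = np.linspace(0.0, 1.0, 201)  # dense "virtual ray" positions

# Interpolate the coarse measurements onto the virtual-ray positions.
linear = np.interp(s_virt, s_meas, profile(s_meas))
cubic = CubicSpline(s_meas, profile(s_meas))(s_virt)

err_lin = np.max(np.abs(linear - profile(s_virt)))
err_cub = np.max(np.abs(cubic - profile(s_virt)))
```

For a smooth profile the linear error scales with the square of the ray spacing while the cubic spline error scales with its fourth power, so the spline's advantage grows as the real-ray grid coarsens relative to the profile's width.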

  20. Comparing interpolation techniques for annual temperature mapping across Xinjiang region

    NASA Astrophysics Data System (ADS)

    Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang

    2016-11-01

Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty in establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN) and empirical Bayesian kriging (EBK). Error metrics were used to validate the interpolations against independent data. The results indicated that ANUSPLIN performed better than the other four interpolation methods.

  1. B-spline Method in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. The application of some of these techniques to the computation of viscous incompressible flows is presented.
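Two of the properties the review highlights — compact support and a well-behaved basis — can be checked numerically for a clamped cubic B-spline basis with SciPy. The knot vector here is an arbitrary example: the basis functions are non-negative and sum to one at every point (a partition of unity).

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline basis on [0, 1] (arbitrary example knot vector).
degree = 3
interior = np.linspace(0.0, 1.0, 6)
knots = np.concatenate(([0.0] * degree, interior, [1.0] * degree))
n_basis = len(knots) - degree - 1  # 8 basis functions

x = np.linspace(0.0, 1.0, 50)
coeffs = np.eye(n_basis)  # one unit coefficient per column -> one basis fn
basis = np.column_stack([BSpline(knots, coeffs[i], degree)(x)
                         for i in range(n_basis)])
# Each column is nonzero on at most degree+1 knot spans (compact support),
# and the columns sum to one everywhere (partition of unity).
```

Compact support is what makes B-spline collocation and Galerkin matrices banded, which underlies the fast solution algorithms the review discusses.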

  2. A Comparison of Spatial Analysis Methods for the Construction of Topographic Maps of Retinal Cell Density

    PubMed Central

    Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.

    2014-01-01

Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
PMID:24747568

  3. Spline approximation, Part 1: Basic methodology

    NASA Astrophysics Data System (ADS)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-splines will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated using numerical examples in Part 3.
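The truncated-polynomial construction mentioned for Part 1 can be sketched directly: a cubic spline space spanned by the ordinary powers plus one truncated power per knot, fitted to noisy data by least squares. The data and knot positions are invented for illustration.

```python
import numpy as np

def truncated_power_design(x, knots, degree=3):
    """Columns 1, x, ..., x^degree and (x - k)_+^degree for each knot."""
    cols = [x ** j for j in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Least-squares spline fit to noisy data (invented example).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, x.size)

A = truncated_power_design(x, knots=[0.25, 0.5, 0.75])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
fit = A @ coef  # piecewise cubic, C^2 across the knots by construction
```

The truncated power basis makes the spline's smoothness explicit (each `(x - k)_+^3` term is C^2 at its knot), though for larger problems its poor conditioning is one reason Part 2 turns to B-splines.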

  4. Evaluation of non-rigid registration parameters for atlas-based segmentation of CT images of human cochlea

    NASA Astrophysics Data System (ADS)

    Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.

    2017-02-01

Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high and low resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varied registration parameters are the cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), the interpolation method (linear, windowed-sinc and B-spline) and the sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using the MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using a 100% sampling percentage yielded the highest DSC and smallest HD (0.828 ± 0.021 and 0.25 ± 0.09 mm, respectively). Therefore, B-spline registration with an NC cost function, B-spline interpolation and 100% sampling can be the foundation of an optimized atlas-based segmentation algorithm for intracochlear structures in clinical CT images.

  5. Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    PubMed Central

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.

    2013-01-01

Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method. PMID:24000283

  6. Evaluation of interpolation effects on upsampling and accuracy of cost functions-based optimized automatic image registration.

    PubMed

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H

    2013-01-01

Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.
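The effect of interpolation order on upsampling accuracy — one axis of the evaluation above — can be reproduced in miniature with `scipy.ndimage.map_coordinates`. A smooth synthetic image stands in for the MRI data, and only three of the eight schemes are shown.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical smooth HR "slice" on a 65x65 grid over [0, 1]^2.
n = 65
yy, xx = np.mgrid[0:n, 0:n] / (n - 1)
hr = np.sin(2 * np.pi * xx) * np.cos(2 * np.pi * yy)
lr = hr[::4, ::4]  # emulate a coarsely sampled dataset (17x17)

# Every HR pixel expressed in LR index coordinates (exact alignment).
coords = np.mgrid[0:n, 0:n] / 4.0

mse = {}
for order in (0, 1, 3):  # nearest neighbour, (tri)linear, cubic B-spline
    up = map_coordinates(lr, coords, order=order)
    mse[order] = np.mean((up - hr) ** 2)
```

For a smooth image the mean squared error drops as the interpolation order rises, mirroring the kind of quantitative ranking the study reports across its eight schemes.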

  7. Use B-spline interpolation fitting baseline for low concentration 2,6-di-tert-butyl-p-cresol determination in jet fuels by differential pulse voltammetry

    NASA Astrophysics Data System (ADS)

    Wen, D. S.; Wen, H.; Shi, Y. G.; Su, B.; Li, Z. C.; Fan, G. Z.

    2018-01-01

A B-spline interpolation fitting baseline for electrochemical analysis by differential pulse voltammetry was established for determining low concentrations (less than 5.0 mg/L) of 2,6-di-tert-butyl-p-cresol (BHT) in jet fuel in the presence of 6-tert-butyl-2,4-xylenol. The experimental results show that the relative errors are less than 2.22%, the sum of standard deviations is less than 0.134 mg/L, and the correlation coefficient is more than 0.9851. If the BHT concentration is higher than 5.0 mg/L, a linear fitting baseline method is more applicable and simpler.
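The role a fitted baseline plays in recovering a small peak can be sketched as follows. The voltammogram, baseline shape and anchor regions are invented, and a generic cubic spline stands in for the paper's B-spline fitting procedure.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical voltammogram: a Gaussian peak on a curved drifting baseline.
E = np.linspace(-0.2, 0.8, 200)                 # potential (V)
baseline = 0.5 + 0.8 * E + 0.6 * E ** 2         # drift current (uA)
peak = 2.0 * np.exp(-((E - 0.3) / 0.03) ** 2)   # analyte peak (uA)
i_meas = baseline + peak

# Fit the baseline with a spline through peak-free anchor regions,
# then subtract it to recover the peak height.
anchors = (E < 0.15) | (E > 0.45)
base_fit = CubicSpline(E[anchors], i_meas[anchors])(E)
peak_height = np.max(i_meas - base_fit)
```

Because the spline is anchored only where the analyte contributes essentially nothing, the subtraction isolates the peak current on which the calibration is built.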

  8. Super-resolution reconstruction for 4D computed tomography of the lung via the projections onto convex sets approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yu, E-mail: yuzhang@smu.edu.cn, E-mail: qianjinfeng08@gmail.com; Wu, Xiuxiu; Yang, Wei

    2014-11-01

Purpose: The use of 4D computed tomography (4D-CT) of the lung is important in lung cancer radiotherapy for tumor localization and treatment planning. Sometimes, dense sampling is not acquired along the superior–inferior direction. This disadvantage results in an interslice thickness that is much greater than the in-plane voxel resolution. Isotropic resolution is necessary for multiplanar display, but the commonly used interpolation operation blurs images. This paper presents a super-resolution (SR) reconstruction method to enhance 4D-CT resolution. Methods: The authors assume that the low-resolution images of different phases at the same position can be regarded as input “frames” from which to reconstruct high-resolution images. The SR technique is used to recover high-resolution images. Specifically, the Demons deformable registration algorithm is used to estimate the motion field between different “frames.” Then, the projection onto convex sets approach is implemented to reconstruct high-resolution lung images. Results: The performance of the SR algorithm is evaluated using both simulated and real datasets. The method generates clearer lung images and enhances image structure compared with cubic spline interpolation and the back projection (BP) method. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 40.8% relative to cubic spline interpolation and 10.2% versus BP. Conclusions: A new algorithm has been developed to improve the resolution of 4D-CT. The algorithm outperforms the cubic spline interpolation and BP approaches by producing images with markedly improved structural clarity and greatly reduced artifacts.

  9. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    PubMed

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.

  10. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Second, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing errors show that fourth-order polynomial interpolation provides the best fit to option prices, with the lowest error.
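The LOOCV pricing-error criterion proposed above is easy to sketch: refit with one observation held out, price the held-out strike, and average the squared errors. The option-price curve below is a hypothetical smooth function with small noise, not market data.

```python
import numpy as np

def loocv_price_error(strikes, prices, degree):
    """Mean squared leave-one-out pricing error of a polynomial fit."""
    errs = []
    for i in range(strikes.size):
        mask = np.arange(strikes.size) != i        # hold out observation i
        coef = np.polyfit(strikes[mask], prices[mask], degree)
        errs.append((np.polyval(coef, strikes[i]) - prices[i]) ** 2)
    return float(np.mean(errs))

# Hypothetical smooth option-price curve with small noise (not market data).
rng = np.random.default_rng(2)
K = np.linspace(80.0, 120.0, 25)
px = 20.0 * np.exp(-((K - 100.0) / 15.0) ** 2) + rng.normal(0.0, 0.05, K.size)

cv2 = loocv_price_error(K, px, 2)
cv4 = loocv_price_error(K, px, 4)
```

For this curved price function the fourth-order fit attains the lower LOOCV error, the same selection outcome the study reports for its option data.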

  11. Weighted cubic and biharmonic splines

    NASA Astrophysics Data System (ADS)

    Kvasov, Boris; Kim, Tae-Wan

    2017-01-01

    In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate main features of this original approach.
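The successive over-relaxation solve mentioned above can be sketched for a generic diagonally dominant tridiagonal system of the kind spline discretizations produce; the matrix and the relaxation factor are illustrative choices, not the paper's.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (nonzero diagonal assumed)."""
    x = np.zeros_like(b, dtype=float)
    diag = np.diag(A)
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(b.size):
            sigma = A[i] @ x - diag[i] * x[i]   # off-diagonal contribution
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / diag[i]
        if np.max(np.abs(x - x_prev)) < tol:    # converged
            break
    return x

# Diagonally dominant tridiagonal system, as in cubic-spline equations.
n = 20
A = (np.diag(4.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = np.ones(n)
x_sol = sor_solve(A, b)
```

For symmetric positive definite systems like this one, SOR converges for any relaxation factor between 0 and 2; a tuned factor (or the Gaussian-elimination combination the paper describes) would converge faster.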

  12. Control theory and splines, applied to signature storage

    NASA Technical Reports Server (NTRS)

    Enqvist, Per

    1994-01-01

In this report we study the interpolation of a set of points in the plane using control theory. We will discover how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problem. We will see that the important parameters are the two eigenvalues of the control matrix.

  13. Surface mapping of spike potential fields: experienced EEGers vs. computerized analysis.

    PubMed

    Koszer, S; Moshé, S L; Legatt, A D; Shinnar, S; Goldensohn, E S

    1996-03-01

    An EEG epileptiform spike focus recorded with scalp electrodes is clinically localized by visual estimation of the point of maximal voltage and the distribution of its surrounding voltages. We compared such estimated voltage maps, drawn by experienced electroencephalographers (EEGers), with a computerized spline interpolation technique employed in the commercially available software package FOCUS. Twenty-two spikes were recorded from 15 patients during long-term continuous EEG monitoring. Maps of voltage distribution from the 28 electrodes surrounding the points of maximum change in slope (the spike maximum) were constructed by the EEGer. The same points of maximum spike and voltage distributions at the 29 electrodes were mapped by computerized spline interpolation and a comparison between the two methods was made. The findings indicate that the computerized spline mapping techniques employed in FOCUS construct voltage maps with similar maxima and distributions as the maps created by experienced EEGers. The dynamics of spike activity, including correlations, are better visualized using the computerized technique than by manual interpretation alone. Its use as a technique for spike localization is accurate and adds information of potential clinical value.

  14. Interpolating Spherical Harmonics for Computing Antenna Patterns

    DTIC Science & Technology

    2011-07-01

    ... If g_NF denotes the spline computed from the uniform partition of N_F + 1 frequency points, the splines converge as O(N_F^-4): ‖g_NF − g‖∞ ≤ C0‖g^(4)‖ ... There is the possibility of estimating the error ‖g − g_NF‖∞ even though the function g is unknown. Table 1 compares these unknown errors ‖g − g_NF‖∞ to the computable estimates ‖g_NF − g_2NF‖∞. The latter is a strong predictor of the unknown error. The triple bar is the sup-norm error over all the ...
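
    The error-estimation idea quoted above, comparing the spline built on N_F intervals with the one built on 2N_F intervals as a computable proxy for the unknown sup-norm error, can be sketched on a known test function (the function, interval, and node counts below are illustrative assumptions, not the report's antenna data):

```python
# Sketch: the difference between splines on N and 2N intervals predicts
# the true sup-norm error (computable here because g is known).
import numpy as np
from scipy.interpolate import CubicSpline

g = np.sin                                   # stand-in for the sampled response
fine = np.linspace(0.0, 2.0 * np.pi, 4001)   # dense grid for sup-norm estimates

def spline_on(n_intervals):
    x = np.linspace(0.0, 2.0 * np.pi, n_intervals + 1)
    return CubicSpline(x, g(x), bc_type="natural")

g_n, g_2n = spline_on(8), spline_on(16)
true_err = np.max(np.abs(g_n(fine) - g(fine)))     # needs the "unknown" g
estimate = np.max(np.abs(g_n(fine) - g_2n(fine)))  # fully computable
```

    Because the finer spline is roughly sixteen times more accurate, the computable estimate tracks the true error closely, which is the behavior this record's Table 1 reports.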

  15. Comparison of volatility function technique for risk-neutral densities estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    Volatility function techniques based on interpolation play an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performance of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatility of options with respect to strike prices/delta is interpolated to obtain a well-behaved density. Statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND with a fourth-order polynomial is more appropriate than with a smoothing spline, since the fourth-order polynomial gives the lowest mean square error (MSE). The results can help market participants capture market expectations of future developments of the underlying asset.
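
    The two interpolation approaches compared in this study can be sketched on a synthetic implied-volatility smile (the strikes and volatilities below are made-up values, and the final differentiation step that yields the RND is omitted):

```python
# Sketch: smoothing-spline vs fourth-order-polynomial interpolation of an
# implied-volatility smile (synthetic data; not the DJIA options data).
import numpy as np
from scipy.interpolate import UnivariateSpline

strikes = np.linspace(80.0, 120.0, 9)
iv = 0.20 + 0.002 * ((strikes - 100.0) / 10.0) ** 2   # a smooth smile

# Approach 1: smoothing spline (s > 0 trades fidelity for smoothness).
spl = UnivariateSpline(strikes, iv, k=3, s=1e-8)

# Approach 2: fourth-order polynomial least-squares fit
# (centering the strikes improves the conditioning of the fit).
coef = np.polyfit(strikes - 100.0, iv, deg=4)
poly = np.poly1d(coef)

grid = np.linspace(80.0, 120.0, 401)
iv_spline = spl(grid)
iv_poly = poly(grid - 100.0)
```

    Either interpolant would then be converted to call prices and differentiated twice with respect to strike to obtain the RND (the Breeden-Litzenberger relation).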

  16. Transactions of The Army Conference on Applied Mathematics and Computing (5th) Held in West Point, New York on 15-18 June 1987

    DTIC Science & Technology

    1988-03-01

    29. Statistical Machine Learning for the Cognitive Selection of Nonlinear Programming Algorithms in Engineering Design Optimization Toward ... interpolation and Interpolation by Box Spline Surfaces, Charles K. Chui, Harvey Diamond, Louise A. Raphael, 301. Knot Selection for Least Squares ... West Virginia University, Morgantown, West Virginia; and Louise Raphael, National Science Foundation, Washington, DC. Knot Selection for Least ...

  17. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, from CSI-based modified JPEG and standard JPEG, under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
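
    The downsample-then-upsample pipeline the paper builds on can be sketched with a cubic-spline zoom (a generic illustration using `scipy.ndimage.zoom`, not the paper's modified-JPEG codec or its ρ-domain selection step):

```python
# Sketch: downsample an image before "coding" and restore it after
# "decoding" with cubic-spline interpolation, then measure the distortion.
import numpy as np
from scipy import ndimage

x = np.linspace(0.0, 1.0, 64)
image = np.sin(2.0 * np.pi * np.outer(x, x))   # smooth 64x64 test image

low = ndimage.zoom(image, 0.5, order=3)        # downsample to 32x32
rec = ndimage.zoom(low, 2.0, order=3)          # cubic upsample back to 64x64
mse = float(np.mean((image - rec) ** 2))       # distortion of the round trip
```

    For smooth content the round-trip error is small, which is why such schemes can win at low bit rates; highly textured images make the error larger, matching the paper's observation that the crossover bit rate is image-dependent.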

  18. Coupled B-snake grids and constrained thin-plate splines for analysis of 2-D tissue deformations from tagged MRI.

    PubMed

    Amini, A A; Chen, Y; Curwen, R W; Mani, V; Sun, J

    1998-06-01

    Magnetic resonance imaging (MRI) is unique in its ability to noninvasively and selectively alter tissue magnetization and create tagged patterns within a deforming body such as the heart muscle. The resulting patterns define a time-varying curvilinear coordinate system on the tissue, which we track with coupled B-snake grids. B-spline bases provide local control of shape, compact representation, and parametric continuity. Efficient spline warps are proposed which warp an area in the plane such that two embedded snake grids obtained from two tagged frames are brought into registration, interpolating a dense displacement vector field. The reconstructed vector field adheres to the known displacement information at the intersections, forces corresponding snakes to be warped into one another, and for all other points in the plane, where no information is available, a C1 continuous vector field is interpolated. The implementation proposed in this paper improves on our previous variational-based implementation and generalizes warp methods to include biologically relevant contiguous open curves, in addition to standard landmark points. The methods are validated with a cardiac motion simulator, in addition to in-vivo tagging data sets.

  19. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation, using the penalty function method from optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions on the shape functions and their derivatives are satisfied, so distortions rarely occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM), since the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected as the time discretization scheme. Three numerical examples are presented to demonstrate the availability and accuracy of the present approach in comparison with the traditional thin plate spline (TPS) radial basis functions.

  20. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors with long-term goals are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model is a nonparametric, adaptive regression method that suits problems with high dimensions and many variables; the semi-parametric technique uses smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.

  1. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique for interpolating the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on such data may not predict reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling transform of the data, by which the non-uniformity of the sampled data distribution can be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. We find that the proposed technique gives the most accurate results and that its computational time is short. Thus, this simple method is feasible for practical problems involving the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
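
    SciPy does not ship a Steffen spline, but the same local monotonicity-preserving behavior can be sketched with `PchipInterpolator` as a stand-in, applied after the log-log transform the authors recommend (the energies and ELF values below are invented sample data):

```python
# Sketch: log-log scaling plus a shape-preserving local interpolant,
# using PCHIP as a stand-in for the Steffen spline.
import numpy as np
from scipy.interpolate import PchipInterpolator

energy = np.array([1.0, 2.0, 5.0, 20.0, 100.0, 1000.0])  # eV, hypothetical
elf = np.array([0.01, 0.08, 0.9, 0.3, 0.02, 1e-4])       # sampled ELF values

f_log = PchipInterpolator(np.log(energy), np.log(elf))

def elf_interp(e):
    """Interpolate the ELF at energy e via the log-log fit."""
    return np.exp(f_log(np.log(e)))
```

    Like the Steffen spline, this interpolant passes through every sample and cannot overshoot between two adjacent data points, so no spurious oscillations appear.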

  2. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    PubMed

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing using a weighted least squares error filter is developed for scanning electron microscope (SEM) images. Noise in images, and particularly in SEM images, is undesirable. A diversity of sample images is captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all the data required for processing obtained from a single image. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In test cases involving different images, the efficiency of the developed noise reduction filter proved significantly better than that of the other methods. Noise can be reduced efficiently, with an appropriate choice of scan rate, from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
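
    The core combination, Savitzky-Golay smoothing followed by cubic-spline interpolation, can be sketched on a synthetic noisy scan line (the window length, polynomial order, and noise level are assumed values, and the weighted-least-squares stage is omitted):

```python
# Sketch: Savitzky-Golay smoothing of a noisy scan line, then a cubic
# spline over the smoothed samples for a continuous line model.
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)
x = np.arange(128, dtype=float)
clean = np.sin(2.0 * np.pi * x / 64.0)            # underlying intensity
noisy = clean + rng.normal(0.0, 0.2, x.size)      # simulated SEM noise

smooth = savgol_filter(noisy, window_length=15, polyorder=3)
line = CubicSpline(x, smooth)                     # continuous scan-line model
upsampled = line(np.linspace(0.0, 127.0, 1024))
```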

  3. NeuroMap: A Spline-Based Interactive Open-Source Software for Spatiotemporal Mapping of 2D and 3D MEA Data

    PubMed Central

    Abdoun, Oussama; Joucla, Sébastien; Mazzocco, Claire; Yvert, Blaise

    2010-01-01

    A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrode array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several square millimeters) of whole neural structures combined with microscopic resolution (about 50 μm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods used currently. First, any MEA design can be considered, including 2D or 3D, regular or irregular arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better current source localization. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under GNU General Public License and available at http://sites.google.com/site/neuromapsoftware. PMID:21344013

  4. NeuroMap: A Spline-Based Interactive Open-Source Software for Spatiotemporal Mapping of 2D and 3D MEA Data.

    PubMed

    Abdoun, Oussama; Joucla, Sébastien; Mazzocco, Claire; Yvert, Blaise

    2011-01-01

    A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrode array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several square millimeters) of whole neural structures combined with microscopic resolution (about 50 μm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods used currently. First, any MEA design can be considered, including 2D or 3D, regular or irregular arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better current source localization. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under GNU General Public License and available at http://sites.google.com/site/neuromapsoftware.

  5. Reliability of the Parabola Approximation Method in Heart Rate Variability Analysis Using Low-Sampling-Rate Photoplethysmography.

    PubMed

    Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol

    2017-10-24

    Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals in accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and compared it with the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement, and root mean squared relative error are presented. The time taken to compute each interpolation algorithm was also investigated. The results indicate that parabola approximation is a simple, fast, and accurate method for compensating for the low timing resolution of pulse beat intervals, with performance comparable to that of the conventional cubic spline interpolation method. Even though the heart rate variability variables calculated from a signal sampled at 20 Hz did not exactly match those calculated from a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements in low-power wearable applications.
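
    The parabola approximation itself is a three-point quadratic fit around a sampled peak whose vertex gives a sub-sample peak time. A minimal sketch (the 20 Hz rate matches the study, but the pulse waveform is a made-up example):

```python
# Sketch: refine a sampled peak location with the three-point parabola
# (vertex) formula to recover sub-sample timing resolution.
import numpy as np

def parabolic_peak(y, k):
    """Refined (fractional) peak index around integer argmax k."""
    a, b, c = y[k - 1], y[k], y[k + 1]
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)   # vertex offset in samples
    return k + delta

fs = 20.0                                # low sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
y = -(t - 0.525) ** 2                    # pulse with true peak at 0.525 s
k = int(np.argmax(y))                    # nearest-sample peak (0.50 or 0.55 s)
t_peak = parabolic_peak(y, k) / fs       # refined peak time in seconds
```

    For an exactly parabolic peak the formula is exact; for real pulse shapes it is an approximation, which this study found comparable to cubic-spline interpolation at a fraction of the computational cost.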

  6. Improving spatial resolution in skin-contact thermography: comparison between a spline based and linear interpolation.

    PubMed

    Giansanti, Daniele

    2008-07-01

    A wearable device for skin-contact thermography [Giansanti D, Maccioni G. Development and testing of a wearable integrated thermometer sensor for skin contact thermography. Med Eng Phys 2006 [ahead of print

  7. An adaptive interpolation scheme for molecular potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement makes it possible to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
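
    The polyharmonic-spline building block can be sketched with SciPy's thin-plate-spline RBF interpolator on a 2-D model surface (the Gaussian test function and point counts are assumptions; the paper's partition-of-unity splitting and adaptive node refinement are not shown):

```python
# Sketch: polyharmonic (thin-plate) spline interpolation of a 2-D model
# potential energy surface from scattered samples.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(-2.0, 2.0, size=(200, 2))      # sampled geometries
pes = np.exp(-np.sum(pts ** 2, axis=1))          # model surface values

interp = RBFInterpolator(pts, pes, kernel="thin_plate_spline")

query = rng.uniform(-1.0, 1.0, size=(50, 2))
err = np.max(np.abs(interp(query) - np.exp(-np.sum(query ** 2, axis=1))))
```

    An adaptive scheme would then place new samples where a local error estimate is largest, instead of refining the whole grid uniformly.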

  8. Estimating monthly temperature using point based interpolation techniques

    NASA Astrophysics Data System (ADS)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate temperature at locations without meteorological stations in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with a thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with a multiquadric model is suitable for estimating the temperature for the rest of the months.
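
    Of the two methods, IDW is simple enough to sketch directly (the station coordinates and temperatures below are hypothetical, not the 2010 data set):

```python
# Sketch: Inverse Distance Weighted (IDW) estimation of temperature at a
# point without a station, from surrounding station readings.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < eps):                 # query coincides with a station
        return float(z_known[np.argmin(d)])
    w = 1.0 / d ** power                # closer stations weigh more
    return float(np.sum(w * z_known) / np.sum(w))

stations = np.array([[101.7, 3.1], [100.3, 5.4], [102.2, 2.2], [103.1, 5.3]])
temps = np.array([27.5, 26.8, 27.9, 27.1])      # monthly means, deg C
estimate = idw(stations, temps, np.array([101.9, 3.0]))
```

    The RBF alternative instead fits thin-plate-spline or multiquadric basis functions through all stations, which is what the RMSE comparison in this paper evaluates.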

  9. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    PubMed

    Wininger, Michael; Crane, Barbara

    2014-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effects of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times the sampling density), were analyzed. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolation (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) interpolation degrees of 2, 4, and 8 times differ only nominally (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.

  10. Noise and Sonic Boom Impact Technology. BOOMAP2 Computer Program for Sonic Boom Research. Volume 3. Program Maintenance Manual

    DTIC Science & Technology

    1988-08-01

    the spline coefficients are calculated. 2.2.3.3 GETSEG GETSEG divides the flight into segments where the points are above the critical Mach number. The... first two and the last two points of a segment can be below critical, which is done in order to improve the spline interpolation. There can also be... subcritical points in the track; however, there can be at most only 5.5 seconds between critical points. If there is a 4.5 second gap between data

  11. Validation of cross-sectional time series and multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents using doubly labeled water

    USDA-ARS?s Scientific Manuscript database

    Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant cha...

  12. Synthesis of freeform refractive surfaces forming various radiation patterns using interpolation

    NASA Astrophysics Data System (ADS)

    Voznesenskaya, Anna; Mazur, Iana; Krizskiy, Pavel

    2017-09-01

    Optical freeform surfaces are very popular today in such fields as lighting systems, sensors, photovoltaic concentrators, and others. Such surfaces make it possible to obtain systems of a new quality, with a reduced number of optical components, that ensure high consumer characteristics: small size, low weight, and high optical transmittance. This article presents methods for synthesizing a refractive surface for a given source and radiation patterns of various shapes, using computer simulation and cubic spline interpolation.

  13. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.

    PubMed

    Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is adopted as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced, since the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large number of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation, owing to the powerful parallel computing ability of the Graphics Processing Unit (GPU).
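
    One of the paper's speed tricks, replacing repeated evaluation of the cubic B-spline basis polynomials with lookup tables, can be sketched as follows (the table size is an arbitrary choice; the GPU parallelization and the LSD metric are not shown):

```python
# Sketch: tabulate the four uniform cubic B-spline basis weights in a LUT
# indexed by the fractional coordinate, as used in B-spline FFD.
import numpy as np

def bspline_basis(u):
    """Four cubic B-spline weights at fractional position u in [0, 1)."""
    return np.array([
        (1.0 - u) ** 3 / 6.0,
        (3.0 * u ** 3 - 6.0 * u ** 2 + 4.0) / 6.0,
        (-3.0 * u ** 3 + 3.0 * u ** 2 + 3.0 * u + 1.0) / 6.0,
        u ** 3 / 6.0,
    ])

LUT_SIZE = 1024
lut = np.stack([bspline_basis(i / LUT_SIZE) for i in range(LUT_SIZE)])

def weights(u):
    """Cheap table lookup in place of re-evaluating the polynomials."""
    return lut[int(u * LUT_SIZE)]
```

    Because the basis weights depend only on the fractional part of a voxel's position relative to the control-point grid, a quantized table removes the polynomial arithmetic from the inner registration loop.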

  14. Spline-based procedures for dose-finding studies with active control

    PubMed Central

    Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim

    2015-01-01

    In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
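
    The target-dose estimation step can be sketched as fitting a cubic spline to the mean dose-response and solving for the dose whose predicted efficacy equals the active control (all doses, responses, and the control mean below are invented numbers, and an interpolating spline stands in for the paper's estimator):

```python
# Sketch: estimate the smallest target dose d* at which the spline-modeled
# dose-response crosses the active-control efficacy.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

doses = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
response = np.array([0.05, 0.20, 0.38, 0.55, 0.62])   # mean efficacy per dose
control = 0.45                                        # active-control efficacy

f = CubicSpline(doses, response)
d_star = brentq(lambda d: float(f(d)) - control, 0.0, 80.0)  # target dose d*
```

    A bootstrap CI, as described in the paper, would repeat this fit-and-solve step on resampled data and take percentiles of the resulting d* values.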

  15. Thin-plate spline quadrature of geodetic integrals

    NASA Technical Reports Server (NTRS)

    Vangysen, Herman

    1989-01-01

    Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.

  16. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Treesearch

    Michael S. Balshi; A. David McGuire; Paul Duffy; Mike Flannigan; John Walsh; Jerry Melillo

    2009-01-01

    We developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Splines (MARS) approach across Alaska and Canada. Burned area was...

  17. [Research on Kalman interpolation prediction model based on micro-region PM2.5 concentration].

    PubMed

    Wang, Wei; Zheng, Bin; Chen, Binlin; An, Yaoming; Jiang, Xiaoming; Li, Zhangyong

    2018-02-01

    In recent years, the pollution problem caused by particulate matter, especially PM2.5, has become more and more serious and has attracted worldwide attention. In this paper, a Kalman prediction model combined with cubic spline interpolation is proposed and applied to predict the concentration of PM2.5 in the micro-regional campus environment, to produce interpolated maps of PM2.5 concentration, and to simulate the spatial distribution of PM2.5. The experimental data come from the environmental information monitoring system set up by our laboratory. The predicted and actual values of the PM2.5 concentration data were compared using the Wilcoxon signed-rank test. We found that the two-sided asymptotic significance probability was 0.527, which is much greater than the significance level α = 0.05. The mean absolute error (MAE) of the Kalman prediction model was 1.8 μg/m³, the mean relative error (MRE) was 6%, and the correlation coefficient R was 0.87. Thus, the Kalman prediction model predicts the concentration of PM2.5 better than the back propagation (BP) and support vector machine (SVM) predictions. In addition, by combining the Kalman prediction model with the spline interpolation method, the spatial distribution and local pollution characteristics of PM2.5 can be simulated.
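
    The prediction half of the model can be sketched with a scalar Kalman filter under a random-walk assumption (the variances and the PM2.5 readings below are made-up values; the cubic-spline spatial interpolation is a separate step):

```python
# Sketch: one-step-ahead PM2.5 prediction with a scalar Kalman filter
# under a random-walk state model.
import numpy as np

def kalman_predict(measurements, q=0.5, r=4.0):
    """q: process-noise variance; r: measurement-noise variance."""
    x, p = measurements[0], 1.0
    predictions = []
    for z in measurements[1:]:
        p = p + q                      # predict (state modeled as random walk)
        predictions.append(x)          # one-step-ahead prediction
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new reading
        p = (1.0 - k) * p
    return np.array(predictions)

pm25 = np.array([35.0, 36.2, 34.8, 35.5, 37.0, 36.4, 35.9])  # ug/m3
pred = kalman_predict(pm25)
```

    Each prediction is then compared against the actual reading, which is how error measures such as the mean absolute and relative errors reported above would be computed.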

  18. Data approximation using a blending type spline construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

    Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.

  19. An adaptive interpolation scheme for molecular potential energy surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalewski, Markus, E-mail: mkowalew@uci.edu; Larsson, Elisabeth; Heryudono, Alfa

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement makes it possible to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.

  20. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce signals sounding similar to analog music synthesizers but free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, defined as the difference between a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added to samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The best method among those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to a fundamental frequency of 7.8 kHz at a sample rate of 44.1 kHz. © 2012 Acoustical Society of America.

  1. Combining Cubic Spline Interpolation and Fast Fourier Transform to Extend Measuring Range of Reflectometry

    NASA Astrophysics Data System (ADS)

    Cheng, Ju; Lu, Jian; Zhang, Hong-Chao; Lei, Feng; Sardar, Maryam; Bian, Xin-Tian; Zuo, Fen; Shen, Zhong-Hua; Ni, Xiao-Wu; Shi, Jin

    2018-05-01

    Not Available. Supported by the National Natural Science Foundation of China under Grant No 11604115, the Educational Commission of Jiangsu Province of China under Grant No 17KJA460004, and the Huaian Science and Technology Funds under Grant No HAC201701.

  2. Dynamic graphs, community detection, and Riemannian geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun

    A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g., the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.

  3. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with the weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.
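    Transfinite interpolation, central to grid generation codes like the one above, can be illustrated by a bilinear Coons patch that blends four boundary curves into a surface (a generic textbook sketch, not GENIE's implementation; curve names are illustrative):

```python
def coons_patch(bottom, top, left, right):
    """Bilinear transfinite (Coons) interpolation of four boundary curves.
    Each curve maps a parameter in [0, 1] to a point (tuple of floats);
    the curves must agree at the shared corners."""
    def surface(u, v):
        b, t = bottom(u), top(u)
        l, r = left(v), right(v)
        c00, c10 = bottom(0.0), bottom(1.0)
        c01, c11 = top(0.0), top(1.0)
        # ruled surfaces in u and v, minus the doubly counted bilinear part
        return tuple(
            (1 - v) * b[i] + v * t[i] + (1 - u) * l[i] + u * r[i]
            - ((1 - u) * (1 - v) * c00[i] + u * (1 - v) * c10[i]
               + (1 - u) * v * c01[i] + u * v * c11[i])
            for i in range(len(b))
        )
    return surface
```

    The defining property of transfinite interpolation is that the patch reproduces every boundary curve exactly (not just at sampled points), so the grid conforms to the given boundary description.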

  4. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images

    PubMed Central

    Wang, Yangping; Wang, Song

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is used as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced; the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for large amounts of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU). PMID:28053653

  5. Modeling respiratory mechanics in the MCAT and spline-based MCAT phantoms

    NASA Astrophysics Data System (ADS)

    Segars, W. P.; Lalush, D. S.; Tsui, B. M. W.

    2001-02-01

    Respiratory motion can cause artifacts in myocardial SPECT and computed tomography (CT). The authors incorporate models of respiratory mechanics into the current 4D MCAT and into the next generation spline-based MCAT phantoms. In order to simulate respiratory motion in the current MCAT phantom, the geometric solids for the diaphragm, heart, ribs, and lungs were altered through manipulation of parameters defining them. Affine transformations were applied to the control points defining the same respiratory structures in the spline-based MCAT phantom to simulate respiratory motion. The Non-Uniform Rational B-Spline (NURBS) surfaces for the lungs and body outline were constructed in such a way as to be linked to the surrounding ribs. Expansion and contraction of the thoracic cage then coincided with expansion and contraction of the lungs and body. The changes both phantoms underwent were spline-interpolated over time to create time continuous 4D respiratory models. The authors then used the geometry-based and spline-based MCAT phantoms in an initial simulation study of the effects of respiratory motion on myocardial SPECT. The simulated reconstructed images demonstrated distinct artifacts in the inferior region of the myocardium. It is concluded that both respiratory models can be effective tools for researching effects of respiratory motion.

  6. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    PubMed Central

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

    In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately represent the geometric, material, and other properties of the object simultaneously. Through tight coupling with Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with ground truth obtained through intra-operative magnetic resonance imaging and with real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  7. Incorporating Linear Synchronous Transit Interpolation into the Growing String Method: Algorithm and Applications.

    PubMed

    Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin

    2011-12-13

    The growing string method is a powerful tool in the systematic study of chemical reactions with theoretical methods which allows for the rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures which are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant speedup in terms of computational cost is achieved (30-50%).

  8. Multivariate spline methods in surface fitting

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator); Schumaker, L. L.

    1984-01-01

    The use of spline functions in the development of classification algorithms is examined. In particular, a method is formulated for producing spline approximations to bivariate density functions, where the density function is described by a histogram of measurements. The resulting approximations are then incorporated into a Bayesian classification procedure for which the Bayes decision regions and the probability of misclassification are readily computed. Some preliminary numerical results are presented to illustrate the method.

  9. Off-line data reduction

    NASA Astrophysics Data System (ADS)

    Gutowski, Marek W.

    1992-12-01

    Presented is a novel heuristic algorithm, based on fuzzy set theory, allowing for significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial number. The reduced data set preserves all the essential features of the input curve. It is possible to reconstruct the original information to a high degree of precision by means of natural cubic splines, rational cubic splines or even linear interpolation. The main fields of application should be non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
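    The abstract does not reproduce the fuzzy-set algorithm itself, but the reduce-then-reconstruct idea can be illustrated with a classical split-based scheme (Ramer-Douglas-Peucker with vertical deviation), which likewise retains a subset of original points with their original values and bounds the linear-reconstruction error:

```python
def reduce_curve(xs, ys, eps):
    """Split-based reduction: retain the endpoints and, recursively, any
    interior sample whose vertical deviation from the chord of the current
    segment exceeds eps. Linear reconstruction from the retained points
    then deviates from the data by at most eps. Returns sorted indices."""
    keep = set()

    def split(i, j):
        keep.add(i)
        keep.add(j)
        if j <= i + 1:
            return
        worst, dmax = None, eps
        for k in range(i + 1, j):
            t = (xs[k] - xs[i]) / (xs[j] - xs[i])
            chord = ys[i] + t * (ys[j] - ys[i])
            if abs(ys[k] - chord) > dmax:
                worst, dmax = k, abs(ys[k] - chord)
        if worst is not None:
            split(i, worst)
            split(worst, j)

    split(0, len(xs) - 1)
    return sorted(keep)
```

    On a smooth test curve this retains a small fraction of the samples while keeping the piecewise-linear reconstruction within the prescribed tolerance; spline reconstruction from the same retained points would typically do better still.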

  10. BasinVis 1.0: A MATLAB®-based program for sedimentary basin subsidence analysis and visualization

    NASA Astrophysics Data System (ADS)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2016-06-01

    Stratigraphic and structural mapping is important to understand the internal structure of sedimentary basins. Subsidence analysis provides significant insights for basin evolution. We designed a new software package to process and visualize stratigraphic setting and subsidence evolution of sedimentary basins from well data. BasinVis 1.0 is implemented in MATLAB®, a multi-paradigm numerical computing environment, and employs two numerical methods: interpolation and subsidence analysis. Five different interpolation methods (linear, natural, cubic spline, Kriging, and thin-plate spline) are provided in this program for surface modeling. The subsidence analysis consists of decompaction and backstripping techniques. BasinVis 1.0 incorporates five main processing steps: (1) setup (study area and stratigraphic units), (2) loading well data, (3) stratigraphic setting visualization, (4) subsidence parameter input, and (5) subsidence analysis and visualization. For in-depth analysis, our software provides cross-section and dip-slip fault backstripping tools. The graphical user interface guides users through the workflow and provides tools to analyze and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created by using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options in MATLAB. We demonstrate all functions in a case study of Miocene sediments in the central Vienna Basin.

  11. Multivariate Spline Algorithms for CAGD

    NASA Technical Reports Server (NTRS)

    Boehm, W.

    1985-01-01

    Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can easily be subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline can be expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points when an additional knot is inserted.
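    In the univariate case, the Mansfield-like recursion mentioned above is the familiar Cox-de Boor recursion, which expresses an order-k B-spline as a combination of two order-(k-1) B-splines. A direct transcription (a standard formula, included here for illustration rather than taken from the paper):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the order-k (degree k-1) B-spline
    N_{i,k}(t) over the given knot sequence. Recursion bottoms out at
    order 1: the characteristic function of one knot interval."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    den = knots[i + k - 1] - knots[i]
    if den > 0:
        left = (t - knots[i]) / den * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    den = knots[i + k] - knots[i + 1]
    if den > 0:
        right = (knots[i + k] - t) / den * bspline_basis(i + 1, k - 1, t, knots)
    return left + right
```

    On interior spans the basis functions are nonnegative and sum to one (partition of unity); for example, the uniform cubic B-spline over knots 0..4 attains the well-known value 2/3 at its central knot.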

  12. Seismic Propagation in the Kuriles/Kamchatka Region

    DTIC Science & Technology

    1980-07-25

    model the final profile is well-represented by a spline interpolation. Figure 7 shows the sampling grid used to input velocity perturbations due to the...A modification of Cagniard's method for solving seismic pulse problems, Appl. Sci. Res. B, 8, p. 349, 1960. Fuchs, K. and G. Muller, Computation of

  13. Accuracy of parameterized proton range models; A comparison

    NASA Astrophysics Data System (ADS)

    Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.

    2018-03-01

    An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterization models for proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
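    The Bragg-Kleeman rule referenced above parameterizes range as R = αE^p. A toy fit through two table rows (the range values below are rounded, illustrative numbers for protons in water, not the paper's tables) shows both the convenience and the percent-level residual error that motivates spline interpolation of full tables instead:

```python
import math

# Illustrative (energy in MeV -> range in water, g/cm^2) values, rounded;
# not taken from the paper's range-energy tables.
RANGE_TABLE = {100.0: 7.72, 150.0: 15.8, 200.0: 26.0}

def bragg_kleeman_fit(e1, r1, e2, r2):
    """Fit the Bragg-Kleeman rule R = alpha * E**p exactly through two
    (energy, range) points via the log-log slope."""
    p = math.log(r2 / r1) / math.log(e2 / e1)
    alpha = r1 / e1 ** p
    return alpha, p

alpha, p = bragg_kleeman_fit(100.0, RANGE_TABLE[100.0],
                             200.0, RANGE_TABLE[200.0])
predicted_150 = alpha * 150.0 ** p  # interpolate the middle table row
```

    The fitted exponent comes out near the commonly quoted p ≈ 1.75 for water, and the 150 MeV prediction lands within about one percent of the tabulated value; a spline through many table rows reduces this residual further, consistent with the study's conclusion.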

  14. How to detect and reduce movement artifacts in near-infrared imaging using moving standard deviation and spline interpolation.

    PubMed

    Scholkmann, F; Spichtig, S; Muehlemann, T; Wolf, M

    2010-05-01

    Near-infrared imaging (NIRI) is a neuroimaging technique which enables us to non-invasively measure hemodynamic changes in the human brain. Since the technique is very sensitive, the movement of a subject can cause movement artifacts (MAs), which affect the signal quality and results to a high degree. No general method is yet available to reduce these MAs effectively. The aim was to develop a new MA reduction method. A method based on moving standard deviation and spline interpolation was developed. It enables the semi-automatic detection and reduction of MAs in the data. It was validated using simulated and real NIRI signals. The results show that a significant reduction of MAs and an increase in signal quality are achieved. The effectiveness and usability of the method is demonstrated by the improved detection of evoked hemodynamic responses. The present method can not only be used in the postprocessing of NIRI signals but also for other kinds of data containing artifacts, for example ECG or EEG signals.
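    A compact sketch of the detection step (moving standard deviation) with a linear bridge standing in for the paper's spline fit; the window half-width and threshold are illustrative choices, not the published parameters:

```python
import statistics

def detect_and_repair(signal, half_win=5, thresh=3.0):
    """Flag samples whose moving standard deviation exceeds `thresh`
    times the median moving standard deviation, then bridge each flagged
    run by linear interpolation between its clean neighbours (a linear
    stand-in for the spline interpolation used in the paper)."""
    n = len(signal)
    mstd = [statistics.pstdev(signal[max(0, i - half_win): i + half_win + 1])
            for i in range(n)]
    ref = statistics.median(mstd)
    flagged = {i for i in range(n) if mstd[i] > thresh * ref}
    repaired = list(signal)
    i = 0
    while i < n:
        if i in flagged:
            start = i
            while i in flagged:
                i += 1
            lo, hi = start - 1, i  # clean neighbours of the flagged run
            a = repaired[lo] if lo >= 0 else (repaired[hi] if hi < n else 0.0)
            b = repaired[hi] if hi < n else a
            for k in range(start, i):
                t = (k - lo) / (hi - lo)
                repaired[k] = a + t * (b - a)
        else:
            i += 1
    return repaired, sorted(flagged)
```

    On a synthetic signal with a large movement spike, the flagged span covers the spike and the repaired trace returns close to the underlying waveform.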

  15. Algebraic grid generation using tensor product B-splines. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Saunders, B. V.

    1985-01-01

    Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This thesis describes the development of an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed, including the concept of the degree of a mapping and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation-diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation-diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated using the algorithm are presented.

  16. Spatial variability of soil carbon, pH, available phosphorous and potassium in organic farm located in Mediterranean Croatia

    NASA Astrophysics Data System (ADS)

    Bogunović, Igor; Pereira, Paulo; Šeput, Miranda

    2016-04-01

    Soil organic carbon (SOC), pH, available phosphorus (P), and potassium (K) are some of the most important factors in soil fertility. These soil parameters are highly variable in space and time, with implications for crop production. The aim of this work is to study the spatial variability of SOC, pH, P and K in an organic farm located in the river Rasa valley (Croatia). A regular grid (100 x 100 m) was designed and 182 samples were collected on silty clay loam soil. P, K and SOC showed moderate heterogeneity, with coefficients of variation (CV) of 21.6%, 32.8% and 51.9%, respectively. Soil pH recorded low spatial variability, with a CV of 1.5%. Soil pH, P and SOC did not follow a normal distribution; only after a Box-Cox transformation did the data respect the normality requirements. Directional exponential models were the best fitted and were used to describe spatial autocorrelation. Soil pH, P and SOC showed strong spatial dependence, with nugget-to-sill ratios of 13.78%, 0.00% and 20.29%, respectively; only K recorded moderate spatial dependence. Semivariogram ranges indicate that a future sampling interval could be 150-200 m in order to reduce sampling costs. Fourteen different interpolation models for mapping soil properties were tested. The method with the lowest Root Mean Square Error was taken as the most appropriate to map each variable. The results showed that radial basis function models (Spline with Tension and Completely Regularized Spline) were the best predictors for P and K, while Thin Plate Spline and inverse distance weighting models were the least accurate. The best interpolator for pH and SOC was the local polynomial with a power of 1, while the least accurate was Thin Plate Spline. According to the soil nutrient maps, the investigated area records a very rich supply of K, while the P supply was insufficient over the largest part of the area.
Soil pH maps showed a mostly neutral reaction, while individual areas of alkaline soil indicate possible seawater penetration and salt accumulation in the soil profile. Future research should focus on spatial patterns of soil pH, electrical conductivity and sodium adsorption ratio. Keywords: geostatistics, semivariogram, interpolation models, soil chemical properties
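    Of the interpolators compared above, inverse distance weighting is the simplest to state; a generic sketch (illustrative, not the GIS implementation used in the study):

```python
def idw(samples, x, y, power=2.0):
    """Inverse distance weighted estimate at (x, y) from a list of
    (xi, yi, value) samples; returns a sample's value exactly when
    (x, y) coincides with that sample's location."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v  # exact interpolation at sample locations
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den
```

    IDW estimates are convex combinations of the sample values, so they can never overshoot or undershoot the data range; this flattening behaviour is one reason distance-based weighting can rank below radial basis function splines in cross-validation, as observed here for P and K.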

  17. Tungsten anode spectral model using interpolating cubic splines: unfiltered x-ray spectra from 20 kV to 640 kV.

    PubMed

    Hernandez, Andrew M; Boone, John M

    2014-04-01

    Monte Carlo methods were used to generate lightly filtered high resolution x-ray spectra spanning from 20 kV to 640 kV. X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R(2)) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, "Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector," Phys. Med. Biol. 24, 505-517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). 
The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9 keV at 20 kV and 169 keV at 640 kV. Ranging from 20 kV to 640 kV, 621 x-ray spectra were produced and are available at 1 kV tube potential intervals. The spectra are tabulated at 1 keV intervals. TASMICS spectra were shown to be largely equivalent to published spectral models and are available in spreadsheet format for interested users by emailing the corresponding author (JMB). © 2014 American Association of Physicists in Medicine.

  18. Tungsten anode spectral model using interpolating cubic splines: Unfiltered x-ray spectra from 20 kV to 640 kV

    PubMed Central

    Hernandez, Andrew M.; Boone, John M.

    2014-01-01

    Purpose: Monte Carlo methods were used to generate lightly filtered high resolution x-ray spectra spanning from 20 kV to 640 kV. Methods: X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Results: Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R2) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, “Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector,” Phys. Med. Biol. 24, 505–517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). 
The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9 keV at 20 kV and 169 keV at 640 kV. Conclusions: Ranging from 20 kV to 640 kV, 621 x-ray spectra were produced and are available at 1 kV tube potential intervals. The spectra are tabulated at 1 keV intervals. TASMICS spectra were shown to be largely equivalent to published spectral models and are available in spreadsheet format for interested users by emailing the corresponding author (JMB). PMID:24694149


  20. Effect of data gaps on correlation dimension computed from light curves of variable stars

    NASA Astrophysics Data System (ADS)

    George, Sandip V.; Ambika, G.; Misra, R.

    2015-11-01

    Observational data, especially astrophysical data, are often limited by gaps that arise from a lack of observations for a variety of reasons. Such inadvertent gaps are usually smoothed over using interpolation techniques. However, these smoothing techniques can introduce artificial effects, especially when non-linear analysis is undertaken. We investigate how gaps can affect the computed values of the correlation dimension of a system, without using any interpolation. For this we introduce gaps artificially in synthetic data derived from standard chaotic systems, like the Rössler and Lorenz systems, with the frequency of occurrence and size of the missing data drawn from two Gaussian distributions. Then we study the changes in correlation dimension with changes in the distributions of the position and size of gaps. We find that for a considerable range of mean gap frequency and size, the value of the correlation dimension is not significantly affected, indicating that in such specific cases the calculated values can still be reliable and acceptable. Thus our study introduces a method of checking the reliability of computed correlation dimension values by calculating the distribution of gaps with respect to size and position. This is illustrated for data from the light curves of three variable stars, R Scuti, U Monocerotis and SU Tauri. We also demonstrate how cubic spline interpolation can cause a time series of Gaussian noise with missing data to be misinterpreted as chaotic in origin. This is demonstrated for the non-chaotic light curve of the variable star SS Cygni, which gives a saturated D2 value when interpolated using a cubic spline. In addition, we find that a careful choice of binning, in addition to reducing noise, can help shift the gap distribution into the range where D2 values are reliable.

  1. Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea

    NASA Astrophysics Data System (ADS)

    Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan

    2016-04-01

    Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty four hours. Higher temporal resolution is generally required for some applications such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values of the finer sampling intervals from the available data. In this study, the major aim is to assess temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over Aegean Sea between 1979 and 2010 having a spatial resolution of approximately 0.3 degrees are analyzed from the National Centers for Environmental Prediction (NCEP) The Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data of two in-situ measurement stations between June, 2014 and June, 2015 are considered to understand effect of dataset properties on the uncertainty generated by interpolation technique. In this study, nine statistical interpolation techniques are selected as w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, piecewise cubic Hermite interpolating polynomials. Original data is down sampled to 6 hours (i.e. wind speeds at 0th, 6th, 12th and 18th hours of each day are selected), then 6 hourly data is temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using nine interpolation technique, and finally original data is compared with the temporally downscaled data. 
A penalty point system based on the coefficient of variation of the root mean square error, the normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. The error originating from the temporal downscaling technique is thus quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e. reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data, for both reanalysis data and in-situ measurements over the Aegean Sea. However, cubic spline interpolation is observed to perform much better along the Aegean coastline, where the data points are close to land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR and by the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
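    The downsample-then-reconstruct experiment described above can be sketched with two of the nine techniques (cubic splines and PCHIP) on a synthetic hourly series; the signal model, noise level and error metric here are illustrative stand-ins, not the study's actual setup:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Synthetic hourly "wind speed" series (an illustrative stand-in for reanalysis data)
t = np.arange(240)                                   # 10 days of hourly steps
rng = np.random.default_rng(0)
wind = 8 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)

# Downsample to 6-hourly values (0th, 6th, 12th, 18th hours of each day)
t6, wind6 = t[::6], wind[::6]

# Temporally downscale back to hourly with two of the candidate techniques
cubic = CubicSpline(t6, wind6)(t)
pchip = PchipInterpolator(t6, wind6)(t)

def rmse(est):
    """Root mean square error against the original hourly series."""
    return float(np.sqrt(np.mean((est - wind) ** 2)))

print(f"cubic spline RMSE: {rmse(cubic):.3f}")
print(f"PCHIP RMSE:        {rmse(pchip):.3f}")
```

    Both interpolants reproduce the 6-hourly samples exactly; the RMSE therefore measures only how well the between-sample hours are recovered.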

  2. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area of each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, when the interpolation techniques were applied to the experimental data, most of the interpolation methods did not produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, performance was improved relative to the PARAFAC results obtained from the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
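    The Gaussian-fitting approach, fitting a peak model to the coarsely sampled first dimension and then evaluating it on a fine grid, can be sketched with SciPy's curve_fit; the peak parameters and sampling interval below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, mu, sigma):
    """Gaussian peak model for the undersampled first chromatographic dimension."""
    return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

# Coarse first-dimension samples of a noise-free peak (invented parameters)
t_sampled = np.arange(0.0, 10.0, 1.5)
y_sampled = gauss(t_sampled, 1.0, 5.2, 1.1)

# Fit the model, then "interpolate" by evaluating it on a fine retention-time grid
popt, _ = curve_fit(gauss, t_sampled, y_sampled, p0=[1.0, 5.0, 1.0])
t_fine = np.linspace(0.0, 10.0, 101)
y_fine = gauss(t_fine, *popt)
print("recovered (a, mu, sigma):", np.round(popt, 3))
```

    Unlike spline-type methods, this imposes a peak shape, which is why it can outperform them when the underlying peak really is near-Gaussian.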

  3. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex-splines-based average-slope measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; finally, the object image is obtained by non-blind deconvolution. Simulated experiments at different turbulence strengths show that our method yields superior image restoration and noise rejection, especially when extracting multidirectional phase derivatives.

  4. Use of shape-preserving interpolation methods in surface modeling

    NASA Technical Reports Server (NTRS)

    Fritsch, F. N.

    1984-01-01

    In many large-scale scientific computations, it is necessary to use surface models based on information provided at only a finite number of points (rather than determined everywhere via an analytic formula). As an example, an equation of state (EOS) table may provide values of pressure as a function of temperature and density for a particular material. These values, while known quite accurately, are typically known only on a rectangular (but generally quite nonuniform) mesh in (T,d)-space. Thus interpolation methods are necessary to completely determine the EOS surface. The most primitive EOS interpolation scheme is bilinear interpolation. This has the advantages of depending only on local information, so that changes in data remote from a mesh element have no effect on the surface over the element, and of preserving shape information, such as monotonicity. Most scientific calculations, however, require greater smoothness. Standard higher-order interpolation schemes, such as Coons patches or bicubic splines, while providing the requisite smoothness, tend to produce surfaces that are not physically reasonable. This means that the interpolant may have bumps or wiggles that are not supported by the data. The mathematical quantification of ideas such as physically reasonable and visually pleasing is examined.
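    The overshoot problem described above is easy to reproduce: on monotone data with an abrupt change (loosely analogous to an EOS isotherm), a standard cubic spline develops wiggles, while a shape-preserving Fritsch-Carlson interpolant (PCHIP, available in SciPy) stays monotone. The data below are invented for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone data with a sharp transition; invented for illustration
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 9.9, 10.0])

xf = np.linspace(0.0, 5.0, 501)
spline = CubicSpline(x, y)(xf)          # C2 cubic spline: smooth, but overshoots
pchip = PchipInterpolator(x, y)(xf)     # shape-preserving: monotone, C1 only

print("cubic spline monotone:", bool(np.all(np.diff(spline) >= 0)))
print("PCHIP monotone:       ", bool(np.all(np.diff(pchip) >= -1e-12)))
```

    The trade-off is exactly the one discussed in the abstract: the spline buys C2 smoothness at the cost of bumps not supported by the data, while PCHIP preserves monotonicity but is only once continuously differentiable.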

  5. Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan

    NASA Astrophysics Data System (ADS)

    Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung

    2010-08-01

    Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. The changes in magnitude and the space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate space-time monthly precipitation in the monsoon periods during 1974-2000; 27 years of monthly average precipitation data are obtained from 51 stations in Pakistan. The results of the transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation by using cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method is more accurate than the non-transformed hierarchical Bayesian method.

  6. Calibration method of microgrid polarimeters with image interpolation.

    PubMed

    Chen, Zhenyue; Wang, Xia; Liang, Rongguang

    2015-02-10

    Microgrid polarimeters have significant advantages over conventional polarimeters because of their snapshot nature and lack of moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera under uniform illumination so that the response of the sensor is uniform over the entire field of view, without IFOV error. A spline interpolation method is then implemented to minimize the IFOV error. Experimental results show that the proposed method can effectively minimize the FPN and PRNU.

  7. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    PubMed

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater-level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R²) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease of farmland area and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.

  8. Mesh-free data transfer algorithms for partitioned multiphysics problems: Conservation, accuracy, and parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slattery, Stuart R.

    In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.

  9. Optimization and comparison of three spatial interpolation methods for electromagnetic levels in the AM band within an urban area.

    PubMed

    Rufo, Montaña; Antolín, Alicia; Paniagua, Jesús M; Jiménez, Antonio

    2018-04-01

    A comparative study was made of three methods of interpolation, inverse distance weighting (IDW), spline and ordinary kriging, after optimization of their characteristic parameters. These interpolation methods were used to represent the electric field levels for three emission frequencies (774 kHz, 900 kHz, and 1107 kHz) and for the electrical stimulation quotient, Q_E, characteristic of complex electromagnetic environments. Measurements were made with a spectrum analyser in a village in the vicinity of medium-wave radio broadcasting antennas. The accuracy of the models was quantified by comparing their predictions with levels measured at control points not used to generate the models. The results showed that optimizing the characteristic parameters of each interpolation method allows any of them to be used. However, the best results in terms of the regression coefficient between each model's predictions and the actual control point field measurements were obtained with the IDW method. Copyright © 2018 Elsevier Inc. All rights reserved.
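    Optimizing the characteristic parameter of IDW (its distance power) can be done by leave-one-out cross-validation against the control points, as sketched below; the sampled field and station locations are synthetic, and this is a minimal illustration rather than the paper's actual procedure:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power):
    """Inverse distance weighted prediction (a minimal sketch)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)              # guard against coincident points
    w = 1.0 / d ** power
    return (w @ z_known) / w.sum(axis=1)

def loocv_rmse(xy, z, power):
    """Leave-one-out cross-validation error for a given IDW power."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        pred = idw(xy[mask], z[mask], xy[i:i + 1], power)[0]
        errs.append(pred - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Synthetic field-strength-like surface sampled at random points (illustrative only)
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, size=(60, 2))
z = np.sin(3 * xy[:, 0]) + np.cos(2 * xy[:, 1])

powers = [1.0, 2.0, 3.0, 4.0]
scores = {p: loocv_rmse(xy, z, p) for p in powers}
best = min(scores, key=scores.get)
print("best IDW power:", best)
```

    The same cross-validation loop works for spline or kriging predictors, which is what makes the three methods directly comparable after tuning.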

  10. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are utilised here as interpolation points. As an extension of linear interpolation, the algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, the authors obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively. PMID:27382478

  11. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    PubMed

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are utilised here as interpolation points. As an extension of linear interpolation, the algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, the authors obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively.
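    The core idea, interpolating a piecewise linear baseline through detected isoelectric points and subtracting it, can be sketched as follows; the sampling rate, signal model and detection indices are invented for illustration (the Letter's actual isoelectric-point detection step is assumed to have already run):

```python
import numpy as np

fs = 250                                        # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)                    # 10 s of samples
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)       # 0.1 Hz baseline wander
ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t) + drift # toy "ECG" riding on the drift

# Pretend three isoelectric samples per (1 s) heartbeat were detected,
# and that the signal value there equals the baseline
idx = np.arange(0, t.size, fs // 3)
baseline = np.interp(t, t[idx], drift[idx])     # piecewise linear baseline

corrected = ecg - baseline
residual = corrected - (ecg - drift)            # leftover interpolation error
print("max residual:", float(np.max(np.abs(residual))))
```

    With three interpolation points per heartbeat, the residual of a slow 0.1 Hz wander is tiny, which is the regime the Letter's error figures describe.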

  12. Applying Multivariate Adaptive Splines to Identify Genes With Expressions Varying After Diagnosis in Microarray Experiments.

    PubMed

    Duan, Fenghai; Xu, Ye

    2017-01-01

    To analyze a microarray experiment to identify genes with expressions varying after the diagnosis of breast cancer. A total of 44 928 probe sets in an Affymetrix microarray data set, publicly available on Gene Expression Omnibus, from 249 patients with breast cancer were analyzed by nonparametric multivariate adaptive splines. The identified genes with turning points were then grouped by K-means clustering, and their network relationships were subsequently analyzed by Ingenuity Pathway Analysis. In total, 1640 probe sets (genes) were reliably identified as having turning points along the age at diagnosis in their expression profiles, of which 927 showed lower expression after their turning points and 713 showed higher expression. K-means clustered them into 3 groups, with turning points centered at ages 54, 62.5, and 72, respectively. The pathway analysis showed that the identified genes were actively involved in various cancer-related functions and networks. In this article, we applied the nonparametric multivariate adaptive splines method to publicly available gene expression data and successfully identified genes with expressions varying before and after breast cancer diagnosis.
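    The turning-point idea behind multivariate adaptive splines can be illustrated with a single MARS-style hinge basis, max(0, age − c), fitted by least squares over a grid of candidate knots; the expression data below are simulated, and a real MARS fit would search adaptively over many variables and basis functions:

```python
import numpy as np

# Simulated expression vs. age at diagnosis, with a true turning point at 62.5
rng = np.random.default_rng(3)
age = rng.uniform(30, 85, 300)
expr = 2.0 + 0.08 * np.maximum(0, age - 62.5) + rng.normal(0, 0.05, age.size)

def sse_for_knot(c):
    """Sum of squared errors of the fit expr ~ b0 + b1 * max(0, age - c)."""
    X = np.column_stack([np.ones_like(age), np.maximum(0, age - c)])
    beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
    return float(np.sum((expr - X @ beta) ** 2))

knots = np.linspace(35, 80, 181)
best = float(knots[np.argmin([sse_for_knot(c) for c in knots])])
print("estimated turning point:", round(best, 2))
```

    The knot minimizing the residual error recovers the simulated turning point; in the study this per-gene estimate is what gets clustered by K-means.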

  13. [Spatial distribution prediction of surface soil Pb in a battery contaminated site].

    PubMed

    Liu, Geng; Niu, Jun-Jie; Zhang, Chao; Zhao, Xin; Guo, Guan-Lin

    2014-12-01

    In order to enhance the reliability of risk estimation and to improve the accuracy of pollution scope determination in a battery-contaminated site whose characteristic soil pollutant is Pb, four spatial interpolation models, including a combination prediction model (OK(LG) + TIN), a kriging model (OK(BC)), an inverse distance weighting model (IDW), and a spline model, were employed to compare their effects on the spatial distribution and pollution assessment of soil Pb. The results showed that the Pb concentration varied significantly and the data were severely skewed. The variation coefficient of the site was higher in the local region. OK(LG) + TIN was found to be more accurate than the other three models in predicting the actual pollution situation of the contaminated site. The prediction accuracy of the other models was lower, owing to differences in the models' underlying principles and the characteristics of the data. The interpolation results of OK(BC), IDW and Spline could not reflect the detailed characteristics of the seriously contaminated areas, and were not suitable for mapping and spatial distribution prediction of soil Pb in this site. This study provides useful references for defining the remediation boundary and making remediation decisions for contaminated sites.

  14. Noise correction on LANDSAT images using a spline-like algorithm

    NASA Technical Reports Server (NTRS)

    Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.

    1985-01-01

    Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize it, many solutions are used, for instance, taking the average (simple or weighted) of the neighboring vertical points. In the case of high interference (more than one adjacent line lost), the method of averages may not suit the desired purpose. The proposed solution is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the need to solve a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by the data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.

  15. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  16. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

    Various global and regional methods have been proposed for the modeling of ionospheric vertical total electron content (VTEC). The global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as the differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters, which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines, a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for the best-fitting B-splines to the data at each scale. The VTEC maps generated by the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using a local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
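    A tensor product of univariate cubic B-splines on a regular grid, the regional building block mentioned above, is available in SciPy as RectBivariateSpline; the latitude/longitude grid and "VTEC" surface here are synthetic stand-ins:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic "VTEC" surface on a regular latitude/longitude grid (TECU-like units)
lat = np.linspace(30.0, 45.0, 16)
lon = np.linspace(20.0, 40.0, 21)
LAT, LON = np.meshgrid(lat, lon, indexing="ij")
vtec = 20 + 5 * np.sin(4 * np.radians(LAT)) * np.cos(3 * np.radians(LON))

# Tensor product of cubic B-splines in each coordinate (s=0: pure interpolation)
model = RectBivariateSpline(lat, lon, vtec, kx=3, ky=3, s=0)

pred = float(model(37.5, 30.0)[0, 0])
true = 20 + 5 * np.sin(4 * np.radians(37.5)) * np.cos(3 * np.radians(30.0))
print(f"spline: {pred:.4f}  analytic: {true:.4f}")
```

    SP-BMARS goes beyond this fixed tensor-product grid by choosing the B-spline basis functions adaptively and estimating the DCBs jointly, which a plain gridded fit cannot do.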

  17. Fuzzy topological digital space and digital fuzzy spline of electroencephalography during epileptic seizures

    NASA Astrophysics Data System (ADS)

    Shah, Mazlina Muzafar; Wahab, Abdul Fatah

    2017-08-01

    Epilepsy occurs because of a temporary electrical disturbance in a group of brain cells (neurons). The recording of the electrical signals that come from the human brain, collected from the scalp, is called electroencephalography (EEG). The EEG is then considered in digital format and in fuzzy form, making it fuzzy digital space data. The purpose of this research is to identify the area (curve and surface) in fuzzy digital space affected by an epileptic seizure in the patient's brain. The main focus of this research is to generalize the fuzzy topological digital space, its definition and basic operations, as well as its properties, by using digital fuzzy sets and their operations. Using the fuzzy digital space, the theory of digital fuzzy splines can be introduced to replace the grid data used previously, in order to obtain better results. As a result, the EEG data can be represented as a fuzzy topological digital space, and this type of data can be used to interpolate the digital fuzzy spline.

  18. Direct numerical simulation of incompressible axisymmetric flows

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick

    1994-01-01

    In the present work, we propose to conduct direct numerical simulations (DNS) of incompressible turbulent axisymmetric jets and wakes. The objectives of the study are to understand the fundamental behavior of axisymmetric jets and wakes, which are perhaps the most technologically relevant free shear flows (e.g. combustor injectors, propulsion jets). Among the data to be generated are various statistical quantities of importance in turbulence modeling, such as the mean velocity, turbulent stresses, and all the terms in the Reynolds-stress balance equations. In addition, we will be interested in the evolution of large-scale structures that are common in free shear flows. The axisymmetric jet or wake is also a good problem on which to try the newly developed B-spline numerical method. Using B-splines as interpolating functions in the non-periodic direction offers many advantages. B-splines have local support, which leads to sparse matrices that can be efficiently stored and solved. They also offer spectral-like accuracy and are C^(O-1) continuous, where O is the order of the spline used; this means that derivatives of the velocity, such as the vorticity, are smoothly and accurately represented. For purposes of validation against existing results, the present code will also be able to simulate internal flows (ones that require a no-slip boundary condition). Implementation of the no-slip boundary condition is trivial in the context of B-splines.

  19. Multivariate Epi-splines and Evolving Function Identification Problems

    DTIC Science & Technology

    2015-04-15

    such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the...previous study [30] dealt with compact intervals of IR. Splines are intimately tied to optimization problems through their variational theory pioneered...approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, and financial curve construction

  20. Fine-granularity inference and estimations to network traffic for SDN.

    PubMed

    Jiang, Dingde; Huo, Liuwei; Li, Ya

    2018-01-01

    An end-to-end network traffic matrix is of significant help for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end network traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is prohibitively challenging. This paper investigates how to estimate and recover the end-to-end network traffic matrix at fine time granularity from sampled traffic traces, which is a hard inverse problem. Unlike previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, cubic spline interpolation is used to obtain smooth reconstruction values. To attain accurate end-to-end network traffic at fine time granularity, we perform a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective.
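    The combining step, fusing two reconstructions by a weighted geometric average, can be sketched on a toy traffic series; purely for illustration, the fractal interpolation component is replaced here by plain linear interpolation, and the weight is fixed rather than tuned:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy traffic volume series (always positive, so the geometric mean is defined)
t = np.arange(45)
traffic = 100 + 30 * np.sin(2 * np.pi * t / 24) + 5 * np.cos(2 * np.pi * t / 6)

# Coarse samples every 4 steps, mimicking low-granularity measurements
t_coarse, coarse = t[::4], traffic[::4]

est_a = np.interp(t, t_coarse, coarse)       # stand-in for the fractal component
est_b = CubicSpline(t_coarse, coarse)(t)     # smooth cubic-spline reconstruction

w = 0.5                                      # fixed weight, illustrative only
fused = est_a ** w * est_b ** (1 - w)        # weighted geometric average
print("max abs error:", float(np.max(np.abs(fused - traffic))))
```

    The geometric average always lies between its two inputs at each time step, so the fusion inherits the node-exactness of both interpolants while damping each one's individual artifacts.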

  1. Fine-granularity inference and estimations to network traffic for SDN

    PubMed Central

    Huo, Liuwei; Li, Ya

    2018-01-01

    An end-to-end network traffic matrix is of significant help for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end network traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is prohibitively challenging. This paper investigates how to estimate and recover the end-to-end network traffic matrix at fine time granularity from sampled traffic traces, which is a hard inverse problem. Unlike previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, cubic spline interpolation is used to obtain smooth reconstruction values. To attain accurate end-to-end network traffic at fine time granularity, we perform a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective. PMID:29718913

  2. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze the bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as multiquadric radial basis functions and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent the architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
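    The envelope-surface step, interpolating scattered extrema with a multiquadric radial basis function, maps directly onto SciPy's RBFInterpolator; the scattered "extrema" below are synthetic and the shape parameter epsilon is chosen arbitrarily:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered "local maxima" of an image patch (synthetic stand-ins)
rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, size=(40, 2))
vals = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1])

# Multiquadric RBF envelope surface; epsilon is the shape parameter (assumed 1.0)
rbf = RBFInterpolator(pts, vals, kernel="multiquadric", epsilon=1.0)

# Evaluate on a regular grid, as in the surface-interpolation step of BEMD
gx, gy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = rbf(grid).reshape(32, 32)

print("max error at the extrema:", float(np.max(np.abs(rbf(pts) - vals))))
```

    In full BEMD, this envelope surface (and its minima counterpart) would be averaged and subtracted iteratively to sift out each intrinsic mode function.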

  3. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

    A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).
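    The role of the 2-D cubic spline in such a model can be illustrated with gridded data: tabulated bias-point measurements are splined so that a harmonic balance iteration can query the device characteristic at arbitrary bias. The I-V expression and bias grid below are invented stand-ins for measured DC data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical DC characteristic sampled on a coarse (Vgs, Vds) bias grid.
vgs = np.linspace(-1.0, 0.0, 6)
vds = np.linspace(0.0, 2.0, 9)
ids = np.array([[(vg + 1.2) ** 2 * np.tanh(3 * vd) for vd in vds]
                for vg in vgs])

# 2-D cubic spline through the tabulated data (kx = ky = 3).
spline = RectBivariateSpline(vgs, vds, ids, kx=3, ky=3)

# Evaluate the drain current at an off-grid bias point, as a harmonic
# balance solver would at each iteration of the waveform update.
i_mid = spline(-0.45, 1.1)[0, 0]
```

    Because the spline is built once from a limited measurement set, repeated evaluations inside the harmonic balance loop stay cheap.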

  4. Mesh-free data transfer algorithms for partitioned multiphysics problems: Conservation, accuracy, and parallelism

    DOE PAGES

    Slattery, Stuart R.

    2015-12-02

    In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary, with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
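    A minimal sketch of the mesh-free transfer idea follows, with scipy's `RBFInterpolator` and its `neighbors` restriction standing in for the compactly supported kernels and parallel radius search described in the record. The two point clouds and the field are invented:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Two non-matching point clouds: a source mesh carrying a field and a
# target mesh that needs it (both invented for illustration).
rng = np.random.default_rng(6)
src = rng.uniform(0.0, 1.0, size=(200, 3))
tgt = rng.uniform(0.0, 1.0, size=(150, 3))
field_src = np.sin(np.pi * src[:, 0]) * src[:, 1]

# Mesh-free transfer by radial basis interpolation; the `neighbors`
# restriction keeps each evaluation local, a serial stand-in for the
# compact support used in the parallel algorithms.
xfer = RBFInterpolator(src, field_src, kernel="thin_plate_spline",
                       neighbors=20)
field_tgt = xfer(tgt)
```

    No mesh connectivity is ever consulted, which is what makes the approach attractive for partitioned solvers with incompatible discretizations.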

  5. An Interpolation Approach to Optimal Trajectory Planning for Helicopter Unmanned Aerial Vehicles

    DTIC Science & Technology

    2012-06-01

    Armament Data Line; DOF, Degree of Freedom; PS, Pseudospectral; LGL, Legendre-Gauss-Lobatto quadrature nodes; ODE, Ordinary Differential Equation ... low-order polynomials patched together in such a way that the resulting trajectory has several continuous derivatives at all points. In [7], Murray ... claims that splines are ideal for optimal control problems because each segment of the spline's piecewise polynomials approximates the trajectory
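    The spline property claimed in the snippet, low-order polynomial segments patched so that several derivatives stay continuous, is easy to verify numerically. The waypoints below are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 1-D position waypoints for a trajectory (time, position).
t = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
pos = np.array([0.0, 2.0, 1.0, 3.5, 3.0])

# A cubic spline patches low-order polynomials together so that
# position, velocity and acceleration stay continuous at every knot.
traj = CubicSpline(t, pos)
vel = traj.derivative(1)
acc = traj.derivative(2)
```

    Each polynomial piece can then serve as a local approximation of the optimal trajectory between collocation points.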

  6. Liquid Annular Seal Research

    NASA Technical Reports Server (NTRS)

    Palazzolo, Alan B.; Venkataraman, Balaji; Padavala, Sathya S.; Ryan, Steve; Vallely, Pat; Funston, Kerry

    1996-01-01

    This paper highlights the accomplishments of a joint effort between NASA Marshall Space Flight Center and Texas A&M University to develop accurate seal analysis software for use in rocket turbopump design, design audits and troubleshooting. Results for an arbitrary clearance profile, transient simulation, a thermal effects solution and a flexible seal wall model are presented. A new solution for eccentric seals based on cubic spline interpolation and ordinary differential equation integration is also presented.

  7. Experimental comparison of landmark-based methods for 3D elastic registration of pre- and postoperative liver CT data

    NASA Astrophysics Data System (ADS)

    Lange, Thomas; Wörz, Stefan; Rohr, Karl; Schlag, Peter M.

    2009-02-01

    The qualitative and quantitative comparison of pre- and postoperative image data is an important means to validate surgical procedures, in particular if computer-assisted planning and/or navigation is performed. Due to deformations after surgery, partially caused by the removal of tissue, a non-rigid registration scheme is a prerequisite for a precise comparison. Interactive landmark-based schemes are a suitable approach if high accuracy and reliability are difficult to achieve by automatic registration approaches. Incorporation of a priori knowledge about the anatomical structures to be registered may help to reduce interaction time and improve accuracy. Concerning pre- and postoperative CT data of oncological liver resections, the intrahepatic vessels are suitable anatomical structures. In addition to using branching landmarks for registration, we here introduce quasi landmarks at vessel segments with high localization precision perpendicular to the vessels and low precision along the vessels. A comparison of interpolating thin-plate splines (TPS), interpolating Gaussian elastic body splines (GEBS) and approximating GEBS on landmarks at vessel branchings, as well as approximating GEBS on the introduced vessel segment landmarks, is performed. It turns out that the segment landmarks provide registration accuracies as good as branching landmarks and can improve accuracy if combined with branching landmarks. For a low number of landmarks, segment landmarks are even superior.
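    The interpolating thin-plate spline step can be sketched as follows, with invented 3-D landmark coordinates and scipy's TPS kernel standing in for the authors' implementation (the GEBS variants are not shown):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 3-D landmark pairs: positions in the preoperative image
# and corresponding positions in the postoperative image.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, size=(12, 3))
dst = src + rng.normal(0.0, 2.0, size=(12, 3))   # small deformation

# Interpolating thin-plate spline on the displacement field
# (one model, vector-valued output for the three components).
tps = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")

def warp(points):
    """Map preoperative points into the postoperative frame."""
    return points + tps(points)
```

    Because the spline interpolates, every source landmark is mapped exactly onto its postoperative counterpart; approximating variants trade this exactness for robustness to landmark localization error.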

  8. General-Purpose Software For Computer Graphics

    NASA Technical Reports Server (NTRS)

    Rogers, Joseph E.

    1992-01-01

    NASA Device Independent Graphics Library (NASADIG) is a general-purpose computer-graphics package for computer-based engineering and management applications which gives the opportunity to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.

  9. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    PubMed

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.

  10. MRI Non-Uniformity Correction Through Interleaved Bias Estimation and B-Spline Deformation with a Template*

    PubMed Central

    Fletcher, E.; Carmichael, O.; DeCarli, C.

    2013-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843

  11. Shape Control in Multivariate Barycentric Rational Interpolation

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa Thang; Cuyt, Annie; Celis, Oliver Salazar

    2010-09-01

    The most stable formula for a rational interpolant for use on a finite interval is the barycentric form [1, 2]. A simple choice of the barycentric weights ensures the absence of (unwanted) poles on the real line [3]. In [4] we indicate that a more refined choice of the weights in barycentric rational interpolation can guarantee comonotonicity and coconvexity of the rational interpolant in addition to a pole-free region of interest. In this presentation we generalize the above to the multivariate case. We use a product-like form of univariate barycentric rational interpolants and indicate how the location of the poles and the shape of the function can be controlled. This functionality is of importance in the construction of mathematical models that need to express a certain trend, such as probability distributions, economics, population dynamics, tumor growth models, etc.
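    The univariate building block of such interpolants can be made concrete. One simple pole-avoiding choice of this kind is Berrut's sign-alternating weights w_i = (-1)^i, sketched below on an invented Runge-type test function (this is an illustration of the barycentric form, not the refined shape-controlling weights of the record):

```python
import numpy as np

def berrut_rational(x_nodes, f_nodes, x):
    """Barycentric rational interpolant with Berrut's weights
    w_i = (-1)**i, which has no poles on the real line."""
    x_nodes = np.asarray(x_nodes, dtype=float)
    f_nodes = np.asarray(f_nodes, dtype=float)
    x = np.asarray(x, dtype=float)
    w = (-1.0) ** np.arange(len(x_nodes))
    num = np.zeros_like(x)
    den = np.zeros_like(x)
    exact = np.full(x.shape, -1)
    for i in range(len(x_nodes)):
        diff = x - x_nodes[i]
        hit = diff == 0.0
        exact[hit] = i            # remember exact node matches
        diff[hit] = 1.0           # avoid 0/0; fixed below
        num += w[i] * f_nodes[i] / diff
        den += w[i] / diff
    out = num / den
    out[exact >= 0] = f_nodes[exact[exact >= 0]]
    return out

# Runge-type function on equispaced nodes, where high-degree polynomial
# interpolation would oscillate badly.
x_nodes = np.linspace(0.0, 1.0, 7)
f_nodes = 1.0 / (1.0 + 25.0 * (x_nodes - 0.5) ** 2)
y_dense = berrut_rational(x_nodes, f_nodes, np.linspace(0.0, 1.0, 101))
```

    A multivariate interpolant of the product-like form mentioned above applies this formula coordinate by coordinate on a grid of nodes.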

  12. Methodology for Image-Based Reconstruction of Ventricular Geometry for Patient-Specific Modeling of Cardiac Electrophysiology

    PubMed Central

    Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.

    2014-01-01

    Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771

  13. Geographic patterns and dynamics of Alaskan climate interpolated from a sparse station record

    USGS Publications Warehouse

    Fleming, Michael D.; Chapin, F. Stuart; Cramer, W.; Hufford, Gary L.; Serreze, Mark C.

    2000-01-01

    Data from a sparse network of climate stations in Alaska were interpolated to provide 1-km resolution maps of mean monthly temperature and precipitation, variables that are required at high spatial resolution for input into regional models of ecological processes and resource management. The interpolation model is based on thin-plate smoothing splines, which use the spatial data along with a digital elevation model to incorporate local topography. The model provides maps that are consistent with regional climatology and with patterns recognized by experienced weather forecasters. The broad patterns of Alaskan climate are well represented and include latitudinal and altitudinal trends in temperature and precipitation and gradients in continentality. Variations within these broad patterns reflect both the weakening and reduction in frequency of low-pressure centres in their eastward movement across southern Alaska during the summer, and the shift of the storm tracks into central and northern Alaska in late summer. Not surprisingly, apparent artifacts of the interpolated climate occur primarily in regions with few or no stations. The interpolation model did not accurately represent low-level winter temperature inversions that occur within large valleys and basins. Along with well-recognized climate patterns, the model captures local topographic effects that would not be depicted using standard interpolation techniques. This suggests that similar procedures could be used to generate high-resolution maps for other high-latitude regions with a sparse density of data.
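    The elevation-aware spline idea can be sketched by giving a thin-plate smoothing spline the station elevation as a third predictor. The station records, lapse rate and smoothing value below are all invented, and scipy's `RBFInterpolator` stands in for specialized climate-surface software:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical station records: easting/northing (km), elevation (km),
# and a mean temperature with an assumed 6.5 K/km lapse rate plus noise.
rng = np.random.default_rng(2)
xyz = np.column_stack([
    rng.uniform(0.0, 500.0, 30),   # easting
    rng.uniform(0.0, 500.0, 30),   # northing
    rng.uniform(0.0, 2.0, 30),     # elevation
])
temp = 15.0 - 6.5 * xyz[:, 2] + rng.normal(0.0, 0.3, 30)

# Thin-plate smoothing spline over position and elevation; the
# smoothing value here is assumed, not selected by cross-validation
# as production climate-surface packages do.
model = RBFInterpolator(xyz, temp, kernel="thin_plate_spline",
                        smoothing=1.0)

# Predict at a DEM cell: grid location plus the cell's elevation.
pred = model(np.array([[250.0, 250.0, 1.0]]))
```

    Feeding the DEM elevation into the prediction is what lets a sparse station network produce topographically plausible 1-km maps.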

  14. "Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li+-benzene

    NASA Astrophysics Data System (ADS)

    D'Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T.

    2015-08-01

    Quantum and anharmonic effects are investigated in (H2)2-Li+-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li+-benzene complex increases the ZPE of the system by 5.6 kJ mol-1 to 17.6 kJ mol-1. This ZPE is 42% of the total electronic binding energy of (H2)2-Li+-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li+-benzene is 7.7 kJ mol-1, compared to 12.4 kJ mol-1 for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li+ ion and are more confined in the θ coordinate than in H2-Li+-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li+-benzene PESs are developed. These use a modified Shepard interpolation for the Li+-benzene and H2-Li+-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li+ terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol-1. Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. 
This suggests that the 1.5 kJ mol-1 error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H2-H2 interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.
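    The fragment-with-spline construction for the pair interaction can be illustrated with tabulated pair energies; a Lennard-Jones form in reduced units below stands in for the ab initio H2-H2 data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical H2-H2 pair interaction on a radial grid (a Lennard-Jones
# stand-in with sigma = 3, epsilon = 1 -- not the ab initio data).
r = np.linspace(2.8, 8.0, 40)
v = 4.0 * ((3.0 / r) ** 12 - (3.0 / r) ** 6)

# Cubic spline through the tabulated pair energies; evaluating the
# spline replaces an expensive electronic-structure call whenever the
# fragment PES needs the weak H2-H2 term.
pair = CubicSpline(r, v)
v_mid = float(pair(3.5))
```

    Because the pair interaction is smooth and one-dimensional, the spline is both cheaper and, as the record notes, more reliable here than a modified Shepard interpolation of the same data.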

  15. "Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li(+)-benzene.

    PubMed

    D'Arcy, Jordan H; Kolmann, Stephen J; Jordan, Meredith J T

    2015-08-21

    Quantum and anharmonic effects are investigated in (H2)2-Li(+)-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li(+)-benzene complex increases the ZPE of the system by 5.6 kJ mol(-1) to 17.6 kJ mol(-1). This ZPE is 42% of the total electronic binding energy of (H2)2-Li(+)-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li(+)-benzene is 7.7 kJ mol(-1), compared to 12.4 kJ mol(-1) for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li(+) ion and are more confined in the θ coordinate than in H2-Li(+)-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li(+)-benzene PESs are developed. These use a modified Shepard interpolation for the Li(+)-benzene and H2-Li(+)-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li(+) terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol(-1). Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. 
This suggests that the 1.5 kJ mol(-1) error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H2-H2 interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.

  16. Integrating bathymetric and topographic data

    NASA Astrophysics Data System (ADS)

    Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

    2017-11-01

    The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to identify the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
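    The RMSE-based ranking used in such comparisons can be sketched with leave-one-out cross-validation. The soundings below are invented, and three scipy RBF kernels stand in for the four methods compared in the record (scipy has no kriging):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical scattered depth soundings over a small area.
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 10.0, size=(60, 2))
depth = np.sin(pts[:, 0]) + 0.5 * pts[:, 1] + rng.normal(0.0, 0.05, 60)

def loo_rmse(kernel):
    """Leave-one-out cross-validation RMSE for one interpolant family."""
    errs = []
    for i in range(len(pts)):
        mask = np.arange(len(pts)) != i
        model = RBFInterpolator(pts[mask], depth[mask], kernel=kernel)
        errs.append(float(model(pts[i:i + 1])[0]) - depth[i])
    return float(np.sqrt(np.mean(np.square(errs))))

scores = {k: loo_rmse(k) for k in ("thin_plate_spline", "linear", "cubic")}
best = min(scores, key=scores.get)   # lowest-RMSE method wins
```

    Each candidate is refitted with one sounding withheld, and the method with the lowest prediction RMSE is selected, exactly the selection criterion applied in the study.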

  17. A robust method of thin plate spline and its application to DEM construction

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan

    2012-11-01

    In order to avoid the ill-conditioning problem of thin plate spline (TPS) interpolation, the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots in terms of back-substitution. For interpolating large sets of sampling points, we developed a local TPS-M, where some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M compares favorably with that of smoothing TPS. Under the same simulation accuracy, the computational time of TPS-M decreases with the increase of the number of sampling points. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, which is on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with a second-order drift function (UK). Results show that regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for smoothing TPS at the finest sampling interval of 20 m and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.

  18. On the feasibility to integrate low-cost MEMS accelerometers and GNSS receivers

    NASA Astrophysics Data System (ADS)

    Benedetti, Elisa; Dermanis, Athanasios; Crespi, Mattia

    2017-06-01

    The aim of this research was to investigate the feasibility of merging the benefits offered by low-cost GNSS and MEMS accelerometer technology, in order to promote the diffusion of low-cost monitoring solutions. A merging approach was set up at the level of the combination of kinematic results (velocities and displacements) coming from the two kinds of sensors, whose observations were processed separately, following the so-called loose integration, which is simpler and more flexible with respect to the possibility of easily changing the combined sensors. First, the issues related to the differences in reference systems, time systems, and measurement rates and epochs for the two sensors were addressed. An approach was designed and tested to transform the outcomes from GPS and MEMS into unique reference and time systems and to interpolate the usually (much) denser MEMS observations to common (GPS) epochs. The proposed approach was limited to a time-independent (constant) orientation of the MEMS reference system with respect to the GPS one. Then, a data fusion approach based on the use of the Discrete Fourier Transform and cubic spline interpolation was proposed for both velocities and displacements: the MEMS- and GPS-derived solutions are first separated by a rectangular filter in the spectral domain, and then back-transformed and combined through a cubic spline interpolation. Accuracies around 5 mm for slow and fast displacements and better than 2 mm/s for velocities were assessed. The obtained solution paves the way to a powerful and appealing use of low-cost single-frequency GNSS receivers and MEMS accelerometers for structural and ground monitoring applications. Some additional remarks and prospects for future investigations complete the paper.
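    The spectral-domain fusion step can be sketched directly: each series is transformed, a rectangular filter keeps the GNSS content below an assumed cut frequency and the accelerometer-derived content above it, and the sum is transformed back. All signals and the 1 Hz cut below are invented:

```python
import numpy as np

# Hypothetical displacement records on a common time base: the GNSS
# series captures the slow drift (plus noise), the accelerometer-derived
# series the fast vibration (plus a small spurious drift).
fs = 100.0
t = np.arange(0, 10, 1 / fs)
slow = 0.01 * t
fast = 0.5 * np.sin(2 * np.pi * 8 * t)
gnss = slow + 0.1 * np.random.default_rng(4).normal(size=t.size)
mems = fast + 0.001 * t

# Rectangular filter in the spectral domain: keep GNSS content below
# an assumed 1 Hz cut, MEMS content above it, then recombine.
cut_hz = 1.0
freqs = np.fft.rfftfreq(t.size, 1 / fs)
G = np.fft.rfft(gnss)
M = np.fft.rfft(mems)
G[freqs > cut_hz] = 0.0
M[freqs <= cut_hz] = 0.0
fused = np.fft.irfft(G + M, n=t.size)
```

    The low band suppresses the GNSS high-frequency noise while the high band discards the accelerometer drift, so the fused series tracks the true slow-plus-fast displacement. In the paper a cubic spline resamples the denser MEMS series to the GPS epochs before this step.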

  19. Mapping wildfire effects on Ca2+ and Mg2+ released from ash. A microplot analysis.

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Úbeda, Xavier; Martin, Deborah

    2010-05-01

    Wildland fires have important implications for ecosystem dynamics. Their effects depend on many biophysical factors, mainly the burned species, the ecosystem affected, the amount and spatial distribution of fuel, relative humidity, slope, aspect and residence time. These parameters are heterogeneous across the landscape, producing a complex mosaic of severities, and fire impacts can change rapidly even over short distances, producing high spatial variation at the microplot scale. After a fire, the most visible residue is ash, and its physical and chemical properties are of main importance because the majority of the nutrients available to plants reside there. It is therefore of major importance to study ash characteristics in order to observe the type and amount of elements available to plants. This study focuses on the spatial variability of two nutrients essential to plant growth, Ca2+ and Mg2+, released from ash after a wildfire at the microplot scale. Because fire impacts are highly variable even over small distances, mapping the effects of fire on the release of the studied elements is difficult, so it is a priority to identify the least biased interpolation method in order to predict the variables under study with high accuracy. The aim of this study is to map the effects of wildfire on the referred elements released from ash at the microplot scale, testing several interpolation methods. Sixteen interpolation techniques were tested: Inverse Distance to a Weight (IDW) with weights of 1, 2, 3, 4 and 5; Local Polynomial with the power of 1 (LP1) and 2 (LP2); Polynomial Regression (PR); and Radial Basis Functions, namely Spline with Tension (SPT), Completely Regularized Spline (CRS), Multiquadratic (MTQ), Inverse Multiquadratic (IMTQ), and Thin Plate Spline (TPS). 
    Geostatistical methods from the kriging family were also tested, namely Ordinary Kriging (OK), Simple Kriging (SK) and Universal Kriging (UK). The interpolation techniques were assessed through the Mean Error (ME) and Root Mean Square Error (RMSE) obtained from the cross-validation procedure applied to all methods. The fire occurred in Portugal, near an urban area; inside the affected area we designed a grid of 9 x 27 m and collected 40 samples. Before modelling, we tested the data for normality with the Shapiro-Wilk test. Since the distributions of Ca2+ and Mg2+ did not follow the Gaussian distribution, the data were transformed logarithmically (Ln); after this transformation the data were normal, and the spatial distribution was modelled with the transformed data. On average across the entire plot, the ash slurries contained 4371.01 mg/l of Ca2+, although with a high coefficient of variation (CV%) of 54.05%. Of all the tested methods, LP1 was the least biased and hence the most accurate interpolator for this element; the most biased was LP2. For Mg2+, considering the entire plot, the ash released in solution on average 1196.01 mg/l, with a CV% of 52.36%, similar to that identified for Ca2+. The best interpolator in this case was SK, and the most biased were LP1 and TPS. Comparing all methods for both elements, the quality of the interpolations was higher for Ca2+. These results allow us to conclude that to achieve the best prediction it is necessary to test a wide range of interpolation methods. The best accuracy will allow us to understand with more precision where the studied elements are more available and accessible for plant growth and ecosystem recovery. The spatial pattern of both nutrients is related to ash pH and burn severity, evaluated from ash colour and CaCO3 content. These aspects are also discussed in the work.

  20. Landmark-Based 3D Elastic Registration of Pre- and Postoperative Liver CT Data

    NASA Astrophysics Data System (ADS)

    Lange, Thomas; Wörz, Stefan; Rohr, Karl; Schlag, Peter M.

    The qualitative and quantitative comparison of pre- and postoperative image data is an important possibility to validate computer assisted surgical procedures. Due to deformations after surgery, a non-rigid registration scheme is a prerequisite for a precise comparison. Interactive landmark-based schemes are a suitable approach. Incorporation of a priori knowledge about the anatomical structures to be registered may help to reduce interaction time and improve accuracy. Concerning pre- and postoperative CT data of oncological liver resections, the intrahepatic vessels are suitable anatomical structures. In addition to using landmarks at vessel branchings, we here introduce quasi landmarks at vessel segments with anisotropic localization precision. An experimental comparison of interpolating thin-plate splines (TPS) and Gaussian elastic body splines (GEBS), as well as approximating GEBS, on both types of landmarks is performed.

  1. Multivariate adaptive regression splines analysis to predict biomarkers of spontaneous preterm birth.

    PubMed

    Menon, Ramkumar; Bhat, Geeta; Saade, George R; Spratt, Heidi

    2014-04-01

    To develop classification models of demographic/clinical factors and biomarker data from spontaneous preterm birth in African Americans and Caucasians. Secondary analysis of biomarker data using multivariate adaptive regression splines (MARS), a supervised machine-learning algorithm. Analysis of data on 36 biomarkers from 191 women was reduced by MARS to develop predictive models for preterm birth in African Americans and Caucasians. Maternal plasma and cord plasma were collected at admission for preterm or term labor, and amniotic fluid at delivery. Data were partitioned into training and testing sets. Variable importance, a relative indicator (0-100%), and the area under the receiver operating characteristic curve (AUC) characterized results. MARS generated models for combined and racially stratified biomarker data. Clinical and demographic data did not contribute to the model. Racial stratification of the data produced distinct models in all three compartments. In African Americans, maternal plasma samples modeled preterm birth with IL-1RA, TNF-α, angiopoietin 2, TNFRI, IL-5, MIP1α, IL-1β and TGF-α (AUC train: 0.98, AUC test: 0.86). In Caucasians, TNFR1, ICAM-1 and IL-1RA contributed to the model (AUC train: 0.84, AUC test: 0.68). African American cord plasma samples produced IL-12P70 and IL-8 (AUC train: 0.82, AUC test: 0.66). Cord plasma in Caucasians modeled IGFII, PDGFBB, TGF-β1, IL-12P70, and TIMP1 (AUC train: 0.99, AUC test: 0.82). Amniotic fluid in African Americans modeled FasL, TNFRII, RANTES, KGF and IGFI (AUC train: 0.95, AUC test: 0.89), and in Caucasians TNF-α, MCP3, TGF-β3, TNFR1 and angiopoietin 2 (AUC train: 0.94, AUC test: 0.79). MARS thus models multiple biomarkers associated with preterm birth and demonstrates racial disparity. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.
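    The hinge-function basis at the heart of MARS can be sketched by hand: a mirrored pair of hinges at an assumed knot, fitted by least squares. A real MARS run searches knot locations adaptively and prunes terms; the data below are simulated:

```python
import numpy as np

def hinge(x, knot, sign):
    """MARS basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

# Simulated predictor with a kink at x = 2, the kind of piecewise-linear
# structure MARS captures with a mirrored hinge pair.
rng = np.random.default_rng(5)
x = rng.uniform(0.0, 5.0, 200)
y = 1.0 + 2.0 * np.maximum(0.0, x - 2.0) + rng.normal(0.0, 0.1, 200)

# Least-squares fit of intercept + hinge pair at an assumed knot.
X = np.column_stack([np.ones_like(x),
                     hinge(x, 2.0, +1.0),
                     hinge(x, 2.0, -1.0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    The fit recovers the intercept and the slope beyond the knot while driving the unused mirrored hinge toward zero, which is how MARS expresses variable importance through the surviving basis terms.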

  2. Simulation of Pellet Ablation

    NASA Astrophysics Data System (ADS)

    Parks, P. B.; Ishizaki, Ryuichi

    2000-10-01

    In order to clarify the structure of the ablation flow, a 2D simulation is carried out with a fluid code solving the temporal evolution of the MHD equations. The code includes the electrostatic sheath effect at the cloud interface (P.B. Parks et al., Plasma Phys. Contr. Fusion 38, 571 (1996)). An Eulerian cylindrical coordinate system (r,z) is used with z in a spherical pellet. The code uses the Cubic-Interpolated Pseudoparticle (CIP) method (H. Takewaki and T. Yabe, J. Comput. Phys. 70, 355 (1987)), which divides the fluid equations into non-advection and advection phases. The most essential element of the CIP method is the calculation of the advection phase. In this phase, a cubic interpolated spatial profile is shifted in space according to the total derivative equations, similarly to a particle scheme. Since the profile is interpolated by using the value and the spatial derivative value at each grid point, there is no numerical oscillation in space, which often appears in conventional spline interpolation. A free boundary condition is used in the code. The possibility of a stationary shock will also be shown in the presentation because the supersonic ablation flow across the magnetic field is impeded.
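    The advection phase described above can be sketched in one dimension: a cubic Hermite profile, built from the value and the spatial derivative stored at each grid point, is shifted upwind by u·dt. This is a minimal pure-Python sketch of the CIP update for a constant positive velocity on a periodic grid, not the 2D MHD code itself; all names are illustrative.

```python
import math

def cip_advect(f, g, u, dx, dt):
    """One step of the CIP (Cubic-Interpolated Pseudoparticle) advection
    phase for df/dt + u*df/dx = 0 (u > 0) on a periodic grid, carrying
    both the values f and the spatial derivatives g at the grid points."""
    n = len(f)
    xi = -u * dt              # upwind departure shift
    D = -dx
    fn = [0.0] * n
    gn = [0.0] * n
    for i in range(n):
        iu = (i - 1) % n      # upwind neighbour for u > 0
        # cubic Hermite coefficients on the upwind cell [x_iu, x_i]
        a = (g[i] + g[iu]) / D**2 + 2.0 * (f[i] - f[iu]) / D**3
        b = 3.0 * (f[iu] - f[i]) / D**2 - (2.0 * g[i] + g[iu]) / D
        fn[i] = ((a * xi + b) * xi + g[i]) * xi + f[i]
        gn[i] = (3.0 * a * xi + 2.0 * b) * xi + g[i]
    return fn, gn

# usage: advect one period of a sine wave by half a cell
n, dx, u = 64, 1.0 / 64, 1.0
dt = 0.5 * dx
x = [i * dx for i in range(n)]
f = [math.sin(2.0 * math.pi * p) for p in x]
g = [2.0 * math.pi * math.cos(2.0 * math.pi * p) for p in x]
f1, g1 = cip_advect(f, g, u, dx, dt)
```

    Because the interpolant matches both value and derivative at the nodes, the shifted profile stays smooth, which is the property the abstract contrasts with conventional spline interpolation.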

  3. Spatial variability of soil available phosphorous and potassium at three different soils located in Pannonian Croatia

    NASA Astrophysics Data System (ADS)

    Bogunović, Igor; Pereira, Paulo; Đurđević, Boris

    2017-04-01

    Information on the spatial distribution of soil nutrients in agroecosystems is critical for improving productivity and reducing environmental pressures in intensively farmed soils. In this context, spatial prediction of soil properties should be accurate. In this study we analyse 704 measurements of soil available phosphorus (AP) and potassium (AK); the data derive from soil samples collected across three arable fields in the Baranja region (Croatia), corresponding to different soil types: Cambisols (169 samples), Chernozems (131 samples) and Gleysols (404 samples). The samples were collected on a regular sampling grid (225 x 225 m spacing). Several deterministic interpolation techniques (Inverse Distance Weighting (IDW) with the power of 1, 2 and 3; Radial Basis Functions (RBF): Inverse Multiquadratic (IMT), Multiquadratic (MTQ), Completely Regularized Spline (CRS), Spline with Tension (SPT) and Thin Plate Spline (TPS); and Local Polynomial (LP) with the power of 1 and 2) and two geostatistical techniques (Ordinary Kriging (OK) and Simple Kriging (SK)) were tested, taking the lowest RMSE under cross-validation as the criterion for the most accurate spatial variability maps. Soil parameters varied considerably throughout the studied fields; their coefficients of variation ranged from 31.4% to 37.7% for soil AP and from 19.3% to 27.1% for AK. The experimental variograms indicate a moderate spatial dependence for AP and a strong spatial dependence for AK at all three locations. The best spatial predictor for AP at the Chernozem field was Simple Kriging (RMSE=61.711), and for AK the Inverse Multiquadratic (RMSE=44.689); the least accurate techniques were Thin Plate Spline (AP) and Inverse Distance Weighting with a power of 1 (AK). Radial Basis Function models (Spline with Tension for AP at the Gleysol and Cambisol fields, and Completely Regularized Spline for AK at the Gleysol field) were the best predictors, while Thin Plate Spline models were the least accurate in all three cases. The best interpolator for AK at the Cambisol field was the Local Polynomial with the power of 2 (RMSE=33.943), while the least accurate was Thin Plate Spline (RMSE=39.572).
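    For readers unfamiliar with the selection criterion used above, an inverse-distance-weighted estimate and its leave-one-out cross-validation RMSE can be sketched as follows (a minimal illustration, not the study's GIS implementation; function names and the synthetic points are hypothetical):

```python
import math

def idw(sample_xy, sample_z, query, power=2.0):
    """Inverse-distance-weighted estimate at one query point."""
    num = den = 0.0
    for (x, y), z in zip(sample_xy, sample_z):
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return z                      # exact hit at a sample location
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

def loo_rmse(sample_xy, sample_z, power=2.0):
    """Leave-one-out cross-validation RMSE: predict each sample from all
    the others and accumulate the squared errors."""
    sq = 0.0
    for i in range(len(sample_z)):
        xy = sample_xy[:i] + sample_xy[i + 1:]
        z = sample_z[:i] + sample_z[i + 1:]
        e = idw(xy, z, sample_xy[i], power) - sample_z[i]
        sq += e * e
    return math.sqrt(sq / len(sample_z))
```

    Comparing `loo_rmse` across powers (and across interpolators) is the model-selection loop the abstract describes.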

  4. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 2: User's manual and program listing

    NASA Technical Reports Server (NTRS)

    Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to second order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to second order except at interfaces where different single grid systems meet, where they are differentiable only up to first order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer.
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D is described.
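    Transfinite interpolation of the kind GRID2D/3D uses can be illustrated with the simplest member of the family, a bilinearly blended Coons patch that fills the interior from four boundary curves (a sketch with linear blending functions only; the program itself adds stretching functions and Hermite variants):

```python
def coons(bottom, top, left, right, u, v):
    """Bilinearly blended transfinite (Coons) interpolation: given four
    boundary curves, each a function of a parameter in [0, 1] returning
    (x, y), produce the interior grid point at (u, v)."""
    def blend(k):
        b, t = bottom(u)[k], top(u)[k]
        l, r = left(v)[k], right(v)[k]
        # corner values, subtracted so they are not counted twice
        c00, c10 = bottom(0.0)[k], bottom(1.0)[k]
        c01, c11 = top(0.0)[k], top(1.0)[k]
        return ((1 - v) * b + v * t + (1 - u) * l + u * r
                - ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
                   + (1 - u) * v * c01 + u * v * c11))
    return blend(0), blend(1)
```

    For a unit square with straight-line boundaries the patch reproduces (u, v) itself; curved boundaries bend the interior grid lines to match them.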

  5. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 1: Theory and method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to second order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to second order except at interfaces where different single grid systems meet, where they are differentiable only up to first order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer.
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.

  6. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge-regression shrinkage type of global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for a fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection within a mixed-model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, enabled by assigning a different penalty λ to each. We demonstrate the proposed approach by both simulation examples and a real data application.
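    The ridge-regression character of a penalized-spline fit can be sketched as follows. For brevity this toy uses a truncated-line basis in place of B-splines and an identity penalty on the spline coefficients, so it illustrates penalized-spline shrinkage in general, not the authors' estimator; all names are illustrative.

```python
import numpy as np

def pspline_fit(x, y, knots, lam):
    """Penalized-spline fit: minimize ||y - X a||^2 + lam * a' D a, where
    X holds an intercept, a linear term, and truncated-line basis columns
    max(x - k, 0), and D penalizes only the spline coefficients."""
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # leave line unpenalized
    a = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ a, a

# usage: smooth a noisy sine with a modest roughness penalty
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
truth = np.sin(2.0 * np.pi * x)
y = truth + 0.1 * rng.standard_normal(x.size)
fit, coefs = pspline_fit(x, y, np.linspace(0.05, 0.95, 20), lam=0.1)
```

    The explicit closed-form solve is what makes fixed-λ inference and simulation-based forecasting straightforward in this framework; λ is then chosen by a criterion such as GCV or REML, as the abstract discusses.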

  7. Evaluation of Early and Prolonged Effects of Acute Neurotoxicity and Neuroprotection Using Novel Functional Imaging Techniques

    DTIC Science & Technology

    2004-08-01

    Mutual Information (NMI) voxel match algorithm of the ANALYZE software package and cubic spline interpolation (Brownell et al. 2003, Appendix). 2...nuclear inclusion and cell survival. Materials and Methods Animals: Male transgenic R6/2 mice, which depict many clinical features of juvenile HD were...purchased from the Jackson Laboratories (Bar Harbor, ME). The mice were housed 3-4 per cage under standard conditions with free access to food and water

  8. Applied Computational Electromagnetics Society Journal, volume 9, number 1, March 1994

    NASA Astrophysics Data System (ADS)

    1994-03-01

    The partial contents of this document include the following: On the Use of Bivariate Spline Interpolation of Slot Data in the Design of Slotted Waveguide Arrays; A Technique for Determining Non-Integer Eigenvalues for Solutions of Ordinary Differential Equations; Antenna Modeling and Characterization of a VLF Airborne Dual Trailing Wire Antenna System; Electromagnetic Scattering from Two-Dimensional Composite Objects; and Use of a Stealth Boundary with Finite Difference Frequency Domain Simulations of Simple Antenna Problems.

  9. The Control Based on Internal Average Kinetic Energy in Complex Environment for Multi-robot System

    NASA Astrophysics Data System (ADS)

    Yang, Mao; Tian, Yantao; Yin, Xianghua

    In this paper, a reference trajectory is designed according to the minimum energy consumed for a multi-robot system, for which nonlinear programming and cubic spline interpolation are adopted. The control strategy is composed of two levels: the lower level is simple PD control, and the upper level is based on the internal average kinetic energy of the multi-robot system in a complex environment with velocity damping. Simulation tests verify the effectiveness of this control strategy.

  10. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.

  11. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations.

    PubMed

    Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  12. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.

    2016-03-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
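    The grid interpolation underlying the method can be sketched in one dimension with the standard cubic B-spline kernel. Note that using grid samples directly as control values, as below, is only a quasi-interpolation; the actual method determines coefficients (the preprocessing step) so the long-range kernel parts are matched to higher accuracy. Names are illustrative.

```python
def b3(t):
    """Standard cubic B-spline kernel; support |t| < 2, C^2-smooth."""
    t = abs(t)
    if t < 1.0:
        return 2.0 / 3.0 - t * t + 0.5 * t ** 3
    if t < 2.0:
        return (2.0 - t) ** 3 / 6.0
    return 0.0

def spline_eval(coef, h, x):
    """Evaluate sum_i coef[i] * B3(x/h - i) on a uniform grid of spacing h;
    only the four basis functions covering x contribute."""
    i0 = int(x / h)
    s = 0.0
    for i in range(i0 - 1, i0 + 3):
        if 0 <= i < len(coef):
            s += coef[i] * b3(x / h - i)
    return s
```

    The C^2 smoothness of `b3` is what lets the interpolated long-range parts contribute continuous gradients, the property the abstract emphasizes for dynamics and minimization.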

  13. Comparison of spatial interpolation methods for soil moisture and its application for monitoring drought.

    PubMed

    Chen, Hui; Fan, Li; Wu, Wei; Liu, Hong-Bin

    2017-09-26

    Soil moisture data can reflect valuable information on soil properties, terrain features, and drought condition. The current study compared and assessed the performance of different interpolation methods for estimating soil moisture in an area with complex topography in southwest China. The approaches were inverse distance weighting, multifarious forms of kriging, regularized spline with tension, and thin plate spline. The 5-day soil moisture observed at 167 stations and daily temperature recorded at 33 stations during the period of 2010-2014 were used in the current work. Model performance was tested with the accuracy indicators of determination coefficient (R²), mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and modeling efficiency (ME). The results indicated that inverse distance weighting had the best performance, with R², MAPE, RMSE, RRMSE, and ME of 0.32, 14.37, 13.02%, 0.16, and 0.30, respectively. Based on the best method, a spatial database of soil moisture was developed and used to investigate drought condition over the study area. The results showed that the distribution of drought was characterized by evident regional differences. Moreover, drought mainly occurred in August and September in the 5 years and was more prone to occur in the western and central parts than in the northeastern and southeastern areas.
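    The accuracy indicators named above are standard and can be computed in a few lines. This is a compact sketch; ME is taken here as Nash-Sutcliffe-style efficiency, which is an assumption about the exact definition the study used.

```python
import math

def metrics(obs, pred):
    """Return (MAPE in %, RMSE, RRMSE, ME) for paired observations and
    predictions. RRMSE normalizes RMSE by the observed mean; ME is the
    Nash-Sutcliffe-style efficiency 1 - SS_res / SS_tot (assumed form)."""
    n = len(obs)
    mean_obs = sum(obs) / n
    mape = 100.0 / n * sum(abs((o - p) / o) for o, p in zip(obs, pred))
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    rrmse = rmse / mean_obs
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    me = 1.0 - ss_res / ss_tot
    return mape, rmse, rrmse, me
```

    A perfect predictor scores MAPE = RMSE = RRMSE = 0 and ME = 1; ME drops to 0 when the model is no better than predicting the observed mean.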

  14. Improved Leg Tracking Considering Gait Phase and Spline-Based Interpolation during Turning Motion in Walk Tests.

    PubMed

    Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki

    2015-09-04

    Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate the ambulatory ability of elderly individuals in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate motor function in walk tests such as the TUG. The system tracks both legs and measures their trajectories. However, the legs may come close to each other, and one leg may be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than during straight walking and the moving direction changes rapidly. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull-Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chances of false tracking. In addition, we verify the measurement accuracy of the leg trajectory compared to a three-dimensional motion analysis system (VICON).
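    Catmull-Rom interpolation of the kind used to bridge occlusions can be sketched per coordinate: the segment between the two most recent reliable positions p1 and p2 is interpolated with tangents set by their neighbours p0 and p3 (uniform parameterization assumed; names illustrative, not the paper's implementation):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom interpolation between p1 and p2 for t in [0, 1],
    with end tangents (p2 - p0)/2 and (p3 - p1)/2 from the neighbours."""
    return 0.5 * ((2.0 * p1)
                  + (p2 - p0) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (3.0 * (p1 - p2) + p3 - p0) * t ** 3)
```

    The curve passes exactly through p1 (t = 0) and p2 (t = 1), so interpolated leg positions join the measured track without jumps; for 2D leg positions the formula is applied to x and y independently.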

  15. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. 
Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
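    The natural cubic spline used to generate intermediary slices can be sketched in one dimension; the paper applies the idea along the slice axis. This pure-Python version, with a tridiagonal (Thomas) solve for the second derivatives, is illustrative only.

```python
def natural_cubic_spline(x, y):
    """Return a callable natural cubic spline through (x, y): second
    derivative zero at both ends; x must be strictly increasing."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    # tridiagonal system for the second derivatives m[i]; rows 0 and n-1
    # are identities enforcing the natural end conditions m = 0
    a, b, c, d = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        a[i], b[i], c[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        d[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    for i in range(1, n):                      # Thomas forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * n
    for i in range(n - 2, 0, -1):              # back substitution
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def s(t):
        i = next((j for j in range(n - 1) if t <= x[j + 1]), n - 2)
        dx = x[i + 1] - x[i]
        A = (x[i + 1] - t) / dx
        B = (t - x[i]) / dx
        return (A * y[i] + B * y[i + 1]
                + ((A ** 3 - A) * m[i] + (B ** 3 - B) * m[i + 1]) * dx * dx / 6.0)
    return s
```

    Unlike pixel-wise linear blending, the cubic interpolant has continuous curvature across the original slices, which is what the shape-based approach exploits for both geometry and backscatter values.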

  16. Intensity Conserving Spectral Fitting

    NASA Technical Reports Server (NTRS)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2015-01-01

    The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
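    The intensity-conserving idea can be sketched as a fixed-point iteration: adjust the node values until the interpolant's average over each wavelength bin reproduces the observed binned intensity. For brevity this toy uses piecewise-linear interpolation rather than the cubic spline of ICSI, so it illustrates only the conservation step; all names are illustrative.

```python
def lin_interp(centers, y, t):
    """Piecewise-linear interpolant through (centers, y), flat beyond the
    ends (a stand-in for the cubic spline that ICSI actually uses)."""
    if t <= centers[0]:
        return y[0]
    if t >= centers[-1]:
        return y[-1]
    for i in range(len(centers) - 1):
        if t <= centers[i + 1]:
            f = (t - centers[i]) / (centers[i + 1] - centers[i])
            return (1.0 - f) * y[i] + f * y[i + 1]

def bin_average(centers, y, k, width, nsub=50):
    """Average of the interpolant over the k-th wavelength bin (midpoint rule)."""
    lo = centers[k] - width / 2.0
    return sum(lin_interp(centers, y, lo + (j + 0.5) * width / nsub)
               for j in range(nsub)) / nsub

def conserve(centers, binned, width, iters=20):
    """Iteratively shift each node value by the residual between observed
    and modeled bin averages until the bin-averaged intensities match."""
    y = list(binned)
    for _ in range(iters):
        y = [y[k] + (binned[k] - bin_average(centers, y, k, width))
             for k in range(len(y))]
    return y
```

    An ordinary fit through the bin centers does not satisfy this bin-average constraint when the profile is curved; the iteration restores it, which is the property the routine's name advertises.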

  17. Comparing 3-dimensional virtual methods for reconstruction in craniomaxillofacial surgery.

    PubMed

    Benazzi, Stefano; Senck, Sascha

    2011-04-01

    In the present project, the virtual reconstruction of digital osteomized zygomatic bones was simulated using different methods. A total of 15 skulls were scanned using computed tomography, and a virtual osteotomy of the left zygomatic bone was performed. Next, virtual reconstructions of the missing part using mirror imaging (with and without best fit registration) and thin plate spline interpolation functions were compared with the original left zygomatic bone. In general, reconstructions using thin plate spline warping showed better results than the mirroring approaches. Nevertheless, when dealing with skulls characterized by a low degree of asymmetry, mirror imaging and subsequent registration can be considered a valid and easy solution for zygomatic bone reconstruction. The mirroring tool is one of the possible alternatives in reconstruction, but it might not always be the optimal solution (ie, when the hemifaces are asymmetrical). In the present pilot study, we have verified that best fit registration of the mirrored unaffected hemiface and thin plate spline warping achieved better results in terms of fitting accuracy, overcoming the evident limits of the mirroring approach. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T.

    Quantum and anharmonic effects are investigated in (H{sub 2}){sub 2}–Li{sup +}–benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H{sub 2} molecule to the H{sub 2}–Li{sup +}–benzene complex increases the ZPE of the system by 5.6 kJ mol{sup −1} to 17.6 kJ mol{sup −1}. This ZPE is 42% of the total electronic binding energy of (H{sub 2}){sub 2}–Li{sup +}–benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H{sub 2} to H{sub 2}–Li{sup +}–benzene is 7.7 kJ mol{sup −1}, compared to 12.4 kJ mol{sup −1} for the first H{sub 2} molecule. Anharmonicity is found to be even more important when a second (and subsequent) H{sub 2} molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H{sub 2} molecules are found at larger distance from the Li{sup +} ion and are more confined in the θ coordinate than in H{sub 2}–Li{sup +}–benzene. They also show that both H{sub 2} molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H{sub 2} molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H{sub 2}){sub 2}–Li{sup +}–benzene PESs are developed. These use a modified Shepard interpolation for the Li{sup +}–benzene and H{sub 2}–Li{sup +}–benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H{sub 2}–H{sub 2} interaction.
    Because of the neglect of three-body H{sub 2}, H{sub 2}, Li{sup +} terms, both fragment PESs lead to overbinding of the second H{sub 2} molecule by 1.5 kJ mol{sup −1}. Probability density histograms, however, indicate that the wavefunctions for the two H{sub 2} molecules are effectively identical on the "full" and fragment PESs. This suggests that the 1.5 kJ mol{sup −1} error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H{sub 2}–H{sub 2} interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.

  19. Transactions of the Army Conference on Applied Mathematics and Computing (2nd) Held at Washington, DC on 22-25 May 1984

    DTIC Science & Technology

    1985-02-01

    Here Q denotes the midplane of the plate (assumed to be Lipschitzian) with a smooth boundary, and H(Q) and H(Q) are the Hilbert spaces of...using a reproducing kernel Hilbert space approach, Weinert et al. [8,9] developed a structural correspondence between spline interpolation and linear...597 A Mesh Moving Technique for Time Dependent Partial Differential Equations in Two Space Dimensions David C. Arney and Joseph

  20. A 156 kyr smoothed history of the atmospheric greenhouse gases CO2, CH4, and N2O and their radiative forcing

    NASA Astrophysics Data System (ADS)

    Köhler, Peter; Nehrbass-Ahles, Christoph; Schmitt, Jochen; Stocker, Thomas F.; Fischer, Hubertus

    2017-06-01

    Continuous records of the atmospheric greenhouse gases (GHGs) CO2, CH4, and N2O are necessary input data for transient climate simulations, and their associated radiative forcing represents important components in analyses of climate sensitivity and feedbacks. Since the available data from ice cores are discontinuous and partly ambiguous, a well-documented decision process during data compilation followed by some interpolating post-processing is necessary to obtain those desired time series. Here, we document our best possible data compilation of published ice core records and recent measurements on firn air and atmospheric samples spanning the interval from the penultimate glacial maximum ( ˜ 156 kyr BP) to the beginning of the year 2016 CE. We use the most recent age scales for the ice core data and apply a smoothing spline method to translate the discrete and irregularly spaced data points into continuous time series. These splines are then used to compute the radiative forcing for each GHG using well-established, simple formulations. We compile only a Southern Hemisphere record of CH4 and discuss how much larger a Northern Hemisphere or global CH4 record might have been due to its interpolar difference. The uncertainties of the individual data points are considered in the spline procedure. Based on the given data resolution, time-dependent cutoff periods of the spline, defining the degree of smoothing, are prescribed, ranging from 5000 years for the less resolved older parts of the records to 4 years for the densely sampled recent years. The computed splines seamlessly describe the GHG evolution on orbital and millennial timescales for glacial and glacial-interglacial variations and on centennial and decadal timescales for anthropogenic times. Data connected with this paper, including raw data and final splines, are available at doi:10.1594/PANGAEA.871273.

  1. Improvements to the fastex flutter analysis computer code

    NASA Technical Reports Server (NTRS)

    Taylor, Ronald F.

    1987-01-01

    Modifications to the FASTEX flutter analysis computer code (UDFASTEX) are described. The objectives were to increase the problem size capacity of FASTEX, reduce run times by modifying the modal interpolation procedure, and add new user features. All modifications to the program are operable on the VAX 11/700 series computers under the VAX operating system. Interfaces were provided to aid in the inclusion of alternate aerodynamic and flutter eigenvalue calculations. Plots can be made of the flutter velocity, damping, and frequency data. A preliminary capability was also developed to plot contours of unsteady pressure amplitude and phase. The relevant equations of motion, modal interpolation procedures, and control system considerations are described, and software developments are summarized. Additional information documenting input instructions, procedures, and details of the plate spline algorithm is found in the appendices.

  2. Interactive algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Wiese, M. R.

    1986-01-01

    An algebraic grid generation technique and an associated interactive computer program are described. The technique, called the two boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth-cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two boundary grid generation), is also described.
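
A minimal sketch of the two-boundary idea, Hermite cubic interpolation between a bottom and a top curve, might look like this. The off-boundary derivative vectors (vertical here) are an illustrative assumption, not the paper's formulation, and side boundaries and control functions are omitted.

```python
import numpy as np

def two_boundary_grid(bottom, top, n_v):
    """bottom, top: (n_u, 2) arrays of (x, y) points; returns an (n_v, n_u, 2) grid."""
    bottom = np.asarray(bottom, float)
    top = np.asarray(top, float)
    # Assumed derivatives: vertical, scaled by the local boundary separation.
    sep = np.linalg.norm(top - bottom, axis=1)
    d = np.stack([np.zeros_like(sep), sep], axis=1)
    v = np.linspace(0.0, 1.0, n_v)[:, None, None]
    # Cubic Hermite basis functions on [0, 1].
    h00 = 2*v**3 - 3*v**2 + 1
    h10 = v**3 - 2*v**2 + v
    h01 = -2*v**3 + 3*v**2
    h11 = v**3 - v**2
    return h00*bottom + h10*d + h01*top + h11*d

u = np.linspace(0.0, 1.0, 5)
bottom = np.stack([u, np.zeros_like(u)], axis=1)          # flat bottom boundary
top = np.stack([u, 1.0 + 0.2*np.sin(np.pi*u)], axis=1)    # curved top boundary
grid = two_boundary_grid(bottom, top, n_v=4)
```

The grid lines leave both boundaries along the assumed derivative directions; in the paper's scheme, control functions would additionally redistribute the v stations.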

  3. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

    USGS Publications Warehouse

    Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

    2013-01-01

    Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces; often a covariate such as elevation (from a Digital Elevation Model, DEM) is used to improve the interpolation accuracy. One key question that little research has addressed is which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. 
A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas but climate surfaces generated in this study (ClimSurf) had greater variability at high elevation regions, such as in the Sierra Nevada Mountains.
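
The combination of TPS interpolation with a covariate and k-fold cross-validation can be sketched roughly as below, with synthetic stations and elevation folded in as a third interpolation coordinate. This is an illustrative simplification, not the study's exact TPS formulation or data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic stations: temperature depends on latitude and (strongly) elevation.
rng = np.random.default_rng(1)
n = 120
lon = rng.uniform(-124.7, -112.9, n)
lat = rng.uniform(32.0, 49.0, n)
elev = rng.uniform(0.0, 3.0, n)                      # km
temp = 30.0 - 0.5*(lat - 32.0) - 6.5*elev + rng.normal(0.0, 0.3, n)

X = np.column_stack([lon, lat, elev])

def cv_rmse(coords, y, k=10):
    """k-fold cross-validated RMSE of a thin-plate-spline interpolation."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        model = RBFInterpolator(coords[train], y[train],
                                kernel='thin_plate_spline', smoothing=1.0)
        errs.append(model(coords[f]) - y[f])
    return float(np.sqrt(np.mean(np.concatenate(errs) ** 2)))

rmse_with_elev = cv_rmse(X, temp)        # lon, lat, elevation
rmse_without = cv_rmse(X[:, :2], temp)   # lon, lat only
```

With the elevation covariate included, the cross-validated error drops sharply, mirroring the study's finding that DEM is the leading covariate for temperature.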

  4. Validation of China-wide interpolated daily climate variables from 1960 to 2011

    NASA Astrophysics Data System (ADS)

    Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang

    2015-02-01

    Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of the interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients (R²) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R², and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83%. Moreover, the interpolated data successfully captured the interannual variability trend for the eight meteorological variables at most validation sites: consistent interannual variability trends were observed at 66-95% of the sites. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77%. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58% of extreme events, respectively. 
    The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration, based on the performance of these variables in estimating daily variations, interannual variability, and extreme events. Although longitude, latitude, and elevation data are included in the model, additional information, such as topography and cloud cover, should be integrated into the interpolation algorithm to improve performance in estimating wind speed, atmospheric pressure, and precipitation.

  5. PM10 modeling in the Oviedo urban area (Northern Spain) by using multivariate adaptive regression splines

    NASA Astrophysics Data System (ADS)

    Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza

    2014-10-01

    The aim of this research work is to build a regression model of particulate matter up to 10 micrometers in size (PM10) in the Oviedo urban area (Northern Spain) at local scale using the multivariate adaptive regression splines (MARS) technique, a nonparametric regression algorithm able to approximate the relationship between inputs and outputs and express it mathematically. In this context, hazardous air pollutants or toxic air contaminants are substances that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, experimental datasets of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) were collected over 3 years (2006-2008) and used to create a highly nonlinear MARS model of PM10 in the Oviedo urban nucleus. One main objective of this model is to obtain a preliminary estimate of the dependence of PM10 on the other pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish limit values for the main atmospheric pollutants in order to protect public health. The MARS regression model captures the main ideas of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. The main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. 
    Finally, on the basis of these numerical calculations, conclusions of this research work are presented.
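
The MARS building block is the hinge (truncated linear) basis function max(0, x − t). A minimal sketch with a single fixed knot and an ordinary least-squares fit, standing in for MARS's adaptive forward/backward knot selection and using synthetic data rather than the Oviedo measurements, is:

```python
import numpy as np

# Synthetic pollutant relation: PM10 flat below a NOx threshold, rising above it.
rng = np.random.default_rng(2)
nox = rng.uniform(0.0, 100.0, 200)                      # hypothetical NOx level
pm10 = 20.0 + 0.4*np.maximum(0.0, nox - 50.0) + rng.normal(0.0, 1.0, 200)

def hinge_basis(x, knot):
    """The reflected pair of MARS hinge functions at a given knot."""
    return np.column_stack([np.maximum(0.0, x - knot),
                            np.maximum(0.0, knot - x)])

# Design matrix: intercept plus the hinge pair; fit by least squares.
X = np.column_stack([np.ones_like(nox), hinge_basis(nox, 50.0)])
coef, *_ = np.linalg.lstsq(X, pm10, rcond=None)
pred = X @ coef
```

The fitted coefficients recover the piecewise-linear structure: near-zero weight on the unused hinge direction, and the slope of the active hinge. Full MARS would search knot locations and interaction terms automatically.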

  6. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age.

    PubMed

    Wilke, Marko

    2018-02-01

    This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.

  7. TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation.

    PubMed

    Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang

    2010-02-01

    We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario. Copyright (c) 2009 Elsevier Inc. All rights reserved.
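
The TPS sparse-to-dense step, interpolating landmark displacements to every grid point, can be sketched as below. The landmarks and displacements are synthetic, and the soft correspondence matching stage is not modeled.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical matched landmarks with displacements from a smooth deformation.
rng = np.random.default_rng(3)
landmarks = rng.uniform(0.0, 64.0, (30, 2))
disp = np.column_stack([np.sin(landmarks[:, 0] / 10.0),
                        np.cos(landmarks[:, 1] / 10.0)])

# Thin-plate spline fitted to the sparse landmark displacements.
tps = RBFInterpolator(landmarks, disp, kernel='thin_plate_spline')

# Dense field: one interpolated displacement vector per pixel of a 64x64 slice,
# with no unreached blotches -- every point gets a value immediately.
gx, gy = np.meshgrid(np.arange(64.0), np.arange(64.0), indexing='ij')
grid = np.column_stack([gx.ravel(), gy.ravel()])
dense_field = tps(grid).reshape(64, 64, 2)
```

Because TPS interpolates exactly at the landmarks and minimizes bending energy in between, the resulting field is smooth by construction, which is the property the abstract highlights.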

  8. Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks

    NASA Astrophysics Data System (ADS)

    Church, Ben; Lasso, Andras; Schlenger, Christopher; Borschneck, Daniel P.; Mousavi, Parvin; Fichtinger, Gabor; Ungi, Tamas

    2017-03-01

    PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and low cost. The transverse processes, as skeletal landmarks, are accessible by means of ultrasound and are sufficient for quantifying scoliosis, but do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field, resulting from thin-plate spline interpolation of a landmark-based registration between the transverse processes that we localized in both the patient's ultrasound and an average healthy spine model. Additional anchor points were computationally generated to control the thin-plate spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient's spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient's spine. For validation, we used ground-truth CT scans of pediatric scoliosis patients, from which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient's bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction, and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: We achieved qualitatively accurate and intuitive visualization of the 3D deformation of the patient's spine when compared to ground-truth CT.

  9. Modelling lecturer performance index of private university in Tulungagung by using survival analysis with multivariate adaptive regression spline

    NASA Astrophysics Data System (ADS)

    Hasyim, M.; Prastyo, D. D.

    2018-03-01

    Survival analysis models the relationship between independent variables and survival time as the dependent variable. In practice, not all survival data can be recorded completely, for various reasons; such data are called censored. Moreover, many models for survival analysis require distributional assumptions, whereas nonparametric approaches impose more relaxed assumptions. In this research, the nonparametric approach employed is Multivariate Adaptive Regression Splines (MARS). This study aims to measure the performance of a private university's lecturers. The survival time in this study is the duration a lecturer needs to obtain the professional certificate. The results show that research activity is a significant factor, along with developing course materials, publication in international or national journals, and participation in research collaborations.

  10. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1984-01-01

    The objective of this investigation is to develop a state-of-the-art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies. A three-dimensional multivariate O/I analysis scheme has been developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.
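
The O/I analysis equation x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b) can be illustrated on a toy 1-D grid. The Gaussian background error correlation and all numbers below are assumptions for illustration, not the GLAS scheme's actual covariances.

```python
import numpy as np

n = 50
grid = np.linspace(0.0, 10.0, n)
x_b = np.zeros(n)                                   # background (first guess)

# Background error covariance with Gaussian correlation, length scale L.
L, sigma_b = 1.5, 1.0
B = sigma_b**2 * np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / L) ** 2)

obs_idx = np.array([5, 20, 35])                     # observation locations
H = np.zeros((3, n)); H[np.arange(3), obs_idx] = 1.0
R = 0.1**2 * np.eye(3)                              # observation error covariance
y = np.array([1.0, -0.5, 0.8])                      # observed increments

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # gain matrix
x_a = x_b + K @ (y - H @ x_b)                       # analysis
```

Because the observation errors are small relative to the background errors, the analysis nearly matches the observations at the observation points and relaxes smoothly toward the background in between, with the spread set by the correlation length L.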

  11. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1985-01-01

    The development of a state of the art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies was investigated. A three dimensional multivariate O/I analysis scheme was developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.

  12. Modelling the Velocity Field in a Regular Grid in the Area of Poland on the Basis of the Velocities of European Permanent Stations

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard

    2014-06-01

    The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the interpolation research: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods available in these packages were tested: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The research used the absolute velocities expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method of such data interpolation was developed. Each method was assessed on whether it is local or global, whether it can compute errors of the interpolated values, and on the explicitness and fidelity of the interpolation functions or the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternatively, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites. 
    Statistics (minimum, maximum and mean values of the interpolated North and East velocity-residuum components) were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized with the GMT programme. The interpolated velocity components and their residua are presented in the form of tables and bar diagrams.
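
Ordinary kriging with a linear semivariogram, the model the authors favored, can be sketched as follows on synthetic 2-D velocity residuals (not EPN/ASG-EUPOS data):

```python
import numpy as np

def ordinary_kriging(pts, vals, query, slope=1.0):
    """Ordinary kriging with linear semivariogram gamma(h) = slope * h."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    # Kriging system: semivariogram matrix bordered by the unbiasedness
    # constraint (weights sum to one, enforced via a Lagrange multiplier).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = slope * d
    A[n, n] = 0.0
    out = np.empty(len(query))
    for i, q in enumerate(query):
        b = np.ones(n + 1)
        b[:n] = slope * np.linalg.norm(pts - q, axis=1)
        w = np.linalg.solve(A, b)
        out[i] = w[:n] @ vals
    return out

rng = np.random.default_rng(4)
pts = rng.uniform(0.0, 10.0, (25, 2))          # station positions
vals = pts[:, 0] * 0.1 + pts[:, 1] * 0.05      # smooth synthetic residual field
est = ordinary_kriging(pts, vals, pts[:3])     # kriging is exact at the data
```

With no nugget term, kriging honors the data exactly; the same bordered system also yields the kriging variance, which is the error estimate the authors cite as a key advantage of the method.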

  13. Solution identification and quantitative analysis of fiber-capacitive drop analyzer based on multivariate statistical methods

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Qiu, Zurong; Huo, Xinming; Fan, Yuming; Li, Xinghua

    2017-03-01

    A fiber-capacitive drop analyzer is an instrument which monitors a growing droplet to produce a capacitive opto-tensiotrace (COT). Each COT is an integration of fiber light intensity signals and capacitance signals and can reflect the unique physicochemical properties of a liquid. In this study, we propose a method for solution identification and concentration quantification based on multivariate statistical methods. Eight characteristic values are extracted from each COT. A series of COT characteristic values of training solutions at different concentrations composes a data library for that kind of solution. A two-stage linear discriminant analysis is applied to analyze the different solution libraries and establish discriminant functions. Test solutions can be discriminated by these functions. After determining the variety of a test solution, a Spearman correlation test and principal components analysis are used to filter and reduce the dimensions of the eight characteristic values, producing a new representative parameter. A cubic spline interpolation function is built between the parameters and concentrations, from which the concentration of the test solution can be calculated. Methanol, ethanol, n-propanol, and saline solutions are taken as experimental subjects in this paper. For each solution, nine or ten different concentrations are chosen as the standard library, and the other two concentrations compose the test group. Using these methods, all eight test solutions are correctly identified and the average relative error of the quantitative analysis is 1.11%. The proposed method enlarges the applicable scope of liquid recognition based on the COT and improves the precision of concentration quantification.
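
The final calibration step, a cubic spline between the representative parameter and concentration, might look like this. The parameter-concentration relation below is an assumed monotone curve, not measured COT values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Calibration library: known concentrations and their (assumed) parameter values.
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])  # %
param = np.log1p(conc) + 0.3               # hypothetical parameter-conc trend

# Spline mapping parameter -> concentration for unknown test solutions.
spline = CubicSpline(param, conc)

# "Test solution": a parameter value falling between calibration nodes.
test_param = np.log1p(15.0) + 0.3
est_conc = float(spline(test_param))
rel_err = abs(est_conc - 15.0) / 15.0
```

For a smooth, monotone calibration curve the spline recovers intermediate concentrations with sub-percent relative error, consistent with the accuracy the abstract reports.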

  14. RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach.

    PubMed

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

    In recent years the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines for quantitatively comparing sample colors during workflows involving many devices. Several software programs are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colors in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches: one based on a commercial calibration system (ProfileMaker) and one based on a Partial Least Squares analysis. Moreover, to explore device variability and resolution, two different cameras were adopted, and for each sensor three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach achieved very high calibration accuracy, opening the way to routine in-field color quantification not only in food science but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, the approach allows the use of low-cost instruments while still returning scientifically sound quantitative data.
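
The 3D thin-plate-spline warping idea, mapping device RGB onto reference RGB through matched chart patches, can be sketched as below. The chart values and device distortion are made up, and scipy's TPS kernel stands in for the paper's Matlab implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 24-patch colour chart: reference sRGB values and the same
# patches as seen by a device with an assumed gamma-like distortion and cast.
rng = np.random.default_rng(5)
reference = rng.uniform(0.0, 255.0, (24, 3))
cast = np.array([4.0, -3.0, 2.0])
device = 255.0 * (reference / 255.0) ** 1.2 + cast

# 3D TPS warp: device RGB space -> reference RGB space.
warp = RBFInterpolator(device, reference, kernel='thin_plate_spline')

# Calibrate a newly measured pixel (distorted the same assumed way).
measured = 255.0 * (np.array([[120.0, 60.0, 200.0]]) / 255.0) ** 1.2 + cast
calibrated = warp(measured)[0]
```

The warp reproduces the chart patches exactly and interpolates smoothly between them, which is what lets a nonlinear device response be corrected without an explicit parametric colour model.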

  15. Cortical surface registration using spherical thin-plate spline with sulcal lines and mean curvature as features.

    PubMed

    Park, Hyunjin; Park, Jun-Sung; Seong, Joon-Kyung; Na, Duk L; Lee, Jong-Min

    2012-04-30

    Analysis of cortical patterns requires accurate cortical surface registration. Many researchers map the cortical surface onto a unit sphere and perform registration of two images defined on the unit sphere. Here we have developed a novel registration framework for the cortical surface based on spherical thin-plate splines. Small-scale composition of spherical thin-plate splines was used as the geometric interpolant to avoid folding in the geometric transform. Using an automatic algorithm based on anisotropic skeletons, we extracted seven sulcal lines, which we then incorporated as landmark information. Mean curvature was chosen as an additional feature for matching between spherical maps. We employed a two-term cost function to encourage matching of both sulcal lines and the mean curvature between the spherical maps. Application of our registration framework to fifty pairwise registrations of T1-weighted MRI scans resulted in improved registration accuracy, which was computed from sulcal lines. Our registration approach was tested as an additional procedure to improve an existing surface registration algorithm. Our registration framework maintained an accurate registration over the sulcal lines while significantly increasing the cross-correlation of mean curvature between the spherical maps being registered. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. GRID3D-v2: An updated version of the GRID2D/3D computer program for generating grid systems in complex-shaped three-dimensional spatial domains

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.

    1991-01-01

    In order to generate good quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.

  17. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691

  18. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We present a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with the cell resolution and residual threshold increasing simultaneously from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate splines (TPS) until no more ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
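
A single-level, simplified sketch of residual-based TPS ground filtering (seed with per-cell minima, fit a surface, admit points with small residuals, iterate) is shown below on synthetic terrain. The thresholds, cell size, and data are assumptions, and the full MHC hierarchy is not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic point cloud: smooth terrain plus elevated vegetation returns.
rng = np.random.default_rng(6)
n = 400
xy = rng.uniform(0.0, 100.0, (n, 2))
ground = 0.05 * xy[:, 0] + 2.0 * np.sin(xy[:, 1] / 15.0)
is_veg = rng.random(n) < 0.3
z = ground + np.where(is_veg, rng.uniform(2.0, 10.0, n), 0.0)

# Seed the ground set with the lowest point in each 25 m x 25 m cell.
cell = (xy // 25.0).astype(int)
seed = np.zeros(n, bool)
for c in np.unique(cell, axis=0):
    idx = np.where((cell == c).all(axis=1))[0]
    seed[idx[np.argmin(z[idx])]] = True

ground_mask = seed.copy()
for _ in range(3):                       # iterate the surface toward the ground
    tps = RBFInterpolator(xy[ground_mask], z[ground_mask],
                          kernel='thin_plate_spline')
    residual = z - tps(xy)
    ground_mask |= residual < 0.5        # residual threshold (m)
```

Points well above the evolving TPS surface (vegetation) keep large residuals and stay excluded, while low-residual points densify the ground set; MHC repeats this at three resolutions with growing thresholds.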

  19. Convergence Rates for Multivariate Smoothing Spline Functions.

    DTIC Science & Technology

    1982-10-01

    Cox, Technical Summary Report #2437, October 1982. ABSTRACT: Given data z_i = g(t_i) + e_i, 1 ≤ i ≤ n, where g is the unknown function, the t_i are d-dimensional variables in a domain Ω, and the e_i are i.i.d. random errors, the smoothing spline estimate g_n is defined to be the …

  20. Recent advances in numerical PDEs

    NASA Astrophysics Data System (ADS)

    Zuev, Julia Michelle

    In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter [varepsilon] regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions s i,j ( u, v ) = [Special characters omitted.] h mn H m ( u ) H n ( v ), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this NxM system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. MATLAB implementation of our fast 2-D cubic spline algorithm is provided. 
We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the standard algorithm and is just as accurate. Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We research Richardson extrapolation-based techniques to improve time discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial, if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability. Topic 4. Shock waves and their energy dissipation properties are critical to understanding the dynamics controlling MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g. the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. We also analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
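
Topic 1's setting, Gaussian RBF interpolation with a shape parameter epsilon, can be sketched as follows on Runge's function. The per-node eps array makes a spatially variable shape parameter a one-line change, though a constant value is used here for illustration.

```python
import numpy as np

def rbf_interpolate(x_data, y_data, x_eval, eps):
    """Gaussian RBF interpolant; eps holds one shape parameter per node."""
    # Interpolation matrix A_ij = exp(-(eps_j * (x_i - x_j))^2).
    A = np.exp(-(eps[None, :] * (x_data[:, None] - x_data[None, :])) ** 2)
    coef = np.linalg.solve(A, y_data)
    # Evaluation matrix against the same centers and shape parameters.
    E = np.exp(-(eps[None, :] * (x_eval[:, None] - x_data[None, :])) ** 2)
    return E @ coef

x = np.linspace(-1.0, 1.0, 15)
y = 1.0 / (1.0 + 25.0 * x**2)            # Runge's function, the classic test case
xe = np.linspace(-1.0, 1.0, 101)
interp = rbf_interpolate(x, y, xe, np.full(15, 5.0))
```

Decreasing eps flattens the basis functions and improves interior accuracy but worsens conditioning and can trigger the Runge-phenomenon-like edge oscillations the thesis studies; a spatially variable eps (larger near the edges) is one way to suppress them.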

  1. Analysis of the cylinder’s movement characteristics after entering water based on CFD

    NASA Astrophysics Data System (ADS)

    Liu, Xianlong

    2017-10-01

    The cylinder undergoes variable-speed motion after it vertically enters the water. Dynamic-mesh simulations mostly rely on unstructured grids, and their results are often not ideal while consuming huge computing resources. Instead, a CFD method is used to calculate the resistance of the cylinder at a set of fixed velocities, and cubic spline interpolation is used to obtain the resistance at intermediate speeds. The finite difference method is then used to solve the motion equation, yielding the acceleration, velocity, displacement, and other physical quantities after the cylinder enters the water.
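
    The resistance-interpolation and time-stepping pipeline described above can be sketched as follows (a hedged illustration: the drag law, mass, and the force balance m·dv/dt = m·g − F(v) are invented stand-ins for the CFD data and the paper's actual motion equation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical drag samples at fixed speeds (stand-in for CFD results):
# here F(v) = 0.5 * v^2, so the terminal velocity is sqrt(m*g/0.5).
v_samples = np.linspace(0.0, 8.0, 9)
F_samples = 0.5 * v_samples ** 2
drag = CubicSpline(v_samples, F_samples)   # resistance at arbitrary speed

m, g, dt = 1.0, 9.81, 1e-3                 # mass, gravity, time step
v, z = 0.0, 0.0                            # velocity, displacement
for _ in range(10000):                     # 10 s of explicit time stepping
    a = g - drag(v) / m                    # acceleration from force balance
    v += a * dt                            # finite-difference update
    z += v * dt
# v approaches the terminal velocity sqrt(19.62) ≈ 4.43 m/s
```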

  2. An improved transmutation method for quantitative determination of the components in multicomponent overlapping chromatograms.

    PubMed

    Shao, Xueguang; Yu, Zhengliang; Ma, Chaoxiong

    2004-06-01

    An improved method is proposed for the quantitative determination of multicomponent overlapping chromatograms based on a known transmutation method. To overcome the main limitation of the transmutation method, the oscillation generated in the transmutation process, two techniques, wavelet transform smoothing and cubic spline interpolation for reducing data points, were adopted, and a new criterion was also developed. By using the proposed algorithm, the oscillation can be suppressed effectively, and quantitative determination of the components in both the simulated and experimental overlapping chromatograms is successfully obtained.

  3. An Investigation of Multivariate Adaptive Regression Splines for Modeling and Analysis of Univariate and Semi-Multivariate Time Series Systems

    DTIC Science & Technology

    1991-09-01

    However, there is no guarantee that this would work; for instance if the data were generated by an ARCH model (Tong, 1990 pp. 116-117) then a simple...Hill, R., Griffiths, W., Lutkepohl, H., and Lee, T., Introduction to the Theory and Practice of Econometrics, 2nd ed., Wiley, 1985. Kendall, M., Stuart

  4. Relationships between response surfaces for tablet characteristics of placebo and API-containing tablets manufactured by direct compression method.

    PubMed

    Hayashi, Yoshihiro; Tsuji, Takahiro; Shirotori, Kaede; Oishi, Takuya; Kosugi, Atsushi; Kumada, Shungo; Hirai, Daijiro; Takayama, Kozo; Onuki, Yoshinori

    2017-10-30

    In this study, we evaluated the correlation between the response surfaces for the tablet characteristics of placebo and active pharmaceutical ingredient (API)-containing tablets. The quantities of lactose, cornstarch, and microcrystalline cellulose were chosen as the formulation factors. Ten tablet formulations were prepared. The tensile strength (TS) and disintegration time (DT) of tablets were measured as tablet characteristics. The response surfaces for TS and DT were estimated using a nonlinear response surface method incorporating multivariate spline interpolation, and were then compared with those of placebo tablets. A correlation was clearly observed for TS and DT of all APIs, although the value of the response surfaces for TS and DT was highly dependent on the type of API used. Based on this knowledge, the response surfaces for TS and DT of API-containing tablets were predicted from only two and four formulations using regression expression and placebo tablet data, respectively. The results from the evaluation of prediction accuracy showed that this method accurately predicted TS and DT, suggesting that it could construct a reliable response surface for TS and DT with a small number of samples. This technique assists in the effective estimation of the relationships between design variables and pharmaceutical responses during pharmaceutical development. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    PubMed

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.
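
    The final surface-fitting step can be sketched with SciPy's RBFInterpolator, which supports a thin plate spline kernel (a minimal illustration with synthetic "particle" points, not the authors' pipeline):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(200, 2))        # fissure particle (x, y)
z = 0.3 + 0.1 * xy[:, 0] - 0.2 * xy[:, 1]        # synthetic fissure height

# Thin plate spline surface z = f(x, y) through the particle locations.
tps = RBFInterpolator(xy, z, kernel='thin_plate_spline')

grid = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), axis=-1).reshape(-1, 2)
z_surf = tps(grid)                               # evaluate the lobe boundary
```

    A TPS with zero smoothing interpolates the particles exactly; its built-in affine polynomial term means this synthetic planar "fissure" is reproduced everywhere.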

  6. Automatic Lung Lobe Segmentation Using Particles, Thin Plate Splines, and Maximum a Posteriori Estimation

    PubMed Central

    Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.

    2011-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396

  7. A Gridded Daily Min/Max Temperature Dataset With 0.1° Resolution for the Yangtze River Valley and its Error Estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Qiufen; Hu, Jianglin

    2013-05-01

    The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation scheme is developed that combines the 3D thin-plate spline method for the climatological mean with the 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained with the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on long time-scales. The anomaly field is only weakly related to elevation variation, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybrid interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations, namely national reference climatological stations, basic meteorological observing stations, and ordinary meteorological observing stations, in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98), but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley as realistic, continuous gridded data with 0.1° × 0.1° spatial resolution at a daily temporal scale. The primary factors influencing the dataset's precision are elevation and terrain complexity. In general, the gridded dataset has relatively high precision in plains and flatlands and relatively low precision in mountainous areas.
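
    The 2D Barnes step for the anomaly component can be sketched as a one-pass Gaussian-weighted objective analysis (our simplification: operational Barnes schemes are typically multi-pass, and the station locations, anomaly values, and weight parameter kappa here are invented):

```python
import numpy as np

def barnes_analysis(xy_obs, anom_obs, xy_grid, kappa=0.01):
    """One-pass Barnes objective analysis: each grid value is a
    Gaussian-distance-weighted average of the station anomalies,
    with weights exp(-r^2 / kappa)."""
    d2 = ((xy_grid[:, None, :] - xy_obs[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / kappa)
    return (w @ anom_obs) / w.sum(axis=1)

rng = np.random.default_rng(0)
stations = rng.uniform(0.0, 1.0, size=(50, 2))   # station lon/lat (scaled)
anom = 0.5 * np.ones(50)                         # uniform 0.5 °C anomaly
grid = np.stack(np.meshgrid(np.linspace(0, 1, 11),
                            np.linspace(0, 1, 11)), axis=-1).reshape(-1, 2)
field = barnes_analysis(stations, anom, grid)
# a weighted average of a constant anomaly returns that constant everywhere
```

    A gridded temperature would then be the interpolated climatological mean plus the analyzed anomaly.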

  8. [Multivariate Adaptive Regression Splines (MARS), an alternative for the analysis of time series].

    PubMed

    Vanegas, Jairo; Vásquez, Fabián

    Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model, incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: selecting relevant variables, transforming the predictor variables, processing missing values, and preventing overfitting using a self-test. It is also able to predict, taking into account structural factors that might influence the outcome variable, thereby generating hypothetical models. The end result could identify relevant cut-off points in data series. It is rarely used in health, so it is proposed as a tool for the evaluation of relevant public health indicators. For demonstrative purposes, data series regarding the mortality of children under 5 years of age in Costa Rica were used, comprising the period 1978-2008. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  9. Adaptive Multilinear Tensor Product Wavelets

    DOE PAGES

    Weiss, Kenneth; Lindstrom, Peter

    2015-08-12

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. Finally, we focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.

  10. Holes in the ocean: Filling voids in bathymetric lidar data

    NASA Astrophysics Data System (ADS)

    Coleman, John B.; Yao, Xiaobai; Jordan, Thomas R.; Madden, Marguerite

    2011-04-01

    The mapping of coral reefs may be efficiently accomplished by the use of airborne laser bathymetry. However, there are often data holes within the bathymetry data which must be filled in order to produce a complete representation of the coral habitat. This study presents a method to fill these data holes through data merging and interpolation. The method first merges ancillary digital sounding data with airborne laser bathymetry data in order to populate data points in all areas, particularly those with data holes. The second step generates an elevation surface by spatial interpolation based on the merged data points obtained in the first step. We conducted a case study of the Dry Tortugas National Park in Florida and produced an enhanced digital elevation model of the ocean floor with this method. Four interpolation techniques, Kriging, natural neighbor, spline, and inverse distance weighted, were implemented and evaluated on their ability to accurately and realistically represent the shallow-water bathymetry of the study area. The natural neighbor technique was found to be the most effective. Finally, this enhanced digital elevation model was used in conjunction with Ikonos imagery to produce a complete, three-dimensional visualization of the study area.
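
    One of the four techniques compared, inverse distance weighting, is simple enough to sketch directly (synthetic depths and void locations; the study itself found natural neighbor most effective):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0):
    """Inverse distance weighted interpolation: each grid depth is a
    weighted mean of observed depths, with weights 1/d^power."""
    d = np.sqrt(((xy_grid[:, None, :] - xy_obs[None, :, :]) ** 2).sum(-1))
    d = np.maximum(d, 1e-12)            # guard against r = 0 at data points
    w = d ** -power
    return (w @ z_obs) / w.sum(axis=1)

# merged lidar + sounding points (synthetic stand-ins)
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, size=(100, 2))
depth = -5.0 - 10.0 * pts[:, 0]                 # deeper toward one side
holes = rng.uniform(0.2, 0.8, size=(20, 2))     # void locations to fill
filled = idw(pts, depth, holes)
```

    Because IDW is a weighted mean, filled values always stay within the range of the observed depths, which is one reason it can look overly smooth next to natural neighbor.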

  11. Application of spatial methods to identify areas with lime requirement in eastern Croatia

    NASA Astrophysics Data System (ADS)

    Bogunović, Igor; Kisic, Ivica; Mesic, Milan; Zgorelec, Zeljka; Percin, Aleksandra; Pereira, Paulo

    2016-04-01

    With more than 50% of all agricultural land in Croatia on acid soils, soil acidity is recognized as a serious problem. Low soil pH leads to a series of negative phenomena in plant production, and therefore liming, recommended on the basis of soil analysis, is a compulsory reclamation measure for acid soils. The need for liming is often erroneously determined only on the basis of soil pH, because determination of the cation exchange capacity, hydrolytic acidity, and base saturation is a major cost to producers. Therefore, in Croatia, as in some other countries, the amount of liming material needed to ameliorate acid soils is calculated from their hydrolytic acidity. The purpose of this study was to test several interpolation methods to identify the best spatial predictor of hydrolytic acidity, and to determine the possibility of using multivariate geostatistics to reduce the number of samples needed to determine hydrolytic acidity, all without significantly reducing the accuracy of the spatial distribution of the liming requirement. Soil pH (in KCl) and hydrolytic acidity (Y1) were determined in 1004 samples (0-30 cm) collected at random in agricultural fields near Orahovica in eastern Croatia. The study tested 14 univariate interpolation models (part of the ArcGIS software package) to provide the most accurate spatial map of hydrolytic acidity on the basis of all samples (Y1 100%) and of datasets with 15% (Y1 85%), 30% (Y1 70%), and 50% (Y1 50%) fewer samples. In parallel with the univariate interpolation methods, the precision of the spatial distribution of Y1 was tested by co-kriging with exchangeable acidity (pH in KCl) as a covariate. The soils of the studied area had an average pH (KCl) of 4.81, while the average Y1 was 10.52 cmol+ kg-1. These data suggest that liming is a necessary agrotechnical measure for soil conditioning. The results show that ordinary kriging was the most accurate univariate interpolation method, with the smallest error (RMSE) in all four datasets, while Radial Basis Functions (Thin Plate Spline and Inverse Multiquadratic) were the least precise. Furthermore, a trend of increasing error (RMSE) with a reduced number of samples is noticeable for the most accurate univariate interpolation model: 3.096 (Y1 100%), 3.258 (Y1 85%), 3.317 (Y1 70%), 3.546 (Y1 50%). The best-fit semivariograms show a strong spatial dependence for Y1 100% (Nugget/Sill 20.19%) and Y1 85% (Nugget/Sill 23.83%), while a further reduction of the number of samples resulted in moderate spatial dependence (Y1 70%: 35.85% and Y1 50%: 32.01%). The co-kriging method reduced the RMSE compared with the univariate interpolation methods for each dataset: 2.054, 1.731, and 1.734 for Y1 85%, Y1 70%, and Y1 50%, respectively. The results show the possibility of reducing sampling costs by using the co-kriging method, which is useful from a practical viewpoint. Halving the number of samples for determination of hydrolytic acidity, in interaction with soil pH, provides higher precision for variable liming than univariate interpolation of the entire dataset. These findings provide new opportunities to reduce costs in practical plant production in Croatia.

  12. Artifacts Introduced in the Point Evaluation of Functions Expanded into a Degree 360 Spherical Harmonic Series

    NASA Technical Reports Server (NTRS)

    Rapp, R.

    1999-01-01

    An expansion of a function initially given in 1° cells was carried out to degree 360 by using 30' cells whose value was initially assigned to be the value of the 1° cell in which it fell. The evaluation of point values of the function from the degree 360 expansion revealed spurious patterns attributed to the coefficients from degree 181 to 360. Expansion of the original function in 1° cells to degree 180 showed no problems in the point evaluation. Mean 1° values computed from both the degree 180 and degree 360 expansions showed close agreement with the original function. The artifacts could be removed if the 30' values were interpolated by spline procedures from adjacent 1° cells. These results led to an examination of the gravity anomalies and geoid undulations from EGM96 in areas where 1° values were "split up" to form 30' cells. The area considered was 75°S to 85°S, 100°E to 120°E, where the split-up cells were basically south of 81°S. A small, latitude-related, and possibly spurious effect might be detectable in anomaly variations in the region. These results suggest that point values of a function computed from a high degree expansion may have spurious signals unless the cell size is compatible with the maximum degree of expansion. The spurious signals could be eliminated by using a spline interpolation procedure to obtain the 30' values from the 1° values.

  13. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

    Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their implementation in weather hazard monitoring systems, such as for flash floods, is still not optimal. To increase the benefit for meteorological applications, the GPS system should be installed in collocation with meteorological sensors so that the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receiver installations and meteorological sensors in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E and latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using mean absolute error (MAE), root mean square error (RMSE), and the coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results evaluated at the different phases (pre-, onset, and post-flash flood) showed that the tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. 
    The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high accuracy, which in turn can be used to predict future floods.

  14. Precipitation Interpolation by Multivariate Bayesian Maximum Entropy Based on Meteorological Data in Yun-Gui-Guang region, Mainland China

    NASA Astrophysics Data System (ADS)

    Wang, Chaolin; Zhong, Shaobo; Zhang, Fushen; Huang, Quanyi

    2016-11-01

    Precipitation interpolation has been an active area of research for many years, and it is closely related to meteorological factors. In this paper, precipitation from 91 meteorological stations located in and around Yunnan, Guizhou, and Guangxi Zhuang provinces (or autonomous region), Mainland China, was considered for spatial interpolation. A multivariate Bayesian maximum entropy (BME) method with auxiliary variables, including mean relative humidity, water vapour pressure, mean temperature, mean wind speed, and terrain elevation, was used to obtain a more accurate regional distribution of annual precipitation. The means, standard deviations, skewness, and kurtosis of the meteorological factors were calculated. Variograms and cross-variograms were fitted between precipitation and the auxiliary variables. The results showed that the multivariate BME method was precise, incorporating both hard and soft data in the form of probability density functions. Annual mean precipitation was positively correlated with mean relative humidity, mean water vapour pressure, mean temperature, and mean wind speed, and negatively correlated with terrain elevation. The results are expected to provide a substantial reference for research on drought and waterlogging in the region.

  15. Evaluating motion processing algorithms for use with functional near-infrared spectroscopy data from young children.

    PubMed

    Delgado Reyes, Lourdes M; Bohache, Kevin; Wijeakumar, Sobanawartiny; Spencer, John P

    2018-04-01

    Motion artifacts are often a significant component of the measured signal in functional near-infrared spectroscopy (fNIRS) experiments. A variety of methods have been proposed to address this issue, including principal components analysis (PCA), correlation-based signal improvement (CBSI), wavelet filtering, and spline interpolation. The efficacy of these techniques has been compared using simulated data; however, our understanding of how these techniques fare when dealing with task-based cognitive data is limited. Brigadoi et al. compared motion correction techniques in a sample of adult data measured during a simple cognitive task. Wavelet filtering showed the most promise as an optimal technique for motion correction. Given that fNIRS is often used with infants and young children, it is critical to evaluate the effectiveness of motion correction techniques directly with data from these age groups. This study addresses that problem by evaluating motion correction algorithms implemented in HomER2. The efficacy of each technique was compared quantitatively using objective metrics related to the physiological properties of the hemodynamic response. Results showed that targeted PCA (tPCA), spline, and CBSI retained a higher number of trials. These techniques also performed well in direct head-to-head comparisons with the other approaches using quantitative metrics. The CBSI method corrected many of the artifacts present in our data; however, this approach sometimes produced unstable hemodynamic response functions (HRFs). The targeted PCA and spline methods proved to be the most robust, performing well across all comparison metrics. When compared head to head, tPCA consistently outperformed spline. We conclude, therefore, that tPCA is an effective technique for correcting motion artifacts in fNIRS data from young children.

  16. A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems

    NASA Astrophysics Data System (ADS)

    Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix

    2018-03-01

    We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.

  17. Trajectory Generation by Piecewise Spline Interpolation

    DTIC Science & Technology

    1976-04-01

    [OCR-garbled equation excerpt. Recoverable content: the piecewise cubic has the form p(x) = a0 + a1*x + a2*x^2 + a3*x^3 (Eq. 21), with coefficients obtained from Eq. (20), e.g. a0 = f_i (Eq. 22) and a1 = f'_i (Eq. 23); a rotation takes the reference frame to the vehicle-fixed frame; a0, a1, a2, a3 denote the coefficients of the piecewise cubic polynomials, and [B] is a tridiagonal matrix.]

  18. Prediction of energy expenditure and physical activity in preschoolers

    USDA-ARS?s Scientific Manuscript database

    Accurate, nonintrusive, and feasible methods are needed to predict energy expenditure (EE) and physical activity (PA) levels in preschoolers. Herein, we validated cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on accelerometry and heart rate (HR) ...

  19. A MATLAB®-based program for 3D visualization of stratigraphic setting and subsidence evolution of sedimentary basins: example application to the Vienna Basin

    NASA Astrophysics Data System (ADS)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2015-04-01

    In recent years, 3D visualization of sedimentary basins has become increasingly popular. Stratigraphic and structural mapping is highly important for understanding the internal setting of sedimentary basins, and subsequent subsidence analysis provides significant insights into basin evolution. This study focused on developing a simple and user-friendly program which allows geologists to analyze and model sedimentary basin data. The developed program is aimed at stratigraphic and subsidence modelling of sedimentary basins from well or stratigraphic profile data. The program is based on two numerical methods: surface interpolation and subsidence analysis. For surface visualization, four interpolation techniques (Linear, Natural, Cubic Spline, and Thin-Plate Spline) are provided. The subsidence analysis consists of decompaction and backstripping techniques. The numerical methods are computed in MATLAB®, a multi-paradigm numerical computing environment used extensively in academic, research, and industrial fields. The program consists of five main processing steps: 1) setup (study area and stratigraphic units), 2) loading of well data, 3) stratigraphic modelling (depth distribution and isopach plots), 4) subsidence parameter input, and 5) subsidence modelling (subsided depth and subsidence rate plots). The graphical user interface intuitively guides users through all process stages and provides tools to analyse and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options. All functions of the program are illustrated with a case study of Miocene sediments in the Vienna Basin. 
The basin is an ideal place to test this program, because sufficient data is available to analyse and model stratigraphic setting and subsidence evolution of the basin. The study area covers approximately 1200 km2 including 110 data points in the central part of the Vienna Basin.
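
    The surface-interpolation step can be approximated in Python with scipy.interpolate.griddata, which offers nearest, linear, and cubic methods (the Natural and Thin-Plate Spline options of the MATLAB program have no direct griddata equivalent; the well positions and horizon depths below are synthetic):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
wells = rng.uniform(0.0, 1.0, size=(120, 2))               # well positions (x, y)
depth = 100.0 + 50.0 * wells[:, 0] + 20.0 * wells[:, 1]    # horizon depth (m)

# evaluation grid kept inside the convex hull of the wells
gx, gy = np.meshgrid(np.linspace(0.2, 0.8, 15), np.linspace(0.2, 0.8, 15))
surfaces = {m: griddata(wells, depth, (gx, gy), method=m)
            for m in ('nearest', 'linear', 'cubic')}

# the piecewise-linear surface reproduces this planar horizon exactly
err = np.nanmax(np.abs(surfaces['linear'] - (100.0 + 50.0 * gx + 20.0 * gy)))
```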

  20. Measurement of intervertebral cervical motion by means of dynamic x-ray image processing and data interpolation.

    PubMed

    Bifulco, Paolo; Cesarelli, Mario; Romano, Maria; Fratini, Antonio; Sansone, Mario

    2013-01-01

    Accurate measurement of intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion only permit concise measurement in vivo. Low dose X-ray fluoroscopy allows time-continuous screening of cervical spine during patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Estimation of intervertebral motion for cervical segments was obtained by processing patient's fluoroscopic sequence; intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS value of fitting errors resulted in about 0.2 degree for rotation and 0.2 mm for displacements.
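
    The smoothing-spline step can be sketched with SciPy's UnivariateSpline (an illustration with a synthetic angle trace and noise level, not the clinical data; the smoothing factor s is set to the usual n·σ² heuristic):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0.0, 2.0, 100)                 # frame times (s)
true_angle = 10.0 * np.sin(np.pi * t)          # intervertebral angle (deg)
rng = np.random.default_rng(2)
tracked = true_angle + rng.normal(0.0, 0.2, t.size)   # tracking noise

# smoothing spline: s bounds the sum of squared residuals (here n * sigma^2)
spline = UnivariateSpline(t, tracked, k=3, s=t.size * 0.2 ** 2)
smoothed = spline(t)

rms_raw = np.sqrt(np.mean((tracked - true_angle) ** 2))
rms_fit = np.sqrt(np.mean((smoothed - true_angle) ** 2))
```

    The fitted spline is differentiable, so angular velocity (and hence the instantaneous centre of rotation) can be derived from the smoothed, time-continuous curve rather than from the noisy frame-to-frame differences.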

  1. Estimation of missing values in solar radiation data using piecewise interpolation methods: Case study at Penang city

    NASA Astrophysics Data System (ADS)

    Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu

    2015-12-01

    Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records the incident radiation, and these data are very useful for experimental work and for solar device development. In addition, complete observational data are needed for modeling and designing solar radiation system applications. Unfortunately, incomplete solar radiation records frequently occur due to several technical problems, mainly with the monitoring device. To deal with this, missing values are estimated so that absent values can be substituted with imputed data. This paper evaluates several piecewise interpolation techniques, such as linear, spline, cubic, and nearest-neighbor interpolation, for dealing with missing values in hourly solar radiation data. It then proposes, as an extension, an investigation of the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.
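
    The piecewise-interpolation imputation can be sketched with scipy.interpolate.interp1d (a synthetic clear-sky profile and invented gap hours; the Bezier and Said-Ball estimators proposed by the authors are not in SciPy and are omitted):

```python
import numpy as np
from scipy.interpolate import interp1d

hours = np.arange(24, dtype=float)
solar = np.maximum(0.0, 800.0 * np.sin(np.pi * (hours - 6.0) / 12.0))  # W/m^2
observed = solar.copy()
gaps = np.array([9, 13, 17])          # hours lost to sensor dropouts
observed[gaps] = np.nan

# fit each piecewise interpolant on the valid hours, then fill the gaps
ok = ~np.isnan(observed)
filled = {kind: interp1d(hours[ok], observed[ok], kind=kind)(hours[gaps])
          for kind in ('nearest', 'linear', 'cubic')}
err_linear = np.abs(filled['linear'] - solar[gaps])
```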

  2. A comparison of spatial interpolation methods for soil temperature over a complex topographical region

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tang, Xiao-Ping; Ma, Xue-Qing; Liu, Hong-Bin

    2016-08-01

    Soil temperature variability data provide valuable information for understanding land-surface ecosystem processes and climate change. This study developed and analyzed a spatial dataset of monthly mean soil temperature at a depth of 10 cm over a complex topographical region in southwestern China. The records were measured at 83 stations during the period 1961-2000. Nine approaches were compared for interpolating soil temperature. The accuracy indicators were root mean square error (RMSE), modelling efficiency (ME), and coefficient of residual mass (CRM). The results indicated that thin plate spline with latitude, longitude, and elevation gave the best performance, with RMSE varying between 0.425 and 0.592 °C, ME between 0.895 and 0.947, and CRM between -0.007 and 0.001. A spatial database was developed based on the best model. The dataset showed that the larger seasonal changes of soil temperature occurred from autumn to winter over the region. The northern and eastern areas, with hilly and low-middle mountain terrain, experienced the larger seasonal changes.
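
    The three accuracy indicators can be computed as follows (standard definitions: ME is the Nash-Sutcliffe modelling efficiency and CRM the coefficient of residual mass; the sample values are invented):

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error."""
    return np.sqrt(np.mean((pred - obs) ** 2))

def me(obs, pred):
    """Modelling (Nash-Sutcliffe) efficiency: 1 is a perfect fit."""
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def crm(obs, pred):
    """Coefficient of residual mass: 0 means no systematic over/under-estimate."""
    return (np.sum(obs) - np.sum(pred)) / np.sum(obs)

obs = np.array([12.1, 14.3, 9.8, 11.5, 13.0])    # observed soil temp (°C)
pred = np.array([12.4, 14.0, 10.1, 11.2, 13.3])  # cross-validated estimates
scores = (rmse(obs, pred), me(obs, pred), crm(obs, pred))
```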

  3. Newer classification and regression tree techniques: Bagging and Random Forests for ecological prediction

    Treesearch

    Anantha M. Prasad; Louis R. Iverson; Andy Liaw

    2006-01-01

    We evaluated four statistical models - Regression Tree Analysis (RTA), Bagging Trees (BT), Random Forests (RF), and Multivariate Adaptive Regression Splines (MARS) - for predictive vegetation mapping under current and future climate scenarios according to the Canadian Climate Centre global circulation model.

  4. A RUTCOR Project on Discrete Applied Mathematics

    DTIC Science & Technology

    1989-01-30

    the more important results of this work is the possibility that Groebner basis methods of computational commutative algebra might lead to effective...Billera, L.J., " Groebner Basis Methods for Multivariate Splines," prepared for the Proceedings of the Oslo Conference on Computer-aided Geometric Design

  5. Multivariate qualitative analysis of banned additives in food safety using surface enhanced Raman scattering spectroscopy

    NASA Astrophysics Data System (ADS)

    He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei

    2015-02-01

    A novel strategy which combines an iteratively cubic spline fitting baseline correction method with discriminant partial least squares qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food, and malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining iteratively cubic spline fitting (ICSF) baseline correction with principal component analysis (PCA) and discriminant partial least squares (DPLS) classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information of differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory analysis methodology DPLS classification can potentially be used for distinguishing banned food additives in the field of food safety.

  6. A New Approach of Juvenile Age Estimation using Measurements of the Ilium and Multivariate Adaptive Regression Splines (MARS) Models for Better Age Prediction.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal

    2017-01-01

    Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models were tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module and area give overall better and statistically valid age estimates. These models integrate punctual nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase of individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.
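    MARS models are built from hinge basis functions max(0, x - t) joined at knots, which is how they integrate the punctual nonlinearities mentioned above. A toy sketch of the basis; the knot and coefficients are hypothetical, not the study's fitted values:

```python
import numpy as np

def hinge(x, knot):
    """MARS hinge basis function max(0, x - knot)."""
    return np.maximum(0.0, x - knot)

def toy_mars_age(width, knot=90.0, b0=6.0, b1=0.15, b2=-0.08):
    """Toy piecewise-linear age model: the slope changes at the knot.

    Knot and coefficients are invented for illustration only.
    """
    return b0 + b1 * hinge(width, knot) + b2 * np.maximum(0.0, knot - width)

widths = np.array([60.0, 90.0, 120.0])
print(toy_mars_age(widths))  # different slopes below and above the knot
```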

  7. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.

  8. Integrating Growth Variability of the Ilium, Fifth Lumbar Vertebra, and Clavicle with Multivariate Adaptive Regression Splines Models for Subadult Age Estimation.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Telmon, Norbert; Chaumoitre, Kathia; Adalian, Pascal

    2018-05-31

    Subadult age estimation should rely on sampling and statistical protocols capturing development variability for more accurate age estimates. In this perspective, measurements were taken on the fifth lumbar vertebrae and/or clavicles of 534 French males and females aged 0-19 years and the ilia of 244 males and females aged 0-12 years. These variables were fitted in nonparametric multivariate adaptive regression splines (MARS) models with 95% prediction intervals (PIs) of age. The models were tested on two independent samples from Marseille and the Luis Lopes reference collection from Lisbon. Models using ilium width and module, maximum clavicle length, and lateral vertebral body heights were more than 92% accurate. Precision was lower for postpubertal individuals. Integrating punctual nonlinearities of the relationship between age and the variables and dynamic prediction intervals incorporated the normal increase in interindividual growth variability (heteroscedasticity of variance) with age for more biologically accurate predictions. © 2018 American Academy of Forensic Sciences.

  9. Modelling daily dissolved oxygen concentration using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    NASA Astrophysics Data System (ADS)

    Heddam, Salim; Kisi, Ozgur

    2018-04-01

    In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T), are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality variables data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The selected water quality data consisted of daily measurements of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm), and discharge (DI, cfs), which were used as inputs to the LSSVM, MARS and M5T models. The three models were applied for each station separately and compared to each other. According to the results obtained, it was found that: (i) the DO concentration could be successfully estimated using the three models and (ii) the best model among all others differs from one station to another.

  10. Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment

    NASA Astrophysics Data System (ADS)

    Bernauer, J. C.; Diefenbach, J.; Elbakian, G.; Gavrilov, G.; Goerrissen, N.; Hasell, D. K.; Henderson, B. S.; Holler, Y.; Karyan, G.; Ludwig, J.; Marukyan, H.; Naryshkin, Y.; O'Connor, C.; Russell, R. L.; Schmidt, A.; Schneekloth, U.; Suvorov, K.; Veretennikov, D.

    2016-07-01

    The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
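    The pattern of precomputing field values on an evenly spaced grid and interpolating between grid points can be sketched with scipy's regular-grid interpolator (piecewise-linear here for simplicity; the experiment's custom scheme is tricubic and SIMD-optimized, and the analytic "field" below is a stand-in):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Pre-compute a smooth stand-in "field" on an evenly spaced grid.
x = y = z = np.linspace(-1.0, 1.0, 21)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
field = np.exp(-(X**2 + Y**2 + Z**2))

# Interpolate between grid points instead of re-evaluating the expensive model.
interp = RegularGridInterpolator((x, y, z), field)  # linear by default
pt = np.array([[0.05, -0.13, 0.42]])
approx = float(interp(pt)[0])
exact = float(np.exp(-np.sum(pt**2)))
print(approx, exact)
```

    The trade-off the record describes is exactly this one: grid storage and a cheap local interpolation in place of a slow global field model.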

  11. GENIE(++): A Multi-Block Structured Grid System

    NASA Technical Reports Server (NTRS)

    Williams, Tonya; Nadenthiran, Naren; Thornburg, Hugh; Soni, Bharat K.

    1996-01-01

    The computer code GENIE++ is a continuously evolving grid system containing a multitude of proven geometry/grid techniques. The generation process in GENIE++ is based on an earlier version. The process uses several techniques either separately or in combination to quickly and economically generate sculptured geometry descriptions and grids for arbitrary geometries. The computational mesh is formed by using an appropriate algebraic method. Grid clustering is accomplished with either exponential or hyperbolic tangent routines which allow the user to specify a desired point distribution. Grid smoothing can be accomplished by using an elliptic solver with proper forcing functions. B-spline and Non-Uniform Rational B-spline (NURBS) algorithms are used for surface definition and redistribution. The built-in sculptured geometry definition with desired distribution of points, automatic Bezier curve/surface generation for interior boundaries/surfaces, and surface redistribution is based on NURBS. Weighted Lagrange/Hermite transfinite interpolation methods, interactive geometry/grid manipulation modules, and on-line graphical visualization of the generation process are salient features of this system which result in significant time savings for a given geometry/grid application.

  12. Beta-function B-spline smoothing on triangulations

    NASA Astrophysics Data System (ADS)

    Dechevsky, Lubomir T.; Zanaty, Peter

    2013-03-01

    In this work we investigate a novel family of Ck-smooth rational basis functions on triangulations for fitting, smoothing, and denoising geometric data. The introduced basis function is closely related to a recently introduced general method utilizing generalized expo-rational B-splines, which provides Ck-smooth convex resolutions of unity on very general disjoint partitions and overlapping covers of multidimensional domains with complex geometry. One of the major advantages of this new triangular construction is its locality with respect to the star-1 neighborhood of the vertex on which the said basis provides Hermite interpolation. This locality of the basis functions can in turn be utilized in adaptive methods, where, for instance, a local refinement of the underlying triangular mesh affects only the refined domain, whereas in other methods one needs to investigate what changes occur outside of the refined domain. Both the triangular and the general smooth constructions have the potential to become a new versatile tool of Computer Aided Geometric Design (CAGD), Finite and Boundary Element Analysis (FEA/BEA) and Iso-geometric Analysis (IGA).

  13. TARGETED PRINCIPLE COMPONENT ANALYSIS: A NEW MOTION ARTIFACT CORRECTION APPROACH FOR NEAR-INFRARED SPECTROSCOPY

    PubMed Central

    YÜCEL, MERYEM A.; SELB, JULIETTE; COOPER, ROBERT J.; BOAS, DAVID A.

    2014-01-01

    As near-infrared spectroscopy (NIRS) broadens its application area to different age and disease groups, motion artifacts in the NIRS signal due to subject movement are becoming an important challenge. Motion artifacts generally produce signal fluctuations that are larger than physiological NIRS signals, thus it is crucial to correct for them before obtaining an estimate of stimulus evoked hemodynamic responses. There are various methods for correction such as principle component analysis (PCA), wavelet-based filtering and spline interpolation. Here, we introduce a new approach to motion artifact correction, targeted principle component analysis (tPCA), which incorporates a PCA filter only on the segments of data identified as motion artifacts. It is expected that this will overcome the issues of filtering desired signals that plague standard PCA filtering of entire data sets. We compared the new approach with the most effective motion artifact correction algorithms on a set of data acquired simultaneously with a collodion-fixed probe (low motion artifact content) and a standard Velcro probe (high motion artifact content). Our results show that tPCA gives statistically better results in recovering the hemodynamic response function (HRF) as compared to wavelet-based filtering and spline interpolation for the Velcro probe. It results in a significant reduction in mean-squared error (MSE) and a significant enhancement in Pearson’s correlation coefficient to the true HRF. The collodion-fixed fiber probe with no motion correction performed better than the Velcro probe corrected for motion artifacts in terms of MSE and Pearson’s correlation coefficient. Thus, if the experimental study permits, the use of a collodion-fixed fiber probe may be desirable. If the use of a collodion-fixed probe is not feasible, then we suggest the use of tPCA in the processing of motion artifact contaminated data. PMID:25360181
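    A simplified sketch of the tPCA idea, removing the leading principal component only within samples flagged as motion artifact (an illustrative reduction of the approach, not the authors' full algorithm):

```python
import numpy as np

def tpca_correct(signals, artifact_mask, n_remove=1):
    """Remove the leading n_remove principal components, estimated from and
    subtracted only within the samples flagged as motion artifact.

    signals: (n_samples, n_channels) array; artifact_mask: boolean (n_samples,).
    """
    corrected = signals.copy()
    seg = signals[artifact_mask]
    centered = seg - seg.mean(axis=0)
    # SVD right-singular vectors are the principal directions of the segment.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top = vt[:n_remove]                        # (n_remove, n_channels)
    corrected[artifact_mask] = seg - centered @ top.T @ top
    return corrected
```

    Samples outside the flagged segments pass through untouched, which is the point of targeting the filter rather than applying PCA to the entire data set.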

  14. Modeled distribution and abundance of a pelagic seabird reveal trends in relation to fisheries

    USGS Publications Warehouse

    Renner, Martin; Parrish, Julia K.; Piatt, John F.; Kuletz, Kathy J.; Edwards, Ann E.; Hunt, George L.

    2013-01-01

    The northern fulmar Fulmarus glacialis is one of the most visible and widespread seabirds in the eastern Bering Sea and Aleutian Islands. However, relatively little is known about its abundance, trends, or the factors that shape its distribution. We used a long-term pelagic dataset to model changes in fulmar at-sea distribution and abundance since the mid-1970s. We used an ensemble model, based on a weighted average of generalized additive model (GAM), multivariate adaptive regression splines (MARS), and random forest models to estimate the pelagic distribution and density of fulmars in the waters of the Aleutian Archipelago and Bering Sea. The most important predictor variables were colony effect, sea surface temperature, distribution of fisheries, location, and primary productivity. We calculated a time series from the ratio of observed to predicted values and found that fulmar at-sea abundance declined from the 1970s to the 2000s at a rate of 0.83% (± 0.39% SE) per annum. Interpolating fulmar densities on a spatial grid through time, we found that the center of fulmar distribution in the Bering Sea has shifted north, coinciding with a northward shift in fish catches and a warming ocean. Our study shows that fisheries are an important, but not the only factor, shaping fulmar distribution and abundance trends in the eastern Bering Sea and Aleutian Islands.

  15. Latent structure analysis of the process variables and pharmaceutical responses of an orally disintegrating tablet.

    PubMed

    Hayashi, Yoshihiro; Oshima, Etsuko; Maeda, Jin; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo

    2012-01-01

    A multivariate statistical technique was applied to the design of an orally disintegrating tablet and to clarify the causal correlation among variables of the manufacturing process and pharmaceutical responses. Orally disintegrating tablets (ODTs) composed mainly of mannitol were prepared via the wet-granulation method using crystal transition from the δ to the β form of mannitol. Process parameters (water amounts (X(1)), kneading time (X(2)), compression force (X(3)), and amounts of magnesium stearate (X(4))) were optimized using a nonlinear response surface method (RSM) incorporating a thin plate spline interpolation (RSM-S). The results of a verification study revealed that the experimental responses, such as tensile strength and disintegration time, coincided well with the predictions. A latent structure analysis of the pharmaceutical formulations of the tablet performed using a Bayesian network led to the clear visualization of a causal connection among variables of the manufacturing process and tablet characteristics. The quantity of β-mannitol in the granules (Q(β)) was affected by X(2) and influenced all granule properties. The specific surface area of the granules was affected by X(1) and Q(β) and had an effect on all tablet characteristics. Moreover, the causal relationships among the variables were clarified by inferring conditional probability distributions. These techniques provide a better understanding of the complicated latent structure among variables of the manufacturing process and tablet characteristics.
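    Thin plate spline interpolation of a response surface over process parameters, the core of RSM-S, can be sketched with scipy's RBF interpolator (scipy >= 1.7 assumed; the design points and response values are invented, not the study's data):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical design points: (water amount, kneading time) -> tensile strength.
settings = np.array([[1.0, 5.0], [2.0, 5.0], [1.0, 10.0], [2.0, 10.0], [1.5, 7.5]])
response = np.array([1.2, 1.8, 1.5, 2.4, 1.7])

# Thin plate spline surface passing exactly through the design points.
surface = RBFInterpolator(settings, response, kernel="thin_plate_spline")
print(surface(np.array([[1.25, 6.0]])))  # predicted response at a new setting
```

    The fitted surface can then be searched for an optimum of the response, which is how a nonlinear RSM is typically used for process optimization.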

  16. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    NASA Astrophysics Data System (ADS)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the inner Earth's electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, in particular, sea floor stations due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is thought to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, in which the eight nearest electric field estimates are combined in a weighted average, with the weights determined by the technique. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity.
We are also adapting some of the techniques discussed in Shantsev et al (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new forward model during each iteration of the inversion.
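    Standard trilinear interpolation, the baseline scheme whose weights the record describes modifying with conductivity ratios, averages the eight surrounding node values with tensor-product weights; a minimal sketch:

```python
import numpy as np

def trilinear(corner_values, fx, fy, fz):
    """Interpolate inside one grid cell from its 8 corner values.

    corner_values: shape (2, 2, 2); fx, fy, fz: fractional position in [0, 1].
    """
    wx = np.array([1.0 - fx, fx])
    wy = np.array([1.0 - fy, fy])
    wz = np.array([1.0 - fz, fz])
    # Tensor-product weights: corner (i, j, k) gets wx[i] * wy[j] * wz[k].
    weights = np.einsum("i,j,k->ijk", wx, wy, wz)
    return float(np.sum(weights * corner_values))
```

    The modification described above would rescale these weights with cross-boundary conductivity ratios (and renormalize) before averaging, so that interpolated fields respect current continuity across cell faces.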

  17. Impact of rain gauge quality control and interpolation on streamflow simulation: an application to the Warwick catchment, Australia

    NASA Astrophysics Data System (ADS)

    Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

    2017-12-01

    Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. The unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including the inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary Kriging (OK), were implemented. The four methods were firstly assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and Bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good rainfall interpolations, while the NN led to the worst result. In terms of the impact on hydrological prediction, the IDW led to the most consistent streamflow predictions with the observations, according to the validation at five streamflow-gauged locations. 
The OK method performed second best according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating a low robustness. While the IDW exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods need to be further explored.
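    Two of the pieces above, IDW interpolation and the Nash–Sutcliffe efficiency, have compact standard forms; a sketch (the power parameter and toy gauge data are illustrative):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: gauge weights fall off as 1 / d**power."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0.0):              # query point coincides with a gauge
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency of simulated vs observed streamflow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))
```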

  19. On the interpolation of volumetric water content in research catchments

    NASA Astrophysics Data System (ADS)

    Dlamini, Phesheya; Chaplot, Vincent

    Digital Soil Mapping (DSM) is widely used in the environmental sciences because of its accuracy and efficiency in producing soil maps compared to traditional soil mapping. Numerous studies have investigated how the sampling density and the interpolation process of data points affect the prediction quality. While the interpolation process is straightforward for primary attributes such as soil gravimetric water content (θg) and soil bulk density (ρb), the DSM of volumetric water content (θv), the product of θg and ρb, may either involve direct interpolation of θv (approach 1) or independent interpolation of ρb and θg data points and subsequent multiplication of the ρb and θg maps (approach 2). The main objective of this study was to compare the accuracy of these two mapping approaches for θv. A 23 ha grassland catchment in KwaZulu-Natal, South Africa was selected for this study. A total of 317 data points were randomly selected and sampled during the dry season in the topsoil (0-0.05 m) for θg and ρb estimation. Data points were interpolated following approaches 1 and 2, and using inverse distance weighting with 3 or 12 neighboring points (IDW3; IDW12), regular spline with tension (RST) and ordinary kriging (OK). Based on an independent validation set of 70 data points, OK was the best interpolator for ρb (mean absolute error, MAE of 0.081 g cm-3), while θg was best estimated using IDW12 (MAE = 1.697%) and θv by IDW3 (MAE = 1.814%). It was found that approach 1 underestimated θv. Approach 2 tended to overestimate θv, but reduced the prediction bias by an average of 37% and only improved the prediction accuracy by 1.3% compared to approach 1. Such a great benefit of approach 2 (i.e., the subsequent multiplication of interpolated maps of primary variables) was unexpected considering that a higher sampling density (∼14 data points ha-1 in the present study) tends to minimize the differences between interpolation techniques and approaches. 
In the context of much lower sampling densities, as generally encountered in environmental studies, one can thus expect approach 2 to yield significantly greater accuracy than approach 1. Approach 2 seems promising and can be further tested for the DSM of other secondary variables.

  20. A RUTCOR Project in Discrete Applied Mathematics

    DTIC Science & Technology

    1990-02-20

    representations of smooth piecewise polynomial functions over triangulated regions have led in particular to the conclusion that Groebner basis methods of...Reversing Number of a Digraph," in preparation. 4. Billera, L.J., and Rose, L.L., " Groebner Basis Methods for Multivariate Splines," RRR 1-89, January

  1. Elliptic surface grid generation in three-dimensional space

    NASA Technical Reports Server (NTRS)

    Kania, Lee

    1992-01-01

    A methodology for surface grid generation in three dimensional space is described. The method solves a Poisson equation for each coordinate on arbitrary surfaces using successive line over-relaxation. The complete surface curvature terms were discretized and retained within the nonhomogeneous term in order to preserve surface definition; there is no need for conventional surface splines. Control functions were formulated to permit control of grid orthogonality and spacing. A method for interpolation of control functions into the domain was devised which permits their specification not only at the surface boundaries but within the interior as well. An interactive surface generation code which makes use of this methodology is currently under development.

  2. Craniofacial morphometric analysis of mandibular prognathism.

    PubMed

    Chang, H P; Liu, P H; Yang, Y H; Lin, H C; Chang, C H

    2006-03-01

    The purpose of this study was to provide more information about the morphological characteristics of the craniofacial complex in mandibular prognathism. Forty young adult males having mandibular prognathism were compared with 40 having normal occlusion. Geometric morphometric assessments were carried out to localize alterations, using Procrustes analysis and thin-plate spline analysis, in addition to conventional cephalometric techniques. Procrustes analysis indicated that the mean craniofacial, midfacial and mandibular morphology was significantly different in prognathic subjects compared with normal controls. This finding was corroborated by the multivariate Hotelling T(2)-test of cephalometric variables. Mandibular prognathism demonstrated a shorter and slightly retropositioned maxilla, and a greater total length and anterior positioning of the mandible. Thin-plate spline analysis revealed a developmental diminution of the palatomaxillary region anteroposteriorly and a developmental elongation of the mandible anteroposteriorly, leading to the appearance of a prognathic mandibular profile. In conclusion, thin-plate spline analysis seems to provide a valuable supplement to conventional cephalometric analysis because the complex patterns of craniofacial shape change are visualized intuitively by means of grid deformations.

  3. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighborhood interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression (GWR) methods were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D thin plate splines (TPS). The relative rank of the different techniques changes with station density and rainfall amount. Our results indicate that TPS performs well for low station density and large-scale events, and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.

  4. Prediction of energy expenditure from heart rate and accelerometry in children and adolescents using multivariate adaptive regression splines modeling

    USDA-ARS?s Scientific Manuscript database

    Free-living measurements of 24-h total energy expenditure (TEE) and activity energy expenditure (AEE) are required to better understand the metabolic, physiological, behavioral, and environmental factors affecting energy balance and contributing to the global epidemic of childhood obesity. The spec...

  5. A New Predictive Model of Centerline Segregation in Continuous Cast Steel Slabs by Using Multivariate Adaptive Regression Splines Approach

    PubMed Central

    García Nieto, Paulino José; González Suárez, Victor Manuel; Álvarez Antón, Juan Carlos; Mayo Bayón, Ricardo; Sirgo Blanco, José Ángel; Díaz Fernández, Ana María

    2015-01-01

    The aim of this study was to obtain a predictive model able to perform early detection of central segregation severity in continuous cast steel slabs. Segregation in steel cast products is an internal defect that can be very harmful when slabs are rolled in heavy plate mills. In this research work, the central segregation was studied successfully using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique, considering the most important physical-chemical parameters. The results of the present study are twofold. First, the significance of each physical-chemical variable for segregation is presented through the model. Second, a model for forecasting segregation is obtained. Regression with optimal hyperparameters was performed, and when the MARS technique was applied to the experimental dataset, coefficients of determination of 0.93 for the continuity factor estimation and 0.95 for the average width were obtained. The agreement between experimental data and the model confirmed the good performance of the latter.

  6. Multivariate qualitative analysis of banned additives in food safety using surface enhanced Raman scattering spectroscopy.

    PubMed

    He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei

    2015-02-25

    A novel strategy that combines an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze surface enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food and malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction as spectral preprocessing with principal component analysis (PCA) and DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples; the DPLS classification, however, can discriminate the class assignment of unknown banned additives using the information in the differences of relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.
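    The iterative spline-fitting baseline idea in this abstract can be sketched with standard tools: fit a smoothing spline to the spectrum, clamp the working signal to the current fit so peaks are pushed out of the estimate, and repeat. This is a minimal illustration under assumed settings (synthetic spectrum, iteration count, smoothing factor), not the authors' exact ICSF algorithm.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def iterative_spline_baseline(x, y, n_iter=20, smooth=None):
    """Estimate a baseline by repeatedly fitting a smoothing spline and clamping
    the working signal to the current fit, which pushes the peaks out."""
    work = y.copy()
    for _ in range(n_iter):
        base = UnivariateSpline(x, work, s=smooth)(x)
        work = np.minimum(work, base)     # keep only what lies below the fit
    return base

# synthetic spectrum: slow baseline drift plus two peaks
x = np.linspace(0.0, 100.0, 400)
drift = 0.02 * x + 0.5
peaks = (3.0 * np.exp(-0.5 * ((x - 30.0) / 1.5) ** 2)
         + 2.0 * np.exp(-0.5 * ((x - 70.0) / 2.0) ** 2))
y = drift + peaks
baseline = iterative_spline_baseline(x, y, smooth=float(len(x)))
corrected = y - baseline
```

    The clamping step makes the spline track the lower envelope (the drift) rather than the peaks, so subtracting it leaves peaks on a roughly flat baseline.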

  7. Design and Testing of a Flexible Inclinometer Probe for Model Tests of Landslide Deep Displacement Measurement.

    PubMed

    Zhang, Yongquan; Tang, Huiming; Li, Changdong; Lu, Guiying; Cai, Yi; Zhang, Junrong; Tan, Fulin

    2018-01-14

    The physical model test of landslides is important for studying landslide structural damage, and parameter measurement is key in this process. To meet the measurement requirements for deep displacement in landslide physical models, an automatic flexible inclinometer probe with good coupling and large deformation capacity was designed. The probe consists of several gravity acceleration sensing units that are protected and positioned by silicone encapsulation; all units are connected to a 485 communication bus. By sensing the two-axis tilt angle, the direction and magnitude of the displacement of a measurement unit can be calculated, and the overall displacement is then accumulated over all units, integrating from bottom to top in turn. In the conversion from angle to displacement, two spline interpolation methods are introduced to correct and resample the data: one interpolates the displacement after conversion, and the other interpolates the angle before conversion. Compared with the result read from checkered paper, the latter proves to have a better effect, provided that the displacement curve is shifted up by half the length of a unit. The flexible inclinometer is verified with respect to its principle and arrangement by a laboratory physical model test, and the test results are highly consistent with the actual deformation of the landslide model.
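    The angle-before-conversion variant described above can be sketched as follows: spline-interpolate the measured tilt profile along the probe, convert each resampled segment to a horizontal increment, and accumulate from the fixed bottom end upward. The unit depths, tilt values and resampling density are hypothetical, not the paper's data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def displacement_from_tilts(depths, tilts_rad, n_resample=200):
    """Spline-interpolate the tilt profile *before* converting to displacement,
    then accumulate horizontal increments from the fixed bottom end upward."""
    spline = CubicSpline(depths, tilts_rad)
    z = np.linspace(depths[0], depths[-1], n_resample)
    dz = z[1] - z[0]
    increments = dz * np.sin(spline(z))   # horizontal offset of each segment
    return z, np.cumsum(increments)

# hypothetical probe: six sensing units over 1 m, bottom (0.0) to top (1.0)
depths = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # m
tilts = np.radians([0.0, 1.0, 2.5, 4.0, 3.0, 2.0])       # measured tilt angles
z, disp = displacement_from_tilts(depths, tilts)
```

    Resampling the angles before conversion lets the accumulated profile vary smoothly between units instead of jumping at unit boundaries.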

  8. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation is provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
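    The spline-interpolation route to ΔF described above can be sketched with SciPy: interpolate ⟨∂U/∂λ⟩ samples taken at non-equidistant λ values with a cubic spline and integrate it analytically over [0, 1]. The integrand below is a synthetic stand-in with the known answer ΔF = 1, not simulation data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical thermodynamic integration data: <dU/dlambda> sampled at
# non-equidistant lambda values, clustered toward the endpoints where the
# integrand usually varies fastest.
lam = np.array([0.0, 0.05, 0.15, 0.35, 0.65, 0.85, 0.95, 1.0])
dudl = 3.0 * lam**2                    # synthetic integrand, exact integral 1.0

spline = CubicSpline(lam, dudl)        # not-a-knot cubic spline through the data
delta_F = spline.integrate(0.0, 1.0)   # analytic integral of the fitted spline
```

    Because the synthetic integrand is a polynomial of degree at most three, the spline reproduces it exactly and the estimate matches the analytic ΔF; with real simulation averages the spline only approximates the integrand.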

  9. Design and Testing of a Flexible Inclinometer Probe for Model Tests of Landslide Deep Displacement Measurement

    PubMed Central

    Zhang, Yongquan; Tang, Huiming; Li, Changdong; Lu, Guiying; Cai, Yi; Zhang, Junrong; Tan, Fulin

    2018-01-01

    The physical model test of landslides is important for studying landslide structural damage, and parameter measurement is key in this process. To meet the measurement requirements for deep displacement in landslide physical models, an automatic flexible inclinometer probe with good coupling and large deformation capacity was designed. The probe consists of several gravity acceleration sensing units that are protected and positioned by silicone encapsulation; all units are connected to a 485 communication bus. By sensing the two-axis tilt angle, the direction and magnitude of the displacement of a measurement unit can be calculated, and the overall displacement is then accumulated over all units, integrating from bottom to top in turn. In the conversion from angle to displacement, two spline interpolation methods are introduced to correct and resample the data: one interpolates the displacement after conversion, and the other interpolates the angle before conversion. Compared with the result read from checkered paper, the latter proves to have a better effect, provided that the displacement curve is shifted up by half the length of a unit. The flexible inclinometer is verified with respect to its principle and arrangement by a laboratory physical model test, and the test results are highly consistent with the actual deformation of the landslide model. PMID:29342902

  10. Flip-avoiding interpolating surface registration for skull reconstruction.

    PubMed

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  11. A characteristics-based method for solving the transport equation and its application to the process of mantle differentiation and continental root growth

    NASA Astrophysics Data System (ADS)

    de Smet, Jeroen H.; van den Berg, Arie P.; Vlaar, Nico J.; Yuen, David A.

    2000-03-01

    Purely advective transport of composition is of major importance in the Geosciences, and efficient and accurate solution methods are needed. A characteristics-based method is used to solve the transport equation. We employ a new hybrid interpolation scheme, which allows for the tuning of stability and accuracy through a threshold parameter ε_th. Stability is established by bilinear interpolations, and bicubic splines are used to maintain accuracy. With this scheme, numerical instabilities can be suppressed by allowing numerical diffusion to work in time and locally in space. The scheme can be applied efficiently for preliminary modelling purposes, followed by detailed high-resolution experiments. First, the principal effects of this hybrid interpolation method are illustrated and some tests are presented for numerical solutions of the transport equation. Second, we illustrate that this approach works successfully for a previously developed continental evolution model for the convecting upper mantle, in which the transport equation contains a source term describing the melt production in pressure-released partial melting. In this model, a characteristic phenomenon of small-scale melting diapirs is observed (De Smet et al. 1998, 1999). High-resolution experiments with grid cells down to 700 m horizontally and 515 m vertically result in highly detailed observations of the diapiric melting phenomenon.

  12. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    PubMed

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach, with a speedup that scales quadratically with the number of partitions. D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds that of CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  13. [Glossary of terms used by radiologists in image processing].

    PubMed

    Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.

  14. Topographic relationships for design rainfalls over Australia

    NASA Astrophysics Data System (ADS)

    Johnson, F.; Hutchinson, M. F.; The, C.; Beesley, C.; Green, J.

    2016-02-01

    Design rainfall statistics are the primary inputs used to assess flood risk across river catchments. These statistics normally take the form of Intensity-Duration-Frequency (IDF) curves that are derived from extreme value probability distributions fitted to observed daily, and sub-daily, rainfall data. The design rainfall relationships are often required for catchments where there are limited rainfall records, particularly catchments in remote areas with high topographic relief and hence some form of interpolation is required to provide estimates in these areas. This paper assesses the topographic dependence of rainfall extremes by using elevation-dependent thin plate smoothing splines to interpolate the mean annual maximum rainfall, for periods from one to seven days, across Australia. The analyses confirm the important impact of topography in explaining the spatial patterns of these extreme rainfall statistics. Continent-wide residual and cross validation statistics are used to demonstrate the 100-fold impact of elevation in relation to horizontal coordinates in explaining the spatial patterns, consistent with previous rainfall scaling studies and observational evidence. The impact of the complexity of the fitted spline surfaces, as defined by the number of knots, and the impact of applying variance stabilising transformations to the data, were also assessed. It was found that a relatively large number of 3570 knots, suitably chosen from 8619 gauge locations, was required to minimise the summary error statistics. Square root and log data transformations were found to deliver marginally superior continent-wide cross validation statistics, in comparison to applying no data transformation, but detailed assessments of residuals in complex high rainfall regions with high topographic relief showed that no data transformation gave superior performance in these regions. 
These results are consistent with the understanding that in areas with modest topographic relief, as for most of the Australian continent, extreme rainfall is closely aligned with elevation, but in areas with high topographic relief the impacts of topography on rainfall extremes are more complex. The interpolated extreme rainfall statistics, using no data transformation, have been used by the Australian Bureau of Meteorology to produce new IDF data for the Australian continent. The comprehensive methods presented for the evaluation of gridded design rainfall statistics will be useful for similar studies, in particular the importance of balancing the need for a continentally-optimum solution that maintains sufficient definition at the local scale.
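    Elevation-dependent thin plate smoothing splines of the kind described above can be approximated with SciPy's RBF interpolator by appending a strongly scaled elevation coordinate to the horizontal coordinates; the scale factor plays the role of the roughly 100-fold elevation weighting reported in the study. The gauge data, scale and smoothing below are illustrative assumptions, not the study's actual fitting procedure.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(80, 2))      # gauge horizontal coordinates, km
elev = rng.uniform(0.0, 2.0, size=(80, 1))      # gauge elevations, km
# synthetic "mean annual maximum rainfall" with a strong elevation dependence
rain = 50.0 + 40.0 * elev[:, 0] + 0.1 * xy[:, 0] + rng.normal(0.0, 1.0, 80)

scale = 100.0                                    # weight elevation far above x, y
pts3d = np.hstack([xy, scale * elev])
tps = RBFInterpolator(pts3d, rain, kernel="thin_plate_spline", smoothing=1.0)

est = float(tps(np.array([[50.0, 50.0, scale * 1.0]])))   # query at 1 km elevation
```

    Scaling the elevation axis makes two gauges at different heights "far apart" to the spline even when they are horizontally close, which is how the fitted surface picks up the orographic signal.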

  15. Improved gravity anomaly fields from retracked multimission satellite radar altimetry observations over the Persian Gulf and the Caspian Sea

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Forootan, E.; Sharifi, M. A.; Awange, J.; Kuhn, M.

    2015-09-01

    Satellite radar altimetry observations are used to derive short wavelength gravity anomaly fields over the Persian Gulf and the Caspian Sea, where in situ and ship-borne gravity measurements have limited spatial coverage. In this study the retracking algorithm `Extrema Retracking' (ExtR) was employed to improve sea surface height (SSH) measurements that are highly biased in the study regions due to land contaminations in the footprints of the satellite altimetry observations. ExtR was applied to the waveforms sampled by the five satellite radar altimetry missions: TOPEX/POSEIDON, JASON-1, JASON-2, GFO and ERS-1. Along-track slopes have been estimated from the improved SSH measurements and used in an iterative process to estimate deflections of the vertical, and subsequently, the desired gravity anomalies. The main steps of the gravity anomaly computations involve estimating improved SSH using the ExtR technique, computing deflections of the vertical from interpolated SSHs on a regular grid using a biharmonic spline interpolation and finally estimating gridded gravity anomalies. A remove-compute-restore algorithm, based on the fast Fourier transform, has been applied to convert deflections of the vertical into gravity anomalies. Finally, spline interpolation has been used to estimate regular gravity anomaly grids over the two study regions. Results were evaluated by comparing the estimated altimetry-derived gravity anomalies (with and without implementing the ExtR algorithm) with ship-borne free air gravity anomaly observations, and free air gravity anomalies from the Earth Gravitational Model 2008 (EGM2008). The comparison indicates a range of 3-5 mGal in the residuals, which were computed by taking the differences between the retracked altimetry-derived gravity anomaly and the ship-borne data. 
The comparison of retracked data with ship-borne data indicates a range in the root-mean-square error (RMSE) between approximately 1.8 and 4.4 mGal and a bias between 0.4062 and 2.1413 mGal over different areas. A maximum RMSE of 4.4069 mGal, with a mean value of 0.7615 mGal, was obtained for the residuals. An average improvement of 5.2746 mGal in the RMSE of the altimetry-derived gravity anomalies, corresponding to 89.9 per cent, was obtained after applying the ExtR post-processing.


  16. Joint surface modeling with thin-plate splines.

    PubMed

    Boyd, S K; Ronsky, J L; Lichti, D D; Salkauskas, K; Chapman, M A; Salkauskas, D

    1999-10-01

    Mathematical joint surface models based on experimentally determined data points can be used to investigate joint characteristics such as curvature, congruency, cartilage thickness, joint contact areas, as well as to provide geometric information well suited for finite element analysis. Commonly, surface modeling methods are based on B-splines, which involve tensor products. These methods have had success; however, they are limited due to the complex organizational aspect of working with surface patches, and modeling unordered, scattered experimental data points. An alternative method for mathematical joint surface modeling is presented based on the thin-plate spline (TPS). It has the advantage that it does not involve surface patches, and can model scattered data points without experimental data preparation. An analytical surface was developed and modeled with the TPS to quantify its interpolating and smoothing characteristics. Some limitations of the TPS include discontinuity of curvature at exactly the experimental surface data points, and numerical problems dealing with data sets in excess of 2000 points. However, suggestions for overcoming these limitations are presented. Testing the TPS with real experimental data, the patellofemoral joint of a cat was measured with multistation digital photogrammetry and modeled using the TPS to determine cartilage thicknesses and surface curvature. The cartilage thickness distribution ranged between 100 to 550 microns on the patella, and 100 to 300 microns on the femur. It was found that the TPS was an effective tool for modeling joint surfaces because no preparation of the experimental data points was necessary, and the resulting unique function representing the entire surface does not involve surface patches. A detailed algorithm is presented for implementation of the TPS.
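    The interpolating-versus-smoothing behaviour of the TPS discussed above can be demonstrated on scattered data with SciPy's RBF interpolator: with zero smoothing the single global surface passes through every data point (no patches, no data preparation), while a positive smoothing parameter trades exactness for a smoother surface. The synthetic surface below is an assumption.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))              # scattered digitized points
z = np.cos(2.0 * pts[:, 0]) * np.sin(2.0 * pts[:, 1])    # synthetic surface heights

# smoothing=0 (default): the TPS interpolates, passing through every data point
tps_interp = RBFInterpolator(pts, z, kernel="thin_plate_spline")
# positive smoothing: trades exact interpolation for a smoother surface
tps_smooth = RBFInterpolator(pts, z, kernel="thin_plate_spline", smoothing=1e-2)

interp_resid = np.max(np.abs(tps_interp(pts) - z))
smooth_resid = np.max(np.abs(tps_smooth(pts) - z))
```

    The smoothing parameter is the practical handle for noisy digitized joint data: it suppresses measurement noise at the cost of non-zero residuals at the data points.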

  17. Generation of global VTEC maps from low latency GNSS observations based on B-spline modelling and Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte

    2015-04-01

    In May 2014 DGFI-TUM (the former DGFI) and the German Space Situational Awareness Centre (GSSAC) started to develop an OPerational Tool for Ionospheric Mapping And Prediction (OPTIMAP); in November 2014 the Institute of Astrophysics at the University of Göttingen (IAG) joined the group as the third partner. This project aims at the computation and prediction of maps of the vertical total electron content (VTEC) and of the electron density distribution of the ionosphere on a global scale, from various space-geodetic observation techniques, such as GNSS and satellite altimetry, as well as from Sun observations. In this contribution we present first results, i.e. a near-real-time processing framework for generating VTEC maps by assimilating GNSS (GPS, GLONASS) based ionospheric data into a two-dimensional global B-spline approach. To be more specific, the spatial variations of VTEC are modelled by trigonometric B-spline functions in longitude and by endpoint-interpolating polynomial B-spline functions in latitude. Since B-spline functions are compactly supported and highly localizing, our approach can handle large data gaps appropriately and thus provides a better approximation of data with heterogeneous density and quality than the commonly used spherical harmonics. The presented method models temporal variations of VTEC inside a Kalman filter. The unknown parameters of the filter state vector are composed of the B-spline coefficients as well as the satellite and receiver DCBs. To approximate the temporal variation of these state vector components within the filter, a dynamical model has to be set up. The current implementation of the filter allows selecting between a random walk process, a Gauss-Markov process and a dynamic process driven by an empirical ionosphere model, e.g. the International Reference Ionosphere (IRI). 
For running the model, ionospheric input data are acquired from terrestrial GNSS networks through online archive systems (such as IGS) with approximately one hour latency. Before feeding the filter with new hourly data, the raw GNSS observations are downloaded and pre-processed via geometry-free linear combinations to provide signal delay information including the ionospheric effects and the differential code biases. Next steps will be to implement further space-geodetic techniques and to introduce the Sun observations into the procedure. The final goal is to develop a time-dependent model of the electron density based on different geodetic and solar observations.
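    The assimilation scheme described, a Kalman filter whose state holds B-spline coefficients under a random-walk dynamic model, can be sketched in one dimension (longitude only; the real model is two-dimensional and also estimates DCBs). The knot vector, noise levels and synthetic VTEC signal below are all assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

# 1-D sketch: VTEC along longitude is modelled by B-spline coefficients that
# evolve as a random walk and are updated from noisy observations.
deg = 3
knots = np.concatenate([[0.0] * deg, np.linspace(0.0, 360.0, 8), [360.0] * deg])
n = len(knots) - deg - 1                    # number of B-spline coefficients

def design_matrix(lons):
    """One row of B-spline basis values per observation longitude."""
    eye = np.eye(n)
    return np.column_stack([BSpline(knots, eye[i], deg)(lons) for i in range(n)])

x = np.zeros(n)                             # state: B-spline coefficients
P = np.eye(n) * 100.0                       # state covariance
Q = np.eye(n) * 0.1                         # random-walk process noise
R_var = 0.5                                 # observation noise variance

rng = np.random.default_rng(2)
true_vtec = lambda lon: 20.0 + 10.0 * np.sin(np.radians(lon))
for _ in range(50):                         # hourly batches of GNSS-derived VTEC
    lons = rng.uniform(0.0, 360.0, 30)
    zobs = true_vtec(lons) + rng.normal(0.0, R_var**0.5, lons.size)
    H = design_matrix(lons)
    P = P + Q                               # predict (random-walk dynamic model)
    S = H @ P @ H.T + R_var * np.eye(lons.size)
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (zobs - H @ x)              # measurement update
    P = (np.eye(n) - K @ H) @ P

vtec_at_90 = float(design_matrix(np.array([90.0])) @ x)
```

    Because each basis function is compactly supported, a batch of observations over one longitude band mainly updates the nearby coefficients, which is the locality advantage over spherical harmonics noted in the abstract.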

  18. An image mosaic method based on corner

    NASA Astrophysics Data System (ADS)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    In view of the shortcomings of traditional image mosaicking, this paper describes a new image mosaic algorithm based on Harris corners. Firstly, a Harris operator, combined with a constructed low-pass smoothing filter based on spline functions and a circular search window, is applied to detect image corners; this gives better localisation performance and effectively avoids corner clustering. Secondly, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus (RANSAC). Finally, a weighted trigonometric method combined with an interpolation function is used for image fusion. The experiments show that this method can effectively remove splicing ghosting and improve the accuracy of image mosaicking.

  19. Optoelectronic imaging of speckle using image processing method

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image processing workflow for laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are used together in the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is likewise based on the heat equation with PDEs; the central line is extracted based on the image skeleton, with branches removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase can then be unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire detection.

  20. Biped Robot Gait Planning Based on 3D Linear Inverted Pendulum Model

    NASA Astrophysics Data System (ADS)

    Yu, Guochen; Zhang, Jiapeng; Bo, Wu

    2018-01-01

    In order to optimize the biped robot's gait, the robot's walking motion is simplified to the 3D linear inverted pendulum motion model. The Center of Mass (CoM) locus is determined from the relationship between the CoM and the Zero Moment Point (ZMP) locus, which is planned in advance. Then the forward and lateral gaits are simplified as a connecting-rod structure, and the swing-leg trajectory is generated using B-spline interpolation. The stability of the walking process is discussed in conjunction with the ZMP equation. Finally, a system simulation is carried out under the given conditions to verify the validity of the proposed planning method.
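    The swing-leg planning step can be illustrated with a clamped B-spline through lift-off, apex and touch-down points; clamped boundary conditions enforce zero foot velocity at both ends. The via points and apex height are hypothetical. (For the CoM, 3D-LIPM planning rests on the ZMP relation p = x − (z_c/g)ẍ, which links the pre-planned ZMP locus to the CoM locus.)

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical via points: lift-off, mid-swing apex and touch-down heights
t_knots = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # normalized swing phase
z_knots = np.array([0.0, 0.04, 0.06, 0.04, 0.0])  # foot height in metres

# "clamped" enforces zero first derivative (zero foot velocity) at both ends
spl = make_interp_spline(t_knots, z_knots, k=3, bc_type="clamped")

t = np.linspace(0.0, 1.0, 101)
z_foot = spl(t)
```

    Zero end velocities avoid impact at touch-down, and the same construction applies per axis for the full 3D foot trajectory.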

  1. Intravesical dosimetry applied to laser positioning in photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Beslon, Guillaume; Ambroise, Philippe; Heit, Bernard; Bremont, Jacques; Guillemin, Francois H.

    1996-12-01

    Superficial bladder tumor is a challenging indication for photodynamic therapy. Due to the lack of specificity of the sensitizers, the light delivered by an isotropic source has to be precisely monitored over the bladder surface to restrict the cytotoxic effect to the tumor without affecting the normal epithelium. In order to assist the surgeon during therapy, an urothelium illumination model is proposed. It is computed through spline interpolation on the basis of 12 intravesical sensors. This paper presents the overall system architecture and details the modeling and visualization processes. With this model, the surgeon is able to master the source displacement inside the bladder and to homogenize the tissue exposure.

  2. Numerical Manifold Method for the Forced Vibration of Thin Plates during Bending

    PubMed Central

    Jun, Ding; Song, Chen; Wei-Bin, Wen; Shao-Ming, Luo; Xia, Huang

    2014-01-01

    A novel numerical manifold method was derived from the cubic B-spline basis function. The new interpolation function is characterized by high-order coordination at the boundary of a manifold element. The linear elastic-dynamic equation used to solve the bending vibration of thin plates was derived according to the principle of minimum instantaneous potential energy. The method for the initialization of the dynamic equation and its solution process were provided. Moreover, the analysis showed that the calculated stiffness matrix exhibited favorable performance. Numerical results showed that the generalized degrees of freedom were significantly fewer and that the calculation accuracy was higher for the manifold method than for the conventional finite element method. PMID:24883403
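    A property that makes cubic B-spline bases attractive as interpolation functions in such manifold methods is that they form a partition of unity with high-order smoothness. A quick numerical check (the knot vector and evaluation grid are arbitrary illustrative choices):

```python
import numpy as np
from scipy.interpolate import BSpline

deg = 3
# clamped (open) knot vector on [0, 1] -- an arbitrary illustrative choice
knots = np.concatenate([[0.0] * deg, np.linspace(0.0, 1.0, 6), [1.0] * deg])
n = len(knots) - deg - 1                # number of basis functions

x = np.linspace(0.0, 1.0, 201)
# evaluate each basis function by giving BSpline a unit coefficient vector
basis = np.column_stack([BSpline(knots, np.eye(n)[i], deg)(x) for i in range(n)])
partition = basis.sum(axis=1)           # cubic B-spline bases sum to one
```

    Partition of unity means a constant field is reproduced exactly, a basic consistency requirement for the interpolation on a manifold element.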

  3. Resolution-enhancement and sampling error correction based on molecular absorption line in frequency scanning interferometry

    NASA Astrophysics Data System (ADS)

    Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating

    2018-06-01

    The non-uniform interval resampling method has been widely used in frequency modulated continuous wave (FMCW) laser ranging. In large-bandwidth, long-distance measurements, however, the range peak deteriorates because of fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By using an adjacent-point replacement and spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute-distance precision better than 45 μm within 8 m.
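    The resampling step, taking a beat signal recorded uniformly in time and spline-interpolating it onto a uniform optical-frequency grid so that the range peak sharpens, can be sketched as follows. The sweep nonlinearity and beat frequency are synthetic assumptions, not the paper's measurement.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Nonlinear sweep: optical frequency vs. time (monotonic but not linear)
t = np.linspace(0.0, 1.0, 2000)
f_sweep = t + 0.1 * np.sin(2.0 * np.pi * t)

# Beat signal: constant frequency in optical frequency, chirped in time
beat = np.cos(2.0 * np.pi * 50.0 * f_sweep)

# Resample the beat signal on a uniform optical-frequency grid via a spline
resampler = CubicSpline(f_sweep, beat)
f_uniform = np.linspace(f_sweep[0], f_sweep[-1], 2000)
beat_resampled = resampler(f_uniform)

# After resampling, the FFT concentrates the energy in a single range bin
spectrum = np.abs(np.fft.rfft(beat_resampled * np.hanning(2000)))
peak_bin = int(np.argmax(spectrum))
```

    In a real system the frequency axis comes from an auxiliary channel (e.g. an interferometer or, as here, a molecular absorption reference), and residual dispersion errors in that axis are what the correction step removes.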

  4. Systems of Inhomogeneous Linear Equations

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many problems in physics, and especially computational physics, involve systems of linear equations which arise, e.g., from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large, standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives; they can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of the Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
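    The specialized Gaussian elimination for tridiagonal systems mentioned above is the Thomas algorithm: one forward sweep eliminates the sub-diagonal, one backward sweep substitutes, giving an O(n) solve instead of O(n³). A sketch with an illustrative second-difference-style matrix:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (length n-1), b = diagonal
    (length n), c = super-diagonal (length n-1), d = right-hand side."""
    n = len(b)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                   # forward elimination sweep
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):          # back substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# the 1-4-1 pattern that arises in cubic spline systems, as an example
n = 6
a = np.full(n - 1, 1.0)
b = np.full(n, 4.0)
c = np.full(n - 1, 1.0)
d = np.arange(1.0, n + 1.0)
x = thomas(a, b, c, d)
```

    The sweep is stable without pivoting when the matrix is diagonally dominant, as it is for the natural and not-a-knot cubic spline systems.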

  5. Equilibrium Spline Interface (ESI) for magnetic confinement codes

    NASA Astrophysics Data System (ADS)

    Li, Xujing; Zakharov, Leonid E.

    2017-12-01

    A compact and comprehensive interface between magneto-hydrodynamic (MHD) equilibrium codes and gyro-kinetic, particle orbit, MHD stability, and transport codes is presented. Its irreducible set of equilibrium data consists of three (in the 2-D case with occasionally one extra in the 3-D case) functions of coordinates and four 1-D radial profiles together with their first and mixed derivatives. The C reconstruction routines, accessible also from FORTRAN, allow the calculation of basis functions and their first derivatives at any position inside the plasma and in its vicinity. After this all vector fields and geometric coefficients, required for the above mentioned types of codes, can be calculated using only algebraic operations with no further interpolation or differentiation.

  6. Air quality mapping using GIS and economic evaluation of health impact for Mumbai City, India.

    PubMed

    Kumar, Awkash; Gupta, Indrani; Brandt, Jørgen; Kumar, Rakesh; Dikshit, Anil Kumar; Patil, Rashmi S

    2016-05-01

    Mumbai, a highly populated city in India, has been selected for air quality mapping and assessment of health impact using monitored air quality data. Air quality monitoring networks in Mumbai are operated by the National Environment Engineering Research Institute (NEERI), the Maharashtra Pollution Control Board (MPCB), and the Brihanmumbai Municipal Corporation (BMC). A monitoring station represents air quality at a particular location, whereas air quality management requires spatial variation. Here, monitored air quality data from NEERI and BMC were spatially interpolated using various built-in interpolation techniques of ArcGIS. Inverse distance weighting (IDW), Kriging (spherical and Gaussian), and spline techniques were applied for spatial interpolation in this study. The interpolated results for the air pollutants sulfur dioxide (SO2), nitrogen dioxide (NO2), and suspended particulate matter (SPM) were compared with air quality data of MPCB in the same region. The comparison showed good agreement between the observed data and the values predicted using IDW and Kriging. Subsequently, a health impact assessment of a ward was carried out based on the total population of the ward and the air quality data monitored within it. Finally, the health cost within a ward was estimated on the basis of the exposed population. This study helps to estimate the valuation of health damage due to air pollution. Operating more air quality monitoring stations for measurement of air quality is highly resource intensive in terms of time and cost. Appropriate spatial interpolation techniques can be used to estimate concentrations where air quality monitoring stations are not available. Further, health impact assessment for the population of the city and estimation of the economic cost of health damage due to ambient air quality can help in making rational control strategies for environmental management.
The total health cost for Mumbai city for the year 2012, with a population of 12.4 million, was estimated as USD8000 million.
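
    Of the spatial interpolation techniques compared above, inverse distance weighting has the simplest closed form; a textbook sketch in Python (not ArcGIS's implementation; names are illustrative):

```python
import math

def idw(stations, values, query, power=2.0):
    """Inverse distance weighting: a weighted mean of station values,
    with weights 1/d**power for distance d to the query point."""
    num = 0.0
    den = 0.0
    for (x, y), v in zip(stations, values):
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return v  # query coincides with a station
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

    For example, a pollutant concentration at an unmonitored location can be estimated as `idw(stations, so2_values, query)` from the surrounding monitoring stations.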

  7. Cross-sectional time series and multivariate adaptive regression splines models using accelerometry and heart rate predict energy expenditure of preschoolers

    USDA-ARS?s Scientific Manuscript database

    Prediction equations of energy expenditure (EE) using accelerometers and miniaturized heart rate (HR) monitors have been developed in older children and adults but not in preschool-aged children. Because the relationships between accelerometer counts (ACs), HR, and EE are confounded by growth and ma...

  8. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces and design space, together with their reliability, for the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on the large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.

  9. Design space construction of multiple dose-strength tablets utilizing bayesian estimation based on one set of design-of-experiments.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-01-01

    Design spaces for multiple dose strengths of tablets were constructed using a Bayesian estimation method with one set of design of experiments (DoE) of only the highest-dose-strength tablet. The lubricant blending process for theophylline tablets with dose strengths of 100, 50, and 25 mg was used as a model manufacturing process to construct the design spaces. The DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) for the theophylline 100-mg tablet. The response surfaces and design space, together with their reliability, for the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) of the 100-mg tablet were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. Three experiments under an optimal condition and two experiments under other conditions were performed using 50- and 25-mg tablets, respectively. The response surfaces of the highest-strength tablet were corrected to those of the lower-strength tablets by Bayesian estimation using the manufacturing data of the lower-strength tablets. Experiments under three additional sets of conditions for the lower-strength tablets showed that the corrected design space made it possible to predict the quality of lower-strength tablets more precisely than the design space of the highest-strength tablet. This approach is useful for constructing design spaces of tablets with multiple strengths.

  10. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). The inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, generalized bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.
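
    For reference, the triangular membership function used by the selected ANFIS model has a simple closed form; a sketch (parameter names are illustrative):

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b:
    rises linearly from 0 at a to 1 at b, then falls back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

    An input node with four such MFs partitions its axis (e.g. ground elevation) into four overlapping fuzzy sets whose degrees of membership feed the ANFIS rule layer.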

  11. Myocardial motion estimation of tagged cardiac magnetic resonance images using tag motion constraints and multi-level b-splines interpolation.

    PubMed

    Liu, Hong; Yan, Meng; Song, Enmin; Wang, Jie; Wang, Qian; Jin, Renchao; Jin, Lianghai; Hung, Chih-Cheng

    2016-05-01

    Myocardial motion estimation of tagged cardiac magnetic resonance (TCMR) images is of great significance in clinical diagnosis and the treatment of heart disease. Currently, the harmonic phase analysis method (HARP) and the local sine-wave modeling method (SinMod) are two state-of-the-art motion estimation methods for TCMR images, since they can directly obtain the inter-frame motion displacement vector field (MDVF) with high accuracy and fast speed. By comparison, SinMod outperforms HARP in terms of displacement detection and noise and artifact reduction. However, the SinMod method has some drawbacks: 1) it is unable to estimate local displacements larger than half of the tag spacing; 2) it has observable errors in the tracking of tag motion; and 3) the estimated MDVF usually has large local errors. To overcome these problems, we present a novel motion estimation method in this study. The proposed method tracks the motion of tags and then estimates the dense MDVF by interpolation. In this new method, a parameter estimation procedure for global motion is applied to match tag intersections between different frames, ensuring that certain kinds of large displacements are correctly estimated. In addition, a strategy of tag motion constraints is applied to eliminate most of the errors produced by inter-frame tracking of tags, and the multi-level b-splines approximation algorithm is utilized to enhance the local continuity and accuracy of the final MDVF. In the estimation of the motion displacement, our proposed method can obtain a more accurate MDVF than the SinMod method and can overcome the drawbacks of the SinMod method. However, the motion estimation accuracy of our method depends on the accuracy of tag line detection, and our method has a higher time complexity. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    NASA Astrophysics Data System (ADS)

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions, such that all of the interpolation information needed for each particle is available either locally on its host process or on neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring the transfer of particle properties between adjacent processes, for particles migrating across sub-domain boundaries, is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192³ simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster than a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking on the order of 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support for PGAS models in major compilers suggests that this algorithm will be widely applicable on upcoming supercomputers.

  13. Evaluation of adaptive treatment planning for patients with non-small cell lung cancer

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Siddiqui, Salim M.; Movsas, Benjamin; Chetty, Indrin J.

    2017-06-01

    The purpose of this study was to develop metrics to evaluate uncertainties in deformable dose accumulation for patients with non-small cell lung cancer (NSCLC). Initial (primary) treatment plans and cone-beam CT (CBCT) images were retrospectively processed for seven NSCLC patients who showed significant tumor regression during the course of treatment. Each plan was developed with IMRT for 2 Gy × 33 fractions. A B-spline-based DIR algorithm was used to register weekly CBCT images to a reference image acquired at fraction 21, and the resultant displacement vector fields (DVFs) were then modified using a finite element method (FEM). The doses were calculated on each of these CBCT images and mapped to the reference image using a tri-linear dose interpolation method, based on the B-spline and FEM-generated DVFs. Contours propagated from the planning image were adjusted to the residual tumor and OARs on the reference image to develop a secondary plan. For iso-prescription adaptive plans (relative to initial plans), mean lung dose (MLD) was reduced, on average, from 17.3 Gy (initial plan) to 15.2, 14.5 and 14.8 Gy for the plans adapted using the rigid, B-spline and FEM-based registrations, respectively. Similarly, for iso-toxic adaptive plans (considering MLD relative to initial plans) using the rigid, B-spline and FEM-based registrations, the average doses were 69.9 ± 6.8, 65.7 ± 5.1 and 67.2 ± 5.6 Gy in the initial volume (PTV1), and 81.5 ± 25.8, 77.7 ± 21.6, and 78.9 ± 22.5 Gy in the residual volume (PTV21), respectively. Tumor volume reduction was correlated with dose escalation (for iso-toxic plans, correlation coefficient = 0.92), and with MLD reduction (for iso-fractional plans, correlation coefficient = 0.85).
For the case of the iso-toxic dose escalation, plans adapted with the B-spline and FEM DVFs differed from the primary plan adapted with rigid registration by 2.8 ± 1.0 Gy and 1.8 ± 0.9 Gy in PTV1, and the mean difference between doses accumulated using the B-spline and FEM DVFs was 1.1 ± 0.6 Gy. As a measure of the dose mapping-induced energy change, the energy defect in the tumor volume was 20.8 ± 13.4% and 4.5 ± 2.4% for the B-spline and FEM-based dose accumulations, respectively. The energy defect of the B-spline-based dose accumulation is significant in the tumor volume and highly correlated with the difference between the B-spline and FEM-accumulated doses, with a correlation coefficient of 0.79. Adaptive planning helps escalate target dose and spare normal tissue for patients with NSCLC, but deformable dose accumulation may show a significant loss of energy in regressed tumor volumes when image intensity-based DIR algorithms are used. The energy defect metric is a useful tool for evaluating the accuracy of adaptive planning for lung cancer patients.

  14. Parametric and non-parametric (MARS: multivariate adaptive regression splines) logistic regressions for prediction of a dichotomous response variable with an example for presence/absence of amphibians

    EPA Science Inventory

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  15. Predicting Potential Changes in Suitable Habitat and Distribution by 2100 for Tree Species of the Eastern United States

    Treesearch

    Louis R Iverson; Anantha M. Prasad; Mark W. Schwartz

    2005-01-01

    We predict current distribution and abundance for tree species present in eastern North America, and subsequently estimate potential suitable habitat for those species under a changed climate with 2 × CO2. We used a series of statistical models (i.e., Regression Tree Analysis (RTA), Multivariate Adaptive Regression Splines (MARS), Bagging Trees (...

  16. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

    This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) models in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin on the Yamuna River in Delhi, India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and that they performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the LSSVM and M5Tree models was reduced by 1.47% and 19.1%, respectively, by the MARS model. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models can be successfully used in estimating monthly river water pollution levels using the AMM, TKN and WT parameters as inputs.

  17. Study of cyanotoxins presence from experimental cyanobacteria concentrations using a new data mining methodology based on multivariate adaptive regression splines in Trasona reservoir (Northern Spain).

    PubMed

    Garcia Nieto, P J; Sánchez Lasheras, F; de Cos Juez, F J; Alonso Fernández, J R

    2011-11-15

    There is an increasing need to describe cyanobacteria blooms, since some cyanobacteria produce toxins, termed cyanotoxins. The latter can be toxic and dangerous to humans as well as to other animals and life in general. It must be remarked that cyanobacteria reproduce explosively under certain conditions. This results in algae blooms, which can become harmful to other species if the cyanobacteria involved produce cyanotoxins. In this research work, the evolution of cyanotoxins in the Trasona reservoir (Principality of Asturias, Northern Spain) was successfully studied using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique. The results of the present study are two-fold. On the one hand, the importance of the different kinds of cyanobacteria for the presence of cyanotoxins in the reservoir is presented through the MARS model; on the other hand, a predictive model able to forecast the possible presence of cyanotoxins in the short term was obtained. The agreement of the MARS model with the experimental data confirmed its good performance. Finally, the conclusions of this innovative research are presented. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. On the error propagation of semi-Lagrange and Fourier methods for advection problems

    PubMed Central

    Einkemmer, Lukas; Ostermann, Alexander

    2015-01-01

    In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst-case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely, the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error with the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
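
    A single step of an interpolation-based semi-Lagrangian scheme of the kind studied above can be sketched for constant-coefficient advection on a periodic grid (linear interpolation here, the lowest-order member of the Lagrange family; all names are illustrative):

```python
import numpy as np

def semi_lagrangian_step(u, a, dx, dt):
    """One semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid:
    trace each grid point back along its characteristic and interpolate
    the old solution (linearly) at the departure point."""
    n = len(u)
    x = np.arange(n) * dx
    xd = (x - a * dt) % (n * dx)          # departure points
    i = np.floor(xd / dx).astype(int)     # left neighbour index
    w = xd / dx - i                       # interpolation weight
    return (1 - w) * u[i] + w * u[(i + 1) % n]
```

    Choosing dt so that a·dt equals one grid spacing makes the step an exact shift, a convenient sanity check; spline interpolation replaces the two-point weights with a wider stencil.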

  19. A New Multifunctional Sensor for Measuring Concentrations of Ternary Solution

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Shida, Katsunori

    This paper presents a multifunctional sensor with a novel structure, which is capable of directly sensing temperature and two physical parameters of solutions, namely ultrasonic velocity and conductivity. By combining measurements of these three parameters, the concentrations of the various components in a ternary solution can be determined simultaneously. The structure and operating principle of the sensor are described, and a regression algorithm based on natural cubic spline interpolation and the least squares method is adopted to estimate the concentrations. The performance of the proposed sensor is experimentally tested using a ternary aqueous solution of sodium chloride and sucrose, which is widely encountered in the food and beverage industries. This sensor could prove valuable as a process control sensor in industrial fields.
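
    The least squares step of such a regression can be sketched with synthetic numbers (the calibration matrix, linear response model, and all coefficients below are hypothetical; the paper pairs the regression with natural cubic spline interpolation of the calibration data):

```python
import numpy as np

# Hypothetical calibration matrix: each row is (ultrasonic velocity,
# conductivity, temperature) for one calibration sample, arbitrary units.
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0],
              [2.0, 1.0, 0.5]])

# Known concentrations (NaCl, sucrose) for each calibration sample,
# generated here from an assumed linear response for illustration.
W_true = np.array([[0.5, 0.1],
                   [0.2, 0.7],
                   [0.1, 0.3]])
b_true = np.array([0.05, 0.02])
Y = X @ W_true + b_true

# Least-squares fit with an intercept column, then prediction.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
Y_hat = A @ coef
```

    With more calibration samples than parameters, `np.linalg.lstsq` returns the coefficients minimizing the squared residual, from which the concentrations of the two solutes can be predicted for new measurements.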

  20. Spherical Demons: Fast Surface Registration

    PubMed Central

    Yeo, B.T. Thomas; Sabuncu, Mert; Vercauteren, Tom; Ayache, Nicholas; Fischl, Bruce; Golland, Polina

    2009-01-01

    We present the fast Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified demons objective function can be efficiently implemented on the sphere using convolution. Based on the one parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast – registration of two cortical mesh models with more than 100k nodes takes less than 5 minutes, comparable to the fastest surface registration algorithms. Moreover, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different settings: (1) parcellation in a set of in-vivo cortical surfaces and (2) Brodmann area localization in ex-vivo cortical surfaces. PMID:18979813

  1. Spherical demons: fast surface registration.

    PubMed

    Yeo, B T Thomas; Sabuncu, Mert; Vercauteren, Tom; Ayache, Nicholas; Fischl, Bruce; Golland, Polina

    2008-01-01

    We present the fast Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified demons objective function can be efficiently implemented on the sphere using convolution. Based on the one parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast - registration of two cortical mesh models with more than 100k nodes takes less than 5 minutes, comparable to the fastest surface registration algorithms. Moreover, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different settings: (1) parcellation in a set of in-vivo cortical surfaces and (2) Brodmann area localization in ex-vivo cortical surfaces.

  2. Moving magnets in a micromagnetic finite-difference framework

    NASA Astrophysics Data System (ADS)

    Rissanen, Ilari; Laurson, Lasse

    2018-05-01

    We present a method and an implementation for smooth linear motion in a finite-difference-based micromagnetic simulation code, to be used in simulating magnetic friction and other phenomena involving moving microscale magnets. Our aim is to accurately simulate the magnetization dynamics and relative motion of magnets while retaining high computational speed. To this end, we combine techniques for fast scalar potential calculation and cubic b-spline interpolation, parallelizing them on a graphics processing unit (GPU). The implementation also includes the possibility of explicitly simulating eddy currents in the case of conducting magnets. We test our implementation by providing numerical examples of stick-slip motion of thin films pulled by a spring and the effect of eddy currents on the switching time of magnetic nanocubes.
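
    For reference, interpolation with a uniform cubic B-spline reduces to blending the four nearest samples with fixed cubic weights; a sketch of the weight computation (not the authors' GPU code):

```python
def cubic_bspline_weights(t):
    """Uniform cubic B-spline blending weights for the four samples
    surrounding a query point, where t in [0, 1) is the fractional
    offset from the second sample. The four weights always sum to 1."""
    return (
        (1 - t) ** 3 / 6.0,
        (3 * t**3 - 6 * t**2 + 4) / 6.0,
        (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0,
        t**3 / 6.0,
    )
```

    The interpolated value is the weighted sum of the four surrounding samples; on a GPU, each thread can evaluate these weights independently per query point. Note that a cubic B-spline smooths rather than interpolates the raw samples unless they are prefiltered.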

  3. ITA, a portable program for the interactive analysis of data from tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootton, R.; Ashley, K.

    ITA is a portable program for analyzing data from tracer experiments, most of the mathematical and graphical work being carried out by subroutines from the NAG and DASL libraries. The program can be used in batch or interactive mode, commands being typed in an English-like language in free format. Data can be entered from a terminal keyboard or read from a file, and can be validated by printing or plotting them. Erroneous values can be corrected by appropriate editing. Analysis can involve elementary statistics, multiple-isotope crossover corrections, convolution or deconvolution, polyexponential curve-fitting, spline interpolation and/or compartmental analysis. On those installations with the appropriate hardware, high-resolution graphs can be drawn.

  4. Ambient occlusion effects for combined volumes and tubular geometry.

    PubMed

    Schott, Mathias; Martin, Tobias; Grosset, A V Pascal; Smith, Sean T; Hansen, Charles D

    2013-06-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.

  5. Ambient Occlusion Effects for Combined Volumes and Tubular Geometry

    PubMed Central

    Schott, Mathias; Martin, Tobias; Grosset, A.V. Pascal; Smith, Sean T.; Hansen, Charles D.

    2013-01-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed. PMID:23559506

  6. Multiresolution image registration in digital x-ray angiography with intensity variation modeling.

    PubMed

    Nejati, Mansour; Pourghassem, Hossein

    2014-02-01

    Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for the registration of digital X-ray angiography images, particularly for the coronary region, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local searches in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework, which allows us to capture both large- and small-scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
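
    Thin-plate spline interpolation of scattered samples, as used above to merge the local transformations, can be sketched as a small dense linear solve (a textbook formulation with kernel U(r) = r² log r, not the authors' implementation):

```python
import numpy as np

def tps_fit(pts, vals):
    """Fit a 2-D thin-plate spline f(q) = sum_i w_i U(|q - p_i|) + c0
    + c1 x + c2 y through the samples (pts, vals), U(r) = r^2 log r."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = d**2 * np.log(d + (d == 0))        # U(0) = 0 by convention
    P = np.hstack([np.ones((n, 1)), pts])  # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros(n + 3)
    b[:n] = vals
    return np.linalg.solve(A, b)

def tps_eval(pts, coef, q):
    """Evaluate the fitted spline at a query point q."""
    n = len(pts)
    d = np.linalg.norm(pts - q, axis=-1)
    U = d**2 * np.log(d + (d == 0))
    return U @ coef[:n] + coef[n] + q @ coef[n + 1:]
```

    Applied independently to the x and y components of the block-wise motion estimates, this yields a smooth global transformation that reproduces affine motion exactly.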

  7. Elliptic surface grid generation on minimal and parametrized surfaces

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.

    1995-01-01

    An elliptic grid generation method is presented which generates excellent boundary-conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and an elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary-conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid depends only on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method for generating a smooth surface that passes through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.

  8. Uncertainty quantification of resonant ultrasound spectroscopy for material property and single crystal orientation estimation on a complex part

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Goodlet, Brent; Mazdiyasni, Siamack

    2018-04-01

    A case study is presented evaluating uncertainty in Resonance Ultrasound Spectroscopy (RUS) inversion for single crystal (SX) Ni-based superalloy Mar-M247 cylindrical dog-bone specimens. A number of surrogate models were developed from FEM model solutions, using different sampling schemes (regular grid, Monte Carlo sampling, Latin Hypercube sampling) and model approaches, N-dimensional cubic spline interpolation and Kriging. Repeated studies were used to quantify the well-posedness of the inversion problem, and the uncertainty was assessed in material property and crystallographic orientation estimates given typical geometric dimension variability in aerospace components. Surrogate model quality was found to be an important factor in inversion results when the model closely represents the test data. One important discovery was that, when the model matches the test data well, a Kriging surrogate model using un-sorted Latin Hypercube sampled data performed as well as the best results from an N-dimensional interpolation model using sorted data. However, both surrogate model quality and mode sorting were found to be less critical when inverting properties from either experimental data or simulated test cases with uncontrolled geometric variation.
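    The sampling-plus-surrogate workflow can be sketched in a few lines. In the sketch below a toy analytic function stands in for the FEM modal solve, scipy's Latin Hypercube sampler generates the design, and a radial-basis interpolant plays the role of the Kriging surrogate; the two input parameters, their ranges, and the sample count are invented for the example:

```python
# Sketch: build a surrogate over Latin Hypercube samples of two hypothetical
# inputs (an elastic constant and a diameter); the FEM solver is replaced by
# a toy analytic "frequency" function. Names and ranges are illustrative.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def toy_resonance(x):  # stand-in for an FEM modal solve
    c11, diam = x[..., 0], x[..., 1]
    return np.sqrt(c11) / diam

sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=200)                        # samples in [0, 1)^2
lo, hi = np.array([200.0, 9.5]), np.array([260.0, 10.5])
X = qmc.scale(unit, lo, hi)                         # physical parameter space
y = toy_resonance(X)

surrogate = RBFInterpolator(X, y)                   # Kriging-like RBF surrogate
x_new = np.array([[230.0, 10.0]])
err = abs(surrogate(x_new)[0] - toy_resonance(x_new)[0])
```

    With a smooth response and a few hundred Latin Hypercube samples, the radial-basis surrogate reproduces the toy solver closely at unseen parameter values, which is the property the inversion relies on.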

  9. Simulating patient-specific heart shape and motion using SPECT perfusion images with the MCAT phantom

    NASA Astrophysics Data System (ADS)

    Faber, Tracy L.; Garcia, Ernest V.; Lalush, David S.; Segars, W. Paul; Tsui, Benjamin M.

    2001-05-01

    The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to simulate single photon emission computed tomographic (SPECT) data. It incorporates a heart model of known size and shape; thus, it is invaluable for measuring accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected from standard post-processing quantitation programs are converted through interpolation in space and time into new B-spline models. Perfusion abnormalities are added to the model based on results of standard perfusion quantification. The new LV is translated and rotated to fit within existing atria and right ventricular models, which are scaled based on the size of the LV. Simulations were created for five different patients with myocardial infarctions who had undergone SPECT perfusion imaging. Shape, size, and motion of the resulting activity map were compared visually to the original SPECT images. In all cases, size, shape and motion of simulated LVs matched well with the original images. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating efficacy of processing algorithms.

  10. Issues and considerations for using the scalp surface Laplacian in EEG/ERP research: A tutorial review

    PubMed Central

    Kayser, Jürgen; Tenke, Craig E.

    2015-01-01

    Despite the recognition that the surface Laplacian may counteract adverse effects of volume conduction and recording reference for surface potential data, electrophysiology as a discipline has been reluctant to embrace this approach for data analysis. The reasons for such hesitation are manifold but often involve unfamiliarity with the nature of the underlying transformation, as well as intimidation by a perceived mathematical complexity, and concerns of signal loss, dense electrode array requirements, or susceptibility to noise. We revisit the pitfalls arising from volume conduction and the mandated arbitrary choice of EEG reference, describe the basic principle of the surface Laplacian transform in an intuitive fashion, and exemplify the differences between common reference schemes (nose, linked mastoids, average) and the surface Laplacian for frequently-measured EEG spectra (theta, alpha) and standard event-related potential (ERP) components, such as N1 or P3. We specifically review common reservations against the universal use of the surface Laplacian, which can be effectively addressed by employing spherical spline interpolations with an appropriate selection of the spline flexibility parameter and regularization constant. We argue from a pragmatic perspective that not only are these reservations unfounded but that the continued predominant use of surface potentials poses a considerable impediment to the progress of EEG and ERP research. PMID:25920962

  11. Hybrid modeling for dynamic analysis of cable-pulley systems with time-varying length cable and its application

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Qi, Zhaohui; Wang, Gang

    2017-10-01

    The dynamic analysis of cable-pulley systems is investigated in this paper, where the time-varying length of the cable as well as the coupled motion between the cable and the pulleys are considered. The dynamic model for cable-pulley systems is presented based on the principle of virtual power. Firstly, cubic spline interpolation is adopted for modeling the flexible cable elements, and the virtual powers of the tensile strain, inertia and gravity forces on the cable are formulated. Then, the coupled motions between the cable and the movable or fixed pulleys are described by the input and output contact points, based on the no-slip assumption and the spatial description. The virtual powers of the inertia, gravity and applied forces on the contact segment of the cable and on the movable and fixed pulleys are formulated. In particular, the internal node degrees of freedom of the spline cable elements are reduced, so that only the independent description parameters of the nodes connected to the pulleys appear in the final governing dynamic equations. At last, two cable-pulley lifting mechanisms are considered as demonstrative application examples in which the vibration of the lifting process is investigated. Comparison with ADAMS models is given to prove the validity of the proposed method.

  12. Application of thin-plate spline transformations to finite element models, or, how to turn a bog turtle into a spotted turtle to analyze both.

    PubMed

    Stayton, C Tristan

    2009-05-01

    Finite element (FE) models are popular tools that allow biologists to analyze the biomechanical behavior of complex anatomical structures. However, the expense and time required to create models from specimens has prevented comparative studies from involving large numbers of species. A new method is presented for transforming existing FE models using geometric morphometric methods. Homologous landmark coordinates are digitized on the FE model and on a target specimen into which the FE model is being transformed. These coordinates are used to create a thin-plate spline function and coefficients, which are then applied to every node in the FE model. This function smoothly interpolates the location of points between landmarks, transforming the geometry of the original model to match the target. This new FE model is then used as input in FE analyses. This procedure is demonstrated with turtle shells: a Glyptemys muhlenbergii model is transformed into Clemmys guttata and Actinemys marmorata models. Models are loaded and the resulting stresses are compared. The validity of the models is tested by crushing actual turtle shells in a materials testing machine and comparing those results to predictions from FE models. General guidelines, cautions, and possibilities for this procedure are also presented.
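    The landmark-driven warp described above can be sketched with an off-the-shelf thin-plate spline. In the sketch below, scipy's RBFInterpolator with a thin-plate-spline kernel stands in for the paper's TPS implementation, and the 2D landmarks, the "true" shape change, and the node cloud are all synthetic (the paper works with 3D shell landmarks):

```python
# Sketch of the landmark-driven warp: a thin-plate spline is fitted that maps
# digitized landmarks on the source model onto homologous landmarks on the
# target, then the same mapping is applied to every FE node. Landmarks here
# are synthetic 2D points for illustration.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
src_marks = rng.uniform(0, 10, size=(12, 2))      # landmarks on source model
warp = lambda p: p * np.array([1.2, 0.9]) + 0.5   # "true" shape change
tgt_marks = warp(src_marks)                       # homologous target landmarks

# One TPS fit for both output coordinates (smoothing=0 -> exact at landmarks)
tps = RBFInterpolator(src_marks, tgt_marks, kernel="thin_plate_spline")

nodes = rng.uniform(0, 10, size=(500, 2))         # all FE mesh nodes
warped_nodes = tps(nodes)                         # transformed geometry
max_err = np.abs(warped_nodes - warp(nodes)).max()
```

    Because the synthetic shape change is affine and the thin-plate spline's polynomial part is affine, the fitted warp reproduces it essentially exactly; for real landmark sets the spline smoothly interpolates node positions between landmarks, which is precisely the transformation step the abstract describes.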

  13. Estimation of Posterior Probabilities Using Multivariate Smoothing Splines and Generalized Cross-Validation.

    DTIC Science & Technology

    1983-09-01

    Research supported in part by the Consejo Nacional de Ciencia y Tecnología (Mexico), by ONR under Contract No. N00014-77-C-0675, and by ARO under Contract No. DAAG29-80-K-0042.

  14. Vulnerability of carbon storage in North American boreal forests to wildfires during the 21st century

    Treesearch

    M.S. Balshi; A.D. McGuire; P. Duffy; M. Flannigan; D.W. Kicklighter; J. Melillo

    2009-01-01

    We use a gridded data set developed with a multivariate adaptive regression spline approach to determine how area burned varies each year with changing climatic and fuel moisture conditions. We apply the process-based Terrestrial Ecosystem Model to evaluate the role of future fire on the carbon dynamics of boreal North America in the context of changing atmospheric...

  15. Multivariate optimum interpolation of surface pressure and winds over oceans

    NASA Technical Reports Server (NTRS)

    Bloom, S. C.

    1984-01-01

    The observations of surface pressure are quite sparse over oceanic areas. An effort to improve the analysis of surface pressure over oceans through the development of a multivariate surface analysis scheme which makes use of surface pressure and wind data is discussed. Although the present research used ship winds, future versions of this analysis scheme could utilize winds from additional sources, such as satellite scatterometer data.

  16. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    NASA Technical Reports Server (NTRS)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data), to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. 
With minor modifications the source code is portable to any platform that supports an ANSI FORTRAN 77 compiler. Microsoft FORTRAN v2.1 is required for the Macintosh version. An executable is included with the PC version. DATASPACE is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) or on a 3.5 inch 800K Macintosh format diskette. This program was developed in 1991. IBM PC is a trademark of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh and MacOS are trademarks of Apple Computer, Inc.
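    The core DATASPACE idea, splining in the log-abscissa domain and re-sampling on an even log grid, can be sketched briefly. The toy relaxation curve, sample counts, and time range below are illustrative, not taken from the program:

```python
# Sketch of the DATASPACE procedure: variably spaced stress-relaxation data
# are splined in the log-time domain and re-sampled on an even log-time grid,
# which is increasingly spaced in ordinary time.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)
t = np.sort(10.0 ** rng.uniform(-2.0, 2.0, 80))    # variably spaced times
E = 5.0 + 3.0 * np.exp(-t / 10.0)                  # toy relaxation modulus

log_t = np.log10(t)
spline = CubicSpline(log_t, E)                     # interpolate in log-time
log_t_even = np.linspace(log_t[0], log_t[-1], 50)  # even in the log domain
t_new = 10.0 ** log_t_even                         # increasingly spaced in t
E_new = spline(log_t_even)
```

    The re-sampled grid is geometric in t, so it is closely spaced at short times and widely spaced at long times, exactly the spacing the abstract describes.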

  17. Multivariate Adaptive Regression Splines (Preprint)

    DTIC Science & Technology

    1990-08-01

    fold cross-validation would take about ten times as long, and MARS is not all that fast to begin with. Friedman has a number of examples showing...standardized mean squared error of prediction (MSEP), the generalized cross validation (GCV), and the number of selected terms (TERMS). In accordance with...and mi= 10 case were almost exclusively spurious cross-product terms and terms involving the nuisance variables x6 through x10. This large number of

  18. TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1994-01-01

    TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.
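    The underlying two-boundary idea, cubic Hermite interpolation between a bottom and a top boundary curve, can be sketched as follows. Boundary shapes and tangent magnitudes are illustrative choices, and TBGG's side-boundary blending and control functions are omitted:

```python
# Sketch of the two-boundary technique: a grid is built by cubic Hermite
# blending in eta between two fixed, nonintersecting boundary curves, with
# prescribed tangents at each boundary. All shapes here are illustrative.
import numpy as np

def hermite_blend(bottom, top, t_bottom, t_top, n_eta):
    """Grid between two curves via cubic Hermite interpolation in eta."""
    eta = np.linspace(0.0, 1.0, n_eta)[:, None, None]
    h00 = 2 * eta**3 - 3 * eta**2 + 1        # Hermite basis functions
    h10 = eta**3 - 2 * eta**2 + eta
    h01 = -2 * eta**3 + 3 * eta**2
    h11 = eta**3 - eta**2
    return h00 * bottom + h10 * t_bottom + h01 * top + h11 * t_top

xi = np.linspace(0.0, 1.0, 21)
bottom = np.stack([xi, 0.1 * np.sin(np.pi * xi)], axis=-1)  # lower boundary
top = np.stack([xi, 1.0 + 0.1 * xi], axis=-1)               # upper boundary
t_b = np.tile([0.0, 1.0], (21, 1))    # tangent leaving the bottom boundary
t_t = np.tile([0.0, 1.0], (21, 1))    # tangent arriving at the top boundary
grid = hermite_blend(bottom, top, t_b, t_t, 11)   # shape (11, 21, 2)
```

    The Hermite basis guarantees that the first and last grid lines coincide with the two boundaries, while the tangent vectors control how grid lines leave and approach them.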

  19. An ab initio global potential-energy surface for NH2(A(2)A') and vibrational spectrum of the Renner-Teller A(2)A'-X(2)A" system.

    PubMed

    Zhou, Shulan; Li, Zheng; Xie, Daiqian; Lin, Shi Ying; Guo, Hua

    2009-05-14

    A global potential-energy surface for the first excited electronic state of NH(2)(A(2)A(')) has been constructed by three-dimensional cubic spline interpolation of more than 20,000 ab initio points, which were calculated at the multireference configuration-interaction level with the Davidson correction using the augmented correlation-consistent polarized valence quadruple-zeta basis set. The (J=0) vibrational energy levels for the ground (X(2)A(")) and excited (A(2)A(')) electronic states of NH(2) were calculated on our potential-energy surfaces with the diagonal Renner-Teller terms. The results show a good agreement with the experimental vibrational frequencies of NH(2) and its isotopomers.

  20. ADMAP (automatic data manipulation program)

    NASA Technical Reports Server (NTRS)

    Mann, F. I.

    1971-01-01

    Instructions are presented on the use of ADMAP (Automatic Data Manipulation Program), an aerospace data manipulation computer program. The program was developed to aid in processing, reducing, plotting, and publishing electric propulsion trajectory data generated by the low thrust optimization program, HILTOP. The program has the option of generating SC4020 electric plots, and therefore requires the SC4020 routines to be available at execution time (even if not used). Several general routines are present, including a cubic spline interpolation routine, an electric plotter dash line drawing routine, and single parameter and double parameter sorting routines. Many routines are tailored for the manipulation and plotting of electric propulsion data, including an automatic scale selection routine, an automatic curve labelling routine, and an automatic graph titling routine. Data are accepted from either punched cards or magnetic tape.

  1. High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform

    PubMed Central

    Chan, Kenny K. H.; Tang, Shuo

    2010-01-01

    The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth-dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by greater than 5 dB, concurrently with a 30-fold decrease in processing time, compared to the fast Fourier transform with cubic spline interpolation. NFFT can also improve the local signal-to-noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to have the ability to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at a camera-limited 100 frames per second on an ex-vivo squid eye. PMID:21258551
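    The NFFT evaluates, efficiently, the same sums as a direct non-uniform discrete Fourier transform. The sketch below computes the direct O(N^2) transform for a toy interferogram of a single reflector sampled at jittered (non-uniform) wavenumbers, showing how the reflector depth is recovered without any spline resampling; all sizes and names are illustrative:

```python
# Direct non-uniform DFT sketch: the NFFT computes these same sums with
# O(N log N) cost. A single-reflector "interferogram" is sampled at jittered
# wavenumber positions; its depth appears as the transform peak.
import numpy as np

rng = np.random.default_rng(2)
N = 256
# jittered non-uniform wavenumber samples covering one period
k = (np.arange(N) + rng.uniform(0.0, 1.0, N)) / N - 0.5
z0 = 40                                     # depth bin of a single reflector
signal = np.cos(2 * np.pi * z0 * k)         # toy interferogram

depths = np.arange(N // 2)                  # positive-depth output bins
ndft = np.exp(-2j * np.pi * depths[:, None] * k[None, :]) @ signal
peak = int(np.argmax(np.abs(ndft)))         # recovered reflector depth
```

    The direct sum is far too slow for real-time A-line processing, which is why the fast (NFFT) evaluation of the same transform matters in practice.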

  2. Investigation of digital timing resolution and further improvement by using constant fraction signal time marker slope for fast scintillator detectors

    NASA Astrophysics Data System (ADS)

    Singh, Kundan; Siwal, Davinder

    2018-04-01

    A digital timing algorithm is explored for fast scintillator detectors, viz. LaBr3, BaF2, and BC501A. Signals were collected with CAEN 250 mega samples per second (MSPS) and 500 MSPS digitizers. The zero-crossing time markers (TM) were obtained with a standard digital constant fraction timing (DCF) method. Accurate timing information is obtained using cubic spline interpolation of the DCF transient-region sample points. To get the best time-of-flight (TOF) resolution, the DCF parameters (delay and constant fraction) were optimized for each pair of detectors: (BaF2-LaBr3), (BaF2-BC501A), and (LaBr3-BC501A). In addition, the slope of the interpolated DCF signal is extracted at the TM position. This information gives new insight into the TOF broadening obtained for a given detector pair. For a pair of signals, small relative slope and small interpolation deviations at the TM lead to minimum time broadening, while the tailing in the TOF spectra is dictated by the interplay between the interpolation error and slope variations. The best TOF resolution, achieved at the optimum DCF parameters, can be further improved by using the slope parameter: guided by the relative slope, event selection can be imposed that reduces TOF broadening. While the method sets a trade-off between timing response and coincidence efficiency, it provides an improvement in TOF. With the proposed method, the improved TOF resolutions (FWHM) for the aforementioned detector pairs are 25% (0.69 ns), 40% (0.74 ns), and 53% (0.6 ns), respectively, with the 250 MSPS digitizer, and 12% (0.37 ns), 33% (0.72 ns), and 35% (0.69 ns), respectively, with the 500 MSPS digitizer. For the same detector pairs, the event survival probabilities are 57%, 58%, and 51% with 250 MSPS, and 63%, 57%, and 68% with 500 MSPS digitizers.
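    The digital constant-fraction step described above, forming f·s[n] − s[n−d] and refining the zero crossing by cubic spline interpolation, can be sketched as follows. The pulse shape, fraction, and delay are illustrative choices, not the paper's optimized parameters:

```python
# Sketch of digital constant-fraction timing: build the CFD waveform
# frac*s[n] - s[n-d], then refine the zero-crossing time marker by cubic
# spline interpolation of the transient-region samples.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

fs = 250e6                                   # 250 MSPS digitizer
n = np.arange(64)
t0 = 20.3                                    # true arrival (in samples)
pulse = np.where(n > t0,
                 np.exp(-(n - t0) / 12.0) - np.exp(-(n - t0) / 2.0), 0.0)

frac, delay = 0.4, 4                         # illustrative CFD parameters
cfd = frac * pulse - np.concatenate([np.zeros(delay), pulse[:-delay]])

i = int(np.argmax(cfd))
while cfd[i] > 0:
    i += 1                                   # first sample past the crossing
spl = CubicSpline(n[i - 3:i + 3], cfd[i - 3:i + 3])
tm = brentq(spl, n[i - 1], n[i])             # sub-sample time marker
tm_ns = tm / fs * 1e9                        # marker in nanoseconds
```

    The spline gives a sub-sample zero-crossing position; the slope of spl at tm is the quantity the paper uses to gate events and tighten the TOF distribution.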

  3. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can also be largely eliminated by the cubic spline interpolation method when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation-error suppression capability and better transient-error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic order of the transient error from O(N^-2) for the Hanning window method to O(N^-4) while only slightly increasing the uncertainty (by about 0.4 dB). One direction of a wind tunnel strain gauge balance, a high-order, lightly damped, non-minimum-phase system, is then employed as an example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, low time consumption, and a short data requirement; the actual data calculation result for the balance FRF is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.

  4. Pearson correlation estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. 
We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
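    The kernel estimator described above can be sketched in a few lines: products of standardized observations are averaged with Gaussian weights on the time gap of each observation pair, instead of first interpolating onto a regular grid. The bandwidth and the test series below are illustrative:

```python
# Sketch of Gaussian-kernel Pearson correlation for irregularly sampled
# series: weighted means of products of standardized observations, with
# weights depending on the time separation of each observation pair.
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, h):
    """Pearson correlation at lag 0 for two irregularly sampled series."""
    xc = (x - x.mean()) / x.std()
    yc = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :]                 # all pairwise time gaps
    w = np.exp(-0.5 * (dt / h) ** 2)               # Gaussian weights
    return np.sum(w * xc[:, None] * yc[None, :]) / np.sum(w)

rng = np.random.default_rng(3)
tx = np.sort(rng.uniform(0, 100, 200))             # irregular sampling times
ty = np.sort(rng.uniform(0, 100, 200))             # different irregular times
x = np.sin(0.3 * tx)
y = np.sin(0.3 * ty)                               # same underlying signal
r = gaussian_kernel_corr(tx, x, ty, y, h=1.0)      # close to +1
```

    Shifting the weights by a lag (replacing dt with dt − lag) yields the lagged correlation function mentioned in the abstract; the bandwidth h plays the role of the kernel width studied in the simulations.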

  5. Short term spatio-temporal variability of soil water-extractable calcium and magnesium after a low severity grassland fire in Lithuania.

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Martin, David

    2014-05-01

    Fire has important impacts on the spatio-temporal distribution of soil nutrients (Outeiro et al., 2008). This impact depends on fire severity, the topography of the burned area, the type of soil and vegetation affected, and the post-fire meteorological conditions. Fire produces a complex mosaic of impacts on soil that can be extremely variable in space and time at the small plot scale. In order to assess and map such a heterogeneous distribution, testing interpolation methods is fundamental to identify the best estimator and to better understand the spatial distribution of soil nutrients. The objective of this work is to identify the short-term spatial variability of water-extractable calcium and magnesium after a low severity grassland fire. The studied area is located near Vilnius (Lithuania) at 54° 42' N, 25° 08' E, 158 masl. Four days after the fire, a 400 m2 plot (20 x 20 m, with 5 m spacing between sampling points) was laid out in the burned area. Twenty five samples from the top soil (0-5 cm) were collected immediately after the fire (IAF) and 2, 5, 7 and 9 months after the fire (a total of 125 across all sampling dates). The original water-extractable calcium and magnesium data did not follow a Gaussian distribution, so a natural logarithm (ln) transform was applied to normalize the data. Significant differences in water-extractable calcium and magnesium among sampling dates were assessed with a one-way ANOVA test on the ln data. In order to assess the spatial variability of water-extractable calcium and magnesium, we tested several interpolation methods: Ordinary Kriging (OK); Inverse Distance Weighting (IDW) with powers of 1, 2, 3 and 4; Radial Basis Functions (RBF), namely Inverse Multiquadratic (IMT), Multilog (MTG), Multiquadratic (MTQ), Natural Cubic Spline (NCS) and Thin Plate Spline (TPS); and Local Polynomial (LP) with powers of 1 and 2. Interpolation tests were carried out with the ln data. 
The best interpolation method was assessed using cross-validation, obtained by taking each observation in turn out of the sample pool and estimating it from the remaining ones. The errors produced (observed minus predicted) are used to evaluate the performance of each method. With these data, the mean error (ME) and root mean square error (RMSE) were calculated; the best method was the one with the lowest RMSE (Pereira et al., in press). The results showed significant differences among sampling dates in water-extractable calcium (F = 138.78, p < 0.001) and magnesium (F = 160.66, p < 0.001). Water-extractable calcium and magnesium were high IAF, decreased until 7 months after the fire, and rose at the last sampling date. Among the tested methods, the most accurate for interpolating water-extractable calcium were: IAF-IDW1; 2 months-IDW1; 5 months-OK; 7 months-IDW4; and 9 months-IDW3. For water-extractable magnesium the best interpolation techniques were: IAF-IDW2; 2 months-IDW1; 5 months-IDW3; 7 months-TPS; and 9 months-IDW1. These results suggest that the spatial variability of these water-extractable nutrients changes over time. The causes of this variability will be discussed during the presentation. References Outeiro, L., Aspero, F., Ubeda, X. (2008) Geostatistical methods to study spatial variability of soil cation after a prescribed fire and rainfall. Catena, 74: 310-320. Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Arcenegui, V., Zavala, L. Modelling the impacts of wildfire on ash thickness in a short-term period, Land Degradation and Development, (In Press), DOI: 10.1002/ldr.2195
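    The leave-one-out cross-validation loop used to rank interpolators can be sketched with inverse distance weighting on a synthetic 5 m grid; the coordinates and "calcium" values below are illustrative stand-ins for the plot data:

```python
# Sketch of interpolator ranking by leave-one-out cross-validation: IDW with
# several power values is scored by LOO RMSE on a synthetic 20 x 20 m plot
# sampled every 5 m. Field values are an invented smooth surface.
import numpy as np

def idw(xy_known, z_known, xy_query, power):
    """Inverse distance weighted prediction at the query points."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

g = np.arange(0, 25, 5.0)                       # 5 m spacing, 20 x 20 m plot
xy = np.array([(x, y) for x in g for y in g])   # 25 sampling points
z = 2.0 + 0.05 * xy[:, 0] + 0.03 * xy[:, 1]     # smooth "calcium" field

def loo_rmse(power):
    errs = []
    for i in range(len(xy)):
        keep = np.arange(len(xy)) != i          # drop one observation
        pred = idw(xy[keep], z[keep], xy[i:i + 1], power)[0]
        errs.append(pred - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

scores = {p: loo_rmse(p) for p in (1, 2, 3, 4)}
best_power = min(scores, key=scores.get)        # lowest LOO RMSE wins
```

    The same loop applies unchanged to kriging, RBF, or local polynomial predictors: only the prediction function changes, and the method with the lowest RMSE is retained, as in the study.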

  6. Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines

    NASA Astrophysics Data System (ADS)

    Kuter, S.; Akyürek, Z.; Weber, G.-W.

    2016-10-01

    Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely-sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between the temporal and spatial resolution of satellite imagery. Soft or subpixel classification of low or moderate resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied to Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on the state-of-the-art nonparametric spline regression method, namely, Multivariate Adaptive Regression Splines (MARS). MARS models were trained using MODIS top-of-atmosphere reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer, trained with backpropagation, was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than the ANN.
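    MARS builds its fit from hinge basis functions max(0, x − t) and max(0, t − x). The toy sketch below fits a one-dimensional piecewise-linear response with a fixed set of hinge knots via least squares; real MARS selects knots and interaction terms adaptively in forward/backward passes, and the knot locations here are illustrative:

```python
# Toy illustration of the MARS building blocks: a piecewise-linear response
# is fitted with a fixed hinge-function basis by least squares.
import numpy as np

def hinge_design(x, knots):
    """Design matrix of an intercept plus paired hinge functions."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))     # right hinge max(0, x - t)
        cols.append(np.maximum(0.0, t - x))     # left hinge max(0, t - x)
    return np.stack(cols, axis=1)

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 300)
y_true = np.where(x < 0.5, 0.2 * x, 0.1 + 1.5 * (x - 0.5))  # kink at 0.5
y = y_true + rng.normal(0, 0.01, x.size)        # noisy observations

X = hinge_design(x, knots=[0.25, 0.5, 0.75])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = np.sqrt(np.mean((X @ coef - y_true) ** 2))  # fit vs. true response
```

    Because one knot sits at the true kink, the hinge basis recovers the piecewise-linear response almost exactly; the multivariate MARS model used in the study combines such bases, including products across predictors, chosen by its adaptive search.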

  7. Validation of DWI pre-processing procedures for reliable differentiation between human brain gliomas.

    PubMed

    Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular, in oncology screening. dMRI demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively developed and used. In the present work we assess the effect of different pre-processing procedures such as a noise correction, different smoothing algorithms and spatial interpolation of raw diffusion data, with respect to the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades we chose the derived scalar metrics from diffusion and kurtosis tensor imaging as well as the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies. Copyright © 2017. Published by Elsevier GmbH.

  8. Three-dimensional data interpolation for environmental purpose: lead in contaminated soils in southern Brazil.

    PubMed

    Piedade, Tales Campos; Melo, Vander Freitas; Souza, Luiz Cláudio Paula; Dieckow, Jeferson

    2014-09-01

    Monitoring of the heavy metal contamination plume in soils can be helpful in establishing strategies to minimize its hazardous impacts on the environment. The objective of this study was to apply a new visualization approach, based on tridimensional (3D) images, to pseudo-total (extracted with concentrated acids) and exchangeable (extracted with 0.5 mol L(-1) Ca(NO3)2) lead (Pb) concentrations in soils of a mining and metallurgy area, to determine the spatial distribution of this pollutant and to estimate the most contaminated soil volumes. Tridimensional images were obtained after interpolation of the Pb concentrations of 171 soil samples (57 points × 3 depths) with regularized spline with tension in a 3D function version. The tridimensional visualization showed great potential for use in environmental studies; it allowed us to determine the spatial 3D distribution of the Pb contamination plume in the area and to establish relationships with soil characteristics, landscape, and pollution sources. The most contaminated soil volumes (10,001 to 52,000 mg Pb kg(-1)) occurred near the metallurgy factory. The main contamination sources were attributed to atmospheric emissions of particulate Pb through chimneys. The large soil volume estimated to require removal to industrial landfills or co-processing evidenced the difficulties of this practice as a remediation strategy.

  9. Heuristic estimation of electromagnetically tracked catheter shape for image-guided vascular procedures

    NASA Astrophysics Data System (ADS)

    Mefleh, Fuad N.; Baker, G. Hamilton; Kwartowitz, David M.

    2014-03-01

    In our previous work we presented a novel image-guided surgery (IGS) system, Kit for Navigation by Image Focused Exploration (KNIFE).1,2 KNIFE has been demonstrated to be effective in guiding mock clinical procedures with the tip of an electromagnetically tracked catheter overlaid onto a pre-captured bi-plane fluoroscopic loop. Representation of the catheter in KNIFE differs greatly from what is captured by the fluoroscope, due to distortions and other properties of fluoroscopic images. When imaged by a fluoroscope, catheters can be visualized due to the inclusion of radiopaque materials (i.e. Bi, Ba, W) in the polymer blend.3 However, in KNIFE catheter location is determined using a single tracking seed located in the catheter tip that is represented as a single point overlaid on pre-captured fluoroscopic images. To bridge the gap in catheter representation between KNIFE and traditional methods we constructed a catheter with five tracking seeds positioned along the distal 70 mm of the catheter. We have currently investigated the use of four spline interpolation methods for estimation of true catheter shape and have assessed the error in their estimates. In this work we present a method for the evaluation of interpolation algorithms with respect to catheter shape determination.

  10. Spectral Topography Generation for Arbitrary Grids

    NASA Astrophysics Data System (ADS)

    Oh, T. J.

    2015-12-01

    A new topography generation tool utilizing spectral transformation technique for both structured and unstructured grids is presented. For the source global digital elevation data, the NASA Shuttle Radar Topography Mission (SRTM) 15 arc-second dataset (gap-filling by Jonathan de Ferranti) is used and for land/water mask source, the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) 30 arc-second land water mask dataset v5 is used. The original source data is coarsened to an intermediate global 2 minute lat-lon mesh. Then, spectral transformation to the wave space and inverse transformation with wavenumber truncation is performed for isotropic topography smoothness control. Target grid topography mapping is done by bivariate cubic spline interpolation from the truncated 2 minute lat-lon topography. Gibbs phenomenon in the water region can be removed by overwriting ocean masked target coordinate grids with interpolated values from the intermediate 2 minute grid. Finally, a weak smoothing operator is applied on the target grid to minimize the land/water surface height discontinuity that might have been introduced by the Gibbs oscillation removal procedure. Overall, the new topography generation approach provides spectrally-derived, smooth topography with isotropic resolution and minimum damping, enabling realistic topography forcing in the numerical model. Topography is generated for the cubed-sphere grid and tested on the KIAPS Integrated Model (KIM).

  11. Incompressible Deformation Estimation Algorithm (IDEA) from Tagged MR Images

    PubMed Central

    Liu, Xiaofeng; Abd-Elmoniem, Khaled Z.; Stone, Maureen; Murano, Emi Z.; Zhuo, Jiachen; Gullapalli, Rao P.; Prince, Jerry L.

    2013-01-01

    Measuring the three-dimensional motion of muscular tissues, e.g., the heart or the tongue, using magnetic resonance (MR) tagging is typically carried out by interpolating the two-dimensional motion information measured on orthogonal stacks of images. The incompressibility of muscle tissue is an important constraint on the reconstructed motion field and can significantly help to counter the sparsity and incompleteness of the available motion information. Previous methods utilizing this fact produced incompressible motions with limited accuracy. In this paper, we present an incompressible deformation estimation algorithm (IDEA) that reconstructs a dense representation of the three-dimensional displacement field from tagged MR images and the estimated motion field is incompressible to high precision. At each imaged time frame, the tagged images are first processed to determine components of the displacement vector at each pixel relative to the reference time. IDEA then applies a smoothing, divergence-free, vector spline to interpolate velocity fields at intermediate discrete times such that the collection of velocity fields integrate over time to match the observed displacement components. Through this process, IDEA yields a dense estimate of a three-dimensional displacement field that matches our observations and also corresponds to an incompressible motion. The method was validated with both numerical simulation and in vivo human experiments on the heart and the tongue. PMID:21937342

  12. Issues and considerations for using the scalp surface Laplacian in EEG/ERP research: A tutorial review.

    PubMed

    Kayser, Jürgen; Tenke, Craig E

    2015-09-01

    Despite the recognition that the surface Laplacian may counteract adverse effects of volume conduction and recording reference for surface potential data, electrophysiology as a discipline has been reluctant to embrace this approach for data analysis. The reasons for such hesitation are manifold but often involve unfamiliarity with the nature of the underlying transformation, as well as intimidation by a perceived mathematical complexity, and concerns of signal loss, dense electrode array requirements, or susceptibility to noise. We revisit the pitfalls arising from volume conduction and the mandated arbitrary choice of EEG reference, describe the basic principle of the surface Laplacian transform in an intuitive fashion, and exemplify the differences between common reference schemes (nose, linked mastoids, average) and the surface Laplacian for frequently-measured EEG spectra (theta, alpha) and standard event-related potential (ERP) components, such as N1 or P3. We specifically review common reservations against the universal use of the surface Laplacian, which can be effectively addressed by employing spherical spline interpolations with an appropriate selection of the spline flexibility parameter and regularization constant. We argue from a pragmatic perspective that not only are these reservations unfounded but that the continued predominant use of surface potentials poses a considerable impediment on the progress of EEG and ERP research. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. RGB Color Calibration for Quantitative Image Analysis: The “3D Thin-Plate Spline” Warping Approach

    PubMed Central

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

    In recent years, the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines to quantitatively compare samples' color during workflow with many devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colours in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches. The first is based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Square analysis. Moreover, to explore device variability and resolution, two different cameras were adopted and for each sensor, three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach achieved very high calibration efficiency, opening the way to routine in-field colour quantification not only in food sciences, but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, the approach allows the use of low-cost instruments while still returning scientifically sound quantitative data. PMID:22969337
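A 3D thin-plate-spline warp of RGB space can be sketched with SciPy's `RBFInterpolator` (the paper's own implementation is the Matlab code in its Appendix). The 24-patch chart values and the simulated device distortion below are invented:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Hypothetical calibration chart: 24 patches, device-measured RGB vs. reference RGB.
measured = rng.uniform(0.0, 255.0, size=(24, 3))
distortion = np.array([[0.90, 0.05, 0.00],
                       [0.05, 1.00, 0.02],
                       [0.00, 0.10, 0.95]])
reference = measured @ distortion + 5.0   # simulated device response

# Thin-plate-spline warp of 3D RGB space, fitted on the chart patches.
warp = RBFInterpolator(measured, reference, kernel='thin_plate_spline')

# Calibrate arbitrary pixels by evaluating the warp.
pixels = rng.uniform(0.0, 255.0, size=(10, 3))
calibrated = warp(pixels)
```

With no smoothing, the warp interpolates the chart patches exactly, and its degree-1 polynomial tail lets it reproduce purely affine device distortions without bending.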

  14. An operator calculus for surface and volume modeling

    NASA Technical Reports Server (NTRS)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.

  15. Multivariate optimum interpolation of surface pressure and surface wind over oceans

    NASA Technical Reports Server (NTRS)

    Bloom, S. C.; Baker, W. E.; Nestler, M. S.

    1984-01-01

    The present multivariate analysis method for surface pressure and winds incorporates ship wind observations into the analysis of surface pressure. For the specific case of 0000 GMT, on February 3, 1979, the additional data resulted in a global rms difference of 0.6 mb; individual maxima as large as 5 mb occurred over the North Atlantic and East Pacific Oceans. These differences are noted to be smaller than the analysis increments to the first-guess fields.

  16. Optimization of pressure probe placement and data analysis of engine-inlet distortion

    NASA Astrophysics Data System (ADS)

    Walter, S. F.

    The purpose of this research is to examine methods by which quantification of inlet flow distortion may be improved. Specifically, this research investigates how data interpolation affects results, how to optimize sampling locations in the flow, and how sensitive the results are to the number of sample locations. The main parameters that are indicative of a "good" design are total pressure recovery, mass flow capture, and distortion. This work focuses on the total pressure distortion, which describes the amount of non-uniformity that exists in the flow as it enters the engine. All engines must tolerate some level of distortion; however, too much distortion can cause the engine to stall or the inlet to unstart. Flow distortion is measured at the interface between the inlet and the engine. To determine inlet flow distortion, a combination of computational and experimental pressure data is generated and then collapsed into an index that indicates the amount of distortion. Computational simulations generate continuous contour maps, but experimental data is discrete. Researchers require continuous contour maps to evaluate the overall distortion pattern. There is no guidance on how best to manipulate discrete points into a continuous pattern. Using one experimental 320-probe data set and one 320-point computational data set, with three test runs each, this work compares the pressure results obtained using all 320 points of the original sets, both quantitatively and qualitatively, with results derived from selecting 40-grid-point subsets and interpolating back to 320 grid points. Each of the two 40-point sets was interpolated to 320 grid points using four different interpolation methods to establish the best method for interpolating small sets of data into an accurate, continuous contour map. Interpolation methods investigated are bilinear, spline, and Kriging in Cartesian space, as well as angular in polar space.
Spline interpolation methods should be used, as they result in the most accurate, precise, and visually correct predictions when compared with results achieved from the full data sets. Researchers were interested in whether fewer than the recommended 40 probes could be used, especially when placed in areas of high interest, while still obtaining equivalent or better results. For this investigation, the computational results from a two-dimensional inlet and experimental results of an axisymmetric inlet were used. To find the areas of interest, a uniform sampling of all possible locations was run through a Monte Carlo simulation with a varying number of probes. A probability density function of the resultant distortion index was plotted. Certain probes are required to come within the desired accuracy level of the distortion index based on the full data set. For the experimental results, all three test cases could be characterized with 20 probes. For the axisymmetric inlet, placing 40 probes in select locations could get the results for parameters of interest within less than 10% of the exact solution for almost all cases. For the two-dimensional inlet, the results were not as clear: 80 probes were required to get within 10% of the exact solution for all run numbers, although this is largely due to the small value of the exact result. The sensitivity of each probe added to the experiment was analyzed. Instead of looking at the overall pattern established by optimizing probe placements, the focus is on varying the number of sampled probes from 20 to 40. The number of points falling within a 1% tolerance band of the exact solution was counted as good points. The results were normalized for each data set and a general sensitivity function was found to determine the sensitivity of the results. A linear regression was used to generalize the results for all data sets used in this work.
However, the results can also be used directly, by comparing the number of good points obtained with various numbers of probes. The sensitivity of the results is higher when fewer probes are used and gradually tapers off near 40 probes; there is a bigger gain in good points when the number of probes is increased from 20 to 21 than from 39 to 40.
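The bilinear-versus-spline comparison can be sketched with SciPy on a synthetic pressure field. The 8×5 coarse grid (40 points) and 32×10 fine grid (320 points) mirror the probe counts above, while the field itself and its deficit region are invented:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator, RectBivariateSpline

def field(x, y):
    """Smooth synthetic total-pressure-recovery pattern with one deficit region."""
    return 0.95 - 0.05 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.2)

xc, yc = np.linspace(0, 1, 8), np.linspace(0, 1, 5)      # 40 "probe" locations
coarse = field(*np.meshgrid(xc, yc, indexing='ij'))

xf, yf = np.linspace(0, 1, 32), np.linspace(0, 1, 10)    # 320-point target grid
Xf, Yf = np.meshgrid(xf, yf, indexing='ij')

# Bilinear upsampling of the 40-point subset...
lin = RegularGridInterpolator((xc, yc), coarse)
bilinear = lin(np.column_stack([Xf.ravel(), Yf.ravel()])).reshape(Xf.shape)

# ...versus bicubic-spline upsampling of the same subset.
spline = RectBivariateSpline(xc, yc, coarse)(xf, yf)

truth = field(Xf, Yf)
err_linear = np.abs(bilinear - truth).max()
err_spline = np.abs(spline - truth).max()
```

On a smooth field like this one the spline reconstruction has the smaller maximum error, consistent with the record's conclusion that spline methods give the most accurate contour maps from sparse probe data.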

  17. Multivariate meta-analysis for non-linear and other multi-parameter associations

    PubMed Central

    Gasparrini, A; Armstrong, B; Kenward, M G

    2012-01-01

    In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043
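The fixed-effect core of the two-stage approach, multivariate inverse-variance pooling of per-study coefficient vectors, can be sketched in a few lines. The mvmeta package itself fits fuller (including random-effects) models in R; the two-coefficient spline estimates and covariances below are invented:

```python
import numpy as np

# Per-study estimates of a 2-parameter spline association, with covariances.
thetas = [np.array([0.10, -0.05]), np.array([0.14, -0.02]), np.array([0.08, -0.06])]
covs = [np.diag([0.010, 0.020]), np.diag([0.020, 0.010]), np.diag([0.015, 0.015])]

# Multivariate inverse-variance weighting: weight each study by the inverse
# of its covariance matrix, then pool.
W = [np.linalg.inv(S) for S in covs]
V_pool = np.linalg.inv(sum(W))                               # pooled covariance
theta_pool = V_pool @ sum(w @ t for w, t in zip(W, thetas))  # pooled coefficients
```

The pooled coefficient vector lies within the componentwise range of the study estimates, and its variance is smaller than any single study's, which is what makes the second-stage synthesis worthwhile.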

  18. The GPRIME approach to finite element modeling

    NASA Technical Reports Server (NTRS)

    Wallace, D. R.; Mckee, J. H.; Hurwitz, M. M.

    1983-01-01

    GPRIME, an interactive modeling system, runs on the CDC 6000 computers and the DEC VAX 11/780 minicomputer. This system includes three components: (1) GPRIME, a user-friendly geometric language and a processor to translate that language into geometric entities; (2) GGEN, an interactive data generator for 2-D models; and (3) SOLIDGEN, a 3-D solid modeling program. Each component has an interactive user interface with an extensive command set. All of these programs make use of a comprehensive B-spline mathematics subroutine library, which can be used for a wide variety of interpolation problems and other geometric calculations. Many other user aids, such as automatic saving of the geometric and finite element data bases and hidden line removal, are available. This interactive capability can produce a complete finite element model, generating an output file of grid and element data.

  19. Scientific Benchmarks for Guiding Macromolecular Energy Function Improvement

    PubMed Central

    Leaver-Fay, Andrew; O’Meara, Matthew J.; Tyka, Mike; Jacak, Ron; Song, Yifan; Kellogg, Elizabeth H.; Thompson, James; Davis, Ian W.; Pache, Roland A.; Lyskov, Sergey; Gray, Jeffrey J.; Kortemme, Tanja; Richardson, Jane S.; Havranek, James J.; Snoeyink, Jack; Baker, David; Kuhlman, Brian

    2013-01-01

    Accurate energy functions are critical to macromolecular modeling and design. We describe new tools for identifying inaccuracies in energy functions and guiding their improvement, and illustrate the application of these tools to improvement of the Rosetta energy function. The feature analysis tool identifies discrepancies between structures deposited in the PDB and low energy structures generated by Rosetta; these likely arise from inaccuracies in the energy function. The optE tool optimizes the weights on the different components of the energy function by maximizing the recapitulation of a wide range of experimental observations. We use the tools to examine three proposed modifications to the Rosetta energy function: improving the unfolded state energy model (reference energies), using bicubic spline interpolation to generate knowledge based torsional potentials, and incorporating the recently developed Dunbrack 2010 rotamer library (Shapovalov and Dunbrack, 2011). PMID:23422428

  20. Using electrical impedance to predict catheter-endocardial contact during RF cardiac ablation.

    PubMed

    Cao, Hong; Tungjitkusolmun, Supan; Choy, Young Bin; Tsai, Jang-Zern; Vorperian, Vicken R; Webster, John G

    2002-03-01

    During radio-frequency (RF) cardiac catheter ablation, there is little information to estimate the contact between the catheter tip electrode and endocardium because only the metal electrode shows up under fluoroscopy. We present a method that utilizes the electrical impedance between the catheter electrode and the dispersive electrode to predict the catheter tip electrode insertion depth into the endocardium. Since the resistivity of blood differs from the resistivity of the endocardium, the impedance increases as the catheter tip lodges deeper in the endocardium. In vitro measurements yielded the impedance-depth relations at 1, 10, 100, and 500 kHz. We predict the depth by spline curve interpolation using the obtained calibration curve. This impedance method yields reasonably accurate depth predictions. We also evaluated alternative methods, such as impedance difference and impedance ratio.
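The depth-prediction step can be sketched with a cubic spline through an impedance-depth calibration curve: because impedance rises monotonically with insertion depth, depth can be spline-interpolated as a function of impedance. The depths and impedances below are invented stand-ins for the in vitro measurements:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical single-frequency calibration: insertion depth (mm) vs. impedance (ohm).
depth = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
impedance = np.array([95.0, 102.0, 110.0, 119.0, 129.0, 140.0, 152.0])

# Swap the roles (impedance is strictly increasing) and spline-interpolate
# depth as a function of the measured impedance.
depth_of_z = CubicSpline(impedance, depth)

# Predict insertion depth for a new impedance reading.
predicted = float(depth_of_z(115.0))
```

A reading of 115 ohm falls between the 110 ohm (1.0 mm) and 119 ohm (1.5 mm) calibration points, so the predicted depth lands between them.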

  1. Empirical wind model for the middle and lower atmosphere. Part 2: Local time variations

    NASA Technical Reports Server (NTRS)

    Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Clark, R. R.; Franke, S. J.; Fraser, G. J.; Tsuda, T.; Vial, F.

    1993-01-01

    The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Local time variations in the mesosphere are derived from rocket soundings, incoherent scatter radar, MF radar, and meteor radar. Low-order spherical harmonics and Fourier series are used to describe these variations as a function of latitude and day of year with cubic spline interpolation in altitude. The model represents a smoothed compromise between the original data sources. Although agreement between various data sources is generally good, some systematic differences are noted. Overall root mean square differences between measured and model tidal components are on the order of 5 to 10 m/s.

  2. Estimating trajectories of energy intake through childhood and adolescence using linear-spline multilevel models.

    PubMed

    Anderson, Emma L; Tilling, Kate; Fraser, Abigail; Macdonald-Wallis, Corrie; Emmett, Pauline; Cribb, Victoria; Northstone, Kate; Lawlor, Debbie A; Howe, Laura D

    2013-07-01

    Methods for the assessment of changes in dietary intake across the life course are underdeveloped. We demonstrate the use of linear-spline multilevel models to summarize energy-intake trajectories through childhood and adolescence and their application as exposures, outcomes, or mediators. The Avon Longitudinal Study of Parents and Children assessed children's dietary intake several times between ages 3 and 13 years, using both food frequency questionnaires (FFQs) and 3-day food diaries. We estimated energy-intake trajectories for 12,032 children using linear-spline multilevel models. We then assessed the associations of these trajectories with maternal body mass index (BMI), and later offspring BMI, and also their role in mediating the relation between maternal and offspring BMIs. Models estimated average and individual energy intake at 3 years, and linear changes in energy intake from age 3 to 7 years and from age 7 to 13 years. By including the exposure (in this example, maternal BMI) in the multilevel model, we were able to estimate the average energy-intake trajectories across levels of the exposure. When energy-intake trajectories are the exposure for a later outcome (in this case offspring BMI) or a mediator (between maternal and offspring BMI), results were similar, whether using a two-step process (exporting individual-level intercepts and slopes from multilevel models and using these in linear regression/path analysis), or a single-step process (multivariate multilevel models). Trajectories were similar when FFQs and food diaries were assessed either separately, or when combined into one model. Linear-spline multilevel models provide useful summaries of trajectories of dietary intake that can be used as an exposure, outcome, or mediator.
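The linear-spline parameterization described above (intercept at age 3, separate slopes for ages 3-7 and 7-13) can be sketched as a basis-matrix construction. The knot ages follow the abstract; the intake values and the single-child ordinary-least-squares fit are invented stand-ins for the multilevel model:

```python
import numpy as np

def linear_spline_basis(age, knots=(3.0, 7.0, 13.0)):
    """Columns: intercept at first knot, slope for 3-7 y, slope for 7-13 y."""
    age = np.asarray(age, dtype=float)
    s1 = np.clip(age - knots[0], 0.0, knots[1] - knots[0])  # years accrued in 3-7
    s2 = np.clip(age - knots[1], 0.0, knots[2] - knots[1])  # years accrued in 7-13
    return np.column_stack([np.ones_like(age), s1, s2])

ages = np.array([3.0, 5.0, 7.0, 10.0, 13.0])
X = linear_spline_basis(ages)

# Invented energy intakes (kcal/day); OLS here stands in for the multilevel fit,
# which would additionally give child-specific random intercepts and slopes.
intake = np.array([1150.0, 1350.0, 1550.0, 1800.0, 2050.0])
coef, *_ = np.linalg.lstsq(X, intake, rcond=None)
```

The fitted coefficients read directly as intake at age 3 and the two piecewise-linear rates of change, which is what makes them usable downstream as exposures, outcomes, or mediators.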

  3. Spatial and spectral interpolation of ground-motion intensity measure observations

    USGS Publications Warehouse

    Worden, Charles; Thompson, Eric M.; Baker, Jack W.; Bradley, Brendon A.; Luco, Nicolas; Wilson, David

    2018-01-01

    Following a significant earthquake, ground‐motion observations are available for a limited set of locations and intensity measures (IMs). Typically, however, it is desirable to know the ground motions for additional IMs and at locations where observations are unavailable. Various interpolation methods are available, but because IMs or their logarithms are normally distributed, spatially correlated, and correlated with each other at a given location, it is possible to apply the conditional multivariate normal (MVN) distribution to the problem of estimating unobserved IMs. In this article, we review the MVN and its application to general estimation problems, and then apply the MVN to the specific problem of ground‐motion IM interpolation. In particular, we present (1) a formulation of the MVN for the simultaneous interpolation of IMs across space and IM type (most commonly, spectral response at different oscillator periods) and (2) the inclusion of uncertain observation data in the MVN formulation. These techniques, in combination with modern empirical ground‐motion models and correlation functions, provide a flexible framework for estimating a variety of IMs at arbitrary locations.
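The conditional-MVN estimator at the heart of this framework can be sketched directly from the standard partitioned-Gaussian formulas. The means and covariances below are invented; in practice they come from an empirical ground-motion model and spatial/spectral correlation functions:

```python
import numpy as np

def mvn_condition(mu1, mu2, S11, S12, S22, x1):
    """Mean and covariance of x2 given x1 for jointly normal (x1, x2).

    mu1, S11: mean/covariance of observed (log) IMs x1.
    mu2, S22: mean/covariance of unobserved IMs x2.
    S12: cross-covariance between x1 and x2.
    """
    K = S12.T @ np.linalg.inv(S11)              # Sigma_21 Sigma_11^{-1}
    return mu2 + K @ (x1 - mu1), S22 - K @ S12  # conditional mean, covariance

# Toy case: one observed site, one target site, correlation 0.8, unit variances.
mu_c, S_c = mvn_condition(np.array([0.0]), np.array([0.0]),
                          np.array([[1.0]]), np.array([[0.8]]),
                          np.array([[1.0]]), np.array([1.0]))
```

In the toy case a one-standard-deviation observation pulls the estimate at the target site up by the correlation (0.8), and the conditional variance shrinks from 1 to 1 - 0.8² = 0.36, quantifying how much the observation reduces uncertainty.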

  4. Space Weather Activities of IONOLAB Group: TEC Mapping

    NASA Astrophysics Data System (ADS)

    Arikan, F.; Yilmaz, A.; Arikan, O.; Sayin, I.; Gurun, M.; Akdogan, K. E.; Yildirim, S. A.

    2009-04-01

    Being a key player in Space Weather, ionospheric variability affects the performance of both communication and navigation systems. To improve the performance of these systems, ionosphere has to be monitored. Total Electron Content (TEC), line integral of the electron density along a ray path, is an important parameter to investigate the ionospheric variability. A cost-effective way of obtaining TEC is by using dual-frequency GPS receivers. Since these measurements are sparse in space, accurate and robust interpolation techniques are needed to interpolate (or map) the TEC distribution for a given region in space. However, the TEC data derived from GPS measurements contain measurement noise, model and computational errors. Thus, it is necessary to analyze the interpolation performance of the techniques on synthetic data sets that can represent various ionospheric states. By this way, interpolation performance of the techniques can be compared over many parameters that can be controlled to represent the desired ionospheric states. In this study, Multiquadrics, Inverse Distance Weighting (IDW), Cubic Splines, Ordinary and Universal Kriging, Random Field Priors (RFP), Multi-Layer Perceptron Neural Network (MLP-NN), and Radial Basis Function Neural Network (RBF-NN) are employed as the spatial interpolation algorithms. These mapping techniques are initially tried on synthetic TEC surfaces for parameter and coefficient optimization and determination of error bounds. Interpolation performance of these methods is compared on synthetic TEC surfaces over the parameters of sampling pattern, number of samples, the variability of the surface and the trend type in the TEC surfaces. By examining the performance of the interpolation methods, it is observed that Kriging, RFP, and NN each have important advantages and possible disadvantages depending on the given constraints.
It is also observed that the determining parameter in the error performance is the trend in the Ionosphere. Optimization of the algorithms in terms of their performance parameters (like the choice of the semivariogram function for Kriging algorithms and the hidden layer and neuron numbers for MLP-NN) mostly depends on the behavior of the ionosphere at that given time instant for the desired region. The sampling pattern and number of samples are the other important parameters that may contribute to the higher errors in reconstruction. For example, for all of the above listed algorithms, hexagonal regular sampling of the ionosphere provides the lowest reconstruction error, and the performance significantly degrades as the samples in the region become sparse and clustered. The optimized models and coefficients are applied to regional GPS-TEC mapping using the IONOLAB-TEC data (www.ionolab.org). Both Kriging combined with Kalman Filter and dynamic modeling of NN are also implemented as first trials of TEC and space weather predictions.
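Of the mapping techniques listed above, Inverse Distance Weighting is the simplest to sketch: each grid point takes a distance-weighted average of the station values. The station coordinates and TEC values (in TECU) below are invented:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weight each sample by 1/d^power, normalized."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power     # eps guards against division by zero
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

# Hypothetical GPS station locations (lat, lon) and slant-derived vertical TEC.
stations = np.array([[39.0, 32.8], [38.4, 27.1], [41.0, 29.0], [36.9, 30.7]])
tec = np.array([18.5, 21.0, 17.2, 24.3])

# Map TEC onto two query points of a regional grid.
grid = np.array([[39.9, 32.9], [38.0, 29.0]])
mapped = idw(stations, tec, grid)
```

IDW reproduces the station values at the stations themselves and, being a convex combination, can never extrapolate outside the observed TEC range, one reason it underperforms Kriging or NN when the ionosphere carries a strong regional trend.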

  5. The Grand Tour via Geodesic Interpolation of 2-frames

    NASA Technical Reports Server (NTRS)

    Asimov, Daniel; Buja, Andreas

    1994-01-01

    Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.

  6. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim

    2013-03-15

    Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space.
Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results for the phantom, porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.

  7. Using geographical information systems and cartograms as a health service quality improvement tool.

    PubMed

    Lovett, Derryn A; Poots, Alan J; Clements, Jake T C; Green, Stuart A; Samarasundera, Edgar; Bell, Derek

    2014-07-01

    Disease prevalence can be spatially analysed to provide support for service implementation and health care planning; these analyses often display geographic variation. A key challenge is to communicate these results to decision makers, with variable levels of Geographic Information Systems (GIS) knowledge, in a way that represents the data and allows for comprehension. The present research describes the combination of established GIS methods and software tools to produce a novel technique of visualising disease admissions and to help prevent misinterpretation of data and suboptimal decision making. The aim of this paper is to provide a tool that supports the ability of decision makers and service teams within health care settings to develop services more efficiently and better cater to the population; this tool has the advantage of conveying the position of populations, the size of populations and the severity of disease. A standard choropleth of the study region, London, is used to visualise total emergency admission values for Chronic Obstructive Pulmonary Disease and bronchiectasis using ESRI's ArcGIS software. Population estimates of the Lower Super Output Areas (LSOAs) are then used with the ScapeToad cartogram software tool, with the aim of visualising geography at uniform population density. An interpolation surface, in this case ArcGIS' spline tool, allows the creation of a smooth surface over the LSOA centroids for admission values on both standard and cartogram geographies. The final product of this research is the novel Cartogram Interpolation Surface (CartIS). The method provides a series of outputs culminating in the CartIS, applying an interpolation surface to a uniform population density. The cartogram effectively equalises the population density to remove visual bias from areas with a smaller population, while maintaining contiguous borders. 
CartIS reduces the number of extreme positive values, absent from the underlying data, that interpolation surfaces can introduce. This methodology provides a technique for combining simple GIS tools to create a novel output, CartIS, in a health service context with the key aim of improving visualisation communication techniques which highlight variation in small-scale geographies across large regions. CartIS represents the data more faithfully than interpolation, and visually highlights areas of extreme value more than cartograms, when either is used in isolation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Splines and control theory

    NASA Technical Reports Server (NTRS)

    Zhang, Zhimin; Tomlinson, John; Martin, Clyde

    1994-01-01

    In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We construct the traditional spline functions, including polynomial splines and the classical exponential spline. We also discover some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate properties of different spline approximations.

  9. Computational Methods for Inviscid and Viscous Two-and-Three-Dimensional Flow Fields.

    DTIC Science & Technology

    1975-01-01

    Difference Equations Over a Network, Watson Sci. Comput. Lab. Report, 1949. 173. Isaacson, E. and Keller, H. B., Analysis of Numerical Methods...element method has given a new impulse to the old mathematical theory of multivariate interpolation. We first study the one-dimensional case, which

  10. Repeatability and reproducibility of ribotyping and its computer interpretation.

    PubMed

    Lefresne, Gwénola; Latrille, Eric; Irlinger, Françoise; Grimont, Patrick A D

    2004-04-01

    Many molecular typing methods are difficult to interpret because their repeatability (within-laboratory variance) and reproducibility (between-laboratory variance) have not been thoroughly studied. In the present work, ribotyping of coryneform bacteria was the basis of a study involving within-gel and between-gel repeatability and between-laboratory reproducibility (two laboratories involved). The effect of different technical protocols, different algorithms, and different software for fragment size determination was studied. Analysis of variance (ANOVA) showed, within a laboratory, that there was no significant added variance between gels. However, between-laboratory variance was significantly higher than within-laboratory variance. This may be due to the use of different protocols. An experimental function was calculated to transform the data and make them compatible (i.e., erase the between-laboratory variance). The use of different interpolation algorithms (spline, Schaffer and Sederoff) was a significant source of variation in one laboratory only. The use of either Taxotron (Institut Pasteur) or GelCompar (Applied Maths) was not a significant source of added variation when the same algorithm (spline) was used. However, the use of Bio-Gene (Vilber Lourmat) dramatically increased the error (within laboratory, within gel) in one laboratory, while decreasing the error in the other laboratory; this might be due to automatic normalization attempts. These results were taken into account for building a database and performing automatic pattern identification using Taxotron. Conversion of the data considerably improved the identification of patterns irrespective of the laboratory in which the data were obtained.
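The fragment-size interpolation whose algorithm choice mattered above can be illustrated generically: a cubic spline expresses log-size as a function of gel migration distance for a size ladder, and unknown fragments are then sized by evaluating the spline. The ladder values below are hypothetical; this is not the Taxotron, GelCompar, or Bio-Gene implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical size ladder: known fragment sizes (bp) and migration distances (mm).
ladder_bp = np.array([10000, 6000, 4000, 2500, 1500, 1000, 500, 250])
ladder_mm = np.array([10.0, 14.5, 18.0, 23.0, 28.5, 33.0, 42.0, 50.0])

# Interpolate log10(size) as a smooth function of migration distance.
spline = CubicSpline(ladder_mm, np.log10(ladder_bp))

def fragment_size(distance_mm):
    """Estimate a fragment size (bp) from its migration distance."""
    return 10 ** spline(distance_mm)
```

The spline reproduces the ladder exactly at the calibration points; the choice of interpolation algorithm between them is precisely what the study found to be a source of between-laboratory variation.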

  11. Assessing Fire Weather Index using statistical downscaling and spatial interpolation techniques in Greece

    NASA Astrophysics Data System (ADS)

    Karali, Anna; Giannakopoulos, Christos; Frias, Maria Dolores; Hatzaki, Maria; Roussos, Anargyros; Casanueva, Ana

    2013-04-01

    Forest fires have always been present in the Mediterranean ecosystems, thus they constitute a major ecological and socio-economic issue. Over the last few decades, though, the number of forest fires has significantly increased, as well as their severity and impact on the environment. Local fire danger projections are often required when dealing with wildfire research. In the present study the application of statistical downscaling and spatial interpolation methods was performed to the Canadian Fire Weather Index (FWI), in order to assess forest fire risk in Greece. The FWI is used worldwide (including the Mediterranean basin) to estimate the fire danger in a generalized fuel type, based solely on weather observations. The meteorological inputs to the FWI System are noon values of dry-bulb temperature, air relative humidity, 10m wind speed and precipitation during the previous 24 hours. The statistical downscaling methods are based on a statistical model that takes into account empirical relationships between large scale variables (used as predictors) and local scale variables. In the framework of the current study the statistical downscaling portal developed by the Santander Meteorology Group (https://www.meteo.unican.es/downscaling) within the EU project CLIMRUN (www.climrun.eu) was used to downscale non-standard parameters related to forest fire risk. In this study, two different approaches were adopted. Firstly, the analogue downscaling technique was directly performed on the FWI index values and secondly the same downscaling technique was performed indirectly through the meteorological inputs of the index. In both cases, the statistical downscaling portal was used considering the ERA-Interim reanalysis as predictands due to the lack of observations at noon. Additionally, a three-dimensional (3D) interpolation method of position and elevation, based on Thin Plate Splines (TPS), was used to interpolate the ERA-Interim data used to calculate the index. 
Results from this method were compared with the statistical downscaling results obtained from the portal. Finally, FWI was computed using weather observations obtained from the Hellenic National Meteorological Service, mainly in the south continental part of Greece and a comparison with the previous results was performed.
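The 3D position-and-elevation TPS interpolation mentioned above can be sketched with SciPy's RBF interpolator over (longitude, latitude, elevation) triples. The station coordinates and temperatures below are synthetic (a linear lapse-rate field), chosen only to illustrate the mechanics; a thin-plate spline reproduces such a linear field exactly through its polynomial part.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Hypothetical station data: (lon, lat, elevation in km) and noon temperature (°C).
stations = np.column_stack([
    rng.uniform(20, 26, 40),    # longitude
    rng.uniform(35, 41, 40),    # latitude
    rng.uniform(0.0, 2.0, 40),  # elevation, km
])
# Synthetic truth: temperature falls ~6.5 °C per km and ~0.4 °C per degree north.
temps = 30.0 - 6.5 * stations[:, 2] - 0.4 * (stations[:, 1] - 35)

# Thin-plate spline in 3D (position + elevation).
tps = RBFInterpolator(stations, temps, kernel='thin_plate_spline')
query = np.array([[23.0, 38.0, 1.0]])  # interpolate at an unsampled point
temp_at_query = tps(query)[0]
```

In the study itself, such an interpolant would be evaluated at the grid points needed to compute the FWI inputs.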

  12. Global/local stress analysis of composite panels

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Knight, Norman F., Jr.

    1989-01-01

    A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independently of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
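The global-to-local transfer described above can be sketched in one dimension: coarse global-model displacements along the local-model boundary are splined, and the spline and its derivative supply displacements and rotations at the refined local nodes. A plain cubic spline stands in here for the paper's spline functions satisfying the plate bending equation, and the nodal values are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Global-model displacements at coarse nodes along the local-model boundary
# (hypothetical values; s is arc length along the boundary).
s_global = np.linspace(0.0, 1.0, 6)
w_global = np.array([0.0, 0.8, 1.1, 0.9, 0.4, 0.0])   # transverse displacement

# Spline the coarse solution and evaluate it at the refined local nodes.
spl = CubicSpline(s_global, w_global)
s_local = np.linspace(0.0, 1.0, 41)
w_local = spl(s_local)           # boundary displacements for the local model
theta_local = spl(s_local, 1)    # boundary rotations from the spline derivative
```

The local model then uses `w_local` and `theta_local` as prescribed boundary conditions and is solved independently of the global model.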

  13. Microsoft C#.NET program and electromagnetic depth sounding for large loop source

    NASA Astrophysics Data System (ADS)

    Prabhakar Rao, K.; Ashok Babu, G.

    2009-07-01

    A program, in the C# (C Sharp) language with the Microsoft .NET Framework, is developed to compute the normalized vertical magnetic field of a horizontal rectangular loop source placed on the surface of an n-layered earth. The field can be calculated either inside or outside the loop. Five C# classes with member functions in each class are designed to compute the kernel, the Hankel transform integral, the coefficients for cubic spline interpolation between computed values, and the normalized vertical magnetic field. The program computes the vertical magnetic field in the frequency domain using the integral expressions evaluated by a combination of straightforward numerical integration and the digital filter technique. The code utilizes different object-oriented programming (OOP) features. It finally computes the amplitude and phase of the normalized vertical magnetic field. The computed results are presented for geometric and parametric soundings. The code is developed in Microsoft Visual Studio .NET 2003 and uses various system class libraries.
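The role of spline interpolation between computed kernel values can be illustrated generically: evaluate an expensive integrand at a coarse set of points, bridge the gaps with a cubic spline, and integrate the cheap interpolant. The kernel below is a made-up stand-in (not a layered-earth kernel), and simple trapezoidal quadrature stands in for the paper's digital filter technique.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def kernel(lmbda):
    # Stand-in for an expensive layered-earth kernel evaluation.
    return np.exp(-lmbda) * np.cos(2.0 * lmbda)

def trapezoid(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

coarse = np.linspace(0.0, 5.0, 21)          # a few expensive evaluations
spl = CubicSpline(coarse, kernel(coarse))   # cheap interpolant between them

dense = np.linspace(0.0, 5.0, 2001)
approx = trapezoid(spl(dense), dense)       # integrate the spline
exact = trapezoid(kernel(dense), dense)     # reference: integrate the kernel densely
```

For a smooth kernel the spline-based integral agrees with the dense evaluation to within the spline's O(h⁴) interpolation error, which is why only a handful of expensive kernel evaluations are needed.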

  14. Color visualization for fluid flow prediction

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Speray, D. E.

    1982-01-01

    High-resolution raster scan color graphics allow variables to be presented as a continuum, in a color-coded picture that is referenced to a geometry such as a flow field grid or a boundary surface. Software is used to map a scalar variable such as pressure or temperature, defined on a two-dimensional slice of a flow field. The geometric shape is preserved in the resulting picture, and the relative magnitude of the variable is color-coded onto the geometric shape. The primary numerical process for color coding is an efficient search along a raster scan line to locate the quadrilateral block in the grid that bounds each pixel on the line. Tension spline interpolation is performed relative to the grid for specific values of the scalar variable, which is then color coded. When all pixels for the field of view are color-defined, a picture is played back from a memory device onto a television screen.
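The interpolate-then-color-code step along a scan line can be sketched as below. Linear interpolation stands in for the tension-spline step (SciPy has no tension splines), and the grid values are hypothetical; the point is the pipeline of per-pixel interpolation followed by binning into a small color palette.

```python
import numpy as np

# Hypothetical scalar values (e.g. pressure) at grid points along one scan line.
grid_x = np.linspace(0.0, 1.0, 6)
pressure = np.array([1.0, 1.4, 2.2, 2.0, 1.1, 0.8])

# Interpolate the scalar at every pixel centre on the scan line
# (linear interpolation here stands in for tension spline interpolation).
pixels = np.linspace(0.0, 1.0, 101)
values = np.interp(pixels, grid_x, pressure)

# Color-code by binning the interpolated values into 8 palette bands.
levels = np.linspace(values.min(), values.max(), 9)
color_index = np.digitize(values, levels[1:-1])   # palette index 0..7 per pixel
```

Each `color_index` entry would then select an entry of a lookup table for display.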

  15. Global/local stress analysis of composite structures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    1989-01-01

    A method for performing a global/local stress analysis is described and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independently of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.

  16. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.

  17. Study regarding the spline interpolation accuracy of the experimentally acquired data

    NASA Astrophysics Data System (ADS)

    Oanta, Emil M.; Danisor, Alin; Tamas, Razvan

    2016-12-01

    Experimental data processing is an issue that must be solved in almost all the domains of science. In engineering we usually have a large amount of data and we try to extract the useful signal which is relevant for the phenomenon under investigation. The criteria used to consider some points more relevant than some others may take into consideration various conditions which may be either phenomenon dependent, or general. The paper presents some of the ideas and tests regarding the identification of the best set of criteria used to filter the initial set of points in order to extract a subset which best fits the approximated function. If the function has regions where it is either constant, or it has a slow variation, fewer discretization points may be used. This means creating a simpler solution to process the experimental data, while keeping the accuracy within reasonably good limits.
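The point-filtering idea above can be sketched with a greedy thinning pass: a point is dropped whenever interpolation through the remaining points reproduces it to within a tolerance, so flat or slowly varying regions keep few points. This is an illustrative sketch (with linear interpolation as the reconstruction), not the criteria studied in the paper.

```python
import numpy as np

def thin_points(x, y, tol):
    """Greedily drop interior points that interpolation of the remaining
    points can reproduce to within `tol`."""
    keep = list(range(len(x)))
    changed = True
    while changed:
        changed = False
        for i in range(1, len(keep) - 1):
            trial = keep[:i] + keep[i + 1:]
            err = abs(np.interp(x[keep[i]], x[trial], y[trial]) - y[keep[i]])
            if err <= tol:
                del keep[i]      # point is redundant at this tolerance
                changed = True
                break
    return keep

x = np.linspace(0, 10, 101)
y = np.where(x < 5, 1.0, np.sin(x))   # flat region, then oscillation
kept = thin_points(x, y, tol=0.01)

# Reconstruction error of the thinned set over all original points.
recon = np.interp(x, x[kept], y[kept])
max_err = np.max(np.abs(recon - y))
```

The flat region collapses to almost no interior points while the oscillating region retains enough points to track the curvature.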

  18. Application of an improved Nelson-Nguyen analysis to eccentric, arbitrary profile liquid annular seals

    NASA Technical Reports Server (NTRS)

    Padavala, Satyasrinivas; Palazzolo, Alan B.; Vallely, Pat; Ryan, Steve

    1994-01-01

    An improved dynamic analysis for liquid annular seals with arbitrary profile, based on a method first proposed by Nelson and Nguyen, is presented. The improved first order solution incorporates a continuous interpolation of perturbed quantities in the circumferential direction. The original method uses an approximation scheme for circumferential gradients based on Fast Fourier Transforms (FFT). A simpler scheme based on cubic splines is found to be computationally more efficient, with better convergence at higher eccentricities. A new approach of computing dynamic coefficients based on an externally specified load is introduced. This improved analysis is extended to account for an arbitrarily varying seal profile in both the axial and circumferential directions. An example case of an elliptical seal with varying degrees of axial curvature is analyzed. A case study based on actual operating clearances of an interstage seal of the Space Shuttle Main Engine High Pressure Oxygen Turbopump is presented.
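The cubic-spline scheme for circumferential gradients can be sketched with a periodic cubic spline: the perturbed quantity is sampled around the circumference and the gradient is taken from the spline derivative. The sampled field below is a made-up perturbation, not seal data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A perturbed quantity sampled around the circumference (hypothetical field).
n = 32
theta = np.linspace(0.0, 2.0 * np.pi, n + 1)   # include endpoint for periodic BC
h = np.cos(theta) + 0.3 * np.sin(2 * theta)
h[-1] = h[0]                                   # enforce exact periodicity

# Periodic cubic spline, then the circumferential gradient from its derivative.
spl = CubicSpline(theta, h, bc_type='periodic')
dh = spl(theta, 1)

# Analytic gradient for comparison.
exact = -np.sin(theta) + 0.6 * np.cos(2 * theta)
```

With 32 circumferential samples the spline derivative matches the analytic gradient to a few parts in a thousand, illustrating why the spline scheme is a workable replacement for the FFT-based gradients.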

  19. Single-shot three-dimensional reconstruction based on structured light line pattern

    NASA Astrophysics Data System (ADS)

    Wang, ZhenZhou; Yang, YongMing

    2018-07-01

    Single-shot reconstruction of an object is of great importance in many applications in which the object is moving or its shape is non-rigid and changes irregularly. In this paper, we propose a single-shot structured light 3D imaging technique that calculates the phase map from the distorted line pattern. This technique makes use of image processing techniques to segment and cluster the projected structured light line pattern from one single captured image. The coordinates of the clustered lines are extracted to form a low-resolution phase matrix which is then transformed to a full-resolution phase map by spline interpolation. The 3D shape of the object is computed from the full-resolution phase map and the 2D camera coordinates. Experimental results show that the proposed method was able to reconstruct the three-dimensional shape of the object robustly from a single image.
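The low-to-full-resolution step can be sketched with a bicubic spline over the low-resolution phase matrix. The phase values below are synthetic (a smooth test surface), standing in for the matrix extracted from the clustered lines.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical low-resolution phase matrix (rows x columns).
lo_r = np.linspace(0.0, 1.0, 12)
lo_c = np.linspace(0.0, 1.0, 16)
phase_lo = np.sin(2 * np.pi * lo_r)[:, None] + lo_c[None, :]

# Upsample to a full-resolution phase map with a bicubic spline.
spl = RectBivariateSpline(lo_r, lo_c, phase_lo)   # default kx=ky=3, s=0
hi_r = np.linspace(0.0, 1.0, 120)
hi_c = np.linspace(0.0, 1.0, 160)
phase_hi = spl(hi_r, hi_c)

# Error against the smooth surface the low-res matrix was sampled from.
truth_hi = np.sin(2 * np.pi * hi_r)[:, None] + hi_c[None, :]
max_err = np.max(np.abs(phase_hi - truth_hi))
```

For a smooth phase surface the spline upsampling recovers the full-resolution map to well under a percent of the phase range, after which the 3D shape is computed from the phase map and camera coordinates.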

  20. On the dynamics of jellyfish locomotion via 3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Piper, Matthew; Kim, Jin-Tae; Chamorro, Leonardo P.

    2016-11-01

    The dynamics of jellyfish (Aurelia aurita) locomotion is experimentally studied via 3D particle tracking velocimetry. 3D locations of the bell tip are tracked over 1.5 cycles to describe the jellyfish path. Multiple positions of the jellyfish bell margin are initially tracked in 2D from four independent planes and individually projected in 3D based on the jellyfish path and geometrical properties of the setup. A cubic spline interpolation and the exponentially weighted moving average are used to estimate derived quantities, including velocity and acceleration of the jellyfish locomotion. We will discuss distinctive features of the jellyfish 3D motion at various swimming phases, and will provide insight on the 3D contraction and relaxation in terms of the locomotion, the steadiness of the bell margin eccentricity, and local Reynolds number based on the instantaneous mean diameter of the bell.
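The derived-quantity pipeline above (cubic spline for derivatives, exponentially weighted moving average for smoothing) can be sketched on a synthetic track; the trajectory below is a made-up sinusoid, not jellyfish data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical bell-tip track: time stamps and one position coordinate.
t = np.linspace(0.0, 3.0, 90)    # s, ~1.5 swim cycles at 0.5 Hz
z = 5.0 * np.sin(np.pi * t)      # mm

# Velocity and acceleration from the spline derivatives.
spl = CubicSpline(t, z)
vel = spl(t, 1)
acc = spl(t, 2)

def ewma(x, alpha=0.2):
    """Exponentially weighted moving average used to smooth derived quantities."""
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

vel_smooth = ewma(vel)
```

Differentiating the spline rather than finite-differencing the raw samples keeps the velocity and acceleration estimates smooth and accurate away from the track endpoints.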

  1. Model-independent partial wave analysis using a massively-parallel fitting framework

    NASA Astrophysics Data System (ADS)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
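The spline-anchored amplitude idea can be sketched outside GooFit: complex S-wave values are fixed at a few m² control points and a cubic spline interpolates between them. The magnitudes and phases below are made-up anchors, and splining the real and imaginary parts is one simple convention (analyses may also spline magnitude and phase directly).

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical S-wave anchors: m^2(h+h-) control points with magnitude and phase.
m2 = np.linspace(0.3, 3.0, 10)
mag = 1.0 / (1.0 + m2)        # made-up magnitudes
phase = 0.5 * np.arctan(m2)   # made-up phases (rad)
amp = mag * np.exp(1j * phase)

# Interpolate the real and imaginary parts between the anchor points.
re = CubicSpline(m2, amp.real)
im = CubicSpline(m2, amp.imag)

def s_wave(s):
    """Model-independent S-wave amplitude at invariant mass squared s."""
    return re(s) + 1j * im(s)
```

In the fit, the anchor magnitudes and phases are the free parameters; the spline supplies the amplitude everywhere else in the Dalitz plot.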

  2. Split spline screw

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A split spline screw type payload fastener assembly, including three identical male and female type split spline sections, is discussed. The male spline sections are formed on the head of a male type spline driver. Each of the split male type spline sections has an outwardly projecting load bearing segment including a convex upper surface which is adapted to engage a complementary concave surface of a female spline receptor in the form of a hollow bolt head. Additionally, the male spline section also includes a horizontal spline releasing segment and a spline tightening segment below each load bearing segment. The spline tightening segment consists of a vertical web of constant thickness. The web has at least one flat vertical wall surface which is designed to contact a generally flat vertically extending wall surface tab of the bolt head. Mutual interlocking and unlocking of the male and female splines results upon clockwise and counterclockwise turning of the driver element.

  3. Estimating suspended sediment load with multivariate adaptive regression spline, teaching-learning based optimization, and artificial bee colony models.

    PubMed

    Yilmaz, Banu; Aras, Egemen; Nacar, Sinan; Kankal, Murat

    2018-05-23

    The functional life of a dam is often determined by the rate of sediment delivery to its reservoir. Therefore, an accurate estimate of the sediment load in rivers with dams is essential for designing and predicting a dam's useful lifespan. The most credible method is direct measurement of sediment input, but this can be very costly and it cannot always be implemented at all gauging stations. In this study, we tested various regression models to estimate suspended sediment load (SSL) at two gauging stations on the Çoruh River in Turkey, including artificial bee colony (ABC), teaching-learning-based optimization algorithm (TLBO), and multivariate adaptive regression splines (MARS). These models were also compared with one another and with classical regression analyses (CRA). Streamflow values and previously collected data of SSL were used as model inputs with predicted SSL data as output. Two different training and testing dataset configurations were used to reinforce the model accuracy. For the MARS method, the root mean square error value was found to range between 35% and 39% for the two test gauging stations, which was lower than the errors for the other models. Error values were even lower (7% to 15%) using another dataset. Our results indicate that simultaneous measurements of streamflow with SSL provide the most effective parameter for obtaining accurate predictive models and that MARS is the most accurate model for predicting SSL. Copyright © 2017 Elsevier B.V. All rights reserved.
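The MARS idea, piecewise-linear hinge functions combined by least squares, can be sketched by hand without a MARS library. The knots below are fixed rather than selected by the MARS forward/backward passes, and the streamflow/SSL data are synthetic, so this only illustrates the basis, not the fitted models of the study.

```python
import numpy as np

def hinge_basis(x, knots):
    """MARS-style basis: intercept plus max(0, x-k) and max(0, k-x) per knot."""
    cols = [np.ones_like(x)]
    for k in knots:
        cols.append(np.maximum(0.0, x - k))
        cols.append(np.maximum(0.0, k - x))
    return np.column_stack(cols)

# Synthetic rating-curve-like data: sediment load vs streamflow (hypothetical).
rng = np.random.default_rng(2)
q = rng.uniform(10, 200, 300)                          # streamflow
ssl = np.where(q < 80, 0.5 * q, 40 + 2.5 * (q - 80))   # piecewise-linear truth
ssl_noisy = ssl + rng.normal(0, 1.0, q.size)

# Least-squares fit in the hinge basis.
B = hinge_basis(q, knots=[50, 80, 120])
coef, *_ = np.linalg.lstsq(B, ssl_noisy, rcond=None)
pred = B @ coef
rmse = np.sqrt(np.mean((pred - ssl) ** 2))
```

Because the basis contains a hinge at the true breakpoint (q = 80), the fit recovers the underlying piecewise-linear relation up to the noise level.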

  4. Modelling soil sodium and potassium adsorption ratio (SPAR) in the immediate period after a grassland fire in Lithuania.

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Cerda, Artemi; Misiūnė, Ieva

    2015-04-01

    The soil sodium and potassium adsorption ratio (SPAR) is an index that measures the amount of sodium and potassium adsorbed onto clay and organic matter surfaces, in relation to calcium and magnesium. It assesses the potential for soil dispersion or flocculation, a process with implications for soil hydraulic properties and erosion (Sarah, 2004). Depending on severity and the type of ash produced, fire can change the soil nutrient status in the immediate post-fire period (Bodi et al. 2014). Ash releases a large amount of cations onto the soil surface, owing to its high pH. Previous works showed that SPAR from ash slurries is higher than in solutions produced from litter (Pereira et al., 2014a). Normally the spatial distribution of topsoil nutrients in the immediate period after the fire is very heterogeneous, due to the different impacts of fire. Thus it is important to identify the most accurate interpolation method in order to characterize with better precision the impacts of fire on soil properties. The objective of this work is to test several interpolation methods. The study area is located near Vilnius (Lithuania) at 54° 42' N, 25° 08' E, 158 masl. Four days after the fire, a sampling plot was laid out in the burned area. Twenty five samples were collected from the topsoil. The SPAR index was calculated according to the formula (Na⁺+K⁺)/(Ca²⁺+Mg²⁺)^(1/2) (Sarah, 2004). Data followed the normal distribution, thus no transformation was required prior to data modelling. Several well-known interpolation models were tested: Inverse Distance Weighting (IDW) with powers of 1, 2, 3 and 4; Radial Basis Functions (RBF), namely Inverse Multiquadratic (IMT), Multilog (MTG), Multiquadratic (MTQ), Natural Cubic Spline (NCS) and Thin Plate Spline (TPS); Local Polynomial (LP) with powers of 1 and 2; and Ordinary Kriging. The best interpolator was the one with the lowest Root Mean Square Error (RMSE) (Pereira et al., 2014b). 
The results showed that, on average, the SPAR index was 0.85, with a minimum of 0.18, a maximum of 1.55, a standard deviation of 0.38 and a coefficient of variation of 44.70%. No previous works were carried out on fire-affected soils; however, compared to the ash slurries of previous works (Pereira et al., 2014a), the values were higher. Among all the interpolation methods tested, the most accurate was IDW 1 (RMSE=0.393), and the least precise was NCS (RMSE=0.542). This shows that the data distribution is highly variable in space, since IDW methods are better interpolators for irregularly distributed data. The high spatial variability of SPAR is very likely to affect soil hydraulic properties and plant recovery in the immediate period after the fire. More research is needed to identify the spatio-temporal impacts of fire on soil SPAR. Acknowledgments POSTFIRE (Soil quality, erosion control and plant cover recovery under different post-fire management scenarios, CGL2013-47862-C2-1-R), funded by the Spanish Ministry of Economy and Competitiveness; Fuegored; RECARE (Preventing and Remediating Degradation of Soils in Europe Through Land Care, FP7-ENV-2013-TWO STAGE), funded by the European Commission; and the COST action ES1306 (Connecting European connectivity research). References Bodi, M., Martin, D.A., Santin, C., Balfour, V., Doerr, S.H., Pereira, P., Cerda, A., Mataix-Solera, J. (2014) Wildland fire ash: production, composition and eco-hydro-geomorphic effects. Earth-Science Reviews, 130, 103-127. Pereira, P., Úbeda, X., Martin, D., Mataix-Solera, J., Cerdà, A., Burguet, M. (2014a) Wildfire effects on extractable elements in ash from Pinus pinaster forest in Portugal. Hydrological Processes, 28, 3681-3690. Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Arcenegui, V., Zavala, L. (2014b) Modelling the impacts of wildfire on ash thickness in the immediate period after the fire. Land Degradation and Development. DOI: 10.1002/ldr.2195 Sarah, P. 
(2004) Soil sodium and potassium adsorption ratio along a Mediterranean-arid transect. Journal of Arid Environments, 59, 731-741.
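The interpolator ranking used above, lowest cross-validated RMSE wins, can be sketched with leave-one-out validation of IDW at several powers. The 25 sample locations and SPAR-like values below are synthetic stand-ins for the plot data.

```python
import numpy as np

def idw(train_xy, train_v, query_xy, power):
    """Inverse Distance Weighting prediction at query points."""
    d = np.linalg.norm(query_xy[:, None, :] - train_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w @ train_v) / w.sum(axis=1)

def loo_rmse(xy, v, power):
    """Leave-one-out RMSE, the criterion used to rank the interpolators."""
    errs = []
    for i in range(len(v)):
        mask = np.arange(len(v)) != i
        pred = idw(xy[mask], v[mask], xy[i:i + 1], power)
        errs.append(pred[0] - v[i])
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, (25, 2))              # 25 sample points, as in the study
v = 0.85 + 0.38 * rng.standard_normal(25)     # SPAR-like values (hypothetical)

scores = {p: loo_rmse(xy, v, p) for p in (1, 2, 3, 4)}
best = min(scores, key=scores.get)            # lowest RMSE wins
```

The same loop extends directly to the RBF and kriging candidates; whichever interpolator minimizes the cross-validated RMSE is retained for mapping.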

  5. Rational-spline approximation with automatic tension adjustment

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Kerr, P. A.

    1984-01-01

    An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.
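The limiting behavior described above can be illustrated on one commonly used rational basis term with a tension parameter p; the exact segment form varies between formulations, so this particular basis is an assumption for illustration only. At p = 0 the term is the plain cubic t³; as p grows it is driven toward zero on the interval interior, leaving the linear part of the segment dominant.

```python
import numpy as np

def rational_basis(t, p):
    """One cubic basis term of a tension rational spline segment:
    t**3 / (1 + p*(1 - t)) for t in [0, 1].
    p = 0 recovers the cubic term t**3; large p suppresses it on the
    interval interior (its value at t = 1 is retained by construction,
    as the mirrored term (1-t)**3 / (1 + p*t) is at t = 0)."""
    return t ** 3 / (1.0 + p * (1.0 - t))

t = np.linspace(0.0, 1.0, 5)
cubic_limit = rational_basis(t, 0.0)      # identical to t**3
tight_limit = rational_basis(0.5, 1e6)    # essentially zero mid-interval
```

This is the mechanism behind the algorithm's per-interval tension adjustment: raising p on an interval pulls the rational spline there toward the linear interpolant.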

  6. Integrated GIS and multivariate statistical analysis for regional scale assessment of heavy metal soil contamination: A critical review.

    PubMed

    Hou, Deyi; O'Connor, David; Nathanail, Paul; Tian, Li; Ma, Yan

    2017-12-01

    Heavy metal soil contamination is associated with potential toxicity to humans or ecotoxicity. Scholars have increasingly used a combination of geographical information science (GIS) with geostatistical and multivariate statistical analysis techniques to examine the spatial distribution of heavy metals in soils at a regional scale. A review of such studies showed that most soil sampling programs were based on grid patterns and composite sampling methodologies. Many programs intended to characterize various soil types and land use types. The most often used sampling depth intervals were 0-0.10 m, or 0-0.20 m, below surface; and the sampling densities used ranged from 0.0004 to 6.1 samples per km², with a median of 0.4 samples per km². The most widely used spatial interpolators were inverse distance weighted interpolation and ordinary kriging; and the most often used multivariate statistical analysis techniques were principal component analysis and cluster analysis. The review also identified several determining and correlating factors in heavy metal distribution in soils, including soil type, soil pH, soil organic matter, land use type, Fe, Al, and heavy metal concentrations. The major natural and anthropogenic sources of heavy metals were found to derive from lithogenic origin, roadway and transportation, atmospheric deposition, wastewater and runoff from industrial and mining facilities, fertilizer application, livestock manure, and sewage sludge. This review argues that the full potential of integrated GIS and multivariate statistical analysis for assessing heavy metal distribution in soils on a regional scale has not yet been fully realized. 
It is proposed that future research be conducted to map multivariate results in GIS to pinpoint specific anthropogenic sources, to analyze temporal trends in addition to spatial patterns, to optimize modeling parameters, and to expand the use of different multivariate analysis tools beyond principal component analysis (PCA) and cluster analysis (CA). Copyright © 2017 Elsevier Ltd. All rights reserved.
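The PCA step that the review identifies as the workhorse multivariate technique can be sketched via SVD of a standardized concentration matrix. The soil samples below are synthetic, generated from two hypothetical latent sources (roadway and lithogenic) so that the first two components dominate by construction.

```python
import numpy as np

# Synthetic soil samples x heavy-metal concentrations, driven by two
# hypothetical latent sources (columns stand in for Pb, Zn, Cd, Ni).
rng = np.random.default_rng(4)
traffic = rng.standard_normal(200)   # latent roadway/transportation source
parent = rng.standard_normal(200)    # latent lithogenic source
X = np.column_stack([
    2.0 * traffic + 0.1 * rng.standard_normal(200),
    1.5 * traffic + 0.1 * rng.standard_normal(200),
    1.8 * parent + 0.1 * rng.standard_normal(200),
    1.2 * parent + 0.1 * rng.standard_normal(200),
])

# PCA via SVD of the standardized matrix.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # variance fraction per component
scores_pc = Z @ Vt.T                  # sample scores, mappable in a GIS
```

Mapping the component scores back onto sample coordinates in a GIS, as the review proposes, is what links each statistical source to a spatial pattern.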

  7. Backfitting in Smoothing Spline Anova, with Application to Historical Global Temperature Data

    NASA Astrophysics Data System (ADS)

    Luo, Zhen

    In the attempt to estimate the temperature history of the earth using the surface observations, various biases can exist. An important source of bias is the incompleteness of sampling over both time and space. A few methods have been proposed to deal with this problem. Although they can correct some biases resulting from incomplete sampling, they have ignored some other significant biases. In this dissertation, a smoothing spline ANOVA approach, which is a multivariate function estimation method, is proposed to deal simultaneously with various biases resulting from incomplete sampling. Moreover, an advantage of this method is that various components of the estimated temperature history can be obtained with a limited amount of information stored. This method can also be used for detecting erroneous observations in the data base. The method is illustrated through an example of modeling winter surface air temperature as a function of year and location. Extensions to more complicated models are discussed. The linear system associated with the smoothing spline ANOVA estimates is too large to be solved by full matrix decomposition methods. A computational procedure combining the backfitting (Gauss-Seidel) algorithm and the iterative imputation algorithm is proposed. This procedure takes advantage of the tensor product structure in the data to make the computation feasible in an environment of limited memory. Various related issues are discussed, e.g., the computation of confidence intervals and the techniques to speed up the convergence of the backfitting algorithm, such as collapsing and successive over-relaxation.
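The backfitting (Gauss-Seidel) idea can be sketched on a two-component additive model: each component is re-smoothed against the partial residuals of the others until the fit stabilizes. A simple kernel smoother stands in here for the spline smoothers of smoothing spline ANOVA, and the data are synthetic.

```python
import numpy as np

def smooth(x, r, bw=0.3):
    """Nadaraya-Watson kernel smoother (a stand-in for a spline smoother)."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bw) ** 2)
    return (w @ r) / w.sum(1)

def backfit(x1, x2, y, iters=20):
    """Gauss-Seidel backfitting for the additive model y ~ mu + f1(x1) + f2(x2)."""
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    mu = y.mean()
    for _ in range(iters):
        f1 = smooth(x1, y - mu - f2)   # refit f1 against partial residuals
        f1 -= f1.mean()                # center for identifiability
        f2 = smooth(x2, y - mu - f1)
        f2 -= f2.mean()
    return mu, f1, f2

rng = np.random.default_rng(5)
x1 = rng.uniform(-2, 2, 300)
x2 = rng.uniform(-2, 2, 300)
y = np.sin(x1) + 0.5 * x2 ** 2 + 0.05 * rng.standard_normal(300)

mu, f1, f2 = backfit(x1, x2, y)
fit = mu + f1 + f2
corr = np.corrcoef(fit, y)[0, 1]
```

Each sweep solves one block of the large linear system at a time, which is exactly the property that makes the approach feasible in limited memory.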

  8. Using a latent variable model with non-constant factor loadings to examine PM2.5 constituents related to secondary inorganic aerosols.

    PubMed

    Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N

    2016-04-01

    Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, from our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 fine speciation data, we have observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model with non-constant factor loadings that change over time and space, modelled with P-splines penalized under the generalized cross-validation (GCV) criterion. The model is implemented using the Expectation-Maximization (EM) algorithm, and we select the multiple spline smoothing parameters by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.

  9. A pseudoinverse deformation vector field generator and its applications

    PubMed Central

    Yan, C.; Zhong, H.; Murphy, M.; Weiss, E.; Siebers, J. V.

    2010-01-01

    Purpose: To present, implement, and test a self-consistent pseudoinverse displacement vector field (PIDVF) generator, which preserves the location of information mapped back-and-forth between image sets. Methods: The algorithm is an iterative scheme based on nearest neighbor interpolation and a subsequent iterative search. Performance of the algorithm is benchmarked using a lung 4DCT data set with six CT images from different breathing phases and eight CT images of a single prostate patient acquired on different days. A diffeomorphic deformable image registration is used to validate our PIDVFs. Additionally, the PIDVF is used to measure the self-consistency of two nondiffeomorphic algorithms which do not use a self-consistency constraint: the ITK Demons algorithm for the lung patient images and an in-house B-spline algorithm for the prostate patient images. Both the Demons and B-spline results have been QAed through contour comparison. Self-consistency is determined by using a DIR to generate a displacement vector field (DVF) between reference image R and study image S (DVF(R-S)). The same DIR is used to generate DVF(S-R). Additionally, our PIDVF generator is used to create PIDVF(S-R). Back-and-forth mapping of a set of points (used as surrogates of contours) using DVF(R-S) and DVF(S-R) is compared to back-and-forth mapping performed with DVF(R-S) and PIDVF(S-R). The Euclidean distances between the original unmapped points and the mapped points are used as a self-consistency measure. Results: Test results demonstrate that the consistency error observed in back-and-forth mappings can be reduced two to nine times in point mapping and 1.5 to three times in dose mapping when the PIDVF is used in place of the B-spline algorithm. These self-consistency improvements are not affected by exchanging R and S. It is also demonstrated that differences between DVF(S-R) and PIDVF(S-R) can be used as a criterion to check the quality of the DVF.
Conclusions: Use of a DVF together with its PIDVF will improve the self-consistency of point, contour, and dose mappings in image guided adaptive therapy. PMID:20384247
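The back-and-forth self-consistency idea can be illustrated with a minimal one-dimensional analogue. This is a toy sketch, not the paper's generator: the real PIDVF works on 3-D image grids with a nearest-neighbor initialization and iterative search, which this fixed-point iteration replaces with linear interpolation.

```python
import numpy as np

def invert_dvf_1d(x, v, n_iter=50):
    """Fixed-point iteration v_inv(x) = -v(x + v_inv(x)) for inverting a 1-D
    displacement field v sampled on grid x.  The iteration converges when
    the displacement gradient magnitude is below 1 (a contraction)."""
    v_inv = np.zeros_like(v)
    for _ in range(n_iter):
        v_inv = -np.interp(x + v_inv, x, v)
    return v_inv
```

Mapping a point forward with the DVF and back with its pseudoinverse should return it to its starting position; the residual distance is exactly the self-consistency measure described above.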

  10. Three-dimensional prediction of soil physical, chemical, and hydrological properties in a forested catchment of the Santa Catalina CZO

    NASA Astrophysics Data System (ADS)

    Shepard, C.; Holleran, M.; Lybrand, R. A.; Rasmussen, C.

    2014-12-01

    Understanding critical zone evolution and function requires an accurate assessment of local soil properties. Two-dimensional (2D) digital soil mapping provides a general assessment of soil characteristics across a sampled landscape, but lacks the ability to predict soil properties with depth. The utilization of mass-preserving spline functions enables the extrapolation of soil properties with depth, extending predictive functions to three dimensions (3D). The present study was completed in the Marshall Gulch (MG) catchment, located in the Santa Catalina Mountains, 30 km northwest of Tucson, Arizona, as part of the Santa Catalina-Jemez Mountains Critical Zone Observatory. Twenty-four soil pits were excavated and described following standard procedures. Mass-preserving splines were used to extrapolate mass carbon (kg C m-2); percent clay, silt, and sand (%); sodium mass flux (kg Na m-2); and pH for the 24 sampled soil pits in 1-cm depth increments. Saturated volumetric water content (θs) and volumetric water content at 10 kPa (θ10) were predicted using ROSETTA and established empirical relationships. The described profiles were all sampled to differing depths; to compensate for the unevenness of the profile descriptions, the soil depths were standardized from 0.0 to 1.0 and then split into five equal standard depth sections. A logit transformation was used to normalize the target variables. Step-wise regressions were calculated using available environmental covariates to predict each variable across the catchment in each depth section, and interpolated model residuals were added back to the predicted layers to generate the final soil maps. Logit-transformed R2 for the predictive functions varied widely, ranging from 0.20 to 0.79, with logit-transformed RMSE ranging from 0.15 to 2.77.
The MG catchment was further classified into clusters with similar properties based on the environmental covariates, and representative depth functions were calculated for each target variable in each cluster. Mass-preserving splines combined with stepwise regressions are an effective tool for predicting soil physical, chemical, and hydrological properties with depth, enhancing our understanding of the critical zone.
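The depth standardization and logit steps described above can be sketched as follows; the equal spacing of the standard-depth sections and the clipping constant in the logit are assumptions of this illustration.

```python
import numpy as np

def standardize_depths(top, bottom, max_depth, n_sections=5):
    """Rescale horizon depths to [0, 1] and assign each horizon to one of
    n_sections equal standard-depth sections."""
    mid = (np.asarray(top, float) + np.asarray(bottom, float)) / 2.0 / max_depth
    section = np.minimum((mid * n_sections).astype(int), n_sections - 1)
    return mid, section

def logit(p, eps=1e-6):
    """Logit transform used to normalize proportion-type target variables;
    eps clips values away from 0 and 1 to keep the transform finite."""
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))
```

Predictions made on the logit scale are mapped back to proportions with the inverse transform 1/(1 + exp(-z)).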

  11. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    NASA Astrophysics Data System (ADS)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
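A cubic spline with equidistant sampling points used to split a series into background and residual fluctuations might be sketched with SciPy's least-squares spline. The knot spacing and test series are illustrative; this is the conventional single-pass spline, not the authors' repeating spline.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def spline_background(t, y, knot_spacing):
    """Least-squares cubic spline with equidistant interior knots.
    Returns the background estimate and the residual fluctuations."""
    knots = np.arange(t[0] + knot_spacing, t[-1], knot_spacing)
    spl = LSQUnivariateSpline(t, y, knots, k=3)
    background = spl(t)
    return background, y - background
```

Choosing the knot spacing long relative to the fluctuation periods of interest is what makes the residual carry the wave signal, mirroring the gravity-wave-sensitive spacing mentioned above.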

  12. An interoperable standard system for the automatic generation and publication of the fire risk maps based on Fire Weather Index (FWI)

    NASA Astrophysics Data System (ADS)

    Julià Selvas, Núria; Ninyerola Casals, Miquel

    2015-04-01

    An automatic system has been implemented to predict fire risk in the Principality of Andorra, a small country located in the eastern Pyrenees mountain range, bordered by Catalonia and France; owing to its location, its landscape is a set of rugged mountains with an average elevation of around 2000 meters. The system is based on the Fire Weather Index (FWI), which consists of several components, each measuring a different aspect of fire danger, calculated from the values of the weather variables at midday. CENMA (Centre d'Estudis de la Neu i de la Muntanya d'Andorra) operates a network of around 10 automatic meteorological stations, located in different places, peaks and valleys, that measure weather data such as relative humidity, wind direction and speed, surface temperature, rainfall and snow cover every ten minutes. These data are sent daily and automatically to the implemented system, where they are processed to filter incorrect measurements and to homogenize measurement units. The data are then used to calculate all components of the FWI at midday at the level of each station, building a database with the homogenized measurements and the FWI components for each weather station. In order to extend and model these data over the whole Andorran territory and to obtain a continuous map, an interpolation method based on a multiple regression with spline interpolation of the residuals has been implemented. This interpolation considers the FWI data as well as other relevant predictors such as latitude, altitude, global solar radiation and distance to the sea. The obtained values (maps) are validated using a leave-one-out cross-validation method. The discrete and continuous maps are rendered as tiled raster maps and published in a web portal conforming to the Web Map Service (WMS) standard of the Open Geospatial Consortium (OGC). Metadata and other reference maps (fuel maps, topographic maps, etc.) are also available from this geoportal.
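The multiple-regression-plus-residual-interpolation scheme might be sketched as below. The station coordinates, the single illustrative covariate, and the use of SciPy's thin-plate-spline interpolator as the residual model are all assumptions; the operational system uses several predictors (latitude, altitude, radiation, sea distance).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def regression_with_spline_residuals(xy_obs, covars_obs, z_obs, xy_new, covars_new):
    """Fit a multiple linear regression of the target (e.g. an FWI component)
    on covariates, then interpolate the station residuals spatially with a
    thin-plate spline and add them back on the prediction grid."""
    A = np.column_stack([np.ones(len(z_obs)), covars_obs])
    beta, *_ = np.linalg.lstsq(A, z_obs, rcond=None)
    resid_surface = RBFInterpolator(xy_obs, z_obs - A @ beta,
                                    kernel='thin_plate_spline')
    A_new = np.column_stack([np.ones(len(xy_new)), covars_new])
    return A_new @ beta + resid_surface(xy_new)
```

Because the residual spline interpolates exactly (no smoothing by default), the combined surface honours the station values, which is convenient for leave-one-out validation.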

  13. Automatic stent strut detection in intravascular OCT images using image processing and classification technique

    NASA Astrophysics Data System (ADS)

    Lu, Hong; Gargesha, Madhusudhana; Wang, Zhao; Chamie, Daniel; Attizani, Guilherme F.; Kanaya, Tomoaki; Ray, Soumya; Costa, Marco A.; Rollins, Andrew M.; Bezerra, Hiram G.; Wilson, David L.

    2013-02-01

    Intravascular OCT (iOCT) is an imaging modality with ideal resolution and contrast to provide accurate in vivo assessments of tissue healing following stent implantation. Our Cardiovascular Imaging Core Laboratory has served >20 international stent clinical trials with >2000 stents analyzed. Each stent requires 6-16 hrs of manual analysis time, and we are developing highly automated software to reduce this extreme effort. Using a classification technique, physically meaningful image features, forward feature selection to limit overtraining, and leave-one-stent-out cross validation, we detected stent struts. To determine tissue coverage areas, we estimated stent "contours" by fitting detected struts and interpolation points from linearly interpolated tissue depths to a periodic cubic spline. Tissue coverage area was obtained by subtracting lumen area from stent area. Detection was compared against manual analysis of 40 pullbacks. We obtained recall = 90+/-3% and precision = 89+/-6%. When struts deemed not bright enough for manual analysis were taken into consideration, precision improved to 94+/-6%. This approached inter-observer variability (recall = 93%, precision = 96%). Differences in stent and tissue coverage areas are 0.12 +/- 0.41 mm2 and 0.09 +/- 0.42 mm2, respectively. We are developing software which will enable visualization, review, and editing of automated results, so as to provide a comprehensive stent analysis package. This should enable better and cheaper stent clinical trials, so that manufacturers can optimize the myriad of parameters (drug, coverage, bioresorbable versus metal, etc.) for stent design.
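Fitting detected strut centres to a periodic cubic spline, as in the stent-contour step above, can be sketched with SciPy. The strut coordinates here are synthetic; the real pipeline also feeds in interpolated tissue-depth points.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_stent_contour(x, y, n_out=200):
    """Periodic cubic spline through strut centre coordinates (given as a
    closed loop with the first point repeated at the end), resampled to
    n_out points along the contour."""
    tck, _ = splprep([x, y], s=0, per=True)
    u = np.linspace(0.0, 1.0, n_out, endpoint=False)
    xs, ys = splev(u, tck)
    return np.asarray(xs), np.asarray(ys)
```

The shoelace area of the resulting closed contour gives the stent area, from which the lumen area would be subtracted to obtain the tissue coverage area.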

  14. Numerical study of time domain analogy applied to noise prediction from rotating blades

    NASA Astrophysics Data System (ADS)

    Fedala, D.; Kouidri, S.; Rey, R.

    2009-04-01

    Aeroacoustic formulations in the time domain are frequently used to model the aerodynamic sound of airfoils, the time data being more accessible. Formulation 1A developed by Farassat, an integral solution of the Ffowcs Williams and Hawkings equation, holds great interest because of its ability to handle surfaces in arbitrary motion. The aim of this work is to study the numerical sensitivity of this model to specified parameters used in the calculation. The numerical algorithms, spatial and time discretizations, and approximations used for far-field acoustic simulation are presented. An approach to quantifying the numerical errors resulting from the implementation of formulation 1A is carried out based on Isom's and Tam's test cases. A helicopter blade airfoil, as defined by Farassat to investigate Isom's case, is used in this work. According to Isom, the acoustic response of a dipole source with a constant aerodynamic load, ρ0c0², is equal to the thickness noise contribution. Discrepancies are observed when the two contributions are computed numerically. In this work, variations of these errors, which depend on the temporal resolution, Mach number, source-observer distance, and interpolation algorithm type, are investigated. The results show that the spline interpolation algorithm gives the minimum error. The analysis is then extended to Tam's test case, which has the advantage of providing an analytical solution for the first harmonic of the noise produced by a specific force distribution.

  15. More insights into early brain development through statistical analyses of eigen-structural elements of diffusion tensor imaging using multivariate adaptive regression splines

    PubMed Central

    Chen, Yasheng; Zhu, Hongtu; An, Hongyu; Armao, Diane; Shen, Dinggang; Gilmore, John H.; Lin, Weili

    2013-01-01

    The aim of this study was to characterize the maturational changes of the three eigenvalues (λ1 ≥ λ2 ≥ λ3) of diffusion tensor imaging (DTI) during early postnatal life for more insights into early brain development. In order to overcome the limitations of using presumed growth trajectories for regression analysis, we employed Multivariate Adaptive Regression Splines (MARS) to derive data-driven growth trajectories for the three eigenvalues. We further employed Generalized Estimating Equations (GEE) to carry out statistical inferences on the growth trajectories obtained with MARS. With a total of 71 longitudinal datasets acquired from 29 healthy, full-term pediatric subjects, we found that the growth velocities of the three eigenvalues were highly correlated, but significantly different from each other. This paradox suggested the existence of mechanisms coordinating the maturations of the three eigenvalues even though different physiological origins may be responsible for their temporal evolutions. Furthermore, our results revealed the limitations of using the average of λ2 and λ3 as the radial diffusivity in interpreting DTI findings during early brain development because these two eigenvalues had significantly different growth velocities even in central white matter. In addition, based upon the three eigenvalues, we have documented the growth trajectory differences between central and peripheral white matter, between anterior and posterior limbs of internal capsule, and between inferior and superior longitudinal fasciculus. Taken together, we have demonstrated that more insights into early brain maturation can be gained through analyzing eigen-structural elements of DTI. PMID:23455648

  16. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth differ in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data, and parametric models have a fixed structure before the modelling process and are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model, multivariate adaptive regression splines (MARS), and a global parametric model, an artificial neural network (ANN), to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.

  17. DEFINITION OF MULTIVARIATE GEOCHEMICAL ASSOCIATIONS WITH POLYMETALLIC MINERAL OCCURRENCES USING A SPATIALLY DEPENDENT CLUSTERING TECHNIQUE AND RASTERIZED STREAM SEDIMENT DATA - AN ALASKAN EXAMPLE.

    USGS Publications Warehouse

    Jenson, Susan K.; Trautwein, C.M.

    1984-01-01

    The application of an unsupervised, spatially dependent clustering technique (AMOEBA) to interpolated raster arrays of stream sediment data has been found to provide useful multivariate geochemical associations for modeling regional polymetallic resource potential. The technique is based on three assumptions regarding the compositional and spatial relationships of stream sediment data and their regional significance. These assumptions are: (1) compositionally separable classes exist and can be statistically distinguished; (2) the classification of multivariate data should minimize the pair probability of misclustering to establish useful compositional associations; and (3) a compositionally defined class represented by three or more contiguous cells within an array is a more important descriptor of a terrane than a class represented by spatial outliers.

  18. The extension of the parametrization of the radio source coordinates in geodetic VLBI and its impact on the time series analysis

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2017-07-01

    The radio sources within the most recent celestial reference frame (CRF) catalog ICRF2 are represented by a single, time-invariant coordinate pair. The datum sources were chosen mainly according to certain statistical properties of their position time series. Yet, such statistics are not applicable unconditionally and are also ambiguous. Ignoring systematics in the positions of the datum sources inevitably leads to a degradation of the quality of the frame and, therefore, also of derived quantities such as the Earth orientation parameters. One possible approach to overcome these deficiencies is to extend the parametrization of the source positions, similarly to what is done for the station positions. We decided to use the multivariate adaptive regression splines algorithm to parametrize the source coordinates. It allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way. The algorithm autonomously finds the ideal knot positions for the splines and, thus, the best number of polynomial pieces to fit the data. With that we can correct the ICRF2 a priori coordinates for our analysis and eliminate the systematics in the position estimates. This also allows us to introduce special handling sources into the datum definition, leading to on average 30% more sources in the datum. We find that not only the celestial pole offsets (CPO) can be improved by more than 10% due to the improved geometry, but also the station positions, especially in the early years of VLBI, can benefit greatly.

  19. Geometric and computer-aided spline hob modeling

    NASA Astrophysics Data System (ADS)

    Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, YU A.

    2018-03-01

    The paper considers the acquisition of a geometric model of a spline hob. The objective of the research is the development of a mathematical model of a spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion parameters of the machine tool system for cutting edge positioning and orientation. The computer-aided study is performed with the use of CAD on the basis of 3D modeling methods. Vector representation of cutting edge geometry is accepted as the principal method for developing the spline hob mathematical model. The paper defines the correlations described by parametric vector functions representing helical cutting edges designed for spline shaft machining, with consideration for helical movement in two dimensions. An application for acquiring the 3D model of the spline hob is developed on the basis of AutoLISP for the AutoCAD environment. The application makes it possible to use the acquired model for simulating the milling process. An example of evaluation, analytical representation and computer modeling of the proposed geometric model is reviewed. In this example, a calculation of key spline hob parameters assuring the capability of hobbing a spline shaft of standard design is performed. The polygonal and solid spline hob 3D models are acquired by the use of imitational computer modeling.

  20. Design of an essentially non-oscillatory reconstruction procedure on finite-element type meshes

    NASA Technical Reports Server (NTRS)

    Abgrall, R.

    1991-01-01

    An essentially non-oscillatory reconstruction for functions defined on finite-element type meshes was designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, we have studied the behavior of the highest coefficients of the Lagrange interpolation of functions which may admit discontinuities along locally regular curves. This enables us to choose the best stencil for the interpolation. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, because of the very nature of the mesh, the only method that may work is the so-called reconstruction-via-deconvolution method. Unfortunately, as we show, it is well suited only for regular meshes, but we also show how to overcome this difficulty. The global method has the expected order of accuracy but is conservative only up to a high-order quadrature formula. Some numerical examples are given which demonstrate the efficiency of the method.

  1. A Data-Driven, Integrated Flare Model Based on Self-Organized Criticality

    NASA Astrophysics Data System (ADS)

    Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M.

    2013-09-01

    We interpret solar flares as events originating in solar active regions that have reached the self-organized critical state, by alternatively using two versions of an "integrated flare model" - one static and one dynamic. In both versions the initial conditions are derived from observations, with the aim of investigating whether well-known scaling laws observed in the distribution functions of characteristic flare parameters are reproduced after the self-organized critical state has been reached. In the static model, we first apply a nonlinear force-free extrapolation that reconstructs the three-dimensional magnetic fields from two-dimensional vector magnetograms. We then locate magnetic discontinuities exceeding a threshold in the Laplacian of the magnetic field. These discontinuities are relaxed in local diffusion events, implemented in the form of cellular-automaton evolution rules. Subsequent loading and relaxation steps lead the system to self-organized criticality, after which the statistical properties of the simulated events are examined. In the dynamic version we deploy an enhanced driving mechanism, which utilizes the observed evolution of active regions, making use of sequential vector magnetograms. We first apply the static cellular automaton model to consecutive solar vector magnetograms until the self-organized critical state is reached. We then evolve the magnetic field in between these processed snapshots through spline interpolation, which acts as a natural driver in the dynamic model. The identification of magnetically unstable sites, as well as their relaxation, follows the same rules as in the static model after each interpolation step. Subsequent interpolation/driving and relaxation steps cover all transitions until the end of the sequence. Physical requirements, such as the divergence-free condition for the magnetic field vector, are approximately satisfied in both versions of the model.
We obtain robust power laws in the distribution functions of the modelled flaring events with scaling indices in good agreement with observations. We therefore conclude that well-known statistical properties of flares are reproduced after active regions reach self-organized criticality. The significant enhancement in both the static and the dynamic integrated flare models is that they initiate the simulation from observations, thus facilitating energy calculation in physical units. Especially in the dynamic version of the model, the driving of the system is based on observed, evolving vector magnetograms, allowing for the separation between MHD and kinetic timescales through the assignment of distinct MHD timestamps to each interpolation step.
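The spline driving between processed snapshots can be sketched as cubic-spline interpolation along the time axis of a stack of 2-D arrays; the snapshot times and field shapes below are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_snapshots(times, fields, t_new):
    """Cubic-spline interpolation in time of a sequence of 2-D field
    snapshots (e.g. one component of a vector magnetogram), evaluated
    at the intermediate driving times t_new.
    fields has shape (n_times, ny, nx)."""
    spl = CubicSpline(times, fields, axis=0)
    return spl(t_new)
```

Each interpolated field then serves as the driver state at which unstable sites are identified and relaxed, matching the interpolation/driving steps described above.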

  2. Rational-Spline Subroutines

    NASA Technical Reports Server (NTRS)

    Schiess, James R.; Kerr, Patricia A.; Smith, Olivia C.

    1988-01-01

    Smooth curves drawn among plotted data easily. Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to flexible, smooth representation of experimental data. "Tension" denotes mathematical analog of mechanical tension in spline or other mechanical curve-fitting tool, and "spline" denotes mathematical generalization of tool. Program differs from usual spline under tension in that it allows user to specify different values of tension between adjacent pairs of knots rather than constant tension over entire range of data. Subroutines use automatic adjustment scheme that varies tension parameter for each interval until maximum deviation of spline from line joining knots is less than or equal to amount specified by user. Procedure frees user from drudgery of adjusting individual tension parameters while still giving control over local behavior of spline.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, S.

    This report describes the use of several subroutines from the CORLIB core mathematical subroutine library for the solution of a model fluid flow problem. The model consists of the Euler partial differential equations. The equations are spatially discretized using the method of pseudo-characteristics. The resulting system of ordinary differential equations is then integrated using the method of lines. The stiff ordinary differential equation solver LSODE (2) from CORLIB is used to perform the time integration. The non-stiff solver ODE (4) is used to perform a related integration. The linear equation solver subroutines DECOMP and SOLVE are used to solve linear systems whose solutions are required in the calculation of the time derivatives. The monotone cubic spline interpolation subroutines PCHIM and PCHFE are used to approximate water properties. The report describes the use of each of these subroutines in detail. It illustrates the manner in which modules from a standard mathematical software library such as CORLIB can be used as building blocks in the solution of complex problems of practical interest. 9 refs., 2 figs., 4 tabs.

  4. Measurements of the optical properties of tissue in conjunction with photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Nilsson, Annika M. K.; Berg, Roger; Andersson-Engels, Stefan

    1995-07-01

    A simple optical dosimeter was used to measure the light intensity in rat liver and muscle in vivo with fibers positioned at different depths to investigate whether the light penetration changed during photodynamic therapy (PDT). The results were then correlated with measurements of the three optical-interaction coefficients μs, μa, and g for wavelengths in the range 500-800 nm for PDT-treated and nontreated rat liver and muscle tissue in vitro. A distinct increase in the absorption coefficient was seen immediately after treatment, in agreement with the decreasing light intensity observed during the treatment, as measured with the optical dosimeter. The collimated transmittance was measured with a narrow-beam setup, and an optical integrating sphere was used to measure the diffuse reflectance and total transmittance of the samples. The corresponding optical properties were obtained by spline interpolation of Monte Carlo-simulated data. To ensure that the measured values were correct, we performed calibration measurements with suspensions of polystyrene microspheres and ink.

  5. Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model

    NASA Astrophysics Data System (ADS)

    Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose

    1999-01-01

    This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, in support of early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained from triangulation of corresponding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed through a cubic B-spline interpolation technique in order to obtain a smoother representation. This methodology is being especially refined for use with medical images in the clinical evaluation of eye diseases such as open angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy.
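Smoothing the sparse range map might be sketched with a bivariate cubic B-spline fit in SciPy; this uses a smoothing-spline surface as a stand-in for the cubic B-spline technique mentioned above, and all inputs are synthetic.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def smooth_depth_map(x, y, depth, grid_x, grid_y):
    """Fit a cubic (kx=ky=3) B-spline surface to scattered depth samples
    and evaluate it on a regular grid for a smoother representation."""
    spl = SmoothBivariateSpline(x, y, depth, kx=3, ky=3)
    return spl(grid_x, grid_y)      # shape (len(grid_x), len(grid_y))
```

The dense evaluated grid plays the role of the smoothed depth surface that would feed the 3D visualization stage.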

  6. Simulation of the Effect of Internal Wave on the Acoustic Propagation

    NASA Astrophysics Data System (ADS)

    Ko, D. S.

    2005-05-01

    An acoustic radiation transport model with the Monte Carlo solution has been developed and applied to study the effect of internal wave induced random oceanic fluctuations on the deep ocean acoustic propagation. Refraction in the ocean sound channel is performed by means of bi-cubic spline interpolation of discrete deterministic ray paths in the angle(energy)-range-depth coordinates. Scattering by random internal wave fluctuations is accomplished by sampling a power law scattering kernel applying the rejection method. Results from numerical experiments show that the mean positions of acoustic rays are significantly displaced tending toward the sound channel axis due to the asymmetry of the scattering kernel. The spreading of ray depths and angles about the means depends strongly on frequency. The envelope of the ray displacement spreading is found to be proportional to the square root of range which is different from "3/2 law" found in the non-channel case. Suppression of the spreading is due to the anisotropy of fluctuations and especially due to the presence of sound channel itself.

  7. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

    We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which allows the femur, patella, and tibia to be reliably segmented by iterative adaptation of the model according to image gradients. Thin-plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are finely adjusted by automatic iterative adaptation to image data based on gray-value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83+/-6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9+/-7% as the secondary endpoint. Because cartilage is a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a hard criterion.
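
    The thin-plate spline positioning step can be sketched with SciPy's RBF interpolator, which offers a thin-plate spline kernel. The toy 2-D landmarks below are invented; the paper operates on 3-D bone and cartilage models:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks on a reference bone model and on a newly
# segmented bone (toy 2-D coordinates).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
                [0.5, 0.2], [0.4, 0.8]])
dst = src + np.array([[0.05, 0.0], [0.0, 0.1], [-0.05, 0.0],
                      [0.05, 0.05], [0.02, -0.03], [0.0, 0.04]])

# Thin-plate spline interpolant of the landmark correspondence;
# smoothing=0 makes it exact at the landmarks.
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=0.0)

# Position a deformable model by warping arbitrary model points.
model_pts = np.array([[0.5, 0.5], [0.25, 0.75]])
warped = warp(model_pts)
```

The same call works unchanged for 3-D points; only the landmark arrays gain a column.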

  8. Optimal design of geodesically stiffened composite cylindrical shells

    NASA Technical Reports Server (NTRS)

    Gendron, G.; Guerdal, Z.

    1992-01-01

    An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thickness, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schalkoff, R.J.

    This report summarizes work after 4 years of a 3-year project (a 12-month no-cost extension of the above-referenced project was granted). The fourth generation of a vision sensing head for geometric and photometric scene sensing has been built and tested. Estimation algorithms for automatic sensor calibration updating under robot motion have been developed and tested. We have modified the geometry extraction component of the rendering pipeline. Laser scanning now produces highly accurate points on segmented curves. These point-curves are input to a NURBS (non-uniform rational B-spline) skinning procedure to produce interpolating surface segments. The NURBS formulation includes quadrics as a sub-class; thus this formulation allows much greater flexibility without the attendant instability of generating an entire quadric surface. We have also implemented correction for diffuse lighting and specular effects. The QRobot joint level control was extended to a complete semi-autonomous robot control system for D and D operations. The imaging and VR subsystems have been integrated and tested.
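
    The remark that the NURBS formulation includes quadrics as a sub-class can be illustrated by the classic exact NURBS circle (nine control points, quadratic basis). SciPy has no rational spline type, so the rational curve is evaluated here as a ratio of two ordinary B-splines:

```python
import numpy as np
from scipy.interpolate import BSpline

# Quadratic NURBS circle: 9 control points at square corners and edge
# midpoints, weights alternating between 1 and sqrt(2)/2.
s = np.sqrt(2.0) / 2.0
P = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0],
              [-1, -1], [0, -1], [1, -1], [1, 0]], dtype=float)
w = np.array([1, s, 1, s, 1, s, 1, s, 1])
t = np.array([0, 0, 0, .25, .25, .5, .5, .75, .75, 1, 1, 1])  # knot vector
k = 2

num = BSpline(t, P * w[:, None], k)  # weighted control points
den = BSpline(t, w, k)               # weight function

u = np.linspace(0.0, 0.999, 400)
pts = num(u) / den(u)[:, None]       # rational evaluation
radius = np.linalg.norm(pts, axis=1)
```

Every evaluated point lies on the unit circle to machine precision, something no polynomial B-spline can achieve exactly.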

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afzal, Muhammad U., E-mail: muhammad.afzal@mq.edu.au; Esselle, Karu P.

    This paper presents a quasi-analytical technique to design continuous, all-dielectric phase-correcting structures (PCSs) for circularly polarized Fabry-Perot resonator antennas (FPRAs). The PCS is realized by varying the thickness of a rotationally symmetric dielectric block placed above the antenna. A global analytical expression is derived for the PCS thickness profile, which is required to achieve a nearly uniform phase distribution at the output of the PCS despite the non-uniform phase distribution at its input. An alternative piecewise technique based on spline interpolation is also explored to design a PCS. It is shown from both far- and near-field results that a PCS tremendously improves the radiation performance of the FPRA. These improvements include an increase in peak directivity from 22 to 120 (from 13.4 dBic to 20.8 dBic) and a decrease of the 3 dB beamwidth from 41.5° to 15°. The phase-corrected antenna also has a good directivity bandwidth of 1.3 GHz, which is 11% of the center frequency.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zainudin, Mohd Lutfi, E-mail: mdlutfi07@gmail.com; Institut Matematik Kejuruteraan; Saaban, Azizan, E-mail: azizan.s@uum.edu.my

    Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records all the dispersed radiation values, and these data are very useful for experimental work and for solar device development. In addition, complete data observation is needed for modeling and designing solar radiation system applications. Unfortunately, incomplete solar radiation data frequently occur due to several technical problems, mainly contributed by the monitoring device. To counter this, missing values are estimated in an effort to substitute absent values with imputed data. This paper aims to evaluate several piecewise interpolation techniques, such as linear, spline, cubic, and nearest-neighbor interpolation, for dealing with missing values in hourly solar radiation data. It then proposes extended work investigating the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimator tools. As a result, the cubic Bezier and Said-Ball methods perform best compared to the other piecewise imputation techniques.

  12. Automated peroperative assessment of stents apposition from OCT pullbacks.

    PubMed

    Dubuisson, Florian; Péry, Emilie; Ouchchane, Lemlih; Combaret, Nicolas; Kauffmann, Claude; Souteyrand, Géraud; Motreff, Pascal; Sarry, Laurent

    2015-04-01

    This study's aim was to assess stent apposition by automatically analyzing endovascular optical coherence tomography (OCT) sequences. The lumen is detected using threshold, morphological, and gradient operators to run a Dijkstra algorithm. Wrong detections, tagged by the user and caused by bifurcations, strut presence, thrombotic lesions, or dissections, can be corrected using a morphing algorithm. Struts are also segmented by computing symmetrical and morphological operators. The Euclidean distance between detected struts and the artery wall initializes a complete distance map of the stent, and missing data are interpolated with thin-plate spline functions. Accuracy is further optimized by rejecting detected outliers, regularizing parameters through generalized cross-validation, and exploiting the one-sided cyclic property of the map. Several indices computed from the map provide quantitative values of malapposition. The algorithm was run on four in-vivo OCT sequences including different cases of incomplete stent apposition. Comparison with manual expert measurements validates the segmentation's accuracy and shows an almost perfect concordance of automated results. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center

    PubMed Central

    Dou, Chao

    2016-01-01

    The storage volume of an internet data center is a classic time series, and predicting it is very valuable for the business. However, the storage volume series from a data center is always “dirty”: it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series for future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the “dirty” data; then cubic spline interpolation and an averaging method are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and makes a great contribution to predicting future volume values. 
 PMID:28090205
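
    The filter-then-interpolate pipeline can be sketched on toy 1-D data. A constant-velocity Kalman state model and synthetic irregular samples are assumed here; the paper does not specify its filter model:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)

# Irregularly sampled, noisy storage-volume-like series (synthetic).
t = np.sort(rng.uniform(0.0, 10.0, 80))
true = 1.0 + 0.5 * t
obs = true + rng.normal(0.0, 0.2, t.size)

# Constant-velocity Kalman filter (state: level and slope).
q, r = 0.01, 0.04                  # process / measurement noise variances
x = np.array([obs[0], 0.0])
P = np.diag([1.0, 1.0])
H = np.array([1.0, 0.0])           # we observe the level only
filtered = np.empty(t.size)
filtered[0] = x[0]
for i in range(1, t.size):
    dt = t[i] - t[i - 1]
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = F @ x                      # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H + r              # innovation variance
    K = P @ H / S                  # Kalman gain
    x = x + K * (obs[i] - H @ x)   # update with the new observation
    P = P - np.outer(K, H @ P)
    filtered[i] = x[0]

# Reconstruct the main trend on a regular grid with a cubic spline.
trend = CubicSpline(t, filtered)(np.linspace(t[0], t[-1], 200))
```

The filtered series is noticeably closer to the underlying trend than the raw observations, and the spline resamples it onto the regular grid a forecasting model would expect.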

  14. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center.

    PubMed

    Miao, Beibei; Dou, Chao; Jin, Xuebo

    2016-01-01

    The storage volume of an internet data center is a classic time series, and predicting it is very valuable for the business. However, the storage volume series from a data center is always "dirty": it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series for future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; then cubic spline interpolation and an averaging method are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and makes a great contribution to predicting future volume values. 

  15. Boosted Multivariate Trees for Longitudinal Data

    PubMed Central

    Pande, Amol; Li, Liang; Rajeswaran, Jeevanantham; Ehrlinger, John; Kogalur, Udaya B.; Blackstone, Eugene H.; Ishwaran, Hemant

    2017-01-01

    Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically utilizing P-splines with an estimated smoothing parameter. In order to avoid overfitting, we describe a relatively simple in-sample cross-validation method which can be used to estimate the optimal boosting iteration and which has the surprising added benefit of stabilizing certain parameter estimates. Our new multivariate tree boosting method is shown to be highly flexible, robust to covariance misspecification and unbalanced designs, and resistant to overfitting in high dimensions. Feature selection can be used to identify important features and feature-time interactions. An application to longitudinal data of forced 1-second lung expiratory volume (FEV1) for lung transplant patients identifies an important feature-time interaction and illustrates the ease with which our method can find complex relationships in longitudinal data. PMID:29249866
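
    The P-spline idea — a rich B-spline basis penalized by differences of adjacent coefficients — can be sketched as a penalized least-squares fit. The knots, penalty weight, and data below are illustrative; the boosting machinery of the paper is not reproduced:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
x = np.linspace(0.001, 0.999, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

# Cubic B-spline design matrix on equally spaced knots.
k = 3
t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 12), [1.0] * k]
B = BSpline.design_matrix(x, t, k).toarray()        # shape (200, 14)

# Second-order difference penalty on adjacent spline coefficients.
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
lam = 1.0                                           # smoothing parameter
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef

rmse_fit = np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))
rmse_obs = np.sqrt(np.mean((y - np.sin(2 * np.pi * x)) ** 2))
```

In practice lam is not fixed but estimated, e.g. by cross-validation, which is the "estimated smoothing parameter" the abstract refers to.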

  16. An algorithm for surface smoothing with rational splines

    NASA Technical Reports Server (NTRS)

    Schiess, James R.

    1987-01-01

    Discussed is an algorithm for smoothing surfaces with spline functions containing tension parameters. The bivariate spline functions used are tensor products of univariate rational-spline functions. A distinct tension parameter corresponds to each rectangular strip defined by a pair of consecutive spline knots along either axis. Equations are derived for writing the bivariate rational spline in terms of functions and derivatives at the knots. Estimates of these values are obtained via weighted least squares subject to continuity constraints at the knots. The algorithm is illustrated on a set of terrain elevation data.
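
    Rational splines with tension parameters are not available in standard libraries, but the underlying smoothing-by-least-squares idea can be sketched with SciPy's smoothing bicubic spline on noisy gridded "terrain" data (the smoothing factor below is chosen ad hoc from the noise level):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 20)
y = np.linspace(0.0, 1.0, 20)
X, Y = np.meshgrid(x, y, indexing="ij")
true = np.cos(np.pi * X) + np.sin(np.pi * Y)        # synthetic terrain
noisy = true + rng.normal(0.0, 0.1, true.shape)

# The smoothing factor s bounds the residual sum of squares;
# a common heuristic is s ~ (number of points) * sigma**2.
spl = RectBivariateSpline(x, y, noisy, kx=3, ky=3, s=400 * 0.1**2)
smoothed = spl(x, y)

rmse_noisy = np.sqrt(np.mean((noisy - true) ** 2))
rmse_smooth = np.sqrt(np.mean((smoothed - true) ** 2))
```

The tension parameters of the paper's rational splines play a role analogous to s here: they trade fidelity to the data against flatness of the surface, but locally per strip rather than globally.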

  17. Numerical Methods Using B-Splines

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Merriam, Marshal (Technical Monitor)

    1997-01-01

    The seminar will discuss (1) the current range of applications for which B-spline schemes may be appropriate; (2) the property of high resolution and the relationship between B-spline and compact schemes; (3) a comparison between finite-element, Hermite finite-element, and B-spline schemes; (4) mesh embedding using B-splines; and (5) a method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.
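
    As a generic illustration of working with B-spline representations (the seminar abstract itself contains no code), a cubic B-spline interpolant can be differentiated exactly in its spline form, which is one reason such schemes are attractive for PDE discretizations:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Cubic B-spline interpolant of sin(x) on 30 nodes.
x = np.linspace(0.0, 2.0 * np.pi, 30)
spl = make_interp_spline(x, np.sin(x), k=3)

# Differentiating a B-spline is exact: it yields a degree-2 spline
# whose coefficients are finite differences of the original ones.
dspl = spl.derivative()

# Compare against the analytic derivative away from the boundaries.
xq = np.linspace(0.5, 2.0 * np.pi - 0.5, 100)
deriv_err = np.max(np.abs(dspl(xq) - np.cos(xq)))
```

The derivative error decays like the cube of the node spacing, illustrating the high-resolution property the seminar contrasts with compact finite-difference schemes.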

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    In this project, we have developed techniques for visualizing large-scale time-varying multivariate particle and field data produced by the GPS_TTBP team. Our basic approach to particle data visualization is to provide the user with an intuitive interactive interface for exploring the data. We have designed a multivariate filtering interface for scientists to effortlessly isolate those particles of interest for revealing structures in densely packed particles as well as the temporal behaviors of selected particles. With such a visualization system, scientists on the GPS-TTBP project can validate known relationships and temporal trends, and possibly gain new insights in their simulations. We have tested the system using several million particles on a single PC. We will also need to address the scalability of the system to handle billions of particles using a cluster of PCs. To visualize the field data, we choose to use direct volume rendering. Because the data provided by PPPL is on a curvilinear mesh, several processing steps have to be taken. The mesh is curvilinear in nature, following the shape of a deformed torus. Additionally, in order to properly interpolate between the given slices we cannot use simple linear interpolation in Cartesian space but instead have to interpolate along the magnetic field lines given to us by the scientists. With these limitations, building a system that can provide an accurate visualization of the dataset is quite a challenge to overcome. In the end we use a combination of deformation methods, such as deformation textures, in order to fit a normal torus into their deformed torus, allowing us to store the data in toroidal coordinates and take advantage of modern GPUs to perform the interpolation along the field lines for us. The resulting new rendering capability produces visualizations at a quality and detail level previously not available to the scientists at the PPPL. 
In summary, in this project we have successfully created new capabilities for the scientists to visualize their 3D data at higher accuracy and quality, enhancing their ability to evaluate the simulations and understand the modeled phenomena.

  19. Age- and sex-specific thorax finite element model development and simulation.

    PubMed

    Schoell, Samantha L; Weaver, Ashley A; Vavalle, Nicholas A; Stitzel, Joel D

    2015-01-01

    The shape, size, bone density, and cortical thickness of the thoracic skeleton vary significantly with age and sex, which can affect the injury tolerance, especially in at-risk populations such as the elderly. Computational modeling has emerged as a powerful and versatile tool to assess injury risk. However, current computational models only represent certain ages and sexes in the population. The purpose of this study was to morph an existing finite element (FE) model of the thorax to depict thorax morphology for males and females aged 30 and 70 years old (YO) and to investigate the effect on injury risk. Age- and sex-specific FE models were developed using thin-plate spline interpolation. In order to execute the thin-plate spline interpolation, homologous landmarks on the reference, target, and FE model are required. An image segmentation and registration algorithm was used to collect homologous rib and sternum landmark data from males and females aged 0-100 years. The Generalized Procrustes Analysis was applied to the homologous landmark data to quantify age- and sex-specific isolated shape changes in the thorax. The Global Human Body Models Consortium (GHBMC) 50th percentile male occupant model was morphed to create age- and sex-specific thoracic shape change models (scaled to a 50th percentile male size). To evaluate the thoracic response, 2 loading cases (frontal hub impact and lateral impact) were simulated to assess the importance of geometric and material property changes with age and sex. Due to the geometric and material property changes with age and sex, differences were observed in the response of the thorax in both the frontal and lateral impacts. Material property changes alone had little to no effect on the maximum thoracic force or the maximum percent compression. With age, the thorax becomes stiffer due to superior rotation of the ribs, which can result in increased bone strain that can increase the risk of fracture. 
For the 70-YO models, the simulations predicted a higher number of rib fractures in comparison to the 30-YO models. The male models experienced more superior rotation of the ribs in comparison to the female models, which resulted in a higher number of rib fractures for the males. In this study, age- and sex-specific thoracic models were developed and the biomechanical response was studied using frontal and lateral impact simulations. The development of these age- and sex-specific FE models of the thorax will lead to an improved understanding of the complex relationship between thoracic geometry, age, sex, and injury risk.

  20. Development and application of a standardized flow measurement uncertainty analysis framework to various low-head short-converging intake types across the United States federal hydropower fleet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Brennan T

    2015-01-01

    Turbine discharges at low-head short converging intakes are difficult to measure accurately. The proximity of the measurement section to the intake entrance admits large uncertainties related to asymmetry of the velocity profile, swirl, and turbulence. Existing turbine performance codes [10, 24] do not address this special case, and published literature is largely silent on rigorous evaluation of uncertainties associated with this measurement context. The American Society of Mechanical Engineers (ASME) Committee investigated the use of acoustic transit time (ATT), acoustic scintillation (AS), and current meter (CM) measurements in a short converging intake at the Kootenay Canal Generating Station in 2009. Based on their findings, a standardized uncertainty analysis (UA) framework for the velocity-area method (specifically for CM measurements) is presented in this paper, given that CM is still the most fundamental and common type of measurement system. Typical sources of systematic and random errors associated with CM measurements are investigated, and the major sources of uncertainty associated with turbulence and velocity fluctuations, the numerical velocity integration technique (bi-cubic spline), and the number and placement of current meters are considered for evaluation. Since the velocity measurements in a short converging intake are associated with complex nonlinear and time-varying uncertainties (e.g., Reynolds stress in fluid dynamics), simply applying the law of propagation of uncertainty is known to overestimate the measurement variance while the Monte Carlo method does not. Therefore, a pseudo-Monte Carlo simulation method (random flow generation technique [8]), which was initially developed for the purpose of establishing upstream or initial conditions in Large-Eddy Simulation (LES) and Direct Numerical Simulation (DNS), is used to statistically determine uncertainties associated with turbulence and velocity fluctuations. 
This technique is then combined with a bi-cubic spline interpolation method which converts point velocities into a continuous velocity distribution over the measurement domain. Subsequently, the number and placement of current meters are simulated to investigate the accuracy of the estimated flow rates using the numerical velocity-area integration method outlined in ISO 3354 [12]. The authors herein consider that statistics on generated flow rates processed with bi-cubic interpolation and sensor simulations are the combined uncertainties, which already account for the effects of all three uncertainty sources. A preliminary analysis based on the current meter data obtained through an upgrade acceptance test of a single unit located in a mainstem plant has been presented.
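
    The spline-based velocity-area integration can be sketched as follows: point velocities at simulated current-meter positions are fitted with a bicubic spline, which is then integrated analytically over the section. The parabolic velocity profile below is synthetic, with known discharge; this is an illustration of the idea, not the ISO 3354 procedure itself:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Current-meter positions on a 7x7 grid over a unit measurement section.
x = np.linspace(0.0, 1.0, 7)
y = np.linspace(0.0, 1.0, 7)
X, Y = np.meshgrid(x, y, indexing="ij")

# Synthetic axial velocity: zero at the walls, peak at mid-section.
# Analytic discharge: 36 * (1/6) * (1/6) = 1.0.
u = 36.0 * X * (1 - X) * Y * (1 - Y)

# Bicubic spline through the point velocities, then exact integration
# of the spline over the section gives the velocity-area discharge.
spl = RectBivariateSpline(x, y, u, kx=3, ky=3)
Q = spl.integral(0.0, 1.0, 0.0, 1.0)
```

In an uncertainty study, this integration would be repeated over many pseudo-Monte Carlo realizations of the point velocities to build up a distribution of discharge estimates.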

  1. Groundwater potential mapping using C5.0, random forest, and multivariate adaptive regression spline models in GIS.

    PubMed

    Golkarian, Ali; Naghibi, Seyed Amir; Kalantar, Bahareh; Pradhan, Biswajeet

    2018-02-17

    Ever increasing demand for water resources for different purposes makes it essential to have better understanding and knowledge about water resources. As known, groundwater resources are one of the main water resources, especially in countries with arid climatic conditions. Thus, this study seeks to provide groundwater potential maps (GPMs) employing new algorithms. Accordingly, this study aims to validate the performance of C5.0, random forest (RF), and multivariate adaptive regression splines (MARS) algorithms for generating GPMs in the eastern part of Mashhad Plain, Iran. For this purpose, a dataset was produced consisting of spring locations as indicator and groundwater-conditioning factors (GCFs) as input. In this research, 13 GCFs were selected, including altitude, slope aspect, slope angle, plan curvature, profile curvature, topographic wetness index (TWI), slope length, distance from rivers and faults, river and fault density, land use, and lithology. The mentioned dataset was divided into two classes of training and validation with 70 and 30% of the springs, respectively. Then, C5.0, RF, and MARS algorithms were employed using R statistical software, and the final values were transformed into GPMs. Finally, two evaluation criteria including Kappa and area under the receiver operating characteristics curve (AUC-ROC) were calculated. According to the findings of this research, MARS had the best performance with an AUC-ROC of 84.2%, followed by the RF and C5.0 algorithms with AUC-ROC values of 79.7 and 77.3%, respectively. The results indicated that AUC-ROC values for the employed models are more than 70%, which shows their acceptable performance. In conclusion, the produced methodology could be used in other geographical areas. GPMs could be used by water resource managers and related organizations to accelerate and facilitate water resource exploitation.
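
    MARS builds its fits from hinge functions max(0, x − t). A stripped-down illustration with a single fixed knot shows the mechanism; real MARS (e.g. the R `earth` package) searches knot locations and interactions adaptively:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, 200)
# Piecewise-linear target: the slope changes at x = 0.5.
y = 1.0 + 2.0 * np.maximum(0.0, x - 0.5) + rng.normal(0.0, 0.05, x.size)

# MARS-style basis: intercept, x, and one hinge function at a fixed knot.
knot = 0.5
B = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - knot)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)

pred = B @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

The hinge coefficient recovers the slope change at the knot, which is exactly the kind of threshold behavior MARS exploits when relating conditioning factors to groundwater potential.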

  2. Transport modeling and multivariate adaptive regression splines for evaluating performance of ASR systems in freshwater aquifers

    NASA Astrophysics Data System (ADS)

    Forghani, Ali; Peralta, Richard C.

    2017-10-01

    The study presents a procedure using solute transport and statistical models to evaluate the performance of aquifer storage and recovery (ASR) systems designed to earn additional water rights in freshwater aquifers. The recovery effectiveness (REN) index quantifies the performance of these ASR systems. REN is the proportion of the injected water that the same ASR well can recapture during subsequent extraction periods. To estimate REN for individual ASR wells, the presented procedure uses finely discretized groundwater flow and contaminant transport modeling. Then, the procedure uses multivariate adaptive regression splines (MARS) analysis to identify the significant variables affecting REN, and to identify the most recovery-effective wells. The operator of the studied 14-well ASR system desires REN values close to 100%. This recovery is feasible for most of the ASR wells by extracting three times the injectate volume during the same year as injection. Most of the wells would achieve RENs below 75% if extracting merely the same volume as they injected. In other words, recovering almost all the same water molecules that are injected requires having a pre-existing water right to extract groundwater annually. MARS shows that REN most significantly correlates with groundwater flow velocity, or hydraulic conductivity and hydraulic gradient. MARS results also demonstrate that maximizing REN requires utilizing the wells located in areas with background Darcian groundwater velocities less than 0.03 m/d. The study also highlights the superiority of MARS over regular multiple linear regressions to identify the wells that can provide the maximum REN. This is the first reported application of MARS for evaluating performance of an ASR system in freshwater aquifers.

  3. Quantitative survival impact of composite treatment delays in head and neck cancer.

    PubMed

    Ho, Allen S; Kim, Sungjin; Tighiouart, Mourad; Mita, Alain; Scher, Kevin S; Epstein, Joel B; Laury, Anna; Prasad, Ravi; Ali, Nabilah; Patio, Chrysanta; St-Clair, Jon Mallen; Zumsteg, Zachary S

    2018-05-09

    Multidisciplinary management of head and neck cancer (HNC) must reconcile increasingly sophisticated subspecialty care with timeliness of care. Prior studies examined the individual effects of delays in diagnosis-to-treatment interval, postoperative interval, and radiation interval but did not consider them collectively. The objective of the current study was to investigate the combined impact of these interwoven intervals on patients with HNC. Patients with HNC who underwent curative-intent surgery with radiation were identified in the National Cancer Database between 2004 and 2013. Multivariable models were constructed using restricted cubic splines to determine nonlinear relations with overall survival. Overall, 15,064 patients were evaluated. After adjustment for covariates, only prolonged postoperative interval (P < .001) and radiation interval (P < .001) independently predicted for worse outcomes, whereas the association of diagnosis-to-treatment interval with survival disappeared. By using multivariable restricted cubic spline functions, increasing postoperative interval did not affect mortality until 40 days after surgery, and each day of delay beyond this increased the risk of mortality until 70 days after surgery (hazard ratio, 1.14; 95% confidence interval, 1.01-1.28; P = .029). For radiation interval, mortality escalated continuously with each additional day of delay, plateauing at 55 days (hazard ratio, 1.25; 95% confidence interval, 1.11-1.41; P < .001). Delays beyond these change points were not associated with further survival decrements. Increasing delays in postoperative and radiation intervals are associated independently with an escalating risk of mortality that plateaus beyond certain thresholds. Delays in initiating therapy, conversely, are eclipsed in importance when appraised in conjunction with the entire treatment course. 
Such findings may redirect focus to streamlining those intervals that are most sensitive to delays when considering survival burden. Cancer 2018. © 2018 American Cancer Society.
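
    The restricted cubic splines used in the multivariable models are constrained to be linear beyond the boundary knots, which is what lets the fitted hazard plateau past the change points. The standard basis construction below (a generic sketch with arbitrary knots, not the study's fitted model) makes that property explicit:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis [x, s_1(x), ..., s_{k-2}(x)]:
    cubic between the knots, linear beyond the first and last knots."""
    k = len(knots)
    t1, tk1, tk = knots[0], knots[-2], knots[-1]
    cols = [x]
    for j in range(k - 2):
        tj = knots[j]
        # The cubic and quadratic terms cancel for x beyond the last knot.
        term = (np.maximum(x - tj, 0.0) ** 3
                - np.maximum(x - tk1, 0.0) ** 3 * (tk - tj) / (tk - tk1)
                + np.maximum(x - tk, 0.0) ** 3 * (tk1 - tj) / (tk - tk1))
        cols.append(term / (tk - t1) ** 2)
    return np.column_stack(cols)

knots = np.array([10.0, 25.0, 40.0, 55.0, 70.0])   # e.g. days of delay
x = np.linspace(75.0, 120.0, 50)                   # beyond the last knot
B = rcs_basis(x, knots)

# Beyond the boundary knot every basis column is linear, so a fitted
# log-hazard cannot keep curving upward there.
second_diff = np.diff(B, n=2, axis=0)
```

With 5 knots this yields 4 regression columns; the linear tails are why delays past the last knot add no further modeled survival decrement.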

  4. Comprehensive modeling of monthly mean soil temperature using multivariate adaptive regression splines and support vector machine

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2017-07-01

    Soil temperature (Ts) and its thermal regime are the most important factors in plant growth, biological activities, and water movement in soil. Due to scarcity of Ts data, estimation of soil temperature is an important issue in different fields of science. The main objective of the present study is to investigate the accuracy of multivariate adaptive regression splines (MARS) and support vector machine (SVM) methods for estimating Ts. For this aim, the monthly mean data of Ts (at depths of 5, 10, 50, and 100 cm) and meteorological parameters of 30 synoptic stations in Iran were utilized. To develop the MARS and SVM models, various combinations of minimum, maximum, and mean air temperatures (Tmin, Tmax, T); actual and maximum possible sunshine duration and sunshine duration ratio (n, N, n/N); actual, net, and extraterrestrial solar radiation data (Rs, Rn, Ra); precipitation (P); relative humidity (RH); wind speed at 2 m height (u2); and water vapor pressure (Vp) were used as input variables. Three error statistics, including root-mean-square error (RMSE), mean absolute error (MAE), and determination coefficient (R2), were used to check the performance of the MARS and SVM models. The results indicated that the MARS was superior to the SVM at different depths. In the test and validation phases, the most accurate estimations for the MARS were obtained at the depth of 10 cm for Tmax, Tmin, T inputs (RMSE = 0.71 °C, MAE = 0.54 °C, and R2 = 0.995) and for RH, Vp, P, and u2 inputs (RMSE = 0.80 °C, MAE = 0.61 °C, and R2 = 0.996), respectively.

  5. Estimation of soil cation exchange capacity using Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS)

    NASA Astrophysics Data System (ADS)

    Emamgolizadeh, S.; Bateni, S. M.; Shahsavani, D.; Ashrafi, T.; Ghorbani, H.

    2015-10-01

    The soil cation exchange capacity (CEC) is one of the main soil chemical properties, which is required in various fields such as environmental and agricultural engineering as well as soil science. In situ measurement of CEC is time consuming and costly. Hence, numerous studies have used traditional regression-based techniques to estimate CEC from more easily measurable soil parameters (e.g., soil texture, organic matter (OM), and pH). However, these models may not be able to adequately capture the complex and highly nonlinear relationship between CEC and its influential soil variables. In this study, Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS) were employed to estimate CEC from more readily measurable soil physical and chemical variables (e.g., OM, clay, and pH) by developing functional relations. The GEP- and MARS-based functional relations were tested at two field sites in Iran. Results showed that GEP and MARS can provide reliable estimates of CEC. Also, it was found that the MARS model (with root-mean-square-error (RMSE) of 0.318 Cmol+ kg-1 and correlation coefficient (R2) of 0.864) generated slightly better results than the GEP model (with RMSE of 0.270 Cmol+ kg-1 and R2 of 0.807). The performance of the GEP and MARS models was compared with two existing approaches, namely artificial neural network (ANN) and multiple linear regression (MLR). The comparison indicated that MARS and GEP outperformed the MLR model, but they did not perform as well as ANN. Finally, a sensitivity analysis was conducted to determine the most and the least influential variables affecting CEC. It was found that OM and pH have the most and least significant effect on CEC, respectively.

  6. Spline screw payload fastening system

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. 
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.

  7. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. The strong-ground motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
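
    The coarse-to-fine interpolation step described above can be sketched with a bicubic spline on a regular grid. The grid dimensions and the smooth test function standing in for a Green's-function component are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Green's-function values are computed on a coarse grid of points on the
# fault plane and spline-interpolated to a fine grid.  A smooth 2-D test
# function stands in for one frequency component of a Green's function.
coarse_x = np.linspace(0.0, 10.0, 11)    # along-strike (km), illustrative
coarse_y = np.linspace(0.0, 5.0, 6)      # down-dip (km), illustrative
gx, gy = np.meshgrid(coarse_x, coarse_y, indexing="ij")
g_coarse = np.sin(0.5 * gx) * np.cos(0.7 * gy)

spl = RectBivariateSpline(coarse_x, coarse_y, g_coarse, kx=3, ky=3)

fine_x = np.linspace(0.0, 10.0, 101)
fine_y = np.linspace(0.0, 5.0, 51)
g_fine = spl(fine_x, fine_y)             # interpolated values on fine grid

fx, fy = np.meshgrid(fine_x, fine_y, indexing="ij")
err = float(np.max(np.abs(g_fine - np.sin(0.5 * fx) * np.cos(0.7 * fy))))
```

For smooth fields the cubic-spline error is tiny compared with computing the wavefield directly at every fine-grid point, which is the speed-up the abstract refers to.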

  8. Recovery of sparse translation-invariant signals with continuous basis pursuit

    PubMed Central

    Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero

    2013-01-01

    We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality. PMID:24352562
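
    The first-order Taylor approximation underlying Taylor CBP can be checked directly: a continuously shifted feature is represented by the unshifted feature plus its derivative scaled by the shift, so the dictionary only needs f and f' at discrete positions. The Gaussian feature and shift size below are illustrative assumptions, not the paper's waveforms.

```python
import numpy as np

# Sketch of the Taylor-CBP building block: f(t - d) ~ f(t) - d * f'(t)
# for a small continuous shift d, versus plain BP, which can only use
# the nearest unshifted copy.
def f(x):
    return np.exp(-x ** 2)               # illustrative elementary feature

t = np.linspace(-5.0, 5.0, 2001)
delta = 0.1                              # sub-grid shift to represent

shifted = f(t - delta)                   # exact continuously shifted copy
fprime = np.gradient(f(t), t)            # numerical derivative of f
taylor = f(t) - delta * fprime           # first-order Taylor surrogate

err_taylor = float(np.max(np.abs(taylor - shifted)))
err_nearest = float(np.max(np.abs(f(t) - shifted)))   # BP with no shift term
```

The Taylor surrogate's error scales with delta squared, while the plain-dictionary error scales with delta, which is why CBP tolerates much coarser dictionary sampling.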

  9. Optimal estimation of suspended-sediment concentrations in streams

    USGS Publications Warehouse

    Holtschlag, D.J.

    2001-01-01

    Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
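
    A minimal Kalman-filter sketch of the on-line estimator is given below. It uses a plain random-walk state observed only every k-th day; the variances and sampling interval are assumed illustrative values, not the paper's GLS-fitted parameters, and the seasonal and streamflow terms are omitted.

```python
import numpy as np

# On-line estimation sketch: a random-walk state (e.g. log concentration)
# observed with noise on sampled days only, filtered with a scalar Kalman
# filter, compared against simply holding the last observation.
rng = np.random.default_rng(1)
n, k = 1000, 5                 # days, sampling interval (assumed)
q, r = 0.01, 0.25              # process and measurement variances (assumed)

truth = np.cumsum(rng.normal(0.0, np.sqrt(q), n))
obs = truth + rng.normal(0.0, np.sqrt(r), n)

x, p = 0.0, 1.0                # state estimate and its variance
est = np.empty(n)
for t in range(n):
    p += q                     # predict: random-walk state
    if t % k == 0:             # update only on sampled days
        gain = p / (p + r)
        x += gain * (obs[t] - x)
        p *= 1.0 - gain
    est[t] = x

rmse_filter = float(np.sqrt(np.mean((est - truth) ** 2)))
rmse_hold = float(np.sqrt(np.mean((np.repeat(obs[::k], k)[:n] - truth) ** 2)))
```

The filter averages information across samples instead of trusting each noisy measurement outright, which is the mechanism behind the error reductions reported above; the off-line smoother additionally uses future samples.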

  10. A robust interpolation method for constructing digital elevation models from remote sensing data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Liu, Fengying; Li, Yanyan; Yan, Changqing; Liu, Guolin

    2016-09-01

    A digital elevation model (DEM) derived from remote sensing data often suffers from outliers due to various reasons such as the physical limitations of sensors and low contrast of terrain textures. In order to reduce the effect of outliers on DEM construction, a robust algorithm of multiquadric (MQ) methodology based on M-estimators (MQ-M) was proposed. MQ-M adopts an adaptive three-part weight function: the weight is null for large errors, one for small errors and quadric for the others. A mathematical surface was employed to comparatively analyze the robustness of MQ-M, and its performance was compared with those of the classical MQ and a recently developed robust MQ method based on least absolute deviation (MQ-L). Numerical tests show that MQ-M is comparable to the classical MQ and superior to MQ-L when sample points follow normal and Laplace distributions, and in the presence of outliers the former is more accurate than the latter. A real-world example of DEM construction using stereo images indicates that compared with classical interpolation methods, such as natural neighbor (NN), ordinary kriging (OK), ANUDEM, MQ-L and MQ, MQ-M has a better ability to preserve subtle terrain features. MQ-M was also substituted for the thin plate spline (TPS) in reference-DEM construction to assess its contribution to our recently developed multiresolution hierarchical classification method (MHC). Classifying the 15 groups of benchmark datasets provided by the ISPRS Commission demonstrates that MQ-M-based MHC is more accurate than the MQ-L-based and TPS-based MHCs. MQ-M has high potential for DEM construction.
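
    The M-estimator reweighting that makes an MQ fit robust can be sketched in one dimension: a multiquadric least-squares fit is iterated with a three-part weight (one for small residuals, zero for large ones, a quadratic taper in between). The 1-D test "terrain", the centre spacing, the shape parameter c and the thresholds a, b are all illustrative assumptions, not the paper's algorithm details.

```python
import numpy as np

# Robust multiquadric sketch in the spirit of MQ-M: iteratively
# reweighted least squares with a three-part weight function.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 80)
z = np.sin(x) + 0.05 * rng.normal(size=x.size)
z[10] += 5.0                       # inject two gross outliers
z[55] -= 4.0

c = 0.5                            # MQ shape parameter (assumed)
centers = x[::4]                   # fewer centres than data -> smoothing fit
A = np.sqrt((x[:, None] - centers[None, :]) ** 2 + c ** 2)

def weights(res, a=0.15, b=1.0):
    """Three-part weight: 1 (small residual), quadratic taper, 0 (large)."""
    r = np.abs(res)
    w = np.where(r <= a, 1.0, ((b - r) / (b - a)) ** 2)
    return np.where(r > b, 0.0, w)

w = np.ones_like(z)
for _ in range(10):                # IRLS iterations
    sw = np.sqrt(w)
    lam, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
    w = weights(A @ lam - z)

fit = A @ lam
clean = np.delete(np.arange(x.size), [10, 55])
rmse_clean = float(np.sqrt(np.mean((fit[clean] - np.sin(x[clean])) ** 2)))
```

After a few reweighting passes the outliers receive zero weight, so the surface follows the clean samples rather than being dragged toward the spikes.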

  11. Quantum effects and anharmonicity in the H2-Li+-benzene complex: A model for hydrogen storage materials

    NASA Astrophysics Data System (ADS)

    Kolmann, Stephen J.; D'Arcy, Jordan H.; Jordan, Meredith J. T.

    2013-12-01

    Quantum and anharmonic effects are investigated in H2-Li+-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials. Three- and 8-dimensional quantum diffusion Monte Carlo (QDMC) and rigid-body diffusion Monte Carlo (RBDMC) simulations are performed on potential energy surfaces interpolated from electronic structure calculations at the M05-2X/6-31+G(d,p) and M05-2X/6-311+G(2df,p) levels of theory using a three-dimensional spline or a modified Shepard interpolation. These calculations investigate the intermolecular interactions in this system, with three- and 8-dimensional 0 K H2 binding enthalpy estimates, ΔHbind (0 K), being 16.5 kJ mol-1 and 12.4 kJ mol-1, respectively: 0.1 and 0.6 kJ mol-1 higher than harmonic values. Zero-point energy effects are 35% of the value of ΔHbind (0 K) at M05-2X/6-311+G(2df,p) and cannot be neglected; uncorrected electronic binding energies overestimate ΔHbind (0 K) by at least 6 kJ mol-1. Harmonic intermolecular binding enthalpies can be corrected by treating the H2 "helicopter" and "ferris wheel" rotations as free and hindered rotations, respectively. These simple corrections yield results within 2% of the 8-dimensional anharmonic calculations. Nuclear ground state probability density histograms obtained from the QDMC and RBDMC simulations indicate the H2 molecule is delocalized above the Li+-benzene system at 0 K.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolmann, Stephen J.; D'Arcy, Jordan H.; Jordan, Meredith J. T., E-mail: m.jordan@chem.usyd.edu.au

    Quantum and anharmonic effects are investigated in H2-Li+-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials. Three- and 8-dimensional quantum diffusion Monte Carlo (QDMC) and rigid-body diffusion Monte Carlo (RBDMC) simulations are performed on potential energy surfaces interpolated from electronic structure calculations at the M05-2X/6-31+G(d,p) and M05-2X/6-311+G(2df,p) levels of theory using a three-dimensional spline or a modified Shepard interpolation. These calculations investigate the intermolecular interactions in this system, with three- and 8-dimensional 0 K H2 binding enthalpy estimates, ΔHbind (0 K), being 16.5 kJ mol-1 and 12.4 kJ mol-1, respectively: 0.1 and 0.6 kJ mol-1 higher than harmonic values. Zero-point energy effects are 35% of the value of ΔHbind (0 K) at M05-2X/6-311+G(2df,p) and cannot be neglected; uncorrected electronic binding energies overestimate ΔHbind (0 K) by at least 6 kJ mol-1. Harmonic intermolecular binding enthalpies can be corrected by treating the H2 "helicopter" and "ferris wheel" rotations as free and hindered rotations, respectively. These simple corrections yield results within 2% of the 8-dimensional anharmonic calculations. Nuclear ground state probability density histograms obtained from the QDMC and RBDMC simulations indicate the H2 molecule is delocalized above the Li+-benzene system at 0 K.

  13. Improving the Diagnostic Specificity of CT for Early Detection of Lung Cancer: 4D CT-Based Pulmonary Nodule Elastometry

    DTIC Science & Technology

    2013-08-01

    as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the use of B-spline ... FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence... Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions on Medical Imaging. 2001;20(6):526-34

  14. Various forms of indexing HDMR for modelling multivariate classification problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksu, Çağrı; Tunga, M. Alper

    2014-12-10

    The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems having real world data. Mostly, we do not know all possible class values in the domain of the given problem, that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the given problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real life classification problems. To test these different forms, several well-known multivariate classification problems given in the UCI Machine Learning Repository were used and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.

  15. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation

    PubMed Central

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K.

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating “unusual” populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php. PMID:28275348

  16. CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation.

    PubMed

    Wilke, Marko; Altaye, Mekibib; Holland, Scott K

    2017-01-01

    Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating "unusual" populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php.

  17. Upsampling to 400-ms Resolution for Assessing Effective Connectivity in Functional Magnetic Resonance Imaging Data with Granger Causality

    PubMed Central

    Kerr, Deborah L.; Nitschke, Jack B.

    2013-01-01

    Granger causality analysis of functional magnetic resonance imaging (fMRI) blood-oxygen-level-dependent signal data allows one to infer the direction and magnitude of influence that brain regions exert on one another. We employed a method for upsampling the time resolution of fMRI data that does not require additional interpolation beyond the interpolation that is regularly used for slice-timing correction. The mathematics for this new method are provided, and simulations demonstrate its viability. Using fMRI, 17 snake phobics and 19 healthy controls viewed snake, disgust, and neutral fish video clips preceded by anticipatory cues. Multivariate Granger causality models at the native 2-sec resolution and at the upsampled 400-ms resolution assessed directional associations of fMRI data among 13 anatomical regions of interest identified in prior research on anxiety and emotion. Superior sensitivity was observed for the 400-ms model, both for connectivity within each group and for group differences in connectivity. Context-dependent analyses for the 400-ms multivariate Granger causality model revealed the specific trial types showing group differences in connectivity. This is the first demonstration of effective connectivity of fMRI data using a method for achieving 400-ms resolution without sacrificing accuracy available at 2-sec resolution. PMID:23134194
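
    The core Granger-causality computation can be sketched with a lag-1 autoregression on simulated series. The coupling below is invented for illustration only; the study's models are multivariate, higher-resolution and context-dependent.

```python
import numpy as np

# Toy Granger sketch: X "Granger-causes" Y if past values of X improve
# the prediction of Y beyond what Y's own past provides.
rng = np.random.default_rng(7)
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # one-way coupling

def resid_var(target, predictors):
    """Residual variance of a lag-1 linear prediction of target."""
    X = np.column_stack([np.ones(n - 1)] + [p[:-1] for p in predictors])
    beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    r = target[1:] - X @ beta
    return float(r @ r / r.size)

# Restricted model: y from its own past; full model: past y and past x.
gc_x_to_y = np.log(resid_var(y, [y]) / resid_var(y, [y, x]))
gc_y_to_x = np.log(resid_var(x, [x]) / resid_var(x, [x, y]))  # ~0 expected
```

A positive log variance ratio in one direction and a near-zero ratio in the other recovers the simulated asymmetry, which is the quantity the connectivity models above estimate per region pair.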

  18. Geostatistical interpolation of hourly precipitation from rain gauges and radar for a large-scale extreme rainfall event

    NASA Astrophysics Data System (ADS)

    Haberlandt, Uwe

    2007-01-01

    The methods kriging with external drift (KED) and indicator kriging with external drift (IKED) are used for the spatial interpolation of hourly rainfall from rain gauges using additional information from radar, daily precipitation of a denser network, and elevation. The techniques are illustrated using data from the storm period of the 10th to the 13th of August 2002 that led to the extreme flood event in the Elbe river basin in Germany. Cross-validation is applied to compare the interpolation performance of the KED and IKED methods using different additional information with the univariate reference methods nearest neighbour (NN) or Thiessen polygons, inverse square distance weighting (IDW), ordinary kriging (OK) and ordinary indicator kriging (IK). Special attention is given to the analysis of the impact of the semivariogram estimation on the interpolation performance. Hourly and average semivariograms are inferred from daily, hourly and radar data considering either isotropic or anisotropic behaviour using automatic and manual fitting procedures. The multivariate methods KED and IKED clearly outperform the univariate ones, with the most important additional information being radar, followed by precipitation from the daily network and elevation, which plays only a secondary role here. The best performance is achieved when all additional information is used simultaneously with KED. The indicator-based kriging methods provide, in some cases, smaller root mean square errors than the methods which use the original data, but at the expense of a significant loss of variance. The impact of the semivariogram on interpolation performance is not very high. The best results are obtained using an automatic fitting procedure with isotropic variograms either from hourly or radar data.
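
    For reference, the univariate ordinary-kriging (OK) baseline mentioned above can be sketched in one dimension. The exponential covariance model, gauge locations and rainfall values are illustrative assumptions, not the study's fitted semivariograms.

```python
import numpy as np

# Minimal ordinary-kriging sketch: solve the OK system (covariances plus
# a Lagrange multiplier enforcing weights that sum to one) and predict.
def cov(h, sill=1.0, rng_=3.0):
    """Assumed exponential covariance model."""
    return sill * np.exp(-np.abs(h) / rng_)

xg = np.array([0.0, 1.0, 2.5, 4.0, 6.0])     # gauge locations (km)
zg = np.array([2.0, 2.4, 1.1, 0.7, 1.5])     # hourly rainfall (mm)

def ok_predict(x0):
    n = xg.size
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xg[:, None] - xg[None, :])
    K[n, n] = 0.0                            # Lagrange-multiplier corner
    rhs = np.append(cov(xg - x0), 1.0)
    w = np.linalg.solve(K, rhs)[:n]          # kriging weights
    return float(w @ zg)

# Kriging is an exact interpolator: predicting at a gauge returns its value.
at_gauge = ok_predict(2.5)
between = ok_predict(3.0)
```

KED extends this system with drift terms (here, radar and daily-network covariates) appended to the matrix, which is what lets the auxiliary information steer the interpolation.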

  19. Spatial distribution of soil moisture and hydrophobicity in the immediate period after a grassland fire in Lithuania

    NASA Astrophysics Data System (ADS)

    Pereira, P.; Pundyte, N.; Vaitkute, D.; Cepanko, V.; Pranskevicius, M.; Ubeda, X.; Mataix-Solera, J.; Cerda, A.

    2012-04-01

    Fire can significantly affect soil moisture (SM) and water repellency (WR) in the immediate post-fire period, due to the effect of temperature on the soil profile and to the ash produced. This impact can be very heterogeneous, even over small distances, because the conditions of combustion (e.g. fuel and soil moisture, fuel amount, type, distribution and connectivity, and geomorphological variables such as aspect and slope) that influence fire temperature and severity vary. The aim of this work is to study the spatial distribution of SM and WR in a small plot (400 m2, with a sampling distance of 5 m) immediately after a low-severity grassland fire. Measurements were taken in a burned plot and in a control (unburned) plot used as a reference for comparison. In each plot we analysed a total of 25 samples. SM was measured gravimetrically and WR with the water drop penetration time (WDPT) test. Several interpolation methods were tested in order to identify the best predictor of SM and WR: Inverse Distance Weighting (IDW) (with powers of 1, 2, 3, 4 and 5), Local Polynomial with first and second polynomial order, Polynomial Regression (PR), the Radial Basis Functions (RBF) Multilog (MTG), Natural Cubic Spline (NCS), Multiquadratic (MTQ), Inverse Multiquadratic (IMTQ) and Thin Plate Spline (TPS), and Ordinary Kriging. Interpolation accuracy was assessed by cross-validation, which takes each observation in turn out of the sample and estimates it from the remaining ones. The errors produced by each interpolation allowed us to calculate the Root Mean Square Error (RMSE); the best method is the one with the lowest RMSE. The results showed that on average SM in the control plot was 13.59% (±2.83) and WR 2.9 (±1.3) seconds (s). The majority of the soils (88%) were hydrophilic (WDPT <5 s). SM in the control plot showed a weak negative relationship with WR (r=-0.33, p<0.10). The coefficient of variation (CV%) was 20.77% for SM and 44.62% for WR. 
In the burned plot, SM was 14.17% (±2.83) and WR 151 (±99) seconds (s). All the samples analysed were considered hydrophobic (WDPT >5 s). We did not identify significant relationships among the variables (r=0.06, p>0.05), and the CV% was higher for WR (65.85%) than for SM (19.96%). Overall, we identified no significant changes in SM between plots, which means that the fire did not have important implications for soil water content, contrary to what was observed for WR. The same dynamic was observed in the CV%. Among all tested methods, the most accurate for interpolating SM was IDW 1 in the control plot and IDW 2 in the burned plot, which means that the fire did not induce important changes in the spatial distribution of SM. For WR, the best predictor was NCS in the control plot and IDW 1 in the burned plot, which means that the spatial distribution of WR was substantially affected by fire. In this case we observed an increase of the small-scale variability in the burned area. Currently we are monitoring this burned area and observing the evolution of the spatial variability of these two soil properties. It is important to observe their dynamics in space and time and to determine whether the fire will have medium- and long-term implications for SM and WR. Discussion of the results will be carried out during the poster session.
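
    The cross-validation scoring used above can be sketched for IDW with different powers. The 5 m synthetic grid and moisture values below are stand-ins, not the field data.

```python
import numpy as np

# Leave-one-out cross-validation of IDW: each sample is held out in
# turn, re-estimated from the rest, and the RMSE scores each power.
rng = np.random.default_rng(3)
gx, gy = np.meshgrid(np.arange(5) * 5.0, np.arange(5) * 5.0)
pts = np.column_stack([gx.ravel(), gy.ravel()])           # 25 sample points
sm = 14.0 + 0.1 * pts[:, 0] - 0.05 * pts[:, 1] + rng.normal(0.0, 0.2, 25)

def idw_loo_rmse(power):
    errs = []
    for i in range(pts.shape[0]):
        d = np.linalg.norm(pts - pts[i], axis=1)
        d[i] = np.inf                    # leave the target sample out
        w = 1.0 / d ** power
        errs.append(w @ sm / w.sum() - sm[i])
    return float(np.sqrt(np.mean(np.square(errs))))

rmse_by_power = {p: idw_loo_rmse(p) for p in (1, 2, 3, 4, 5)}
best_power = min(rmse_by_power, key=rmse_by_power.get)
```

The same loop, run per method (IDW powers, RBF variants, kriging), yields the RMSE ranking the abstract uses to select the best predictor.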

  20. TWO-LEVEL TIME MARCHING SCHEME USING SPLINES FOR SOLVING THE ADVECTION EQUATION. (R826371C004)

    EPA Science Inventory

    A new numerical algorithm using quintic splines is developed and analyzed: quintic spline Taylor-series expansion (QSTSE). QSTSE is an Eulerian flux-based scheme that uses quintic splines to compute space derivatives and Taylor series expansion to march in time. The new scheme...

  1. Chronological Age, Cognitions, and Practices in European American Mothers: A Multivariate Study of Parenting

    PubMed Central

    Bornstein, Marc H.; Putnick, Diane L.

    2018-01-01

    We studied multiple parenting cognitions and practices in European American mothers (N = 262) who ranged in age from 15 to 47 years. All were first-time parents of 20-month-old children. Some age effects were zero; others were linear or nonlinear. Nonlinear age effects determined by spline regression showed significant associations to a “knot” age (~30 years) with little or no association afterward. For parenting cognitions and practices that are age-sensitive, a two-phase model of parental development is proposed. These findings stress the importance of considering maternal chronological age as a factor in developmental study. PMID:17605519
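
    The spline ("knot") regression described above can be sketched as a piecewise-linear fit with a single knot. The ages, outcome values and the knot at 30 years are illustrative numbers, not the study's data.

```python
import numpy as np

# Knot regression sketch: one slope up to the knot age, a separate slope
# after it, fitted jointly by least squares via a hinge term.
rng = np.random.default_rng(4)
age = rng.uniform(15, 47, 262)
# Synthetic outcome rises with age until ~30, then plateaus (plus noise).
y = 1.0 + 0.2 * np.minimum(age, 30.0) + rng.normal(0.0, 0.3, age.size)

knot = 30.0
X = np.column_stack([np.ones_like(age), age, np.maximum(0.0, age - knot)])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
slope_before = b[1]                 # association up to the knot
slope_after = b[1] + b[2]           # association after the knot
```

Recovering a clear pre-knot slope and a near-zero post-knot slope is the signature of the two-phase pattern the abstract reports.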

  2. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions; especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. 
The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
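
    The monotonicity constraint at the heart of PCLS can be sketched by parameterizing the standard curve as a cumulative sum of nonnegative increments. This simplified sketch omits the spline basis and the smoothness penalty, and the logistic-shaped synthetic "assay intensities" are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Monotone-constrained fit: intensity ~ cumulative sum of nonnegative
# steps, so the fitted curve is nondecreasing in concentration by
# construction.  Solved with nonnegative least squares (NNLS).
rng = np.random.default_rng(5)
conc = np.linspace(0.0, 10.0, 40)                # log-concentration grid
true_curve = 1.0 / (1.0 + np.exp(-(conc - 5.0)))
intensity = true_curve + rng.normal(0.0, 0.03, conc.size)

# Lower-triangular design: column j contributes a step from grid point j on.
n = conc.size
A = np.tril(np.ones((n, n)))
steps, _ = nnls(A, intensity)
fit = A @ steps                                  # monotone fitted curve

mono_ok = bool(np.all(np.diff(fit) >= -1e-12))
rmse = float(np.sqrt(np.mean((fit - true_curve) ** 2)))
```

Unlike a rigid logistic model, this constrained fit tracks asymmetric or censored curve shapes while still guaranteeing the monotonicity needed to invert intensity back to concentration.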

  3. Analysis/forecast experiments with a multivariate statistical analysis scheme using FGGE data

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1985-01-01

    A three-dimensional, multivariate, statistical analysis method, optimal interpolation (OI), is described for modeling meteorological data from widely dispersed sites. The model was developed to analyze FGGE data at the NASA-Goddard Laboratory of Atmospherics. The model features a multivariate surface analysis over the oceans, including maintenance of the Ekman balance and a geographically dependent correlation function. Preliminary comparisons are made between the OI model and similar schemes employed at the European Center for Medium Range Weather Forecasts and the National Meteorological Center. The OI scheme is used to provide input to a GCM, and model error correlations are calculated for forecasts of 500 mb vertical water mixing ratios and the wind profiles. Comparisons are made between the predictions and measured data. The model is shown to be as accurate as a successive corrections model out to 4.5 days.
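
    The OI analysis step can be sketched in one dimension: the background (first-guess) field is corrected by covariance-weighted observation increments. The Gaussian correlation model and all numbers below are illustrative assumptions, not the scheme's actual statistics.

```python
import numpy as np

# Optimal interpolation sketch:
#   analysis = background + B_go (B_oo + R)^-1 (obs - H background)
# where B is the background error covariance and R the obs error covariance.
def gauss_cov(d, var=1.0, L=500.0):
    """Assumed distance-dependent background error correlation."""
    return var * np.exp(-0.5 * (d / L) ** 2)

grid = np.linspace(0.0, 2000.0, 21)             # analysis points (km)
obs_x = np.array([300.0, 900.0, 1600.0])        # observation sites
background = np.zeros_like(grid)                # first-guess field
obs = np.array([1.0, -0.5, 0.8])                # observed increments
r_var = 0.1                                     # obs error variance (assumed)

B_oo = gauss_cov(obs_x[:, None] - obs_x[None, :])
B_go = gauss_cov(grid[:, None] - obs_x[None, :])
innov = obs - np.interp(obs_x, grid, background)   # H: grid -> obs sites
analysis = background + B_go @ np.linalg.solve(B_oo + r_var * np.eye(3), innov)
# grid[3] = 300 km, co-located with the first observation.
```

The analysis draws toward each observation in proportion to the assumed error statistics, which is how OI blends sparse observations with a model first guess.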

  4. In vivo MRS and MRSI: Performance analysis, measurement considerations and evaluation of metabolite concentration images

    NASA Astrophysics Data System (ADS)

    Vikhoff-Baaz, Barbro

    2000-10-01

    The doctoral thesis concerns the development, evaluation and performance of quality assessment methods for volume-selection methods in 31P and 1H MR spectroscopy (MRS). It also covers different aspects of the measurement procedure for 1H MR spectroscopic imaging (MRSI) with application to the human brain, image reconstruction of the MRSI images and evaluation methods for lateralization of temporal lobe epilepsy (TLE). Two complementary two-compartment phantoms and evaluation methods for quality assessment of 31P MRS in small-bore MR systems were presented. The first phantom consisted of an inner cube inside a sphere phantom, where measurements with and without volume selection were compared for various VOI sizes. The multi-centre study showed that the evaluated parameters provide useful information on the performance of volume-selective MRS at the MR system. The second phantom consisted of two compartments divided by a very thin wall and was found useful for measurements of the appearance and position of the VOI profile in specific gradient directions. The second part concerned 1H MRS and MRSI on whole-body MR systems. Different factors that may degrade or complicate the measurement procedure, e.g. for MRSI, were evaluated: volume selection performance, contamination, susceptibility and motion. Two interpolation methods for reconstruction of MRSI images were compared. Measurements and computer simulations showed that Fourier interpolation correctly visualizes the information inherent in the data set, while the results of cubic spline interpolation were dependent on the position of the object relative to the original matrix. Application of spatial filtering may improve the image representation of the data. Finally, 1H MRSI was performed on healthy volunteers and patients with temporal lobe epilepsy (TLE). Metabolite concentration images were used for lateralization of TLE, where the signal intensities in the two hemispheres were compared. 
Visual analysis of the metabolite concentration images can, with high accuracy, be used for lateralization in routine examinations. Analysis of measurements with regions-of-interest (ROI) in different locations gives quantitative information about the degree of signal loss and the spatial distribution.
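
    The Fourier-interpolation behaviour described above (it reproduces the measured samples exactly, independent of object position) can be sketched by zero-padding in k-space; the matrix sizes are illustrative (e.g. a 16x16 MRSI grid upsampled to 64x64).

```python
import numpy as np

# Fourier interpolation of a coarse image: zero-pad the centred 2-D
# spectrum and inverse-transform, yielding a denser grid that still
# passes exactly through the original samples.
rng = np.random.default_rng(6)
img = rng.normal(size=(16, 16))                  # coarse "metabolite image"

def fourier_upsample(a, factor):
    k = np.fft.fftshift(np.fft.fft2(a))
    n = a.shape[0] * factor
    pad = (n - a.shape[0]) // 2
    kp = np.pad(k, pad)                          # zero-pad the spectrum
    return np.fft.ifft2(np.fft.ifftshift(kp)).real * factor ** 2

up = fourier_upsample(img, 4)
# Zero-padding preserves the original samples at every 4th grid point.
err = float(np.max(np.abs(up[::4, ::4] - img)))
```

A cubic-spline upsampling, by contrast, interpolates between sample positions and so its result shifts with the object's placement relative to the matrix, which matches the comparison reported above.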

  5. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance. 
CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increasing curvature smoothness by eliminating curvature oscillations and bumps.
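The deviation-from-smoothness measure named above, the sum of squared jumps in the third derivative of a cubic-spline interpolant, can be computed directly from the spline's piecewise coefficients. A minimal scipy-based sketch (an illustration of the measure, not the CFACS implementation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def third_derivative_jump_measure(x, y):
    """Sum of squared jumps in the third derivative of the natural
    cubic-spline interpolant of (x, y): a roughness measure of the data
    (illustrative implementation, not the CFACS code)."""
    cs = CubicSpline(x, y, bc_type="natural")
    d3 = 6.0 * cs.c[0]              # third derivative, constant per interval
    jumps = np.diff(d3)             # jumps at the interior knots
    return float(np.sum(jumps ** 2))

x = np.linspace(0.0, 1.0, 20)
smooth = x ** 2
bumpy = smooth + 0.05 * np.sin(50.0 * x)   # oscillatory perturbation
print(third_derivative_jump_measure(x, smooth) <
      third_derivative_jump_measure(x, bumpy))   # True
```

The measure grows sharply for oscillatory data, which is what lets CFACS trade small geometry modifications for curvature smoothness.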

  6. Interpolation of the Radial Velocity Data from Coastal HF Radars

    DTIC Science & Technology

    2013-01-01

    practical applications and may help to solve many environmental problems caused by human activity. References: [1] Alvera-Azcarate, A., A. Barth, M. Rixen, ... surface temperature, Ocean Modelling, 9, 325-346. [2] Alvera-Azcarate, A., A. Barth, J.-M. Beckers, and R. H. Weisberg, 2007: Multivariate ... predictions from the global Navy Coastal Ocean Model (NCOM) during 1998-2001, J. Atmos. Oceanic Technol., 21(12), 1876-1894. [4] Barth, A., Alvera...

  7. Hierarchical Control and Trajectory Planning

    NASA Technical Reports Server (NTRS)

    Martin, Clyde F.; Horn, P. W.

    1994-01-01

    Most of the time on this project was spent on the trajectory planning problem. The construction is equivalent to the classical spline construction in the case that the system matrix is nilpotent. If the dimension of the system is n, then a spline of degree 2n-1 is constructed. This gives a new approach to the construction of splines that is more efficient than the usual construction and at the same time allows the construction of a much larger class of splines. All known classes of splines are reconstructed using the approach of linear control theory. As a numerical analysis technique, control theory provides a very good tool for constructing splines. However, for the purposes of trajectory planning it is quite another story. Enclosed in this document are four reports done under this grant.

  8. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Keller, J.; Wallen, R.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  9. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

    Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
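The sampling idea described here, evaluating a spline fit of the empirical quantile function at uniform draws, can be sketched as follows. This is an illustrative sketch only (a monotone PCHIP spline is used for simplicity; the paper itself compares B-spline and rational-spline fits):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(0)
data = rng.gamma(2.0, 1.0, size=500)        # a skewed "experimental" sample

# Empirical quantile function: sorted data against plotting positions.
q = np.sort(data)
p = (np.arange(1, q.size + 1) - 0.5) / q.size
quantile = PchipInterpolator(p, q)          # monotone spline Q(p) ~ F^{-1}(p)

# Generate new variates by evaluating Q at uniform draws.
u = rng.uniform(p[0], p[-1], size=10_000)
new = quantile(u)
print(abs(new.mean() - data.mean()) < 0.2)  # sample statistics agree closely
```

Once the spline is built, each new variate costs only one spline evaluation, which is the speed advantage over evaluating an analytic inverse CDF.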

  10. Dynamic data-driven integrated flare model based on self-organized criticality

    NASA Astrophysics Data System (ADS)

    Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M. K.

    2013-05-01

    Context. We interpret solar flares as events originating in active regions that have reached the self-organized critical state. We describe them with a dynamic integrated flare model whose initial conditions and driving mechanism are derived from observations. Aims: We investigate whether well-known scaling laws observed in the distribution functions of characteristic flare parameters are reproduced after the self-organized critical state has been reached. Methods: To investigate whether the distribution functions of total energy, peak energy, and event duration follow the expected scaling laws, we first applied the previously reported static cellular automaton model to a time series of seven solar vector magnetograms of the NOAA active region 8210 recorded by the Imaging Vector Magnetograph on May 1, 1998 between 18:59 UT and 23:16 UT until the self-organized critical state was reached. We then evolved the magnetic field between these processed snapshots through spline interpolation, mimicking a natural driver in our dynamic model. We identified magnetic discontinuities that exceeded a threshold in the Laplacian of the magnetic field after each interpolation step. These discontinuities were relaxed in local diffusion events, implemented in the form of cellular automaton evolution rules. Subsequent interpolation and relaxation steps covered all transitions until the end of the processed magnetograms' sequence. We additionally advanced each magnetic configuration that had reached the self-organized critical state (SOC configuration) by the static model until 50 more flares were triggered, applied the dynamic model again to the new sequence, and repeated the same process sufficiently often to generate adequate statistics. Physical requirements, such as the divergence-free condition for the magnetic field, were approximately imposed. 
Results: We obtain robust power laws in the distribution functions of the modeled flaring events with scaling indices that agree well with observations. Peak and total flare energy obey single power laws with indices -1.65 ± 0.11 and -1.47 ± 0.13, while the flare duration is best fitted with a double power law (-2.15 ± 0.15 and -3.60 ± 0.09 for the flatter and steeper parts, respectively). Conclusions: We conclude that well-known statistical properties of flares are reproduced after active regions reach the state of self-organized criticality. A significant enhancement of our refined cellular automaton model is that it initiates and further drives the simulation from observed evolving vector magnetograms, thus facilitating energy calculation in physical units, while a separation between MHD and kinetic timescales is possible by assigning distinct MHD timestamps to each interpolation step.
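The temporal driving step, spline interpolation of the field between processed snapshots, can be sketched with scipy. The array below is a toy stand-in for the magnetogram sequence, not the IVM data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical stand-in for a sequence of processed magnetogram snapshots:
# 7 frames of a 16x16 field component, with timestamps in hours.
t_obs = np.linspace(0.0, 4.3, 7)
rng = np.random.default_rng(1)
frames = np.cumsum(rng.normal(size=(7, 16, 16)), axis=0)

# CubicSpline interpolates along axis 0 (time) for every pixel at once,
# providing a smooth driver between the observed snapshots.
driver = CubicSpline(t_obs, frames, axis=0)

t_fine = np.linspace(t_obs[0], t_obs[-1], 50)   # intermediate driver steps
evolved = driver(t_fine)
print(evolved.shape)                             # (50, 16, 16)
```

The spline reproduces each observed snapshot exactly at its timestamp, so the interpolated driver is consistent with the data it was built from.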

  11. Efficient Global Aerodynamic Modeling from Flight Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2012-01-01

    A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.

  12. User Selection Criteria of Airspace Designs in Flexible Airspace Management

    NASA Technical Reports Server (NTRS)

    Lee, Hwasoo E.; Lee, Paul U.; Jung, Jaewoo; Lai, Chok Fung

    2011-01-01


  13. On the spline-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a spline-based wavelet basis is constructed. Given an n-th order spline basis it is proved that the differentiation matrix is accurate of order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.

  14. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry

    DTIC Science & Technology

    2015-12-01

    ARL-SR-0347, DEC 2015, US Army Research Laboratory. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry.

  15. Aerodynamic Shape Optimization of a Dual-Stream Supersonic Plug Nozzle

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.; Park, Michael A.; Nielsen, Eric J.; Carlson, Jan-Renee

    2015-01-01

    Aerodynamic shape optimization was performed on an isolated axisymmetric plug nozzle sized for a supersonic business jet. The dual-stream concept was tailored to attenuate nearfield pressure disturbances without compromising nozzle performance. Adjoint-based anisotropic mesh refinement was applied to resolve nearfield compression and expansion features in the baseline viscous grid. Deformed versions of the adapted grid were used for subsequent adjoint-driven shape optimization. For design, a nonlinear gradient-based optimizer was coupled to the discrete adjoint formulation of the Reynolds-averaged Navier-Stokes equations. All nozzle surfaces were parameterized using 3rd order B-spline interpolants and perturbed axisymmetrically via free-form deformation. Geometry deformations were performed using 20 design variables shared between the outer cowl, shroud and centerbody nozzle surfaces. Interior volume grid deformation during design was accomplished using linear elastic mesh morphing. The nozzle optimization was performed at a design cruise speed of Mach 1.6, assuming core and bypass pressure ratios of 6.19 and 3.24, respectively. Ambient flight conditions at design were commensurate with 45,000-ft standard day atmosphere.

  16. Nanometer-scale displacement sensing using self-mixing interferometry with a correlation-based signal processing technique

    NASA Astrophysics Data System (ADS)

    Hast, J.; Okkonen, M.; Heikkinen, H.; Krehut, L.; Myllylä, R.

    2006-06-01

    A self-mixing interferometer is proposed to measure nanometre-scale optical path length changes in the interferometer's external cavity. As its light source, the developed technique uses a blue-emitting GaN laser diode. An external reflector, a silicon mirror, driven by a piezo nanopositioner is used to produce an interference signal which is detected with the monitor photodiode of the laser diode. Changing the optical path length of the external cavity introduces a phase difference to the interference signal. This phase difference is detected using a signal processing algorithm based on Pearson's correlation coefficient and cubic spline interpolation techniques. The results show that the average deviation between the measured and actual displacements of the silicon mirror is 3.1 nm in the 0-110 nm displacement range. Moreover, the measured displacements follow the actual displacement of the silicon mirror linearly. Finally, the paper considers the effects produced by the temperature and current stability of the laser diode as well as dispersion effects in the external cavity of the interferometer. These reduce the sensor's measurement accuracy, especially in long-term measurements.
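The combination of spline resampling and Pearson correlation for phase detection can be illustrated with a short sketch. This is a toy demonstration of the general idea (synthetic cosine fringes, a brute-force shift search), not the authors' algorithm:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Estimate the phase shift between two sampled interference fringes by
# spline-resampling one of them and maximizing Pearson's correlation
# coefficient against the reference.
t = np.linspace(0.0, 4.0 * np.pi, 200)
true_shift = 0.37
measured = np.cos(t - true_shift)           # shifted fringe signal

resample = CubicSpline(t, measured)
window = np.linspace(1.0, 1.0 + 2.0 * np.pi, 500)   # one full fringe period
shifts = np.linspace(0.0, 1.0, 2001)
corr = [np.corrcoef(np.cos(window), resample(window + s))[0, 1]
        for s in shifts]
best = shifts[int(np.argmax(corr))]
print(abs(best - true_shift) < 0.01)        # True
```

The spline lets the shifted signal be evaluated between its sample points, which is what makes sub-sample (here, sub-radian) phase resolution possible.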

  17. A new ab initio potential energy surface of LiClH (1A') system and quantum dynamics calculation for Li + HCl (v = 0, j = 0-2) → LiCl + H reaction

    NASA Astrophysics Data System (ADS)

    Tan, Rui Shan; Zhai, Huan Chen; Yan, Wei; Gao, Feng; Lin, Shi Ying

    2017-04-01

    A new ab initio potential energy surface (PES) for the ground state of Li + HCl reactive system has been constructed by three-dimensional cubic spline interpolation of 36 654 ab initio points computed at the MRCI+Q/aug-cc-pV5Z level of theory. The title reaction is found to be exothermic by 5.63 kcal/mol (9 kcal/mol with zero point energy corrections), which is very close to the experimental data. The barrier height, which is 2.99 kcal/mol (0.93 kcal/mol for the vibrationally adiabatic barrier height), and the depth of van der Waals minimum located near the entrance channel are also in excellent agreement with the experimental findings. This study also identified two more van der Waals minima. The integral cross sections, rate constants, and their dependence on initial rotational states are calculated using an exact quantum wave packet method on the new PES. They are also in excellent agreement with the experimental measurements.
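Evaluating a PES stored on a regular 3-D grid of ab initio points by cubic interpolation can be sketched as follows. A toy analytic potential stands in for the MRCI+Q data, and the grid is illustrative (scipy 1.9+ supports `method="cubic"` in `RegularGridInterpolator`):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy potential on a regular (r1, r2, theta) grid standing in for the
# tabulated ab initio points; cubic interpolation between grid points.
r1 = np.linspace(1.0, 4.0, 12)
r2 = np.linspace(1.0, 4.0, 12)
theta = np.linspace(0.0, np.pi, 10)
R1, R2, TH = np.meshgrid(r1, r2, theta, indexing="ij")
V = np.exp(-R1) + np.exp(-R2) + 0.1 * np.cos(TH)   # made-up smooth PES

pes = RegularGridInterpolator((r1, r2, theta), V, method="cubic")

# Off-grid evaluation agrees closely with the analytic toy potential.
exact = np.exp(-2.0) + np.exp(-2.5) + 0.1 * np.cos(1.0)
approx = float(pes([[2.0, 2.5, 1.0]])[0])
print(abs(approx - exact) < 1e-3)           # True for this smooth surface
```

For smooth surfaces the cubic interpolation error shrinks rapidly with grid spacing, which is why dense ab initio grids support accurate dynamics calculations.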

  18. Coupled 2D-3D finite element method for analysis of a skin panel with a discontinuous stiffener

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Lotts, C. G.; Davis, D. D., Jr.; Krishnamurthy, T.

    1992-01-01

    This paper describes a computationally efficient analysis method which was used to predict detailed stress states in a typical composite compression panel with a discontinuous hat stiffener. A global-local approach was used. The global model incorporated both 2D shell and 3D brick elements connected by newly developed transition elements. Most of the panel was modeled with 2D elements, while 3D elements were employed to model the stiffener flange and the adjacent skin. Both linear and geometrically nonlinear analyses were performed on the global model. The effect of geometric nonlinearity induced by the eccentric load path due to the discontinuous hat stiffener was significant. The local model used a fine mesh of 3D brick elements to model the region at the end of the stiffener. Boundary conditions of the local 3D model were obtained by spline interpolation of the nodal displacements from the global analysis. Detailed in-plane and through-the-thickness stresses were calculated in the flange-skin interface near the end of the stiffener.
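The boundary-condition transfer step, spline interpolation of global-model nodal displacements onto the finer local-model boundary, can be sketched with hypothetical node data (the node counts and displacement fields below are made up for illustration):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Displacements known at coarse global-model nodes along the local-model
# boundary are spline-interpolated to the finer local-model nodes.
s_global = np.linspace(0.0, 1.0, 9)             # coarse nodes, arc length
u_global = np.column_stack([np.sin(s_global),   # made-up (ux, uy, uz)
                            0.1 * s_global,
                            s_global ** 2])
bc = CubicSpline(s_global, u_global, axis=0)    # one spline per component

s_local = np.linspace(0.0, 1.0, 33)             # fine local-boundary nodes
u_local = bc(s_local)                           # interpolated BCs
print(u_local.shape)                            # (33, 3)
```

The spline reproduces the global-model displacements exactly at the coarse nodes, so the local model's boundary remains consistent with the global analysis.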

  19. Semi-automatic delineation of the spino-laminar junction curve on lateral x-ray radiographs of the cervical spine

    NASA Astrophysics Data System (ADS)

    Narang, Benjamin; Phillips, Michael; Knapp, Karen; Appelboam, Andy; Reuben, Adam; Slabaugh, Greg

    2015-03-01

    Assessment of the cervical spine using x-ray radiography is an important task when providing emergency room care to trauma patients suspected of a cervical spine injury. In routine clinical practice, a physician will inspect the alignment of the cervical spine vertebrae by mentally tracing three alignment curves along the anterior and posterior sides of the cervical vertebral bodies, as well as one along the spinolaminar junction. In this paper, we propose an algorithm to semi-automatically delineate the spinolaminar junction curve, given a single reference point and the corners of each vertebral body. From the reference point, our method extracts a region of interest, and performs template matching using normalized cross-correlation to find matching regions along the spinolaminar junction. Matching points are then fit to a third order spline, producing an interpolating curve. Experimental results are promising: on average, the method produces a modified Hausdorff distance of 1.8 mm, validated on a dataset of 29 patients, including those with degenerative change, retrolisthesis, and fracture.

  20. Mapping near-surface air temperature, pressure, relative humidity and wind speed over Mainland China with high spatiotemporal resolution

    NASA Astrophysics Data System (ADS)

    Li, Tao; Zheng, Xiaogu; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Zhang, Shupeng; Wu, Guocan; Wang, Zhonglei; Huang, Chengcheng; Shen, Yan; Liao, Rongwei

    2014-09-01

    As part of a joint effort to construct an atmospheric forcing dataset for mainland China with high spatiotemporal resolution, a new approach is proposed to construct gridded near-surface temperature, relative humidity, wind speed and surface pressure with a resolution of 1 km×1 km. The approach comprises two steps: (1) fit a partial thin-plate smoothing spline with orography and reanalysis data as explanatory variables to ground-based observations for estimating a trend surface; (2) apply a simple kriging procedure to the residual for trend surface correction. The proposed approach is applied to observations collected at approximately 700 stations over mainland China. The generated forcing fields are compared with the corresponding components of the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis dataset and the Princeton meteorological forcing dataset. The comparison shows that, both within the station network and within the resolutions of the two gridded datasets, the interpolation errors of the proposed approach are markedly smaller than the two gridded datasets.
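The two-step scheme, a trend surface from explanatory variables followed by spatial interpolation of the residual, can be sketched in simplified form. All data below are synthetic, the trend is reduced to a plain lapse-rate regression, and a thin-plate RBF stands in for the simple-kriging step:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 100.0, size=(200, 2))      # station coordinates, km
elev = rng.uniform(0.0, 3.0, size=200)           # station elevation, km
temp = 25.0 - 6.5 * elev + 0.02 * xy[:, 0] + rng.normal(0.0, 0.1, 200)

# Step 1: trend surface from the explanatory variable (lapse-rate fit).
A = np.column_stack([np.ones_like(elev), elev])
coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
resid = temp - A @ coef

# Step 2: interpolate the residual field over space (thin-plate RBF here,
# standing in for simple kriging of the residual).
resid_field = RBFInterpolator(xy, resid, kernel="thin_plate_spline")

# Gridded estimate = trend at a 1 km elevation + interpolated residual.
grid = rng.uniform(0.0, 100.0, size=(5, 2))
estimate = coef[0] + coef[1] * 1.0 + resid_field(grid)
print(estimate.shape)                            # (5,)
```

Splitting trend and residual lets the elevation dependence be captured globally while the residual correction stays local, which is the intuition behind the paper's partial thin-plate spline plus kriging design.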

  1. Single-step collision-free trajectory planning of biped climbing robots in spatial trusses.

    PubMed

    Zhu, Haifei; Guan, Yisheng; Chen, Shengjun; Su, Manjia; Zhang, Hong

    For a biped climbing robot with dual grippers to climb poles, trusses or trees, planning feasible, collision-free climbing motion is essential. In this paper, we utilize the sampling-based algorithm Bi-RRT to plan single-step collision-free motion for biped climbing robots in spatial trusses. To deal with the orientation limit of a 5-DoF biped climbing robot, a new state representation along with corresponding operations, including sampling, metric calculation and interpolation, is presented. A simple but effective model of a biped climbing robot in trusses is proposed, through which the motion planning of one climbing cycle is transformed to that of a manipulator. In addition, pre- and post-processes are introduced to expedite the convergence of the Bi-RRT algorithm and to ensure the safe motion of the climbing robot near poles as well. The piecewise linear paths are smoothed by utilizing cubic B-spline curve fitting. The effectiveness and efficiency of the presented Bi-RRT algorithm for climbing motion planning are verified by simulations.
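The final smoothing step, fitting a cubic B-spline to a piecewise-linear planner path, can be sketched with scipy's FITPACK wrappers. The 3-D waypoints below are hypothetical; a real planner would supply joint-space or workspace waypoints:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Smooth a piecewise-linear 3-D path (e.g. waypoints from a sampling-based
# planner) with a cubic (k=3) smoothing B-spline.
waypoints = np.array([[0, 0, 0], [1, 0, 1], [2, 1, 1],
                      [3, 1, 2], [4, 2, 2]], dtype=float)
tck, u = splprep(waypoints.T, k=3, s=0.1)   # s > 0: smoothing, not interp.

u_fine = np.linspace(0.0, 1.0, 100)
smooth_path = np.array(splev(u_fine, tck)).T
print(smooth_path.shape)                    # (100, 3)
```

The smoothing factor `s` trades fidelity to the waypoints for curvature continuity; `s=0` would force exact interpolation of the corners the planner produced.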

  2. Ionospheric modelling to boost the PPP-RTK positioning and navigation in Australia

    NASA Astrophysics Data System (ADS)

    Arsov, Kirco; Terkildsen, Michael; Olivares, German

    2017-04-01

    This paper deals with the implementation of a 3-D ionospheric model to support GNSS positioning and navigation activities in Australia. We will introduce two strategies for Slant Total Electron Content (STEC) estimation from GNSS CORS sites in Australia. In the first scenario, the STEC is estimated in the PPP-RTK network processing. The ionosphere is estimated together with other GNSS network parameters, such as satellite clocks, satellite phase biases, etc. In the second, STEC is estimated on a station-by-station basis by taking advantage of the already known station position and relations between different satellite ambiguities. Accuracy studies and considerations will be presented and discussed. Furthermore, based on this STEC, 3-D ionosphere modeling will be performed. We will present simple interpolation, 3-D tomography and bi-cubic splines as modeling techniques. In order to assess these models, a (user) PPP-RTK test bed is established and a sensitivity matrix will be introduced and analyzed based on time to first fix (TTFF) of ambiguities, positioning accuracy, PPP-RTK solution convergence time, etc. Different spatial configurations and constellations will be presented and assessed.

  3. Bilinear modeling and nonlinear estimation

    NASA Technical Reports Server (NTRS)

    Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.

    1989-01-01

    New methods are illustrated for online nonlinear estimation applied to the lateral deflection of an elastic beam, based on on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance is to be updated in online implementation. This is in contrast to the computational approach which arises in EKF methods arising by local linearization with respect to the current conditional mean. Processing refinements for nonlinear estimation based on optimal, nonlinear interpolation between observations are also highlighted. In these methods the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.

  4. Multivariate missing data in hydrology - Review and applications

    NASA Astrophysics Data System (ADS)

    Ben Aissia, Mohamed-Aymen; Chebana, Fateh; Ouarda, Taha B. M. J.

    2017-12-01

    Water resources planning and management require complete data sets of a number of hydrological variables, such as flood peaks and volumes. However, hydrologists are often faced with the problem of missing data (MD) in hydrological databases. Several methods are used to deal with the imputation of MD. During the last decade, multivariate approaches have gained popularity in the field of hydrology, especially in hydrological frequency analysis (HFA). However, treating the MD remains neglected in the multivariate HFA literature, whereas the focus has been mainly on the modeling component. For a complete analysis and in order to optimize the use of data, MD should also be treated in the multivariate setting prior to modeling and inference. Imputation of MD in the multivariate hydrological framework can have direct implications on the quality of the estimation. Indeed, the dependence between the series represents important additional information that can be included in the imputation process. The objective of the present paper is to highlight the importance of treating MD in multivariate hydrological frequency analysis by reviewing and applying multivariate imputation methods and by comparing univariate and multivariate imputation methods. An application is carried out for multiple flood attributes on three sites in order to evaluate the performance of the different methods based on the leave-one-out procedure. The results indicate that the performance of imputation methods can be improved by adopting the multivariate setting, compared to mean substitution and interpolation methods, especially when using the copula-based approach.

  5. A User's Version View of a Robustified, Bayesian Weighted Least-Squares Recursive Algorithm for Interpolating AVHRR-NDVI Data: Applications to an Animated Visualization of the Phenology of a Semi-Arid Study Area

    NASA Astrophysics Data System (ADS)

    Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.

    2005-12-01

    In studying vegetation patterns remotely, the objective is to draw inferences on the development of specific or general land surface phenology (LSP) as a function of space and time by determining the behavior of a parameter (in our case NDVI), when the parameter estimate may be biased by noise, data dropouts and obfuscations from atmospheric and other effects. We describe the underpinning concepts of a procedure for a robust interpolation of NDVI data that does not have the limitations of other mathematical approaches which require orthonormal basis functions (e.g. Fourier analysis). In this approach, data need not be uniformly sampled in time, nor do we expect noise to be Gaussian-distributed. Our approach is intuitive and straightforward, and is applied here to the refined modeling of LSP using 7 years of weekly and biweekly AVHRR NDVI data for a 150 x 150 km study area in central Nevada. This site is a microcosm of a broad range of vegetation classes, from irrigated agriculture with annual NDVI values of up to 0.7 to playas and alkali salt flats with annual NDVI values of only 0.07. Our procedure involves a form of parameter estimation employing Bayesian statistics. In utilitarian terms, the latter procedure is a method of statistical analysis (in our case, robustified, weighted least-squares recursive curve-fitting) that incorporates a variety of prior knowledge when forming current estimates of a particular process or parameter. In addition to the standard Bayesian approach, we account for outliers due to data dropouts or obfuscations because of clouds and snow cover. An initial "starting model" for the average annual cycle and long term (7 year) trend is determined by jointly fitting a common set of complex annual harmonics and a low order polynomial to an entire multi-year time series in one step. 
This is not a formal Fourier series in the conventional sense, but rather a set of 4 cosine and 4 sine coefficients with fundamental periods of 12, 6, 3 and 1.5 months. Instabilities during large time gaps in the data are suppressed by introducing an expectation of minimum roughness on the fitted time series. Our next significant computational step involves a constrained least squares fit to the observed NDVI data. Residuals between the observed NDVI value and the predicted starting model are computed, and the inverse of these residuals provide the weights for a weighted least squares analysis whereby a set of annual eighth-order splines are fit to the 7 years of NDVI data. Although a series of independent eighth-order annual functionals over a period of 7 years is intrinsically unstable when there are significant data gaps, the splined versions for this specific application are quite stable due to explicit continuity conditions on the values and derivatives of the functionals across contiguous years, as well as a priori constraints on the predicted values vis-a-vis the assumed initial model. Our procedure allows us to robustly interpolate original unequally-spaced NDVI data with a new time series having the most-appropriate, user-defined time base. We apply this approach to the temporal behavior of vegetation in our 150 x 150 km study area. Such a small area, being so rich in vegetation diversity, is particularly useful to view in map form and by animated annual and multi-year time sequences, since the interrelation between phenology, topography and specific usage patterns becomes clear.
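The joint harmonic-plus-polynomial starting-model fit can be sketched as an ordinary least-squares problem. The NDVI series below is synthetic; only the harmonic periods (12, 6, 3 and 1.5 months) follow the text:

```python
import numpy as np

# Fit a low-order polynomial plus annual harmonics jointly to a
# multi-year series of roughly biweekly samples.
t = np.arange(0, 7 * 365, 14) / 365.25          # sample times in years
rng = np.random.default_rng(3)
ndvi = (0.3 + 0.01 * t + 0.15 * np.cos(2 * np.pi * t)
        + rng.normal(0.0, 0.03, t.size))        # trend + annual cycle + noise

periods = np.array([12.0, 6.0, 3.0, 1.5]) / 12.0   # periods in years
cols = [np.ones_like(t), t]                     # low-order polynomial part
for P in periods:
    cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
G = np.column_stack(cols)                       # design matrix, one step
coef, *_ = np.linalg.lstsq(G, ndvi, rcond=None)
fitted = G @ coef
print(float(np.std(ndvi - fitted)))             # ~ the 0.03 noise level
```

Because all coefficients are solved in one step, the harmonics and the trend share the fit, which is what makes the approach tolerant of unevenly sampled data.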

  6. Technical Note: spektr 3.0, a computational tool for x-ray spectrum modeling and analysis.

    PubMed

    Punnoose, J; Xu, J; Sisniega, A; Zbijewski, W; Siewerdsen, J H

    2016-08-01

    A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a MATLAB (The MathWorks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. The spektr code generates x-ray spectra (photons/mm(2)/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20-150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30-140 kV, with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available.

  7. Eigen-disfigurement model for simulating plausible facial disfigurement after reconstructive surgery.

    PubMed

    Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K

    2015-03-27

    Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face, to elucidate human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched onto a healthy face by seeking a Poisson solution to guided interpolation, using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings of experienced medical professionals on the plausibility of the simulations were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model, with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. The panel ratings showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements, and its simulations represent plausible outcomes of reconstructive surgery for facial cancers. Thus, our technique can be used to study human perception of facial disfigurement.
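The thin-plate-spline warping step can be sketched in 2D with SciPy's RBFInterpolator (kernel='thin_plate_spline'); the landmarks below are hypothetical, and the study itself operates on 3D facial meshes:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 2D landmarks: points on a template, and their positions
# on a target; only the center landmark is displaced here.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src + np.array([[0., 0.], [0., 0.], [0., 0.], [0., 0.], [0.1, -0.05]])

# TPS warp: interpolate the displacement field defined at the landmarks,
# then apply it to arbitrary query points.
tps = RBFInterpolator(src, dst - src, kernel='thin_plate_spline')
query = np.array([[0.5, 0.5], [0.25, 0.25]])
warped = query + tps(query)
```

Because the default smoothing is zero, the warp reproduces the landmark correspondences exactly and bends the space smoothly in between.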

  8. GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data

    NASA Astrophysics Data System (ADS)

    Ibrahim, Noor Akma; Suliadi

    2010-11-01

    In this paper we propose GEE-Smoothing spline for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of the parametric generalized estimating equation to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both the parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.
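    The spline-smoothing ingredient (not the paper's GEE profile algorithm) can be sketched with a cubic smoothing spline, where the smoothing parameter plays the role of the roughness-penalty weight:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 80)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# Cubic smoothing spline; s is a residual-sum-of-squares budget that
# trades fidelity against roughness, the same role the penalty weight
# plays in a penalized natural-cubic-spline fit.
spl = UnivariateSpline(x, y, k=3, s=x.size * 0.1**2)
y_smooth = spl(x)
```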

  9. Smoothing Spline ANOVA Decomposition of Arbitrary Splines: An Application to Eye Movements in Reading

    PubMed Central

    Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias

    2015-01-01

    The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure; i.e., the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading. PMID:25816246

  10. Spatial variability of soil magnetic susceptibility in an agricultural field located in Eastern Ukraine

    NASA Astrophysics Data System (ADS)

    Menshov, Oleksandr; Pereira, Paulo; Kruglov, Oleksandr

    2015-04-01

    Magnetic susceptibility (MS) has been used to characterize soil properties. It gives indirect information about heavy metal content and the degree of human impact on soil contamination derived from atmospheric pollution (Girault et al., 2011). The method is inexpensive relative to chemical analysis and very useful for tracking soil pollution, since several toxic components deposited on the soil surface are rich in particulates produced by oxidation processes (Boyko et al., 2004; Morton-Bermea et al., 2009). Thus, identifying the spatial distribution of MS is of major importance, since it can give indirect information about heavy metal content (Dankoub et al., 2012). It also allows the pedogenic and technogenic origins of the magnetic signal to be distinguished. For example, Ukrainian chernozems contain fine-grained oxidized magnetite and maghemite of pedogenic origin formed by weathering of the parent material (Jeleńska et al., 2004). However, for a correct understanding of a variable's distribution, the identification of the most accurate interpolation method is fundamental to a better interpretation of the mapped information (Pereira et al., 2013). The objective of this work is to study the spatial variability of soil MS in an agricultural field located in the Tcherkascy Tishki area (50.11°N, 36.43°E, 162 m a.s.l.), Ukraine. Soil MS was measured at 77 sampling points on a north-facing slope. To identify the best interpolation method, several interpolators were tested: inverse distance weighting (IDW) with powers of 1, 2, 3, 4 and 5, Local Polynomial (LP) with powers of 1 and 2, Global Polynomial (GP), radial basis functions - spline with tension (SPT), completely regularized spline (CRS), multiquadratic (MTQ), inverse multiquadratic (IMTQ), and thin plate spline (TPS) - and some geostatistical methods, namely ordinary kriging (OK), Simple Kriging (SK) and Universal Kriging (UK), used in previous works (Pereira et al., 2014).
On average, the soil MS of the studied plot was 686.05 × 10⁻⁹ m³/kg, with minimum and maximum values of 499.33 and 862.27 × 10⁻⁹ m³/kg, respectively. The standard deviation was 85.62 and the coefficient of variation 12.48%. This shows that the spatial variability of soil MS was low. The Global Moran's I index was 0.841, with a z-score of 7.741 and p<0.001, indicating that soil MS had a clustered pattern. The variogram results showed that the Gaussian model was the best fit. The nugget effect was 0.1007, the sill 0.9905, and the nugget/sill ratio 0.10, which shows that soil MS has a strong spatial dependency. The results of the interpolation tests showed that the error distributions followed the normal distribution, the average predicted values were similar to the observed ones, and the correlation between these two distributions was high (between 0.85-0.90) in all cases. The method that best predicted soil MS was LP2 and the least accurate was SK. Soil MS presented high values in the southwestern part and low values in the northeastern area of the plot. An increase of soil MS from the top of the slope to the bottom is clearly observed. Acknowledgments: RECARE (Preventing and Remediating Degradation of Soils in Europe Through Land Care, FP7-ENV-2013-TWO STAGE), funded by the European Commission; and the COST action ES1306 (Connecting European connectivity research). References: Boyko, T., Scholger, R., Stanjek, H., MAGPROX team (2004) Topsoil magnetic susceptibility mapping as a tool for pollution monitoring: Repeatability of in situ measurements. Journal of Applied Geophysics, 55, 249-259. Dankoub, Z., Ayoubi, S., Khademi, H., Sheng-Gao, L. (2012) Spatial distribution of magnetic properties and selected heavy metals in calcareous soils as affected by land use in the Isfahan Region, Central Iran. Pedosphere, 22, 33-47. Girault, F., Poitou, C., Perrier, F., Koirala, B.P., Bhattarai, M. (2011) Soil characterization using patterns of magnetic susceptibility versus effective radium concentration. Natural Hazards and Earth System Sciences, 11, 2285-2293. Jeleńska, M., Hasso-Agopsowicz, A., Kopcewicz, B., Sukhorada, A., Tyamina, K., Kądziałko-Hofmokl, M., Matviishina, Z. (2004) Magnetic properties of the profiles of polluted and non-polluted soils. A case study from Ukraine. Geophys. J. Int., 159, 104-116. Morton-Bermea, O., Hernandez, E., Martinez-Pichardo, E., Soler-Arechalde, A.M., Santa Cruz, R.L., Gonzalez-Hernandez, G., Beramendi-Orosco, L., Urrutia-Fucugauchi, J. (2009) Mexico City topsoils: Heavy metals vs. magnetic susceptibility. Geoderma, 151, 121-125. Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Arcenegui, V., Zavala, L. Modelling the impacts of wildfire on ash thickness in a short-term period. Land Degradation and Development, (In Press), DOI: 10.1002/ldr.2195. Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Jordan, A., Burguet, M. (2013) Spatial models for monitoring the spatio-temporal evolution of ashes after fire - a case study of a burnt grassland in Lithuania. Solid Earth, 4, 153-165.
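Of the interpolators compared above, IDW is the simplest to sketch; the implementation and the synthetic MS-like values below are illustrative only. Because the weights form a convex combination, IDW predictions stay within the observed range, which is why its maxima and minima can occur only at the observation points:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights fall off as 1/d**power and
    are normalized to sum to one (a convex combination of observations)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_obs

rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 1.0, size=(77, 2))      # 77 sampling points, as in the study
z = 686.0 + 85.0 * np.sin(3.0 * xy[:, 0])     # synthetic "MS" field, not real data
grid = np.array([[0.5, 0.5], [0.1, 0.9]])
z_hat = idw(xy, z, grid, power=2.0)
```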

  11. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on both coarse and fine scales. In the conventional B-spline-based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components, which could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-spline-based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-spline-based wavelet model is that the space of allowable deformations is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet-based registration leads to smoother deformation fields than traditional B-spline-based registration, while achieving better accuracy.
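    The conventional single-scale model that the wavelet approach reparameterizes can be sketched in 1D: a free-form deformation is the identity plus a displacement expanded in uniform cubic B-spline basis functions (an illustration, not the paper's registration code):

```python
import numpy as np
from scipy.interpolate import BSpline

# 1D cubic B-spline free-form deformation: the transform is the identity
# plus a displacement expanded in a uniform cubic B-spline basis.
h = 1.0                               # control-point spacing
knots = np.arange(-3, 14) * h         # uniform knot vector covering [0, 10]
coef = np.zeros(len(knots) - 4)       # one coefficient per basis function
coef[6] = 0.5                         # displace one control point (support [3, 7])

disp = BSpline(knots, coef, 3)        # displacement field
x = np.linspace(0.0, 10.0, 101)
t_x = x + disp(x)                     # deformed coordinates
```

Moving a single control point produces a smooth, compactly supported bump in the deformation; the wavelet model spans exactly this space but splits it into coarse- and fine-scale components.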

  12. Design of an essentially non-oscillatory reconstruction procedure in finite-element type meshes

    NASA Technical Reports Server (NTRS)

    Abgrall, Remi

    1992-01-01

    An essentially non-oscillatory reconstruction for functions defined on finite-element-type meshes is designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes, and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, the behavior of the highest coefficients of two polynomial interpolations of a function that may admit discontinuities along locally regular curves is studied: the Lagrange interpolation, and an approximation such that the mean of the polynomial on any control volume is equal to that of the function to be approximated. This enables the best stencil for the approximation to be chosen. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, two methods were studied: one based on an adaptation of the so-called reconstruction-via-deconvolution method to irregular meshes, and one that relies on the approximation of the mean as defined above. The first method is conservative up to a quadrature formula and the second one is exactly conservative. The two methods have the expected order of accuracy, but the second one is much less expensive than the first. Some numerical examples are given which demonstrate the efficiency of the reconstruction.

  13. Improving the Diagnostic Specificity of CT for Early Detection of Lung Cancer: 4D CT-Based Pulmonary Nodule Elastometry

    DTIC Science & Technology

    2013-08-01

    transformation models, such as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the...research project. References: 1. Bookstein FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern...Rohr K, Stiehl HS, Sprengel R, Buzug TM, Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions

  14. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  15. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruberti, M.; Averbukh, V.; Decleva, P.

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for Gaussian basis calculations, such as Cooper minima and high-energy tails, are reproduced by the B-spline ADC in very good agreement with experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  16. Multicategorical Spline Model for Item Response Theory.

    ERIC Educational Resources Information Center

    Abrahamowicz, Michal; Ramsay, James O.

    1992-01-01

    A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)

  17. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties involved in its use.
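    The backward-elimination idea for knots can be sketched in Python (the report's programs are FORTRAN, and its elimination criterion is a statistical test; the simple RSS-tolerance rule below is a stand-in for that test):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def backward_knot_elimination(x, y, interior, k=3, rss_tol=1.05):
    """Greedily drop the interior knot whose removal inflates the residual
    sum of squares (RSS) least, as long as the RSS stays within rss_tol of
    the current fit; returns the surviving knots and the final RSS."""
    def rss(knts):
        t = np.r_[[x[0]] * (k + 1), knts, [x[-1]] * (k + 1)]
        spl = make_lsq_spline(x, y, t, k)
        return float(np.sum((y - spl(x)) ** 2))

    interior = list(interior)
    current = rss(interior)
    while interior:
        trials = [rss(interior[:i] + interior[i + 1:]) for i in range(len(interior))]
        i_best = int(np.argmin(trials))
        if trials[i_best] > rss_tol * current:
            break                      # no removal is cheap enough; stop
        interior.pop(i_best)
        current = trials[i_best]
    return interior, current

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 60)
y = np.abs(x - 0.5) ** 3 + 0.01 * rng.standard_normal(x.size)  # one "true" knot at 0.5
kept, final_rss = backward_knot_elimination(x, y, interior=[0.2, 0.4, 0.5, 0.6, 0.8])
```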

  18. Fitting multidimensional splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  19. Analyzing degradation data with a random effects spline regression model

    DOE PAGES

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    2017-03-17

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time and makes estimation of reliability easy to perform.

  20. Analyzing degradation data with a random effects spline regression model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time and makes estimation of reliability easy to perform.

  1. Numerical solution of system of boundary value problems using B-spline with free parameter

    NASA Astrophysics Data System (ADS)

    Gupta, Yogesh

    2017-01-01

    This paper deals with a B-spline method for the solution of a system of boundary value problems. Such differential equations are useful in various fields of science and engineering, and some interesting real-life problems involve more than one unknown function, resulting in systems of simultaneous differential equations that have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points, together with the equations of the given system and the boundary conditions, resulting in a linear matrix equation.

  2. Fully probabilistic control design in an adaptive critic framework.

    PubMed

    Herzallah, Randa; Kárný, Miroslav

    2011-12-01

    An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem; in particular, the very hard multivariate integration and the approximate interpolation of the involved multivariate functions. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this paper. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: first, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of the functional and irreducible errors, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used.
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
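The histogram (binning) technique for an optimal estimator with a single input parameter can be sketched as below: the best mean-square model is the conditional mean E[q|π], and the leftover variance is the irreducible error. The data are synthetic; with one parameter and ample samples the binning technique is accurate, and the paper's concern is the multi-parameter case:

```python
import numpy as np

# Synthetic "unclosed term" q driven by one input parameter pi plus noise.
rng = np.random.default_rng(4)
pi = rng.uniform(0.0, 1.0, 50_000)
q = np.sin(2 * np.pi * pi) + 0.2 * rng.standard_normal(pi.size)

# Histogram technique: estimate E[q | pi] by averaging q within pi-bins.
bins = np.linspace(0.0, 1.0, 51)
idx = np.clip(np.digitize(pi, bins) - 1, 0, len(bins) - 2)
cond_mean = np.bincount(idx, weights=q, minlength=50) / np.bincount(idx, minlength=50)

# Irreducible error: variance of q about the optimal estimator E[q | pi].
irreducible = np.mean((q - cond_mean[idx]) ** 2)
```

Here the true irreducible error is the noise variance 0.2² = 0.04; the binned estimate lands close to it, plus a small spurious within-bin contribution of the kind the paper analyzes.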

  4. An Examination of New Paradigms for Spline Approximations.

    PubMed

    Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A

    2006-01-01

    Lavery splines are examined in the univariate and bivariate cases. In both instances, relaxation-based algorithms for the approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, however, the univariate case is investigated in greater detail so as to further the understanding of the bivariate case.

  5. Conformal Solid T-spline Construction from Boundary T-spline Representations

    DTIC Science & Technology

    2012-07-01

    Conformal Solid T-spline Construction from Boundary T-spline Representations. ...Zhang's ONR-YIP award N00014-10-1-0698 and an ONR Grant N00014-08-1-0653. The work of T. J.R. Hughes was supported by ONR Grant N00014-08-1-0992, NSF GOALI CMI-0700807/0700204, NSF CMMI-1101007 and a SINTEF grant UTA10-000374. References: 1. M. Aigner, C. Heinrich, B. Jüttler, E. Pilgerstorfer, B

  6. Random regression analyses using B-splines to model growth of Australian Angus cattle

    PubMed Central

    Meyer, Karin

    2005-01-01

    Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
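    The quadratic B-spline basis with knots at 0, 200, 400, 600 and 821 days can be assembled into a design matrix with SciPy; this is a sketch of the basis construction only, not the mixed-model machinery of the analyses:

```python
import numpy as np
from scipy.interpolate import BSpline

# Quadratic (k=2) B-spline basis over age, clamped at the range ends,
# with the knot placement discussed above: 0, 200, 400, 600, 821 days.
k = 2
knots = np.concatenate(([0.0] * k, [0.0, 200.0, 400.0, 600.0, 821.0], [821.0] * k))
n_basis = len(knots) - k - 1          # 6 basis functions

ages = np.linspace(0.0, 820.0, 200)
X = np.column_stack([BSpline(knots, np.eye(n_basis)[i], k)(ages)
                     for i in range(n_basis)])

# Rows of X sum to one (partition of unity), so X can act as the
# covariate matrix of the random regression on age at recording.
```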

  7. Methods for Characterizing Fine Particulate Matter Using Satellite Remote-Sensing Data and Ground Observations: Potential Use for Environmental Public Health Surveillance

    NASA Technical Reports Server (NTRS)

    Al-Hamdan, Mohammad Z.; Crosson, William L.; Limaye, Ashutosh S.; Rickman, Douglas L.; Quattrochi, Dale A.; Estes, Maurice G.; Qualters, Judith R.; Niskar, Amanda S.; Sinclair, Amber H.; Tolsma, Dennis D.; hide

    2007-01-01

    This study describes and demonstrates different techniques for surfacing daily environmental hazards data of particulate matter with aerodynamic diameter less than or equal to 2.5 micrometers (PM2.5) for the purpose of integrating respiratory health and environmental data for the Centers for Disease Control and Prevention's (CDC's) pilot study of Health and Environment Linked for Information Exchange (HELIX)-Atlanta. It describes a methodology for estimating ground-level continuous PM2.5 concentrations using B-Spline and inverse distance weighting (IDW) surfacing techniques and leveraging National Aeronautics and Space Administration (NASA) Moderate Resolution Imaging Spectroradiometer (MODIS) data to complement U.S. Environmental Protection Agency (EPA) ground observation data. The study used measurements of ambient PM2.5 from the EPA database for the year 2003 as well as PM2.5 estimates derived from NASA's satellite data. Hazard data have been processed to derive the surrogate PM2.5 exposure estimates. The paper shows that merging MODIS remote sensing data with surface observations of PM2.5 not only provides a more complete daily representation of PM2.5 than either data set alone would allow, but also reduces the errors in the estimated PM2.5 surfaces. The results also show that the daily IDW PM2.5 surfaces had smaller errors, with respect to observations, than those of the B-Spline surfaces in the year studied. However, the IDW mean annual composite surface had more numerical artifacts, which could be due to the interpolating nature of IDW, which assumes that the maxima and minima can occur only at the observation points. Finally, the methods discussed in this paper improve temporal and spatial resolutions and establish a foundation for environmental public health linkage and association studies for which determining the concentrations of an environmental hazard such as PM2.5 with good accuracy is critical.

  8. Measurement and reconstruction of the leaflet geometry for a pericardial artificial heart valve.

    PubMed

    Jiang, Hongjun; Campbell, Gord; Xi, Fengfeng

    2005-03-01

    This paper describes the measurement and reconstruction of the leaflet geometry for a pericardial heart valve. Tasks involved include mapping the leaflet geometries by laser digitizing and reconstructing the 3D freeform leaflet surface based on a laser-scanned profile. The challenge is to design a prosthetic valve that maximizes the benefits offered to the recipient as compared to the normally operating naturally occurring valve. This research was prompted by the fact that artificial heart valve bioprostheses do not provide long-life durability comparable to the natural heart valve, together with the anticipated benefits associated with defining the valve geometries, especially the leaflet geometries for the bioprosthetic and human valves, in order to create a replicate valve fabricated from synthetic materials. Our method applies the concept of reverse engineering in order to reconstruct the freeform surface geometry. A Brown & Sharpe coordinate measuring machine (CMM) equipped with a HyMARC laser-digitizing system was used to measure the leaflet profiles of a Baxter Carpentier-Edwards pericardial heart valve. The computer software PolyWorks was used to pre-process the raw data obtained from the scanning, which included merging images, eliminating duplicate points, and adding interpolated points. Three methods - creating a mesh model from cloud points, creating a freeform surface from cloud points, and generating a freeform surface by B-splines - are presented in this paper to reconstruct the freeform leaflet surface. The mesh model created using PolyWorks can be used for rapid prototyping and visualization. Fitting a freeform surface to cloud points is straightforward, but the rendering of a smooth surface is usually unpredictable. A surface formed by a group of B-splines fitted to the cloud points was found to be much smoother. This method offers the possibility of manually adjusting the surface curvature locally.
However, the process is complex and requires additional manipulation. Finally, this paper presents a reverse engineered design for the pericardial heart valve which contains three identical leaflets with reconstructed geometry.

  9. Methods for characterizing fine particulate matter using ground observations and remotely sensed data: potential use for environmental public health surveillance.

    PubMed

    Al-Hamdan, Mohammad Z; Crosson, William L; Limaye, Ashutosh S; Rickman, Douglas L; Quattrochi, Dale A; Estes, Maurice G; Qualters, Judith R; Sinclair, Amber H; Tolsma, Dennis D; Adeniyi, Kafayat A; Niskar, Amanda Sue

    2009-07-01

    This study describes and demonstrates different techniques for surface fitting daily environmental hazards data of particulate matter with aerodynamic diameter less than or equal to 2.5 µm (PM2.5) for the purpose of integrating respiratory health and environmental data for the Centers for Disease Control and Prevention (CDC) pilot study of Health and Environment Linked for Information Exchange (HELIX)-Atlanta. It presents a methodology for estimating daily spatial surfaces of ground-level PM2.5 concentrations using the B-spline and inverse distance weighting (IDW) surface-fitting techniques, leveraging National Aeronautics and Space Administration (NASA) Moderate Resolution Imaging Spectroradiometer (MODIS) data to complement U.S. Environmental Protection Agency (EPA) ground observation data. The study used measurements of ambient PM2.5 from the EPA database for the year 2003 as well as PM2.5 estimates derived from NASA's satellite data. Hazard data have been processed to derive the surrogate PM2.5 exposure estimates. This paper shows that merging MODIS remote sensing data with surface observations of PM2.5 not only provides a more complete daily representation of PM2.5 than either dataset alone would allow, but it also reduces the errors in the PM2.5-estimated surfaces. The results of this study also show that although the IDW technique can introduce some numerical artifacts that could be due to its interpolating nature, which assumes that the maxima and minima can occur only at the observation points, the daily IDW PM2.5 surfaces had smaller errors in general, with respect to observations, than those of the B-spline surfaces. Finally, the methods discussed in this paper establish a foundation for environmental public health linkage and association studies for which determining the concentrations of an environmental hazard such as PM2.5 with high accuracy is critical.
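The interpolating property of IDW mentioned above (estimates are convex combinations of observations, so extrema can occur only at observation points) can be sketched in a few lines; the station layout and PM2.5 readings below are invented for illustration, not the study's data.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse distance weighting: each estimate is a convex combination of
    the observations, so interpolated values can never exceed the observed
    maximum or fall below the observed minimum."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)        # guard against division by zero at stations
    w = d ** -power
    return (w @ z_obs) / w.sum(axis=1)

# Hypothetical PM2.5 readings (ug/m^3) at four monitor locations.
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pm25 = np.array([10.0, 14.0, 12.0, 18.0])
est = idw(stations, pm25, np.array([[0.5, 0.5]]))
```

At the equidistant center point the estimate collapses to the plain mean of the four readings, which illustrates why IDW surfaces cannot reproduce peaks between stations.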

  10. Modeling the Spatial and Temporal Variation of Monthly and Seasonal Precipitation on the Nevada Test Site and Vicinity, 1960-2006

    USGS Publications Warehouse

    Blainey, Joan B.; Webb, Robert H.; Magirl, Christopher S.

    2007-01-01

    The Nevada Test Site (NTS), located in the climatic transition zone between the Mojave and Great Basin Deserts, has a network of precipitation gages that is unusually dense for this region. This network measures monthly and seasonal variation in a landscape with diverse topography. Precipitation data from 125 climate stations on or near the NTS were used to spatially interpolate precipitation for each month during the period of 1960 through 2006 at high spatial resolution (30 m). The data were collected at climate stations using manual and/or automated techniques. The spatial interpolation method, applied to monthly accumulations of precipitation, is based on a distance-weighted multivariate regression between the amount of precipitation and the station location and elevation. This report summarizes the temporal and spatial characteristics of the available precipitation records for the period 1960 to 2006, examines the temporal and spatial variability of precipitation during the period of record, and discusses some extremes in seasonal precipitation on the NTS.
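A distance-weighted multivariate regression of the kind described can be sketched as a locally weighted least-squares fit of precipitation on station location and elevation; the Gaussian weighting kernel and all station data below are assumptions for illustration, not the report's exact scheme.

```python
import numpy as np

def local_precip_estimate(xy, elev, precip, x0, tau=30.0):
    """Distance-weighted multivariate regression: fit precipitation as a linear
    function of easting, northing and elevation, down-weighting stations by
    their horizontal distance to the prediction point x0 = (e, n, elev)."""
    X = np.column_stack([np.ones(len(precip)), xy, elev])
    d = np.linalg.norm(xy - x0[:2], axis=1)
    w = np.exp(-(d / tau) ** 2)                 # Gaussian kernel (assumed form)
    beta, *_ = np.linalg.lstsq(w[:, None] * X, w * precip, rcond=None)
    return np.array([1.0, *x0]) @ beta

# Synthetic stations on a 100 km x 100 km domain with a linear precip field.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(20, 2))
elev = rng.uniform(1000.0, 2000.0, size=20)
precip = 5.0 + 0.02 * xy[:, 0] + 0.01 * elev
pred = local_precip_estimate(xy, elev, precip, np.array([50.0, 50.0, 1500.0]))
```

Repeating the fit at every 30 m grid cell, with elevation taken from a DEM, is what turns the station records into a high-resolution monthly precipitation surface.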

  11. Modeling positional effects of regulatory sequences with spline transformations increases prediction accuracy of deep neural networks

    PubMed Central

    Avsec, Žiga; Cheng, Jun; Gagneur, Julien

    2018-01-01

    Motivation: Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as the transcription start site, exon boundaries or the polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its ability to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Results: Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint prediction based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Availability and implementation: Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at https://github.com/gagneurlab/Manuscript_Avsec_Bioinformatics_2017. Contact: avsec@in.tum.de or gagneur@in.tum.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29155928
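A spline transformation layer amounts to expanding each distance into B-spline basis features and letting the network learn one weight per basis, giving a smooth learned function of distance. A minimal NumPy/SciPy sketch follows; the knot layout and basis count are illustrative choices, not the CONCISE implementation.

```python
import numpy as np
from scipy.interpolate import BSpline

def spline_features(x, n_bases=10, degree=3, x_min=0.0, x_max=1000.0):
    """Expand scalar distances into B-spline basis features; a downstream
    dense layer then learns one weight per basis function, i.e. a smooth
    function of distance."""
    n_inner = n_bases - degree + 1          # inner knots, endpoints included
    knots = np.r_[[x_min] * degree,
                  np.linspace(x_min, x_max, n_inner),
                  [x_max] * degree]         # clamped (open uniform) knot vector
    eye = np.eye(n_bases)
    return np.column_stack([BSpline(knots, eye[i], degree)(x)
                            for i in range(n_bases)])

# Toy distances (bp) to a genomic landmark; the scale is an arbitrary choice.
dist = np.array([5.0, 120.0, 640.0, 999.0])
F = spline_features(dist)
```

Because the basis functions are non-negative and sum to one at every point in the domain, a weighted sum of them is automatically a smooth, well-conditioned function of distance, which is the robustness advantage over raw or ReLU-composed distance features.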

  12. Batch settling curve registration via image data modeling.

    PubMed

    Derlon, Nicolas; Thürlimann, Christian; Dürrenmatt, David; Villez, Kris

    2017-05-01

    To this day, obtaining reliable characterization of sludge settling properties remains a challenging and time-consuming task. Without such assessments however, optimal design and operation of secondary settling tanks is challenging and conservative approaches will remain necessary. With this study, we show that automated sludge blanket height registration and zone settling velocity estimation is possible thanks to analysis of images taken during batch settling experiments. The experimental setup is particularly interesting for practical applications as it consists of off-the-shelf components only, no moving parts are required, and the software is released publicly. Furthermore, the proposed multivariate shape constrained spline model for image analysis appears to be a promising method for reliable sludge blanket height profile registration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Functional Generalized Structured Component Analysis.

    PubMed

    Suk, Hye Won; Hwang, Heungsun

    2016-12-01

    An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared toward the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that project infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, Functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of Functional GSCA is illustrated with gait data.

  14. Gear Spline Coupling Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yi; Errichello, Robert

    2013-08-29

    An analytical model is developed to evaluate the design of a spline coupling. For a given torque and shaft misalignment, the model calculates the number of teeth in contact, tooth loads, stiffnesses, stresses, and safety factors. The analytic model provides essential spline coupling design and modeling information and could be easily integrated into gearbox design and simulation tools.

  15. Design, Test, and Evaluation of a Transonic Axial Compressor Rotor with Splitter Blades

    DTIC Science & Technology

    2013-09-01

    [Excerpt from the report's list of figures: Figure 13, third-order spline fit for blade camber line distribution; Figure 14, third-order spline fit for blade thickness distribution; Figure 15, blade leading edge, third-order spline fit for thickness distribution; Figure 16, blade leading edge and trailing edge slope blending.]

  16. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce any aliasing and interpolation errors as is done by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering method. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab.
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
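The point that a resampled pixel's value depends on the interpolation method is easy to demonstrate; the toy 1D "surface profile" below is invented for illustration, using SciPy's interpolation kinds as stand-ins for Matlab's.

```python
import numpy as np
from scipy.interpolate import interp1d

# Toy 1D surface profile: a smooth shape plus alternating "measurement noise".
x = np.linspace(0.0, 1.0, 9)
h = np.sin(2.0 * np.pi * x) + 0.05 * (-1.0) ** np.arange(9)
xq = 0.3                                     # one resampled pixel location

# The same location resampled with three interpolation kinds gives three
# different pixel values.
vals = {kind: float(interp1d(x, h, kind=kind)(xq))
        for kind in ("nearest", "linear", "cubic")}
```

A Zernike-plus-PSD reconstruction avoids this ambiguity because the resampled map is evaluated from one analytical representation rather than from a method-dependent local stencil.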

  17. Modeling of time trends and interactions in vital rates using restricted regression splines.

    PubMed

    Heuer, C

    1997-03-01

    For the analysis of time trends in incidence and mortality rates, the age-period-cohort (apc) model has become a widely accepted method. The considered data are arranged in a two-way table by age group and calendar period, which are mostly subdivided into 5- or 10-year intervals. The disadvantage of this approach is the loss of information through data aggregation and the problems of estimating interactions in the two-way layout without replications. In this article we show how splines can be useful when yearly data, i.e., 1-year age groups and 1-year periods, are given. The estimated spline curves are still smooth and represent yearly changes in the time trends. Further, it is straightforward to include interaction terms by the tensor product of the spline functions. If the data are given in a nonrectangular table, e.g., 5-year age groups and 1-year periods, the period and cohort variables can be parameterized by splines, while the age variable is parameterized as fixed effect levels, which leads to a semiparametric apc model. An important methodological issue in developing the nonparametric and semiparametric models is stability of the estimated spline curve at the boundaries. Here cubic regression splines will be used, which are constrained to be linear in the tails. Another point of importance is the nonidentifiability problem due to the linear dependency of the three time variables. This will be handled by decomposing the basis of each spline by orthogonal projection into constant, linear, and nonlinear terms, as suggested by Holford (1983, Biometrics 39, 311-324) for the traditional apc model. The advantage of using splines for yearly data compared to the traditional approach for aggregated data is the more accurate curve estimation for the nonlinear trend changes and the simple way of modeling interactions between the time variables. The method is demonstrated with hypothetical data as well as with cancer mortality data.
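A cubic regression spline basis constrained to be linear in the tails (a natural cubic spline) can be built from truncated cubics; the sketch below follows the standard textbook construction, which may differ in detail from the article's parameterization, and the knot years are invented.

```python
import numpy as np

def natural_cubic_basis(x, knots):
    """Natural cubic regression spline basis (truncated-power construction):
    cubic between the knots but constrained to be linear beyond the boundary
    knots, which stabilizes the estimated curve in the tails."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(knots, dtype=float)
    def d(j):
        # Differences of truncated cubics; the quadratic and cubic terms
        # cancel beyond the last knot, leaving a linear tail.
        return (np.maximum(x - k[j], 0.0) ** 3
                - np.maximum(x - k[-1], 0.0) ** 3) / (k[-1] - k[j])
    cols = [np.ones_like(x), x]
    d_last = d(len(k) - 2)
    for j in range(len(k) - 2):
        cols.append(d(j) - d_last)
    return np.column_stack(cols)

period_knots = np.array([1970.0, 1980.0, 1990.0, 2000.0])
xs = np.array([2005.0, 2010.0, 2015.0])      # equally spaced, beyond last knot
B = natural_cubic_basis(xs, period_knots)
```

Because every basis column is exactly linear beyond the boundary knots, second differences of equally spaced evaluations in the tail vanish, which is the boundary-stability property the abstract refers to.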

  18. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1985-01-01

    Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator-theoretic formulation of the eigenvalue problem is derived and its spectral properties are investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.

  19. A gap-filling model for eddy covariance latent heat flux: Estimating evapotranspiration of a subtropical seasonal evergreen broad-leaved forest as an example

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ying; Chu, Chia-Ren; Li, Ming-Hsu

    2012-10-01

    In this paper we present a semi-parametric multivariate gap-filling model for tower-based measurement of latent heat flux (LE). Two statistical techniques, principal component analysis (PCA) and a nonlinear interpolation approach, were integrated into this LE gap-filling model. The PCA was first used to resolve the multicollinearity relationships among various environmental variables, including radiation, soil moisture deficit, leaf area index, wind speed, etc. Two nonlinear interpolation methods, multiple regression (MRS) and the K-nearest neighbors (KNN) approach, were examined with randomly selected flux gaps for both clear-sky and nighttime/cloudy data for incorporation into this LE gap-filling model. Experimental results indicated that the KNN interpolation approach is able to provide consistent LE estimations, while MRS overestimates during nighttime/cloudy conditions. Rather than using empirical regression parameters, the KNN approach resolves the nonlinear relationship between the gap-filled LE flux and the principal components with adaptive K values under different atmospheric states. The developed LE gap-filling model (PCA with KNN) works with an RMSE of 2.4 W m-2 (˜0.09 mm day-1) at a weekly time scale when 40% artificial flux gaps are added to the original dataset. Annual evapotranspiration at this study site was estimated at 736 mm (1803 MJ) and 728 mm (1785 MJ) for the years 2008 and 2009, respectively.
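The PCA-plus-KNN gap-filling idea can be sketched as below; the synthetic driver data and the plain-mean KNN estimate are illustrative simplifications of the paper's model, not its actual variables or adaptive-K scheme.

```python
import numpy as np

def pca_knn_fill(X_env, le, gap_mask, k=5, n_pc=3):
    """Fill LE gaps: standardize the environmental drivers, project them onto
    the leading principal components, and estimate each missing LE value as
    the mean LE of the k nearest gap-free records in PC space."""
    Xs = (X_env - X_env.mean(axis=0)) / X_env.std(axis=0)
    _, _, Vt = np.linalg.svd(Xs[~gap_mask], full_matrices=False)
    pcs = Xs @ Vt[:n_pc].T                      # PC scores for all records
    filled = le.copy()
    obs_pcs, obs_le = pcs[~gap_mask], le[~gap_mask]
    for i in np.where(gap_mask)[0]:
        nn = np.argsort(np.linalg.norm(obs_pcs - pcs[i], axis=1))[:k]
        filled[i] = obs_le[nn].mean()
    return filled

# Synthetic drivers (e.g. radiation, soil moisture deficit, LAI, wind) and LE.
rng = np.random.default_rng(1)
X_env = rng.normal(size=(200, 4))
le_true = 100.0 + 50.0 * np.tanh(X_env[:, 0]) + 10.0 * X_env[:, 1]
gap_mask = rng.random(200) < 0.2                # ~20% artificial gaps
le_obs = le_true.copy()
le_obs[gap_mask] = np.nan
le_filled = pca_knn_fill(X_env, le_obs, gap_mask)
```

Because each filled value is an average of observed fluxes from similar atmospheric states, the estimates stay within the observed range, unlike a regression that can extrapolate at night.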

  20. The parametrization of radio source coordinates in VLBI and its impact on the CRF

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-04-01

    Usually celestial radio sources in the celestial reference frame (CRF) catalog are divided into three categories: defining, special handling, and others. The defining sources are those used for the datum realization of the celestial reference frame, i.e. they are included in the No-Net-Rotation (NNR) constraints to maintain the axis orientation of the CRF, and are modeled with one set of totally constant coordinates. At the current level of precision, the choice of the defining sources has a significant effect on the coordinates. For the ICRF2, 295 sources were chosen as defining sources, based on their geometrical distribution, statistical properties, and stability. The number of defining sources is a compromise between the reliability of the datum, which increases with the number of sources, and the noise which is introduced by each source. Thus, the optimal number of defining sources is a trade-off between reliability, geometry, and precision. In the ICRF2 only 39 sources were sorted into the special handling group, as they show large fluctuations in their position; therefore they are excluded from the NNR conditions and their positions are normally estimated for each VLBI session instead of as global parameters. However, a large fraction of these unstable sources show other favorable characteristics, e.g. large flux density (brightness) and a long history of observations. Thus, it would prove advantageous to include these sources in the NNR condition, but their instability inhibits this. If the coordinate model of these sources were extended, it would be possible to use them for the NNR condition as well. All remaining sources are placed in the "others" group; this is the largest group, containing those sources which have not shown any very problematic behavior but still do not fulfill the requirements for defining sources.
    Studies show that the behavior of each source can vary dramatically in time; hence, each source would have to be modeled individually. However, the sheer number of sources (more than 600 are included in our study) sets practical limitations. We decided to use the multivariate adaptive regression splines (MARS) procedure to parametrize the source coordinates, as it allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way. The algorithm finds the ideal knot positions for the splines and thus the best number of polynomial pieces to fit the data. We compare linear and cubic splines determined by MARS with manually determined linear splines, and assess their impact on the CRF. Within this work we try to answer the following questions: How can we find optimal criteria for the definition of the defining and unstable sources? What are the best polynomials for the individual categories? How much can we improve the CRF by extending the parametrization of the sources?
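The core of a MARS-style knot search is trying candidate hinge positions and keeping the one with the smallest least-squares residual; this single-hinge sketch is a simplification of the full recursive-partitioning algorithm, and the data are synthetic, not source coordinates.

```python
import numpy as np

def best_hinge_knot(x, y, n_candidates=50):
    """One forward step of a MARS-style search (simplified): scan candidate
    knot positions t for a single hinge term max(x - t, 0) and keep the knot
    whose least-squares fit has the smallest residual sum of squares."""
    best_sse, best_t = np.inf, None
    for t in np.linspace(x.min(), x.max(), n_candidates)[1:-1]:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(((X @ coef - y) ** 2).sum())
        if sse < best_sse:
            best_sse, best_t = sse, t
    return best_t

# Noiseless piecewise-linear data with a slope change at x = 3.
x = np.linspace(0.0, 10.0, 200)
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - 3.0, 0.0)
knot = best_hinge_knot(x, y)
```

Repeating this greedy step, adding one hinge (or hinge pair) at a time, is what lets MARS place spline knots automatically for hundreds of source coordinate time series without manual inspection.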

  1. The Role of Auxiliary Variables in Deterministic and Deterministic-Stochastic Spatial Models of Air Temperature in Poland

    NASA Astrophysics Data System (ADS)

    Szymanowski, Mariusz; Kryza, Maciej

    2017-02-01

    Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are those most frequently applied for spatialization of air temperature, and in many studies their results have proved better than those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of spatial interpolation. The main goal of the paper was to examine both of the above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to a 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form (MLRK and GWRK, respectively), were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered.
Usually, including more than two or three of the most significantly correlated auxiliary variables does not improve the quality of the spatial model. The effects of introduction of certain variables into the model were not climatologically justified and were seen on maps as unexpected and undesired artefacts. The results confirm, in accordance with previous studies, that in the case of air temperature distribution, the spatial process is non-stationary; thus, the local GWR model performs better than the global MLR if they are specified using the same set of auxiliary variables. If only GWR residuals are autocorrelated, the geographically weighted regression-kriging (GWRK) model seems to be optimal for air temperature spatial interpolation.

  2. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  3. Spline curve matching with sparse knot sets

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2004-01-01

    This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on relative locations of corresponding knot points such that is reliable primarily for dense point sets, we use deformation energy of thin-plate-spline mapping between sparse knot points and normalized local...

  4. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there has been demand, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
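The first (bisection) step can be sketched as below; the recursive per-segment cubic fit and the plain least-squares spline standing in for step 2 are simplified stand-ins for the paper's method, which additionally optimizes knot locations and continuity levels.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def coarse_knots(x, y, tol):
    """Step 1 (sketch): recursively bisect the data wherever a single
    least-squares cubic on the segment exceeds the allowable error, keeping
    the split points as coarse knots."""
    knots = []
    def split(i, j):
        xs = x[i:j] - x[i]                      # center for conditioning
        c = np.polyfit(xs, y[i:j], 3)
        if np.abs(np.polyval(c, xs) - y[i:j]).max() > tol and j - i > 8:
            m = (i + j) // 2
            knots.append(x[m])
            split(i, m)
            split(m, j)
    split(0, len(x))
    return np.sort(np.array(knots))

x = np.linspace(0.0, 1.0, 400)
y = np.sin(4.0 * np.pi * x)                     # toy curve to reconstruct
t = coarse_knots(x, y, tol=1e-3)
# Step 2 here is just a least-squares spline on the coarse knots.
spl = LSQUnivariateSpline(x, y, t, k=3)
max_err = float(np.abs(spl(x) - y).max())
```

The bisection naturally concentrates knots where a single cubic cannot follow the data, which is the behavior needed near cusps, turning points and discontinuities.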

  5. Spectroscopic ellipsometry data inversion using constrained splines and application to characterization of ZnO with various morphologies

    NASA Astrophysics Data System (ADS)

    Gilliot, Mickaël; Hadjadj, Aomar; Stchakovsky, Michel

    2017-11-01

    An original method of ellipsometric data inversion is proposed based on the use of constrained splines. The imaginary part of the dielectric function is represented by a series of splines, constructed with particular constraints on slopes at the node boundaries to avoid the well-known oscillations of natural splines. The nodes are used as fit parameters. The real part is calculated using Kramers-Kronig relations. The inversion can be performed in successive inversion steps with increasing resolution. This method is used to characterize thin zinc oxide layers obtained by a sol-gel and spin-coating process, with a particular recipe yielding very thin layers presenting nano-porosity. Such layers have particular optical properties correlated with thickness, morphological and structural properties. The use of the constrained spline method is particularly efficient for such materials, which may not be easily represented by standard dielectric function models.

  6. Quantitative monitoring of sucrose, reducing sugar and total sugar dynamics for phenotyping of water-deficit stress tolerance in rice through spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Das, Bappa; Sahoo, Rabi N.; Pargal, Sourabh; Krishna, Gopal; Verma, Rakesh; Chinnusamy, Viswanathan; Sehgal, Vinay K.; Gupta, Vinod K.; Dash, Sushanta K.; Swain, Padmini

    2018-03-01

    In the present investigation, the changes in sucrose, reducing and total sugar content due to water-deficit stress in rice leaves were modeled using visible, near infrared (VNIR) and shortwave infrared (SWIR) spectroscopy. The objectives of the study were to identify the best vegetation indices and the most suitable multivariate technique based on precise analysis of hyperspectral data (350 to 2500 nm) and sucrose, reducing sugar and total sugar content measured at different stress levels from 16 different rice genotypes. Spectral data analysis was done to identify suitable spectral indices and models for sucrose estimation. Novel spectral indices in the near infrared (NIR) range, viz. the ratio spectral index (RSI) and normalised difference spectral indices (NDSI), sensitive to sucrose, reducing sugar and total sugar content were identified and subsequently calibrated and validated. The RSI and NDSI models had R2 values of 0.65, 0.71 and 0.67 and RPD values of 1.68, 1.95 and 1.66 for sucrose, reducing sugar and total sugar, respectively, for the validation dataset. Different multivariate spectral models such as artificial neural network (ANN), multivariate adaptive regression splines (MARS), multiple linear regression (MLR), partial least square regression (PLSR), random forest regression (RFR) and support vector machine regression (SVMR) were also evaluated. The best performing multivariate models for sucrose, reducing sugars and total sugars were found to be MARS, ANN and MARS, respectively, with RPD values of 2.08, 2.44, and 1.93. Results indicated that VNIR and SWIR spectroscopy combined with multivariate calibration can be used as a reliable alternative to conventional methods for measurement of sucrose, reducing sugars and total sugars of rice under water-deficit stress, as this technique is fast, economical, and noninvasive.
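RSI and NDSI are standard two-band index forms; a minimal sketch follows, where the reflectance values and the band pair are placeholders, not the sensitive wavelengths identified in the study.

```python
import numpy as np

def rsi(r_a, r_b):
    """Ratio spectral index of two band reflectances."""
    return r_a / r_b

def ndsi(r_a, r_b):
    """Normalised difference spectral index; lies in [-1, 1] for
    non-negative reflectances."""
    return (r_a - r_b) / (r_a + r_b)

# Placeholder NIR reflectances at two hypothetical wavelengths.
r_band1, r_band2 = 0.60, 0.40
```

In index screening, every band pair in the 350 to 2500 nm range is evaluated this way and the pair whose index correlates best with the measured sugar content is kept for calibration.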

  7. Spherical demons: fast diffeomorphic landmark-free surface registration.

    PubMed

    Yeo, B T Thomas; Sabuncu, Mert R; Vercauteren, Tom; Ayache, Nicholas; Fischl, Bruce; Golland, Polina

    2010-03-01

    We present the Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified Demons objective function can be efficiently approximated on the sphere using iterative smoothing. Based on one-parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast. The Spherical Demons algorithm can also be modified to register a given spherical image to a probabilistic atlas. We demonstrate two variants of the algorithm corresponding to warping the atlas or warping the subject. Registration of a cortical surface mesh to an atlas mesh, both with more than 160k nodes, requires less than 5 min when warping the atlas and less than 3 min when warping the subject on a Xeon 3.2 GHz single-processor machine. This is comparable to the fastest nondiffeomorphic landmark-free surface registration algorithms. Furthermore, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different applications that use registration to transfer segmentation labels onto a new image: 1) parcellation of in vivo cortical surfaces and 2) Brodmann area localization in ex vivo cortical surfaces.

  8. ASM Based Synthesis of Handwritten Arabic Text Pages

    PubMed Central

    Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for the case of Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents and detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data is available. PMID:26295059

  9. Enhancing Deep-Water Low-Resolution Gridded Bathymetry Using Single Image Super-Resolution

    NASA Astrophysics Data System (ADS)

    Elmore, P. A.; Nock, K.; Bonanno, D.; Smith, L.; Ferrini, V. L.; Petry, F. E.

    2017-12-01

    We present research to employ single-image super-resolution (SISR) algorithms to enhance knowledge of the seafloor using the 1-minute GEBCO 2014 grid when 100 m grids from high-resolution sonar systems are available for training. We performed numerical x15 upscaling experiments on the GEBCO grid in three areas of the Eastern Pacific Ocean along mid-ocean ridge systems where we have these 100 m gridded bathymetry data sets, which we accept as ground truth. We show that four SISR algorithms can enhance this low-resolution knowledge of bathymetry relative to bicubic or Spline-In-Tension upscaling under these conditions: 1) rough topography is present in both training and testing areas and 2) the range of depths and features in the training area contains the range of depths in the enhancement area. We judged SISR enhancement successful versus bicubic interpolation when Student's t-tests showed significant improvement of the root-mean-square error (RMSE) between upscaled bathymetry and the 100 m gridded ground-truth bathymetry at p < 0.05. In addition, we found evidence that random-forest-based SISR methods may provide more robust enhancements than non-forest-based SISR algorithms.
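    The baseline the study compares against, simple interpolation upscaling judged by RMSE, can be sketched as follows. This is a hedged illustration on a synthetic grid (bilinear rather than bicubic, and no t-test), not the GEBCO or 100 m sonar data:

```python
import numpy as np

def bilinear_upscale(z, factor):
    """Separable bilinear upscaling of a 2-D grid by an integer factor."""
    ny, nx = z.shape
    yf = np.linspace(0, ny - 1, ny * factor)
    xf = np.linspace(0, nx - 1, nx * factor)
    # Interpolate along rows, then along columns.
    rows = np.array([np.interp(xf, np.arange(nx), r) for r in z])
    return np.array([np.interp(yf, np.arange(ny), c) for c in rows.T]).T

def rmse(a, b):
    """Root-mean-square error between an upscaled grid and ground truth."""
    return np.sqrt(np.mean((a - b) ** 2))
```

    For a depth surface that is exactly planar, bilinear upscaling is error-free; for rough ridge topography, the residual RMSE is the quantity the study's hypothesis tests compare against the SISR results.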

  10. On the geodetic applications of simultaneous range-differencing to LAGEOS

    NASA Technical Reports Server (NTRS)

    Pablis, E. C.

    1982-01-01

    The possibility of improving the accuracy of geodetic results by use of simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network and satellite-pass configurations. Least-squares approximation using monomials and Chebyshev polynomials is compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track, and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observation show that for baseline-length estimation the most useful data are those collected in a direction parallel to the baseline and at a low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but also further reduces the effects of model biases on the results as opposed to a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.

  11. Speleothem stable isotope records for east-central Europe: resampling sedimentary proxy records to obtain evenly spaced time series with spectral guidance

    NASA Astrophysics Data System (ADS)

    Gábor Hatvani, István; Kern, Zoltán; Leél-Őssy, Szabolcs; Demény, Attila

    2018-01-01

    Uneven spacing is a common feature of sedimentary paleoclimate records, in many cases causing difficulties in the application of classical statistical and time series methods. Although special statistical tools exist to assess unevenly spaced data directly, the transformation of such data into a temporally equidistant time series, which may then be examined using commonly employed statistical tools, has remained an unachieved goal. The present paper therefore introduces an approach to obtain evenly spaced time series (using cubic spline fitting) from unevenly spaced speleothem records, with spectral guidance applied to avoid the spectral bias caused by interpolation and to retain the original spectral characteristics of the data. The methodology was applied to stable carbon and oxygen isotope records derived from two stalagmites from the Baradla Cave (NE Hungary) dating back to the late 18th century. To show the benefit of the equally spaced records to climate studies, their coherence with climate parameters is explored using wavelet transform coherence and discussed. The obtained equally spaced time series are available at https://doi.org/10.1594/PANGAEA.875917.
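    The core resampling step, fitting a cubic spline through unevenly spaced samples and evaluating it on an equidistant grid, can be sketched in numpy. This is a natural cubic spline on synthetic data; the paper's spectral-guidance step is not reproduced:

```python
import numpy as np

def natural_cubic_spline(t, y, t_new):
    """Interpolate samples (t, y) at times t_new with a natural cubic spline.

    t must be strictly increasing; t_new should lie within [t[0], t[-1]].
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t)
    h = np.diff(t)
    # Solve the tridiagonal system for the second derivatives m,
    # with natural boundary conditions m[0] = m[-1] = 0.
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    m = np.linalg.solve(A, rhs)
    # Evaluate the piecewise cubic on the (typically evenly spaced) grid.
    idx = np.clip(np.searchsorted(t, t_new) - 1, 0, n - 2)
    dt = t_new - t[idx]
    hi = h[idx]
    b = (y[idx + 1] - y[idx]) / hi - hi * (2.0 * m[idx] + m[idx + 1]) / 6.0
    return y[idx] + b * dt + m[idx] / 2.0 * dt**2 + (m[idx + 1] - m[idx]) / (6.0 * hi) * dt**3
```

    Evaluating the spline on `np.linspace(t[0], t[-1], N)` yields the equidistant series that standard spectral and time series tools expect.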

  12. Nucleation of Super-Critical Carbon Dioxide in a Venturi Nozzle

    NASA Astrophysics Data System (ADS)

    Jarrahbashi, Dorrin; Pidaparti, Sandeep; Ranjan, Devesh

    2015-11-01

    The supercritical carbon dioxide (S-CO2) Brayton cycle combines the primary advantages of the ideal Brayton and Rankine cycles by utilizing CO2 above its critical pressure. In addition to single-phase operation and small back-work ratios, supercritical fluids offer other advantages, e.g. heat transfer augmentation and low specific volume. Pressure reduction at the entrance of the compressor may cause homogeneous nucleation, vapor production, and collapse of bubbles due to operation near the saturation conditions. Transient behavior of the flow after nucleation may cause serious issues in operation of the cycle and affect the materials used in design. The flow of S-CO2 through a venturi nozzle near the critical point has been studied. A transient compressible 3D Navier-Stokes solver, coupled with the continuity and energy equations, has been used. FIT libraries based on a piecewise biquintic spline interpolation of Helmholtz energy have been developed and integrated with OpenFOAM to model S-CO2 properties. The mass fraction of vapor created in the venturi has been calculated using the homogeneous equilibrium model (HEM). The flow conditions that lead to nucleation have been investigated, and the sensitivity of nucleation to the inlet pressure and temperature, flow rate, and venturi profile has been shown.

  13. Spherical Demons: Fast Diffeomorphic Landmark-Free Surface Registration

    PubMed Central

    Yeo, B.T. Thomas; Sabuncu, Mert R.; Vercauteren, Tom; Ayache, Nicholas; Fischl, Bruce; Golland, Polina

    2010-01-01

    We present the Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified Demons objective function can be efficiently approximated on the sphere using iterative smoothing. Based on one-parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast. The Spherical Demons algorithm can also be modified to register a given spherical image to a probabilistic atlas. We demonstrate two variants of the algorithm corresponding to warping the atlas or warping the subject. Registration of a cortical surface mesh to an atlas mesh, both with more than 160k nodes, requires less than 5 minutes when warping the atlas and less than 3 minutes when warping the subject on a Xeon 3.2 GHz single-processor machine. This is comparable to the fastest non-diffeomorphic landmark-free surface registration algorithms. Furthermore, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different applications that use registration to transfer segmentation labels onto a new image: (1) parcellation of in vivo cortical surfaces and (2) Brodmann area localization in ex vivo cortical surfaces. PMID:19709963

  14. Comparison of motion correction techniques applied to functional near-infrared spectroscopy data from children

    NASA Astrophysics Data System (ADS)

    Hu, Xiao-Su; Arredondo, Maria M.; Gomba, Megan; Confer, Nicole; DaSilva, Alexandre F.; Johnson, Timothy D.; Shalinsky, Mark; Kovelman, Ioulia

    2015-12-01

    Motion artifacts are the most significant sources of noise in pediatric brain imaging designs and data analyses, especially in applications of functional near-infrared spectroscopy (fNIRS), where they can severely degrade the quality of the acquired data. Different methods have been developed to correct motion artifacts in fNIRS data, but the relative effectiveness of these methods for data from child and infant subjects (which are often found to be significantly noisier than adult data) remains largely unexplored. The issue is further complicated by the heterogeneity of fNIRS data artifacts. We compared the efficacy of the six most prevalent motion artifact correction techniques on fNIRS data acquired from children participating in a language acquisition task: wavelet, spline interpolation, principal component analysis, moving average (MA), correlation-based signal improvement, and a combination of wavelet and MA. Evaluation of five predefined metrics suggests that the MA and wavelet methods yield the best outcomes. These findings elucidate the varied nature of fNIRS data artifacts and the efficacy of artifact correction methods with pediatric populations, and help inform both the theory and practice of optical brain imaging analysis.

  15. Optimization of Premix Powders for Tableting Use.

    PubMed

    Todo, Hiroaki; Sato, Kazuki; Takayama, Kozo; Sugibayashi, Kenji

    2018-05-08

    Direct compression is a popular choice because it provides the simplest way to prepare a tablet, and it can be easily adopted when the active pharmaceutical ingredient (API) is unstable in water or under thermal drying. An optimal formulation of preliminarily mixed powders (premix powders), prepared in advance for tableting use, is therefore beneficial. The aim of this study was to find the optimal formulation of premix powders composed of lactose (LAC), cornstarch (CS), and microcrystalline cellulose (MCC) using statistical techniques. Based on the "Quality by Design" concept, a (3,3)-simplex lattice design consisting of the three components LAC, CS, and MCC was employed to prepare the model premix powders. A response surface method incorporating thin-plate spline interpolation (RSM-S) was applied to estimate the optimum premix powder for tableting use. The effect of tablet shape, identified by the surface curvature, on the optimization was investigated. The optimum premix powder was effective when applied with a small quantity of API, although its function was limited for formulations containing a large amount of API. Statistical techniques are valuable for exploiting new functions of well-known materials such as LAC, CS, and MCC.

  16. ASM Based Synthesis of Handwritten Arabic Text Pages.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever sufficient naturally ground-truthed data are unavailable.

  17. Adaptive finite element modelling of three-dimensional magnetotelluric fields in general anisotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Xu, Zhenhuan; Li, Yuguo

    2018-04-01

    We present a goal-oriented adaptive finite element (FE) modelling algorithm for 3-D magnetotelluric fields in generally anisotropic conductivity media. The model consists of a background layered structure containing anisotropic blocks; each block and layer may be anisotropic, with a 3 × 3 conductivity tensor assigned to it. The second-order partial differential equations are solved using the adaptive finite element method (FEM). The computational domain is subdivided into unstructured tetrahedral elements, which allow for complex geometries including bathymetry and dipping interfaces. The grid refinement process is guided by a global a posteriori error estimator and is performed iteratively. The system of linear FE equations for the electric field E is solved with the direct solver MUMPS. The magnetic field H is then found, with the required derivatives computed numerically using cubic spline interpolation. The 3-D FE algorithm has been validated by comparisons with both a 3-D finite-difference solution and 2-D FE results. Two model types are used to demonstrate the effects of anisotropy upon 3-D magnetotelluric responses: horizontal and dipping anisotropy. Finally, a 3-D sea hill model is simulated to study the effects of oblique interfaces and dipping anisotropy.

  18. General Flow-Solver Code for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Dorney, Daniel; Sondak, Douglas

    2006-01-01

    Phantom is a computer code intended primarily for real-fluid turbomachinery problems. It is based on Corsair, an ideal-gas turbomachinery code developed by the same authors, which evolved from the ROTOR codes at NASA Ames. Phantom is applicable to real and ideal fluids, both compressible and incompressible, flowing at subsonic, transonic, and supersonic speeds. It utilizes structured, overset, O- and H-type zonal grids to discretize flow fields and represent the relative motions of components. Values on grid boundaries are updated at each time step by bilinear interpolation from adjacent grids. Inviscid fluxes are calculated to third-order spatial accuracy using Roe's scheme. Viscous fluxes are calculated using second-order-accurate central differences. The code is second-order accurate in time. Turbulence is represented by a modified Baldwin-Lomax algebraic model. The code offers two options for determining fluid properties: one is based on equations of state, thermodynamic departure functions, and corresponding-states principles; the other, which is more efficient, is based on splines generated from tables of properties of real fluids. Phantom currently contains fluid-property routines for water, hydrogen, oxygen, nitrogen, kerosene, methane, and carbon monoxide, as well as ideal gases.

  19. Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.

    PubMed

    Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood

    2016-01-01

    Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. A conventional CT scanner with one detector array was modeled using the MCNPX MC code. The MC-calculated photon fluence in the detector arrays was used for image reconstruction of a simple water phantom as well as of a polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method, using a Hann filter and spline interpolation. Using the MC results, we obtained the dose-response curve for images of irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in CT number with increasing absorbed dose for the studied gel. Our results also show that the current MC model of a CT scanner can be used for further studies of the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques, in X-ray CT gel dosimetry.
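    The Hann-apodized ramp filter used in filtered back-projection can be sketched as follows. This shows only the filter construction and its application to one sinogram row; the MCNPX simulation and the reconstruction geometry are beyond a short example:

```python
import numpy as np

def hann_ramp_filter(n):
    """Frequency response of a ramp filter apodized by a Hann window,
    laid out to match np.fft.fftfreq ordering."""
    freqs = np.fft.fftfreq(n)                        # cycles/sample, |f| <= 0.5
    ramp = np.abs(freqs)                             # the ideal FBP ramp
    hann = 0.5 * (1.0 + np.cos(2.0 * np.pi * freqs)) # 1 at DC, 0 at Nyquist
    return ramp * hann

def filter_projection(proj):
    """Apply the filter to one sinogram row before back-projection."""
    H = hann_ramp_filter(len(proj))
    return np.real(np.fft.ifft(np.fft.fft(proj) * H))
```

    Because the ramp is zero at DC, a constant projection filters to zero; the Hann window suppresses the high frequencies that would otherwise amplify noise in the reconstructed gel images.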

  20. Transfer of mechanical energy during the shot put.

    PubMed

    Błażkiewicz, Michalina; Łysoń, Barbara; Chmielewski, Adam; Wit, Andrzej

    2016-09-01

    The aim of this study was to analyse the transfer of mechanical energy between body segments during the glide shot put. A group of eight elite throwers from the Polish National Team was analysed in the study. Motion analysis of each throw was recorded using an optoelectronic Vicon system composed of nine infrared camcorders and Kistler force plates. Power and energy were computed for the final acceleration phase of the glide shot put. The data were time-normalized using a fifth-order spline algorithm, with values interpolated with respect to the percentage of total time, since the duration of the final shot acceleration movement differed for each putter. Statistically significant transfer was found in the study group between the following segments: Right Knee - Right Hip (p = 0.0035), Left Hip - Torso (p = 0.0201), Torso - Right Shoulder (p = 0.0122) and Right Elbow - Right Wrist (p = 0.0001). Furthermore, the results of cluster analysis showed that the kinetic chain used during the final shot acceleration movement followed two different models. Differences between the groups were revealed mainly in the energy generated by the hips and trunk.
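    The time-normalization step, resampling each trial onto a common 0-100% base so putters with different movement durations can be compared, can be sketched with linear interpolation as a simpler stand-in for the fifth-order spline used in the study:

```python
import numpy as np

def time_normalize(signal, n_points=101):
    """Resample one trial onto a 0-100% time base.

    Linear interpolation (np.interp) stands in here for the fifth-order
    spline fit used in the paper; the resampled grid is the same.
    """
    src = np.linspace(0.0, 100.0, len(signal))   # original sample times, as %
    dst = np.linspace(0.0, 100.0, n_points)      # common percentage grid
    return np.interp(dst, src, signal)
```

    Trials of any length map onto the same 101-point grid, after which per-percent statistics (means, transfers, cluster analysis) can be computed across subjects.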

  1. A Semi-parametric Multivariate Gap-filling Model for Eddy Covariance Latent Heat Flux

    NASA Astrophysics Data System (ADS)

    Li, M.; Chen, Y.

    2010-12-01

    Quantitative descriptions of latent heat fluxes are important for studying the water and energy exchanges between terrestrial ecosystems and the atmosphere. Eddy covariance approaches have been recognized as the most reliable technique for measuring surface fluxes over time scales ranging from hours to years. However, unfavorable micrometeorological conditions, instrument failures, and measurement limitations cause inevitable flux gaps in time series data, so the development and application of suitable gap-filling techniques are crucial for estimating long-term fluxes. In this study, a semi-parametric multivariate gap-filling model was developed to fill latent heat flux gaps in eddy covariance measurements. Our approach combines the advantages of a multivariate statistical analysis (principal component analysis, PCA) and a nonlinear interpolation technique (K-nearest neighbors, KNN). The PCA method was first used to resolve the multicollinearity among various hydrometeorological factors, such as radiation, soil moisture deficit, LAI, and wind speed. The KNN method was then applied as a nonlinear interpolation tool to estimate each flux gap as the weighted sum of the latent heat fluxes of the K nearest neighbors in the principal-component domain. Two years, 2008 and 2009, of eddy covariance and hydrometeorological data from a subtropical mixed evergreen forest (the Lien-Hua-Chih site) were collected to calibrate and validate the proposed approach with artificial gaps after standard QC/QA procedures. The optimal K values and weighting factors were determined by a maximum likelihood test. The gap-filled latent heat fluxes show that the developed model successfully preserves energy balance at daily, monthly, and yearly time scales. Annual evapotranspiration from this forest was 747 mm and 708 mm for 2008 and 2009, respectively. Nocturnal evapotranspiration was estimated with the filled gaps, and the results are comparable with other studies. Seasonal and daily variability of latent heat fluxes are also discussed.
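    A minimal sketch of the PCA-plus-KNN idea, with synthetic predictors standing in for the radiation, soil-moisture, LAI, and wind-speed drivers; the maximum-likelihood tuning of K and the weights described in the abstract is omitted:

```python
import numpy as np

def pca_knn_gapfill(X, y, x_query, n_pc=2, k=5):
    """Fill one flux gap: project predictors onto principal components,
    then take an inverse-distance-weighted mean of the k nearest
    training fluxes in PC space.

    X: (n, p) predictor matrix for gap-free records, y: (n,) fluxes,
    x_query: (p,) predictors at the gap.
    """
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd                            # standardize predictors
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)  # PCA via SVD
    P = Z @ Vt[:n_pc].T                          # training scores
    q = ((x_query - mu) / sd) @ Vt[:n_pc].T      # query score
    d = np.linalg.norm(P - q, axis=1)
    nn = np.argsort(d)[:k]                       # k nearest neighbors
    w = 1.0 / (d[nn] + 1e-12)                    # inverse-distance weights
    return np.sum(w * y[nn]) / np.sum(w)
```

    Working in the PC domain rather than raw predictor space is what resolves the multicollinearity among the hydrometeorological drivers before the neighbor search.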

  2. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  3. Ship Detection and Measurement of Ship Motion by Multi-Aperture Synthetic Aperture Radar

    DTIC Science & Technology

    2014-06-01

    [Excerpt from the report's list of figures] (a) Reconstructed periodic components of the Doppler histories shown in Fig. 27; (b) splined harmonic-component amplitudes as a function of range. Figure 42: (a) Reconstructed periodic components of the Doppler histories shown in Figure 30; (b) splined amplitudes of the harmonic components. Figure 44: Ship focusing by standard...

  4. Interactive Exploration of Big Scientific Data: New Representations and Techniques.

    PubMed

    Hjelmervik, Jon M; Barrowclough, Oliver J D

    2016-01-01

    Although splines have been in popular use in CAD for more than half a century, spline research is still an active field, driven by the challenges we are facing today within isogeometric analysis and big data. Splines are likely to play a vital future role in enabling effective big data exploration techniques in 3D, 4D, and beyond.

  5. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    NASA Astrophysics Data System (ADS)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    In longitudinal data with two responses, correlation occurs between measurements on the same subject and between the responses, causing autocorrelation of the errors; this can be accounted for using a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without a covariance matrix.
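    A one-dimensional penalized spline can be sketched with a truncated-power cubic basis and a ridge penalty on the knot coefficients (a common P-spline variant). The knots, smoothing parameter, and data below are illustrative, and the paper's weighting by a covariance matrix is not reproduced:

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam):
    """Fit a penalized regression spline.

    Basis: [1, x, x^2, x^3] plus truncated cubic terms (x - k)_+^3 per knot.
    The ridge penalty lam acts only on the knot coefficients, so larger
    lam pulls the fit toward a plain cubic polynomial (a smoother curve).
    """
    B = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                        [np.clip(x - k, 0.0, None) ** 3 for k in knots])
    D = np.diag([0.0] * 4 + [1.0] * len(knots))  # penalize knot terms only
    beta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B @ beta, beta
```

    Because the penalty leaves the polynomial part untouched, data that are exactly cubic are reproduced with zero knot coefficients regardless of lam.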

  6. Foot anthropometry and morphology phenomena.

    PubMed

    Agić, Ante; Nikolić, Vasilije; Mijović, Budimir

    2006-12-01

    Foot structure description is important for many reasons. The anthropometric morphology of the foot is analyzed together with its hidden biomechanical functionality in order to fully characterize foot structure and function. For a younger Croatian population, the scattered data of the individual foot variables were interpolated by multivariate statistics. Foot structure descriptors are influenced by many factors, such as lifestyle, race, and climate, matters of great importance in human society. Dominant descriptors were determined by principal component analysis. Some practical recommendations and conclusions for medical, sportswear, and footwear practice are highlighted.

  7. Optimal Number and Allocation of Data Collection Points for Linear Spline Growth Curve Modeling: A Search for Efficient Designs

    ERIC Educational Resources Information Center

    Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.

    2017-01-01

    Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…

  8. On using smoothing spline and residual correction to fuse rain gauge observations and remote sensing data

    NASA Astrophysics Data System (ADS)

    Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei

    2014-01-01

    A partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in residual correction. The modified Cressman weight performs better than the original Cressman weight. A method for estimating the error covariance matrix of the gridded field is provided.
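    The (unmodified) Cressman weight mentioned in the highlights can be sketched as a residual-correction step: station residuals against the spline trend surface are spread onto grid nodes with distance-dependent weights. The coordinates, residuals, and influence radius below are invented for illustration:

```python
import numpy as np

def cressman_correction(grid_xy, stn_xy, residuals, R):
    """Spread station residuals onto grid nodes with Cressman weights,
    w = (R^2 - d^2) / (R^2 + d^2) for d < R, and w = 0 otherwise."""
    grid_xy, stn_xy = np.asarray(grid_xy, float), np.asarray(stn_xy, float)
    corr = np.zeros(len(grid_xy))
    for i, g in enumerate(grid_xy):
        d2 = np.sum((stn_xy - g) ** 2, axis=1)       # squared distances
        w = np.where(d2 < R**2, (R**2 - d2) / (R**2 + d2), 0.0)
        if w.sum() > 0.0:                            # nodes with no nearby
            corr[i] = np.sum(w * residuals) / w.sum()  # station stay at 0
    return corr
```

    Adding `corr` to the spline-estimated trend surface at the grid nodes gives the corrected field; the paper's modification alters the weight function itself.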

  9. Spline curve matching with sparse knot sets: applications to deformable shape detection and recognition

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2003-01-01

    Splines can be used to approximate noisy data with a few control points. This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on the relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use deformation energy of...

  10. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
    Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of combinations of block-match image-similarity metrics and physical models. PMID:24694135

  11. 4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties

    NASA Astrophysics Data System (ADS)

    Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.

    2018-05-01

    4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D reconstruction is compared with that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and with 4D reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared with conventional MAP reconstruction, spline-residue based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetic parameters mapped and the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral-model based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.

  12. The Spatial Structure of Planform Migration - Curvature Relation of Meandering Rivers

    NASA Astrophysics Data System (ADS)

    Guneralp, I.; Rhoads, B. L.

    2005-12-01

    Planform dynamics of meandering rivers have been of fundamental interest to fluvial geomorphologists and engineers because of the intriguing complexity of these dynamics, the role of planform change in floodplain development and landscape evolution, and the economic and social consequences of bank erosion and channel migration. Improved understanding of the complex spatial structure of planform change and capacity to predict these changes are important for effective stream management, engineering and restoration. The planform characteristics of a meandering river channel are integral to its planform dynamics. Active meandering rivers continually change their positions and shapes as a consequence of hydraulic forces exerted on the channel banks and bed, but as the banks and bed change through sediment transport, so do the hydraulic forces. Thus far, this complex feedback between form and process is incompletely understood, despite the fact that the characteristics and the dynamics of meandering rivers have been studied extensively. Current theoretical models aimed at predicting planform dynamics relate rates of meander migration to local and upstream planform curvature where weighting of the influence of curvature on migration rate decays exponentially over distance. This theoretical relation, however, has not been rigorously evaluated empirically. Furthermore, although models based on exponential-weighting of curvature effects yield fairly realistic predictions of meander migration, such models are incapable of reproducing complex forms of bend development, such as double heading or compound looping. This study presents the development of a new methodology based on parametric cubic spline interpolation for the characterization of channel planform and the planform curvature of meandering rivers. 
The use of continuous mathematical functions overcomes the reliance on bend-averaged values or piece-wise discrete approximations of planform curvature - a major limitation of previous studies. Continuous curvature series can be related to measured rates of lateral migration to explore empirically the relationship between spatially extended curvature and local bend migration. The methodology is applied to a study reach along a highly sinuous section of the Embarras River in central Illinois, USA, which contains double-headed asymmetrical loops. To identify patterns of channel planform and rates of lateral migration for this reach, geographical information systems analysis of historical aerial photography from 1936 to 1998 was conducted. Results indicate that parametric cubic spline interpolation provides excellent characterization of the complex planforms and planform curvatures of meandering rivers. The findings also indicate that the spatial structure of the migration rate-curvature relation may be more complex than a simple exponential distance-decay function. The study represents a first step toward unraveling the spatial structure of planform evolution of meandering rivers and toward developing models of planform dynamics that accurately relate spatially extended patterns of channel curvature to local rates of lateral migration. Such knowledge is vital for improving the capacity to accurately predict planform change of meandering rivers.
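The parametric-spline curvature characterization described above can be sketched with standard tools. The following is a minimal illustration (not the authors' implementation), using hypothetical digitized centerline vertices and SciPy's `CubicSpline`:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical digitized centerline vertices (x, y) of a meander bend.
x = np.array([0.0, 1.0, 2.0, 2.5, 2.0, 1.5, 2.0, 3.0])
y = np.array([0.0, 0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.0])

# Parameterize by cumulative chord length, which approximates arc length s.
s = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))))

# Fit parametric cubic splines x(s), y(s); the curve passes through each vertex.
fx, fy = CubicSpline(s, x), CubicSpline(s, y)

# Signed planform curvature from the spline derivatives:
#   kappa(s) = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)
ss = np.linspace(s[0], s[-1], 200)
dx, dy = fx(ss, 1), fy(ss, 1)
ddx, ddy = fx(ss, 2), fy(ss, 2)
kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
```

The continuous series `kappa` is what replaces bend-averaged or piecewise-discrete curvature estimates; it can then be regressed against measured lateral migration rates along the same chainage.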

  13. Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javaid, Zarrar; Unsworth, Charles P., E-mail: c.unsworth@auckland.ac.nz; Boocock, Mark G.

    2016-03-15

Purpose: The aim of this work is to demonstrate a new, user-friendly image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images. This would serve as a point-of-care 3D visualization tool to benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs), which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances the volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively, thus significantly improving the speed of reconstruction (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr, providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively.
In addition, it is demonstrated how the CI-RBF method matches the volume estimation of a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Furthermore, the performance of the segmentation method used for the extraction of the femoral, tibial, and patellar cartilages is assessed with Dice similarity coefficient, sensitivity, and specificity measures, showing high agreement with manual segmentation. Conclusions: The CI-RBF method provides a fast, accurate, and robust 3D model reconstruction that matches Carr’s RBF method, 3D DOCTOR, and a manual benchmark method in accuracy, and significantly improves upon Carr’s RBF method in data requirement and computational speed. In addition, the visualization tool has been designed to quickly segment MR images, requiring only four mouse clicks per MR image slice.
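The implicit-surface fitting underlying both Carr's method and CI-RBF can be illustrated in miniature. The sketch below is an assumption-laden stand-in, not the authors' CI-RBF code: it fits a thin-plate-spline RBF to signed on/off-surface constraints (Carr's construction) around a unit sphere playing the role of a cartilage surface:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical on-surface sample points: a unit sphere stands in for cartilage.
theta = rng.uniform(0, np.pi, 80)
phi = rng.uniform(0, 2 * np.pi, 80)
pts = np.c_[np.sin(theta) * np.cos(phi),
            np.sin(theta) * np.sin(phi),
            np.cos(theta)]

# Carr-style constraints: on-surface points take value 0; points offset along
# the surface normal (radial, for a sphere) take signed values +d / -d.
d = 0.1
centers = np.vstack([pts, pts * (1 + d), pts * (1 - d)])
values = np.concatenate([np.zeros(len(pts)),
                         np.full(len(pts), d),
                         np.full(len(pts), -d)])

# Fit an implicit RBF; the zero level set f(x) = 0 approximates the surface.
f = RBFInterpolator(centers, values, kernel='thin_plate_spline')

surface_val = f(np.array([[0.0, 0.0, 1.0]]))[0]   # should be near zero
```

CI-RBF's contribution, per the abstract, is reducing how many such constraint points are needed; the fitting machinery itself is of this form.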

  14. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
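A least-squares cubic spline fit of the kind this abstract describes can be approximated with SciPy's `LSQUnivariateSpline`, where the interior knots play the role of the adjustable "junction" abscissae. This is a sketch of the idea, not the NASA algorithm itself:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + rng.normal(0, 0.1, x.size)   # noisy samples to be smoothed

# Interior knots ("junctions"); fewer or repositioned knots change smoothness,
# which mirrors the interactive manipulation the abstract mentions.
knots = np.linspace(0.5, 2 * np.pi - 0.5, 6)
spl = LSQUnivariateSpline(x, y, knots, k=3)   # least-squares cubic spline

# The cubic spline fit is C2: first and second derivatives are continuous.
d1, d2 = spl.derivative(1), spl.derivative(2)
```

Because the fit is least-squares rather than interpolating, it smooths the noise while the cubic pieces keep the first and second derivatives continuous across the junctions.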

  15. Development of quadrilateral spline thin plate elements using the B-net method

    NASA Astrophysics Data System (ADS)

    Chen, Juan; Li, Chong-Jun

    2013-08-01

The quadrilateral discrete Kirchhoff thin plate bending element DKQ is based on the isoparametric element Q8; however, the accuracy of isoparametric quadrilateral elements drops significantly under mesh distortion. In a previous work, we constructed an 8-node quadrilateral spline element L8 using the triangular area coordinates and the B-net method, which is insensitive to mesh distortions and possesses second order completeness in the Cartesian coordinates. In this paper, a thin plate spline element is developed based on the spline element L8 and the refined technique. Numerical examples show that the present element indeed possesses higher accuracy than the DKQ element for distorted meshes.

  16. Spatial models to predict ash pH and Electrical Conductivity distribution after a grassland fire in Lithuania

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Cerda, Artemi; Misiūnė, Ieva

    2015-04-01

Fire mineralizes the organic matter, increasing the pH level and the amount of dissolved ions (Pereira et al., 2014). The degree of mineralization depends, among other factors, on fire temperature, burned species, moisture content, and contact time. The impact of wildland fires is assessed using fire severity, an index used in the absence of direct measures (e.g., temperature) that is important for estimating fire effects in ecosystems. This impact is observed through the loss of soil organic matter, crown volume, twig diameter, and ash colour, among others (Keeley et al., 2009). The effects of fire are highly variable, especially at short spatial scales (Pereira et al., in press), due to differing fuel conditions (e.g., moisture, species distribution, flammability, connectivity, arrangement). This variability poses important challenges in identifying the best spatial predictor and obtaining the most accurate spatial visualization of the data. Considering this, testing several interpolation methods is relevant to producing the most reliable map. The aims of this work are (I) to study ash pH and Electrical Conductivity (EC) after a grassland fire according to ash colour, and (II) to test several interpolation methods in order to identify the best spatial predictor of pH and EC distribution. The study area is located near Vilnius at 54.42° N, 25.26° E and 154 m a.s.l. After the fire, a plot was laid out on a 27 x 9 m grid. Samples were taken every 3 meters, for a total of 40 (Pereira et al., 2013). Ash colour was classified according to Úbeda et al. (2009). Ash pH and EC laboratory analyses were carried out according to Pereira et al. (2014). Prior to data comparison and modelling, normality and homogeneity were assessed with the Shapiro-Wilk and Levene tests. pH data respected the normality and homogeneity criteria, while EC followed the Gaussian distribution and the homogeneity criterion only after a logarithmic transformation.
Data spatial correlation was calculated with the Global Moran's I Index. To identify the best interpolator, we tested several well-known techniques: inverse distance to a power (IDW) with powers of 1, 2, 3, 4 and 5; local polynomial (LP) with powers of 1 (LP1), 2 (LP2) and 3 (LP3); spline with tension (SPT); completely regularized spline (CRS); multiquadratic (MTQ); inverse multiquadratic (IMTQ); thin plate spline (TPS); and ordinary kriging. The best interpolator was the one with the lowest root mean square error (RMSE). The results showed that, on average, ash pH was 8.01 (±0.20) and EC was 1408 (±513.51) µS cm-1. The coefficient of correlation between both variables was 0.34 (p<0.05). Black ash had a significantly higher pH (F=6.29, p<0.05) and EC (F=5.25, p<0.05) than dark grey ash. According to the Moran's I index, pH data were significantly (p<0.05) dispersed, while EC had a random pattern. The best spatial predictor for pH was IDW1 (RMSE=0.210), and for EC it was IMTQ (RMSE=0.141). In both cases the least accurate technique was TPS. pH data did not show a specific spatial pattern, and some high values lie very close to other high values, indicating great local spatial variability, mainly observed in the northern part of the plot. For EC, the high values were identified in the central part of the plot. In conclusion, ash pH and EC differed according to fire severity (ash colour), and the data distributions show different spatial patterns despite the significant correlation. pH and EC had different spatial impacts on soil properties in the immediate period after the fire.
Acknowledgments: POSTFIRE (Soil quality, erosion control and plant cover recovery under different post-fire management scenarios, CGL2013-47862-C2-1-R), funded by the Spanish Ministry of Economy and Competitiveness; Fuegored; RECARE (Preventing and Remediating Degradation of Soils in Europe Through Land Care, FP7-ENV-2013-TWO STAGE), funded by the European Commission; and the COST action ES1306 (Connecting European connectivity research). References: Keeley, J.E. (2009) Fire intensity, fire severity and burn severity: a brief review and suggested usage. International Journal of Wildland Fire, 18, 116-126. Pereira, P., Úbeda, X., Martin, D., Mataix-Solera, J., Cerdà, A., Burguet, M. (2014) Wildfire effects on extractable elements in ash from a Pinus pinaster forest in Portugal. Hydrological Processes, 28, 3681-3690. Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Arcenegui, V., Zavala, L. (in press) Modelling the impacts of wildfire on ash thickness in a short-term period. Land Degradation and Development, DOI: 10.1002/ldr.2195. Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Jordan, A., Burguet, M. (2013) Spatial models for monitoring the spatio-temporal evolution of ashes after fire - a case study of a burnt grassland in Lithuania. Solid Earth, 4, 153-165. Úbeda, X., Pereira, P., Outeiro, L., Martin, D. (2009) Effects of fire temperature on the physical and chemical characteristics of the ash from two plots of cork oak (Quercus suber). Land Degradation and Development, 20(6), 589-608.
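The interpolator comparison in this abstract (IDW at several powers, selected by lowest RMSE) can be sketched as follows. The grid geometry mirrors the study's 27 x 9 m plot sampled every 3 m, but the pH values are synthetic stand-ins, so the RMSE numbers here are purely illustrative:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=1.0, eps=1e-12):
    """Inverse-distance-weighted prediction at query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

def loo_rmse(xy, z, power):
    """Leave-one-out RMSE, a standard way to score an interpolator."""
    preds = np.array([
        idw(np.delete(xy, i, 0), np.delete(z, i), xy[i:i + 1], power)[0]
        for i in range(len(z))
    ])
    return np.sqrt(np.mean((preds - z) ** 2))

# 40 sample locations on a 27 x 9 m plot, every 3 m (as in the study design).
rng = np.random.default_rng(2)
gx, gy = np.meshgrid(np.arange(0.0, 10.0, 3.0), np.arange(0.0, 28.0, 3.0))
xy = np.c_[gx.ravel(), gy.ravel()]
# Synthetic ash-pH field: a weak along-plot trend plus noise.
z = 8.0 + 0.02 * xy[:, 1] + rng.normal(0, 0.1, len(xy))

# Compare IDW powers 1..5 by leave-one-out RMSE (lower is better).
rmse = {p: loo_rmse(xy, z, p) for p in (1, 2, 3, 4, 5)}
```

The same cross-validation loop extends to the other interpolators (splines, kriging) by swapping the prediction function, which is essentially how the study's "lowest RMSE" selection works.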

  17. Isogeometric Bézier dual mortaring: Refineable higher-order spline dual bases and weakly continuous geometry

    NASA Astrophysics Data System (ADS)

    Zou, Z.; Scott, M. A.; Borden, M. J.; Thomas, D. C.; Dornisch, W.; Brivadis, E.

    2018-05-01

In this paper we develop the isogeometric Bézier dual mortar method. It is based on Bézier extraction and projection and is applicable to any spline space which can be represented in Bézier form (i.e., NURBS, T-splines, LR-splines, etc.). The approach weakly enforces the continuity of the solution at patch interfaces and the error can be adaptively controlled by leveraging the refineability of the underlying dual spline basis without introducing any additional degrees of freedom. We also develop weakly continuous geometry as a particular application of isogeometric Bézier dual mortaring. Weakly continuous geometry is a geometry description where the weak continuity constraints are built into properly modified Bézier extraction operators. As a result, multi-patch models can be processed in a solver directly without having to employ a mortaring solution strategy. We demonstrate the utility of the approach on several challenging benchmark problems. Keywords: Mortar methods, Isogeometric analysis, Bézier extraction, Bézier projection
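Bézier extraction, the operation this abstract builds on, rewrites a spline in Bézier form by raising each interior knot to multiplicity equal to the degree; the curve itself is unchanged. A minimal univariate sketch using SciPy's knot-insertion routine (not the authors' isogeometric implementation):

```python
import numpy as np
from scipy.interpolate import splev, insert

# A clamped cubic B-spline with one interior knot at u = 0.5.
t = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], float)
c = np.array([0.0, 2.0, -1.0, 2.0, 0.0])   # len(t) - k - 1 = 5 coefficients
k = 3
tck = (t, c, k)

# Bezier extraction by knot insertion: raise u = 0.5 (already present once)
# to multiplicity k = 3, so each knot span becomes an independent cubic
# Bezier segment whose control values appear directly in the coefficients.
extracted = insert(0.5, tck, m=2)

# Knot insertion is exact -- the extracted representation traces the
# same curve as the original spline.
u = np.linspace(0, 1, 50)
before = splev(u, tck)
after = splev(u, extracted)
```

In the isogeometric setting the same idea is applied patch-wise and encoded as a linear "extraction operator" mapping spline coefficients to Bézier coefficients, which is the object the paper modifies to build weakly continuous geometry.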

  18. A Novel Model to Simulate Flexural Complements in Compliant Sensor Systems

    PubMed Central

    Tang, Hongyan; Zhang, Dan; Guo, Sheng; Qu, Haibo

    2018-01-01

The main challenge in analyzing compliant sensor systems is how to calculate the large deformation of flexural complements. Our study proposes a new model, called the spline pseudo-rigid-body model (spline PRBM). It combines dynamic splines and the pseudo-rigid-body model (PRBM) to simulate the flexural complements. The axial deformations of flexural complements are modeled using a dynamic spline. This makes it possible to consider the nonlinear compliance of the system using four control points. Three rigid rods connected by two revolute (R) pins with two torsion springs replace the three lines connecting the four control points. The kinematic behavior of the system is described using Lagrange equations. Both optimization and numerical fitting methods are used for resolving the characteristic parameters of the new model. An example of a compliant mechanism is given to verify the accuracy of the model. The spline PRBM is important in expanding the applications of the PRBM to the design and simulation of flexural force sensors. PMID:29596377

  19. Imaging Freeform Optical Systems Designed with NURBS Surfaces

    DTIC Science & Technology

    2015-12-01

reflective, anastigmat. 1 Introduction: The imaging freeform optical systems described here are designed using non-uniform rational basis-spline (NURBS) ... from piecewise splines. Figure 1 shows a third degree NURBS surface which is formed from cubic basis splines. The surface is defined by the set of ... with mathematical details covered by Piegl and Tiller [7]. Compare this with Gaussian basis functions [8], where it is challenging to provide smooth
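A NURBS curve of the kind this record describes (the surface case is a tensor product of the same construction) evaluates as a weighted, rationalized combination of B-spline basis functions computed by the Cox-de Boor recursion. A small illustrative sketch with hypothetical control points and weights:

```python
import numpy as np

def bspline_basis(i, k, t, u):
    """Cox-de Boor recursion for the B-spline basis N_{i,k} at parameter u."""
    if k == 0:
        return 1.0 if t[i] <= u < t[i + 1] else 0.0
    left = 0.0 if t[i + k] == t[i] else \
        (u - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, u)
    right = 0.0 if t[i + k + 1] == t[i + 1] else \
        (t[i + k + 1] - u) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, u)
    return left + right

def nurbs_point(u, t, P, w, k=3):
    """Rational (NURBS) curve point: weight-scaled basis combination."""
    N = np.array([bspline_basis(i, k, t, u) for i in range(len(P))])
    return (N * w) @ P / (N @ w)

# Hypothetical cubic NURBS arc: clamped knots, 4 control points, and
# heavier weights on the interior points.
t = np.array([0, 0, 0, 0, 1, 1, 1, 1], float)
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
w = np.array([1.0, 2.0, 2.0, 1.0])

mid = nurbs_point(0.5, t, P, w)
```

With unit weights this degenerates to an ordinary (here Bézier) spline; raising a control point's weight pulls the curve toward it, which is the extra shape freedom that makes NURBS attractive for freeform optical surfaces.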

  20. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yi; Keller, Jonathan; Errichello, Robert

    2013-12-01

Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.
