Tensorial Basis Spline Collocation Method for Poisson's Equation
NASA Astrophysics Data System (ADS)
Plagne, Laurent; Berthou, Jean-Yves
2000-01-01
This paper aims to describe the tensorial basis spline collocation method applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method, based on a tensorial decomposition of the differential operator, is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h⁴) and O(h⁶) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed-memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: as an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256³ non-uniform 3D Cartesian mesh using 128 T3E-750 processors. This represents 215 Mflops per processor.
Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's
NASA Technical Reports Server (NTRS)
Cai, Wei; Wang, Jian-Zhong
1993-01-01
We have designed a cubic spline wavelet decomposition for the Sobolev space H(sup 2)(sub 0)(I) where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT transform will map discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT transform and apply the collocation method to solve linear and nonlinear PDE's.
NASA Technical Reports Server (NTRS)
Rummel, R.; Sjoeberg, L.; Rapp, R. H.
1978-01-01
A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.
Theory, computation, and application of exponential splines
NASA Technical Reports Server (NTRS)
Mccartin, B. J.
1981-01-01
A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape preserving approximant is developed. A sequence of selected curve-fitting examples are presented which clearly demonstrate the advantages of exponential splines over cubic splines.
Rational-spline approximation with automatic tension adjustment
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Kerr, P. A.
1984-01-01
An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.
Inversion of the strain-life and strain-stress relationships for use in metal fatigue analysis
NASA Technical Reports Server (NTRS)
Manson, S. S.
1979-01-01
The paper presents closed-form solutions (collocation method and spline-function method) for the constants of the cyclic fatigue life equation so that they can be easily incorporated into cumulative damage analysis. The collocation method involves conformity with the experimental curve at specific life values. The spline-function method is such that the basic life relation is expressed as a two-part function, one applicable at strains above the transition strain (strain at intersection of elastic and plastic lines), the other below. An illustrative example is treated by both methods. It is shown that while the collocation representation has the advantage of simplicity of form, the spline-function representation can be made more accurate over a wider life range, and is simpler to use.
NASA Astrophysics Data System (ADS)
Shen, Xiang; Liu, Bin; Li, Qing-Quan
2017-03-01
The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. 
In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2009-01-01
This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then fitted through the fiducial points, and the fitted curve is taken as the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitting curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient of the presented algorithm reached 0.972.
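The baseline-removal idea above (fit a cubic spline through per-beat fiducial points, then subtract the fitted curve) can be sketched as follows. This is a minimal illustration assuming NumPy/SciPy, with a synthetic signal and hypothetical fiducial positions rather than the paper's derivative-based detection:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def remove_baseline(signal, fiducial_idx, t):
    """Estimate baseline drift by cubic-spline interpolation through
    fiducial points, then subtract it from the signal."""
    spline = CubicSpline(t[fiducial_idx], signal[fiducial_idx])
    baseline = spline(t)
    return signal - baseline, baseline

# Synthetic "ECG": narrow beats once per second plus slow sinusoidal drift.
t = np.linspace(0, 10, 2000)
beats = np.sin(2 * np.pi * 5 * t) * np.exp(-((t % 1) - 0.5) ** 2 / 0.01)
drift = 0.5 * np.sin(2 * np.pi * 0.2 * t)
ecg = beats + drift

# Hypothetical fiducial points placed on the flat (isoelectric) segments.
fiducial_idx = np.r_[np.arange(0, 2000, 200), 1999]
corrected, baseline = remove_baseline(ecg, fiducial_idx, t)
```

Because the fiducial points fall where the beat amplitude is negligible, the interpolated spline tracks the drift closely; the paper's contribution lies in choosing better fiducial positions and amplitudes.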
Monotonicity preserving splines using rational cubic Timmer interpolation
NASA Astrophysics Data System (ADS)
Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md
2017-08-01
In scientific applications and Computer Aided Design (CAD), users usually need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions on the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
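SciPy does not ship the rational Timmer spline described here, but its monotone cubic Hermite interpolant (PCHIP) illustrates the shape-preservation requirement: unlike an ordinary cubic spline, it cannot overshoot monotone data. A minimal comparison sketch:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone data with an abrupt slope change, a classic overshoot trap.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 10.0])

xx = np.linspace(0.0, 4.0, 401)
plain = CubicSpline(x, y)(xx)           # ordinary spline: may dip/overshoot
monotone = PchipInterpolator(x, y)(xx)  # monotone on monotone data, no overshoot
```

The monotone interpolant stays within the range of the data by construction, which is exactly the property the shape constraints on the rational interpolant are designed to enforce.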
Domain identification in impedance computed tomography by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1990-01-01
A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
2013-02-06
high order and smoothness. Consequently, the use of IGA for collocation suggests itself, since spline functions such as NURBS or T-splines can be...for the development of higher-order accurate time integration schemes due to the convergence of the high modes in the eigenspectrum [46] as well as...flows [19, 20, 49–52]. Due to their maximum smoothness, B-splines exhibit a high resolution power, which allows the representation of a broad range
Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines
NASA Astrophysics Data System (ADS)
Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya
2017-11-01
Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. 
The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1.093 pixels, which were respectively 42.1%, 48.3%, and 54.8% better than those achieved before the nonlinear bias compensation.
NASA Astrophysics Data System (ADS)
Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui
2015-03-01
A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
NASA Astrophysics Data System (ADS)
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassab, A.J.; Pollard, J.E.
An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavity walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined measuring the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in measured surface temperature. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.
NASA Astrophysics Data System (ADS)
Reuter, Bryan; Oliver, Todd; Lee, M. K.; Moser, Robert
2017-11-01
We present an algorithm for a Direct Numerical Simulation of the variable-density Navier-Stokes equations based on the velocity-vorticity approach introduced by Kim, Moin, and Moser (1987). In the current work, a Helmholtz decomposition of the momentum is performed. Evolution equations for the curl and the Laplacian of the divergence-free portion are formulated by manipulation of the momentum equations and the curl-free portion is reconstructed by enforcing continuity. The solution is expanded in Fourier bases in the homogeneous directions and B-Spline bases in the inhomogeneous directions. Discrete equations are obtained through a mixed Fourier-Galerkin and collocation weighted residual method. The scheme is designed such that the numerical solution conserves mass locally and globally by ensuring the discrete divergence projection is exact through the use of higher order splines in the inhomogeneous directions. The formulation is tested on multiple variable-density flow problems.
Spline-based procedures for dose-finding studies with active control
Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim
2015-01-01
In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models
NASA Astrophysics Data System (ADS)
Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo
2014-04-01
We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.
A smoothing algorithm using cubic spline functions
NASA Technical Reports Server (NTRS)
Smith, R. E., Jr.; Price, J. M.; Howser, L. M.
1974-01-01
Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
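A least-squares cubic spline smoother of this flavor is available directly in SciPy; a brief sketch (the smoothing factor here is the usual n·σ² heuristic, an assumption rather than the paper's interactive junction-point adjustment):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
truth = np.sin(x)
noisy = truth + rng.normal(0.0, 0.1, x.size)

# Least-squares cubic smoothing spline; s ~ n * sigma^2 balances fit vs. smoothness.
spl = UnivariateSpline(x, noisy, k=3, s=x.size * 0.1 ** 2)
smooth = spl(x)

# The advantage noted above: first and second derivatives exist and are continuous.
d1 = spl.derivative(1)(x)
d2 = spl.derivative(2)(x)
```

The continuous first and second derivatives are what distinguish the spline fit from, say, a running polynomial fit, and are why the abstract recommends it for smoothing.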
A cubic spline approximation for problems in fluid mechanics
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Graves, R. A., Jr.
1975-01-01
A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
Weighted spline based integration for reconstruction of freeform wavefront.
Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra
2018-02-10
In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from the slope data has been implemented. The slope data of a freeform surface contain noise due to their machining process and that introduces reconstruction error. We have proposed a weighted cubic spline based least square integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted into a piecewise polynomial. The fitted coefficients are determined by using a smoothing cubic spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least squares technique to reconstruct the freeform wavefront. Simulation studies show the improved result using the proposed technique as compared to the existing cubic spline-based integration (CSLI) and the Southwell methods. The proposed reconstruction method has been experimentally implemented to a subaperture stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology application.
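The core pipeline (smooth the measured slopes with a weighted cubic smoothing spline, then integrate the fitted slope) can be sketched in one dimension; the 2D Southwell-style grid integration and the subaperture stitching are beyond this illustration, and the weights and smoothing level here are assumptions:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)
wavefront = x ** 3 - 0.5 * x ** 2        # "true" freeform wavefront, W(0) = 0
slope = 3.0 * x ** 2 - x                 # its analytic slope
noisy_slope = slope + rng.normal(0.0, 0.01, x.size)

# Weighted smoothing-spline fit to the slope data (uniform weights here;
# WCSLI assigns the weights locally from the smoothing parameter).
w = np.ones_like(x)
sp = UnivariateSpline(x, noisy_slope, w=w, k=3, s=x.size * 0.01 ** 2)

# Integrate the fitted slope; the constant of integration is fixed by W(0) = 0.
reconstructed = sp.antiderivative()(x)
```

Smoothing before integration is the point: integrating raw noisy slopes accumulates noise into a random walk, while the spline fit suppresses it first.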
Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.
de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph
2008-01-01
The robust algorithm OPED for the reconstruction of images from Radon data has been recently developed. This reconstructs an image from parallel data within a special scanning geometry that does not need rebinning but only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, there appear empty cells in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy in the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error. The spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most recommendable method. The reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than the ones from linear interpolation and have the largest MTF for all frequencies. Parametric splines proved to be advantageous only for small sinograms (below 50 fan views).
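The gap-filling comparison can be reproduced in miniature on a single smooth sinogram row; a hedged sketch with synthetic data, not the OPED scanning geometry:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 2.0 * np.pi, 40)
row = np.sin(x)                    # one smooth "sinogram row"

# Simulate empty cells at every 4th interior position.
mask = np.ones(x.size, dtype=bool)
mask[2::4] = False                 # these cells must be refilled

filled_lin = np.interp(x[~mask], x[mask], row[mask])      # linear interpolation
filled_cub = CubicSpline(x[mask], row[mask])(x[~mask])    # cubic spline

err_lin = np.max(np.abs(filled_lin - np.sin(x[~mask])))
err_cub = np.max(np.abs(filled_cub - np.sin(x[~mask])))
```

On smooth data the cubic spline's interpolation error falls off much faster with cell spacing than linear interpolation's, consistent with the paper's NMSE and MTF findings.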
Weighted cubic and biharmonic splines
NASA Astrophysics Data System (ADS)
Kvasov, Boris; Kim, Tae-Wan
2017-01-01
In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate main features of this original approach.
Trajectory control of an articulated robot with a parallel drive arm based on splines under tension
NASA Astrophysics Data System (ADS)
Yi, Seung-Jong
Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration used to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires a trajectory planner. In addition, the consideration of joint velocity, acceleration, and jerk trajectories is essential for trajectory planning of industrial robots to obtain smooth operation. A newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce arm inertia and to increase load capacity and stiffness, is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve-fitting methods used in trajectory planning, i.e., fixed-degree polynomial functions, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are selected as the method for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. This approach is also compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. 
The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and motors to produce combined arc and straight-line motion. The simulation and experiment show interesting results by demonstrating smooth motion in both acceleration and jerk and significant improvements of positioning accuracy in trajectory planning.
A collocation-shooting method for solving fractional boundary value problems
NASA Astrophysics Data System (ADS)
Al-Mdallal, Qasem M.; Syam, Muhammed I.; Anwar, M. N.
2010-12-01
In this paper, we discuss the numerical solution of a special class of fractional boundary value problems of order 2. The solution method combines collocation and spline analysis with a shooting method. The existence and uniqueness of the exact solution for this class is proven. Two examples involving the Bagley-Torvik equation subject to boundary conditions are also presented; numerical results illustrate the accuracy of the present scheme.
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation-diminishing properties of Bezier/B-spline curves. It has the following structure: it tries to fit one Bezier segment to the entire data set, and if that is impossible, it subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the remaining data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy, even in cases with large tolerances.
NASA Astrophysics Data System (ADS)
Hamm, L. L.; Vanbrunt, V.
1982-08-01
The numerical solution to the ordinary differential equation which describes the high-pressure vapor-liquid equilibria of a binary system, where one of the components is supercritical and exists as a noncondensable gas in the pure state, is considered with emphasis on the implicit Runge-Kutta and orthogonal collocation methods. Some preliminary results indicate that the implicit Runge-Kutta method is superior. Due to the extreme nonlinearity of thermodynamic properties in the region near the critical locus, an extended cubic spline fitting technique is devised for correlating the P-x data. The least-squares criterion is employed in smoothing the experimental data. The technique could easily be applied to any thermodynamic data by changing the endpoint requirements. The volumetric behavior of the systems must be given or predicted in order to perform thermodynamic consistency tests. A general procedure is developed for predicting the required volumetric behavior, and some indication of the expected limit of accuracy is given.
A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid
NASA Astrophysics Data System (ADS)
Sulaimanov, Z. M.; Shumilov, B. M.
2017-10-01
For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in the form of the solution to a three-diagonal system of linear algebraic equations for the coefficients. Computations by hand are used to investigate the application of this algorithm for numerical differentiation. The results are illustrated by solving a prediction problem.
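The computational kernel of such an expansion, a tridiagonal solve for the coefficients, runs in O(n) with a banded solver. A generic sketch (the matrix entries below are placeholders, not the paper's actual even/odd splitting coefficients):

```python
import numpy as np
from scipy.linalg import solve_banded

n = 7
main = 4.0 * np.ones(n)        # diagonal entries (placeholder values)
off = 1.0 * np.ones(n - 1)     # sub-/super-diagonal entries

# Banded storage layout expected by solve_banded for a tridiagonal matrix.
ab = np.zeros((3, n))
ab[0, 1:] = off                # superdiagonal
ab[1, :] = main                # main diagonal
ab[2, :-1] = off               # subdiagonal

rhs = np.arange(1.0, n + 1.0)
coeffs = solve_banded((1, 1), ab, rhs)   # O(n) tridiagonal solve
```

This is why splitting the nodes into even and odd sets pays off: the wavelet coefficients come from a linear-time banded solve rather than a dense one.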
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C; Adcock, A; Azevedo, S
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
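The GCV-selected cubic smoothing spline idea can be illustrated with recent SciPy (1.10+), whose `make_smoothing_spline` chooses the smoothing parameter by GCV when `lam` is omitted. The data below are synthetic, not NIF channel data:

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

# Synthetic noisy single-channel data (illustrative only).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * t)
y = signal + 0.1 * rng.standard_normal(t.size)

# With lam=None the smoothing parameter is chosen by minimizing the
# GCV score, analogous to the selection step described above.
spl = make_smoothing_spline(t, y)
fit = spl(t)

# The smoothed fit should track the underlying signal more closely
# than the raw noisy samples do.
rms_noisy = np.sqrt(np.mean((y - signal) ** 2))
rms_fit = np.sqrt(np.mean((fit - signal) ** 2))
assert rms_fit < rms_noisy
```

A weights argument `w` allows per-sample noise levels, which is the hook one would use for channels with time-varying noise characteristics.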
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines with linear piecewise splines, with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height.
We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves than on the coefficients. Moreover, use of cubic regression splines provides biologically meaningful growth velocity and acceleration curves despite the increased complexity in coefficient interpretation. Through this stepwise approach, we provide a set of tools to model longitudinal childhood data for non-statisticians using linear mixed-effect models.
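The fixed-effects backbone of such a model, a cubic regression spline fitted by least squares, can be sketched with a truncated-power basis. The knot positions and the growth-like data below are made up for illustration; the random-effects and autocorrelation structure of the full mixed model are not shown:

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power cubic regression spline basis: 1, x, x^2, x^3,
    plus (x - k)^3_+ for each interior knot k."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

# Hypothetical growth-like data: age (years) vs. height (cm).
rng = np.random.default_rng(2)
age = np.sort(rng.uniform(0.0, 4.0, 300))
height = 50.0 + 30.0 * np.sqrt(age) + rng.standard_normal(age.size)

X = cubic_spline_basis(age, knots=[1.0, 2.0, 3.0])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)
fitted = X @ beta

# The spline fit should explain most of the variance in this example.
ss_res = np.sum((height - fitted) ** 2)
ss_tot = np.sum((height - height.mean()) ** 2)
assert ss_res / ss_tot < 0.1
```

Because the fitted curve is a polynomial piecewise in age, velocity and acceleration follow by differentiating the basis analytically, which is the property the abstract highlights.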
Curvelet-domain multiple matching method combined with cubic B-spline function
NASA Astrophysics Data System (ADS)
Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming
2018-05-01
Because the large amount of surface-related multiples in marine data seriously degrades the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, we select a small number of unknowns as the basis points of the matching coefficient; second, we apply the cubic B-spline function on these basis points to reconstruct the matching array; third, we build a constrained solving equation based on the relationships of the predicted multiples, the matching coefficients, and the actual data; finally, we use the BFGS algorithm to iterate and realize fast solution of the sparsity-constrained multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. The applications to synthetic and field data both validate the practicability and validity of the method.
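The key dimensionality-reduction trick, representing a smooth matching coefficient by a few basis points and optimizing them with BFGS, can be sketched on a toy 1-D problem. The signals, the five basis points, and the amplitude-only matching model below are illustrative stand-ins for the curvelet-domain scheme:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

# Toy stand-in: a smooth, slowly varying matching curve m(t) is
# parameterized by a small number of basis-point values and
# reconstructed with a cubic spline before comparing with the data.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 400)
true_match = 1.0 + 0.5 * t            # unknown amplitude error
predicted = np.sin(12 * np.pi * t)    # "predicted multiple"
actual = true_match * predicted + 0.01 * rng.standard_normal(t.size)

t_basis = np.linspace(0.0, 1.0, 5)    # few unknowns: 5 basis points

def misfit(coeffs):
    m = CubicSpline(t_basis, coeffs)(t)
    return np.sum((m * predicted - actual) ** 2)

res = minimize(misfit, np.ones(t_basis.size), method="BFGS")
m_hat = CubicSpline(t_basis, res.x)(t)

# The recovered matching curve should be close to the true one.
assert np.max(np.abs(m_hat - true_match)) < 0.1
```

With only five unknowns instead of one per sample, each BFGS iteration is cheap, which mirrors the efficiency argument made above.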
GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data
NASA Astrophysics Data System (ADS)
Ibrahim, Noor Akma; Suliadi
2010-11-01
In this paper we propose GEE-smoothing splines for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of the parametric generalized estimating equation to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.
Dauguet, Julien; Bock, Davi; Reid, R Clay; Warfield, Simon K
2007-01-01
3D reconstruction from serial 2D microscopy images depends on non-linear alignment of serial sections. For some structures, such as the neuronal circuitry of the brain, very large images at very high resolution are necessary to permit reconstruction. These very large images prevent the direct use of classical registration methods. We propose in this work a method to deal with the non-linear alignment of arbitrarily large 2D images using the finite support properties of cubic B-splines. After initial affine alignment, each large image is split into a grid of smaller overlapping sub-images, which are individually registered using cubic B-splines transformations. Inside the overlapping regions between neighboring sub-images, the coefficients of the knots controlling the B-splines deformations are blended, to create a virtual large grid of knots for the whole image. The sub-images are resampled individually, using the new coefficients, and assembled together into a final large aligned image. We evaluated the method on a series of large transmission electron microscopy images and our results indicate significant improvements compared to both manual and affine alignment.
Cubic spline numerical solution of an ablation problem with convective backface cooling
NASA Astrophysics Data System (ADS)
Lin, S.; Wang, P.; Kahawita, R.
1984-08-01
An implicit numerical technique using cubic splines is presented for solving an ablation problem on a thin wall with convective cooling. A non-uniform computational mesh with 6 grid points has been used for the numerical integration. The method has been found to be computationally efficient, providing, for the case under consideration, an overall error of about 1 percent. The results obtained indicate that convective cooling is an important factor in reducing the ablation thickness.
4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties
NASA Astrophysics Data System (ADS)
Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.
2018-05-01
4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements, in the bias and noise measures used to analyse them, for five of the eight combinations of the four kinetic parameters for which parametric maps were created, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral-model-based 4D reconstruction produced the best results for 2/8.
2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
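The sample-generation idea can be sketched directly: fit a cubic spline to the empirical quantile function and push uniform variates through it (inverse-transform sampling). This uses a plain interpolating `CubicSpline` rather than the paper's B-spline or rational-spline formulations:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Fit a cubic spline to the empirical quantile function of a sample,
# then use it to generate new random variates.
rng = np.random.default_rng(4)
sample = np.sort(rng.normal(loc=5.0, scale=2.0, size=5000))

# Empirical quantile function: probability level -> sample quantile.
p = (np.arange(sample.size) + 0.5) / sample.size
q = CubicSpline(p, sample)

# Inverse-transform sampling: evaluate the quantile spline at uniforms
# (levels kept away from 0 and 1, where the empirical tails are noisy).
u = rng.uniform(0.01, 0.99, size=20000)
new_draws = q(u)

# The generated sample should reproduce the location of the original.
assert abs(new_draws.mean() - 5.0) < 0.2
```

Once the spline is built, each variate costs only one spline evaluation, which is the speed advantage over inverting an analytic CDF numerically.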
Imaging Freeform Optical Systems Designed with NURBS Surfaces
2015-12-01
reflective, anastigmat. 1 Introduction. The imaging freeform optical systems described here are designed using non-uniform rational basis-spline (NURBS) ... from piecewise splines. Figure 1 shows a third-degree NURBS surface which is formed from cubic basis splines. The surface is defined by the set of ... with mathematical details covered by Piegl and Tiller. Compare this with Gaussian basis functions, where it is challenging to provide smooth ...
Sim, K S; Kiani, M A; Nia, M E; Tso, C P
2014-01-01
A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to give better results than two existing techniques: nearest neighbourhood and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2013 The Authors. Journal of Microscopy © 2013 Royal Microscopical Society.
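The underlying idea, separating an estimate of the noise-free signal from the noise with a Savitzky-Golay filter and forming an SNR from the two parts, can be sketched in 1-D. The sinusoidal "scan line" and filter settings below are illustrative, not the paper's SEM pipeline:

```python
import numpy as np
from scipy.signal import savgol_filter

# Illustrative 1-D version of the SNR-estimation idea: Savitzky-Golay
# filtering yields a smooth signal estimate; the residual is taken as
# an estimate of the noise.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 1000)
clean = np.sin(4 * np.pi * x)
noisy = clean + 0.2 * rng.standard_normal(x.size)

smooth = savgol_filter(noisy, window_length=31, polyorder=3)
noise = noisy - smooth

snr_est_db = 10 * np.log10(np.var(smooth) / np.var(noise))
snr_true_db = 10 * np.log10(np.var(clean) / 0.2**2)

# The estimate should land within a few dB of the true SNR.
assert abs(snr_est_db - snr_true_db) < 3.0
```

The window length trades off noise rejection against signal distortion; a window short relative to the signal's oscillation period keeps the smooth estimate close to the true signal.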
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohammed A.
2014-09-01
In this paper, we propose an efficient spectral collocation algorithm to numerically solve wave-type equations subject to initial, boundary and non-local conservation conditions. The shifted Jacobi pseudospectral approximation is investigated for the discretization of the spatial variable of such equations. It possesses spectral accuracy in the spatial variable. The shifted Jacobi-Gauss-Lobatto (SJ-GL) quadrature rule is established for treating the non-local conservation conditions, and then the problem with its initial and non-local boundary conditions is reduced to a system of second-order ordinary differential equations in the temporal variable. This system is solved by a two-stage fourth-order A-stable implicit Runge-Kutta scheme. Five numerical examples with comparisons are given. The computational results demonstrate that the proposed algorithm is more accurate than the finite difference method, the method of lines and the spline collocation approach.
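The two-stage, fourth-order, A-stable implicit Runge-Kutta scheme referred to above matches the Gauss-Legendre method. A minimal sketch on the scalar test problem y' = λy (for which the stage equations are linear and can be solved directly) also exhibits the expected fourth-order convergence:

```python
import numpy as np

# Two-stage Gauss-Legendre implicit RK (order 4, A-stable) Butcher tableau.
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],
              [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])

def gauss2_step(lam, y, h):
    # Stage slopes k satisfy (I - h*lam*A) k = lam*y*[1, 1]^T.
    M = np.eye(2) - h * lam * A
    k = np.linalg.solve(M, lam * y * np.ones(2))
    return y + h * (b @ k)

def integrate(lam, y0, T, n):
    h, y = T / n, y0
    for _ in range(n):
        y = gauss2_step(lam, y, h)
    return y

lam, y0, T = -1.0, 1.0, 1.0
err1 = abs(integrate(lam, y0, T, 10) - np.exp(lam * T))
err2 = abs(integrate(lam, y0, T, 20) - np.exp(lam * T))

# Halving the step size should reduce the error roughly 16x (order 4).
assert 10.0 < err1 / err2 < 22.0
```

For the nonlinear ODE systems arising from the spatial discretization, the stage equations would instead be solved by Newton iteration at each step.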
NASA Astrophysics Data System (ADS)
Ophaug, Vegard; Gerlach, Christian
2017-11-01
This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has been introduced also in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All agree on the millimeter level.
Time Varying Compensator Design for Reconfigurable Structures Using Non-Collocated Feedback
NASA Technical Reports Server (NTRS)
Scott, Michael A.
1996-01-01
Analysis and synthesis tools are developed to improve the dynamic performance of reconfigurable, nonminimum phase, nonstrictly positive real, time-variant systems. A novel Spline Varying Optimal (SVO) controller is developed for the kinematic nonlinear system. There are several advantages to using the SVO controller, in which the spline function approximates the system model, observer, and controller gain: the spline function approximation is simply connected, thus the SVO controller is more continuous than traditional gain-scheduled controllers when implemented on a time-varying plant; it is easier for real-time implementation in storage and computational effort; where system identification is required, the spline function requires fewer experiments, namely four; and initial startup estimator transients are eliminated. The SVO compensator was evaluated on a high fidelity simulation of the Shuttle Remote Manipulator System. The SVO controller demonstrated significant improvement over the present arm performance: (1) the damping level was improved by a factor of 3; and (2) the peak joint torque was reduced by a factor of 2 following Shuttle thruster firings.
Computational methods for estimation of parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Murphy, K. A.
1983-01-01
Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined, and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.
NASA Technical Reports Server (NTRS)
Aminpour, Mohammad
1995-01-01
The work reported here pertains only to the first year of research for a three year proposal period. As a prelude to this two dimensional interface element, the one dimensional element was tested and errors were discovered in the code for built-up structures and curved interfaces. These errors were corrected and the benchmark Boeing composite crown panel was analyzed successfully. A study of various splines led to the conclusion that cubic B-splines best suit this interface element application. A least squares approach combined with cubic B-splines was constructed to make a smooth function from the noisy data obtained with random error in the coordinate data points of the Boeing crown panel analysis. Preliminary investigations for the formulation of discontinuous 2-D shell and 3-D solid elements were conducted.
Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.
Oliveira, Francisco P M; Tavares, João Manuel R S
2013-03-01
This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, an outstanding accuracy was achieved with the cubic B-splines. This accuracy was significantly better (p < 0.001) than the one obtained using the best solution proposed in our previous work. When applied to align real image sequences with unknown transformation involved, the alignment based on cubic B-splines also achieved superior results compared with our previous methodology (p < 0.001). The consequences of the temporal alignment on the dynamic center of pressure (COP) displacement were also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that the cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences.
Characterizing vaccine-associated risks using cubic smoothing splines.
Brookhart, M Alan; Walker, Alexander M; Lu, Yun; Polakowski, Laura; Li, Jie; Paeglow, Corrie; Puenpatom, Tosmai; Izurieta, Hector; Daniel, Gregory W
2012-11-15
Estimating risks associated with the use of childhood vaccines is challenging. The authors propose a new approach for studying short-term vaccine-related risks. The method uses a cubic smoothing spline to flexibly estimate the daily risk of an event after vaccination. The predicted incidence rates from the spline regression are then compared with the expected rates under a log-linear trend that excludes the days surrounding vaccination. The 2 models are then used to estimate the excess cumulative incidence attributable to the vaccination during the 42-day period after vaccination. Confidence intervals are obtained using a model-based bootstrap procedure. The method is applied to a study of known effects (positive controls) and expected noneffects (negative controls) of the measles, mumps, and rubella and measles, mumps, rubella, and varicella vaccines among children who are 1 year of age. The splines revealed well-resolved spikes in fever, rash, and adenopathy diagnoses, with the maximum incidence occurring between 9 and 11 days after vaccination. For the negative control outcomes, the spline model yielded a predicted incidence more consistent with the modeled day-specific risks, although there was evidence of increased risk of diagnoses of congenital malformations after vaccination, possibly because of a "provider visit effect." The proposed approach may be useful for vaccine safety surveillance.
A grid spacing control technique for algebraic grid generation methods
NASA Technical Reports Server (NTRS)
Smith, R. E.; Kudlinski, R. A.; Everton, E. L.
1982-01-01
A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
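The control-function idea can be sketched with a monotone cubic spline mapping the uniform computational coordinate to the physical coordinate; where the mapping's slope is small, grid points cluster. A shape-preserving PCHIP spline stands in here for the paper's smoothed cubic splines, and the control-point values are made up:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Illustrative control function: a smooth monotone spline maps the
# uniform computational coordinate s in [0, 1] to the physical
# coordinate x, clustering grid points where dx/ds is small.
s_ctrl = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
x_ctrl = np.array([0.0, 0.05, 0.15, 0.5, 1.0])  # cluster points near x = 0
control = PchipInterpolator(s_ctrl, x_ctrl)      # monotone cubic spline

s_uniform = np.linspace(0.0, 1.0, 101)
x_grid = control(s_uniform)

# The mapping preserves endpoints and monotonicity, and places more
# than half of the points in the first fifth of the physical domain.
assert np.all(np.diff(x_grid) > 0)
assert np.isclose(x_grid[0], 0.0) and np.isclose(x_grid[-1], 1.0)
assert np.sum(x_grid < 0.2) > 50
```

In an interactive setting, moving a control point and re-evaluating the spline immediately redistributes the grid, which is the workflow the abstract describes.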
NASA Astrophysics Data System (ADS)
Gutowski, Marek W.
1992-12-01
Presented is a novel heuristic algorithm, based on fuzzy set theory, allowing for significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial value. The reduced data set preserves all the essential features of the input curve. It is possible to reconstruct the original information to a high degree of precision by means of natural cubic splines, rational cubic splines or even linear interpolation. The main fields of application should be non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, T; Koo, T
Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and other fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, the IMRT quality assurance (QA) beams were generated with the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with larger than 1 mm detector-to-detector distance were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For fluence maps with the same resolution, the cubic spline and bicubic interpolations are almost equally the best interpolation methods, while the nearest neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65% and 82.23±0.48% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. For 7 mm distance fluence maps, γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97% and 71.93±4.92% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively.
Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution be used as an IMRT QA tool and that the measured fluence maps be interpolated using the cubic spline or bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).
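The comparison can be reproduced in miniature with SciPy: sample a smooth 2-D map coarsely, interpolate back to the fine grid with a bicubic spline and with nearest neighbour, and compare the errors. The analytic "fluence map" below is a made-up stand-in for measured data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Toy analogue of the fluence-map comparison above.
fine = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(fine, fine, indexing="ij")
truth = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)

coarse = fine[::5]                       # 5x coarser sampling
sampled = truth[::5, ::5]

# Bicubic spline interpolation back to the fine grid.
spline = RectBivariateSpline(coarse, coarse, sampled, kx=3, ky=3)
cubic = spline(fine, fine)

# Nearest neighbour: index of the closest coarse sample per fine point.
idx = np.abs(fine[:, None] - coarse[None, :]).argmin(axis=1)
nearest = sampled[np.ix_(idx, idx)]

err_cubic = np.max(np.abs(cubic - truth))
err_nearest = np.max(np.abs(nearest - truth))
assert err_cubic < err_nearest
```

For smooth underlying maps the bicubic error scales like the fourth power of the sample spacing, versus first power for nearest neighbour, which is consistent with the ranking reported above.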
Numerical solution of system of boundary value problems using B-spline with free parameter
NASA Astrophysics Data System (ADS)
Gupta, Yogesh
2017-01-01
This paper deals with a B-spline method for the solution of a system of boundary value problems. Differential equations are useful in various fields of science and engineering, and some interesting real-life problems involve more than one unknown function, resulting in systems of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points, together with the equations of the given system and the boundary conditions, resulting in a linear matrix equation.
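Such second-order boundary value problems can be cross-checked with SciPy's collocation solver `solve_bvp` (itself based on a C1 cubic collocation scheme, not the B-spline method of the paper). A minimal example with a known exact solution:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Model problem: u'' = -u on [0, 1], u(0) = 0, u(1) = sin(1),
# whose exact solution is u(x) = sin(x).
def rhs(x, y):
    # First-order form: y[0] = u, y[1] = u'.
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    return np.array([ya[0] - 0.0, yb[0] - np.sin(1.0)])

x = np.linspace(0.0, 1.0, 11)
y0 = np.zeros((2, x.size))       # trivial initial guess
sol = solve_bvp(rhs, bc, x, y0)

xs = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(sol.sol(xs)[0] - np.sin(xs)))
assert sol.status == 0 and err < 1e-3
```

A system of BVPs is handled the same way, by stacking all unknown functions and their derivatives into the state vector `y`.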
Numerical solution of the Black-Scholes equation using cubic spline wavelets
NASA Astrophysics Data System (ADS)
Černá, Dana
2016-12-01
The Black-Scholes equation is used in financial mathematics for computation of market values of options at a given time. We use the θ-scheme for time discretization and an adaptive scheme based on wavelets for discretization on the given time level. Advantages of the proposed method are small number of degrees of freedom, high-order accuracy with respect to variables representing prices and relatively small number of iterations needed to resolve the problem with a desired accuracy. We use several cubic spline wavelet and multi-wavelet bases and discuss their advantages and disadvantages. We also compare an isotropic and anisotropic approach. Numerical experiments are presented for the two-dimensional Black-Scholes equation.
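For comparison with the wavelet approach, the θ-scheme time discretization can be paired with a plain finite-difference spatial grid; the sketch below prices a European call with θ = 1/2 (Crank-Nicolson) and checks it against the closed-form solution. All parameter values are illustrative, and the wavelet-based adaptive spatial scheme of the paper is replaced by uniform finite differences:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.stats import norm

# Crank-Nicolson theta-scheme for a European call under Black-Scholes.
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
S_max, M, N, theta = 300.0, 300, 200, 0.5
S = np.linspace(0.0, S_max, M + 1)
dtau = T / N

i = np.arange(1, M)                           # interior nodes, S_i = i*dS
a = 0.5 * dtau * (sigma**2 * i**2 - r * i)    # sub-diagonal
b = -dtau * (sigma**2 * i**2 + r)             # main diagonal
c = 0.5 * dtau * (sigma**2 * i**2 + r * i)    # super-diagonal
L = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
I = np.eye(M - 1)
lu = lu_factor(I - theta * L)                 # factor the implicit matrix once

V = np.maximum(S[1:M] - K, 0.0)               # payoff at expiry (tau = 0)
for n in range(N):
    tau0, tau1 = n * dtau, (n + 1) * dtau
    rhs = (I + (1.0 - theta) * L) @ V
    # Boundaries: V(0, tau) = 0 and V(S_max, tau) = S_max - K e^{-r tau}.
    rhs[-1] += c[-1] * ((1.0 - theta) * (S_max - K * np.exp(-r * tau0))
                        + theta * (S_max - K * np.exp(-r * tau1)))
    V = lu_solve(lu, rhs)

# Compare the at-the-money price with the Black-Scholes formula.
d1 = (r + 0.5 * sigma**2) * T / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
exact = K * (norm.cdf(d1) - np.exp(-r * T) * norm.cdf(d2))
fd_price = V[np.searchsorted(S, K) - 1]
assert abs(fd_price - exact) < 0.05
```

The advantage claimed for the wavelet basis is that comparable accuracy is reached with far fewer degrees of freedom than the uniform grid used here.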
Directly manipulated free-form deformation image registration.
Tustison, Nicholas J; Avants, Brian B; Gee, James C
2009-03-01
Previous contributions to both the research and open source software communities detailed a generalization of a fast scalar field fitting technique for cubic B-splines based on the work originally proposed by Lee. One advantage of our proposed generalized B-spline fitting approach is its immediate application to a class of nonrigid registration techniques frequently employed in medical image analysis. Specifically, these registration techniques fall under the rubric of free-form deformation (FFD) approaches, in which the object to be registered is embedded within a B-spline object. The deformation of the B-spline object describes the transformation of the image registration solution. Representative of this class of techniques, and often cited within the relevant community, is the formulation of Rueckert, who employed cubic splines with normalized mutual information to study breast deformation. Similar techniques from various groups provided incremental novelty in the form of disparate explicit regularization terms, as well as the employment of various image metrics and tailored optimization methods. For several algorithms, the underlying gradient-based optimization retained the essential characteristics of Rueckert's original contribution. The contribution which we provide in this paper is two-fold: 1) the observation that the generic FFD framework is intrinsically susceptible to problematic energy topographies, and 2) that the standard gradient used in FFD image registration can be modified to a well-understood preconditioned form which substantially improves performance. This is demonstrated with theoretical discussion and comparative evaluation experimentation.
Estimating seasonal evapotranspiration from temporal satellite images
Singh, Ramesh K.; Liu, Shu-Guang; Tieszen, Larry L.; Suyker, Andrew E.; Verma, Shashi B.
2012-01-01
Estimating seasonal evapotranspiration (ET) has many applications in water resources planning and management, including hydrological and ecological modeling. Availability of satellite remote sensing images is limited due to the satellite repeat cycle or cloud cover. This study was conducted to determine the suitability of different methods, namely cubic spline, fixed, and linear, for estimating seasonal ET from temporal remotely sensed images. The Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model, in conjunction with the wet METRIC (wMETRIC), a modified version of the METRIC model, was used to estimate ET on the days of satellite overpass using eight Landsat images during the 2001 crop growing season in the Midwest USA. The model-estimated daily ET was in good agreement (R2 = 0.91) with the eddy covariance tower-measured daily ET. The standard error of daily ET was 0.6 mm (20%) at three validation sites in Nebraska, USA. There was no statistically significant difference (P > 0.05) among the cubic spline, fixed, and linear methods for computing seasonal (July–December) ET from temporal ET estimates. Overall, the cubic spline resulted in the lowest standard error of 6 mm (1.67%) for seasonal ET. However, further testing of this method for multiple years is necessary to determine its suitability.
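The cubic-spline option for seasonal totals can be sketched directly: daily ET is known only on a handful of overpass days, a cubic spline interpolates between them, and the seasonal total is the integral of the spline. The dates and ET values below are made up for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical satellite-overpass days (day-of-season) and daily ET.
overpass_day = np.array([0, 16, 32, 48, 64, 80, 96, 112], dtype=float)
et_mm_per_day = np.array([2.1, 3.5, 5.2, 6.0, 5.8, 4.6, 3.0, 1.8])

# Interpolate daily ET between overpasses with a cubic spline and
# integrate it over the season to get the seasonal total.
spline = CubicSpline(overpass_day, et_mm_per_day)
seasonal_et_mm = spline.integrate(0.0, 112.0)

# Sanity check: the total lies between the bounds implied by the
# minimum and maximum observed daily rates.
assert 112 * et_mm_per_day.min() < seasonal_et_mm < 112 * et_mm_per_day.max()
```

The "fixed" and "linear" alternatives mentioned above would replace the spline with a piecewise-constant or piecewise-linear interpolant before integrating.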
Baldi, F; Alencar, M M; Albuquerque, L G
2010-12-01
The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and with a multitrait model. Results from different models of analyses were compared using the REML form of the Akaike Information Criterion and Schwarz' Bayesian Information Criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and the animal permanent environmental effect and two knots for the maternal additive genetic effect and the maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.
The computation of Laplacian smoothing splines with examples
NASA Technical Reports Server (NTRS)
Wendelberger, J. G.
1982-01-01
Laplacian smoothing splines (LSS) are presented as generalizations of graduation, cubic and thin plate splines. The method of generalized cross validation (GCV) to choose the smoothing parameter is described. The GCV is used in the algorithm for the computation of LSS's. An outline of a computer program which implements this algorithm is presented along with a description of the use of the program. Examples in one, two and three dimensions demonstrate how to obtain estimates of function values with confidence intervals and estimates of first and second derivatives. Probability plots are used as a diagnostic tool to check for model inadequacy.
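The GCV-selected smoothing described above can be sketched in one dimension with SciPy's `make_smoothing_spline` (SciPy >= 1.10), which, when `lam=None`, chooses the smoothing parameter by generalized cross-validation; the test signal and noise level are assumptions:

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 60)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)   # noisy samples of a smooth curve

# lam=None -> the smoothing parameter is picked automatically by GCV,
# the same criterion the paper uses for its Laplacian smoothing splines.
spl = make_smoothing_spline(x, y)
fitted = spl(x)
```

The GCV-smoothed estimate is typically much closer to the underlying function than the raw noisy samples, without the user ever choosing a smoothing parameter by hand.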
A collocation--Galerkin finite element model of cardiac action potential propagation.
Rogers, J M; McCulloch, A D
1994-08-01
A new computational method was developed for modeling the effects of the geometric complexity, nonuniform muscle fiber orientation, and material inhomogeneity of the ventricular wall on cardiac impulse propagation. The method was used to solve a modification to the FitzHugh-Nagumo system of equations. The geometry, local muscle fiber orientation, and material parameters of the domain were defined using linear Lagrange or cubic Hermite finite element interpolation. Spatial variations of time-dependent excitation and recovery variables were approximated using cubic Hermite finite element interpolation, and the governing finite element equations were assembled using the collocation method. To overcome the deficiencies of conventional collocation methods on irregular domains, Galerkin equations for the no-flux boundary conditions were used instead of collocation equations for the boundary degrees-of-freedom. The resulting system was evolved using an adaptive Runge-Kutta method. Converged two-dimensional simulations of normal propagation showed that this method requires less CPU time than a traditional finite difference discretization. The model also reproduced several other physiologic phenomena known to be important in arrhythmogenesis including: Wenckebach periodicity, slowed propagation and unidirectional block due to wavefront curvature, reentry around a fixed obstacle, and spiral wave reentry. In a new result, we observed wavespeed variations and block due to nonuniform muscle fiber orientation. The findings suggest that the finite element method is suitable for studying normal and pathological cardiac activation and has significant advantages over existing techniques.
Control theory and splines, applied to signature storage
NASA Technical Reports Server (NTRS)
Enqvist, Per
1994-01-01
In this report we study the interpolation of a set of points in the plane using control theory. We show how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problem. In fact, the important parameters turn out to be the two eigenvalues of the control matrix.
Enhancement of panoramic image resolution based on swift interpolation of Bezier surface
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Yang, Guo-guang; Bai, Jian
2007-01-01
A panoramic annular lens projects the entire 360-degree view around the optical axis onto an annular plane by means of flat-cylinder perspective. Owing to its infinite depth of field and the linear mapping between object and image, the panoramic imaging system plays an important role in robot vision, surveillance, and virtual reality. An annular image must be unwrapped into a conventional rectangular image without distortion, which requires an interpolation algorithm. Although cubic-spline interpolation can enhance the resolution of the unwrapped image, it is too time-consuming for practical use. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm tailored to the characteristics of panoramic images. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic-spline and bilinear interpolation, while the time consumed is reduced by 78% relative to cubic interpolation.
Chang, Nai-Fu; Chiang, Cheng-Yi; Chen, Tung-Chien; Chen, Liang-Gee
2011-01-01
On-chip implementation of the Hilbert-Huang transform (HHT) has great potential for analyzing non-linear and non-stationary biomedical signals on wearable or implantable sensors in real-time applications. Cubic spline interpolation (CSI) consumes most of the computation in HHT and is the key component of an HHT processor. Traditionally, CSI in HHT is performed only after a large window of signals has been collected, and the long latency violates the real-time requirement of the applications. In this work, we propose to process the incoming signals on-line with small, overlapped data windows without sacrificing interpolation accuracy. Reusing data between windows saves 58% of the multiplications and 73% of the divisions in CSI.
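The windowing idea can be illustrated numerically (window length, overlap, and signal are assumptions, not the authors' parameters): away from its edges, a cubic spline fitted on a small window agrees closely with the spline fitted on the full record, which is why on-line windowed processing need not sacrifice accuracy.

```python
import numpy as np
from scipy.interpolate import CubicSpline

t = np.arange(0.0, 40.0)            # sample instants of a long record
x = np.sin(0.3 * t)                 # illustrative slowly varying signal

window = 16                          # small data window (assumed size)
query = np.arange(6.0, 10.0, 0.25)   # query points well inside the first window

full = CubicSpline(t, x)(query)              # spline over the whole record
local = CubicSpline(t[:window], x[:window])(query)   # spline over one window only
max_dev = np.max(np.abs(full - local))       # small in the window interior
```

The deviation is concentrated near the window boundaries, which is exactly what overlapping consecutive windows is designed to hide.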
Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S
2014-07-01
In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield. 
Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Interactive and Continuous Collision Detection for Avatars in Virtual Environments
2007-01-01
Redon, Stephane; Kim, Young J.; Lin, Ming C.; Manocha, Dinesh; Templeman, Jim
An Unconditionally Monotone C 2 Quartic Spline Method with Nonoscillation Derivatives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jin; Nelson, Karl E.
2018-01-24
Here, a one-dimensional monotone interpolation method is proposed, based on interface reconstruction with partial volumes in slope space using the Hermite cubic spline. The new method is only quartic, yet it is C 2 and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in slope space. An extension of the method to two dimensions is also discussed.
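The paper's C 2 quartic scheme is not available in standard libraries, but the oscillation problem it addresses is easy to demonstrate. The sketch below contrasts a plain cubic spline with SciPy's PCHIP, a standard C 1 monotone cubic interpolant (a weaker, off-the-shelf stand-in for the paper's method), on monotone step-like data:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Monotone data with a sharp step: a plain cubic spline overshoots the data
# range -- the kind of nonphysical oscillation monotone methods eliminate.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.1, 5.0, 5.0, 5.0])

xs = np.linspace(0.0, 5.0, 501)
pchip = PchipInterpolator(x, y)(xs)
spline = CubicSpline(x, y)(xs)

overshoot_spline = spline.max() - y.max()   # clearly positive: the spline overshoots
overshoot_pchip = pchip.max() - y.max()     # non-positive: stays within the data range
```

PCHIP buys monotonicity by dropping to C 1 continuity; the paper's contribution is retaining C 2 while remaining unconditionally monotone.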
Utilizing Serial Measures of Breast Cancer Risk Factors
1998-02-01
Kim, Mimi Y. (New York University Medical Center)
…cubic splines in the model yields a smoother curve than the one fit by Rosenberg et al., which was based on a three-piece spline: two parabolas and a…
Approximation methods for inverse problems involving the vibration of beams with tip bodies
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Two cubic spline based approximation schemes for the estimation of structural parameters associated with the transverse vibration of flexible beams with tip appendages are outlined. The identification problem is formulated as a least squares fit to data subject to the system dynamics which are given by a hybrid system of coupled ordinary and partial differential equations. The first approximation scheme is based upon an abstract semigroup formulation of the state equation while a weak/variational form is the basis for the second. Cubic spline based subspaces together with a Rayleigh-Ritz-Galerkin approach were used to construct sequences of easily solved finite dimensional approximating identification problems. Convergence results are briefly discussed and a numerical example demonstrating the feasibility of the schemes and exhibiting their relative performance for purposes of comparison is provided.
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The algorithm adaptively selects the image coding method, either CSI-based modified JPEG or standard JPEG, for a given target bit rate using the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
Analytic regularization of uniform cubic B-spline deformation fields.
Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C
2012-01-01
Image registration is inherently ill-posed and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is penalizing the thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate that the analytic solution is between 61 and 1371 times faster than a numerical central-differencing solution.
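The key identity, that bending energy is a quadratic form in the spline coefficients, can be checked numerically in a one-dimensional analogue (the paper works in 3-D and derives the matrix analytically; here the Gram matrix Q of second derivatives is filled by brute-force quadrature purely to demonstrate the identity):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                   # cubic B-splines
t = np.r_[[0.0] * (k + 1), np.linspace(0.0, 1.0, 6)[1:-1], [1.0] * (k + 1)]
n = len(t) - k - 1                      # 8 basis functions on [0, 1]

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
# Second derivatives of every basis function on a dense grid.
B2 = np.stack([BSpline(t, np.eye(n)[i], k).derivative(2)(x) for i in range(n)])
Q = (B2 * dx) @ B2.T                    # Q_ij ~ integral of B_i'' * B_j''

rng = np.random.default_rng(1)
c = rng.normal(size=n)                  # arbitrary deformation coefficients
energy_form = c @ Q @ c                 # quadratic-form evaluation
energy_direct = np.sum(BSpline(t, c, k).derivative(2)(x) ** 2) * dx
```

Because integration is linear, the two numbers agree to floating-point accuracy; the paper's speedup comes from computing Q in closed form once and reusing it for every coefficient update.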
Meshless Solution of the Problem on the Static Behavior of Thin and Thick Laminated Composite Beams
NASA Astrophysics Data System (ADS)
Xiang, S.; Kang, G. W.
2018-03-01
For the first time, the static behavior of laminated composite beams is analyzed using the meshless collocation method based on a thin-plate-spline radial basis function. In the approximation of a partial differential equation by using a radial basis function, the shape parameter has an important role in ensuring the numerical accuracy. The choice of a shape parameter in the thin plate spline radial basis function is easier than in other radial basis functions. The governing differential equations are derived based on Reddy's third-order shear deformation theory. Numerical results are obtained for symmetric cross-ply laminated composite beams with simple-simple and cantilever boundary conditions under a uniform load. The results found are compared with available published ones and demonstrate the accuracy of the present method.
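The interpolation building block behind such a collocation method can be sketched with SciPy's `RBFInterpolator` and the thin-plate-spline kernel (the scattered centres and test function below are assumptions; the paper collocates the governing beam equations rather than plain function values):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(50, 2))     # scattered 2-D collocation nodes
values = np.sin(2.0 * np.pi * centers[:, 0]) * np.cos(np.pi * centers[:, 1])

# Thin-plate-spline RBF interpolant: note the kernel has no shape parameter to
# tune, which is the convenience the abstract highlights.
tps = RBFInterpolator(centers, values, kernel='thin_plate_spline')
residual = np.max(np.abs(tps(centers) - values))   # exact at the centres
```

The absence of a free shape parameter (beyond an optional polynomial degree) is precisely why the thin-plate spline is easier to use than, say, Gaussian or multiquadric kernels.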
Fast digital zooming system using directionally adaptive image interpolation and restoration.
Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki
2014-01-01
This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.
Tomography for two-dimensional gas temperature distribution based on TDLAS
NASA Astrophysics Data System (ADS)
Luo, Can; Wang, Yunchu; Xing, Fei
2018-03-01
Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the temperature distribution of combustion gas. The effects of the number of rays, the number of grid cells, and the ray spacing on the temperature reconstruction for parallel rays are investigated. The reconstruction quality improves with the number of rays and levels off once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual-ray method combined with the reconstruction algorithms is tested and found to improve the accuracy of the reconstruction compared with the original method. Linear interpolation and cubic spline interpolation are used to improve the accuracy of the virtual-ray absorption values; according to the calculation results, cubic spline interpolation is better. Finally, the temperature distribution of a TBCC combustion chamber is used to validate these conclusions.
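The advantage of cubic over linear interpolation for the virtual-ray values can be quantified on a smooth absorption-like profile (the synthetic Gaussian and sample counts below are assumptions, not the paper's data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

f = lambda x: np.exp(-((x - 0.5) / 0.18) ** 2)   # smooth synthetic absorption profile
x_sparse = np.linspace(0.0, 1.0, 9)              # coarse "measured ray" positions
x_dense = np.linspace(0.0, 1.0, 201)             # virtual-ray positions

err_linear = np.max(np.abs(np.interp(x_dense, x_sparse, f(x_sparse)) - f(x_dense)))
err_cubic = np.max(np.abs(CubicSpline(x_sparse, f(x_sparse))(x_dense) - f(x_dense)))
```

On smooth profiles the cubic spline's O(h^4) accuracy beats linear interpolation's O(h^2) by a wide margin, consistent with the paper's conclusion.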
Lee, Paul H
2017-06-01
Some confounders are nonlinearly associated with dependent variables, but they are often adjusted for using a linear term. The purpose of this study was to examine the error of mis-specifying the nonlinear confounding effect. We carried out a simulation study to investigate the effect of adjusting for a nonlinear confounder in the estimation of a causal relationship between the exposure and outcome in 3 ways: using a linear term, binning into 5 equal-size categories, or using a restricted cubic spline of the confounder. Continuous, binary, and survival outcomes were simulated. We examined the confounder across varying degrees of measurement error. In addition, we performed a real data analysis examining the 3 strategies for handling the nonlinear effects of accelerometer-measured physical activity in the National Health and Nutrition Examination Survey 2003-2006 data. The mis-specification of a nonlinear confounder had little impact on causal effect estimation for continuous outcomes. For binary and survival outcomes, this mis-specification introduced bias, which could be eliminated using spline adjustment only when there was small measurement error in the confounder. Real data analysis showed that the associations between high blood pressure, high cholesterol, and diabetes and mortality adjusted for physical activity with a restricted cubic spline were about 3% to 11% larger than their counterparts adjusted with a linear term. For continuous outcomes, confounders with nonlinear effects can be adjusted for with a linear term. Spline adjustment should be used for binary and survival outcomes when confounders have small measurement error.
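A restricted cubic spline basis (cubic between the knots, constrained to be linear beyond the boundary knots) can be constructed directly from the standard truncated-power parameterization; the knot positions below are illustrative, and this sketch is not the study's actual adjustment code:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: k knots give k-2 nonlinear columns;
    adding a plain linear term completes the adjustment basis."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    pos3 = lambda u: np.maximum(u, 0.0) ** 3   # truncated cubic (u)_+^3
    cols = []
    for j in range(len(t) - 2):
        c1 = (t[-1] - t[j]) / (t[-1] - t[-2])
        c2 = (t[-2] - t[j]) / (t[-1] - t[-2])
        # The two correction terms cancel the cubic and quadratic growth,
        # leaving a linear tail beyond the last knot.
        cols.append(pos3(x - t[j]) - c1 * pos3(x - t[-2]) + c2 * pos3(x - t[-1]))
    return np.column_stack(cols)

knots = [1.0, 3.0, 5.0, 7.0, 9.0]          # illustrative knot placement
xs = np.array([10.0, 11.0, 12.0])          # all beyond the last knot
B = rcs_basis(xs, knots)
second_diff = B[0] - 2.0 * B[1] + B[2]     # ~0 per column -> linear tails
```

The linear-tail constraint is what keeps the confounder adjustment stable in the sparse extremes of the confounder's distribution.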
Kiani, M A; Sim, K S; Nia, M E; Tso, C P
2015-05-01
A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least-squares error filter is developed for scanning electron microscope (SEM) images. A diversity of sample images is captured, and the performance is found to be better than that of the moving-average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all data required for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In the test cases involving different images, the efficiency of the developed noise-reduction filter proved to be significantly better than that of the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.
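A minimal one-dimensional sketch in the spirit of the pipeline above: smooth a noisy scan line with a Savitzky-Golay filter, then resample it with a cubic spline. The signal, noise level, and filter parameters (window 11, order 3) are assumptions, not the authors' values:

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
t = np.arange(200.0)                         # pixel positions along one scan line
clean = np.sin(2.0 * np.pi * t / 50.0)       # underlying structure
noisy = clean + rng.normal(scale=0.3, size=t.size)

# Savitzky-Golay: local least-squares polynomial smoothing.
smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
# Cubic spline through the smoothed samples allows sub-pixel resampling.
resampled = CubicSpline(t, smoothed)(np.arange(0.0, 199.0, 0.5))
```

Because the Savitzky-Golay window is much shorter than the structure's period, noise is suppressed while the signal shape is largely preserved.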
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yu; Wu, Xiuxiu; Yang, Wei
2014-11-01
Purpose: The use of 4D computed tomography (4D-CT) of the lung is important in lung cancer radiotherapy for tumor localization and treatment planning. Sometimes, dense sampling is not acquired along the superior-inferior direction. This disadvantage results in an interslice thickness that is much greater than the in-plane voxel resolution. Isotropic resolution is necessary for multiplanar display, but the commonly used interpolation operation blurs images. This paper presents a super-resolution (SR) reconstruction method to enhance 4D-CT resolution. Methods: The authors assume that the low-resolution images of different phases at the same position can be regarded as input "frames" from which to reconstruct high-resolution images. The SR technique is used to recover high-resolution images. Specifically, the Demons deformable registration algorithm is used to estimate the motion field between different "frames." Then, the projection onto convex sets approach is implemented to reconstruct high-resolution lung images. Results: The performance of the SR algorithm is evaluated using both simulated and real datasets. The method generates clearer lung images and enhances image structure compared with cubic spline interpolation and the back projection (BP) method. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 40.8% relative to cubic spline interpolation and 10.2% relative to BP. Conclusions: A new algorithm has been developed to improve the resolution of 4D-CT. The algorithm outperforms the cubic spline interpolation and BP approaches by producing images with markedly improved structural clarity and greatly reduced artifacts.
Random regression analyses using B-splines to model growth of Australian Angus cattle
Meyer, Karin
2005-01-01
Regression on B-spline basis functions has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
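A fixed-effects analogue of the preferred model can be sketched with SciPy's least-squares spline fitting, using the knot positions reported above (0, 200, 400, 600, 821 days); the synthetic growth data are an assumption, standing in for the actual weight records:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(3)
age = np.sort(rng.uniform(0.0, 821.0, 500))                  # ages at recording (days)
weight = 40.0 + 500.0 * age / (300.0 + age) \
         + rng.normal(scale=15.0, size=age.size)             # synthetic weights (kg)

k = 2                                                        # quadratic B-splines
# make_lsq_spline expects a full knot vector: boundary knots repeated k+1 times.
t = np.r_[[0.0] * (k + 1), [200.0, 400.0, 600.0], [821.0] * (k + 1)]
fit = make_lsq_spline(age, weight, t, k=k)
```

The mixed-model analyses in the paper estimate random regression coefficients on this same basis; the least-squares fit here only illustrates how the knot vector and spline order define the mean growth curve.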
Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel
NASA Astrophysics Data System (ADS)
Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads
2015-03-01
Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields must be evaluated at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative data showed that Wendland SVF-based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in amygdala) and B-spline free-form deformation (p<0.05 in amygdala and cortical gray matter).
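The Wendland C^2 kernel referred to above has a simple closed form, phi(r) = (1-r)^4 (4r+1) on [0,1] and zero outside, and is positive definite in up to three dimensions; the sketch below (with an assumed 1-D node layout and support radius) shows why the squared-norm regularizer c^T K c is well defined:

```python
import numpy as np

def wendland_c2(r):
    """Wendland's compactly supported C^2 kernel phi_{3,1}(r) = (1-r)^4 (4r+1)
    for r in [0, 1), zero elsewhere; positive definite in dimensions <= 3."""
    r = np.abs(np.asarray(r, dtype=float))
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

# Kernel matrix for a 1-D velocity-field parameterization (node spacing and
# support radius are illustrative): symmetric positive definite, so c^T K c
# is a genuine squared norm of the parameterized field.
x = np.linspace(0.0, 4.0, 20)
K = wendland_c2((x[:, None] - x[None, :]) / 1.5)   # support radius 1.5 (assumed)
eigmin = np.linalg.eigvalsh(K).min()
```

Compact support keeps K sparse in practice, which is what gives the Wendland kernel B-spline-like computational cost.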
NASA Astrophysics Data System (ADS)
Jin, Renchao; Liu, Yongchuan; Chen, Mi; Zhang, Sheng; Song, Enmin
2018-01-01
A robust contour propagation method is proposed to help physicians delineate lung tumors on all phase images of four-dimensional computed tomography (4D-CT) by only manually delineating the contours on a reference phase. The proposed method models the trajectory surface swept by a contour in a respiratory cycle as a tensor-product surface of two closed cubic B-spline curves: a non-uniform B-spline curve which models the contour and a uniform B-spline curve which models the trajectory of a point on the contour. The surface is treated as a deformable entity, and is optimized from an initial surface by moving its control vertices such that the sum of the intensity similarities between the sampling points on the manually delineated contour and their corresponding ones on different phases is maximized. The initial surface is constructed by fitting the manually delineated contour on the reference phase with a closed B-spline curve. In this way, the proposed method can focus the registration on the contour instead of the entire image to prevent the deformation of the contour from being smoothed by its surrounding tissues, and greatly reduce the time consumption while keeping the accuracy of the contour propagation as well as the temporal consistency of the estimated respiratory motions across all phases in 4D-CT. Eighteen 4D-CT cases with 235 gross tumor volume (GTV) contours on the maximal inhale phase and 209 GTV contours on the maximal exhale phase are manually delineated slice by slice. The maximal inhale phase is used as the reference phase, which provides the initial contours. On the maximal exhale phase, the Jaccard similarity coefficient between the propagated GTV and the manually delineated GTV is 0.881 +/- 0.026, and the Hausdorff distance is 3.07 +/- 1.08 mm. The time for propagating the GTV to all phases is 5.55 +/- 6.21 min. 
The results are better than those of the fast adaptive stochastic gradient descent B-spline method, the 3D + t B-spline method and the diffeomorphic demons method. The proposed method is useful for helping physicians delineate target volumes efficiently and accurately.
[A correction method of baseline drift of discrete spectrum of NIR].
Hu, Ai-Qin; Yuan, Hong-Fu; Song, Chun-Feng; Li, Xiao-Yu
2014-10-01
In the present paper, a new method for correcting baseline drift in discrete spectra is proposed, combining cubic spline interpolation with the first-order derivative. A fitted spectrum is constructed by cubic spline interpolation, using the data points of the discrete spectrum as interpolation nodes; the fitted spectrum is differentiable. The first-order derivative is applied to the fitted spectrum to calculate the derivative spectrum. The values at the wavelengths of the original discrete spectrum are then taken from the derivative spectrum to constitute the first-derivative version of the discrete spectrum, thereby correcting its baseline drift. The effectiveness of the new method is demonstrated by comparing the performance of multivariate models built with the original spectra, directly differentiated spectra, and spectra pretreated by the new method. The results show that the negative effects of baseline drift on multivariate model performance can be effectively eliminated by the new method.
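The core mechanism can be shown in a few lines (wavelength grid and band shape are synthetic assumptions): fit a cubic spline through the discrete points, differentiate the spline, and a constant baseline offset vanishes from the derivative spectrum.

```python
import numpy as np
from scipy.interpolate import CubicSpline

wl = np.linspace(1100.0, 2500.0, 141)                   # illustrative NIR grid (nm)
signal = np.exp(-0.5 * ((wl - 1700.0) / 60.0) ** 2)     # synthetic absorption band

spec_a = signal + 0.10      # same spectrum with two different
spec_b = signal + 0.35      # constant baseline offsets

# Spline fit through the discrete points, then first derivative of the spline.
deriv_a = CubicSpline(wl, spec_a).derivative()(wl)
deriv_b = CubicSpline(wl, spec_b).derivative()(wl)
# The derivative spectra coincide: the constant offset has been removed.
```

Since spline interpolation is linear and reproduces constants, differentiation removes a constant offset exactly (and turns a linear drift into a constant, which a second derivative would remove in turn).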
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy which combines iteratively cubic spline fitting baseline correction method with discriminant partial least squares qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food, Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, using the combination of spectra preprocessing iteratively cubic spline fitting (ICSF) baseline correction with principal component analysis (PCA) and discriminant partial least squares (DPLS) classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, the DPLS classification can discriminate the class assignment of unknown banned additives using the information of differences in relative intensities. The results demonstrate that SERS spectroscopy combined with ICSF baseline correction method and exploratory analysis methodology DPLS classification can be potentially used for distinguishing the banned food additives in field of food safety.
Computer programs for smoothing and scaling airfoil coordinates
NASA Technical Reports Server (NTRS)
Morgan, H. L., Jr.
1983-01-01
Detailed descriptions are given of the theoretical methods and associated computer codes of a program to smooth and a program to scale arbitrary airfoil coordinates. The smoothing program utilizes both least-squares polynomial and least-squares cubic spline techniques to iteratively smooth the second derivatives of the y-axis airfoil coordinates with respect to a transformed x-axis system which unwraps the airfoil and stretches the nose and trailing-edge regions. The corresponding smooth airfoil coordinates are then determined by solving a tridiagonal matrix of simultaneous cubic-spline equations relating the y-axis coordinates and their corresponding second derivatives. A technique for computing the camber and thickness distribution of the smoothed airfoil is also discussed. The scaling program can then be used to scale the thickness distribution generated by the smoothing program to a specific maximum thickness, which is then combined with the camber distribution to obtain the final scaled airfoil contour. Computer listings of the smoothing and scaling programs are included.
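The tridiagonal system relating spline values to their second derivatives is the classical one: on a uniform grid with spacing h, M[i-1] + 4*M[i] + M[i+1] = 6*(y[i+1] - 2*y[i] + y[i-1])/h**2 for interior nodes, with M[0] = M[n-1] = 0 for a natural spline. A minimal sketch (synthetic data, not airfoil coordinates) solves it with a banded solver and cross-checks against SciPy:

```python
import numpy as np
from scipy.linalg import solve_banded
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 1.0, 11)
y = np.sin(2.0 * np.pi * x)
h = x[1] - x[0]
n = x.size

# Right-hand side of the interior equations.
rhs = (y[2:] - 2.0 * y[1:-1] + y[:-2]) * 6.0 / h**2
# Tridiagonal coefficient matrix in solve_banded's (upper, diag, lower) layout.
ab = np.zeros((3, n - 2))
ab[0, 1:] = 1.0      # superdiagonal
ab[1, :] = 4.0       # main diagonal
ab[2, :-1] = 1.0     # subdiagonal

M = np.zeros(n)                          # natural end conditions: M[0] = M[-1] = 0
M[1:-1] = solve_banded((1, 1), ab, rhs)  # interior second derivatives

# Cross-check: second derivative of SciPy's natural cubic spline at the knots.
ref = CubicSpline(x, y, bc_type='natural')(x, 2)
```

The banded solve is O(n), which is why this formulation scales to dense airfoil coordinate sets.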
Approximating a retarded-advanced differential equation that models human phonation
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2017-11-01
In [1, 2, 3] we obtained the numerical solution of a linear mixed-type functional differential equation (MTFDE), introduced initially in [4], considering the autonomous and non-autonomous cases by collocation, least squares, and finite element methods with a B-spline basis set. The present work introduces a numerical scheme using the least squares method (LSM) and Gaussian basis functions to solve numerically a nonlinear mixed-type equation with symmetric delay and advance which models human phonation. The preliminary results are promising: we obtain an accuracy comparable to that of the previous results.
NASA Astrophysics Data System (ADS)
Cheng, Ju; Lu, Jian; Zhang, Hong-Chao; Lei, Feng; Sardar, Maryam; Bian, Xin-Tian; Zuo, Fen; Shen, Zhong-Hua; Ni, Xiao-Wu; Shi, Jin
2018-05-01
Not Available. Supported by the National Natural Science Foundation of China under Grant No 11604115, the Educational Commission of Jiangsu Province of China under Grant No 17KJA460004, and the Huaian Science and Technology Funds under Grant No HAC201701.
Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.
Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M
2017-05-15
We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first- and second-moment information from the focal spots of the SH sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Since it provides higher-order local wavefront estimates with quadratic and cubic basis functions, the proposed method can achieve the same accuracy with SH arrays of fewer subapertures and, correspondingly, larger lenses, which can be beneficial for application in low-light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first-moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher-order aberrations, and that SABRE-M can match the performance of SABRE on a SH grid of halved sampling.
Slice-to-Volume Nonrigid Registration of Histological Sections to MR Images of the Human Brain
Osechinskiy, Sergey; Kruggel, Frithjof
2011-01-01
Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function. PMID:22567290
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-25
A novel strategy that combines an iteratively cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction preprocessing with principal component analysis (PCA) and DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information contained in differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory analysis methodology of DPLS classification can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.
Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms
NASA Astrophysics Data System (ADS)
Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie
2006-02-01
This paper presents a bicubic uniform B-spline wavefront fitting technique for deriving the analytical expression of the object wavefront used in computer-generated holograms (CGHs). In many cases, to reduce the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, we have to fit an analytical expression to the object wavefront. Zernike polynomials are suitable for fitting the wavefronts of centrosymmetric optical systems, but not those of axisymmetrical optical systems. Although a high-degree polynomial fit achieves high precision at the fitting nodes, its greatest shortcoming is that any departure from the fitting nodes can result in large fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time for coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the character mesh of the bicubic uniform B-spline wavefront, the bicubic uniform B-spline wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront are fitted with bicubic uniform B-splines as well as with high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is the more competitive method for fitting the analytical expression of the object wavefront used in off-axis CGHs, owing to its higher fitting precision and C2 continuity.
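As a rough illustration of bicubic spline wavefront fitting (using SciPy's FITPACK wrapper rather than the paper's MATLAB matrix formulation), a smooth wavefront sampled on a grid can be fitted and then evaluated between the nodes without the pulsation that plagues high-degree polynomial fits. The wavefront below is a made-up low-order surface, not one of the paper's four test cases.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Sample a stand-in object wavefront on a 41x41 grid.
x = np.linspace(-1.0, 1.0, 41)
y = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(x, y, indexing="ij")
wavefront = X ** 2 + 2.0 * Y ** 2 + 0.5 * X * Y

# Bicubic spline fit (kx=ky=3); s=0 by default, i.e. interpolation.
spline = RectBivariateSpline(x, y, wavefront, kx=3, ky=3)

# Evaluate off the fitting nodes: the C2 bicubic spline stays accurate there.
xf = np.linspace(-1.0, 1.0, 301)
target = xf[:, None] ** 2 + 2.0 * xf[None, :] ** 2 + 0.5 * xf[:, None] * xf[None, :]
err = np.max(np.abs(spline(xf, xf) - target))
```

Because the sampled surface is a polynomial of coordinate degree two, the cubic spline reproduces it essentially exactly everywhere, nodes or not.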
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.
1975-01-01
The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.
Random regression analyses using B-spline functions to model growth of Nellore cattle.
Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G
2012-02-01
The objective of this study was to estimate (co)variance components using random regression on B-spline functions applied to weight records obtained from birth to adulthood. A total of 82 064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as a random covariate. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight, such as at young ages, should be performed taking into account an increase in mature cow weight. This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained on range conditions.
There is limited scope for modifying the growth curve of Nellore cattle with the aim of selecting for rapid growth at young ages while maintaining constant adult weight.
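The covariable part of such a random regression model is simply a B-spline basis evaluated at each animal's age. A minimal sketch of the quadratic, three-segment basis of the selected model — the knot positions (here 0, 1, 2 and 3 years in days) are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 2                                        # quadratic, as in the chosen model
interior = np.array([365.0, 730.0])          # assumed interior knots (days)
# Clamped knot vector: boundary knots repeated k+1 times -> 3 segments.
knots = np.concatenate([[0.0] * (k + 1), interior, [1095.0] * (k + 1)])
n_basis = len(knots) - k - 1                 # = 5 basis functions

ages = np.linspace(0.0, 1094.0, 50)          # ages at weighing, days
# Design matrix Z: column j is the j-th B-spline evaluated at the ages.
Z = np.column_stack([
    BSpline(knots, np.eye(n_basis)[j], k)(ages) for j in range(n_basis)
])
```

Each row of `Z` sums to one (partition of unity), which is what keeps B-spline random regressions numerically better conditioned than high-order polynomials.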
Hernandez, Andrew M; Boone, John M
2014-04-01
Monte Carlo methods were used to generate lightly filtered high resolution x-ray spectra spanning from 20 kV to 640 kV. X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R2) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, "Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector," Phys. Med. Biol. 24, 505-517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9 keV at 20 kV and 169 keV at 640 kV.
Ranging from 20 kV to 640 kV, 621 x-ray spectra were produced and are available at 1 kV tube potential intervals. The spectra are tabulated at 1 keV intervals. TASMICS spectra were shown to be largely equivalent to published spectral models and are available in spreadsheet format for interested users by emailing the corresponding author (JMB). © 2014 American Association of Physicists in Medicine.
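The interpolation step described above — a cubic spline in tube potential for each energy bin, sampled on a 1 kV grid — can be sketched for a single bin. The fluence values below are made up for illustration; they are not TASMICS data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Fluence in one energy bin at a few simulated tube potentials (fabricated).
kv_sim = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])
fluence_bin = np.array([0.0, 1.2, 3.9, 7.8, 12.5, 17.9])

# Piecewise cubic polynomial in kV for this bin, as in the TASMICS scheme.
spline = CubicSpline(kv_sim, fluence_bin)

kv_fine = np.arange(20, 121)          # 1 kV intervals
fluence_fine = spline(kv_fine)
```

Repeating this per 1 keV energy bin and stacking the columns yields the dense family of spectra.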
Quadratic spline subroutine package
Rasmussen, Lowell A.
1982-01-01
A continuous piecewise quadratic function with continuous first derivative is devised for approximating a single-valued, but unknown, function represented by a set of discrete points. The quadratic is proposed as a treatment intermediate between using the angular (but reliable, easily constructed and manipulated) piecewise linear function and using the smoother (but occasionally erratic) cubic spline. Neither iteration nor the solution of a system of simultaneous equations is necessary for determining the coefficients. Several properties of the quadratic function are given. A set of five short FORTRAN subroutines is provided for generating the coefficients (QSC), finding function value and derivatives (QSY), integrating (QSI), finding extrema (QSE), and computing arc length and the curvature-squared integral (QSK). (USGS)
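One classical iteration-free construction of such a C1 piecewise quadratic propagates the knot slopes from a single starting value, so no simultaneous equations arise. This is a sketch of that generic scheme, not necessarily the exact construction inside QSC.

```python
import numpy as np

def quad_spline_slopes(x, y, d0=None):
    # Each interval's right-knot slope follows from the left one:
    #   d[i+1] = 2*s[i] - d[i],  with secant slope s[i] = (y[i+1]-y[i])/h[i],
    # which enforces continuity of value and first derivative.
    h = np.diff(x)
    s = np.diff(y) / h
    d = np.empty_like(y)
    d[0] = s[0] if d0 is None else d0         # free starting slope
    for i in range(len(h)):
        d[i + 1] = 2.0 * s[i] - d[i]
    return d

def quad_spline_eval(x, y, d, xq):
    # On interval i: q(t) = y_i + d_i*t + ((s_i - d_i)/h_i)*t^2, t = xq - x_i.
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    t = xq - x[i]
    h = x[i + 1] - x[i]
    s = (y[i + 1] - y[i]) / h
    return y[i] + d[i] * t + (s - d[i]) / h * t ** 2

x = np.array([0.0, 1.0, 2.0, 3.0])
y = x ** 2                                    # a parabola is reproduced exactly
d = quad_spline_slopes(x, y, d0=0.0)
```

The propagated slopes can oscillate on rough data, which is presumably one of the issues the published package's fuller treatment addresses.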
Modeling of time trends and interactions in vital rates using restricted regression splines.
Heuer, C
1997-03-01
For the analysis of time trends in incidence and mortality rates, the age-period-cohort (apc) model has become a widely accepted method. The considered data are arranged in a two-way table by age group and calendar period, which are mostly subdivided into 5- or 10-year intervals. The disadvantage of this approach is the loss of information by data aggregation and the problems of estimating interactions in the two-way layout without replications. In this article we show how splines can be useful when yearly data, i.e., 1-year age groups and 1-year periods, are given. The estimated spline curves are still smooth and represent yearly changes in the time trends. Further, it is straightforward to include interaction terms via the tensor product of the spline functions. If the data are given in a nonrectangular table, e.g., 5-year age groups and 1-year periods, the period and cohort variables can be parameterized by splines, while the age variable is parameterized as fixed effect levels, which leads to a semiparametric apc model. An important methodological issue in developing the nonparametric and semiparametric models is stability of the estimated spline curve at the boundaries. Here cubic regression splines will be used, which are constrained to be linear in the tails. Another point of importance is the nonidentifiability problem due to the linear dependency of the three time variables. This will be handled by decomposing the basis of each spline by orthogonal projection into constant, linear, and nonlinear terms, as suggested by Holford (1983, Biometrics 39, 311-324) for the traditional apc model. The advantage of using splines for yearly data compared to the traditional approach for aggregated data is the more accurate curve estimation for the nonlinear trend changes and the simple way of modeling interactions between the time variables. The method will be demonstrated with hypothetical data as well as with cancer mortality data.
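A cubic regression spline constrained to be linear in the tails is exactly the restricted cubic spline; its standard (Harrell-style) basis can be written down directly. The knots below are illustrative, and the apc application would build one such basis per time variable (with tensor products for interactions).

```python
import numpy as np

def rcs_basis(x, knots):
    # Restricted cubic spline basis: cubic between the knots, but constrained
    # to be linear beyond both boundary knots.
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    pos3 = lambda u: np.maximum(u, 0.0) ** 3   # truncated cubic (u)_+^3
    denom = t[-1] - t[-2]
    cols = [x]                                 # the unrestricted linear term
    for j in range(len(t) - 2):
        cols.append(pos3(x - t[j])
                    - pos3(x - t[-2]) * (t[-1] - t[j]) / denom
                    + pos3(x - t[-1]) * (t[-2] - t[j]) / denom)
    return np.column_stack(cols)

# Beyond the last knot every basis column is linear, so second differences
# of equally spaced evaluations there vanish.
X = rcs_basis(np.array([4.0, 5.0, 6.0]), [0.0, 1.0, 2.0, 3.0])
```

With k knots this yields k-1 columns (linear term plus k-2 restricted cubics), which is what keeps the boundary behaviour of the fitted trend stable.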
SolTrace FAQs | Concentrating Solar Power | NREL
Noise correction on LANDSAT images using a spline-like algorithm
NASA Technical Reports Server (NTRS)
Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.
1985-01-01
Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, that of using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost) the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the necessity of solving a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by the data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
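A local cubic built from four neighboring points with no linear system to solve is well illustrated by the Catmull-Rom form; this is a sketch in that spirit (the paper's weighted spline may use different weights). Here a dropped scan line in one image column is reconstructed from the two valid lines above and below it.

```python
import numpy as np

def catmull_rom(y0, y1, y2, y3, t):
    # Local C1 cubic through four neighbors; t in [0, 1] spans y1 -> y2.
    return 0.5 * ((2.0 * y1)
                  + (-y0 + y2) * t
                  + (2.0 * y0 - 5.0 * y1 + 4.0 * y2 - y3) * t ** 2
                  + (-y0 + 3.0 * y1 - 3.0 * y2 + y3) * t ** 3)

# One image column; 0.0 marks the lost stripe line.
col = np.array([10.0, 12.0, 0.0, 18.0, 22.0])
col[2] = catmull_rom(col[0], col[1], col[3], col[4], 0.5)
```

Unlike a simple two-point average (which would give 15.0 here), the four-point cubic also uses the local curvature of the column.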
Gradient design for liquid chromatography using multi-scale optimization.
López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C
2018-01-26
In reversed phase-liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem in a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique of fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
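The parameterization step — a small set of coefficients on a cubic B-spline basis defining a smooth solvent profile, then sampled as an M-point polygonal — can be sketched as follows. The run time, coefficient values, and M are illustrative assumptions, and the subdivision-scheme construction and pattern-search optimizer are omitted.

```python
import numpy as np
from scipy.interpolate import BSpline

t_run = 30.0                                 # assumed run time, min
n_coef, k = 8, 3                             # 8 coefficients, cubic basis
# Clamped knot vector of length n_coef + k + 1.
knots = np.concatenate([[0.0] * k,
                        np.linspace(0.0, t_run, n_coef - k + 1),
                        [t_run] * k])
# Increasing coefficients -> increasing gradient (%B), one of the constraints.
coef = np.array([5.0, 10.0, 15.0, 30.0, 45.0, 60.0, 80.0, 95.0])
phi = BSpline(knots, coef, k)                # smooth solvent profile phi(t)

t_nodes = np.linspace(0.0, t_run, 11)        # M = 11 nodes of the polygonal
gradient = np.column_stack([t_nodes, phi(t_nodes)])
```

In the multi-scale loop the optimizer would adjust `coef` at each scale κ, with roughly twice as many coefficients at the next finer scale.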
NASA Astrophysics Data System (ADS)
Zabihi, F.; Saffarian, M.
2016-07-01
The aim of this article is to obtain the numerical solution of the two-dimensional KdV-Burgers equation. We construct the solution using an approach based on collocation points and the thin plate spline radial basis function, which builds an approximate solution by discretizing time and space into small steps. We use a predictor-corrector scheme to avoid solving the nonlinear system. The results of numerical experiments are compared with analytical solutions to confirm the accuracy and efficiency of the presented scheme.
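The spatial ingredient of such a scheme — thin plate spline radial basis functions centered at scattered collocation points — can be sketched with SciPy. This shows only interpolation of a smooth field on 2-D collocation points; the time stepping and predictor-corrector loop of the paper are not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))        # collocation points
vals = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])

# Thin plate spline RBF expansion through the collocation values.
interp = RBFInterpolator(pts, vals, kernel="thin_plate_spline")

# Accuracy at points away from the centers (interior of the domain).
test_pts = rng.uniform(-0.8, 0.8, size=(50, 2))
truth = np.sin(np.pi * test_pts[:, 0]) * np.cos(np.pi * test_pts[:, 1])
err = np.max(np.abs(interp(test_pts) - truth))
```

In the PDE setting the same expansion is differentiated analytically to enforce the equation at the collocation points at each time step.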
[Relationship between shift work and overweight/obesity in male steel workers].
Xiao, M Y; Wang, Z Y; Fan, H M; Che, C L; Lu, Y; Cong, L X; Gao, X J; Liu, Y J; Yuan, J X; Li, X M; Hu, B; Chen, Y P
2016-11-10
Objective: To investigate the relationship between shift work and overweight/obesity in male steel workers. Methods: A questionnaire survey was conducted among male steel workers selected during health examinations in Tangshan Steel Company from March 2015 to March 2016. The relationship between shift work and overweight/obesity in the male steel workers was analyzed by using a logistic regression model and a restricted cubic spline model. Results: A total of 7 262 male steel workers were surveyed; the overall prevalence of overweight/obesity was 64.5% (4 686/7 262), with an overweight rate of 34.3% and an obesity rate of 30.2%. After adjusting for age, educational level and average family income level per month by multivariable logistic regression analysis, shift work was associated with overweight/obesity and with obesity in the male steel workers, with ORs of 1.19 (95%CI: 1.05-1.35) and 1.15 (95%CI: 1.00-1.32), respectively. Restricted cubic spline model analysis showed that the relationship between shift work years and overweight/obesity in the male steel workers was a nonlinear dose-response one (nonlinear test χ²=7.43, P<0.05), as was the relationship between shift work years and obesity (nonlinear test χ²=10.48, P<0.05). Conclusion: Shift work was associated with overweight and obesity in the male steel workers, and shift work years and overweight/obesity had a nonlinear relationship.
NASA Astrophysics Data System (ADS)
Tan, Maxine; Li, Zheng; Moore, Kathleen; Thai, Theresa; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
Ovarian cancer is the second most common cancer among gynecologic malignancies, and has the highest death rate. Since the majority of ovarian cancer patients (>75%) are diagnosed at an advanced stage with tumor metastasis, chemotherapy is often required after surgery to remove the primary ovarian tumors. In order to quickly assess patient response to chemotherapy in clinical trials, two sets of CT examinations are taken pre- and post-therapy (e.g., after 6 weeks). Treatment efficacy is then evaluated based on the Response Evaluation Criteria in Solid Tumors (RECIST) guideline, whereby tumor size is measured by the longest diameter on one CT image slice and only a subset of selected tumors is tracked. However, this criterion cannot fully represent the volumetric changes of the tumors and might miss potentially problematic unmarked tumors. Thus, we developed a new CAD approach to measure and analyze volumetric tumor growth/shrinkage using a cubic B-spline deformable image registration method. In this initial study, on 14 sets of pre- and post-treatment CT scans, we registered the two consecutive scans using cubic B-spline registration in a multiresolution (coarse-to-fine) framework. We used the Mattes mutual information metric as the similarity criterion and the L-BFGS-B optimizer. The results show that our method can quantify volumetric changes in the tumors more accurately than RECIST, and can also detect (highlight) potentially problematic regions that were not originally targeted by radiologists. Despite the encouraging results of this preliminary study, further validation of scheme performance is required using large and diverse datasets in the future.
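The transformation model behind cubic B-spline registration is a coarse grid of displacement coefficients, smoothly upsampled to a dense field that warps the moving image. The sketch below shows only that model on a toy 2-D image (no mutual-information metric, no optimizer); grid size and displacement are arbitrary.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                      # stand-in "tumor" region

# Coarse 5x5 grid of (dy, dx) B-spline coefficients; push the center 3 px in x.
coarse = np.zeros((2, 5, 5))
coarse[1, 2, 2] = 3.0

# zoom with order=3 performs cubic B-spline interpolation of each component,
# turning the coarse grid into a smooth dense displacement field.
dense = np.stack([zoom(c, 64 / 5, order=3)[:64, :64] for c in coarse])

# Backward warp: sample the image at the displaced coordinates.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
warped = map_coordinates(img, [yy + dense[0], xx + dense[1]], order=1)
```

In a real registration the optimizer (L-BFGS-B in the paper) would adjust the coarse coefficients, at successively finer grids, to maximize the similarity metric between the warped and fixed scans.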
Compression of contour data through exploiting curve-to-curve dependence
NASA Technical Reports Server (NTRS)
Yalabik, N.; Cooper, D. B.
1975-01-01
An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression through the use of cubic spline approximation is taken and extended by investigating the additional compressibility achievable through the exploitation of curve-to-curve structure. Results for one of the models under investigation are reported.
A general framework for parametric survival analysis.
Crowther, Michael J; Lambert, Paul C
2014-12-30
Parametric survival models are being increasingly used as an alternative to the Cox model in biomedical research. Through direct modelling of the baseline hazard function, we can gain greater understanding of the risk profile of patients over time, obtaining absolute measures of risk. Commonly used parametric survival models, such as the Weibull, make restrictive assumptions of the baseline hazard function, such as monotonicity, which is often violated in clinical datasets. In this article, we extend the general framework of parametric survival models proposed by Crowther and Lambert (Journal of Statistical Software 53:12, 2013), to incorporate relative survival, and robust and cluster robust standard errors. We describe the general framework through three applications to clinical datasets, in particular, illustrating the use of restricted cubic splines, modelled on the log hazard scale, to provide a highly flexible survival modelling framework. Through the use of restricted cubic splines, we can derive the cumulative hazard function analytically beyond the boundary knots, resulting in a combined analytic/numerical approach, which substantially improves the estimation process compared with only using numerical integration. User-friendly Stata software is provided, which significantly extends parametric survival models available in standard software. Copyright © 2014 John Wiley & Sons, Ltd.
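The analytic tail evaluation rests on the restriction property of the spline: beyond the boundary knots the restricted cubic spline is linear in x = log t, so the hazard has Weibull form there and its integral is closed-form. A sketch of the assumed algebra (symbols γ0, γ1 denote the tail intercept and slope, not notation from the paper):

```latex
% Beyond the last knot t_K the restricted cubic spline is linear in \log t:
%   \log h(t) = \gamma_0 + \gamma_1 \log t, \qquad t > t_K,
% so the tail of the cumulative hazard integrates analytically:
\begin{aligned}
h(t) &= e^{\gamma_0}\, t^{\gamma_1}, \\
\int_{t_K}^{t} h(u)\,du &= \frac{e^{\gamma_0}}{\gamma_1 + 1}
   \left( t^{\gamma_1 + 1} - t_K^{\gamma_1 + 1} \right),
   \qquad \gamma_1 \neq -1 .
\end{aligned}
```

Only the portion of the cumulative hazard between the boundary knots then needs numerical quadrature, which is the combined analytic/numerical approach the abstract describes.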
Construction of a WMR for trajectory tracking control: experimental results.
Silva-Ortigoza, R; Márquez-Sánchez, C; Marcelino-Aranda, M; Marciano-Melchor, M; Silva-Ortigoza, G; Bautista-Quintero, R; Ramos-Silvestre, E R; Rivera-Díaz, J C; Muñoz-Carrillo, D
2013-01-01
This paper reports a solution for trajectory tracking control of a differential drive wheeled mobile robot (WMR) based on a hierarchical approach. The general design and construction of the WMR are described. The hierarchical controller proposed has two components: a high-level control and a low-level control. The high-level control law is based on an input-output linearization scheme for the robot kinematic model, which provides the desired angular velocity profiles that the WMR has to track in order to achieve the desired position (x∗, y∗) and orientation (φ∗). Then, a low-level control law, based on a proportional integral (PI) approach, is designed to control the velocity of the WMR wheels to ensure those tracking features. Regarding the trajectories, this paper provides the solution for the following cases: (1) time-varying parametric trajectories such as straight lines and parabolas and (2) smooth curves fitted by cubic splines which are generated by the desired data points {(x₁∗, y₁∗),..., (xₙ∗, yₙ∗)}. A straightforward algorithm is developed for constructing the cubic splines. Finally, this paper includes an experimental validation of the proposed technique by employing a DS1104 dSPACE electronic board along with MATLAB/Simulink software.
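Fitting cubic splines through desired waypoints and sampling position and velocity references can be sketched with SciPy's spline routines standing in for the paper's construction algorithm; the waypoint times and coordinates below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Desired data points (x_i*, y_i*) with assumed passage times, in seconds.
t_way = np.array([0.0, 2.0, 4.0, 6.0])
x_way = np.array([0.0, 1.0, 1.5, 3.0])
y_way = np.array([0.0, 0.5, 1.5, 2.0])

# One natural cubic spline per coordinate, parameterized by time.
sx = CubicSpline(t_way, x_way, bc_type="natural")
sy = CubicSpline(t_way, y_way, bc_type="natural")

t = np.linspace(0.0, 6.0, 61)
xref, yref = sx(t), sy(t)                 # position references
vxref, vyref = sx(t, 1), sy(t, 1)         # velocity references (1st derivative)
```

The high-level controller would convert these references into the wheel angular velocity profiles tracked by the PI loops.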
Modeling total cholesterol as predictor of mortality: the low-cholesterol paradox.
Wesley, David; Cox, Hugh F
2011-01-01
Elevated total cholesterol is well established as a risk factor for coronary artery disease and cardiovascular mortality. However, less attention is paid to the association between low cholesterol levels and mortality, the low-cholesterol paradox. In this paper, restricted cubic splines (RCS) and complex survey methodology are used to show that the low-cholesterol paradox is present in the laboratory, examination, and mortality follow-up data from the Third National Health and Nutrition Examination Survey (NHANES III). A series of Cox proportional hazards models demonstrates that RCS are necessary to incorporate the desired covariates while avoiding the use of categorical variables. Valid concerns regarding the accuracy of such predictive models are discussed. The one certain conclusion is that low cholesterol levels are markers for excess mortality, just as high levels are. Restricted cubic splines provide the necessary flexibility to demonstrate the U-shaped relationship between cholesterol and mortality without resorting to binning the results. Cox PH models perform well at identifying associations between risk factors and outcomes of interest such as mortality. However, the predictions from such a model may not be as accurate as common statistics suggest, and predictive models should be used with caution.
Computer modeling of interferograms of flowing plasma and determination of the phase shift
NASA Astrophysics Data System (ADS)
Blažek, J.; Kříž, P.; Stach, V.
2000-03-01
Interferograms of the flowing gas contain information about the phase shift between the object and reference beams. Determination of the phase shift is the first step in obtaining information about the inner density distribution in cylindrically symmetric discharges. A slightly modified Takeda method based on the Fourier transform is applied to extract the phase information from the interferogram. A least-squares spline approximation is used for approximating and smoothing the intensity profiles. At the same time, cubic splines with their end-knot conditions naturally realize “Hanning windows”, eliminating unwanted edge effects. For the purpose of numerically testing the method, we developed a code that, for a density given in advance, reconstructs the corresponding interferogram.
NASA Astrophysics Data System (ADS)
Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu
2015-12-01
Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records the dispersed radiation values, and these data are very useful for experimental work and solar device development. In addition, complete data observations are needed for modeling and designing solar radiation system applications. Unfortunately, gaps in the solar radiation record frequently occur owing to several technical problems, mainly with the monitoring device. To address this, missing values are estimated so that absent values can be substituted with imputed data. This paper evaluates several piecewise interpolation techniques, namely linear, spline, cubic, and nearest-neighbor, for dealing with missing values in hourly solar radiation data. It then proposes extended work investigating the potential of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.
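The comparison of piecewise imputation techniques can be sketched with SciPy's `interp1d` on a synthetic hourly curve with hypothetical gaps. The data, gap positions, and error metric below are illustrative; the paper's cubic Bezier and Said-Ball estimators are not implemented here.

```python
import numpy as np
from scipy.interpolate import interp1d

# Synthetic "hourly solar radiation" day with a few missing hours
# (illustrative data only, not the station records from the paper).
hours = np.arange(24, dtype=float)
truth = np.maximum(0.0, 800.0 * np.sin(np.pi * (hours - 6) / 12))  # W/m^2
missing = np.array([9, 13, 17])                  # hypothetical gap hours
known = np.setdiff1d(hours.astype(int), missing)

# Impute the gaps with each piecewise technique and score against truth.
rmse = {}
for kind in ("linear", "nearest", "cubic"):
    f = interp1d(hours[known], truth[known], kind=kind)
    imputed = f(hours[missing])
    rmse[kind] = float(np.sqrt(np.mean((imputed - truth[missing]) ** 2)))
```

On smooth daytime data like this, the cubic variants typically beat nearest-neighbor imputation by a wide margin, which is consistent with the paper's preference for higher-order piecewise methods.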
Computation of free oscillations of the earth
Buland, Raymond P.; Gilbert, F.
1984-01-01
Although free oscillations of the Earth may be computed by many different methods, numerous practical considerations have led us to use a Rayleigh-Ritz formulation with piecewise cubic Hermite spline basis functions. By treating the resulting banded matrix equation as a generalized algebraic eigenvalue problem, we are able to achieve great accuracy and generality and a high degree of automation at a reasonable cost. © 1984.
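The Rayleigh-Ritz reduction to a banded generalized algebraic eigenvalue problem can be illustrated on a 1-D analogue. The sketch below assembles stiffness and mass matrices for a vibrating string and solves K v = λ M v with `scipy.linalg.eigh`; for brevity it uses piecewise-linear elements rather than the cubic Hermite splines of the paper, but the banded K, M structure is the same idea.

```python
import numpy as np
from scipy.linalg import eigh

# 1-D analogue: -u'' = lam * u on (0, pi) with u(0) = u(pi) = 0,
# whose exact eigenvalues are 1, 4, 9, ...
n = 200                        # interior nodes
h = np.pi / (n + 1)
# Stiffness matrix K (tridiagonal) and consistent mass matrix M.
K = (np.diag(np.full(n, 2.0 / h))
     + np.diag(np.full(n - 1, -1.0 / h), 1)
     + np.diag(np.full(n - 1, -1.0 / h), -1))
M = (np.diag(np.full(n, 4.0 * h / 6.0))
     + np.diag(np.full(n - 1, h / 6.0), 1)
     + np.diag(np.full(n - 1, h / 6.0), -1))
# Generalized eigenproblem K v = lam M v, as in the Rayleigh-Ritz method.
evals = eigh(K, M, eigvals_only=True)[:4]
```

With cubic Hermite splines the matrices would be wider-banded (two degrees of freedom per node) but the solve step is identical.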
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
Added effect of heat wave on mortality in Seoul, Korea.
Lee, Won Kyung; Lee, Hye Ah; Lim, Youn Hee; Park, Hyesook
2016-05-01
A heat wave can increase mortality owing to high temperature. However, little is known about the added (duration) effect on mortality of the prolonged period of high temperature, or about how the effect size depends on the definition of heat waves and on the models used. A distributed lag non-linear model with a quasi-Poisson distribution was used to evaluate the added effect of heat wave on mortality after adjusting for long-term and intra-seasonal trends and apparent temperature. We evaluated the cumulative relative risk of the added wave effect on mortality on lag days 0-30. The models were constructed using nine definitions of heat wave and two relationships (cubic spline and linear threshold model) between temperature and mortality to leave out the high-temperature effect. Further, we performed sensitivity analyses to evaluate the changes in the effect of heat wave on mortality according to different degrees of freedom for the time trend and the cubic spline of temperature. We found that heat waves had an added effect on mortality from the prolonged period of high temperature, and that it was considerable in terms of cumulative risk because of the lagged influence. When heat wave was defined with a threshold of 98th percentile temperature and ≥2, 3, and 4 consecutive days, mortality increased by 14.8% (95% confidence interval (CI) 7.5-22.6), 18.1% (95% CI 10.8-26.0), and 18.1% (95% CI 10.7-25.9), respectively, in the cubic spline model. With the 90th and 95th percentile definitions, the risk increase in mortality declined to 3.7-5.8% and 8.6-11.3%, respectively. This effect was robust to the flexibility of the model for temperature and time trend, while the definitions of a heat wave were critical in estimating its relationship with mortality.
This finding could help deepen our understanding and quantification of the relationship between heat waves and mortality, and help select an appropriate definition of heat wave and temperature model in future studies.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.
Sidek, Khairul Azami; Khalil, Ibrahim
2013-01-01
Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved a higher matching percentage of up to 4% for CC, 3% for PRD and 94% for WDM. These results are compared with the existing method when using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
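The enhancement step can be sketched with SciPy's `PchipInterpolator` and `CubicSpline`: decimate a signal to a low rate, then reconstruct the fine timebase with both methods. The pulse below is a synthetic Gaussian, not real ECG data, and 250 Hz / 50 Hz merely stand in for the two sampling frequencies compared in the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Synthetic "R-wave"-like pulse on a 250 Hz timebase.
fs_hi, fs_lo = 250, 50
t_hi = np.arange(0, 1, 1 / fs_hi)
signal = np.exp(-((t_hi - 0.5) ** 2) / (2 * 0.05 ** 2))

# Low-sampling-rate version: keep every 5th sample.
step = fs_hi // fs_lo
t_lo, s_lo = t_hi[::step], signal[::step]

# Reconstruct the 250 Hz timebase with SPLINE and PCHIP interpolation.
rec_spline = CubicSpline(t_lo, s_lo)(t_hi)
rec_pchip = PchipInterpolator(t_lo, s_lo)(t_hi)
err_spline = float(np.sqrt(np.mean((rec_spline - signal) ** 2)))
err_pchip = float(np.sqrt(np.mean((rec_pchip - signal) ** 2)))
```

Both interpolants reproduce the retained samples exactly and recover the intermediate samples closely, which is the mechanism behind the reported accuracy gains.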
NASA Astrophysics Data System (ADS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of seaweed samples in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. Generally, the drying rate plots need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method is to approximate the data by a CS regression having first and second derivatives; analytical differentiation of the spline regression then permits determination of the instantaneous rate. The method of minimization of the average-risk functional was used successfully to solve the problem, permitting the instantaneous rate to be obtained directly from the experimental data. The drying kinetics were fitted with six published exponential thin-layer drying models, using the coefficient of determination (R2) and root mean square error (RMSE). The results showed that the Two-Term model best describes the drying behavior. In addition, the drying rate smoothed using CS proves to be an effective method for moisture-time curves, providing good estimators as well as the missing moisture content data of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
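The spline-regression-plus-analytic-derivative idea can be sketched with SciPy's smoothing spline standing in for the authors' average-risk minimization. All numbers below are synthetic (an exponential decay with noise), chosen only to resemble a moisture-time curve.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical moisture-time data: exponential decay plus noise.
t = np.linspace(0, 96, 49)                          # hours
moisture = 93.4 * np.exp(-t / 30.0)                 # % moisture (synthetic)
noisy = moisture + np.random.default_rng(1).normal(0, 0.5, t.size)

# Cubic smoothing spline regression of the moisture-time curve;
# s is matched to the assumed noise level (sum of squared residuals).
cs = UnivariateSpline(t, noisy, k=3, s=t.size * 0.25)

# Instantaneous drying rate from the analytic derivative of the spline.
rate = -cs.derivative()(t)
```

Differentiating the fitted spline rather than the raw data is what keeps the rate curve smooth at low drying rates, where finite differences of noisy measurements would be dominated by noise.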
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of available numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales and unphysical numerical oscillations (e.g., Herrera et al, 2009; Bosso et al., 2012). In this work we will present Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM) based on Fup basis functions and collocation approach for spatial approximation and explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2) which is especially well suited for stiff parabolic problems. Spatial adaptive strategy is based on Fup basis functions which are closely related to the wavelets and splines so that they are also compactly supported basis functions; they exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is here performed via Fup Collocation Transform (FCT) so that at each time step concentration solution is decomposed using only a few significant Fup basis functions on adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocations points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. 
According to our recent achievements, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained by predefined formulas equalizing the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid in each time step is obtained from the solution of the last time step (or the initial conditions) and an advective Lagrangian step in the current time step, according to the velocity field and continuous streamlines. On the other hand, we implement the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. Also, this new Eulerian-Lagrangian-Collocation scheme resolves all the mentioned numerical problems due to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the use of a large number of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach enables not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.
Defining window-boundaries for genomic analyses using smoothing spline techniques
Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; ...
2015-04-17
High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.
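The boundary-finding step can be sketched on synthetic data: fit a cubic smoothing spline and locate sign changes of its second derivative, which mark the inflection points used as window edges. The grid, noise level, and smoothing factor below are illustrative, not the genomic settings of the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic "genomic signal" over marker positions (sinusoid plus noise,
# so the true inflection points sit at multiples of pi).
pos = np.linspace(0, 10, 400)
signal = np.sin(pos) + np.random.default_rng(2).normal(0, 0.05, pos.size)

# Cubic smoothing spline; s matched to the assumed noise variance.
fit = UnivariateSpline(pos, signal, k=3, s=pos.size * 0.05 ** 2)

# Inflection points: sign changes of the second derivative on a fine grid.
grid = np.linspace(0, 10, 4000)
d2 = fit.derivative(2)(grid)
boundaries = grid[np.nonzero(np.diff(np.sign(d2)) != 0)[0]]

# Adjacent windows delimited by the detected boundaries.
windows = list(zip(np.r_[pos[0], boundaries], np.r_[boundaries, pos[-1]]))
```

Because the boundaries come from the fitted curve itself, window widths adapt to the local structure of the signal rather than being fixed in advance.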
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1984-01-01
Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.
NASA Technical Reports Server (NTRS)
Palazzolo, Alan B.; Venkataraman, Balaji; Padavala, Sathya S.; Ryan, Steve; Vallely, Pat; Funston, Kerry
1996-01-01
This paper highlights the accomplishments of a joint effort between NASA Marshall Space Flight Center and Texas A&M University to develop accurate seal analysis software for use in rocket turbopump design, design audits and troubleshooting. Results for an arbitrary clearance profile, transient simulation, thermal effects solution and a flexible seal wall model are presented. A new solution for eccentric seals based on cubic spline interpolation and ordinary differential equation integration is also presented.
Subgrouping Chronic Fatigue Syndrome Patients By Genetic and Immune Profiling
2015-12-01
participant inclusion was also verified against our master demographic file. This process revealed that only a small percentage of participants (...) ... is a cubic B-spline basis on three knots, ... is the value of the outcome for the batch control, and ... is the residual ... tests. Specifically, p-value adjustments will employ an adaptive two-stage linear step-up procedure to control the FDR at 5% (Benjamini et al. 2006)
Improved algorithms for the retrieval of the h2 Love number of Mercury from laser altimetry data
NASA Astrophysics Data System (ADS)
Thor, Robin; Kallenbach, Reinald; Christensen, Ulrich; Oberst, Jürgen; Stark, Alexander; Steinbrügge, Gregor
2017-04-01
We simulate measurements to be performed by the BepiColombo laser altimeter (BELA) aboard the Mercury Planetary Orbiter (MPO) of the BepiColombo mission and investigate whether coverage and accuracy will be sufficient to retrieve the h2 Love number of Mercury. The h2 Love number describes the tidal response of Mercury's surface and is a function of the materials in its interior and their properties and distribution. Therefore, it can serve as an important constraint for models of the internal structure. The tide-generating potential from the Sun causes periodic radial displacements of up to ~2 m on Mercury which can be detected by laser altimetry. In this study, we simultaneously extract the static global shape, parametrized by local basis functions, and its variability in time. The usage of cubic splines as local basis functions in both longitudinal and latitudinal direction provides an improvement over the methodology of Koch et al. (2010, Planetary and Space Science, 58(14), 2022-2030) who used cubic splines in longitudinal direction, but only step functions in latitudinal direction. We achieve a relative 1σ accuracy of the h2 Love number of 1.7% assuming nominal data acquisition for BELA during a one-year mission, but considering only stochastic noise.
Lee, Woohyung; Han, Ho-Seong; Ahn, Soyeon; Yoon, Yoo-Seok; Cho, Jai Young; Choi, YoungRok
2018-01-17
The relationship between resection margin (RM) and recurrence of resected hepatocellular carcinoma (HCC) is unclear. We reviewed clinical data for 419 patients with HCC. The oncologic outcomes were compared between 2 groups of patients classified according to the inflexion point of the restricted cubic spline plot. The patients were divided according to an RM of <1 cm (n = 233; narrow RM group) or ≥1 cm (n = 186; wide RM group). The 5-year recurrence-free survival (RFS) rate was lower (34.8 vs. 43.8%, p = 0.042) and recurrence near the resection site was more frequent (4.7 vs. 0%, p = 0.010) in the narrow RM group. Patients with multiple lesions, or prior transarterial chemoembolization (TACE) or radiofrequency ablation (RFA) were excluded from subgroup analyses. In patients with a 2-5 cm HCC, the 5-year RFS was greater in the wide RM group (54.4 vs. 32.5%, p = 0.036). Narrow RM (hazard ratio 1.750, 95% CI 1.029-2.976, p = 0.039) was independently associated with disease recurrence. In patients with a single 2-5 cm HCC without prior TACE/RFA, an RM of ≥1 cm was associated with lower risk of recurrence after liver resection. © 2018 S. Karger AG, Basel.
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
A cubic spline based Galerkin-like method is developed for the identification of a class of hybrid systems which describe the transverse vibration of flexible beams with attached tip bodies. The identification problem is formulated as a least squares fit to data subject to the system dynamics given by a coupled system of ordinary and partial differential equations recast as an abstract evolution equation (AEE) in an appropriate infinite dimensional Hilbert space. Projecting the AEE into spline-based subspaces leads naturally to a sequence of approximating finite dimensional identification problems. The solutions to these problems are shown to exist, are relatively easily computed, and are shown to, in some sense, converge to solutions to the original identification problem. Numerical results for a variety of examples are discussed.
NASA Technical Reports Server (NTRS)
Ruo, S. Y.
1978-01-01
A computer program was developed to account approximately for the effects of finite wing thickness in transonic potential flow over an oscillating wing of finite span. The program is based on the original sonic box computer program for planar wings, which was extended to account for the effect of wing thickness. Computational efficiency and accuracy were improved and swept trailing edges were accounted for. The nonuniform flow caused by finite thickness was accounted for by applying the local linearization concept with an appropriate coordinate transformation. A brief description of each computer routine and of the applications of the cubic spline and spline surface data fitting techniques used in the program is given, and the method of input is shown in detail. Sample calculations as well as a complete listing of the computer program are presented.
Intensity Conserving Spectral Fitting
NASA Technical Reports Server (NTRS)
Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.
2015-01-01
The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
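The intensity-conserving idea can be sketched as a simple fixed-point iteration: fit a spline through the current knot values, compute its average over each wavelength bin, and nudge the knot values until the bin averages match the observations. This is an independent sketch of the principle on a synthetic Gaussian profile, not the published ICSI routine.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.special import erf

def intensity_conserving_spline(centers, obs, width, iters=40):
    """Iteratively adjust knot values until the cubic spline's mean over
    each wavelength bin reproduces the observed bin-averaged intensity.
    """
    y = np.asarray(obs, float).copy()
    for _ in range(iters):
        cs = CubicSpline(centers, y)
        F = cs.antiderivative()
        bin_mean = (F(centers + width / 2) - F(centers - width / 2)) / width
        y += obs - bin_mean          # push bin averages toward observations
    return CubicSpline(centers, y)

# Synthetic Gaussian "line profile"; exact bin averages via the error
# function, mimicking what a finite-resolution spectrometer reports.
centers = np.linspace(-2.0, 2.0, 9)
width = centers[1] - centers[0]
obs = (np.sqrt(np.pi) / 2.0) * (erf(centers + width / 2)
                                - erf(centers - width / 2)) / width
cs = intensity_conserving_spline(centers, obs, width)
```

Near the curved line peak the corrected spline rises above the observed bin average, recovering exactly the effect the abstract describes: the true intensity at a bin center differs from the bin-averaged value whenever the profile is curved.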
Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M
2018-04-01
The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials and B-spline functions, and multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions, considering quadratic and cubic polynomials for each individual segment, and Legendre polynomials for age. Residual variances were grouped in four age classes. Direct additive genetic and permanent environmental effects were modeled using 2 to 4 segments for the B-splines and by Legendre polynomials with orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials presented an underestimation of the residual variance. Lower heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. The genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW at the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in the genetic evaluation of breeding programs.
2004-08-01
Normalized Mutual Information (NMI) voxel match algorithm of the ANALYZE software package and cubic spline interpolation (Brownell et al. 2003, Appendix). ... nuclear inclusion and cell survival. Materials and Methods. Animals: Male transgenic R6/2 mice, which depict many clinical features of juvenile HD, were purchased from the Jackson Laboratories (Bar Harbor, ME). The mice were housed 3-4 per cage under standard conditions with free access to food and water
The Control Based on Internal Average Kinetic Energy in Complex Environment for Multi-robot System
NASA Astrophysics Data System (ADS)
Yang, Mao; Tian, Yantao; Yin, Xianghua
In this paper, a reference trajectory is designed according to the minimum energy consumed by the multi-robot system, for which nonlinear programming and cubic spline interpolation are adopted. The control strategy is composed of two levels: the lower level is a simple PD control, and the upper level is based on the internal average kinetic energy of the multi-robot system in a complex environment with velocity damping. Simulation tests verify the effectiveness of this control strategy.
Research on interpolation methods in medical image processing.
Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian
2012-04-01
Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are presented, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments of image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, in terms of the total error of image self-registration, the symmetrical interpolations provide certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.
Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.
Petrinović, Davor; Brezović, Marko
2011-04-01
We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
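The table-based spline PSAC can be illustrated in floating point (the paper's fixed-point design and closed-form spectral-domain coefficients are not reproduced here): precompute a periodic cubic spline over a coarse phase grid, whose per-segment polynomial coefficients would be the table contents, and measure the worst-case amplitude error of the synthesized sinusoid.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# 64 phase segments over one full cycle of sin(2*pi*phase); the spline's
# polynomial coefficients play the role of the PSAC lookup table.
segments = 64
knots = np.linspace(0.0, 1.0, segments + 1)       # normalized phase
y = np.sin(2 * np.pi * knots)
y[-1] = y[0]                                      # enforce exact periodicity
psac = CubicSpline(knots, y, bc_type="periodic")

# Worst-case amplitude error over a dense phase sweep.
phase = np.linspace(0.0, 1.0, 20001)
err = float(np.max(np.abs(psac(phase) - np.sin(2 * np.pi * phase))))
error_floor_db = -20.0 * np.log10(err)            # crude error-floor metric
```

Even this small 64-segment table keeps the maximum amplitude error around the microvolt-per-volt level, which hints at why the paper's much larger fixed-point tables reach SFDR figures near 200 dB.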
Non-parametric estimation of population size changes from the site frequency spectrum.
Waltoft, Berit Lindum; Hobolth, Asger
2018-06-11
Changes in population size are informative about the evolutionary history of a species. Genetic variation within a species can be summarized by the site frequency spectrum (SFS). For a sample of size n, the SFS is a vector of length n - 1 where entry i is the number of sites where the mutant base appears i times and the ancestral base appears n - i times. We present a new method, CubSFS, for estimating the changes in population size of a panmictic population from an observed SFS. First, we provide a straightforward proof for the expression of the expected site frequency spectrum depending only on the population size. Our derivation is based on an eigenvalue decomposition of the instantaneous coalescent rate matrix. Second, we solve the inverse problem of determining the changes in population size from an observed SFS. Our solution is based on a cubic spline for the population size. The cubic spline is determined by minimizing the weighted average of two terms, namely (i) the goodness of fit to the observed SFS, and (ii) a penalty term based on the smoothness of the changes. The weight is determined by cross-validation. The new method is validated on simulated demographic histories and applied to unfolded and folded SFS from 26 different human populations from the 1000 Genomes Project.
Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol
2017-10-24
Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals in accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and compared it with the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement, and root mean squared relative error are presented. The elapsed time taken to compute each interpolation algorithm was also investigated. The results indicated that parabola approximation is a simple, fast, and accurate method for compensating the low timing resolution of pulse beat intervals, with performance comparable to the conventional cubic spline interpolation method. Even though the absolute values of the heart rate variability variables calculated using a signal sampled at 20 Hz did not exactly match those calculated using a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
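The parabola-approximation idea can be sketched as follows (a hedged illustration; the helper name and test signal are made up, not taken from the paper): fit a parabola through the sample at the discrete maximum and its two neighbours and read off the sub-sample peak position.

```python
# Illustrative sketch of parabolic peak refinement (not the paper's code).
import numpy as np

def parabolic_peak(y, i):
    """Refine the peak location around discrete maximum index i."""
    denom = y[i - 1] - 2.0 * y[i] + y[i + 1]
    return i + 0.5 * (y[i - 1] - y[i + 1]) / denom

# A peak truly located at t = 2.3 samples, observed on an integer grid:
t = np.arange(5)
y = -(t - 2.3) ** 2           # exactly parabolic, so recovery is exact
i = int(np.argmax(y))         # discrete maximum falls at i = 2
print(parabolic_peak(y, i))   # ~2.3
```

For a real pulse waveform the parabola is only an approximation near the peak, but three samples and a handful of arithmetic operations suffice, which is what makes the method attractive at low sampling rates.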
NASA Astrophysics Data System (ADS)
Fradinata, Edy; Marli Kesuma, Zurnila
2018-05-01
Polynomial and spline regression are numerical models used here to compare method performance, to model distance relationships between cement retailers in Banda Aceh, to predict the market area for retailers, and to compute the economic order quantity (EOQ). These models differ in accuracy as measured by the mean squared error (MSE). The distance relationships between retailers identify the density of retailers in the town. The dataset was collected from cement retailers' sales records together with global positioning system (GPS) locations. The sales data are plotted to assess the goodness of fit of quadratic, cubic, and fourth-order polynomial models, which relate the x-abscissa and y-ordinate of the real sales dataset. This research offers several contributions: the four fitted models are useful for predicting a retailer's market area under competition; the performance of the methods is compared; the distance relationships between retailers are quantified; and an inventory policy based on the economic order quantity is derived. The results show that high-density retailer areas coincide with a growing population and construction projects. The spline is better than the quadratic, cubic, and fourth-order polynomials in predicting the data points, as indicated by its smaller MSE. The recommended inventory policy is of the periodic review type.
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The tracer density in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using 11C-raclopride.
Element free Galerkin formulation of composite beam with longitudinal slip
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Dzulkarnain; Mokhtaram, Mokhtazul Haizad; Badli, Mohd Iqbal
2015-05-15
The behaviour between the two materials in a composite beam is assumed to be partial interaction when longitudinal slip at the interfacial surfaces is considered. While such beams are commonly analysed with mesh-based formulations, this study applies a meshless formulation, the Element Free Galerkin (EFG) method, to the numerical partial-interaction analysis of beams. Since a meshless formulation discretises the problem domain by nodes only, the EFG method builds its shape functions with the Moving Least Squares (MLS) approach, and its weak form is developed using a variational method. The essential boundary conditions are enforced by Lagrange multipliers. The proposed EFG formulation gives comparable results when verified against the analytical solution, confirming its applicability to partial interaction problems. Based on the numerical test results, the cubic spline and quartic spline weight functions yield better accuracy for the EFG formulation than the other weight functions considered.
Synthesis of freeform refractive surfaces forming various radiation patterns using interpolation
NASA Astrophysics Data System (ADS)
Voznesenskaya, Anna; Mazur, Iana; Krizskiy, Pavel
2017-09-01
Optical freeform surfaces are very popular today in such fields as lighting systems, sensors, photovoltaic concentrators, and others. Applying such surfaces makes it possible to obtain systems of a new quality with a reduced number of optical components while ensuring strong consumer characteristics: small size, low weight, and high optical transmittance. This article presents methods for synthesizing a refractive surface for a given source and radiation patterns of various shapes, using computer simulation with cubic spline interpolation.
Teichert, Gregory H.; Gunda, N. S. Harsha; Rudraraju, Shiva; ...
2016-12-18
Free energies play a central role in many descriptions of equilibrium and non-equilibrium properties of solids. Continuum partial differential equations (PDEs) of atomic transport, phase transformations and mechanics often rely on first and second derivatives of a free energy function. The stability, accuracy and robustness of numerical methods to solve these PDEs are sensitive to the particular functional representations of the free energy. In this communication we investigate the influence of different representations of thermodynamic data on phase field computations of diffusion and two-phase reactions in the solid state. First-principles statistical mechanics methods were used to generate realistic free energy data for HCP titanium with interstitially dissolved oxygen. While Redlich-Kister polynomials have formed the mainstay of thermodynamic descriptions of multi-component solids, they require high order terms to fit oscillations in chemical potentials around phase transitions. Here, we demonstrate that high fidelity fits to rapidly fluctuating free energy functions are obtained with spline functions. As a result, spline functions that are many degrees lower than Redlich-Kister polynomials provide equal or superior fits to chemical potential data and, when used in phase field computations, result in solution times approaching an order of magnitude speed up relative to the use of Redlich-Kister polynomials.
Sirot, V; Dumas, C; Desquilbet, L; Mariotti, F; Legrand, P; Catheline, D; Leblanc, J-C; Margaritis, I
2012-04-01
Fish, especially fatty fish, are the main contributor to eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) intake. EPA and DHA concentrations in red blood cells (RBC) have been proposed as a cardiovascular risk factor, with <4% and >8% associated with the lowest and greatest protection, respectively. The relationship between high fat fish (HFF) intake and RBC EPA + DHA content has been little investigated over a wide range of fish intakes, and may be non-linear. We aimed to study the shape of this relationship among high seafood consumers. Seafood consumption records and blood were collected from 384 French heavy seafood consumers, and EPA and DHA were measured in RBC. A multivariate linear regression was performed using restricted cubic splines to allow for potential non-linear associations. Thirty-six percent of subjects had an RBC EPA + DHA content lower than 4% and only 5% exceeded 8%. HFF consumption was significantly associated with RBC EPA + DHA content (P [overall association] = 0.021) adjusted for sex, tobacco status, study area, socioeconomic status, age, alcohol, other seafood, meat, and meat product intakes. This relationship was non-linear: for intakes higher than 200 g/wk, EPA + DHA content tended to stagnate. Tobacco status and fish contaminants were negatively associated with RBC EPA + DHA content. Because of this saturation at high intakes, and given the concern about exposure to trace element contaminants, an intake not exceeding 200 g/wk should be considered. Copyright © 2010 Elsevier B.V. All rights reserved.
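Restricted cubic splines constrain the fitted curve to be linear beyond the boundary knots, which stabilizes the tails where data are sparse. A minimal sketch of the common Harrell-style basis construction (assumed here for illustration; the study's exact basis may differ):

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: cubic between knots, linear tails."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    cube = lambda u: np.maximum(u, 0.0) ** 3          # truncated cubic
    cols = [x]                                        # linear term
    for j in range(k - 2):
        cols.append(
            cube(x - t[j])
            - cube(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
            + cube(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2])
        )
    return np.column_stack(cols)

# Beyond the last knot every basis column is linear in x, so second
# differences on an equally spaced grid vanish there:
B = rcs_basis(np.array([10.0, 11.0, 12.0]), [0.0, 1.0, 2.0, 3.0])
print(np.abs(B[2] - 2 * B[1] + B[0]).max())
```

The basis columns are then entered as ordinary regressors in a linear model, which is what lets a non-linear dose-response curve be fit with standard regression machinery.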
Humanoid robot Lola: design and walking control.
Buschmann, Thomas; Lohmeier, Sebastian; Ulbrich, Heinz
2009-01-01
In this paper we present the humanoid robot LOLA, its mechatronic hardware design, simulation and real-time walking control. The goal of the LOLA-project is to build a machine capable of stable, autonomous, fast and human-like walking. LOLA is characterized by a redundant kinematic configuration with 7-DoF legs, an extremely lightweight design, joint actuators with brushless motors and an electronics architecture using decentralized joint control. Special emphasis was put on an improved mass distribution of the legs to achieve good dynamic performance. Trajectory generation and control aim at faster, more flexible and robust walking. Center of mass trajectories are calculated in real-time from footstep locations using quadratic programming and spline collocation methods. Stabilizing control uses hybrid position/force control in task space with an inner joint position control loop. Inertial stabilization is achieved by modifying the contact force trajectories.
Plant Growth Biophysics: the Basis for Growth Asymmetry Induced by Gravity
NASA Technical Reports Server (NTRS)
Cosgrove, D.
1985-01-01
The identification and quantification of the physical properties altered by gravity when plant stems grow upward was studied. Growth of the stem in vertical and horizontal positions was recorded by time lapse photography. A computer program that uses a cubic spline fitting algorithm was used to calculate the growth rate and curvature of the stem as a function of time. Plant stems were tested to ascertain whether cell osmotic pressure was altered by gravity. A technique for measuring the yielding properties of the cell wall was developed.
Analysis of the cylinder’s movement characteristics after entering water based on CFD
NASA Astrophysics Data System (ADS)
Liu, Xianlong
2017-10-01
The cylinder undergoes variable-speed motion after vertical water entry. Dynamic-mesh approaches mostly rely on unstructured grids; their calculation results are not ideal and they consume huge computing resources. Instead, a CFD method is used to calculate the resistance of the cylinder at several fixed velocities, and cubic spline interpolation is used to obtain the resistance at intermediate speeds. The finite difference method is then used to solve the equation of motion, yielding the acceleration, velocity, displacement, and other physical quantities after the cylinder enters the water.
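The interpolate-then-integrate procedure can be sketched as follows (all numbers are hypothetical and the force model is reduced to gravity minus drag, ignoring buoyancy; this is not the study's solver):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# CFD-sampled resistance at a few fixed speeds (hypothetical values):
speeds = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # m/s
drag   = np.array([0.0, 0.8, 3.1, 7.0, 12.5])   # N
F = CubicSpline(speeds, drag)                   # resistance at any speed

m, g, dt = 1.0, 9.81, 1e-3                      # kg, m/s^2, s
v = x = 0.0
for _ in range(1000):                           # 1 s of motion, explicit stepping
    a = g - F(v) / m                            # gravity minus interpolated drag
    v += a * dt
    x += v * dt
print(round(float(v), 3), round(float(x), 3))
```

With these numbers the body approaches a terminal velocity where the interpolated drag balances gravity, which is the qualitative behaviour the abstract describes.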
Legendre-tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1986-01-01
The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.
Reference values of anthropometric measurements in Dutch children. The Oosterwolde Study.
Gerver, W J; Drayer, N M; Schaafsma, W
1989-03-01
In the period 1979-1980 the following anthropometric measurements were recorded in 2351 healthy Dutch children from 0-17 years of age: height, weight, sitting height, arm span, lengths of upper-arm, lower-arm and hand, tibial length, foot length, biacromial diameter, biiliacal diameter, and head circumference. Corresponding percentile values were constructed on the basis of normality assumptions, the mean and standard deviation at age t being determined by a cubic spline approximation. The results are compared with other studies and given in the form of growth charts.
Shao, Xueguang; Yu, Zhengliang; Ma, Chaoxiong
2004-06-01
An improved method is proposed for the quantitative determination of multicomponent overlapping chromatograms based on a known transmutation method. To overcome the main limitation of the transmutation method caused by the oscillation generated in the transmutation process, two techniques--wavelet transform smoothing and the cubic spline interpolation for reducing data points--were adopted, and a new criterion was also developed. By using the proposed algorithm, the oscillation can be suppressed effectively, and quantitative determination of the components in both the simulated and experimental overlapping chromatograms is successfully obtained.
NASA Astrophysics Data System (ADS)
Dutykh, Denys; Hoefer, Mark; Mitsotakis, Dimitrios
2018-04-01
Some effects of surface tension on fully nonlinear, long, surface water waves are studied by numerical means. The differences between various solitary waves and their interactions in subcritical and supercritical surface tension regimes are presented. Analytical expressions for new peaked traveling wave solutions are presented in the dispersionless case of critical surface tension. Numerical experiments are performed using a highly accurate finite element method based on smooth cubic splines and the four-stage, classical, explicit Runge-Kutta method of order 4.
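The four-stage classical Runge-Kutta scheme mentioned above can be written down generically; as a hedged illustration it is applied here to the scalar test problem y' = y rather than to the wave equations:

```python
# Classical RK4 time stepper for a generic ODE y' = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y, t, h = 1.0, 0.0, 0.01
for _ in range(100):                      # integrate y' = y to t = 1
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y)                                  # ~e = 2.71828..., with O(h^4) error
```

In the paper's setting f would be the semi-discrete right-hand side produced by the cubic spline finite element discretization, with y the vector of nodal unknowns.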
Legendre-Tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1983-01-01
The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximations is made.
Vertical discretization with finite elements for a global hydrostatic model on the cubed sphere
NASA Astrophysics Data System (ADS)
Yi, Tae-Hyeong; Park, Ja-Rin
2017-06-01
A formulation of Galerkin finite element with basis-spline functions on a hybrid sigma-pressure coordinate is presented to discretize the vertical terms of global Eulerian hydrostatic equations employed in a numerical weather prediction system, which is horizontally discretized with high-order spectral elements on a cubed sphere grid. This replaces the vertical discretization of conventional central finite difference, which is first-order accurate in non-uniform grids and causes numerical instability in advection-dominant flows. Therefore, the model remains in the framework of Galerkin finite elements for both the horizontal and vertical spatial terms. The basis-spline functions, obtained from the de Boor algorithm, are employed to derive both the vertical derivative and integral operators, since Eulerian advection terms are involved. These operators are used to discretize the vertical terms of the prognostic and diagnostic equations. To verify the vertical discretization schemes and compare their performance, various two- and three-dimensional idealized cases and a hindcast case with full physics are performed in terms of accuracy and stability. It was shown that the vertical finite element with the cubic basis-spline function is more accurate and stable than that of the vertical finite difference, as indicated by faster residual convergence, fewer statistical errors, and reduction in computational mode. This leads to the general conclusion that the overall performance of a global hydrostatic model might be significantly improved with the vertical finite element.
Waveform fitting and geometry analysis for full-waveform lidar feature extraction
NASA Astrophysics Data System (ADS)
Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu
2016-10-01
This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
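The fit-then-differentiate peak detection step can be sketched as follows (an illustrative synthetic waveform, not the authors' code; here the derivative's sign changes are located on a fine grid rather than analytically):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic two-return waveform: two pulses centred at t = 3 and t = 7.
t = np.linspace(0.0, 10.0, 200)
wave = np.exp(-(t - 3.0) ** 2) + 0.6 * np.exp(-((t - 7.0) ** 2) / 0.5)
spl = UnivariateSpline(t, wave, s=0)      # s=0: interpolating cubic spline

fine = np.linspace(0.0, 10.0, 2000)
d1 = spl.derivative(1)(fine)
down = (d1[:-1] > 0) & (d1[1:] <= 0)      # +/- crossing marks a local maximum
peaks = fine[:-1][down]
print(np.round(peaks, 1))                 # close to t = 3 and t = 7
```

For noisy field data the smoothing factor `s` would be set above zero so the spline suppresses noise before the derivative test, and a second-derivative check can confirm each candidate is a maximum.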
Liu, Xuejiao; Zhang, Dongdong; Liu, Yu; Sun, Xizhuo; Han, Chengyi; Wang, Bingyuan; Ren, Yongcheng; Zhou, Junmei; Zhao, Yang; Shi, Yuanyuan; Hu, Dongsheng; Zhang, Ming
2017-05-01
Despite the inverse association between physical activity (PA) and incident hypertension, a comprehensive assessment of the quantitative dose-response association between PA and hypertension has not been reported. We performed a meta-analysis, including dose-response analysis, to quantitatively evaluate this association. We searched PubMed and Embase databases for articles published up to November 1, 2016. Random effects generalized least squares regression models were used to assess the quantitative association between PA and hypertension risk across studies. Restricted cubic splines were used to model the dose-response association. We identified 22 articles (29 studies) investigating the risk of hypertension with leisure-time PA or total PA, including 330 222 individuals and 67 698 incident cases of hypertension. The risk of hypertension was reduced by 6% (relative risk, 0.94; 95% confidence interval, 0.92-0.96) with each 10 metabolic equivalent of task h/wk increment of leisure-time PA. We found no evidence of a nonlinear dose-response association of PA and hypertension (P for nonlinearity = 0.094 for leisure-time PA and 0.771 for total PA). With the linear cubic spline model, when compared with inactive individuals, for those who met the guidelines recommended minimum level of moderate PA (10 metabolic equivalent of task h/wk), the risk of hypertension was reduced by 6% (relative risk, 0.94; 95% confidence interval, 0.92-0.97). This meta-analysis suggests that additional benefits for hypertension prevention occur as the amount of PA increases. © 2017 American Heart Association, Inc.
[Elastic registration method to compute deformation functions for mitral valve].
Yang, Jinyu; Zhang, Wan; Yin, Ran; Deng, Yuxiao; Wei, Yunfeng; Zeng, Junyi; Wen, Tong; Ding, Lu; Liu, Xiaojian; Li, Yipeng
2014-10-01
Mitral valve disease is one of the most common heart valve diseases. Precise positioning and display of the valve characteristics are necessary for minimally invasive mitral valve repair procedures. This paper presents a multi-resolution elastic registration method to compute deformation functions constructed from cubic B-splines in three-dimensional ultrasound images, in which the objective functional to be optimized was generated by the maximum likelihood method based on the probabilistic distribution of the ultrasound speckle noise. The algorithm was then applied to register the mitral valve voxels. Numerical results proved the effectiveness of the algorithm.
Fourth order scheme for wavelet based solution of Black-Scholes equation
NASA Astrophysics Data System (ADS)
Finěk, Václav
2017-12-01
The present paper is devoted to the numerical solution of the Black-Scholes equation for pricing European options. We apply the Crank-Nicolson scheme with Richardson extrapolation for time discretization and Hermite cubic spline wavelets with four vanishing moments for space discretization. This scheme is fourth-order accurate in both time and space. Computational results indicate that the Crank-Nicolson scheme with Richardson extrapolation significantly decreases the amount of computational work. We also numerically show that the optimal convergence rate for the scheme is obtained without a startup procedure, despite the data irregularities in the model.
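The effect of Richardson extrapolation on a second-order time scheme can be demonstrated on a scalar test equation (a hedged sketch only; the actual solver couples this with the wavelet space discretization). Because the trapezoidal/Crank-Nicolson rule has an error expansion in even powers of the step, combining runs with steps h and h/2 as (4*u_{h/2} - u_h)/3 raises the order from two to four:

```python
import math

def cn_solve(lam, T, n):
    """Trapezoidal (Crank-Nicolson) rule for y' = lam*y, y(0) = 1, n steps."""
    h = T / n
    growth = (1 + h * lam / 2) / (1 - h * lam / 2)
    y = 1.0
    for _ in range(n):
        y *= growth
    return y

lam, T = -1.0, 1.0
u_h    = cn_solve(lam, T, 16)                 # step h
u_half = cn_solve(lam, T, 32)                 # step h/2
u_rich = (4 * u_half - u_h) / 3               # Richardson extrapolation
exact = math.exp(lam * T)
print(abs(u_h - exact), abs(u_rich - exact))  # extrapolated error is far smaller
```

The extrapolation costs one extra (cheaper or equal) solve per step size but buys two extra orders of accuracy, which matches the abstract's claim of reduced computational work for a given accuracy.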
Trajectory Generation by Piecewise Spline Interpolation
1976-04-01
[OCR-damaged excerpt from the report body; reconstructed where legible.] The piecewise cubic on each interval is f(x) = a0 + a1*x + a2*x^2 + a3*x^3 (Eq. 21), with coefficients obtained from Equation (20) as a0 = f_i (Eq. 22), a1 = f_i' (Eq. 23), and the remaining coefficients built from combinations of the form 3(f_{i+1} - f_i) - 2 f_i' - f_{i+1}' (Eq. 24). A later passage (Eq. 64) gives the sign convention for the rotations from the reference frame to the vehicle-fixed frame. Nomenclature fragments: velocity-frame axis directions (velocity frame from the output frame); a0, a1, a2, a3: coefficients of the piecewise cubic polynomials; [B]: tridiagonal matrix.
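The garbled excerpt above describes a piecewise cubic f(x) = a0 + a1*x + a2*x^2 + a3*x^3 built from endpoint values and slopes. A minimal sketch of that standard cubic Hermite construction on a unit interval (variable names are generic, not the report's):

```python
# Cubic Hermite segment on [0, 1] from endpoint values f0, f1 and
# endpoint slopes d0, d1 (the standard formulas, offered as a sketch).
def hermite_coeffs(f0, f1, d0, d1):
    a0 = f0
    a1 = d0
    a2 = 3.0 * (f1 - f0) - 2.0 * d0 - d1
    a3 = 2.0 * (f0 - f1) + d0 + d1
    return a0, a1, a2, a3

def eval_cubic(coeffs, x):
    a0, a1, a2, a3 = coeffs
    return a0 + x * (a1 + x * (a2 + x * a3))   # Horner evaluation

c = hermite_coeffs(1.0, 4.0, 0.5, -2.0)
print(eval_cubic(c, 0.0), eval_cubic(c, 1.0))  # endpoints: 1.0 4.0
```

In a spline trajectory the interior slopes are not free: continuity of the second derivative yields the tridiagonal system (the report's [B] matrix) that is solved for them before the segment coefficients are formed.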
Cousens, Simon; Blencowe, Hannah; Stanton, Cynthia; Chou, Doris; Ahmed, Saifuddin; Steinhardt, Laura; Creanga, Andreea A; Tunçalp, Ozge; Balsara, Zohra Patel; Gupta, Shivam; Say, Lale; Lawn, Joy E
2011-04-16
Stillbirths do not count in routine worldwide data-collating systems or for the Millennium Development Goals. Two sets of national stillbirth estimates for 2000 produced similar worldwide totals of 3·2 million and 3·3 million, but rates differed substantially for some countries. We aimed to develop more reliable estimates and a time series from 1995 for 193 countries, by increasing input data, using recent data, and applying improved modelling approaches. For international comparison, stillbirth is defined as fetal death in the third trimester (≥1000 g birthweight or ≥28 completed weeks of gestation). Several sources of stillbirth data were identified and assessed against prespecified inclusion criteria: vital registration data; nationally representative surveys; and published studies identified through systematic literature searches, unpublished studies, and national data identified through a WHO country consultation process. For 2009, reported rates were used for 33 countries and model-based estimates for 160 countries. A regression model of log stillbirth rate was developed and used to predict national stillbirth rates from 1995 to 2009. Uncertainty ranges were obtained with a bootstrap approach. The final model included log(neonatal mortality rate) (cubic spline), log(low birthweight rate) (cubic spline), log(gross national income purchasing power parity) (cubic spline), region, type of data source, and definition of stillbirth. Vital registration data from 79 countries, 69 nationally representative surveys from 39 countries, and 113 studies from 42 countries met inclusion criteria. The estimated number of global stillbirths was 2·64 million (uncertainty range 2·14 million to 3·82 million) in 2009 compared with 3·03 million (uncertainty range 2·37 million to 4·19 million) in 1995. Worldwide stillbirth rate has declined by 14·5%, from 22·1 stillbirths per 1000 births in 1995 to 18·9 stillbirths per 1000 births in 2009. 
In 2009, 76·2% of stillbirths occurred in south Asia and sub-Saharan Africa. This study draws attention to the dearth of reliable data in regions where most stillbirths occur. The estimated trend in stillbirth rate reduction is slower than that for maternal mortality and lags behind the increasing progress in reducing deaths in children younger than 5 years. Improved data and improved use of data are crucial to ensure that stillbirths count in global and national policy. The Bill & Melinda Gates Foundation through the Global Alliance to Prevent Prematurity and Stillbirth, Saving Newborn Lives/Save the Children, and the International Stillbirth Alliance. The Department of Reproductive Health and Research, WHO, through the UN Development Programme, UN Population Fund, WHO, and World Bank Special Programme of Research, Development and Research Training in Human Reproduction. Copyright © 2011 Elsevier Ltd. All rights reserved.
Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W
2011-01-01
We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571
Remontet, L; Bossard, N; Belot, A; Estève, J
2007-05-10
Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring the knowledge of the cause of death. We propose an overall strategy based on regression models to estimate the relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled until 10 years follow-up using parametric continuous functions. Six models including cubic regression splines were considered and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of mortality hazard and allowed us to deal with sparse data taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could be also obtained using standard algorithms for Poisson regression. Copyright 2006 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Chanthawara, Krittidej; Kaennakham, Sayan; Toutip, Wattana
2016-02-01
The methodology of the Dual Reciprocity Boundary Element Method (DRBEM) is applied to convection-diffusion problems, and investigating its performance is the first objective of this work. Seven types of radial basis function (RBF): linear, thin-plate spline, cubic, compactly supported, inverse multiquadric, quadratic, and the one proposed by [12], were closely investigated in order to compare numerically their effectiveness, drawbacks, etc., and this is taken as the second objective. A sufficient number of simulations was performed, covering as many aspects as possible. Validated against both exact solutions and other numerical works, the final results imply strongly that the thin-plate spline and linear types of RBF are superior to the others in terms of both solution quality and CPU time spent, while the inverse multiquadric yields comparatively poor results. It is also found that DRBEM can perform relatively well at moderate levels of convective force and, as anticipated, becomes unstable when the problem becomes more convection-dominated, as is normally found in all classical mesh-dependent methods.
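A few of the radial basis functions compared in the study can be written down directly as functions of the radial distance r = |x - x_j| (the shape parameter c of the inverse multiquadric is illustrative, not a value from the paper):

```python
import numpy as np

def rbf_linear(r):
    return r

def rbf_cubic(r):
    return r ** 3

def rbf_thin_plate(r):
    """Thin-plate spline r^2 * log(r), with the r = 0 limit handled."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    mask = r > 0
    out[mask] = r[mask] ** 2 * np.log(r[mask])
    return out

def rbf_inverse_multiquadric(r, c=1.0):
    return 1.0 / np.sqrt(r ** 2 + c ** 2)

r = np.array([0.0, 1.0, 2.0])
print(rbf_thin_plate(r))   # values: 0, 0, 4*log(2)
```

In DRBEM these functions interpolate the non-homogeneous term over internal and boundary nodes, so the choice of RBF directly shapes the accuracy results reported above.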
Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Fisher, Brad L.; Wolff, David B.
2007-01-01
This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from tipping-bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors in the rain rate estimation. The comparison between rain rates measured by the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers from sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7 minute or longer time scales, the errors reduce dramatically. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that events with rain totals of less than 1 millimeter are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
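The core idea of spline-based rain-rate estimation from tip times can be sketched as follows (simplified, with made-up tip times; the operational 2A-56 system involves additional quality control). Each bucket tip marks another fixed increment of accumulated rain, so a cubic spline is fit to cumulative depth versus tip time and the rain rate is read off as its derivative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical tip times; each tip adds 0.254 mm (0.01 in) of depth.
tip_times = np.array([0.0, 120.0, 200.0, 260.0, 320.0, 420.0])  # seconds
depth = 0.254 * np.arange(len(tip_times))                       # mm, cumulative
spline = CubicSpline(tip_times, depth)

t = np.array([60.0, 180.0, 300.0])          # one-minute evaluation times
rate_mm_per_h = spline(t, 1) * 3600.0       # first derivative (mm/s) -> mm/h
print(np.round(rate_mm_per_h, 2))
```

Because low rain rates mean long gaps between tips, the derivative is poorly constrained there, which is consistent with the larger one-minute errors the paper reports at low rates.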
Ding, Zan; Li, Liujiu; Wei, Ruqin; Dong, Wenya; Guo, Pi; Yang, Shaoyi; Liu, Ju; Zhang, Qingying
2016-10-01
Consistent evidence has shown excess mortality associated with cold temperature, but some important details of the cold-mortality association (e.g. slope and threshold) have not been adequately investigated and few studies focused on the cold effect in high-altitude areas of developing countries. We attempted to quantify the cold effect on mortality, identify the details, and evaluate effect modification in the distinct subtropical plateau monsoon climate of Yuxi, a high plateau region in southwest China. From daily mortality and meteorological data during 2009-2014, we used a quasi-Poisson model combined with a "natural cubic spline-natural cubic spline" distributed lag non-linear model to estimate the temperature-mortality relationship and then a simpler "hockey-stick" model to investigate the cold effect and details. Cold temperature was associated with increased mortality, and the relative risk of cold effect (1st relative to 10th temperature percentile) on non-accidental, cardiovascular, and respiratory mortality for lag 0-21 days was 1.40 (95% confidence interval: 1.19-1.66), 1.61 (1.28-2.02), and 1.13 (0.78-1.64), respectively. A 1°C decrease below a cold threshold of 9.1°C (8th percentile) for lags 0-21 was associated with a 7.35% (3.75-11.09%) increase in non-accidental mortality. The cold-mortality association was not significantly affected by cause-specific mortality, gender, age, marital status, ethnicity, occupation, or previous history of hypertension. There is an adverse impact of cold on mortality in Yuxi, China, and a temperature of 9.1°C is an important cut-off for cold-related mortality for residents. Copyright © 2016 Elsevier Inc. All rights reserved.
Survival predictability of lean and fat mass in men and women undergoing maintenance hemodialysis.
Noori, Nazanin; Kovesdy, Csaba P; Dukkipati, Ramanath; Kim, Youngmee; Duong, Uyen; Bross, Rachelle; Oreopoulos, Antigone; Luna, Amanda; Benner, Debbie; Kopple, Joel D; Kalantar-Zadeh, Kamyar
2010-11-01
Larger body size is associated with greater survival in maintenance hemodialysis (MHD) patients. It is not clear how lean body mass (LBM) and fat mass (FM) compare in their associations with survival across sex in these patients. We examined the hypothesis that higher FM and LBM are associated with greater survival in MHD patients irrespective of sex. In 742 MHD patients, including 31% African Americans with a mean (± SD) age of 54 ± 15 y, we categorized men (n = 391) and women (n = 351) separately into 4 quartiles of near-infrared interactance-measured LBM and FM. Cox proportional hazards models estimated death hazard ratios (HRs) (and 95% CIs), and cubic spline models were used to examine associations with mortality over 5 y (2001-2006). After adjustment for case-mix and inflammatory markers, the highest quartiles of FM and LBM were associated with greater survival in women: HRs of 0.38 (95% CI: 0.20, 0.71) and 0.34 (95% CI: 0.17, 0.67), respectively (reference: first quartile). In men, the highest quartiles of FM and percentage FM (FM%) but not of LBM were associated with greater survival: HRs of 0.51 (95% CI: 0.27, 0.96), 0.45 (95% CI: 0.23, 0.88), and 1.17 (95% CI: 0.60, 2.27), respectively. Cubic spline analyses showed greater survival with higher FM% and higher "FM minus LBM percentiles" in both sexes, whereas a higher LBM was protective in women. In MHD patients, higher FM in both sexes and higher LBM in women appear to be protective. The survival advantage of FM appears to be superior to that of LBM. Clinical trials to examine the outcomes of interventions that modify body composition in MHD patients are indicated.
Effect of data gaps on correlation dimension computed from light curves of variable stars
NASA Astrophysics Data System (ADS)
George, Sandip V.; Ambika, G.; Misra, R.
2015-11-01
Observational data, especially astrophysical data, are often limited by gaps arising from lack of observations for a variety of reasons. Such inadvertent gaps are usually smoothed over using interpolation techniques. However, these smoothing techniques can introduce artificial effects, especially when non-linear analysis is undertaken. We investigate how gaps can affect the computed values of the correlation dimension of a system, without using any interpolation. For this we introduce gaps artificially in synthetic data derived from standard chaotic systems, such as the Rössler and Lorenz systems, with the frequency of occurrence and the size of the missing data drawn from two Gaussian distributions. We then study the changes in correlation dimension with changes in the distributions of the position and size of the gaps. We find that for a considerable range of mean gap frequency and size, the value of the correlation dimension is not significantly affected, indicating that in such specific cases the calculated values can still be reliable and acceptable. Our study thus introduces a method of checking the reliability of computed correlation dimension values by calculating the distribution of gaps with respect to size and position. This is illustrated for data from the light curves of three variable stars: R Scuti, U Monocerotis, and SU Tauri. We also demonstrate how cubic spline interpolation can cause a time series of Gaussian noise with missing data to be misinterpreted as chaotic in origin. This is demonstrated for the non-chaotic light curve of the variable star SS Cygni, which gives a saturated D2 value when interpolated using a cubic spline. In addition, we find that a careful choice of binning, besides reducing noise, can help shift the gap distribution into the range of reliable D2 values.
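The quantity at stake here, the correlation dimension D2, comes from the Grassberger-Procaccia correlation sum. A minimal sketch of computing that sum on a gapped series follows; the signal, gap parameters, and embedding settings are illustrative, not those of the paper:

```python
import numpy as np

def correlation_sum(x, m, tau, r):
    """Grassberger-Procaccia correlation sum C(r): fraction of pairs of
    delay vectors (embedding dimension m, delay tau) closer than r."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    return np.mean(d[iu] < r)

# Synthetic signal with gaps whose positions and sizes are drawn from two
# Gaussian distributions, as in the scheme studied in the abstract.
rng = np.random.default_rng(0)
x = np.sin(0.1 * np.arange(1200)) + 0.1 * rng.standard_normal(1200)
keep = np.ones(len(x), bool)
for start in rng.normal(600.0, 300.0, size=5).astype(int):
    gap = max(1, int(rng.normal(20.0, 5.0)))
    keep[max(0, start) : max(0, start) + gap] = False
x_gappy = x[keep]

# D2 would be estimated from the slope of log C(r) vs. log r.
radii = [0.2, 0.5, 1.0]
C = [correlation_sum(x_gappy, m=3, tau=10, r=r) for r in radii]
```

Note that the gapped series is analyzed directly, with no interpolation, which is exactly the point of the study: spline-filling the gaps is what risks producing a spurious saturated D2.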
Smith, Andrea D; Crippa, Alessio; Woodcock, James; Brage, Søren
2016-12-01
Inverse associations between physical activity (PA) and type 2 diabetes mellitus are well known. However, the shape of the dose-response relationship is still uncertain. This review synthesises results from longitudinal studies in general populations and uses non-linear models of the association between PA and incident type 2 diabetes. A systematic literature search identified 28 prospective studies on leisure-time PA (LTPA) or total PA and risk of type 2 diabetes. PA exposures were converted into metabolic equivalent of task (MET) h/week and marginal MET (MMET) h/week, a measure only considering energy expended above resting metabolic rate. Restricted cubic splines were used to model the exposure-disease relationship. Our results suggest an overall non-linear relationship; using the cubic spline model we found a risk reduction of 26% (95% CI 20%, 31%) for type 2 diabetes among those who achieved 11.25 MET h/week (equivalent to 150 min/week of moderate activity) relative to inactive individuals. Achieving twice this amount of PA was associated with a risk reduction of 36% (95% CI 27%, 46%), with further reductions at higher doses (60 MET h/week, risk reduction of 53%). Results for the MMET h/week dose-response curve were similar for moderate intensity PA, but benefits were greater for higher intensity PA and smaller for lower intensity activity. Higher levels of LTPA were associated with substantially lower incidence of type 2 diabetes in the general population. The relationship between LTPA and type 2 diabetes was curvilinear; the greatest relative benefits are achieved at low levels of activity, but additional benefits can be realised at exposures considerably higher than those prescribed by public health recommendations.
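The restricted cubic spline used to model this dose-response curve constrains the fit to be linear beyond the boundary knots, which stabilizes the tails. A small, self-contained construction of such a basis (an illustrative re-implementation of the standard natural-spline parameterization; the knot placements are hypothetical, not those of the review) looks like this:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted (natural) cubic spline basis: a linear term plus k-2
    truncated-cubic terms constrained to be linear beyond the outer knots."""
    x = np.asarray(x, float)
    t = np.asarray(knots, float)
    k = len(t)
    p = lambda u: np.maximum(u, 0.0) ** 3   # truncated cubic (u)_+^3
    cols = [x]
    for j in range(k - 2):
        cols.append(
            p(x - t[j])
            - p(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
            + p(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2])
        )
    return np.column_stack(cols)

# Hypothetical exposure grid and knots in MET h/week.
knots = [0.0, 10.0, 20.0, 40.0]
X = rcs_basis(np.linspace(0.0, 60.0, 61), knots)
# X would enter a regression of log relative risk on activity dose.
```

The design matrix `X` has one column per degree of freedom; fitting log relative risk on these columns yields exactly the kind of curvilinear dose-response curve reported above.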
Kong, Alice P S; Choi, Kai Chow; Zhang, Jihui; Luk, Andrea; Lam, Siu Ping; Chan, Michael H M; Ma, Ronald C W; Chan, Juliana C N; Wing, Yun Kwok
2017-02-01
We aimed to explore the associations of sleep patterns during weekdays and weekends with glycemic control in patients with type 2 diabetes. We examined the association between indices of glycemic control [glycated hemoglobin (HbA1c) and fasting plasma glucose (FPG)] and sleep parameters (sleep duration, bedtime, and differences in sleep duration between weekdays and weekends) in adults with type 2 diabetes recruited into a prospective cohort enrolling from hospital medical clinics. Restricted cubic spline regression was used to examine the relationships between the glycemic indices and sleep parameters. Excluding shift workers, a total of 3508 patients enrolled between July 2010 and July 2014 were included in this analysis. Mean age was 53.9 [standard deviation (SD) 8.7] years, and mean duration of diabetes was 8.3 (SD 7.1) years. Fifty-nine percent were men. Mean sleep duration during weekdays and the difference in sleep duration between weekdays and weekends were 7.7 (SD 1.3) hours and 0.6 (SD 1.2) hours, respectively. Mean HbA1c and FPG were 7.6 (1.5)% and 7.6 (2.5) mmol/L, respectively. Using restricted cubic spline regressions with successive adjustment for potential confounders, the sleep duration difference between weekdays and weekends remained significantly associated with both HbA1c and FPG in a curvilinear manner. Sleeping about 1 h more during weekends than during weekdays was associated with a beneficial effect on HbA1c (-0.13%, 95% confidence interval -0.24 to -0.02). In type 2 diabetes, a regular sleeping habit with modest sleep compensation during weekends has a positive impact on glycemic control.
Quantitative survival impact of composite treatment delays in head and neck cancer.
Ho, Allen S; Kim, Sungjin; Tighiouart, Mourad; Mita, Alain; Scher, Kevin S; Epstein, Joel B; Laury, Anna; Prasad, Ravi; Ali, Nabilah; Patio, Chrysanta; St-Clair, Jon Mallen; Zumsteg, Zachary S
2018-05-09
Multidisciplinary management of head and neck cancer (HNC) must reconcile increasingly sophisticated subspecialty care with timeliness of care. Prior studies examined the individual effects of delays in diagnosis-to-treatment interval, postoperative interval, and radiation interval but did not consider them collectively. The objective of the current study was to investigate the combined impact of these interwoven intervals on patients with HNC. Patients with HNC who underwent curative-intent surgery with radiation were identified in the National Cancer Database between 2004 and 2013. Multivariable models were constructed using restricted cubic splines to determine nonlinear relations with overall survival. Overall, 15,064 patients were evaluated. After adjustment for covariates, only prolonged postoperative interval (P < .001) and radiation interval (P < .001) independently predicted for worse outcomes, whereas the association of diagnosis-to-treatment interval with survival disappeared. By using multivariable restricted cubic spline functions, increasing postoperative interval did not affect mortality until 40 days after surgery, and each day of delay beyond this increased the risk of mortality until 70 days after surgery (hazard ratio, 1.14; 95% confidence interval, 1.01-1.28; P = .029). For radiation interval, mortality escalated continuously with each additional day of delay, plateauing at 55 days (hazard ratio, 1.25; 95% confidence interval, 1.11-1.41; P < .001). Delays beyond these change points were not associated with further survival decrements. Increasing delays in postoperative and radiation intervals are associated independently with an escalating risk of mortality that plateaus beyond certain thresholds. Delays in initiating therapy, conversely, are eclipsed in importance when appraised in conjunction with the entire treatment course. 
Such findings may redirect focus to streamlining those intervals that are most sensitive to delays when considering survival burden. Cancer 2018. © 2018 American Cancer Society.
On the feasibility to integrate low-cost MEMS accelerometers and GNSS receivers
NASA Astrophysics Data System (ADS)
Benedetti, Elisa; Dermanis, Athanasios; Crespi, Mattia
2017-06-01
The aim of this research was to investigate the feasibility of merging the benefits offered by low-cost GNSS receivers and MEMS accelerometers, in order to promote the diffusion of low-cost monitoring solutions. The merging is performed at the level of the kinematic results (velocities and displacements) from the two kinds of sensors, whose observations are processed separately, following the so-called loose integration, which is simpler and more flexible with respect to changing the combined sensors. First, the issues related to the differences in reference systems, time systems, and measurement rates and epochs of the two sensors were addressed. An approach was designed and tested to transform the GPS and MEMS outputs into common reference and time systems and to interpolate the usually (much) denser MEMS observations to the common (GPS) epochs. The proposed approach is limited to a time-independent (constant) orientation of the MEMS reference system with respect to the GPS one. Then, a data fusion approach based on the Discrete Fourier Transform and cubic spline interpolation was proposed for both velocities and displacements: the MEMS- and GPS-derived solutions are first separated by a rectangular filter in the spectral domain, then back-transformed and combined through cubic spline interpolation. Accuracies of around 5 mm for slow and fast displacements, and better than 2 mm/s for velocities, were assessed. The obtained solution paves the way to a powerful and appealing use of low-cost single-frequency GNSS receivers and MEMS accelerometers for structural and ground monitoring applications. Some additional remarks and prospects for future investigations complete the paper.
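The fusion step described above, spline-resample the denser MEMS series to the GPS epochs, then combine low-frequency GPS content with high-frequency MEMS content via a rectangular filter in the spectral domain, can be sketched as follows. All data, rates, and the cutoff frequency are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fuse_spectral(low_series, high_series, fs, fc):
    """Rectangular-filter fusion: keep spectral content of the first series
    below cutoff fc (Hz) and of the second series above it, then invert."""
    n = len(low_series)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    L, H = np.fft.rfft(low_series), np.fft.rfft(high_series)
    return np.fft.irfft(np.where(f <= fc, L, H), n)

# Hypothetical displacement series: GPS at 10 Hz, MEMS-derived at 100 Hz.
fs_gps, fs_mems = 10.0, 100.0
t_gps = np.arange(0.0, 10.0, 1.0 / fs_gps)
t_mems = np.arange(0.0, 10.0, 1.0 / fs_mems)
truth = lambda t: np.sin(2 * np.pi * 0.2 * t) + 0.05 * np.sin(2 * np.pi * 3.0 * t)
gps_disp = truth(t_gps)
mems_disp = truth(t_mems)

# Cubic-spline interpolation of the denser MEMS series to the GPS epochs,
# then spectral-domain combination with a 1 Hz cutoff.
mems_on_gps = CubicSpline(t_mems, mems_disp)(t_gps)
fused = fuse_spectral(gps_disp, mems_on_gps, fs_gps, fc=1.0)
```

The cutoff would in practice be chosen from the noise spectra of the two sensors (GNSS noisy at high frequency, accelerometer drift at low frequency).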
Zhang, Dongdong; Liu, Xuejiao; Liu, Yu; Sun, Xizhuo; Wang, Bingyuan; Ren, Yongcheng; Zhao, Yang; Zhou, Junmei; Han, Chengyi; Yin, Lei; Zhao, Jingzhi; Shi, Yuanyuan; Zhang, Ming; Hu, Dongsheng
2017-10-01
Leisure-time physical activity (LTPA) has been suggested to reduce risk of metabolic syndrome (MetS). However, a quantitative comprehensive assessment of the dose-response association between LTPA and incident MetS has not been reported. We performed a meta-analysis of studies assessing the risk of MetS with LTPA. MEDLINE via PubMed and EMBase databases were searched for relevant articles published up to March 13, 2017. Random-effects models were used to estimate the summary relative risk (RR) of MetS with LTPA. Restricted cubic splines were used to model the dose-response association. We identified 16 articles (18 studies including 76,699 participants and 13,871 cases of MetS). We found a negative linear association between LTPA and incident MetS, with a reduction of 8% in MetS risk per 10 metabolic equivalent of task (MET) h/week increment. According to the restricted cubic splines model, risk of MetS was reduced 10% with LTPA performed at the basic guideline-recommended level of 150 min of moderate PA (MPA) per week (10 MET h/week) versus inactivity (RR=0.90, 95% CI 0.86-0.94). It was reduced 20% and 53% with LTPA at twice (20 MET h/week) and seven times (70 MET h/week) the basic recommended level (RR=0.80, 95% CI 0.74-0.88 and RR=0.47, 95% CI 0.34-0.64, respectively). Our findings provide quantitative data suggesting that any amount of LTPA is better than none and that LTPA substantially exceeding the current LTPA guidelines is associated with an additional reduction in MetS risk. Copyright © 2017. Published by Elsevier Inc.
Kolli, R Prakash; Seidman, David N
2014-12-01
The composition of co-precipitated and collocated NbC carbide precipitates, Fe3C iron carbide (cementite), and Cu-rich precipitates are studied experimentally by atom-probe tomography (APT). The Cu-rich precipitates located at a grain boundary (GB) are also studied. The APT results for the carbides are supplemented with computational thermodynamics predictions of composition at thermodynamic equilibrium. Two types of NbC carbide precipitates are distinguished based on their stoichiometric ratio and size. The Cu-rich precipitates at the periphery of the iron carbide and at the GB are larger than those distributed in the α-Fe (body-centered cubic) matrix, which is attributed to short-circuit diffusion of Cu along the GB. Manganese segregation is not observed at the heterophase interfaces of the Cu-rich precipitates that are located at the periphery of the iron carbide or at the GB, which is unlike those located at the edge of the NbC carbide precipitates or distributed in the α-Fe matrix. This suggests the presence of two populations of NiAl-type (B2 structure) phases at the heterophase interfaces in multicomponent Fe-Cu steels.
The geomagnetic jerk of 1969 and the DGRFs
Thompson, D.; Cain, J.C.
1987-01-01
Cubic spline fits to the DGRF/IGRF series indicate agreement with other analyses showing the 1969-1970 magnetic jerk in the ḣ₂¹ and ġ₂⁰ secular-change coefficients, and agreement that the ḣ₁¹ term showed no sharp change. The variation of the ġ₁⁰ term is out of phase with other analyses, indicating a likely error in its representation in the 1965-1975 interval. We recommend that future derivations of the 'definitive' geomagnetic reference models take into consideration the times of impulses or jerks, so as not to be bound to a standard 5-year interval, and otherwise make more considered analyses before adopting sets of coefficients. © 1987.
1983-01-01
January 5-7, 1983. (Hawaii Institute of Geophysics, Honolulu. This is a component part of a compilation report; to order the complete compilation, use AD-A137 212.) Only fragments of the abstract are recoverable: "... (Holland, 1979). Outside of the region of the wavemaker the vorticity-mixing theory leads us to expect a down-gradient (southward) component of v'q' ... calling them 'mesoscale' begins to be marginal. The climatological T450 field used above is based on cubic spline fits to averages over 2' (latitude) by ..."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strayer, M.R.
This talk surveys a thirteen-year collaboration with Chris Bottcher on various aspects of strong field electrodynamics. Most of the work centers on the atomic physics associated with the peripheral collisions of ultrarelativistic heavy atoms. The earliest, beginning in about 1979, dealt with the spontaneous emission of positrons from nuclear quasimolecules and touched briefly on the formation of axions as a possible explanation of the anomalous peaks in the spectrum. This work stimulated the extensive studies of particle production from coherent fields that laid the foundations for investigations of nuclear form factors, structure functions, and production mechanisms for the Higgs and other exotic particles. Chris conjectured that the strong fields that are present in these collisions would give rise to nonperturbative effects. Thus, during this time, Chris also worked to develop basis-spline collocation methods for solving dynamical relativistic fermions in super strong fields. This was perhaps one of the best of times for Chris; on these problems alone, he co-authored fifty articles with more than twenty different collaborators.
Higher modes of the Orr-Sommerfeld problem for boundary layer flows
NASA Technical Reports Server (NTRS)
Lakin, W. D.; Grosch, C. E.
1983-01-01
The discrete spectrum of the Orr-Sommerfeld problem of hydrodynamic stability for boundary layer flows in semi-infinite regions is examined. Related questions concerning the continuous spectrum are also addressed. Emphasis is placed on the stability problem for the Blasius boundary layer profile. A general theoretical result is given which proves that the discrete spectrum of the Orr-Sommerfeld problem for boundary layer profiles (U(y), 0, 0) has only a finite number of discrete modes when U(y) has derivatives of all orders. Details are given of a highly accurate numerical technique based on collocation with splines for the calculation of stability characteristics. The technique includes replacement of 'outer' boundary conditions by asymptotic forms based on the proper large parameter in the stability problem. Implementation of the asymptotic boundary conditions is such that there is no need to make a priori distinctions between subcases of the discrete spectrum or between the discrete and continuous spectra. Typical calculations for the usual Blasius problem are presented.
Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.
2011-01-01
Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures will reduce the performance of polyp detection. This paper presents an analysis of interpolation's effect on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolation included more accurate curvature values for simulated data and isolation of polyps near folds in clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline, and cubic B-spline interpolation each significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029
Computer program for plotting and fairing wind-tunnel data
NASA Technical Reports Server (NTRS)
Morgan, H. L., Jr.
1983-01-01
A detailed description of the Langley computer program PLOTWD, which plots and fairs experimental wind-tunnel data, is presented. The program was written for use primarily on the Langley CDC computer and CALCOMP plotters. The fundamental operating features of the program are that the input data are read and written to a random-access file for use during program execution, that the data for a selected run can be sorted and edited to delete duplicate points, and that the data can be plotted and faired using tension splines, least-squares polynomial, or least-squares cubic-spline curves. The most noteworthy feature of the program is the simplicity of the user-supplied input requirements. Several subroutines are also included that can be used to draw grid lines, zero lines, axis scale values and labels, and legends. A detailed description of the program's operational features and of each subprogram is presented. The general application of the program is also discussed, together with the input and output for two typical plot types. A listing of the program code, a user guide, and an output description are presented in appendices. The program has been in use at Langley for several years and has proven to be both easy to use and versatile.
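Fairing scattered wind-tunnel data with a least-squares cubic spline, as PLOTWD does, can be sketched in a few lines using scipy's smoothing spline as a stand-in (the data, smoothing factor, and variable names are illustrative, not PLOTWD's):

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Synthetic noisy "wind-tunnel" samples: lift coefficient vs. angle of attack.
rng = np.random.default_rng(1)
alpha = np.linspace(-4.0, 12.0, 33)                  # angle of attack, deg
cl = 0.1 * alpha + 0.3 + 0.02 * rng.standard_normal(alpha.size)

# s=0 would interpolate every point; s>0 produces a faired least-squares
# cubic-spline curve through the scatter (sum of squared residuals <= s).
tck = splrep(alpha, cl, k=3, s=0.02)
alpha_fine = np.linspace(-4.0, 12.0, 200)
cl_faired = splev(alpha_fine, tck)
```

The smoothing factor plays the role of the fairing tension: larger values trade fidelity to individual points for a smoother curve.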
Interactive algebraic grid-generation technique
NASA Technical Reports Server (NTRS)
Smith, R. E.; Wiese, M. R.
1986-01-01
An algebraic grid generation technique and the use of an associated interactive computer program are described. The technique, called the two-boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two boundary grid generation), is also described.
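The Hermite cubic blend at the heart of the two-boundary technique can be sketched as follows. This is a simplified illustration (no side boundaries or control functions); the boundary shapes and slope vectors are hypothetical:

```python
import numpy as np

def hermite(eta):
    """Hermite cubic blending functions on eta in [0, 1]."""
    return (2 * eta**3 - 3 * eta**2 + 1,   # weights bottom position
            -2 * eta**3 + 3 * eta**2,      # weights top position
            eta**3 - 2 * eta**2 + eta,     # weights bottom slope
            eta**3 - eta**2)               # weights top slope

# Two fixed, nonintersecting boundaries, each an ordered set of points.
xi = np.linspace(0.0, 1.0, 21)
bottom = np.column_stack([xi, 0.1 * np.sin(np.pi * xi)])   # bumped wall
top = np.column_stack([xi, np.ones_like(xi)])              # flat wall y = 1

# Direction vectors controlling how grid lines leave each boundary
# (here simply vertical; a real code derives these from boundary geometry).
d_bottom = np.tile([0.0, 1.0], (len(xi), 1))
d_top = np.tile([0.0, 1.0], (len(xi), 1))

eta = np.linspace(0.0, 1.0, 11)
h1, h2, h3, h4 = hermite(eta[:, None, None])
grid = h1 * bottom + h2 * top + h3 * d_bottom + h4 * d_top
# grid has shape (n_eta, n_xi, 2): interior points blended between boundaries.
```

Because the blend is Hermite rather than linear, both the positions and the slopes of grid lines at the two boundaries are controlled, which is what lets the method enforce grid orthogonality at walls.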
Optimization of freeform surfaces using intelligent deformation techniques for LED applications
NASA Astrophysics Data System (ADS)
Isaac, Annie Shalom; Neumann, Cornelius
2018-04-01
For many years, optical designers have shown great interest in designing efficient optimization algorithms that bring significant improvement to their initial designs. However, the optimization is limited by the large number of parameters present in Non-Uniform Rational B-Spline (NURBS) surfaces. This limitation was overcome by an indirect technique known as optimization using freeform deformation (FFD). In this approach, the optical surface is placed inside a cubical grid. The vertices of this grid are modified, which deforms the underlying optical surface during the optimization. One of the challenges in this technique is the selection of appropriate vertices of the cubical grid, because these vertices share no relationship with the optical performance. When irrelevant vertices are selected, the computational complexity increases. Moreover, the surfaces created by them are not always feasible to manufacture, which is the same problem faced by any optimization technique that creates freeform surfaces. Therefore, this research addresses these two important issues and provides feasible design techniques to solve them. Finally, the proposed techniques are validated using two different illumination examples: a street lighting lens and a stop lamp for automobiles.
Milles, Julien; Zhu, Yue Min; Gimenez, Gérard; Guttmann, Charles R G; Magnin, Isabelle E
2007-03-01
A novel approach for correcting intensity nonuniformity in magnetic resonance imaging (MRI) is presented. This approach is based on the simultaneous use of spatial and gray-level histogram information. Spatial information about intensity nonuniformity is obtained using cubic B-spline smoothing. Gray-level histogram information of the image corrupted by intensity nonuniformity is exploited from a frequential point of view. The proposed correction method is illustrated using both physical phantom and human brain images. The results are consistent with theoretical prediction, and demonstrate a new way of dealing with intensity nonuniformity problems. They are all the more significant as the ground truth on intensity nonuniformity is unknown in clinical images.
NASA Technical Reports Server (NTRS)
Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.
1991-01-01
A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited set of DC and small-signal S-parameter data and allows accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).
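The role of the 2-D cubic spline here is to turn sparse tabulated bias-point data into a smooth, differentiable device characteristic that a harmonic-balance loop can evaluate at arbitrary operating points. A minimal sketch with scipy (the I-V model and values are purely illustrative, not a HEMT model):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical tabulated drain current Ids(Vgs, Vds) on a coarse bias grid.
vgs = np.linspace(-1.0, 0.0, 6)
vds = np.linspace(0.0, 2.0, 9)
ids = np.maximum(vgs[:, None] + 0.7, 0.0) ** 2 * np.tanh(3.0 * vds[None, :])

# A 2-D cubic spline (kx=ky=3) gives a smooth, differentiable I-V surface,
# evaluable at any off-grid bias inside a harmonic-balance iteration.
spl = RectBivariateSpline(vgs, vds, ids, kx=3, ky=3)
i_interp = spl(-0.55, 1.3)[0, 0]        # current at an off-grid bias point
gm = spl(-0.55, 1.3, dx=1)[0, 0]        # transconductance dIds/dVgs
```

The analytic derivatives of the spline are what make harmonic-balance Jacobians cheap to form, one reason spline-based device tables are attractive in this setting.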
Numerical Manifold Method for the Forced Vibration of Thin Plates during Bending
Jun, Ding; Song, Chen; Wei-Bin, Wen; Shao-Ming, Luo; Xia, Huang
2014-01-01
A novel numerical manifold method was derived from the cubic B-spline basis function. The new interpolation function is characterized by high-order coordination at the boundary of a manifold element. The linear elastic-dynamic equation used to solve the bending vibration of thin plates was derived according to the principle of minimum instantaneous potential energy. The method for the initialization of the dynamic equation and its solution process were provided. Moreover, the analysis showed that the calculated stiffness matrix exhibited favorable performance. Numerical results showed that the generalized degrees of freedom were significantly fewer and that the calculation accuracy was higher for the manifold method than for the conventional finite element method. PMID:24883403
LQR Control of Thin Shell Dynamics: Formulation and Numerical Implementation
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1997-01-01
A PDE-based feedback control method for thin cylindrical shells with surface-mounted piezoceramic actuators is presented. Donnell-Mushtari equations modified to incorporate both passive and active piezoceramic patch contributions are used to model the system dynamics. The well-posedness of this model and the associated LQR problem with an unbounded input operator are established through analytic semigroup theory. The model is discretized using a Galerkin expansion with basis functions constructed from Fourier polynomials tensored with cubic splines, and convergence criteria for the associated approximate LQR problem are established. The effectiveness of the method for attenuating the coupled longitudinal, circumferential and transverse shell displacements is illustrated through a set of numerical examples.
Systems of Inhomogeneous Linear Equations
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many problems in physics, and especially computational physics, involve systems of linear equations which arise, e.g., from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large, standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives; they can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of the Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
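The specialized Gaussian elimination mentioned for tridiagonal systems is commonly known as the Thomas algorithm; a minimal sketch follows, applied to the kind of [1, 4, 1] system that arises in natural cubic-spline interpolation (right-hand side values are illustrative):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d. No pivoting; assumes diagonal
    dominance, as holds for cubic-spline systems."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)                        # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# The classic spline second-derivative system: diagonal 4, off-diagonals 1.
n = 6
a = np.full(n, 1.0); a[0] = 0.0
c = np.full(n, 1.0); c[-1] = 0.0
b = np.full(n, 4.0)
d = np.arange(1.0, n + 1.0)
x = thomas(a, b, c, d)
```

Only the three diagonals are stored and each is visited once, which is why this beats general O(n³) elimination so decisively for spline setup.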
Balk, S N; Schoenaker, D A J M; Mishra, G D; Toeller, M; Chaturvedi, N; Fuller, J H; Soedamah-Muthu, S S
2016-02-01
Diet and lifestyle advice for type 1 diabetes (T1DM) patients is based on little evidence and putative effects on glycaemic control. Therefore, we investigated the longitudinal relation between dietary and lifestyle variables and HbA1c levels in patients with type 1 diabetes. A 7-year prospective cohort analysis was performed in 1659 T1DM patients (52% males, mean age 32.5 years) participating in the EURODIAB Prospective Complications Study. Baseline dietary intake was assessed by 3-day records, and physical activity, smoking status, and alcohol intake by questionnaires. HbA1c during follow-up was centrally assessed by immunoassay. Analysis of variance (ANOVA) and restricted cubic spline regression analyses were performed to assess dose-response associations between diet and lifestyle variables and HbA1c levels, adjusted for age, sex, lifestyle and body composition measures, baseline HbA1c, medication use, and severe hypoglycaemic attacks. Mean follow-up of our study population was 6.8 (s.d. 0.6) years. Mean HbA1c level was 8.25% (s.d. 1.85) (or 66.6 mmol/mol) at baseline and 8.27% (s.d. 1.44) at follow-up. Physical activity, smoking status, and alcohol intake were not associated with HbA1c at follow-up in multivariable ANOVA models. Baseline intake below the median of vegetable protein (<29 g/day) and dietary fibre (<18 g/day) was associated with higher HbA1c levels. Restricted cubic splines showed nonlinear associations with HbA1c levels for vegetable protein (P (nonlinear)=0.008) and total dietary fibre (P (nonlinear)=0.0009). This study suggests that low intakes of vegetable protein and dietary fibre are associated with worse glycaemic control in type 1 diabetes.
Body mass index in relation to serum prostate-specific antigen levels and prostate cancer risk.
Bonn, Stephanie E; Sjölander, Arvid; Tillander, Annika; Wiklund, Fredrik; Grönberg, Henrik; Bälter, Katarina
2016-07-01
High body mass index (BMI) has been directly associated with risk of aggressive or fatal prostate cancer. One possible explanation may be an effect of BMI on serum levels of prostate-specific antigen (PSA). To study the association between BMI and serum PSA as well as prostate cancer risk, a large cohort of men without prostate cancer at baseline was followed prospectively for prostate cancer diagnoses until 2015. Serum PSA and BMI were assessed among 15,827 men at baseline in 2010-2012. During follow-up, 735 men were diagnosed with prostate cancer, with 282 (38.4%) classified as high-grade cancers. Multivariable linear regression models and natural cubic regression splines were fitted for analyses of BMI and log-PSA. For risk analysis, Cox proportional hazards regression models were used to estimate hazard ratios (HR) and 95% confidence intervals (CI), and natural cubic Cox regression splines producing standardized cancer-free probabilities were fitted. Results showed that baseline serum PSA decreased by 1.6% (95% CI: -2.1 to -1.1) with every one-unit increase in BMI. Statistically significant decreases of 3.7, 11.7 and 32.3% were seen for increasing BMI categories of 25 < 30, 30 < 35 and ≥35 kg/m(2), respectively, compared to the reference (18.5 < 25 kg/m(2)). No statistically significant associations were seen between BMI and prostate cancer risk, although results were indicative of a positive association with incidence rates of high-grade disease and an inverse association with incidence of low-grade disease. However, findings regarding risk are limited by the short follow-up time. In conclusion, BMI was inversely associated with PSA levels. BMI should be taken into consideration when referring men to a prostate biopsy based on serum PSA levels. © 2016 UICC.
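Restricted (natural) cubic splines, as used in analyses like the two above, constrain the fitted curve to be linear beyond the boundary knots, which stabilizes the tails. A minimal NumPy sketch of Harrell's truncated-power basis follows; the function name, knot values and scaling are illustrative, not the actual software used in these studies:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline design columns [x, X_1, ..., X_{k-2}]
    (Harrell's truncated-power form); linear beyond the boundary knots."""
    x = np.asarray(x, dtype=float)
    t = np.sort(np.asarray(knots, dtype=float))
    k = len(t)
    scale = (t[-1] - t[0]) ** 2          # conventional scaling factor

    def pos3(u):                         # truncated cubic (u)_+^3
        return np.maximum(u, 0.0) ** 3

    cols = [x]
    for j in range(k - 2):
        # the two subtracted terms cancel the cubic and quadratic growth
        # past the last two knots, leaving a linear tail
        term = (pos3(x - t[j])
                - pos3(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
                + pos3(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(term / scale)
    return np.column_stack(cols)
```

Regressing an outcome on these columns (plus an intercept) with ordinary least squares or inside a Cox model yields the dose-response curves the abstracts describe.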
Cardinal, Thiane Ristow; Vigo, Alvaro; Duncan, Bruce Bartholow; Matos, Sheila Maria Alvim; da Fonseca, Maria de Jesus Mendes; Barreto, Sandhi Maria; Schmidt, Maria Inês
2018-01-01
Waist circumference (WC) has been incorporated in the definition of the metabolic syndrome (MetS), but the exact WC cut-off points across populations are not clear. The Joint Interim Statement (JIS) suggested possible cut-offs for different populations and ethnic groups. However, the adequacy of these cut-offs for Brazilian adults has been scarcely investigated. The objective of the study is to evaluate possible WC thresholds to be used in the definition of MetS using data from the Longitudinal Study of Adult Health (ELSA-Brasil), a multicenter cohort study of civil servants (35-74 years old) of six Brazilian cities. We analyzed baseline data from 14,893 participants (6772 men and 8121 women). MetS was defined according to the JIS criteria, but excluding WC and thus requiring 2 of the 4 remaining elements. We used restricted cubic spline regression to graph the relationship between WC and MetS. We identified optimal cut-off points which maximized joint sensitivity and specificity (Youden's index) from Receiver Operating Characteristic curves. We also estimated the C-statistics using logistic regression. We found no apparent threshold for WC in restricted cubic spline plots. The optimal cut-off for men was 92 cm (2 cm lower than that recommended by JIS for Caucasian/Europid or Sub-Saharan African men, but 2 cm higher than that recommended for ethnic Central and South Americans). For women, the optimal cut-off was 86 cm, 6 cm higher than that recommended for Caucasian/Europid and ethnic Central and South American women. Optimal cut-offs did not vary across age groups and most common race/color categories (except for Asian men, 87 cm). Sex-specific cut-offs for WC recommended by JIS differ from the optimal cut-offs we found for adult men and women of Brazil's most common ethnic groups.
Gomadam, Pallavi; Shah, Amit; Qureshi, Waqas; Yeboah, Phyllis N; Freedman, Barry I; Bowden, Donald; Soliman, Elsayed Z; Yeboah, Joseph
2018-01-01
We examined the associations between blood pressure indices (SBP, DBP, mean arterial pressure and pulse pressure) and cardiovascular disease (CVD) mortality among persons with or without diabetes mellitus (NON-DM) in a multiethnic cohort. We included 17 650 participants from National Health and Nutrition Examination Survey III and 1439 participants from Diabetes Heart Study (total n = 19 089, 16.3% had diabetes mellitus, mean age 48.5 years, 44.4% white, 27.1% black, 28.5% other race, 54.4% women). Cox proportional hazard, cubic spline and area under the curve analyses were used to assess the associations. CVD death was ascertained via social security registry or the National Death Index. After a mean (SD) of 16.2 (6.1) years of follow-up, 17.9% of diabetes mellitus and 8.8% of those NON-DM died of CVD. Diabetes mellitus was associated with an increased risk of CVD death [hazard ratio (95% confidence interval): 1.50 (1.25-1.82)]. One SD increase in SBP was significantly associated with CVD mortality in NON-DM [1.28 (1.18-1.39)] but not diabetes mellitus [1.04 (0.88-1.23)] in the full Cox models. Adjusted cubic spline analysis showed significant nonlinear but different association between SBP and CVD mortality among diabetes mellitus (U-shaped) and NON-DM (J-shaped). The C-statistics of our full model in NON-DM and diabetes mellitus were (0.888 vs. 0.735, P < 0.001). SBP showed a trend toward improving C statistics in NON-DM but not diabetes mellitus. The association between SBP and CVD mortality risk is nonlinear but different in diabetes mellitus (U-shaped) and NON-DM (J-shaped), explaining why aggressive blood pressure lowering may have different outcomes in these two groups.
Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.
2013-01-01
Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method. PMID:24000283
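An evaluation of this kind can be sketched compactly: downsample a reference image, upsample it back with interpolators of increasing spline order, and score each result by peak signal-to-noise ratio. The sketch below uses a synthetic smooth image and SciPy's `ndimage.zoom` (orders 0, 1 and 3 standing in for nearest-neighbour, trilinear and 3rd-order B-spline); it is illustrative only, not the study's actual pipeline:

```python
import numpy as np
from scipy import ndimage

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# synthetic smooth "high resolution" image standing in for an MRI slice
yy, xx = np.mgrid[0:64, 0:64]
hr = np.sin(xx / 7.0) * np.cos(yy / 5.0)

lr = hr[::2, ::2]                    # emulate a low-resolution acquisition
scores = {}
for order in (0, 1, 3):              # nearest, linear, cubic B-spline
    up = ndimage.zoom(lr, 2, order=order)
    scores[order] = psnr(hr, up)
    print(order, round(scores[order], 1))
```

For smooth data the higher-order interpolator recovers the reference more faithfully, which is the pattern such evaluations quantify.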
NASA Astrophysics Data System (ADS)
Wang, Jing; Qi, Zhaohui; Wang, Gang
2017-10-01
The dynamic analysis of cable-pulley systems is investigated in this paper, where the time-varying length characteristic of the cable as well as the coupling motion between the cable and the pulleys are considered. The dynamic model for cable-pulley systems is presented based on the principle of virtual power. Firstly, cubic spline interpolation is adopted for modeling the flexible cable elements, and the virtual powers of tensile strain, inertia and gravity forces on the cable are formulated. Then, the coupled motions between the cable and the movable or fixed pulleys are described by the input and output contact points, based on the no-slip assumption and the spatial description. The virtual powers of inertia, gravity and applied forces on the contact segment of the cable, the movable and fixed pulleys are formulated. In particular, the internal node degrees of freedom of the spline cable elements are reduced, so that only the independent description parameters of the nodes connected to the pulleys appear in the final governing dynamic equations. At last, two cable-pulley lifting mechanisms are considered as demonstrative application examples, where the vibration of the lifting process is investigated. The comparison with ADAMS models is given to prove the validity of the proposed method.
Interpolation for de-Dopplerisation
NASA Astrophysics Data System (ADS)
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane
2011-01-01
Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223
Using High Resolution Design Spaces for Aerodynamic Shape Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Li, Wu; Padula, Sharon
2004-01-01
This paper explains why high resolution design spaces encourage traditional airfoil optimization algorithms to generate noisy shape modifications, which lead to inaccurate linear predictions of aerodynamic coefficients and potential failure of descent methods. By using auxiliary drag constraints for a simultaneous drag reduction at all design points and the least shape distortion to achieve the targeted drag reduction, an improved algorithm generates relatively smooth optimal airfoils with no severe off-design performance degradation over a range of flight conditions, in high resolution design spaces parameterized by cubic B-spline functions. Simulation results using FUN2D in Euler flows are included to show the capability of the robust aerodynamic shape optimization method over a range of flight conditions.
A New Multifunctional Sensor for Measuring Concentrations of Ternary Solution
NASA Astrophysics Data System (ADS)
Wei, Guo; Shida, Katsunori
This paper presents a multifunctional sensor with a novel structure, which is capable of directly sensing temperature and two physical parameters of solutions, namely ultrasonic velocity and conductivity. By combined measurement of these three parameters, the concentrations of the various components in a ternary solution can be simultaneously determined. The structure and operation principle of the sensor are described, and a regression algorithm based on natural cubic spline interpolation and the least squares method is adopted to estimate the concentrations. The performance of the proposed sensor is experimentally tested using a ternary aqueous solution of sodium chloride and sucrose, which is widely involved in the food and beverage industries. This sensor could prove valuable as a process control sensor in industrial applications.
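The natural-spline-plus-least-squares idea can be illustrated with a one-dimensional calibration toy: fit a natural cubic spline through points relating concentration to measured ultrasonic velocity, then invert it for a new measurement. All calibration numbers below are invented for illustration; the paper's actual regression handles two concentrations and three measurands simultaneously:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical calibration data: concentration (wt%) vs velocity (m/s)
conc = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
velocity = np.array([1482.0, 1510.0, 1545.0, 1587.0, 1636.0])

# natural cubic spline: zero second derivative at the end knots
cal = CubicSpline(conc, velocity, bc_type='natural')

# invert the calibration curve for a measured velocity by dense sampling
c_grid = np.linspace(0.0, 20.0, 2001)

def estimate_conc(v_meas):
    return c_grid[np.argmin(np.abs(cal(c_grid) - v_meas))]

print(round(estimate_conc(1545.0), 2))  # recovers ~10 wt%
```

With several measurands, the inversion becomes a least squares fit of all calibration surfaces at once, as the abstract describes.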
Moving magnets in a micromagnetic finite-difference framework
NASA Astrophysics Data System (ADS)
Rissanen, Ilari; Laurson, Lasse
2018-05-01
We present a method and an implementation for smooth linear motion in a finite-difference-based micromagnetic simulation code, to be used in simulating magnetic friction and other phenomena involving moving microscale magnets. Our aim is to accurately simulate the magnetization dynamics and relative motion of magnets while retaining high computational speed. To this end, we combine techniques for fast scalar potential calculation and cubic B-spline interpolation, parallelizing them on a graphics processing unit (GPU). The implementation also includes the possibility of explicitly simulating eddy currents in the case of conducting magnets. We test our implementation by providing numerical examples of stick-slip motion of thin films pulled by a spring and the effect of eddy currents on the switching time of magnetic nanocubes.
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, the measured strain is fitted using a piecewise least squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept-wing model.
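The first step (fit the strain, integrate twice) can be sketched for an idealized cantilever, where beam curvature w''(x) equals surface strain divided by the fiber's offset c from the neutral axis. The geometry and strain values below are invented so the result can be checked against a known deflection; this is a sketch of the integration idea, not the paper's full procedure:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# discrete strain samples along the fiber of a unit-length cantilever;
# for a beam, curvature w''(x) = eps(x)/c, with c the fiber offset
x = np.linspace(0.0, 1.0, 11)
c = 0.01
eps = c * (1.0 - x)                 # strain consistent with w'' = 1 - x

curv = CubicSpline(x, eps / c)      # step 1: fit the strain/curvature
slope = curv.antiderivative()       # integrate once  -> slope w'(x)
defl = slope.antiderivative()       # integrate twice -> deflection w(x)

# cantilever root conditions w(0) = w'(0) = 0 are satisfied automatically,
# since antiderivative() vanishes at the left end of the data interval
print(round(float(defl(1.0)), 4))   # analytic tip deflection is 1/3
```

Because `antiderivative()` integrates the spline's polynomial pieces exactly, the recovered tip deflection matches the analytic value 1/2 - 1/6 = 1/3 here.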
An experimental trip to the Calculus of Variations
NASA Astrophysics Data System (ADS)
Arroyo, Josu
2008-04-01
This paper presents a collection of experiments in the Calculus of Variations. The implementation of the Gradient Descent algorithm built on cubic splines acting as "numerically friendly" elementary functions gives us ways to solve variational problems by constructing the solution. It adopts a pragmatic point of view: one gets solutions sometimes as fast as possible, sometimes as close as possible to the true solutions. The balance between speed and precision is not always easy to achieve. Starting from the most well-known, classic or historical formulation of a variational problem, section 2 briefly describes the bridge between theoretical and computational formulations. The next sections show the results of several kinds of experiments, from the most basic, such as those about geodesics, to the most complex, such as those about vesicles.
Fitting ordinary differential equations to short time course data.
Brewer, Daniel; Barenco, Martino; Callard, Robin; Hubank, Michael; Stark, Jaroslav
2008-02-28
Ordinary differential equations (ODEs) are widely used to model many systems in physics, chemistry, engineering and biology. Often one wants to compare such equations with observed time course data, and use this to estimate parameters. Surprisingly, practical algorithms for doing this are relatively poorly developed, particularly in comparison with the sophistication of numerical methods for solving both initial and boundary value problems for differential equations, and for locating and analysing bifurcations. A lack of good numerical fitting methods is particularly problematic in the context of systems biology where only a handful of time points may be available. In this paper, we present a survey of existing algorithms and describe the main approaches. We also introduce and evaluate a new efficient technique for estimating ODEs linear in parameters, particularly suited to situations where noise levels are high and the number of data points is low. It employs a spline-based collocation scheme and alternates linear least squares minimization steps with repeated estimates of the noise-free values of the variables. This is reminiscent of expectation-maximization methods widely used for problems with nuisance parameters or missing data.
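The flavour of a spline-based collocation scheme can be sketched for the simplest case, dx/dt = θx with the model linear in θ: smooth the noisy observations with a spline, then estimate θ by linear least squares on the spline's values and derivatives at the collocation (time) points. This toy, with made-up data and a single parameter, omits the paper's alternating re-estimation of the noise-free values:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
theta_true = -0.8
t = np.linspace(0.0, 3.0, 25)
x_obs = np.exp(theta_true * t) + rng.normal(0.0, 0.01, t.size)

# represent the trajectory by a smoothing spline, then fit the ODE
# dx/dt = theta * x by least squares on the spline's values and
# derivatives evaluated at the collocation points
sp = UnivariateSpline(t, x_obs, k=3, s=len(t) * 0.01 ** 2)
xs, dxs = sp(t), sp.derivative()(t)
theta_hat = np.dot(xs, dxs) / np.dot(xs, xs)   # closed-form LS slope
print(round(theta_hat, 2))
```

No ODE solver is called: differentiation of the spline replaces repeated numerical integration, which is what makes such schemes cheap when data are sparse.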
NASA Astrophysics Data System (ADS)
Parks, P. B.; Ishizaki, Ryuichi
2000-10-01
In order to clarify the structure of the ablation flow, 2D simulation is carried out with a fluid code solving the temporal evolution of the MHD equations. The code includes the electrostatic sheath effect at the cloud interface (P.B. Parks et al., Plasma Phys. Contr. Fusion 38, 571 (1996)). An Eulerian cylindrical coordinate system (r, z) is used with a spherical pellet. The code uses the Cubic-Interpolated Pseudoparticle (CIP) method (H. Takewaki and T. Yabe, J. Comput. Phys. 70, 355 (1987)), which divides the fluid equations into non-advection and advection phases. The most essential element of the CIP method is the calculation of the advection phase. In this phase, a cubic interpolated spatial profile is shifted in space according to the total derivative equations, similarly to a particle scheme. Since the profile is interpolated by using the value and the spatial derivative value at each grid point, there is no numerical oscillation in space, which often appears in conventional spline interpolation. A free boundary condition is used in the code. The possibility of a stationary shock will also be shown in the presentation, because the supersonic ablation flow across the magnetic field is impeded.
An extended UTD analysis for the scattering and diffraction from cubic polynomial strips
NASA Technical Reports Server (NTRS)
Constantinides, E. D.; Marhefka, R. J.
1993-01-01
Spline and polynomial type surfaces are commonly used in high frequency modeling of complex structures such as aircraft, ships, reflectors, etc. It is therefore of interest to develop an efficient and accurate solution to describe the scattered fields from such surfaces. An extended Uniform Geometrical Theory of Diffraction (UTD) solution for the scattering and diffraction from perfectly conducting cubic polynomial strips is derived and involves the incomplete Airy integrals as canonical functions. This new solution is universal in nature and can be used to effectively describe the scattered fields from flat, strictly concave or convex, and concave-convex boundaries containing edges. The classic UTD solution fails to describe the more complicated field behavior associated with higher order phase catastrophes, and therefore a new set of uniform reflection and first-order edge diffraction coefficients is derived. Also, an additional diffraction coefficient associated with a zero-curvature (inflection) point is presented. Higher order effects such as double edge diffraction, creeping waves, and whispering gallery modes are not examined. The extended UTD solution is independent of the scatterer size and also provides useful physical insight into the various scattering and diffraction processes. Its accuracy is confirmed via comparison with some reference moment method results.
ERIC Educational Resources Information Center
Webb, Stuart; Kagimoto, Eve
2011-01-01
This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates…
NASA Technical Reports Server (NTRS)
Zhang, Zhimin; Tomlinson, John; Martin, Clyde
1994-01-01
In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including the polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate properties of different spline approximations.
NASA Astrophysics Data System (ADS)
Lu, Shih-Yuan; Yen, Yi-Ming
2002-02-01
A first-passage scheme is devised to determine the overall rate constant of suspensions under the non-diffusion-limited condition. The original first-passage scheme developed for diffusion-limited processes is modified to account for the finite incorporation rate at the inclusion surface by using a concept of the nonzero survival probability of the diffusing entity at entity-inclusion encounters. This nonzero survival probability is obtained from solving a relevant boundary value problem. The new first-passage scheme is validated by an excellent agreement between overall rate constant results from the present development and from an accurate boundary collocation calculation for the three common spherical arrays [J. Chem. Phys. 109, 4985 (1998)], namely simple cubic, body-centered cubic, and face-centered cubic arrays, for a wide range of P and f. Here, P is a dimensionless quantity characterizing the relative rate of diffusion versus surface incorporation, and f is the volume fraction of the inclusion. The scheme is further applied to random spherical suspensions and to investigate the effect of inclusion coagulation on overall rate constants. It is found that randomness in inclusion arrangement tends to lower the overall rate constant for f up to the near close-packing value of the regular arrays because of the inclusion screening effect. This screening effect turns stronger for regular arrays when f is near and above the close-packing value of the regular arrays, and consequently the overall rate constant of the random array exceeds that of the regular array. Inclusion coagulation too induces the inclusion screening effect, and leads to lower overall rate constants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T
2015-01-01
Turbine discharges at low-head short converging intakes are difficult to measure accurately. The proximity of the measurement section to the intake entrance admits large uncertainties related to asymmetry of the velocity profile, swirl, and turbulence. Existing turbine performance codes [10, 24] do not address this special case, and published literature is largely silent on rigorous evaluation of uncertainties associated with this measurement context. The American Society of Mechanical Engineers (ASME) Committee investigated the use of acoustic transit time (ATT), acoustic scintillation (AS), and current meter (CM) methods in a short converging intake at the Kootenay Canal Generating Station in 2009. Based on their findings, a standardized uncertainty analysis (UA) framework for the velocity-area method (specifically for CM measurements) is presented in this paper, given the fact that CM is still the most fundamental and common type of measurement system. Typical sources of systematic and random errors associated with CM measurements are investigated, and the major sources of uncertainties associated with turbulence and velocity fluctuations, the numerical velocity integration technique (bi-cubic spline), and the number and placement of current meters are considered for evaluation. Since the velocity measurements in a short converging intake are associated with complex nonlinear and time-varying uncertainties (e.g., Reynolds stress in fluid dynamics), simply applying the law of propagation of uncertainty is known to overestimate the measurement variance while the Monte Carlo method does not. Therefore, a pseudo-Monte Carlo simulation method (random flow generation technique [8]), which was initially developed for the purpose of establishing upstream or initial conditions in Large-Eddy Simulation (LES) and Direct Numerical Simulation (DNS), is used to statistically determine uncertainties associated with turbulence and velocity fluctuations.
This technique is then combined with a bi-cubic spline interpolation method which converts point velocities into a continuous velocity distribution over the measurement domain. Subsequently, the number and placement of current meters are simulated to investigate the accuracy of the estimated flow rates using the numerical velocity-area integration method outlined in ISO 3354 [12]. The authors herein consider that statistics on generated flow rates processed with bi-cubic interpolation and sensor simulations are the combined uncertainties which already account for the effects of all three uncertainty sources. A preliminary analysis based on the current meter data obtained through an upgrade acceptance test of a single unit located in a mainstem plant is presented.
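The bi-cubic-spline velocity-area integration step can be sketched with SciPy: fit `RectBivariateSpline` to point velocities on a current-meter grid, then integrate the spline analytically over the section to obtain discharge. The grid, velocity profile and section dimensions below are invented for illustration:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# point velocities on a 5 x 5 current-meter grid over a 4 m x 4 m section:
# a separable parabolic profile, zero at the walls, 1 m/s at the center
y = np.linspace(0.0, 4.0, 5)
z = np.linspace(0.0, 4.0, 5)
Y, Z = np.meshgrid(y, z, indexing='ij')
v = (Y * (4.0 - Y)) * (Z * (4.0 - Z)) / 16.0     # m/s

# bi-cubic interpolating spline, then exact integration over the section
spl = RectBivariateSpline(y, z, v)               # kx = ky = 3 by default
Q = spl.integral(0.0, 4.0, 0.0, 4.0)             # discharge in m^3/s
print(round(Q, 3))
```

For this biquadratic profile the bi-cubic spline reproduces the field exactly, so Q matches the analytic value (32/3)^2/16 = 64/9 ≈ 7.111 m^3/s; with real, noisier point velocities the integration technique itself becomes one of the uncertainty sources the abstract discusses.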
The Use of Verb Noun Collocations in Writing Stories among Iranian EFL Learners
ERIC Educational Resources Information Center
Bazzaz, Fatemeh Ebrahimi; Samad, Arshad Abd
2011-01-01
An important aspect of native speakers' communicative competence is collocational competence which involves knowing which words usually come together and which do not. This paper investigates the possible relationship between knowledge of collocations and the use of verb noun collocation in writing stories because collocational knowledge…
Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers
ERIC Educational Resources Information Center
Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting
2016-01-01
The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…
Zhou, Shulan; Li, Zheng; Xie, Daiqian; Lin, Shi Ying; Guo, Hua
2009-05-14
A global potential-energy surface for the first excited electronic state of NH2 (Ã²A′) has been constructed by three-dimensional cubic spline interpolation of more than 20,000 ab initio points, which were calculated at the multireference configuration-interaction level with the Davidson correction using the augmented correlation-consistent polarized valence quadruple-zeta basis set. The (J=0) vibrational energy levels for the ground (X̃²A″) and excited (Ã²A′) electronic states of NH2 were calculated on our potential-energy surfaces with the diagonal Renner-Teller terms. The results show good agreement with the experimental vibrational frequencies of NH2 and its isotopomers.
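Tricubic interpolation of energies tabulated on a regular 3D grid can be sketched with SciPy's B-spline-based `ndimage.map_coordinates` (spline order 3). The analytic toy surface, grid ranges and function names below are stand-ins for the actual ab initio data:

```python
import numpy as np
from scipy import ndimage

# illustrative stand-in for a regular (r1, r2, theta) grid of energies;
# a real PES fit interpolates tens of thousands of ab initio points
r1 = np.linspace(1.5, 2.5, 21)
r2 = np.linspace(1.5, 2.5, 21)
th = np.linspace(1.0, 2.0, 21)
R1, R2, TH = np.meshgrid(r1, r2, th, indexing='ij')
E = 0.5 * (R1 - 1.9) ** 2 + 0.5 * (R2 - 1.9) ** 2 + 0.3 * (TH - 1.6) ** 2

def interp_energy(p1, p2, pth):
    """Tricubic B-spline evaluation at an off-grid geometry,
    expressed through fractional grid indices (order=3)."""
    idx = [(p1 - r1[0]) / (r1[1] - r1[0]),
           (p2 - r2[0]) / (r2[1] - r2[0]),
           (pth - th[0]) / (th[1] - th[0])]
    return ndimage.map_coordinates(E, np.array(idx).reshape(3, 1),
                                   order=3, mode='nearest')[0]

print(interp_energy(1.93, 1.87, 1.55))
```

Once the spline coefficients are prefiltered, each evaluation is local (a 4x4x4 stencil), which is what makes spline PES fits cheap enough for quantum dynamics calculations.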
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
Aircraft geometry verification with enhanced computer generated displays
NASA Technical Reports Server (NTRS)
Cozzolongo, J. V.
1982-01-01
A method for visual verification of aerodynamic geometries using computer generated, color shaded images is described. The mathematical models representing aircraft geometries are created for use in theoretical aerodynamic analyses and in computer aided manufacturing. The aerodynamic shapes are defined using parametric bi-cubic spline patches. This mathematical representation is then used as input to an algorithm that generates a color shaded image of the geometry. A discussion of the techniques used in the mathematical representation of the geometry and in the rendering of the color shaded display is presented. The results include examples of color shaded displays, which are contrasted with wire frame type displays. The examples also show the use of mapped surface pressures in terms of color shaded images of V/STOL fighter/attack aircraft and advanced turboprop aircraft.
ADMAP (automatic data manipulation program)
NASA Technical Reports Server (NTRS)
Mann, F. I.
1971-01-01
Instructions are presented on the use of ADMAP (Automatic Data Manipulation Program), an aerospace data manipulation computer program developed to aid in processing, reducing, plotting, and publishing electric propulsion trajectory data generated by the low-thrust optimization program HILTOP. The program has the option of generating SC4020 electric plots, and therefore requires the SC4020 routines to be available at execution time (even if not used). Several general routines are present, including a cubic spline interpolation routine, an electric plotter dash-line drawing routine, and single-parameter and double-parameter sorting routines. Many routines are tailored to the manipulation and plotting of electric propulsion data, including an automatic scale selection routine, an automatic curve labelling routine, and an automatic graph titling routine. Data are accepted from either punched cards or magnetic tape.
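ADMAP's cubic spline interpolation routine predates modern numerical libraries. As a rough illustration of what such a routine does, here is a minimal sketch using SciPy (an assumption of this example, not the original Fortran implementation) on hypothetical sampled trajectory data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sparse samples of a smooth trajectory quantity (hypothetical stand-in
# for HILTOP output at coarse time points).
t = np.linspace(0.0, 10.0, 11)
y = np.sin(t)

spline = CubicSpline(t, y)            # piecewise cubic, C2-continuous

# Interpolate on a finer grid, e.g. for plotting or tabulation.
t_fine = np.linspace(0.0, 10.0, 101)
y_fine = spline(t_fine)

# For smooth data, cubic interpolation is accurate between the knots.
err = np.max(np.abs(y_fine - np.sin(t_fine)))
```

The same pattern (fit once, evaluate anywhere) is what makes a general-purpose interpolation routine reusable across the plotting and sorting utilities the abstract describes.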
High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform
Chan, Kenny K. H.; Tang, Shuo
2010-01-01
The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth-dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by greater than 5 dB, concurrently with a 30-fold decrease in processing time compared to the fast Fourier transform with cubic spline interpolation method. NFFT can also improve local signal-to-noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to have the ability to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at camera-limited 100 frames per second on an ex vivo squid eye. PMID:21258551
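For context, the baseline pipeline the paper compares against (cubic spline resampling onto a uniform wavenumber grid, then an FFT) can be sketched as follows. The single-reflector interferogram and all numerical values are illustrative assumptions, not data from the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Simulated interferogram: one reflector at depth z0 gives a cosine
# fringe in wavenumber k. The detector samples uniformly in wavelength,
# hence non-uniformly in k = 2*pi/lambda.
lam = np.linspace(800e-9, 900e-9, 2048)     # wavelengths (m)
k = 2 * np.pi / lam                         # non-uniform k grid (decreasing)
z0 = 0.5e-3                                 # reflector depth (m)
fringe = np.cos(2 * k * z0)

# Baseline: cubic-spline resample onto a uniform k grid, then FFT.
k_uni = np.linspace(k.min(), k.max(), k.size)
fringe_uni = CubicSpline(k[::-1], fringe[::-1])(k_uni)  # reverse: k falls as lam rises

a_scan = np.abs(np.fft.rfft(fringe_uni))
dk = k_uni[1] - k_uni[0]
depth_axis = np.fft.rfftfreq(k_uni.size, d=dk) * np.pi  # cos(2*z*k): z = pi*f

z_peak = depth_axis[np.argmax(a_scan[1:]) + 1]          # skip the DC bin
```

The spline-resampling step is exactly what NFFT replaces; the interpolation error of this step grows with fringe density (depth), which is the origin of the sensitivity fall-off the paper mitigates.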
Corpus-Aided Business English Collocation Pedagogy: An Empirical Study in Chinese EFL Learners
ERIC Educational Resources Information Center
Chen, Lidan
2017-01-01
This paper reports an empirical study of the explicit instruction of corpus-aided Business English collocations and verifies its effectiveness in improving learners' collocation awareness and learner autonomy, resulting in a significant improvement in learners' collocation competence. An eight-week instruction in keywords' collocations,…
Perceptions on L2 Lexical Collocation Translation with a Focus on English-Arabic
ERIC Educational Resources Information Center
Alqaed, Mai Abdullah
2017-01-01
This paper aims to shed light on recent research concerning translating English-Arabic lexical collocations. It begins with a brief overview of English and Arabic lexical collocations with reference to specialized dictionaries. Research views on translating lexical collocations are presented, with the focus on English-Arabic collocations. These…
NASA Astrophysics Data System (ADS)
Wei, David Wei; Deegan, Anthony J.; Wang, Ruikang K.
2017-06-01
When using optical coherence tomography angiography (OCTA), the development of artifacts due to involuntary movements can severely compromise the visualization and subsequent quantitation of tissue microvasculatures. To correct such an occurrence, we propose a motion compensation method to eliminate artifacts from human skin OCTA by means of step-by-step rigid affine registration, rigid subpixel registration, and nonrigid B-spline registration. To accommodate this remedial process, OCTA is conducted using two matching all-depth volume scans. Affine transformation is first performed on the large vessels of the deep reticular dermis, and then the resulting affine parameters are applied to all-depth vasculatures with a further subpixel registration to refine the alignment between superficial smaller vessels. Finally, the coregistration of both volumes is carried out to result in the final artifact-free composite image via an algorithm based upon cubic B-spline free-form deformation. We demonstrate that the proposed method can provide a considerable improvement to the final en face OCTA images with substantial artifact removal. In addition, the correlation coefficients and peak signal-to-noise ratios of the corrected images are evaluated and compared with those of the original images, further validating the effectiveness of the proposed method. We expect that the proposed method can be useful in improving qualitative and quantitative assessment of the OCTA images of scanned tissue beds.
BasinVis 1.0: A MATLAB®-based program for sedimentary basin subsidence analysis and visualization
NASA Astrophysics Data System (ADS)
Lee, Eun Young; Novotny, Johannes; Wagreich, Michael
2016-06-01
Stratigraphic and structural mapping is important to understand the internal structure of sedimentary basins, and subsidence analysis provides significant insights into basin evolution. We designed a new software package to process and visualize the stratigraphic setting and subsidence evolution of sedimentary basins from well data. BasinVis 1.0 is implemented in MATLAB®, a multi-paradigm numerical computing environment, and employs two numerical methods: interpolation and subsidence analysis. Five interpolation methods (linear, natural, cubic spline, Kriging, and thin-plate spline) are provided for surface modeling. The subsidence analysis consists of decompaction and backstripping techniques. BasinVis 1.0 incorporates five main processing steps: (1) setup (study area and stratigraphic units), (2) loading well data, (3) stratigraphic setting visualization, (4) subsidence parameter input, and (5) subsidence analysis and visualization. For in-depth analysis, our software provides cross-section and dip-slip fault backstripping tools. The graphical user interface guides users through the workflow and provides tools to analyze and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options in MATLAB. We demonstrate all functions in a case study of Miocene sediments in the central Vienna Basin.
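The backstripping step mentioned above can be illustrated with the textbook one-layer Airy-isostatic formula. This is a hedged sketch in Python (BasinVis itself is MATLAB), and the densities are assumed typical values, not BasinVis defaults:

```python
# One-layer Airy backstripping (textbook form; eustatic sea-level
# correction omitted for brevity). A sketch, not the BasinVis code.
RHO_MANTLE = 3300.0   # kg/m^3, assumed typical value
RHO_WATER = 1000.0
RHO_SED = 2300.0      # decompacted bulk density of the column (assumed)

def tectonic_subsidence(S, water_depth=0.0):
    """Remove the sediment load from decompacted thickness S:
    Y = S * (rho_m - rho_s) / (rho_m - rho_w) + Wd."""
    return S * (RHO_MANTLE - RHO_SED) / (RHO_MANTLE - RHO_WATER) + water_depth

Y = tectonic_subsidence(1000.0)   # 1 km decompacted sediment, zero paleodepth
```

In a full workflow the decompaction step would first restore S for each stratigraphic unit at each time slice before applying this load removal.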
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
1993-01-01
A split spline screw type payload fastener assembly, including three identical male and female type split spline sections, is discussed. The male spline sections are formed on the head of a male type spline driver. Each of the split male type spline sections has an outwardly projecting load bearing segment including a convex upper surface which is adapted to engage a complementary concave surface of a female spline receptor in the form of a hollow bolt head. Additionally, the male spline section also includes a horizontal spline releasing segment and a spline tightening segment below each load bearing segment. The spline tightening segment consists of a vertical web of constant thickness. The web has at least one flat vertical wall surface which is designed to contact a generally flat vertically extending wall surface tab of the bolt head. Mutual interlocking and unlocking of the male and female splines results upon clockwise and counterclockwise turning of the driver element.
ERIC Educational Resources Information Center
Miyakoshi, Tomoko
2009-01-01
Although it is widely acknowledged that collocations play an important part in second language learning, especially at intermediate-advanced levels, learners' difficulties with collocations have not been investigated in much detail so far. The present study examines ESL learners' use of verb-noun collocations, such as "take notes," "place an…
Multivariate Spline Algorithms for CAGD
NASA Technical Reports Server (NTRS)
Boehm, W.
1985-01-01
Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can easily be subdivided into smaller simplices or boxes. The first fact leads to a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline is expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points when an additional knot is inserted.
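For univariate B-splines the Mansfield-like recursion mentioned above is the familiar Cox-de Boor recursion, which expresses a degree-p basis function through two degree-(p-1) ones. A direct (unoptimized) sketch:

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Evaluate the i-th B-spline basis function of degree p at x by
    the Cox-de Boor recursion (order p+1 expressed via order p)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, x)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# On a regular knot grid (the "box spline" setting in one variable),
# the cubic basis functions sum to one wherever all supports overlap.
knots = np.arange(12.0)
x = 5.3
total = sum(bspline_basis(i, 3, knots, x) for i in range(len(knots) - 4))
```

Repeating the recursion down to p = 0 reproduces the "piecewise constants" endpoint described in the abstract.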
Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G
2011-06-28
We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. 
With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
Noorwali, Essra A; Cade, Janet E; Burley, Victoria J; Hardie, Laura J
2018-04-27
There is increasing evidence to suggest an association between sleep and diet. The aim of the present study was to examine the association between sleep duration and fruit/vegetable (FV) intakes and their associated biomarkers in UK adults. Cross-sectional. Data from The National Diet and Nutrition Survey. 1612 adults aged 19-65 years were included, pregnant/breastfeeding women were excluded from the analyses. Sleep duration was assessed by self-report, and diet was assessed by 4-day food diaries, disaggregation of foods containing FV into their components was conducted to determine total FV intakes. Sleep duration was divided into: short (<7 hours/day), reference (7-8 hours/day) and long (>8 hours/day) sleep periods. Multiple regression adjusting for confounders was used for analyses where sleep duration was the exposure and FV intakes and their associated biomarkers were the outcomes. Restricted cubic spline models were developed to explore potential non-linear associations. In adjusted models, long sleepers (LS) consumed on average 28 (95% CI -50 to -6, p=0.01) g/day less of total FV compared to reference sleepers (RS), whereas short sleepers (SS) consumed 24 g/day less (95% CI -42 to -6, p=0.006) and had lower levels of FV biomarkers (total carotenoids, β-carotene and lycopene) compared to RS. Restricted cubic spline models showed that the association between sleep duration and FV intakes was non-linear (p<0.001) with RS having the highest intakes compared to SS and LS. The associations between sleep duration and plasma total carotenoids (p=0.0035), plasma vitamin C (p=0.009) and lycopene (p<0.001) were non-linear with RS having the highest levels. These findings show a link between sleep duration and FV consumption. This may have important implications for lifestyle and behavioural change policy.
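The restricted cubic splines used in such analyses constrain the fitted dose-response curve to be linear beyond the boundary knots, which keeps the tails well behaved. A minimal sketch of one common parameterization (Harrell's; an assumption, since the paper does not specify which was used), with illustrative knot placements:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline design columns [x, s_1(x), ..., s_{k-2}(x)]
    (Harrell's parameterization, assumed here): cubic between knots,
    linear beyond the boundary knots."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    pos = lambda u: np.maximum(u, 0.0) ** 3
    cols = [x]
    for j in range(k - 2):
        s = (pos(x - t[j])
             - pos(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
             + pos(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        cols.append(s)
    return np.column_stack(cols)

# Each nonlinear term is exactly linear beyond the last knot.
knots = np.array([4.0, 6.0, 7.0, 8.0, 10.0])  # e.g. sleep hours/day (assumed)
x = np.linspace(11.0, 14.0, 7)                # region beyond the last knot
B = rcs_basis(x, knots)
second_diff = np.diff(B[:, 1], 2)             # ~0 if the column is linear
```

Regressing an outcome on these columns (plus confounders) yields the non-linear association curves and the non-linearity tests reported in the abstract.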
Poon, Woei Bing; Fook-Chong, Stephanie M C; Ler, Grace Y L; Loh, Zhi Wen; Yeo, Cheo Lian
2014-06-01
Both gestation and birth weight have a significant impact on mortality and morbidity in newborn infants. Nomograms at birth allow classification of infants into small for gestational age (SGA) and large for gestational age (LGA) categories, for risk stratification and more intensive monitoring. To date, the growth charts for preterm newborn infants in Singapore are based on the Fenton growth charts, which are constructed by combining data from various Western growth cohorts. Hence, we aim to create Singapore nomograms for birth weight, length and head circumference at birth, which would reflect the norms and challenges faced by local infants. Growth parameters of all babies born or admitted to our unit from 2001 to 2012 were retrieved. Following exclusion of outliers, nomograms for the 10th, 50th, and 90th percentiles were generated for the gestational age (GA) range of 25 to 42 weeks using quantile regression (QR) combined with restricted cubic splines. Various polynomial models (second to third degree) were investigated for suitability of fit. The optimum QR model was found to be a third-degree polynomial with a single-knotted cubic spline at the mid-point of the GA range, 33.5 weeks. Goodness of fit was first checked by visual inspection, and then by verifying the correct proportions: 10% of all cases above the 90th percentile and 10% below the 10th percentile. Furthermore, an alternative formula-based method of nomogram construction, using the mean, standard deviation (SD) and an assumption of normality at each gestational age, was used for counterchecking. A total of 13,403 newborns were included in the analysis. The new infant-foetal growth charts for birth weight, heel-crown length and occipitofrontal circumference from 25 to 42 weeks gestation, with the 10th, 50th and 90th percentiles, were presented.
Nomograms for birth weight, length and head circumference at birth had significant impact on neonatal practice and validation of the Singapore birth nomograms against Fenton growth charts showed better sensitivity and comparable specificity, positive and negative predictive values.
Egeland, Grace M; Skurtveit, Svetlana; Sakshaug, Solveig; Daltveit, Anne Kjersti; Vikse, Bjørn E; Haugen, Margaretha
2017-09-01
Background: Low dietary calcium intake may be a risk factor for hypertension, but studies conflict. Objective: We evaluated the ability to predict hypertension within 10 y after delivery based on calcium intake during midpregnancy. Methods: The Norwegian Mother and Child Cohort Study of women delivering in 2004-2009 was linked to the Norwegian Prescription Database (2004-2013) to ascertain antihypertensive medication usage >90 d after delivery. Women with hypertension before pregnancy were excluded, leaving 60,027 mothers for analyses. Age and energy-adjusted cubic splines evaluated dose-response curves, and Cox proportional hazard analyses evaluated HR and 95% CIs by calcium quartiles adjusting for 7 covariates. Analyses were stratified by gestational hypertension and by sodium-to-potassium intake ratio (<0.76 compared with ≥0.76). Results: Participants had a mean ± SD age of 30.5 ± 4.6 y, a body mass index (in kg/m 2 ) of 24.0 ± 4.3 before pregnancy, and a mean follow-up duration of 7.1 ± 1.6 y. Cubic spline graphs identified a threshold effect of low calcium intake only within the range of dietary inadequacy related to increased risk. The lowest calcium quartile (≤738 mg/d; median: 588 mg/d), relative to the highest quartile (≥1254 mg/d), had an HR for hypertension of 1.34 (95% CI: 1.05, 1.70) among women who were normotensive during pregnancy, and an HR of 1.62 (95% CI: 1.14, 2.35) among women who had gestational hypertension, after adjusting for covariates. Women with gestational hypertension, who were in the lowest quartile of calcium intake, and who had a high sodium-to-potassium intake ratio had a risk of hypertension more than double that of their counterparts with a calcium intake in the highest quartile. Results were attenuated by adjusting for covariates (HR: 1.92; 95% CI: 1.09, 3.39). 
Conclusions: The results suggest that low dietary calcium intake may be a risk factor or risk marker for the development of hypertension, particularly for women with a history of gestational hypertension.
Are Nonadjacent Collocations Processed Faster?
ERIC Educational Resources Information Center
Vilkaite, Laura
2016-01-01
Numerous studies have shown processing advantages for collocations, but they only investigated processing of adjacent collocations (e.g., "provide information"). However, in naturally occurring language, nonadjacent collocations ("provide" some of the "information") are equally, if not more frequent. This raises the…
Manual for a workstation-based generic flight simulation program (LaRCsim), version 1.4
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce
1995-01-01
LaRCsim is a set of ANSI C routines that implement a full set of equations of motion for a rigid-body aircraft in atmospheric and low-earth orbital flight, suitable for pilot-in-the-loop simulations on a workstation-class computer. All six rigid-body degrees of freedom are modeled. The modules provided include calculations of the typical aircraft rigid-body simulation variables, earth geodesy, gravity and atmospheric models, and support several data recording options. Features/limitations of the current version include English units of measure; a 1962 atmosphere model in cubic spline function lookup form, ranging from sea level to 75,000 feet; and a rotating oblate spheroidal earth model, with aircraft C.G. coordinates in both geocentric and geodetic axes. Angular integrations are done using quaternion state variables. Vehicle X-Z symmetry is assumed.
Automated CFD Database Generation for a 2nd Generation Glide-Back-Booster
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Rogers, Stuart E.; Aftosmis, Michael J.; Pandya, Shishir A.; Ahmad, Jasim U.; Tejmil, Edward
2003-01-01
A new software tool, AeroDB, is used to compute thousands of Euler and Navier-Stokes solutions for a 2nd generation glide-back booster in one week. The solution process exploits a common job-submission grid environment using 13 computers located at 4 different geographical sites. Process automation and web-based access to the database greatly reduces the user workload, removing much of the tedium and tendency for user input errors. The database consists of forces, moments, and solution files obtained by varying the Mach number, angle of attack, and sideslip angle. The forces and moments compare well with experimental data. Stability derivatives are also computed using a monotone cubic spline procedure. Flow visualization and three-dimensional surface plots are used to interpret and characterize the nature of computed flow fields.
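A monotone cubic spline fit of the kind used for the stability derivatives can be sketched with SciPy's PCHIP interpolant, whose analytic derivative gives an overshoot-free slope estimate. The aerodynamic sample values below are hypothetical, not from the AeroDB database:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical pitching-moment coefficient vs. angle of attack (deg).
# PCHIP preserves monotonicity, so the derivative never oscillates
# between data points the way an unconstrained cubic fit can.
alpha = np.array([-4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
cm = np.array([0.08, 0.05, 0.02, -0.01, -0.05, -0.10])

cm_spline = PchipInterpolator(alpha, cm)
cm_alpha = cm_spline.derivative()(0.0)   # dCm/dalpha at alpha = 0
```

Differentiating a monotone fit rather than finite-differencing the raw force/moment table is the usual way to get smooth stability derivatives from a discrete CFD database.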
Modeling the stock price returns volatility using GARCH(1,1) in some Indonesia stock prices
NASA Astrophysics Data System (ADS)
Awalludin, S. A.; Ulfah, S.; Soro, S.
2018-01-01
In the financial field, volatility is one of the key variables for making appropriate decisions; moreover, modeling volatility is needed in derivative pricing, risk management, and portfolio management. For this reason, this study presents the widely used GARCH(1,1) volatility model for estimating the volatility of daily returns of Indonesian stock prices from July 2007 to September 2015. The returns are obtained from the stock price by differencing the log of the price from one day to the next. Parameters of the model were estimated by maximum likelihood estimation. After obtaining the volatility, a natural cubic spline was employed to study the behaviour of the volatility over the period. The result shows that GARCH(1,1) indicates evidence of volatility clustering in the returns of some Indonesian stock prices.
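The log-return construction and the GARCH(1,1) conditional-variance recursion can be sketched as follows. The prices and parameter values are illustrative assumptions (in the study the parameters come from maximum likelihood estimation):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                  # a common initialization choice
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Log returns from prices: r[t] = log(P[t]) - log(P[t-1]).
prices = np.array([100.0, 101.5, 100.8, 102.3, 101.1])
returns = np.diff(np.log(prices))
sigma2 = garch11_variance(returns, omega=1e-6, alpha=0.1, beta=0.85)
```

Because alpha + beta < 1 here, the recursion is covariance-stationary; large squared returns raise the next day's variance, which is exactly the clustering behaviour the abstract reports.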
Empirical wind model for the middle and lower atmosphere. Part 2: Local time variations
NASA Technical Reports Server (NTRS)
Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Clark, R. R.; Franke, S. J.; Fraser, G. J.; Tsuda, T.; Vial, F.
1993-01-01
The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Local time variations in the mesosphere are derived from rocket soundings, incoherent scatter radar, MF radar, and meteor radar. Low-order spherical harmonics and Fourier series are used to describe these variations as a function of latitude and day of year with cubic spline interpolation in altitude. The model represents a smoothed compromise between the original data sources. Although agreement between various data sources is generally good, some systematic differences are noted. Overall root mean square differences between measured and model tidal components are on the order of 5 to 10 m/s.
Spline approximation, Part 1: Basic methodology
NASA Astrophysics Data System (ADS)
Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar
2018-04-01
In engineering geodesy, point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis, a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem, they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation, and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles, spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
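A spline constructed from truncated polynomials, as treated in Part 1, amounts to least-squares fitting with a truncated power basis: an ordinary polynomial plus one truncated term per interior knot. A short sketch on synthetic noisy curve data (the knot placement and noise level are illustrative assumptions):

```python
import numpy as np

def truncated_power_design(x, knots, degree=3):
    """Design matrix [1, x, ..., x^p, (x-k1)_+^p, ...]: a global
    polynomial plus one truncated polynomial per interior knot."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.maximum(x - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

# Least-squares spline approximation of a noisy 2D curve.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 200)
y_true = np.sin(2.0 * x)
y = y_true + rng.normal(0.0, 0.05, x.size)

A = truncated_power_design(x, knots=[1.0, 2.0, 3.0])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_fit = A @ coef
rms = np.sqrt(np.mean((y_fit - y_true) ** 2))
```

This basis is simple but can be ill-conditioned for many knots, which is precisely the numerical-stability question the series defers to Part 3 and the motivation for the B-spline basis of Part 2.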
Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea
NASA Astrophysics Data System (ADS)
Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan
2016-04-01
Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at the finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over the Aegean Sea between 1979 and 2010, with a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. The nine statistical interpolation techniques are: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D fast Fourier transform interpolation, 2nd- and 3rd-degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data is downsampled to 6 hours (i.e. wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), the 6-hourly data is then temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data is compared with the temporally downscaled data.
A penalty point system based on the coefficient of variation of root mean square error, normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e. reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance in temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to the land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
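The downsample-interpolate-compare procedure can be sketched for two of the nine techniques. The synthetic "wind" series below is an illustrative assumption, not the NCEP data:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Synthetic hourly wind speed with diurnal structure (a stand-in for
# the reanalysis series).
hours = np.arange(240.0)
wind = (8.0 + 2.5 * np.sin(2 * np.pi * hours / 24.0)
            + 0.5 * np.sin(2 * np.pi * hours / 48.0))

# Downsample to 6-hourly: keep the 0th, 6th, 12th, 18th hours.
h6, w6 = hours[::6], wind[::6]

# Rebuild the hourly series with two techniques and score each against
# the withheld original by root mean square error.
mask = hours <= h6[-1]
rmse = {}
for name, f in [("cubic spline", CubicSpline(h6, w6)),
                ("pchip", PchipInterpolator(h6, w6))]:
    rmse[name] = np.sqrt(np.mean((f(hours[mask]) - wind[mask]) ** 2))
```

The full study applies the same loop over all nine techniques and folds the resulting error measures into the penalty point ranking.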
Collocations: A Neglected Variable in EFL.
ERIC Educational Resources Information Center
Farghal, Mohammed; Obiedat, Hussein
1995-01-01
Addresses the issue of collocations as an important and neglected variable in English-as-a-Foreign-Language classes. Two questionnaires, in English and Arabic, involving common collocations relating to food, color, and weather were administered to English majors and English language teachers. Results show both groups deficient in collocations. (36…
Code of Federal Regulations, 2010 CFR
2010-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
Examining Second Language Receptive Knowledge of Collocation and Factors That Affect Learning
ERIC Educational Resources Information Center
Nguyen, Thi My Hang; Webb, Stuart
2017-01-01
This study investigated Vietnamese EFL learners' knowledge of verb-noun and adjective-noun collocations at the first three 1,000 word frequency levels, and the extent to which five factors (node word frequency, collocation frequency, mutual information score, congruency, and part of speech) predicted receptive knowledge of collocation. Knowledge…
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at tn is known and that at t(sub n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t(sub n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
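The equivalence described above can be made concrete with a small numerical sketch. Assuming the standard two-stage Radau IIA Butcher tableau (collocation at the right-Radau points c = 1/3, 1), the fragment below applies one step of the method to the linear test equation y' = λy, for which the implicit stage equations can be solved exactly with a linear solve:

```python
import numpy as np

# Two-stage Radau IIA tableau (collocation at the right-Radau points c = 1/3, 1).
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])

def radau_iia_step(lam, y0, h):
    """One implicit step for y' = lam*y; the stage system is linear, so solve it directly."""
    # Stage equations k_i = lam*(y0 + h*sum_j A_ij*k_j)  =>  (I - h*lam*A) k = lam*y0*1
    k = np.linalg.solve(np.eye(2) - h*lam*A, lam*y0*np.ones(2))
    return y0 + h * (b @ k)

# Integrate y' = -y from y(0) = 1 to t = 1 and compare against exp(-1).
lam, y, h, n = -1.0, 1.0, 0.01, 100
for _ in range(n):
    y = radau_iia_step(lam, y, h)
err = abs(y - np.exp(-1.0))   # third-order accurate method, so err is tiny here
```

Because the two-stage Radau IIA scheme is the collocation method at the right-Radau points, this same step is what the CG and generalized DG formulations in the paper reduce to for this quadrature choice.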
Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring
NASA Astrophysics Data System (ADS)
Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin
2015-08-01
In research on optical synthetic aperture imaging systems, phase congruency is the central problem, and the sub-aperture phase must be detected. The edge of the sub-aperture system is more complex than in a traditional optical imaging system. Because large-aperture optical components can have steep slopes, the interference fringes may be quite dense in interferometric imaging, and deep phase gradients may cause a loss of phase information. An efficient edge detection method is therefore needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale nature, edge regions are detected with high precision at small scales, while noise is progressively suppressed as the scale increases, so the transform has a certain noise-suppression effect. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained and a cubic B-spline wavelet is adopted as the smoothing function. After a multi-scale wavelet decomposition of the whole image, we locate the local modulus maxima along the gradient directions. Because these maxima still contain noise, the adaptive threshold method is used to select among them: points exceeding the threshold are taken as boundary points. Finally, erosion and dilation are applied to the resulting image to obtain a continuous image boundary.
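As an illustration of the thresholding idea only (not the authors' implementation), the sketch below smooths an image with a separable cubic B-spline kernel, computes the gradient modulus, and applies a per-block adaptive threshold; the block size and the factor k are hypothetical parameters:

```python
import numpy as np

def edges_adaptive(img, block=16, k=1.5):
    """Sketch: B-spline smoothing + gradient modulus + per-block adaptive threshold."""
    # Separable cubic B-spline smoothing kernel [1, 4, 6, 4, 1] / 16.
    w = np.array([1, 4, 6, 4, 1], float) / 16
    pad = np.pad(img.astype(float), 2, mode='edge')
    sm = np.apply_along_axis(lambda r: np.convolve(r, w, 'valid'), 1, pad)
    sm = np.apply_along_axis(lambda c: np.convolve(c, w, 'valid'), 0, sm)
    # Gradient modulus via central differences.
    gy, gx = np.gradient(sm)
    mod = np.hypot(gx, gy)
    # Adaptive threshold: each block is judged against its own mean + k*std.
    out = np.zeros_like(mod, dtype=bool)
    for i in range(0, mod.shape[0], block):
        for j in range(0, mod.shape[1], block):
            blk = mod[i:i+block, j:j+block]
            out[i:i+block, j:j+block] = blk > blk.mean() + k * blk.std()
    return out

# A bright square on a dark background: edge points appear only near the boundary.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
e = edges_adaptive(img)
```

In flat regions the block statistics leave nothing above threshold, so only genuine gradient maxima survive, mirroring the noise-rejection role the abstract assigns to adaptive thresholding.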
NASA Technical Reports Server (NTRS)
Korte, J. J.; Auslender, A. H.
1993-01-01
A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a non-linear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat half-height ratio of 150:1. An automated numerical search of multiple geometric wall contours, which are defined by polynomial splines, results in an optimal geometry that yields the maximum total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the assumption that the gas is either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design that is defined by two cubic splines and yields a mass-weighted total-pressure recovery of 0.787, which is a 23% improvement compared with the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains the optimized contour for a viscous calorically perfect gas to yield a mass-weighted total-pressure recovery value of 0.749. Additionally, an optimized contour for a viscous thermally perfect gas is obtained to yield a mass-weighted total-pressure recovery value of 0.768. The design methodology incorporates both complex fluid dynamic physics and optimal search techniques without an excessive compromise of computational speed; hence, this methodology is a practical technique that is applicable to optimal inlet design procedures.
Racism in the form of micro aggressions and the risk of preterm birth among Black women
Slaughter-Acey, Jaime C.; Sealy-Jefferson, Shawnita; Helmkamp, Laura; Caldwell, Cleopatra H; Osypuk, Theresa L.; Platt, Robert W.; Straughen, Jennifer K.; Dailey-Okezie, Rhonda K.; Abeysekara, Purni; Misra, Dawn P.
2015-01-01
Purpose This study sought to examine whether perceived interpersonal racism in the form of racial micro aggressions was associated with preterm birth (PTB) and whether the presence of depressive symptoms and perceived stress modified the association. Methods Data stem from a cohort of 1410 Black women residing in Metropolitan Detroit, Michigan enrolled into the Life-course Influences on Fetal Environments (LIFE) Study. The Daily Life Experiences of Racism and Bother (DLE-B) scale measured the frequency and perceived stressfulness of racial micro aggressions experienced during the past year. Severe past-week depressive symptomatology was measured by the Centers for Epidemiologic Studies-Depression scale (CES-D) dichotomized at ≥23. Restricted cubic splines were used to model non-linearity between perceived racism and PTB. We used the Perceived Stress Scale (PSS) to assess general stress perceptions. Results Stratified spline regression analysis demonstrated that among those with severe depressive symptoms, perceived racism was not associated with PTB. However, perceived racism was significantly associated with PTB among women with mild to moderate (CES-D score ≤22) depressive symptoms. Perceived racism was not associated with PTB among women with or without high amounts of perceived stress. Conclusions Our findings suggest that racism, at least in the form of racial micro aggressions, may not further impact a group already at high risk for PTB (those with severe depressive symptoms), but may increase the risk of PTB for women at lower baseline risk. PMID:26549132
[Research on Kalman interpolation prediction model based on micro-region PM2.5 concentration].
Wang, Wei; Zheng, Bin; Chen, Binlin; An, Yaoming; Jiang, Xiaoming; Li, Zhangyong
2018-02-01
In recent years, the pollution problem of particulate matter, especially PM2.5, has become more and more serious and has attracted worldwide attention. In this paper, a Kalman prediction model combined with cubic spline interpolation is proposed and applied to predict the concentration of PM2.5 in the micro-regional environment of a campus, to produce an interpolated map of PM2.5 concentration, and to simulate its spatial distribution. The experimental data come from the environmental information monitoring system set up by our laboratory. The predicted and actual PM2.5 concentrations were compared using the Wilcoxon signed-rank test: the two-sided asymptotic significance probability was 0.527, much greater than the significance level α = 0.05. The mean absolute error (MAE) of the Kalman prediction model was 1.8 μg/m³, the mean relative error (MRE) was 6%, and the correlation coefficient R was 0.87. Thus, the Kalman prediction model predicts PM2.5 concentration better than back propagation (BP) and support vector machine (SVM) predictions. In addition, by combining the Kalman prediction model with the spline interpolation method, the spatial distribution and local pollution characteristics of PM2.5 can be simulated.
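A minimal sketch of the two ingredients, assuming a scalar random-walk Kalman filter and SciPy's CubicSpline; the sensor positions and readings below are made-up illustrative values, not the study's data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def kalman_1d(z, q=0.5, r=4.0):
    """Scalar random-walk Kalman filter (process variance q, measurement variance r)."""
    x, p, out = z[0], 1.0, []
    for zk in z:
        p += q                  # predict: state unchanged, uncertainty grows
        g = p / (p + r)         # Kalman gain
        x += g * (zk - x)       # update with measurement zk
        p *= (1 - g)
        out.append(x)
    return np.array(out)

# Hourly PM2.5 readings (illustrative, ug/m^3) filtered in time, then a cubic
# spline interpolates concentrations in space along a transect of sensors.
readings = np.array([35, 38, 44, 41, 39, 47, 52, 50], float)
filtered = kalman_1d(readings)

x_sites = np.array([0.0, 1.0, 2.5, 4.0])       # sensor positions (km, illustrative)
c_sites = np.array([30.0, 42.0, 38.0, 45.0])   # concentrations at the sensors
spline = CubicSpline(x_sites, c_sites)
dense = spline(np.linspace(0, 4, 81))          # smooth concentration field
```

Each Kalman update is a convex combination of prediction and measurement, so the filtered series stays within the range of the observations while damping measurement noise.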
English Learners' Knowledge of Prepositions: Collocational Knowledge or Knowledge Based on Meaning?
ERIC Educational Resources Information Center
Mueller, Charles M.
2011-01-01
Second language (L2) learners' successful performance in an L2 can be partly attributed to their knowledge of collocations. In some cases, this knowledge is accompanied by knowledge of the semantic and/or grammatical patterns that motivate the collocation. At other times, collocational knowledge may serve a compensatory role. To determine the…
Code of Federal Regulations, 2010 CFR
2010-10-01
... elements include, but are not limited to: (1) Physical collocation and virtual collocation at the premises... seeking a particular collocation arrangement, either physical or virtual, is entitled to a presumption... incumbent LEC shall be required to provide virtual collocation, except at points where the incumbent LEC...
ERIC Educational Resources Information Center
Wolter, Brent; Gyllstad, Henrik
2013-01-01
This study investigated the influence of frequency effects on the processing of congruent (i.e., having an equivalent first language [L1] construction) collocations and incongruent (i.e., not having an equivalent L1 construction) collocations in a second language (L2). An acceptability judgment task was administered to native and advanced…
Corpus-Based versus Traditional Learning of Collocations
ERIC Educational Resources Information Center
Daskalovska, Nina
2015-01-01
One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…
ERIC Educational Resources Information Center
Gablasova, Dana; Brezina, Vaclav; McEnery, Tony
2017-01-01
This article focuses on the use of collocations in language learning research (LLR). Collocations, as units of formulaic language, are becoming prominent in our understanding of language learning and use; however, while the number of corpus-based LLR studies of collocations is growing, there is still a need for a deeper understanding of factors…
"Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li+-benzene
NASA Astrophysics Data System (ADS)
D'Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T.
2015-08-01
Quantum and anharmonic effects are investigated in (H2)2-Li+-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li+-benzene complex increases the ZPE of the system by 5.6 kJ mol-1 to 17.6 kJ mol-1. This ZPE is 42% of the total electronic binding energy of (H2)2-Li+-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li+-benzene is 7.7 kJ mol-1, compared to 12.4 kJ mol-1 for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li+ ion and are more confined in the θ coordinate than in H2-Li+-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li+-benzene PESs are developed. These use a modified Shepard interpolation for the Li+-benzene and H2-Li+-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li+ terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol-1. Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. 
This suggests that the 1.5 kJ mol-1 error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H2-H2 interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.
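The fragment-with-spline idea, tabulating a weak pair interaction on a grid and interpolating it with a cubic spline, can be sketched as follows. The Lennard-Jones form and its parameters here are stand-ins for illustration, not the paper's H2-H2 potential:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Stand-in for a weak pair interaction: Lennard-Jones with illustrative parameters.
eps, sigma = 0.1, 3.0   # kJ/mol, Angstrom (made-up values)

def lj(r):
    return 4 * eps * ((sigma / r)**12 - (sigma / r)**6)

# Tabulate the potential at a modest set of separations and spline it, as one
# might when replacing an expensive interpolation of a weak fragment interaction.
r_grid = np.linspace(2.8, 8.0, 30)
spl = CubicSpline(r_grid, lj(r_grid))

# The spline reproduces the smooth potential well between grid points.
r_test = np.linspace(3.0, 7.5, 200)
max_err = np.max(np.abs(spl(r_test) - lj(r_test)))
```

Once tabulated, the spline is evaluated at negligible cost inside a Monte Carlo loop, which is the economy the abstract points to for the fragment-with-spline PES.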
"Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li(+)-benzene.
D'Arcy, Jordan H; Kolmann, Stephen J; Jordan, Meredith J T
2015-08-21
Quantum and anharmonic effects are investigated in (H2)2-Li(+)-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li(+)-benzene complex increases the ZPE of the system by 5.6 kJ mol(-1) to 17.6 kJ mol(-1). This ZPE is 42% of the total electronic binding energy of (H2)2-Li(+)-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li(+)-benzene is 7.7 kJ mol(-1), compared to 12.4 kJ mol(-1) for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li(+) ion and are more confined in the θ coordinate than in H2-Li(+)-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li(+)-benzene PESs are developed. These use a modified Shepard interpolation for the Li(+)-benzene and H2-Li(+)-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li(+) terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol(-1). Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. 
This suggests that the 1.5 kJ mol(-1) error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H2-H2 interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.
Geometric and computer-aided spline hob modeling
NASA Astrophysics Data System (ADS)
Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, YU A.
2018-03-01
The paper considers the construction of a geometric model of a spline hob. The objective of the research is the development of a mathematical model of a spline hob for spline-shaft machining. The structure of the spline hob is described taking into consideration the machine-tool-system motion parameters that position and orient the cutting edge. A computer-aided study is performed using CAD and 3D modeling methods. Vector representation of the cutting-edge geometry is adopted as the principal method for developing the spline hob mathematical model. The paper derives correlations, described by parametric vector functions, representing helical cutting edges designed for spline-shaft machining, with allowance for helical movement in two dimensions. An application for generating the 3D model of the spline hob is developed in AutoLISP for the AutoCAD environment, and the resulting model can be used for milling-process simulation. An example of the evaluation, analytical representation, and computer modeling of the proposed geometric model is reviewed: key spline hob parameters are calculated to ensure the capability of hobbing a spline shaft of standard design. Polygonal and solid 3D models of the spline hob are obtained through simulation-based computer modeling.
Microsoft C#.NET program and electromagnetic depth sounding for large loop source
NASA Astrophysics Data System (ADS)
Prabhakar Rao, K.; Ashok Babu, G.
2009-07-01
A program, in the C# (C Sharp) language with the Microsoft .NET Framework, is developed to compute the normalized vertical magnetic field of a horizontal rectangular loop source placed on the surface of an n-layered earth. The field can be calculated either inside or outside the loop. Five C# classes, each with its own member functions, are designed to compute the kernel, the Hankel transform integral, the coefficients for cubic spline interpolation between computed values, and the normalized vertical magnetic field. The program computes the vertical magnetic field in the frequency domain using integral expressions evaluated by a combination of straightforward numerical integration and the digital filter technique. The code uses various object-oriented programming (OOP) features and finally computes the amplitude and phase of the normalized vertical magnetic field. Computed results are presented for geometric and parametric soundings. The code is developed in Microsoft Visual Studio .NET 2003 and uses various system class libraries.
Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin
2011-12-13
The growing string method is a powerful tool in the systematic study of chemical reactions with theoretical methods which allows for the rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures which are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant speedup in terms of computational cost is achieved (30-50%).
Fine-granularity inference and estimations to network traffic for SDN.
Jiang, Dingde; Huo, Liuwei; Li, Ya
2018-01-01
An end-to-end network traffic matrix is significantly helpful for network management and for Software-Defined Networks (SDN). However, inferring and estimating the end-to-end traffic matrix is a challenging problem, and attaining the traffic matrix of high-speed networks for SDN is prohibitively difficult. This paper investigates how to estimate and recover the end-to-end network traffic matrix at fine time granularity from sampled traffic traces, which is a hard inverse problem. In contrast to previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic, and cubic spline interpolation is then used to obtain smooth reconstruction values. To attain an accurate end-to-end traffic matrix at fine time granularity, we take a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective.
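A toy sketch of the combination step: two fine-granularity reconstructions of a coarsely sampled traffic series are merged with a weighted geometric average. A cubic spline provides one reconstruction; linear interpolation stands in for the paper's fractal interpolation, and the weight w is illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Coarse (sampled) traffic volumes and the fine-granularity instants to recover.
t_coarse = np.arange(0, 60, 10)                  # one sample per 10 s
vol = np.array([120., 150., 130., 170., 160., 140.])
t_fine = np.arange(0, 51)                        # 1 s granularity

# Two candidate reconstructions (linear interpolation is a stand-in for the
# fractal interpolation used in the paper).
rec_spline = CubicSpline(t_coarse, vol)(t_fine)
rec_linear = np.interp(t_fine, t_coarse, vol)

# Weighted geometric average of the two positive reconstructions, w in [0, 1].
w = 0.6
rec = rec_spline**w * rec_linear**(1 - w)
```

The geometric average of two positive series always lies between them pointwise, and both reconstructions agree with the measured values at the sample instants, so the merged series honors the coarse measurements exactly.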
Fast and accurate Voronoi density gridding from Lagrangian hydrodynamics data
NASA Astrophysics Data System (ADS)
Petkova, Maya A.; Laibe, Guillaume; Bonnell, Ian A.
2018-01-01
Voronoi grids have been successfully used to represent density structures of gas in astronomical hydrodynamics simulations. While some codes are explicitly built around using a Voronoi grid, others, such as Smoothed Particle Hydrodynamics (SPH), use particle-based representations and can benefit from constructing a Voronoi grid for post-processing their output. So far, calculating the density of each Voronoi cell from SPH data has been done numerically, which is both slow and potentially inaccurate. This paper proposes an alternative analytic method, which is fast and accurate. We derive an expression for the integral of a cubic spline kernel over the volume of a Voronoi cell and link it to the density of the cell. Mass conservation is ensured rigorously by the procedure. The method can be applied more broadly to integrate a spherically symmetric polynomial function over the volume of a random polyhedron.
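The kernel in question is the standard cubic spline (M4) SPH kernel with compact support of radius 2h. A short numerical check that it integrates to one over its support (using simple trapezoidal quadrature here, rather than the paper's analytic integral over a Voronoi cell):

```python
import numpy as np

def w_cubic(q, h):
    """Standard 3D cubic spline (M4) SPH kernel; q = r/h, support radius 2h."""
    sigma = 1.0 / (np.pi * h**3)
    q = np.asarray(q, float)
    out = np.zeros_like(q)
    m1 = q < 1.0
    m2 = (q >= 1.0) & (q < 2.0)
    out[m1] = 1.0 - 1.5*q[m1]**2 + 0.75*q[m1]**3
    out[m2] = 0.25 * (2.0 - q[m2])**3
    return sigma * out

# Normalization: int W d^3r = 4*pi * int_0^2 W(q) (q*h)^2 h dq = 1.
h = 1.0
q = np.linspace(0, 2, 20001)
integrand = 4*np.pi * w_cubic(q, h) * (q*h)**2 * h
total = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(q))
```

This unit normalization is what makes the analytic cell integrals in the paper translate directly into rigorously conserved cell masses.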
Conway, Sadie H.; Pompeii, Lisa A.; Roberts, Robert E.; Follis, Jack L.; Gimeno, David
2015-01-01
Objectives To examine the presence of a dose-response relationship between work hours and incident cardiovascular disease (CVD) in a representative sample of U.S. workers. Methods Retrospective cohort study of 1,926 individuals from the Panel Study of Income Dynamics (1986–2011) employed for at least 10 years. Restricted cubic spline regression was used to estimate the dose-response relationship of work hours with CVD. Results A dose-response relationship was observed in which an average workweek of 46 hours or more for at least 10 years was associated with increased risk of CVD. Compared to working 45 hours per week, working an additional 10 hours per week or more for at least 10 years increased CVD risk by at least 16%. Conclusions Working more than 45 work hours per week for at least 10 years may be an independent risk factor for CVD. PMID:26949870
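Restricted cubic splines, as used in this dose-response analysis, constrain the fitted curve to be linear beyond the boundary knots. A sketch of the basis construction following Harrell's parameterization; the knot placement below is illustrative, not the study's:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline design columns (Harrell's parameterization):
    cubic between knots, linear tails beyond the boundary knots; k knots
    give k-1 columns (intercept excluded)."""
    x = np.asarray(x, float)
    t = np.asarray(knots, float)
    k = len(t)
    d = (t[-1] - t[0])**2            # scale factor keeps columns comparable
    pos = lambda u: np.maximum(u, 0.0)**3
    cols = [x]                       # the linear term
    for j in range(k - 2):
        c = (pos(x - t[j])
             - pos(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
             + pos(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(c / d)
    return np.column_stack(cols)

# Example: average weekly work hours with illustrative knot locations.
hours = np.linspace(20, 80, 121)
X = rcs_basis(hours, knots=[35, 45, 55, 65])
```

Regressing an outcome on these columns (e.g. in a logistic model) yields the smooth, tail-linear dose-response curve the abstract describes; the cubic terms cancel exactly beyond the last knot by construction.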
Influence of compressibility on the Lagrangian statistics of vorticity-strain-rate interactions.
Danish, Mohammad; Sinha, Sawan Suman; Srinivasan, Balaji
2016-07-01
The objective of this study is to investigate the influence of compressibility on the Lagrangian statistics of vorticity and strain-rate interactions. The Lagrangian statistics are extracted from "almost" time-continuous data sets of direct numerical simulations of compressible decaying isotropic turbulence by employing a cubic-spline-based Lagrangian particle tracker. We study the influence of compressibility on the Lagrangian statistics of alignment in terms of three compressibility parameters: turbulent Mach number, normalized dilatation rate, and flow topology. In comparison to incompressible turbulence, we observe that the presence of compressibility in a flow field weakens the tendency of vorticity to align with the largest strain-rate eigenvector. Based on the Lagrangian statistics of alignment conditioned on dilatation and topology, we find that this weakened alignment tendency in compressible turbulence is due to a special group of fluid particles that have an initially negligible dilatation rate and are associated with stable-focus-stretching topology.
NASA Technical Reports Server (NTRS)
Padavala, Satyasrinivas; Palazzolo, Alan B.; Vallely, Pat; Ryan, Steve
1994-01-01
An improved dynamic analysis for liquid annular seals with arbitrary profile, based on a method first proposed by Nelson and Nguyen, is presented. An improved first-order solution is given that incorporates a continuous interpolation of perturbed quantities in the circumferential direction. The original method uses an approximation scheme for circumferential gradients based on Fast Fourier Transforms (FFT); a simpler scheme based on cubic splines is found to be computationally more efficient, with better convergence at higher eccentricities. A new approach for computing dynamic coefficients based on an externally specified load is introduced. The improved analysis is extended to account for seal profiles that vary arbitrarily in both the axial and circumferential directions. An example case of an elliptical seal with varying degrees of axial curvature is analyzed, along with a case study based on the actual operating clearances of an interstage seal of the Space Shuttle Main Engine High-Pressure Oxygen Turbopump.
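The cubic-spline alternative to FFT-based circumferential gradients can be sketched with a periodic spline; the sampled field below is an illustrative stand-in for a perturbed seal quantity:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A perturbed quantity sampled at n circumferential stations (illustrative field).
n = 32
theta = np.linspace(0, 2*np.pi, n + 1)   # include the endpoint for periodicity
p = np.sin(theta)
p[-1] = p[0]                             # enforce exact periodicity for the spline

# Periodic cubic spline: smooth circumferential gradients without FFTs.
spl = CubicSpline(theta, p, bc_type='periodic')
dp = spl.derivative()(theta)             # approximates cos(theta)
```

The periodic boundary condition closes the spline around the annulus, so the gradient is continuous across theta = 0, which is the property the FFT scheme provides and the spline scheme must preserve.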
NASA Astrophysics Data System (ADS)
Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang
2018-03-01
A blade is one of the most important components of an aircraft engine, and because of its high manufacturing cost it is indispensable to develop methods for repairing damaged blades. To obtain a surface model of the blades, this paper proposes a modeling method that uses speckle patterns in a virtual stereo vision system. First, the blades are sprayed evenly to create random speckle patterns, and point clouds of the blade surfaces are calculated from these patterns with the virtual stereo vision system. Second, boundary points are extracted with step lengths that vary according to curvature and are fitted with a cubic B-spline curve to obtain a blade surface envelope. Finally, the surface model of the blade is established from the envelope curves and the point clouds. Experimental results show that the resulting surface model of aircraft engine blades is fair and accurate.
Trajectory generation for an on-road autonomous vehicle
NASA Astrophysics Data System (ADS)
Horst, John; Barbera, Anthony
2006-05-01
We describe an algorithm that generates a smooth trajectory (position, velocity, and acceleration at uniformly sampled instants of time) for a car-like vehicle autonomously navigating within the constraints of lanes in a road. The technique models both vehicle paths and lane segments as straight line segments and circular arcs for mathematical simplicity and elegance, which we contrast with cubic spline approaches. We develop the path in an idealized space, warp the path into real space and compute path length, generate a one-dimensional trajectory along the path length that achieves target speeds and positions, and finally, warp, translate, and rotate the one-dimensional trajectory points onto the path in real space. The algorithm moves a vehicle in lane safely and efficiently within speed and acceleration maximums. The algorithm functions in the context of other autonomous driving functions within a carefully designed vehicle control hierarchy.
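The one-dimensional trajectory along path length described above is essentially a speed profile under velocity and acceleration caps. A sketch of such a profile (trapezoidal accelerate-cruise-brake shape; all parameters are illustrative, not the paper's):

```python
import numpy as np

def trapezoid_profile(s_total, v_max, a_max, dt=0.02):
    """1-D trajectory along path length: accelerate, cruise, decelerate.
    Returns sampled (s, v) obeying the speed and acceleration limits."""
    d_acc = v_max**2 / (2*a_max)            # distance to reach (or brake from) v_max
    if 2*d_acc > s_total:                   # triangular profile: v_max never reached
        v_peak = np.sqrt(a_max * s_total)
    else:
        v_peak = v_max
    s, v, S, V = 0.0, 0.0, [0.0], [0.0]
    while s < s_total:
        d_remaining = s_total - s
        v_brake = np.sqrt(2*a_max*d_remaining)   # fastest speed we can still stop from
        v = min(v + a_max*dt, v_peak, v_brake)   # respect accel, cruise, brake limits
        s += v*dt
        S.append(s)
        V.append(v)
    return np.array(S), np.array(V)

S, V = trapezoid_profile(s_total=50.0, v_max=10.0, a_max=2.0)
```

In the algorithm of the paper, samples like these along path length would then be warped, translated, and rotated onto the geometric path in real space.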
On the dynamics of jellyfish locomotion via 3D particle tracking velocimetry
NASA Astrophysics Data System (ADS)
Piper, Matthew; Kim, Jin-Tae; Chamorro, Leonardo P.
2016-11-01
The dynamics of jellyfish (Aurelia aurita) locomotion is experimentally studied via 3D particle tracking velocimetry. 3D locations of the bell tip are tracked over 1.5 cycles to describe the jellyfish path. Multiple positions of the jellyfish bell margin are initially tracked in 2D from four independent planes and individually projected in 3D based on the jellyfish path and geometrical properties of the setup. A cubic spline interpolation and the exponentially weighted moving average are used to estimate derived quantities, including velocity and acceleration of the jellyfish locomotion. We will discuss distinctive features of the jellyfish 3D motion at various swimming phases, and will provide insight on the 3D contraction and relaxation in terms of the locomotion, the steadiness of the bell margin eccentricity, and local Reynolds number based on the instantaneous mean diameter of the bell.
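A sketch of the derived-quantity step, assuming EWMA smoothing of a synthetic 1-D track coordinate followed by spline differentiation; the track and smoothing constant are illustrative, not the experimental data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def ewma(x, alpha=0.3):
    """Exponentially weighted moving average smoothing of a sampled signal."""
    y = np.empty(len(x))
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    return y

# Noisy 1-D track of a bell-tip coordinate (synthetic, for illustration).
rng = np.random.default_rng(0)
t = np.linspace(0, 3, 151)
z = np.sin(2*np.pi*t/1.5) + 0.01*rng.standard_normal(t.size)

z_s = ewma(z)                    # smooth the raw positions
spl = CubicSpline(t, z_s)        # spline the smoothed track
vel = spl.derivative(1)(t)       # velocity estimate
acc = spl.derivative(2)(t)       # acceleration estimate
```

Smoothing before differentiation matters because differentiation amplifies measurement noise; the spline then gives continuous first and second derivatives along the track.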
Model-independent partial wave analysis using a massively-parallel fitting framework
NASA Astrophysics Data System (ADS)
Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.
2017-10-01
The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
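The spline-interpolated S-wave parametrization can be illustrated in a few lines. This is not GooFit code; the control-point positions and the fitted magnitudes and phases below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical control points in m^2(h+ h-) (GeV^2) with assumed fitted values.
m2_knots = np.array([0.4, 0.8, 1.2, 1.6, 2.0])
mag_knots = np.array([1.0, 1.8, 1.1, 0.6, 0.3])    # magnitudes (free fit parameters)
phase_knots = np.array([0.2, 0.9, 1.5, 1.9, 2.1])  # phases in rad (free fit parameters)

# Cubic splines interpolate magnitude and phase between the anchor points.
mag = CubicSpline(m2_knots, mag_knots)
phase = CubicSpline(m2_knots, phase_knots)

def s_wave_amplitude(m2):
    """Interpolated complex S-wave amplitude at invariant mass squared m2."""
    return mag(m2) * np.exp(1j * phase(m2))
```

In the actual fit, the knot values are the parameters varied by the maximum-likelihood optimizer; the spline only supplies the amplitude between knots.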
Simulating the evolution of non-point source pollutants in a shallow water environment.
Yan, Min; Kahawita, Rene
2007-03-01
Non-point source pollution originating from surface-applied chemicals in either liquid or solid form, as part of agricultural activities, appears in the surface runoff caused by rainfall. The infiltration and transport of these pollutants has a significant impact on subsurface and riverine water quality. The present paper describes the development of a unified 2-D mathematical model incorporating individual models for infiltration, adsorption, solubility rate, advection and diffusion, which significantly improves current practice in the mathematical modeling of pollutant evolution in shallow water. The governing equations have been solved numerically using cubic spline integration. Experiments were conducted at the Hydrodynamics Laboratory of the Ecole Polytechnique de Montreal to validate the mathematical model. Good correspondence between the computed results and experimental data has been obtained. The model may be used to predict the ultimate fate of surface-applied chemicals by evaluating the proportions that are dissolved, infiltrated into the subsurface or washed off.
Adaptive guidance for an aero-assisted boost vehicle
NASA Astrophysics Data System (ADS)
Pamadi, Bandu N.; Taylor, Lawrence W., Jr.; Price, Douglas B.
An adaptive guidance system incorporating dynamic pressure constraint is studied for a single stage to low earth orbit (LEO) aero-assist booster with thrust gimbal angle as the control variable. To derive an adaptive guidance law, cubic spline functions are used to represent the ascent profile. The booster flight to LEO is divided into initial and terminal phases. In the initial phase, the ascent profile is continuously updated to maximize the performance of the boost vehicle enroute. A linear feedback control is used in the terminal phase to guide the aero-assisted booster onto the desired LEO. The computer simulation of the vehicle dynamics considers a rotating spherical earth, inverse square (Newtonian) gravity field and an exponential model for the earth's atmospheric density. This adaptive guidance algorithm is capable of handling large deviations in both atmospheric conditions and modeling uncertainties, while ensuring maximum booster performance.
The parametrization of radio source coordinates in VLBI and its impact on the CRF
NASA Astrophysics Data System (ADS)
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2016-04-01
Usually, celestial radio sources in the celestial reference frame (CRF) catalog are divided into three categories: defining, special handling, and others. The defining sources are those used for the datum realization of the CRF, i.e. they are included in the No-Net-Rotation (NNR) constraints to maintain the axis orientation of the CRF, and are modeled with one set of constant coordinates. At the current level of precision, the choice of the defining sources has a significant effect on the coordinates. For the ICRF2, 295 sources were chosen as defining sources, based on their geometrical distribution, statistical properties, and stability. The number of defining sources is a compromise between the reliability of the datum, which increases with the number of sources, and the noise introduced by each source; the optimal number of defining sources is thus a trade-off between reliability, geometry, and precision. In the ICRF2, only 39 sources were sorted into the special handling group, as they show large fluctuations in their position; they are therefore excluded from the NNR conditions and their positions are normally estimated for each VLBI session instead of as global parameters. All remaining sources are classified as "others"; this is the largest group, containing sources which have not shown any very problematic behavior but still do not fulfill the requirements for defining sources. However, a large fraction of the unstable sources show other favorable characteristics, e.g. large flux density (brightness) and a long history of observations. It would therefore be advantageous to include these sources in the NNR condition, but their instability currently inhibits this. If the coordinate model of these sources were extended, it would be possible to use them for the NNR condition as well.
Studies show that the behavior of each source can vary dramatically in time; hence, each source would have to be modeled individually. The sheer number of sources (more than 600 are included in our study) sets practical limitations. We decided to use the multivariate adaptive regression splines (MARS) procedure to parametrize the source coordinates, as it allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way. The algorithm finds the ideal knot positions for the splines and thus the best number of polynomial pieces to fit the data. We compare linear and cubic splines determined by MARS with "human"-determined linear splines and assess their impact on the CRF. Within this work we try to answer the following questions: How can we find optimal criteria for the definition of the defining and unstable sources? What are the best polynomials for the individual categories? How much can we improve the CRF by extending the parametrization of the sources?
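The knot-finding idea behind MARS can be sketched in one dimension. This toy forward pass (not the MARS implementation used in the study) greedily adds hinge basis functions max(0, x − k), trying each observed epoch as a candidate knot and keeping the one that most reduces the least-squares residual:

```python
import numpy as np

def greedy_hinge_knots(x, y, n_knots=3):
    """Toy 1-D MARS-style forward pass: greedily place hinge-function knots."""
    basis = [np.ones_like(x), x]       # start from a constant plus a linear term
    knots = []
    for _ in range(n_knots):
        best = None
        for k in x[1:-1]:              # every interior sample is a candidate knot
            A = np.column_stack(basis + [np.maximum(0.0, x - k)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            sse = np.sum((A @ coef - y) ** 2)
            if best is None or sse < best[0]:
                best = (sse, k)
        knots.append(best[1])
        basis.append(np.maximum(0.0, x - best[1]))
    A = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return knots, coef, A @ coef
```

On data with a genuine break (e.g. a position time series whose rate changes at some epoch), the selected knot lands on the break, which is exactly the property exploited to segment source coordinate series automatically.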
ERIC Educational Resources Information Center
Varlamova, Elena V.; Naciscione, Anita; Tulusina, Elena A.
2016-01-01
Relevance of the issue stated in the article is determined by the fact that there is a lack of research devoted to the methods of teaching English and German collocations. The aim of our work is to determine methods of teaching English and German collocations to Russian university students studying foreign languages through experimental testing.…
NASA Technical Reports Server (NTRS)
Schiess, James R.; Kerr, Patricia A.; Smith, Olivia C.
1988-01-01
Smooth curves are drawn easily among plotted data. The Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to flexible, smooth representation of experimental data. "Tension" denotes mathematical analog of mechanical tension in spline or other mechanical curve-fitting tool, and "spline" denotes mathematical generalization of tool. Program differs from usual spline under tension in that it allows user to specify different values of tension between adjacent pairs of knots rather than constant tension over entire range of data. Subroutines use automatic adjustment scheme that varies tension parameter for each interval until maximum deviation of spline from line joining knots is less than or equal to amount specified by user. Procedure frees user from drudgery of adjusting individual tension parameters while still giving control over local behavior of spline.
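The adjustment loop can be sketched conceptually. This is not the NASA subroutine: a true tension spline approaches the chord between knots as tension grows, and here that limit is imitated by blending a cubic spline toward the linear interpolant on each interval, raising each interval's tension until the curve stays within a user tolerance of the chord.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def auto_tension(x, y, tol, step=0.1):
    """Per-interval automatic tension adjustment (conceptual stand-in)."""
    spline = CubicSpline(x, y)
    tensions = np.zeros(len(x) - 1)
    for i in range(len(x) - 1):
        t = np.linspace(x[i], x[i + 1], 20)
        chord = np.interp(t, x, y)      # straight line joining the two knots
        tau = 0.0
        # Raise tension until max deviation from the chord is within tol.
        while np.max(np.abs((1 - tau) * spline(t) + tau * chord - chord)) > tol:
            tau = min(1.0, tau + step)
        tensions[i] = tau
    return tensions
```

A loose tolerance leaves every interval untensioned (fully cubic); a tight tolerance forces tension only on the intervals where the cubic overshoots, which is the local control the abstract describes.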
NASA Technical Reports Server (NTRS)
Rogers, David
1991-01-01
G/SPLINES are a hybrid of Friedman's Multivariable Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.
47 CFR 69.121 - Connection charges for expanded interconnection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... separations. (2) Charges for subelements associated with physical collocation or virtual collocation, other... of the virtual collocation equipment described in § 64.1401(e)(1) of this chapter, may reasonably...
An algorithm for surface smoothing with rational splines
NASA Technical Reports Server (NTRS)
Schiess, James R.
1987-01-01
Discussed is an algorithm for smoothing surfaces with spline functions containing tension parameters. The bivariate spline functions used are tensor products of univariate rational-spline functions. A distinct tension parameter corresponds to each rectangular strip defined by a pair of consecutive spline knots along either axis. Equations are derived for writing the bivariate rational spline in terms of functions and derivatives at the knots. Estimates of these values are obtained via weighted least squares subject to continuity constraints at the knots. The algorithm is illustrated on a set of terrain elevation data.
Variation in Global Chemical Composition of PM2.5: Emerging Results from SPARTAN
NASA Technical Reports Server (NTRS)
Snider, Graydon; Weagle, Crystal L.; Murdymootoo, Kalaivani K.; Ring, Amanda; Ritchie, Yvonne; Stone, Emily; Walsh, Ainsley; Akoshile, Clement; Anh, Nguyen Xuan; Balasubramanian, Rajasekhar;
2016-01-01
The Surface PARTiculate mAtter Network (SPARTAN) is a long-term project that includes characterization of chemical and physical attributes of aerosols from filter samples collected worldwide. This paper discusses the ongoing efforts of SPARTAN to define and quantify major ions and trace metals found in fine particulate matter (PM2.5). Our methods infer the spatial and temporal variability of PM2.5 in a cost-effective manner. Gravimetrically weighed filters represent multi-day averages of PM2.5, with a collocated nephelometer sampling air continuously. SPARTAN instruments are paired with AErosol RObotic NETwork (AERONET) sun photometers to better understand the relationship between ground-level PM2.5 and columnar aerosol optical depth (AOD). We have examined the chemical composition of PM2.5 at 12 globally dispersed, densely populated urban locations and a site at Mammoth Cave (US) National Park used as a background comparison. So far, each SPARTAN location has been active between the years 2013 and 2016 over periods of 2-26 months, with an average period of 12 months per site. These sites have collectively gathered over 10 years of quality aerosol data. The major PM2.5 constituents across all sites (relative contribution ± standard deviation) are ammoniated sulfate (20% ± 11%), crustal material (13.4% ± 9.9%), equivalent black carbon (11.9% ± 8.4%), ammonium nitrate (4.7% ± 3.0%), sea salt (2.3% ± 1.6%), trace element oxides (1.0% ± 1.1%), water (7.2% ± 3.3%) at 35% relative humidity, and residual matter (40% ± 24%). Analysis of filter samples reveals that several PM2.5 chemical components varied by more than an order of magnitude between sites.
Ammoniated sulfate ranges from 1.1 micrograms per cubic meter (Buenos Aires, Argentina) to 17 micrograms per cubic meter (Kanpur, India, in the dry season). Ammonium nitrate ranged from 0.2 micrograms per cubic meter (Mammoth Cave, in summer) to 6.8 micrograms per cubic meter (Kanpur, dry season). Equivalent black carbon ranged from 0.7 micrograms per cubic meter (Mammoth Cave) to over 8 micrograms per cubic meter (Dhaka, Bangladesh, and Kanpur, India). Comparison of SPARTAN versus coincident measurements from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network at Mammoth Cave yielded a high degree of consistency for daily PM2.5 (r² = 0.76, slope = 1.12), daily sulfate (r² = 0.86, slope = 1.03), and mean fractions of all major PM2.5 components (within 6%). Major ions generally agree well with previous studies at the same urban locations (e.g. sulfate fractions agree within 4% for 8 out of 11 collocation comparisons). Enhanced anthropogenic dust fractions in large urban areas (e.g. Singapore, Kanpur, Hanoi, and Dhaka) are apparent from high Zn:Al ratios. The expected water contribution to aerosols is calculated via the hygroscopicity parameter κ_v (volume) for each filter. Mean aggregate values ranged from 0.15 (Ilorin) to 0.28 (Rehovot); the all-site mean is 0.20 ± 0.04. Chemical composition and water retention in each filter measurement allow inference of hourly PM2.5 at 35% relative humidity by merging with nephelometer measurements. These hourly PM2.5 estimates compare favourably with a beta attenuation monitor (MetOne) at the nearby US embassy in Beijing, with r² = 0.67 (N = 3167), compared to r² = 0.62 when κ_v was not considered.
SPARTAN continues to provide an open-access database of PM2.5 compositional filter information and hourly mass collected from a global federation of instruments.
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods were investigated: linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the methods did not produce statistically different relative peak areas from each other. Nevertheless, their performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
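The resampling step can be illustrated with two of the five methods (piecewise cubic Hermite and cubic spline). This is a sketch with a synthetic Gaussian peak, not the study's data: the sparsely sampled first-dimension profile is interpolated onto a finer grid so that retention-time shifts between injections can be aligned.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Coarse first-dimension sampling of a synthetic chromatographic peak.
t = np.arange(0.0, 10.0, 1.0)
peak = np.exp(-0.5 * ((t - 5.2) / 1.1) ** 2)

# Interpolate onto a 10x finer grid with two of the compared methods.
t_fine = np.linspace(0.0, 9.0, 91)
pchip = PchipInterpolator(t, peak)(t_fine)   # shape-preserving, no overshoot
spline = CubicSpline(t, peak)(t_fine)        # smoother, may overshoot slightly
```

Both interpolants reproduce the measured samples exactly; they differ only between samples, which is where the choice of method affects the subsequent alignment and PARAFAC peak areas.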
Numerical Methods Using B-Splines
NASA Technical Reports Server (NTRS)
Shariff, Karim; Merriam, Marshal (Technical Monitor)
1997-01-01
The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.
Fekete, Charles-Antoine Collins; Doolan, Paul; Dias, Marta F; Beaulieu, Luc; Seco, Joao
2015-07-07
To develop an accurate phenomenological model of the cubic spline path estimate of the proton path, accounting for the initial proton energy and water equivalent thickness (WET) traversed. Monte Carlo (MC) simulations were used to calculate the path of protons crossing various WET (10-30 cm) of different materials (LN300, water and CB2-50% CaCO3) for a range of initial energies (180-330 MeV). For each MC trajectory, cubic spline trajectories (CST) were constructed based on the entrance and exit information of the protons and compared with the MC using the root mean square (RMS) metric. The CST path depends on the direction vector magnitudes |P0,1|. First, |P0,1| is set to the proton path length (with factors Λnorm0,1 = 1.0). Then, two optimal factors Λ0,1 are introduced in |P0,1| and varied to minimize the RMS difference with the MC paths for every configuration. A set of Λopt0,1 factors, as a function of the WET to water equivalent path length (WEPL) ratio, that minimizes the RMS is presented. MTF analysis is then performed on proton radiographs of a line-pair phantom reconstructed using the CST trajectories. Λopt0,1 was fitted to the WET/WEPL ratio using a quadratic function (Y = A + BX², where A = 1.01, 0.99 and B = 0.43, −0.46 for Λopt0 and Λopt1, respectively). The RMS deviation calculated along the path, between the CST and the MC, increases with the WET; the increase is larger when using Λnorm0,1 than Λopt0,1 (difference of 5.0% at WET/WEPL = 0.66). For 230/330 MeV protons, the MTF10% was found to increase by 40/16%, respectively, for a thin phantom (15 cm) when using the Λopt0,1 model compared to the Λnorm0,1 model. Calculation times for Λopt0,1 are scaled down compared to the most likely path (MLP), and RMS deviations are similar within standard deviation. Based on the results of this study, using CST with the Λopt0,1 factors reduces the RMS deviation and increases the spatial resolution when reconstructing proton trajectories.
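The CST construction above can be sketched as a cubic Hermite curve between the measured entry and exit points, with tangent magnitudes |P0|, |P1| set to Λ times the path length. The quadratic Λ(WET/WEPL) fit uses the coefficients reported in the abstract; the rest of the implementation details are assumptions for illustration.

```python
import numpy as np

def lambda_opt(wet_over_wepl):
    """Lambda_opt0, Lambda_opt1 = A + B * (WET/WEPL)^2, coefficients from the abstract."""
    r2 = wet_over_wepl ** 2
    return 1.01 + 0.43 * r2, 0.99 - 0.46 * r2

def cst(p_in, d_in, p_out, d_out, path_length, wet_over_wepl, t):
    """Cubic spline trajectory: Hermite curve, t in [0, 1] from entry to exit."""
    lam0, lam1 = lambda_opt(wet_over_wepl)
    # Tangent vectors scaled by Lambda * path length.
    m0 = lam0 * path_length * d_in / np.linalg.norm(d_in)
    m1 = lam1 * path_length * d_out / np.linalg.norm(d_out)
    # Standard cubic Hermite basis functions.
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return (h00 * p_in[:, None] + h10 * m0[:, None]
            + h01 * p_out[:, None] + h11 * m1[:, None])
```

By construction the curve passes through the entry and exit points with the measured directions, and for a proton entering and leaving along the same axis it reduces to a straight line.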
Spline screw payload fastening system
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
1993-01-01
A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member.
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.
Isogeometric Collocation for Elastostatics and Explicit Dynamics
2012-01-25
ICES REPORT 12-07, January 2012: F. Auricchio, L. Beirão da Veiga, T.J.R. Hughes, A. Reali, G. Sangalli, Isogeometric collocation for elastostatics and explicit dynamics.
2013-08-01
…as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the use of B-spline… FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence. … Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions on Medical Imaging. 2001;20(6):526-34.
NASA Astrophysics Data System (ADS)
Liu, L. H.; Tan, J. Y.
2007-02-01
A least-squares collocation meshless method is employed for solving the radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. Except for the collocation points which are used to construct the trial functions, a number of auxiliary points are also adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of residuals of all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with the other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving the radiative heat transfer in absorbing, emitting and scattering media.
Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture
NASA Technical Reports Server (NTRS)
Desai, Prasun N.; Conway, Bruce A.
2005-01-01
Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in the governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.
Collocation and Pattern Recognition Effects on System Failure Remediation
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Press, Hayes N.
2007-01-01
Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning the system displays to find a problem and correct it using the provided checklist. Results indicate that large parameter movement aided detection, and pattern recognition is then needed for diagnosis; the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization, because of the possible pattern recognition developed with training and its use.
TWO-LEVEL TIME MARCHING SCHEME USING SPLINES FOR SOLVING THE ADVECTION EQUATION. (R826371C004)
A new numerical algorithm using quintic splines is developed and analyzed: quintic spline Taylor-series expansion (QSTSE). QSTSE is an Eulerian flux-based scheme that uses quintic splines to compute space derivatives and Taylor series expansion to march in time. The new scheme...
Sokolova, L V; Cherkasova, A S
2015-01-01
Texts or words/pseudowords are often used as stimuli in research on human verbal activity. Our study focuses on the decoding of grammatical constructions consisting of two to three words (collocations). Russian and English collocation sets without any narrative were presented to Russian-speaking students with different levels of English language skill. The stimulus material contained two types of collocations: paradigmatic and syntagmatic. Thirty students (average age 20.4 ± 0.22) took part in the study; they were divided into two equal groups depending on their English language skill (linguists/non-linguists). During reading, bioelectrical activity of the cortex was registered from 12 electrodes in the alpha, beta, and theta bands. The coherence function, which reflects cooperation of different cortical areas during the reading of collocations, was analyzed. The increase of interhemispheric and diagonal connections while reading collocations in different languages in the group of students with low knowledge of the foreign language testifies to the importance of functional cooperation between the hemispheres. Brain bioelectrical activity of students with good foreign-language knowledge during reading of all collocation types in Russian and English was characterized by economization of nervous substrate resources compared to non-linguists. Selective activation of certain cortical areas, depending on the grammatical construction type, was also observed in the non-linguist group, which is probably related to a special decoding system that processes the presented stimuli. Reading Russian paradigmatic constructions by non-linguists entailed an increase in connections between left cortical areas, and reading English syntagmatic collocations, between right ones.
Color management with a hammer: the B-spline fitter
NASA Astrophysics Data System (ADS)
Bell, Ian E.; Liu, Bonny H. P.
2003-01-01
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
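A minimal one-dimensional analogue of the filtering idea above (the paper's fitter is 3D) can be shown with a smoothing B-spline: noisy measurements are replaced by a guaranteed-smooth model that can then be evaluated, composed, or inverted. The response curve and noise level here are invented for illustration.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
true = x ** 2.2                      # stand-in for a device response curve
noisy = true + rng.normal(0.0, 0.01, x.size)

# Smoothing B-spline fit: the parameter s bounds the sum of squared
# residuals, so s ~ N * sigma^2 filters out noise of known magnitude.
tck = splrep(x, noisy, s=50 * 0.01 ** 2)
smooth = splev(x, tck)
```

Because `s` caps the total squared residual, the fitted spline stays close to the data while absorbing measurement noise, which is the "filter" role described in the abstract.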
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences, as well as improving the ELISA microarray process, requires both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors.
The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D’Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T.
Quantum and anharmonic effects are investigated in (H₂)₂–Li⁺–benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H₂ molecule to the H₂–Li⁺–benzene complex increases the ZPE of the system by 5.6 kJ mol⁻¹ to 17.6 kJ mol⁻¹. This ZPE is 42% of the total electronic binding energy of (H₂)₂–Li⁺–benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H₂ to H₂–Li⁺–benzene is 7.7 kJ mol⁻¹, compared to 12.4 kJ mol⁻¹ for the first H₂ molecule. Anharmonicity is found to be even more important when a second (and subsequent) H₂ molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H₂ molecules are found at larger distance from the Li⁺ ion and are more confined in the θ coordinate than in H₂–Li⁺–benzene. They also show that both H₂ molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H₂ molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H₂)₂–Li⁺–benzene PESs are developed. These use a modified Shepard interpolation for the Li⁺–benzene and H₂–Li⁺–benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H₂–H₂ interaction.
Because of the neglect of three-body H{sub 2}, H{sub 2}, Li{sup +} terms, both fragment PESs lead to overbinding of the second H{sub 2} molecule by 1.5 kJ mol{sup −1}. Probability density histograms, however, indicate that the wavefunctions for the two H{sub 2} molecules are effectively identical on the “full” and fragment PESs. This suggests that the 1.5 kJ mol{sup −1} error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H{sub 2}–H{sub 2} interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.« less
Learning L2 Collocations Incidentally from Reading
ERIC Educational Resources Information Center
Pellicer-Sánchez, Ana
2017-01-01
Previous studies have shown that intentional learning through explicit instruction is effective for the acquisition of collocations in a second language (L2) (e.g. Peters, 2014, 2015), but relatively little is known about the effectiveness of incidental approaches for the acquisition of L2 collocations. The present study examined the incidental…
Incidental Learning of Collocation
ERIC Educational Resources Information Center
Webb, Stuart; Newton, Jonathan; Chang, Anna
2013-01-01
This study investigated the effects of repetition on the learning of collocation. Taiwanese university students learning English as a foreign language simultaneously read and listened to one of four versions of a modified graded reader that included different numbers of encounters (1, 5, 10, and 15 encounters) with a set of 18 target collocations.…
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... accessible by both the incumbent LEC and the collocating telecommunications carrier, at which the fiber optic... technically feasible, the incumbent LEC shall provide the connection using copper, dark fiber, lit fiber, or... that the incumbent LEC may adopt include: (1) Installing security cameras or other monitoring systems...
Usability Study of Two Collocated Prototype System Displays
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.
2007-01-01
Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the 2 new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the topics of greatest interest to investors is stock price change. Investors with long-term goals are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric smoothing-splines technique for predicting stock prices. The MARS model is a nonparametric, adaptive regression method well suited to high-dimensional problems with many variables; smoothing splines provide a nonparametric regression alternative. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, the MARS model selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio, and risk) as influential predictors of stock price. After fitting the semi-parametric splines, only 4 accounting variables (dividends, net EPS, EPS forecast, and P/E ratio) were selected as effective in forecasting stock prices.
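As a rough, hypothetical sketch of the smoothing-spline side of such a pipeline (not the study's code; the data and smoothing parameter below are invented), scipy's `UnivariateSpline` fits a cubic smoothing spline whose residual sum of squares is bounded by a user-chosen `s`:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical data: a noisy relationship between one accounting
# variable (say, book value per share) and stock price.
rng = np.random.default_rng(0)
book_value = np.sort(rng.uniform(1.0, 10.0, 200))
price = 3.0 * book_value + 5.0 * np.sin(book_value) + rng.normal(0.0, 1.0, 200)

# Smoothing spline: s bounds the sum of squared residuals, trading
# fidelity against smoothness (a nonparametric regression).
s = 250.0
spl = UnivariateSpline(book_value, price, k=3, s=s)

fitted = spl(book_value)
residual = float(np.sum((fitted - price) ** 2))
assert residual <= s + 1e-6  # documented smoothing condition
```

Smaller `s` tracks the data more closely; larger `s` yields a smoother, more parsimonious fit.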
Not Just "Small Potatoes": Knowledge of the Idiomatic Meanings of Collocations
ERIC Educational Resources Information Center
Macis, Marijana; Schmitt, Norbert
2017-01-01
This study investigated learner knowledge of the figurative meanings of 30 collocations that can be both literal and figurative. One hundred and seven Chilean Spanish-speaking university students of English were asked to complete a meaning-recall collocation test in which the target items were embedded in non-defining sentences. Results showed…
Teaching and Learning Collocation in Adult Second and Foreign Language Learning
ERIC Educational Resources Information Center
Boers, Frank; Webb, Stuart
2018-01-01
Perhaps the greatest challenge to creating a research timeline on teaching and learning collocation is deciding how wide to cast the net in the search for relevant publications. For one thing, the term "collocation" does not have the same meaning for all (applied) linguists and practitioners (Barfield & Gyllstad 2009) (see timeline).…
Supporting Collocation Learning with a Digital Library
ERIC Educational Resources Information Center
Wu, Shaoqun; Franken, Margaret; Witten, Ian H.
2010-01-01
Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…
Cross-Linguistic Influence: Its Impact on L2 English Collocation Production
ERIC Educational Resources Information Center
Phoocharoensil, Supakorn
2013-01-01
This research study investigated the influence of learners' mother tongue on their acquisition of English collocations. Having drawn the linguistic data from two groups of Thai EFL learners differing in English proficiency level, the researcher found that the native language (L1) plays a significant role in the participants' collocation learning…
Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations
ERIC Educational Resources Information Center
Liu, Dilin
2010-01-01
Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,…
English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information
ERIC Educational Resources Information Center
Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji
2012-01-01
We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…
The Effect of Error Correction Feedback on the Collocation Competence of Iranian EFL Learners
ERIC Educational Resources Information Center
Jafarpour, Ali Akbar; Sharifi, Abolghasem
2012-01-01
Collocations are one of the most important elements in language proficiency but the effect of error correction feedback of collocations has not been thoroughly examined. Some researchers report the usefulness and importance of error correction (Hyland, 1990; Bartram & Walton, 1991; Ferris, 1999; Chandler, 2003), while others showed that error…
Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks
ERIC Educational Resources Information Center
Menon, Sujatha; Mukundan, Jayakaran
2012-01-01
This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…
The Effect of Grouping and Presenting Collocations on Retention
ERIC Educational Resources Information Center
Akpinar, Kadriye Dilek; Bardakçi, Mehmet
2015-01-01
The aim of this study is two-fold. Firstly, it attempts to determine the role of presenting collocations by organizing them based on (i) the keyword, (ii) topic related and (iii) grammatical aspect on retention of collocations. Secondly, it investigates the relationship between participants' general English proficiency and the presentation types…
ERIC Educational Resources Information Center
Leonardi, Magda
1977-01-01
Discusses the importance of two Firthian themes for language teaching. The first theme, "Restricted Languages," concerns the "microlanguages" of every language (e.g., literary language, scientific, etc.). The second theme, "Collocation," shows that equivalent words in two languages rarely have the same position in…
Corpora and Collocations in Chinese-English Dictionaries for Chinese Users
ERIC Educational Resources Information Center
Xia, Lixin
2015-01-01
The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…
A two-level stochastic collocation method for semilinear elliptic equations with random coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Luoping; Zheng, Bin; Lin, Guang
In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximate solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by the stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.
Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.; Horn, P. W.
1994-01-01
Most of the time on this project was spent on the trajectory planning problem. The construction is equivalent to the classical spline construction in the case that the system matrix is nilpotent. If the dimension of the system is n, then a spline of degree 2n-1 is constructed. This gives a new approach to the construction of splines that is more efficient than the usual construction and at the same time allows the construction of a much larger class of splines. All known classes of splines are reconstructed using the approach of linear control theory. As a numerical analysis technique, control theory thus provides a very good tool for constructing splines. For the purposes of trajectory planning, however, it is quite another story. Enclosed in this document are four reports done under this grant.
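For intuition, the n = 2 case (a double integrator, the simplest nilpotent system) reduces to the familiar spline of degree 2n - 1 = 3, i.e. the cubic interpolating spline. A minimal sketch with scipy, standing in for (not reproducing) the control-theoretic construction; the waypoints are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical trajectory waypoints: times and positions.
waypoints_t = np.array([0.0, 1.0, 2.5, 4.0])
waypoints_x = np.array([0.0, 1.0, 0.5, 2.0])

# Degree-3 spline (the n = 2, double-integrator case).
trajectory = CubicSpline(waypoints_t, waypoints_x)

# The spline passes through every waypoint and has a continuous
# second derivative (acceleration) in between.
assert np.allclose(trajectory(waypoints_t), waypoints_x)
velocity_mid = trajectory(2.0, 1)   # first derivative at t = 2.0
```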
Hoffman, Risa M.; Leister, Erin; Kacanek, Deborah; Shapiro, David E.; Read, Jennifer S.; Bryson, Yvonne; Currier, Judith S.
2013-01-01
Background Women who use antiretroviral therapy (ART) solely for the prevention of mother-to-child transmission of HIV discontinue ART postpartum. We hypothesized that women discontinuing ART by 6 weeks postpartum (“discontinuers”) would have elevated postpartum inflammatory biomarker levels relative to women remaining on ART postpartum (“continuers”). Methods Data from HIV-infected pregnant women enrolled in the International Maternal Pediatric Adolescent AIDS Clinical Trials Group P1025 with CD4 counts >350 cells per cubic millimeter before initiating ART, or first pregnancy CD4 counts >400 cells per cubic millimeter after starting ART, and with available stored plasma samples at >20 weeks of gestation, delivery, and 6 weeks postpartum were analyzed. Plasma samples were tested for highly sensitive C-reactive protein, D-dimer, and interleukin-6. We used longitudinal linear spline regression to model biomarkers over time. Results Data from 128 women (65 continuers and 63 discontinuers) were analyzed. All biomarkers increased from late pregnancy to delivery, then decreased postpartum (slopes different from 0, P < 0.001). Continuers had a steeper decrease in log D-dimer between delivery and 6 weeks postpartum than discontinuers (P = 0.002). Conclusions In contrast to results from treatment interruption studies in adults, both ART continuers and ART discontinuers had significant postpartum decreases in the levels of D-dimer, highly sensitive C-reactive protein, and interleukin-6. Continuation was associated with a more rapid decline in D-dimer levels compared with discontinuation. PMID:23714738
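The longitudinal linear spline regression used here can be sketched with a simple "broken stick" design matrix: a single knot lets the slope change at delivery. The times, knot, and slopes below are hypothetical, not the study's data:

```python
import numpy as np

# Time in weeks relative to delivery (knot at 0), noiseless demo data.
t = np.linspace(-20.0, 6.0, 60)
knot = 0.0
hinge = np.maximum(t - knot, 0.0)

# Simulated log-biomarker: slope +0.1 before delivery, -0.3 after.
y = 1.0 + 0.1 * t + (-0.3 - 0.1) * hinge

# Design matrix: intercept, time, and the hinge term.
X = np.column_stack([np.ones_like(t), t, hinge])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

pre_slope = beta[1]               # slope before the knot
post_slope = beta[1] + beta[2]    # slope after the knot
```

With noiseless data, least squares recovers the two slopes exactly; with real biomarker data the same design is fit by mixed-effects or GEE machinery.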
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Keller, J.; Wallen, R.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1977-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
On the Effect of Gender and Years of Instruction on Iranian EFL Learners' Collocational Competence
ERIC Educational Resources Information Center
Ganji, Mansoor
2012-01-01
This study investigates the Iranian EFL learners' Knowledge of Lexical Collocation at three academic levels: freshmen, sophomores, and juniors. The participants were forty three English majors doing their B.A. in English Translation studies in Chabahar Maritime University. They took a 50-item fill-in-the-blank test of lexical collocations. The…
ERIC Educational Resources Information Center
Gheisari, Nouzar; Yousofi, Nouroldin
2016-01-01
The effectiveness of different teaching methods of collocational expressions in ESL/EFL contexts of education has been a point of debate for more than two decades, with some believing in explicit and the others in implicit instruction of collocations. In this regard, the present study aimed at finding about which kind of instruction is more…
ERIC Educational Resources Information Center
Krummes, Cedric; Ensslin, Astrid
2015-01-01
Whereas there exists a plethora of research on collocations and formulaic language in English, this article contributes towards a somewhat less developed area: the understanding and teaching of formulaic language in German as a foreign language. It analyses formulaic sequences and collocations in German writing (corpus-driven) and provides modern…
Symmetrical and Asymmetrical Scaffolding of L2 Collocations in the Context of Concordancing
ERIC Educational Resources Information Center
Rezaee, Abbas Ali; Marefat, Hamideh; Saeedakhtar, Afsaneh
2015-01-01
Collocational competence is recognized to be integral to native-like L2 performance, and concordancing can be of assistance in gaining this competence. This study reports on an investigation into the effect of symmetrical and asymmetrical scaffolding on the collocational competence of Iranian intermediate learners of English in the context of…
Profiling the Collocation Use in ELT Textbooks and Learner Writing
ERIC Educational Resources Information Center
Tsai, Kuei-Ju
2015-01-01
The present study investigates the collocational profiles of (1) three series of graded textbooks for English as a foreign language (EFL) commonly used in Taiwan, (2) the written productions of EFL learners, and (3) the written productions of native speakers (NS) of English. These texts were examined against a purpose-built collocation list. Based…
Learning and Teaching L2 Collocations: Insights from Research
ERIC Educational Resources Information Center
Szudarski, Pawel
2017-01-01
The aim of this article is to present and summarize the main research findings in the area of learning and teaching second language (L2) collocations. Being a large part of naturally occurring language, collocations and other types of multiword units (e.g., idioms, phrasal verbs, lexical bundles) have been identified as important aspects of L2…
B-spline Method in Fluid Dynamics
NASA Technical Reports Server (NTRS)
Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)
2001-01-01
B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods for computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex-geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
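The compact-support and partition-of-unity properties mentioned above can be checked directly. A small sketch using scipy's `BSpline` with a clamped cubic knot vector (an illustration, not the paper's code); passing the identity as vector-valued coefficients makes one spline object return all basis functions at once:

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic knot vector on [0, 4].
k = 3                                                  # cubic
t = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], float)
n = len(t) - k - 1                                     # 7 basis functions

# Identity coefficients -> evaluating returns all basis functions.
basis = BSpline(t, np.eye(n), k)

x = np.linspace(0.0, 4.0, 41)
B = basis(x)                                           # shape (41, 7)

# Nonnegativity and partition of unity on the base interval.
assert np.all(B >= -1e-12)
assert np.allclose(B.sum(axis=1), 1.0)
```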
Interpolation by new B-splines on a four directional mesh of the plane
NASA Astrophysics Data System (ADS)
Nouisser, O.; Sbibih, D.
2004-01-01
In this paper we construct new simple and composed B-splines on the uniform four directional mesh of the plane, in order to improve the approximation order of B-splines studied in Sablonniere (in: Program on Spline Functions and the Theory of Wavelets, Proceedings and Lecture Notes, Vol. 17, University of Montreal, 1998, pp. 67-78). If φ is such a simple B-spline, we first determine the space of polynomials with maximal total degree included in , and we prove some results concerning the linear independence of the family . Next, we show that the cardinal interpolation with φ is correct and we study in S(φ) a Lagrange interpolation problem. Finally, we define composed B-splines by repeated convolution of φ with the characteristic functions of a square or a lozenge, and we give some of their properties.
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS) is described: a highly interactive computer-graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity using ADAIS of five times that for conventional manual methods of wind tunnel data analysis is routinely achieved. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least-squares polynomial fit up to seventh order.
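The two fairing options quoted above, a least-squares polynomial (up to seventh order) versus an interpolating cubic spline, behave quite differently. A hypothetical sketch with invented wind-tunnel-like data, using modern numpy/scipy rather than ADAIS:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical samples: lift coefficient vs. angle of attack.
rng = np.random.default_rng(1)
alpha = np.linspace(-5.0, 15.0, 25)                 # angle of attack, deg
cl = 0.1 * alpha + 0.002 * alpha**2 + rng.normal(0.0, 0.01, alpha.size)

# Least-squares polynomial fair (degree 7, the quoted maximum):
# smooths the scatter but does not pass exactly through the points.
coeffs = np.polyfit(alpha, cl, 7)
cl_poly = np.polyval(coeffs, alpha)

# Cubic spline fair: passes through every data point.
spline = CubicSpline(alpha, cl)
cl_spline = spline(alpha)

assert np.allclose(cl_spline, cl)       # spline interpolates
assert not np.allclose(cl_poly, cl)     # polynomial only approximates
```

The polynomial is the better choice for noisy data; the spline is the better choice when the points themselves are trusted.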
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, S.
This report describes the use of several subroutines from the CORLIB core mathematical subroutine library for the solution of a model fluid flow problem. The model consists of the Euler partial differential equations. The equations are spatially discretized using the method of pseudo-characteristics. The resulting system of ordinary differential equations is then integrated using the method of lines. The stiff ordinary differential equation solver LSODE (2) from CORLIB is used to perform the time integration. The non-stiff solver ODE (4) is used to perform a related integration. The linear equation solver subroutines DECOMP and SOLVE are used to solve linear systems whose solutions are required in the calculation of the time derivatives. The monotone cubic spline interpolation subroutines PCHIM and PCHFE are used to approximate water properties. The report describes the use of each of these subroutines in detail. It illustrates the manner in which modules from a standard mathematical software library such as CORLIB can be used as building blocks in the solution of complex problems of practical interest. 9 refs., 2 figs., 4 tabs.
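PCHIM's modern descendant is scipy's `PchipInterpolator`, which implements the same Fritsch-Carlson monotone cubic idea. A sketch with a hypothetical water-property-like table (values invented, not from the report):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical monotone property table: saturation pressure vs. temperature.
temperature = np.array([0.0, 25.0, 50.0, 75.0, 100.0])    # deg C
saturation_p = np.array([0.6, 3.2, 12.3, 38.6, 101.3])    # kPa, increasing

prop = PchipInterpolator(temperature, saturation_p)

# Unlike an ordinary cubic spline, the monotone interpolant introduces
# no overshoot: it stays nondecreasing between increasing data points.
fine = np.linspace(0.0, 100.0, 1001)
values = prop(fine)
assert np.all(np.diff(values) >= -1e-12)
assert np.allclose(prop(temperature), saturation_p)
```

Monotonicity preservation is exactly why PCHIM-style interpolants are preferred for physical property tables, where spline overshoot could produce unphysical values.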
NASA Astrophysics Data System (ADS)
Potra, F. L.; Potra, T.; Soporan, V. F.
We propose two optimization methods for the processes which appear in EDM (Electrical Discharge Machining). The first refers to the introduction of a new function approximating the thermal flux energy in the EDM machine. Classical research approximates this energy with the Gauss function; in this unconventional technology, however, the Gaussian bell becomes null only for r → +∞, where r is the radius of the crater produced by EDM. We therefore introduce a cubic spline regression which descends to zero at the crater's boundary. In the second optimization we propose modifications in the working technology regarding the displacement of the tool electrode toward the piece electrode, such that the material melting is realized in optimal time, and regarding the feeding speed of the dielectric liquid, which governs the solidification of the expulsed material. This we realize using the FAHP algorithm, based on the theory of eigenvalues and eigenvectors, which leads to mean values of best approximation.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1985-01-01
An approximation scheme is developed for the identification of hybrid systems describing the transverse vibrations of flexible beams with attached tip bodies. In particular, problems involving the estimation of functional parameters are considered. The identification problem is formulated as a least squares fit to data subject to the coupled system of partial and ordinary differential equations describing the transverse displacement of the beam and the motion of the tip bodies respectively. A cubic spline-based Galerkin method applied to the state equations in weak form and the discretization of the admissible parameter space yield a sequence of approximating finite dimensional identification problems. It is shown that each of the approximating problems admits a solution and that from the resulting sequence of optimal solutions a convergent subsequence can be extracted, the limit of which is a solution to the original identification problem. The approximating identification problems can be solved using standard techniques and readily available software.
Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model
NASA Astrophysics Data System (ADS)
Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose
1999-01-01
This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, in support of early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained from triangulation of corresponding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed through a cubic B-spline interpolation technique in order to obtain a smoother representation. This methodology is being especially refined to be used with medical images for clinical evaluation of some eye diseases such as open-angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy.
Simulation the Effect of Internal Wave on the Acoustic Propagation
NASA Astrophysics Data System (ADS)
Ko, D. S.
2005-05-01
An acoustic radiation transport model with the Monte Carlo solution has been developed and applied to study the effect of internal wave induced random oceanic fluctuations on the deep ocean acoustic propagation. Refraction in the ocean sound channel is performed by means of bi-cubic spline interpolation of discrete deterministic ray paths in the angle(energy)-range-depth coordinates. Scattering by random internal wave fluctuations is accomplished by sampling a power law scattering kernel applying the rejection method. Results from numerical experiments show that the mean positions of acoustic rays are significantly displaced tending toward the sound channel axis due to the asymmetry of the scattering kernel. The spreading of ray depths and angles about the means depends strongly on frequency. The envelope of the ray displacement spreading is found to be proportional to the square root of range which is different from "3/2 law" found in the non-channel case. Suppression of the spreading is due to the anisotropy of fluctuations and especially due to the presence of sound channel itself.
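Bi-cubic spline interpolation over a regular range-depth grid can be sketched with scipy's `RectBivariateSpline`. The field below is a hypothetical stand-in for the tabulated deterministic ray paths, not the model's actual data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical smooth field sampled on a regular range-depth grid
# (loosely shaped like a deep-ocean sound-speed profile).
range_km = np.linspace(0.0, 100.0, 21)
depth_m = np.linspace(0.0, 5000.0, 26)
R, D = np.meshgrid(range_km, depth_m, indexing="ij")
field = 1500.0 + 0.016 * D + 2.0 * np.sin(R / 10.0)

# Bi-cubic spline (kx = ky = 3) over the tensor-product grid;
# s = 0 (default) makes it interpolating.
interp = RectBivariateSpline(range_km, depth_m, field, kx=3, ky=3)

# The spline reproduces the grid samples and gives smooth values
# (and derivatives) at off-grid points.
assert np.allclose(interp(range_km, depth_m), field)
off_grid = interp(12.5, 1234.0)[0, 0]
```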
Sensor-Based Optimization Model for Air Quality Improvement in Home IoT
Kim, Jonghyuk
2018-01-01
We introduce current home Internet of Things (IoT) technology and present research on its various forms and applications in real life. In addition, we describe IoT marketing strategies as well as specific modeling techniques for improving air quality, a key home IoT service. To this end, we summarize the latest research on sensor-based home IoT, studies on indoor air quality, and technical studies on random data generation. In addition, we develop an air quality improvement model that can be readily applied to the market by acquiring initial analytical data and building infrastructures using spectrum/density analysis and the natural cubic spline method. Accordingly, we generate related data based on user behavioral values. We integrate the logic into the existing home IoT system to enable users to easily access the system through the Web or mobile applications. We expect that the present introduction of a practical marketing application method will contribute to enhancing the expansion of the home IoT market. PMID:29570684
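The natural cubic spline step can be sketched directly. The sensor readings below are hypothetical; `bc_type="natural"` imposes the zero-second-derivative end conditions that define the natural cubic spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sparse sensor readings: fine-dust concentration by hour.
hours = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
pm10 = np.array([35.0, 28.0, 55.0, 70.0, 62.0, 48.0, 40.0])  # ug/m3

# Natural cubic spline: zero second derivative at both ends.
spline = CubicSpline(hours, pm10, bc_type="natural")

# Densify onto a 15-minute grid for downstream analysis.
fine_t = np.linspace(0.0, 24.0, 97)
fine_pm10 = spline(fine_t)

assert np.allclose(spline(hours), pm10)       # interpolates the readings
assert abs(spline(0.0, 2)) < 1e-8             # natural end condition
assert abs(spline(24.0, 2)) < 1e-8
```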
Surface fitting three-dimensional bodies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1974-01-01
The geometry of general three-dimensional bodies is generated from coordinates of points in several cross sections. Since these points may not be smooth, they are divided into segments and general conic sections are curve fit in a least-squares sense to each segment of a cross section. The conic sections are then blended in the longitudinal direction by fitting parametric cubic-spline curves through coordinate points which define the conic sections in the cross-sectional planes. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines and slopes at selected points. Slopes may be continuous or discontinuous and finite or infinite. After a satisfactory surface fit has been obtained, cards may be punched with the data necessary to form a geometry subroutine package for use in other computer programs. At any position on the body, coordinates, slopes and second partial derivatives are calculated. The method is applied to a blunted 70 deg delta wing, and it was found to generate the geometry very well.
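Longitudinal blending by parametric cubic splines can be sketched by fitting a vector-valued spline against a chord-length parameter. The 3-D points below (one per cross-sectional plane) are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical coordinates defining a conic section point in
# successive cross-sectional planes.
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.5, 0.2],
                   [2.0, 0.8, 0.1],
                   [3.0, 0.9, -0.1],
                   [4.0, 0.7, -0.3]])

# Chord-length parameterization: parameter increments proportional
# to the distance between consecutive points.
chord = np.linalg.norm(np.diff(points, axis=0), axis=1)
t = np.concatenate([[0.0], np.cumsum(chord)])

curve = CubicSpline(t, points, axis=0)   # vector-valued spline

# The curve passes through every input point; derivatives give the
# slopes needed for continuity checks along the body.
assert np.allclose(curve(t), points)
tangent_mid = curve(t[-1] / 2.0, 1)      # first derivative, shape (3,)
```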
Computing frequency by using generalized zero-crossing applied to intrinsic mode functions
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2006-01-01
This invention presents a method for computing Instantaneous Frequency by applying Empirical Mode Decomposition to a signal and using Generalized Zero-Crossing (GZC) and Extrema Sifting. The GZC approach is the most direct, local, and also the most accurate in the mean. Furthermore, this approach will also give a statistical measure of the scattering of the frequency value. For most practical applications, this mean frequency localized down to quarter of a wave period is already a well-accepted result. As this method physically measures the period, or part of it, the values obtained can serve as the best local mean over the period to which it applies. Through Extrema Sifting, instead of the cubic spline fitting, this invention constructs the upper envelope and the lower envelope by connecting local maxima points and local minima points of the signal with straight lines, respectively, when extracting a collection of Intrinsic Mode Functions (IMFs) from a signal under consideration.
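The straight-line envelope construction described in the last sentence is easy to sketch with numpy (a hypothetical two-tone test signal, not the patented implementation):

```python
import numpy as np

# Hypothetical two-tone signal.
t = np.linspace(0.0, 10.0, 500)
x = np.sin(2.0 * np.pi * t) + 0.3 * np.sin(9.0 * np.pi * t)

# Interior local extrema by neighbor comparison.
interior = np.arange(1, len(x) - 1)
maxima = interior[(x[interior] > x[interior - 1]) & (x[interior] > x[interior + 1])]
minima = interior[(x[interior] < x[interior - 1]) & (x[interior] < x[interior + 1])]

# Envelopes by connecting extrema with straight lines (np.interp
# clamps beyond the first/last extremum).
upper = np.interp(t, t[maxima], x[maxima])
lower = np.interp(t, t[minima], x[minima])

# Local mean of the envelopes, as used in the sifting step.
mean_env = 0.5 * (upper + lower)

assert np.allclose(upper[maxima], x[maxima])   # envelope touches the maxima
assert np.allclose(lower[minima], x[minima])
```

Replacing the straight lines with cubic splines through the same extrema recovers the classical EMD envelope construction the patent contrasts against.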
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which captures the minor artifacts present in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
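The gain-curve idea can be sketched in a few lines. This is a simplified stand-in, not the authors' algorithm: `UnivariateSpline` replaces their moving cubic-spline filters, and the spectrum, wavelength grid, and artifact model are invented.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical retrieved reflectance spectrum: a smooth spectrum times
# narrow multiplicative artifact wiggles (calibration residue).
wl = np.linspace(400.0, 2500.0, 200)                      # wavelength, nm
smooth_true = 0.3 + 0.1*np.sin(wl/300.0)
retrieved = smooth_true * (1.0 + 0.02*np.sin(wl/10.0))

# Cubic smoothing spline; `s` bounds the sum of squared residuals.
spl = UnivariateSpline(wl, retrieved, k=3, s=0.01)
smoothed = spl(wl)

# Gain curve = retrieved / smoothed; dividing every spectrum in the scene
# by this gain removes the common artifact pattern.
gain = retrieved / smoothed
corrected = retrieved / gain
```

For a real scene, `gain` would be averaged over many pixels so that only the scene-common artifacts survive in it.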
Berretta, Massimiliano; Micek, Agnieszka; Lafranconi, Alessandra; Rossetti, Sabrina; Di Francia, Raffaele; De Paoli, Paolo; Rossi, Paola; Facchini, Gaetano
2018-04-17
Coffee consumption has been associated with numerous cancers, but evidence on ovarian cancer risk is controversial. Therefore, we performed a meta-analysis on prospective cohort studies in order to review the evidence on coffee consumption and risk of ovarian cancer. Studies were identified through searching the PubMed and MEDLINE databases up to March 2017. Risk estimates were retrieved from the studies, and dose-response analysis was modelled by using restricted cubic splines. Additionally, a stratified analysis by menopausal status was performed. A total of 8 studies were eligible for the dose-response meta-analysis. Studies included in the analysis comprised 787,076 participants and 3,541 ovarian cancer cases. The results showed that coffee intake was not associated with ovarian cancer risk (RR = 1.06, 95% CI: 0.89, 1.26). Stratified and subgroup analyses showed consistent results. This comprehensive meta-analysis did not find evidence of an association between the consumption of coffee and risk of ovarian cancer.
Conway, Sadie H; Pompeii, Lisa A; Roberts, Robert E; Follis, Jack L; Gimeno, David
2016-03-01
The aim of this study was to examine the presence of a dose-response relationship between work hours and incident cardiovascular disease (CVD) in a representative sample of U.S. workers. A retrospective cohort study was conducted of 1926 individuals from the Panel Study of Income Dynamics (1986 to 2011) employed for at least 10 years. Restricted cubic spline regression was used to estimate the dose-response relationship of work hours with CVD. A dose-response relationship was observed in which an average workweek of 46 hours or more for at least 10 years was associated with an increased risk of CVD. Compared with working 45 hours per week, working an additional 10 hours per week or more for at least 10 years increased CVD risk by at least 16%. Working more than 45 work hours per week for at least 10 years may be an independent risk factor for CVD.
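Restricted cubic splines of the kind used in these dose-response analyses can be built directly. The sketch below uses Harrell's truncated-power construction, which is linear beyond the boundary knots; the knot placement and the simulated exposure data are hypothetical.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (truncated-power form): linear tails
    beyond the boundary knots; k knots give k-1 columns, including x itself."""
    x = np.asarray(x, float)
    t = np.asarray(knots, float)
    k = len(t)
    d = lambda a: np.maximum(x - a, 0.0) ** 3   # truncated cubic
    cols = [x]
    for j in range(k - 2):
        cols.append(
            d(t[j])
            - d(t[k-2]) * (t[k-1] - t[j]) / (t[k-1] - t[k-2])
            + d(t[k-1]) * (t[k-2] - t[j]) / (t[k-1] - t[k-2])
        )
    return np.column_stack(cols)

# Hypothetical dose-response data: risk rises with exposure, then flattens.
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 10.0, 500)
risk = np.log1p(dose) + rng.normal(0.0, 0.05, dose.size)
X = np.column_stack([np.ones_like(dose), rcs_basis(dose, [1, 3, 5, 7, 9])])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
fitted = X @ beta
```

The linear-tail property is what makes these splines stable in sparse exposure ranges, which is why they are the usual choice in epidemiologic dose-response modelling.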
Three-dimensional body scanning system for apparel mass-customization
NASA Astrophysics Data System (ADS)
Xu, Bugao; Huang, Yaxiong; Yu, Weiping; Chen, Tong
2002-07-01
Mass customization is a new manufacturing trend in which mass-market products (e.g., apparel) are quickly modified one at a time based on customers' needs. It is an effective competitive strategy for maximizing customers' satisfaction and minimizing inventory costs. An automatic body measurement system is essential for apparel mass customization. This paper introduces the development of a body scanning system, body size extraction methods, and body modeling algorithms. The scanning system utilizes the multiline triangulation technique to rapidly acquire surface data on a body, and provides accurate body measurements, many of which are not available with conventional methods. Cubic B-spline curves are used to connect and smooth body curves. From the scanned data, a body form can be constructed using linear Coons surfaces. The body form can be used as a digital model of the body for 3-D garment design and for virtual try-on of a designed garment. This scanning system and its application software enable apparel manufacturers to provide custom design services to consumers seeking personal-fit garments.
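Fitting a smooth closed cubic B-spline through a scanned body cross-section, as described above, can be sketched with SciPy's parametric spline routines. The cross-section data here are synthetic, and the smoothing parameter is an assumption.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical body cross-section: noisy samples of a closed, slightly
# oval curve (stand-in for one ring of scanner points).
theta = np.linspace(0.0, 2.0*np.pi, 60, endpoint=False)
r = 1.0 + 0.15*np.cos(2*theta)
rng = np.random.default_rng(1)
x = r*np.cos(theta) + rng.normal(0.0, 0.005, theta.size)
y = r*np.sin(theta) + rng.normal(0.0, 0.005, theta.size)
x = np.r_[x, x[0]]                     # duplicate first point to close
y = np.r_[y, y[0]]

# Periodic (per=True) cubic B-spline; s > 0 smooths the scanner noise.
tck, u = splprep([x, y], per=True, k=3, s=0.01)
uu = np.linspace(0.0, 1.0, 400)
xs, ys = splev(uu, tck)                # smooth closed body curve
```

A stack of such curves, one per cross-section, is then ready to be skinned by surfaces (e.g., the linear Coons patches mentioned in the abstract).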
Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud
NASA Astrophysics Data System (ADS)
Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.
2018-04-01
To address the lack of applicable analysis methods when three-dimensional laser scanning is applied to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on point cloud normal vectors is proposed. First, a kd-tree is used to establish the topological relations. Datum points are detected by tracking point cloud normal vectors, each determined from a local planar fit. Then, cubic B-spline curves are fitted to the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated from the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflects the deformation of the datum features.
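The first step — normals from a kd-tree neighbourhood and a local planar fit — can be sketched as follows. The point cloud and neighbourhood size are hypothetical; the paper's exact tracking procedure is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(pts, k=12):
    """Per-point normals from a local planar fit: for each point, take its
    k nearest neighbours (kd-tree) and use the direction of least spread
    (smallest singular value) of the centred neighbourhood as the normal."""
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k)
    normals = np.empty_like(pts)
    for i, nb in enumerate(idx):
        q = pts[nb] - pts[nb].mean(axis=0)
        # last right singular vector = normal of the best-fit local plane
        _, _, Vt = np.linalg.svd(q, full_matrices=False)
        normals[i] = Vt[-1]
    return normals

# Sanity-check cloud: points on the plane z = 0, so every normal is +/- z.
rng = np.random.default_rng(3)
pts = np.column_stack([rng.uniform(0, 1, 300),
                       rng.uniform(0, 1, 300),
                       np.zeros(300)])
normals = estimate_normals(pts)
```

On a tank surface, points whose normals stay consistent along a ring would be the datum candidates that the B-spline curve is then fitted through.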
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Qingpeng; Dinan, James; Tirukkovalur, Sravya
2016-01-28
Quantum Monte Carlo (QMC) applications perform simulation with respect to an initial state of the quantum mechanical system, which is often captured by using a cubic B-spline basis. This representation is stored as a read-only table of coefficients and accesses to the table are generated at random as part of the Monte Carlo simulation. Current QMC applications, such as QWalk and QMCPACK, replicate this table at every process or node, which limits scalability because increasing the number of processors does not enable larger systems to be run. We present a partitioned global address space approach to transparently managing this data using Global Arrays in a manner that allows the memory of multiple nodes to be aggregated. We develop an automated data management system that significantly reduces communication overheads, enabling new capabilities for QMC codes. Experimental results with QWalk and QMCPACK demonstrate the effectiveness of the data management system.
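The access pattern at issue — random lookups into a read-only table of cubic B-spline coefficients — can be illustrated in one dimension. This is a sketch only: the "orbital" function and grid are invented, and real QMC codes use 3-D tables distributed across nodes rather than a local array.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Build the read-only table once: knots, cubic B-spline coefficients, degree.
grid = np.linspace(0.0, 1.0, 200)
orbital = np.exp(-5.0*grid) * np.sin(8.0*grid)   # hypothetical 1-D "orbital"
tck = splrep(grid, orbital, k=3)                 # (t, c, k) coefficient table

# Random accesses, as generated by the Monte Carlo walk; each evaluation
# touches only the handful of coefficients whose basis functions overlap r.
rng = np.random.default_rng(4)
r = rng.uniform(0.0, 1.0, 10_000)
values = splev(r, tck)
```

The locality of each lookup (only ~4 coefficients per point in 1-D) is what makes remote, partitioned storage of the table feasible in the Global Arrays approach.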
Dou, Chao
2016-01-01
The storage volume of an internet data center is a classical time series. It is very valuable to predict the storage volume of a data center for its business value. However, the storage volume series from a data center is always “dirty,” containing noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series for future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which the Kalman filter is used to remove the “dirty” data; then cubic spline interpolation and an averaging method are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experiment results show that the developed method can estimate the main trend of the storage volume series accurately and makes a great contribution to predicting future volume values. PMID:28090205
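The pipeline — Kalman filtering of a dirty series, then cubic spline reconstruction on the full grid — can be sketched as below. The random-walk state model, noise variances, and synthetic series are assumptions, not the paper's tuned configuration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def kalman_smooth(z, q=1e-2, r=1.0):
    """Scalar random-walk Kalman filter: state = underlying volume level,
    q = process variance, r = measurement variance. NaNs (missing samples)
    are handled by skipping the update step."""
    x, p = z[np.isfinite(z)][0], 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p = p + q                        # predict
        if np.isfinite(zi):              # update only where data exist
            k = p / (p + r)
            x = x + k * (zi - x)
            p = (1.0 - k) * p
        out[i] = x
    return out

# Hypothetical daily storage-volume series: trend + seasonality + noise,
# with missing ("dirty") samples every 17th day.
t = np.arange(200.0)
truth = 50.0 + 0.2*t + 5.0*np.sin(t/20.0)
z = truth + np.random.default_rng(5).normal(0.0, 1.0, t.size)
z[::17] = np.nan

filtered = kalman_smooth(z)
keep = np.isfinite(z)
trend = CubicSpline(t[keep], filtered[keep])(t)  # reconstruct on the full grid
```

The cubic spline step restores a value at every time index, including the gaps the filter had to skip, giving the prediction stage an evenly sampled trend.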
On the spline-based wavelet differentiation matrix
NASA Technical Reports Server (NTRS)
Jameson, Leland
1993-01-01
The differentiation matrix for a spline-based wavelet basis is constructed. Given an n-th order spline basis, it is proved that the differentiation matrix is accurate to order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.
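The final observation — that splines generate compact finite difference schemes — can be illustrated for the periodic cubic case, where spline collocation yields the classical fourth-order compact scheme. The implementation below (dense circulant solve, test on a sine) is a sketch of that fact, not the paper's wavelet construction.

```python
import numpy as np

def spline_derivative_periodic(f, h):
    """First derivative by cubic-spline collocation on a uniform periodic grid.
    The spline derivative values satisfy the compact (circulant tridiagonal)
    system  f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h,
    i.e. the compact finite difference scheme generated by cubic splines."""
    n = len(f)
    A = 4.0*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A[0, -1] = A[-1, 0] = 1.0            # periodic wrap-around
    rhs = 3.0*(np.roll(f, -1) - np.roll(f, 1)) / h
    return np.linalg.solve(A, rhs)

n = 64
x = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
h = x[1] - x[0]
df = spline_derivative_periodic(np.sin(x), h)
err = np.abs(df - np.cos(x)).max()       # fourth-order accurate on this grid
```

For production use the tridiagonal-plus-corners system would be solved in O(n) (e.g., by the Sherman-Morrison trick) rather than with a dense solve.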
2015-12-01
ARL-SR-0347, December 2015. US Army Research Laboratory. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry.
Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases
NASA Astrophysics Data System (ADS)
Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre
2011-12-01
Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.
Cannabis smoking and lung cancer risk: Pooled analysis in the International Lung Cancer Consortium
Zhang, Li Rita; Morgenstern, Hal; Greenland, Sander; Chang, Shen-Chih; Lazarus, Philip; Teare, M. Dawn; Woll, Penella J.; Orlow, Irene; Cox, Brian; Brhane, Yonathan; Liu, Geoffrey; Hung, Rayjean J.
2014-01-01
To investigate the association between cannabis smoking and lung cancer risk, data on 2,159 lung cancer cases and 2,985 controls were pooled from 6 case-control studies in the US, Canada, UK, and New Zealand within the International Lung Cancer Consortium. Study-specific associations between cannabis smoking and lung cancer were estimated using unconditional logistic regression adjusting for sociodemographic factors, tobacco smoking status and pack-years; odds-ratio estimates were pooled using random effects models. Subgroup analyses were done for sex, histology and tobacco smoking status. The shapes of dose-response associations were examined using restricted cubic spline regression. The overall pooled OR for habitual versus nonhabitual or never users was 0.96 (95% CI: 0.66–1.38). Compared to nonhabitual or never users, the summary OR was 0.88 (95% CI: 0.63–1.24) for individuals who smoked 1 or more joint-equivalents of cannabis per day and 0.94 (95% CI: 0.67–1.32) for those who consumed at least 10 joint-years. For adenocarcinoma cases, the ORs were 1.73 (95% CI: 0.75–4.00) and 1.74 (95% CI: 0.85–3.55), respectively. However, no association was found for squamous cell carcinoma, based on small numbers. Weak associations between cannabis smoking and lung cancer were observed in never tobacco smokers. Spline modeling indicated a weak positive monotonic association between cumulative cannabis use and lung cancer, but precision was low at high exposure levels. Results from our pooled analyses provide little evidence for an increased risk of lung cancer among habitual or long-term cannabis smokers, although the possibility of potential adverse effect for heavy consumption cannot be excluded. PMID:24947688
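Random-effects pooling of study-specific odds ratios, as used here, can be sketched with the DerSimonian-Laird estimator. The study inputs below are hypothetical, not the consortium's data; study variances are recovered from the 95% CI width on the log scale.

```python
import numpy as np

def pool_random_effects(or_est, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of odds ratios given 95% CIs.
    Returns the pooled OR and its 95% CI."""
    y = np.log(or_est)                               # per-study log-OR
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                                  # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar)**2)                    # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    wstar = 1.0 / (se**2 + tau2)                     # random-effects weights
    mu = np.sum(wstar * y) / np.sum(wstar)
    se_mu = 1.0 / np.sqrt(np.sum(wstar))
    return np.exp(mu), np.exp(mu - 1.96*se_mu), np.exp(mu + 1.96*se_mu)

# Hypothetical study-level estimates (illustration only).
orr, lo, hi = pool_random_effects(
    np.array([0.90, 1.10, 0.80, 1.05]),
    np.array([0.60, 0.70, 0.50, 0.70]),
    np.array([1.35, 1.73, 1.28, 1.58]))
```

The between-study variance `tau2` is what widens the pooled CI relative to a fixed-effect analysis when the studies disagree.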
Virji, M. Abbas; Trapnell, Bruce C.; Carey, Brenna; Healey, Terrance; Kreiss, Kathleen
2014-01-01
Rationale: Occupational exposure to indium compounds, including indium–tin oxide, can result in potentially fatal indium lung disease. However, the early effects of exposure on the lungs are not well understood. Objectives: To determine the relationship between short-term occupational exposures to indium compounds and the development of early lung abnormalities. Methods: Among indium–tin oxide production and reclamation facility workers, we measured plasma indium, respiratory symptoms, pulmonary function, chest computed tomography, and serum biomarkers of lung disease. Relationships between plasma indium concentration and health outcome variables were evaluated using restricted cubic spline and linear regression models. Measurements and Main Results: Eighty-seven (93%) of 94 indium–tin oxide facility workers (median tenure, 2 yr; median plasma indium, 1.0 μg/l) participated in the study. Spirometric abnormalities were not increased compared with the general population, and few subjects had radiographic evidence of alveolar proteinosis (n = 0), fibrosis (n = 2), or emphysema (n = 4). However, in internal comparisons, participants with plasma indium concentrations ≥ 1.0 μg/l had more dyspnea, lower mean FEV1 and FVC, and higher median serum Krebs von den Lungen-6 and surfactant protein-D levels. Spline regression demonstrated nonlinear exposure response, with significant differences occurring at plasma indium concentrations as low as 1.0 μg/l compared with the reference. Associations between health outcomes and the natural log of plasma indium concentration were evident in linear regression models. Associations were not explained by age, smoking status, facility tenure, or prior occupational exposures. Conclusions: In indium–tin oxide facility workers with short-term, low-level exposure, plasma indium concentrations lower than previously reported were associated with lung symptoms, decreased spirometric parameters, and increased serum biomarkers of lung disease. 
PMID:25295756
Dose-response relationship between sports activity and musculoskeletal pain in adolescents.
Kamada, Masamitsu; Abe, Takafumi; Kitayuguchi, Jun; Imamura, Fumiaki; Lee, I-Min; Kadowaki, Masaru; Sawada, Susumu S; Miyachi, Motohiko; Matsui, Yuzuru; Uchio, Yuji
2016-06-01
Physical activity has multiple health benefits but may also increase the risk of developing musculoskeletal pain (MSP). However, the relationship between physical activity and MSP has not been well characterized. This study examined the dose-response relationship between sports activity and MSP among adolescents. Two school-based serial surveys were conducted 1 year apart in adolescents aged 12 to 18 years in Unnan, Japan. Self-administered questionnaires were completed by 2403 students. Associations between time spent in organized sports activity and MSP were analyzed cross-sectionally (n = 2403) and longitudinally (n = 374, students free of pain and in seventh or 10th grade at baseline) with repeated-measures Poisson regression and restricted cubic splines, with adjustment for potential confounders. The prevalence of overall pain, defined as having pain recently at least several times a week in at least one part of the body, was 27.4%. In the cross-sectional analysis, sports activity was significantly associated with pain prevalence. Each additional 1 h/wk of sports activity was associated with a 3% higher probability of having pain (prevalence ratio = 1.03, 95% confidence interval = 1.02-1.04). Similar trends were found across causes (traumatic and nontraumatic pain) and anatomic locations (upper limbs, lower back, and lower limbs). In longitudinal analysis, the risk ratio for developing pain at 1-year follow-up per 1 h/wk increase in baseline sports activity was 1.03 (95% confidence interval = 1.02-1.05). Spline models indicated a linear association (P < 0.001) but not a nonlinear association (P ≥ 0.45). The more the adolescents played sports, the more likely they were to have and develop pain.
Epidemiology of Road Traffic Incidents in Peru 1973–2008: Incidence, Mortality, and Fatality
Miranda, J. Jaime; López-Rivera, Luis A.; Quistberg, D. Alex; Rosales-Mayor, Edmundo; Gianella, Camila; Paca-Palao, Ada; Luna, Diego; Huicho, Luis; Paca, Ada; Luis, López; Luna, Diego; Rosales, Edmundo; Best, Pablo; Best, Pablo; Egúsquiza, Miriam; Gianella, Camila; Lema, Claudia; Ludeña, Esperanza; Miranda, J. Jaime; Huicho, Luis
2014-01-01
Background: The epidemiological profile and trends of road traffic injuries (RTIs) in Peru have not been well-defined, though this is a necessary step to address this significant public health problem in Peru. The objective of this study was to determine trends of incidence, mortality, and fatality of RTIs in Peru during 1973–2008, as well as their relationship to population trends such as economic growth. Methods and Findings: Secondary aggregated databases were used to estimate incidence, mortality and fatality rate ratios (IRRs) of RTIs. These estimates were standardized to age groups and sex of the 2008 Peruvian population. Negative binomial regression and cubic spline curves were used for multivariable analysis. During the 35-year period there were 952,668 road traffic victims, injured or killed. The adjusted yearly incidence of RTIs increased by 3.59 (95% CI 2.43–5.31) on average. We did not observe any significant trends in the yearly mortality rate. The total adjusted yearly fatality rate decreased by 0.26 (95% CI 0.15–0.43), while among adults the fatality rate increased by 1.25 (95% CI 1.09–1.43). Models fitted with splines suggest that the incidence follows a bimodal curve and closely followed trends in the gross domestic product (GDP) per capita. Conclusions: The significant increasing incidence of RTIs in Peru affirms their growing threat to public health. A substantial improvement of information systems for RTIs is needed to create a more accurate epidemiologic profile of RTIs in Peru. This approach can be of use in other similar low and middle-income settings to inform about the local challenges posed by RTIs. PMID:24927195
Liu, Yaoming; Cohen, Mark E; Hall, Bruce L; Ko, Clifford Y; Bilimoria, Karl Y
2016-08-01
The American College of Surgeons (ACS) NSQIP Surgical Risk Calculator has been widely adopted as a decision aid and informed consent tool by surgeons and patients. Previous evaluations showed excellent discrimination and combined discrimination and calibration, but model calibration alone, and potential benefits of recalibration, were not explored. Because lack of calibration can lead to systematic errors in assessing surgical risk, our objective was to assess calibration and determine whether spline-based adjustments could improve it. We evaluated Surgical Risk Calculator model calibration, as well as discrimination, for each of 11 outcomes modeled from nearly 3 million patients (2010 to 2014). Using independent random subsets of data, we evaluated model performance for the Development (60% of records), Validation (20%), and Test (20%) datasets, where prediction equations from the Development dataset were recalibrated using restricted cubic splines estimated from the Validation dataset. We also evaluated performance on data subsets composed of higher-risk operations. The nonrecalibrated Surgical Risk Calculator performed well, but there was a slight tendency for predicted risk to be overestimated for lowest- and highest-risk patients and underestimated for moderate-risk patients. After recalibration, this distortion was eliminated, and p values for miscalibration were most often nonsignificant. Calibration was also excellent for subsets of higher-risk operations, though observed calibration was reduced due to instability associated with smaller sample sizes. Performance of NSQIP Surgical Risk Calculator models was shown to be excellent and improved with recalibration. Surgeons and patients can rely on the calculator to provide accurate estimates of surgical risk.
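The kind of miscalibration described — over- or underestimation within risk bands — is usually diagnosed by comparing mean predicted risk with the observed event rate by decile of prediction. The sketch below shows that check on simulated, deliberately well-calibrated predictions; it is not the NSQIP recalibration procedure itself.

```python
import numpy as np

def calibration_by_decile(p, y):
    """Mean predicted risk vs observed event rate within deciles of predicted
    risk: the standard visual check for miscalibration by risk band."""
    order = np.argsort(p)
    bins = np.array_split(order, 10)
    pred = np.array([p[b].mean() for b in bins])
    obs = np.array([y[b].mean() for b in bins])
    return pred, obs

# Simulated well-calibrated predictions: outcomes drawn at the predicted rate,
# so observed rates should track predicted rates in every decile.
rng = np.random.default_rng(6)
p = rng.uniform(0.01, 0.5, 50_000)
y = (rng.uniform(size=p.size) < p).astype(float)
pred, obs = calibration_by_decile(p, y)
```

A spline-based recalibration, as in the paper, would then regress the observed outcomes on a restricted-cubic-spline transform of the predicted logit and replace the predictions with the fitted values.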
Towards a threshold climate for emergency lower respiratory hospital admissions.
Islam, Muhammad Saiful; Chaussalet, Thierry J; Koizumi, Naoru
2017-02-01
Identification of 'cut-points' or thresholds of climate factors would play a crucial role in alerting risks of climate change and providing guidance to policymakers. This study investigated a 'Climate Threshold' for emergency hospital admissions of chronic lower respiratory diseases by using a distributed lag non-linear model (DLNM). We analysed a unique longitudinal dataset (10 years, 2000-2009) on emergency hospital admissions, climate, and pollution factors for Greater London. Our study extends existing work on this topic by considering non-linearity and lag effects between climate factors and disease exposure within the DLNM, with B-splines as the smoothing technique. The final model also considered natural cubic splines of time since exposure and 'day of the week' as confounding factors. The results of the DLNM indicated a significant improvement in model fitting compared to a typical GLM model. The final model identified the thresholds of several climate factors, including: high temperature (≥27°C), low relative humidity (≤40%), high PM10 level (≥70 µg/m³), low wind speed (≤2 knots) and high rainfall (≥30 mm). Beyond the threshold values, a significantly higher number of emergency admissions due to lower respiratory problems would be expected within the following 2-3 days after the climate shift in Greater London. The approach will be useful to initiate 'region and disease specific' climate mitigation plans. It will help identify spatial hot spots and the most sensitive areas and populations under climate change, and will eventually lead towards a diversified health warning system tailored to specific climate zones and populations.
ERIC Educational Resources Information Center
Ying, Yang
2015-01-01
This study aimed to seek an in-depth understanding about English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…
ERIC Educational Resources Information Center
Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin
2008-01-01
Previous work in the literature reveals that EFL learners were deficient in collocations that are a hallmark of near native fluency in learner's writing. Among different types of collocations, the verb-noun (V-N) one was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…
ERIC Educational Resources Information Center
Heidrick, Ingrid T.
2017-01-01
This study compares monolinguals and different kinds of bilinguals with respect to their knowledge of the type of lexical phenomenon known as collocation. Collocations are word combinations that speakers use recurrently, forming the basis of conventionalized lexical patterns that are shared by a linguistic community. Examples of collocations…
Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias
2015-01-01
The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared on an artificial example and on analyses of fixation durations during reading. PMID:25816246
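The idea of picking the basis and penalty independently can be illustrated with a standard P-spline fit: a generic B-spline basis plus a second-difference penalty on the coefficients. This is a sketch under assumed data, knots, and smoothing parameter, not the article's estimator.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, knots, k=3):
    """Cubic B-spline design matrix: column j is the j-th basis function,
    obtained by evaluating a spline with a unit coefficient vector."""
    n = len(knots) - k - 1
    B = np.empty((len(x), n))
    for j in range(n):
        c = np.zeros(n)
        c[j] = 1.0
        B[:, j] = BSpline(knots, c, k)(x)
    return B

# Hypothetical smooth-signal data.
rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0.0, 1.0, 300))
y = np.sin(2*np.pi*x) + rng.normal(0.0, 0.1, x.size)

interior = np.linspace(0.0, 1.0, 20)
knots = np.r_[[0.0]*3, interior, [1.0]*3]     # clamped cubic knot vector
B = bspline_basis(x, knots)
D = np.diff(np.eye(B.shape[1]), 2, axis=0)    # second-difference penalty
lam = 1.0                                     # smoothing parameter (assumed)
coef = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
fit = B @ coef
```

Here the basis (B-splines) and the penalty (discrete roughness) are chosen separately, which is exactly the freedom the article argues for relative to tensor-product SS-ANOVA constructions.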
Wavelet based free-form deformations for nonrigid registration
NASA Astrophysics Data System (ADS)
Sun, Wei; Niessen, Wiro J.; Klein, Stefan
2014-03-01
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
Student Support for Research in Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.
1999-01-01
Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.
2013-08-01
transformation models, such as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the...research project. References: 1. Bookstein FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern...Rohr K, Stiehl HS, Sprengel R, Buzug TM, Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions
Bayesian B-spline mapping for dynamic quantitative traits.
Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong
2012-04-01
Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on the maximum likelihood; (2) for the simulated dataset with a complicated growth curve generated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by the Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.
Collocational Links in the L2 Mental Lexicon and the Influence of L1 Intralexical Knowledge
ERIC Educational Resources Information Center
Wolter, Brent; Gyllstad, Henrik
2011-01-01
This article assesses the influence of L1 intralexical knowledge on the formation of L2 intralexical collocations. Two tests, a primed lexical decision task (LDT) and a test of receptive collocational knowledge, were administered to a group of non-native speakers (NNSs) (L1 Swedish), with native speakers (NSs) of English serving as controls on the…
ERIC Educational Resources Information Center
Jaen, Maria Moreno
2007-01-01
This paper reports an assessment of the collocational competence of students of English Linguistics at the University of Granada. This was carried out to meet a two-fold purpose. On the one hand, we aimed to establish a solid corpus-driven approach based upon a systematic and reliable framework for the evaluation of collocational competence in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruberti, M.; Averbukh, V.; Decleva, P.
2014-10-28
We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for the Gaussian basis calculations, such as Cooper minima and high-energy tails, are reproduced by the B-spline ADC in very good agreement with experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1976-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.
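The regression-equations view of least squares collocation fits in a few lines: the estimate is the conditional mean of the signal given correlated data, s_hat = C_st (C_tt + D)^{-1} t. The exponential covariance model and every parameter value below are illustrative assumptions, not taken from the report.

```python
import numpy as np

def lsc_estimate(x_obs, t, x_pred, noise_var, corr_len=1.0, sig_var=1.0):
    """Least squares collocation: conditional mean of the signal given the
    data, s_hat = C_st (C_tt + D)^{-1} t.  The exponential covariance and
    all parameter values here are illustrative assumptions."""
    def cov(a, b):
        return sig_var * np.exp(-np.abs(a[:, None] - b[None, :]) / corr_len)
    Ctt = cov(x_obs, x_obs) + noise_var * np.eye(len(x_obs))  # C_tt + D
    Cst = cov(x_pred, x_obs)                                   # C_st
    return Cst @ np.linalg.solve(Ctt, t)

x = np.linspace(0.0, 5.0, 20)
t = np.sin(x)                          # nearly noiseless "observations"
s_hat = lsc_estimate(x, t, x, noise_var=1e-6)
```

With vanishing noise the estimate reproduces the data at the observation points, mirroring the equivalence to a properly weighted conventional solution noted in the abstract.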
Multicategorical Spline Model for Item Response Theory.
ERIC Educational Resources Information Center
Abrahamowicz, Michal; Ramsay, James O.
1992-01-01
A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
Fitting multidimensional splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
Analyzing degradation data with a random effects spline regression model
Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip
2017-03-17
This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time and makes estimation of reliability easy to perform.
The algorithms for rational spline interpolation of surfaces
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1986-01-01
Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.
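SciPy offers no rational splines with tension parameters, but the tensor-product construction both algorithms share is directly available. The sketch below interpolates a gridded surface with a bicubic tensor-product spline; the synthetic data stand in for the terrain elevations of the report.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic gridded surface standing in for terrain elevation data.
x = np.linspace(0.0, 1.0, 11)
y = np.linspace(0.0, 2.0, 21)
Z = np.outer(x, y)                       # f(x, y) = x * y sampled on the grid

# Tensor product of univariate cubic splines; s=0 forces interpolation.
surf = RectBivariateSpline(x, y, Z, kx=3, ky=3, s=0)

# Off-grid evaluation: x*y lies in the bicubic tensor space, so it is exact.
val = surf(0.55, 1.05)[0, 0]
```

The univariate tension splines of the report would replace the cubic factors in each direction; the tensor-product assembly is unchanged.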
NASA Technical Reports Server (NTRS)
Eren, K.
1980-01-01
The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation are explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.
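The payoff of exploiting Toeplitz structure can be seen with SciPy's Levinson-recursion solver, which runs in O(n^2) time versus O(n^3) for a dense solve. This is a generic illustration; the block-Toeplitz algorithm of the report is not reproduced, and the matrix below is an arbitrary well-conditioned example.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# First column of a symmetric, diagonally dominant Toeplitz matrix.
c = np.array([4.0, 1.0, 0.5, 0.25, 0.1])
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

x_fast = solve_toeplitz(c, b)             # Levinson recursion, O(n^2)
x_ref = np.linalg.solve(toeplitz(c), b)   # dense reference solve, O(n^3)
```

Beyond the operation count, the structured solver never forms the full matrix, which is the storage gain the abstract refers to.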
Sequential deconvolution from wave-front sensing using bivariate simplex splines
NASA Astrophysics Data System (ADS)
Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai
2015-05-01
Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex splines based average-slopes measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; then, the distorted wave-front is uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution processing. Simulated experiments at different turbulence strengths show that our method delivers superior image restoration and noise rejection, especially when extracting the multidirectional phase derivatives.
Recent advances in numerical PDEs
NASA Astrophysics Data System (ADS)
Zuev, Julia Michelle
In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions s_{i,j}(u, v) = Σ_{m,n} h_{mn} H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this NxM system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. 
We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the standard algorithm and is just as accurate. Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We research Richardson extrapolation based techniques to improve time discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial, if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability. Topic 4. Shock waves and their energy dissipation properties are critical to understanding the dynamics controlling the MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g. the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. Also, we analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
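The fast 2-D spline of Topic 2 rests on a standard fact: the unknown first derivatives of a cubic spline on a uniform grid satisfy a tridiagonal system whose matrix depends only on the grid, so its factorization can be prepared once and reused for many right-hand sides. A minimal 1-D sketch for a complete spline with prescribed end slopes (an illustration, not the thesis code):

```python
import numpy as np
from scipy.linalg import solve_banded

def spline_slopes(x, y, d0, dn):
    """First derivatives of the complete cubic spline at equispaced nodes:
    d_{i-1} + 4 d_i + d_{i+1} = 3 (y_{i+1} - y_{i-1}) / h  at interior nodes,
    with the end slopes d0 and dn prescribed.  The banded matrix depends only
    on the grid, so its factorization could be cached across right-hand sides."""
    n = len(x)
    h = x[1] - x[0]
    rhs = np.empty(n)
    rhs[0], rhs[-1] = d0, dn
    rhs[1:-1] = 3.0 * (y[2:] - y[:-2]) / h
    ab = np.zeros((3, n))
    ab[0, 2:] = 1.0               # superdiagonal (interior rows)
    ab[1, 1:-1] = 4.0             # main diagonal (interior rows)
    ab[1, 0] = ab[1, -1] = 1.0    # boundary rows: slopes fixed
    ab[2, :-2] = 1.0              # subdiagonal (interior rows)
    return solve_banded((1, 1), ab, rhs)

x = np.linspace(0.0, 1.0, 9)
d = spline_slopes(x, x**2, d0=0.0, dn=2.0)  # spline reproduces x^2, so d = 2x
```

Applying this solve along each grid line, with the factorization reused, is exactly where the N-fold saving of the thesis comes from.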
TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION
NASA Technical Reports Server (NTRS)
Smith, R. E.
1994-01-01
TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.
An Examination of New Paradigms for Spline Approximations.
Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A
2006-01-01
Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation-based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case.
Conformal Solid T-spline Construction from Boundary T-spline Representations
2012-07-01
A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.
Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao
2016-01-01
The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for a large amount of medical image data. To address the issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is adopted as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced, since the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large set of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation, due to the powerful parallel computing ability of the Graphics Processing Unit (GPU).
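The lookup-table idea is simple to illustrate: in B-spline FFD, the four cubic B-spline weights at a voxel depend only on its fractional offset within a control-point cell, so they can be tabulated once at a fixed set of subpixel positions and reused for every voxel. A sketch of such a LUT (the paper's GPU kernels and LSD metric are not reproduced; the table size is an arbitrary choice):

```python
import numpy as np

def cubic_bspline_weights(u):
    """The four uniform cubic B-spline basis values for offset u in [0, 1)."""
    return np.array([
        (1.0 - u) ** 3 / 6.0,
        (3.0 * u**3 - 6.0 * u**2 + 4.0) / 6.0,
        (-3.0 * u**3 + 3.0 * u**2 + 3.0 * u + 1.0) / 6.0,
        u**3 / 6.0,
    ])

# LUT: weights precomputed once at N subpixel offsets, reused per voxel.
N = 64
LUT = np.stack([cubic_bspline_weights(i / N) for i in range(N)])
```

Because the weights form a partition of unity, each LUT row sums to one, which is also a handy sanity check on the table.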
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
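At its core, the P-spline estimator described above is a ridge-type solve: a B-spline design matrix B combined with a difference penalty D gives coefficients (BᵀB + λDᵀD)⁻¹Bᵀy. A minimal univariate sketch on illustrative data (not the functional-coefficient model itself; knot count and penalty order are arbitrary choices):

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=12, k=3, lam=1.0, diff_order=2):
    """Penalized (P-)spline fit: ridge-type shrinkage on B-spline
    coefficients via a difference penalty, (B'B + lam D'D) c = B'y."""
    t = np.r_[[x.min()] * k, np.linspace(x.min(), x.max(), n_knots), [x.max()] * k]
    n = len(t) - k - 1
    B = BSpline.design_matrix(x, t, k).toarray()
    D = np.diff(np.eye(n), n=diff_order, axis=0)   # difference penalty matrix
    c = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ c

x = np.linspace(0.0, 1.0, 80)
y = x * (1.0 - x)                    # smooth noiseless test signal
fit = pspline_fit(x, y, lam=1e-8)    # near-zero penalty: essentially least squares
```

Varying `lam` traces out the fidelity-smoothness trade-off that the cross-validation and REML criteria in the abstract are choosing between; assigning a different λ per coefficient function gives the differing smoothness the authors mention.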
Avsec, Žiga; Cheng, Jun; Gagneur, Julien
2018-01-01
Motivation: Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as transcription start site, exon boundaries or polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its strength to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Results: Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Availability and implementation: Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at https://github.com/gagneurlab/Manuscript_Avsec_Bioinformatics_2017. Contact: avsec@in.tum.de or gagneur@in.tum.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID: 29155928
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yi; Errichello, Robert
2013-08-29
An analytical model is developed to evaluate the design of a spline coupling. For a given torque and shaft misalignment, the model calculates the number of teeth in contact, tooth loads, stiffnesses, stresses, and safety factors. The analytic model provides essential spline coupling design and modeling information and could be easily integrated into gearbox design and simulation tools.
Design, Test, and Evaluation of a Transonic Axial Compressor Rotor with Splitter Blades
2013-09-01
[List-of-figures excerpt: third-order spline fit for the blade camber line distribution (Figure 13); third-order spline fit for the blade thickness distribution (Figure 14); blade leading edge: third-order spline fit for the thickness distribution (Figure 15); blade leading edge and trailing edge slope blending (Figure 16).]
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1985-01-01
Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and its spectral properties are investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.
Comparison of Implicit Collocation Methods for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)
2001-01-01
We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first one is based on an explicit computation of the coefficients of polynomials and the second one relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.
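The spatial side of such a method can be sketched with the classic fourth-order compact (Padé) approximation of the second derivative, which couples neighbouring derivative values through a tridiagonal system. This is a generic illustration; the collocation-in-time component and the parallel CRAY implementation are not reproduced, and exact boundary values are supplied for simplicity.

```python
import numpy as np
from scipy.linalg import solve_banded

def compact_second_derivative(f, h, d2_left, d2_right):
    """Fourth-order compact (Pade) scheme on a uniform grid:
    f''_{i-1} + 10 f''_i + f''_{i+1} = 12 (f_{i-1} - 2 f_i + f_{i+1}) / h^2,
    with the second derivative prescribed at the two boundary nodes."""
    n = len(f)
    rhs = 12.0 * (f[:-2] - 2.0 * f[1:-1] + f[2:]) / h**2
    rhs[0] -= d2_left
    rhs[-1] -= d2_right
    m = n - 2
    ab = np.zeros((3, m))
    ab[0, 1:] = 1.0    # superdiagonal
    ab[1, :] = 10.0    # main diagonal
    ab[2, :-1] = 1.0   # subdiagonal
    interior = solve_banded((1, 1), ab, rhs)
    return np.concatenate(([d2_left], interior, [d2_right]))

x = np.linspace(0.0, 2.0 * np.pi, 101)
f = np.sin(x)
d2 = compact_second_derivative(f, x[1] - x[0], -np.sin(x[0]), -np.sin(x[-1]))
```

On this grid the compact scheme recovers sin'' = -sin to roughly eighth-digit accuracy, far beyond the second-order three-point formula on the same stencil width.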
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.
2013-01-01
Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) computational capabilities of Fup basis functions with compact support capable to resolve all spatial and temporal scales, 2) multi-resolution presentation of heterogeneity as well as all other input and output variables, 3) accurate, adaptive and efficient strategy and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but it also describes subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding the classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatial adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step: the algorithm uses smaller time steps only in lines where solution changes are intensive. 
Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines and local time-stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil, which governs the accurate calculation of spatial derivatives. Since the common approach applied for wavelets and splines uses a finite difference operator, we develop here a collocation operator that includes solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions for groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
Spline curve matching with sparse knot sets
Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman
2004-01-01
This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on relative locations of corresponding knot points and is thus reliable primarily for dense point sets, we use deformation energy of thin-plate-spline mapping between sparse knot points and normalized local...
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphics, computer-aided design, computer-aided manufacturing, and computer numerical control. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is identifying the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting curves of any form with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both location and continuity level, by employing a non-linear least squares technique. The B-spline function is then obtained by solving the ordinary least squares problem. The performance of the proposed method is validated on various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. The paper also benchmarks the proposed method against existing methods in the literature. The proposed method is shown to reconstruct B-spline functions from sampled data within acceptable tolerance, and to be applicable to fitting any type of curve, from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
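The first step of the two-step scheme above, bisecting the data until every segment meets an error budget, can be illustrated with a deliberately simplified sketch. Here the per-segment error is measured against the chord between segment endpoints rather than a full B-spline fit, and the names (`coarse_knots`, `max_deviation`) are hypothetical, not the authors':

```python
def max_deviation(x, y, i, j):
    """Max vertical deviation of points i..j from the chord (x[i],y[i])-(x[j],y[j])."""
    dx = x[j] - x[i]
    worst = 0.0
    for k in range(i + 1, j):
        yk = y[i] + (y[j] - y[i]) * (x[k] - x[i]) / dx  # chord value at x[k]
        worst = max(worst, abs(y[k] - yk))
    return worst

def coarse_knots(x, y, tol):
    """Recursively bisect the data until every segment fits its chord within tol."""
    knots = [0]
    stack = [(0, len(x) - 1)]
    while stack:
        i, j = stack.pop()
        if j - i <= 1 or max_deviation(x, y, i, j) <= tol:
            knots.append(j)          # segment is good: keep its right endpoint
        else:
            m = (i + j) // 2         # split at the midpoint and recurse
            stack.append((m, j))
            stack.append((i, m))
    return sorted(set(knots))
```

On a V-shaped data set `y = |x - 4|` this places a knot exactly at the corner, which is the behavior the coarse step is after.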
NASA Astrophysics Data System (ADS)
Gilliot, Mickaël; Hadjadj, Aomar; Stchakovsky, Michel
2017-11-01
An original method of ellipsometric data inversion is proposed based on the use of constrained splines. The imaginary part of the dielectric function is represented by a series of splines, constructed with particular constraints on the slopes at the node boundaries to avoid the well-known oscillations of natural splines. The nodes are used as fit parameters. The real part is calculated using Kramers-Kronig relations. The inversion can be performed in successive steps with increasing resolution. This method is used to characterize thin zinc oxide layers obtained by a sol-gel and spin-coating process, with a particular recipe yielding very thin layers presenting nano-porosity. Such layers have particular optical properties correlated with their thickness and with morphological and structural properties. The constrained spline method is particularly efficient for such materials, which may not be easily represented by standard dielectric function models.
NASA Astrophysics Data System (ADS)
Hast, J.; Okkonen, M.; Heikkinen, H.; Krehut, L.; Myllylä, R.
2006-06-01
A self-mixing interferometer is proposed to measure nanometre-scale optical path length changes in the interferometer's external cavity. The developed technique uses a blue-emitting GaN laser diode as the light source. An external reflector, a silicon mirror driven by a piezo nanopositioner, is used to produce an interference signal, which is detected with the monitor photodiode of the laser diode. Changing the optical path length of the external cavity introduces a phase difference into the interference signal. This phase difference is detected using a signal processing algorithm based on Pearson's correlation coefficient and cubic spline interpolation techniques. The results show that the average deviation between the measured and actual displacements of the silicon mirror is 3.1 nm in the 0-110 nm displacement range. Moreover, the measured displacements follow the actual displacement of the silicon mirror linearly. Finally, the paper considers the effects of the temperature and current stability of the laser diode, as well as dispersion effects in the external cavity of the interferometer; these reduce the sensor's measurement accuracy, especially in long-term measurements.
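The correlation-based phase detection described above can be sketched as a template search: compare the measured fringe signal against phase-shifted reference templates and keep the phase with the highest Pearson coefficient. This is a minimal illustration under assumed conditions (a cosine template and a brute-force phase grid), not the authors' algorithm:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def estimate_phase(signal, ts, n_candidates=360):
    """Return the candidate phase whose cosine template best correlates with signal."""
    best_phi, best_r = 0.0, -2.0
    for k in range(n_candidates):
        phi = 2 * math.pi * k / n_candidates
        template = [math.cos(t + phi) for t in ts]
        r = pearson(signal, template)
        if r > best_r:
            best_phi, best_r = phi, r
    return best_phi
```

In practice one would refine the grid maximum (e.g. by interpolation, as the paper does with cubic splines) rather than accept the coarse grid value.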
NASA Astrophysics Data System (ADS)
Tan, Rui Shan; Zhai, Huan Chen; Yan, Wei; Gao, Feng; Lin, Shi Ying
2017-04-01
A new ab initio potential energy surface (PES) for the ground state of Li + HCl reactive system has been constructed by three-dimensional cubic spline interpolation of 36 654 ab initio points computed at the MRCI+Q/aug-cc-pV5Z level of theory. The title reaction is found to be exothermic by 5.63 kcal/mol (9 kcal/mol with zero point energy corrections), which is very close to the experimental data. The barrier height, which is 2.99 kcal/mol (0.93 kcal/mol for the vibrationally adiabatic barrier height), and the depth of van der Waals minimum located near the entrance channel are also in excellent agreement with the experimental findings. This study also identified two more van der Waals minima. The integral cross sections, rate constants, and their dependence on initial rotational states are calculated using an exact quantum wave packet method on the new PES. They are also in excellent agreement with the experimental measurements.
Highly accurate adaptive TOF determination method for ultrasonic thickness measurement
NASA Astrophysics Data System (ADS)
Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing
2018-04-01
Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable-step-size adaptive algorithm with a comprehensive step-size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrated the advantage of the proposed TOF determination method over existing ones: compared with the conventional fixed-step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, making it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicated that the proposed TOF determination method is robust even under low SNR conditions and significantly improves ultrasonic thickness measurement accuracy.
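A variable-step-size adaptive delay estimator of the general kind discussed above can be sketched as an LMS filter whose step size follows the classic Kwong-Johnston update (mu <- alpha*mu + gamma*e^2); the index of the peak filter tap then gives the integer-sample delay. The parameter values and the name `vss_lms_delay` are illustrative assumptions, not the paper's algorithm:

```python
def vss_lms_delay(x, d, taps=8, mu=0.01, alpha=0.97, gamma=0.005, mu_max=0.1):
    """Estimate the integer delay of d relative to x with a VSS-LMS filter.

    The step size grows while the error is large and shrinks as the filter
    converges; it is clamped to (0, mu_max] for stability.
    """
    w = [0.0] * taps
    for n in range(taps, len(x)):
        frame = x[n - taps + 1:n + 1][::-1]           # x[n], x[n-1], ..., x[n-taps+1]
        y = sum(wi * xi for wi, xi in zip(w, frame))  # filter output
        e = d[n] - y                                  # estimation error
        w = [wi + mu * e * xi for wi, xi in zip(w, frame)]
        mu = min(mu_max, alpha * mu + gamma * e * e)  # Kwong-Johnston step update
    return max(range(taps), key=lambda i: abs(w[i])) # peak tap = delay in samples
```

Sub-sample resolution, which the paper obtains via cubic spline fitting, would then come from interpolating around the peak tap.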
Grid generation and surface modeling for CFD
NASA Technical Reports Server (NTRS)
Connell, Stuart D.; Sober, Janet S.; Lamson, Scott H.
1995-01-01
When computing the flow around complex three dimensional configurations, the generation of the mesh is the most time consuming part of any calculation. With some meshing technologies this can take of the order of a man month or more. The requirement for a number of design iterations coupled with ever decreasing time allocated for design leads to the need for a significant acceleration of this process. Of the two competing approaches, block-structured and unstructured, only the unstructured approach will allow fully automatic mesh generation directly from a CAD model. Using this approach coupled with the techniques described in this paper, it is possible to reduce the mesh generation time from man months to a few hours on a workstation. The desire to closely couple a CFD code with a design or optimization algorithm requires that the changes to the geometry be performed quickly and in a smooth manner. This need for smoothness necessitates the use of Bezier polynomials in place of the more usual NURBS or cubic splines. A two dimensional Bezier polynomial based design system is described.
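Evaluating the Bezier polynomials favored above is commonly done with de Casteljau's algorithm, which needs only repeated linear interpolation and is numerically stable. A generic sketch (not the two-dimensional design system itself):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve with the given control points at parameter t in [0, 1]
    by repeatedly interpolating adjacent points until one point remains."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

For the quadratic with control points (0,0), (1,2), (2,0), the midpoint parameter t = 0.5 gives (1.0, 1.0), matching the Bernstein-form evaluation.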
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectra of their weight matrices. The number of significant singular values in the spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis varies the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
A spectral reflectance estimation technique using multispectral data from the Viking lander camera
NASA Technical Reports Server (NTRS)
Park, S. K.; Huck, F. O.
1976-01-01
A technique is formulated for constructing spectral reflectance curve estimates from multispectral data obtained with the Viking lander camera. The multispectral data are limited to six spectral channels in the wavelength range from 0.4 to 1.1 micrometers, and most of these channels exhibit appreciable out-of-band response. The output of each channel is expressed as a linear (integral) function of the (known) solar irradiance, atmospheric transmittance, and camera spectral responsivity, and of the (unknown) spectral reflectance. This produces six equations which are used to determine the coefficients in a representation of the spectral reflectance as a linear combination of known basis functions. Natural cubic spline reflectance estimates are produced for a variety of materials that can reasonably be expected to occur on Mars. In each case the dominant reflectance features are accurately reproduced, but small-period features are lost due to the limited number of channels. This technique may be a valuable aid in selecting the number of spectral channels and their responsivity shapes when designing a multispectral imaging system.
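The recovery step above, channel outputs expressed as linear functionals of a reflectance expanded in known basis functions, reduces to a small square linear system for the coefficients. A toy sketch with two channels and two basis functions; all function choices, the wavelength grid, and the names (`channel_matrix`, `solve_linear`) are invented for illustration:

```python
def channel_matrix(responsivities, basis, grid):
    """A[i][j] ~ integral of responsivity_i * basis_j over a uniform wavelength grid."""
    dw = grid[1] - grid[0]
    return [[sum(R(w) * phi(w) for w in grid) * dw for phi in basis]
            for R in responsivities]

def solve_linear(A, b):
    """Solve the square system A c = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]     # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]              # pivot row swap
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):                   # back substitution
        c[r] = (M[r][n] - sum(M[r][k] * c[k] for k in range(r + 1, n))) / M[r][r]
    return c
```

Given simulated channel outputs generated from known coefficients, `solve_linear` recovers those coefficients exactly, mirroring how the six channel equations determine the six basis coefficients.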
NASA Astrophysics Data System (ADS)
Kimel'blat, V. I.; Volfson, S. I.; Chebotareva, I. G.; Malysheva, T. V.
1998-09-01
Pressure relaxation was examined in the cylinder of an MPT Monsanto processability tester after stopping the piston. The experimental pressure-drop function F(t) was smoothed and approximated by cubic splines. The spectra of pressure relaxation times (SPRT) were obtained according to the method of Schwarzl-Staverman. The SPRT method served well for estimating the spectra of the molecular-mass distribution (MMD) of polymers, which are close in physical sense to the SPRT. The correlation between the characteristic relaxation times and the average molecular mass of ethylene-propylene rubbers and polyethylenes obtained by gel permeation chromatography was approximated by optimal models used for calculating the molecular mass of rubbers from measurements of melt pressure relaxation. The SPRT and characteristic relaxation times were used to analyze significant technical properties of compositions based on polyethylene and rubber. The SPRT method was also used to examine the failure of the cure network of butyl rubber and the dependence of the mechanical properties of thermoplastic elastomers on the molecular features of the composite.
Fault zone structure determined through the analysis of earthquake arrival times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelini, A.
1991-10-01
This thesis develops and applies a technique for the simultaneous determination of P and S wave velocity models and hypocenters from a set of arrival times. The velocity models are parameterized in terms of cubic B-spline basis functions, which permit the retrieval of smooth models that can be used directly for the generation of synthetic seismograms using the ray method. In addition, this type of smoothing limits the rise of instabilities related to the poor resolving power of the data. V_P/V_S ratios calculated from P and S models generally display instabilities related to the different ray coverages of compressional and shear waves. However, V_P/V_S ratios are important for correct identification of rock types, and this study introduces a new methodology based on adding some coupling (i.e., proportionality) between P and S models, which stabilizes the V_P/V_S models around an average preset value determined from the data. Tests of the technique with synthetic data show that this additional coupling effectively regularizes the resulting models.
Micek, Agnieszka; Godos, Justyna; Lafranconi, Alessandra; Marranzano, Marina; Pajak, Andrzej
2018-06-01
To determine the association between total, caffeinated and decaffeinated coffee consumption and melanoma risk, a dose-response meta-analysis of prospective cohort studies was performed. Eligible studies were identified by searching the PubMed and EMBASE databases from the earliest available online indexing year to March 2017. The dose-response relationship was assessed by random-effects meta-analysis, and the shape of the exposure-outcome curve was modelled linearly and using restricted cubic splines. Seven studies eligible for meta-analysis were identified, comprising 1,418,779 participants and 9211 melanoma cases. A linear dose-response meta-analysis showed a significant association between total coffee consumption and melanoma risk: an increase in coffee consumption of one cup per day was associated with a 3% reduction in melanoma risk (RR 0.97; 95% CI 0.95-0.99). Our findings suggest that coffee intake may be inversely associated with the incidence of melanoma. Nevertheless, further studies also exploring the role of confounding factors are needed to explain the heterogeneity among studies.
Restoring canonical partition functions from imaginary chemical potential
NASA Astrophysics Data System (ADS)
Bornyakov, V. G.; Boyda, D.; Goy, V.; Molochkov, A.; Nakamura, A.; Nikolaev, A.; Zakharov, V. I.
2018-03-01
Using GPGPU techniques and multi-precision calculation we developed a code to study the QCD phase transition line in the canonical approach. The canonical approach is a powerful tool to investigate the sign problem in lattice QCD. Its central part is the fugacity expansion of the grand canonical partition function, whose coefficients are the canonical partition functions Z_n(T). Using various methods we study the properties of Z_n(T). At the last step we perform a cubic spline fit of the temperature dependence of Z_n(T) at fixed n and compute the baryon number susceptibility χ_B/T² as a function of temperature. We then compute ∂χ/∂T numerically and restore the crossover line in the QCD phase diagram. We use improved Wilson fermions and the Iwasaki gauge action on a 16³ × 4 lattice with m_π/m_ρ = 0.8 as a sandbox to check the canonical approach. In this framework we obtain the coefficient in the parametrization of the crossover line T_c(μ_B²) = T_c(1 − κμ_B²/T_c²), with κ = −0.0453 ± 0.0099.
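The interpolate-then-differentiate step above can be sketched with a natural cubic spline: solve the standard tridiagonal system for the interior second derivatives, then evaluate the spline and its analytic first derivative. This is a generic textbook construction, not the authors' code:

```python
def natural_cubic_spline(xs, ys):
    """Return callables (s, ds): the natural cubic spline through (xs, ys)
    and its first derivative.  Boundary conditions: M_0 = M_{n-1} = 0."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # Tridiagonal system for the second derivatives M_i (Thomas algorithm).
    a, b, c, r = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        a[i], b[i], c[i] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        r[i] = 6 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n):                      # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        r[i] -= m * r[i - 1]
    M = [0.0] * n
    for i in range(n - 2, 0, -1):              # back substitution
        M[i] = (r[i] - c[i] * M[i + 1]) / b[i]

    def locate(x):                             # index of the interval containing x
        return max(0, min(n - 2, sum(1 for t in xs[1:-1] if t <= x)))

    def s(x):
        i = locate(x); t = x - xs[i]; hi = h[i]
        return (M[i] * (xs[i + 1] - x) ** 3 + M[i + 1] * t ** 3) / (6 * hi) \
            + (ys[i] / hi - M[i] * hi / 6) * (xs[i + 1] - x) \
            + (ys[i + 1] / hi - M[i + 1] * hi / 6) * t

    def ds(x):                                 # analytic derivative of s on interval i
        i = locate(x); t = x - xs[i]; hi = h[i]
        return (-M[i] * (xs[i + 1] - x) ** 2 + M[i + 1] * t ** 2) / (2 * hi) \
            + (ys[i + 1] - ys[i]) / hi - (M[i + 1] - M[i]) * hi / 6
    return s, ds
```

With the spline in hand, a susceptibility tabulated at a few temperatures can be differentiated smoothly instead of by finite differences.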
Single-step collision-free trajectory planning of biped climbing robots in spatial trusses.
Zhu, Haifei; Guan, Yisheng; Chen, Shengjun; Su, Manjia; Zhang, Hong
For a biped climbing robot with dual grippers to climb poles, trusses or trees, feasible collision-free climbing motion is essential. In this paper, we utilize a sampling-based algorithm, Bi-RRT, to plan single-step collision-free motion for biped climbing robots in spatial trusses. To deal with the orientation limit of a 5-DoF biped climbing robot, a new state representation along with corresponding operations, including sampling, metric calculation and interpolation, is presented. A simple but effective model of a biped climbing robot in trusses is proposed, through which the motion planning of one climbing cycle is transformed into that of a manipulator. In addition, pre- and post-processing steps are introduced to expedite the convergence of the Bi-RRT algorithm and to ensure safe motion of the climbing robot near poles. The piecewise linear paths are smoothed using cubic B-spline curve fitting. The effectiveness and efficiency of the presented Bi-RRT algorithm for climbing motion planning are verified by simulations.
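Smoothing a piecewise-linear path with a uniform cubic B-spline, the final step above, can be sketched as follows. The endpoints are tripled so the curve is clamped to the first and last waypoints; the helper names are hypothetical:

```python
def bspline_point(p0, p1, p2, p3, t):
    """Point on a uniform cubic B-spline segment, local parameter t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def smooth_path(waypoints, samples_per_segment=10):
    """Smooth a piecewise-linear path; endpoints are tripled to clamp the curve."""
    pts = [waypoints[0]] * 2 + list(waypoints) + [waypoints[-1]] * 2
    path = []
    for i in range(len(pts) - 3):              # one segment per control-point window
        for k in range(samples_per_segment):
            path.append(bspline_point(pts[i], pts[i + 1], pts[i + 2], pts[i + 3],
                                      k / samples_per_segment))
    path.append(waypoints[-1])                 # close exactly at the last waypoint
    return path
```

Because each sample is a convex combination of nearby waypoints, the smoothed path stays inside the convex hull of the original one, which is convenient for conservative collision checking.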
Ionospheric modelling to boost the PPP-RTK positioning and navigation in Australia
NASA Astrophysics Data System (ADS)
Arsov, Kirco; Terkildsen, Michael; Olivares, German
2017-04-01
This paper deals with the implementation of a 3-D ionospheric model to support GNSS positioning and navigation activities in Australia. We introduce two strategies for Slant Total Electron Content (STEC) estimation from GNSS CORS sites in Australia. In the first scenario, the STEC is estimated in the PPP-RTK network processing: the ionosphere is estimated together with other GNSS network parameters such as satellite clocks and satellite phase biases. In the second approach, STEC is estimated on a station-by-station basis by taking advantage of the already known station positions and the relations between different satellite ambiguities. Accuracy studies and considerations are presented and discussed. Furthermore, 3-D ionosphere modelling is performed based on this STEC, using simple interpolation, 3-D tomography and bi-cubic splines as modelling techniques. To assess these models, a (user) PPP-RTK test bed is established, and a sensitivity matrix is introduced and analyzed based on the time to first fix (TTFF) of ambiguities, positioning accuracy, PPP-RTK solution convergence time, etc. Different spatial configurations and constellations are presented and assessed.
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Slater, John W.; Henderson, Todd L.; Bidwell, Colin S.; Braun, Donald C.; Chung, Joongkee
1998-01-01
TURBO-GRD is a software system for interactive two-dimensional boundary/field grid generation, modification, and refinement. Its features allow users to control grid quality explicitly, both locally and globally. Grid control can be achieved interactively, by using control points that the user picks and moves on the workstation monitor, or by direct stretching and refining. The techniques used in the code are the control-point form of algebraic grid generation, a damped cubic spline for edge meshing, and parametric mapping between physical and computational domains. It also performs elliptic grid smoothing and free-form boundary control for boundary geometry manipulation. Internal block boundaries are constructed and shaped using Bezier curves. Because TURBO-GRD is a highly interactive code, users can read in an initial solution, display its solution contours in the background of the grid and control net, and modify the grid using the solution contours as a guide. This process can be called interactive solution-adaptive grid generation.
Ship Detection and Measurement of Ship Motion by Multi-Aperture Synthetic Aperture Radar
2014-06-01
[List-of-figures fragments: (a) reconstructed periodic components of the Doppler histories, (b) splined harmonic component amplitudes as a function of range; splined amplitudes of the harmonic components; ship focusing by standard …]
Interactive Exploration of Big Scientific Data: New Representations and Techniques.
Hjelmervik, Jon M; Barrowclough, Oliver J D
2016-01-01
Although splines have been in popular use in CAD for more than half a century, spline research is still an active field, driven by the challenges we are facing today within isogeometric analysis and big data. Splines are likely to play a vital future role in enabling effective big data exploration techniques in 3D, 4D, and beyond.
NASA Astrophysics Data System (ADS)
Islamiyati, A.; Fatmawati; Chamidah, N.
2018-03-01
In longitudinal data with two responses, correlation occurs between measurements on the same subject and between the responses. This induces auto-correlation of the errors, which can be handled by using a covariance matrix. In this article, we estimate the covariance matrix based on a penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. In our simulation study, the estimated weighted penalized spline regression model with a covariance matrix gives a smaller error than the model without the covariance matrix.
Nonparametric triple collocation
USDA-ARS?s Scientific Manuscript database
Triple collocation derives variance-covariance relationships between three or more independent measurement sources and an indirectly observed truth variable in the case where the measurement operators are linear-Gaussian. We generalize that theory to arbitrary observation operators by deriving nonpa...
NASA Astrophysics Data System (ADS)
Agarwal, P.; El-Sayed, A. A.
2018-06-01
In this paper, a new numerical technique for solving the fractional-order diffusion equation is introduced. The technique depends on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, with the fractional derivatives described in the Caputo sense. The Chebyshev collocation method combined with the NSFD method converts the problem into a system of algebraic equations, which is solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through numerical examples.
Long, Judith A; Wang, Andrew; Medvedeva, Elina L; Eisen, Susan V; Gordon, Adam J; Kreyenbuhl, Julie; Marcus, Steven C
2014-08-01
Persons with serious mental illness (SMI) may benefit from collocation of medical and mental healthcare professionals and services in attending to their chronic comorbid medical conditions. We evaluated and compared glucose control and diabetes medication adherence among patients with SMI who received collocated care and those who did not (which we call usual care). We performed a cross-sectional, observational cohort study of 363 veteran patients with type 2 diabetes and SMI who received care from one of three Veterans Affairs medical facilities: two sites that provided both collocated and usual care and one site that provided only usual care. Through a survey, laboratory tests, and medical records, we assessed patient characteristics, glucose control as measured by a current HbA1c, and adherence to diabetes medication as measured by the medication possession ratio (MPR) and self-report. In the sample, the mean HbA1c was 7.4% (57 mmol/mol), the mean MPR was 80%, and 51% reported perfect adherence to their diabetes medications. In both unadjusted and adjusted analyses, there were no differences in glucose control or medication adherence by collocation of care. Patients seen in collocated care tended to have better HbA1c levels (β = -0.149; P = 0.393) and MPR values (β = 0.34; P = 0.132) and worse self-reported adherence (odds ratio 0.71; P = 0.143), but these differences were not statistically significant. In a population of veterans with comorbid diabetes and SMI, patients on average had good glucose control and medication adherence regardless of where they received primary care. © 2014 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.
ERIC Educational Resources Information Center
Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.
2017-01-01
Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…
NASA Astrophysics Data System (ADS)
Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei
2014-01-01
A partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in the residual correction, and the modified Cressman weight performs better than the original. A method for estimating the error covariance matrix of the gridded field is also provided.
Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman
2003-01-01
Splines can be used to approximate noisy data with a few control points. This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on the relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use the deformation energy of...
Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas
2014-01-01
Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image similarity block match metric and physical modeling combinations. PMID:24694135
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1985-01-01
Robustness properties are investigated for two types of controllers for large flexible space structures that use collocated sensors and actuators. The first type is an attitude controller which uses negative definite feedback of measured attitude and rate, while the second type is a damping enhancement controller which uses only velocity (rate) feedback. It is proved that collocated attitude controllers preserve closed-loop global asymptotic stability when linear actuator/sensor dynamics satisfying certain phase conditions are present, or when monotonically increasing nonlinearities are present. For velocity feedback controllers, global asymptotic stability is proved under much weaker conditions: in particular, they have a 90° phase margin and can tolerate nonlinearities belonging to the (0, infinity) sector in the actuator/sensor characteristics. These results significantly enhance the viability of both types of collocated controllers, especially when the available information about the large space structure (LSS) parameters is inadequate or inaccurate.
Understanding a reference-free impedance method using collocated piezoelectric transducers
NASA Astrophysics Data System (ADS)
Kim, Eun Jin; Kim, Min Koo; Sohn, Hoon; Park, Hyun Woo
2010-03-01
A new concept of a reference-free impedance method, which does not require direct comparison with a baseline impedance signal, is proposed for damage detection in a plate-like structure. A single pair of piezoelectric (PZT) wafers collocated on both surfaces of a plate are utilized for extracting electro-mechanical signatures (EMS) associated with mode conversion due to damage. A numerical simulation is conducted to investigate the EMS of collocated PZT wafers in the frequency domain at the presence of damage through spectral element analysis. Then, the EMS due to mode conversion induced by damage are extracted using the signal decomposition technique based on the polarization characteristics of the collocated PZT wafers. The effects of the size and the location of damage on the decomposed EMS are investigated as well. Finally, the applicability of the decomposed EMS to the reference-free damage diagnosis is discussed.
NASA Astrophysics Data System (ADS)
Liao, Q.; Tchelepi, H.; Zhang, D.
2015-12-01
Uncertainty quantification aims at characterizing the impact of input parameters on output responses and plays an important role in many areas, including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method is a promising non-intrusive approach and has drawn a great deal of interest. Its collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are inherently non-nested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to their nestedness, and we propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
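The building block of such collocation rules is Gauss-Hermite quadrature: already the closed-form 3-point rule integrates polynomials up to degree five exactly against the Gaussian weight. A minimal sketch of moments of a standard normal variable (the nested Kronrod-Patterson extension itself is not reproduced here):

```python
import math

# Three-point Gauss-Hermite rule for the physicists' weight exp(-x^2):
# nodes 0 and ±sqrt(3/2), weights 2*sqrt(pi)/3 and sqrt(pi)/6.
GH3 = [(-math.sqrt(1.5), math.sqrt(math.pi) / 6),
       (0.0,             2 * math.sqrt(math.pi) / 3),
       (math.sqrt(1.5),  math.sqrt(math.pi) / 6)]

def gauss_hermite_expectation(f):
    """Approximate E[f(Z)] for Z ~ N(0, 1) with the 3-point Gauss-Hermite rule.

    The substitution z = sqrt(2) * x maps the weight exp(-x^2) onto the
    standard normal density, up to the factor 1/sqrt(pi)."""
    return sum(w * f(math.sqrt(2) * x) for x, w in GH3) / math.sqrt(math.pi)
```

The rule reproduces E[Z²] = 1 and E[Z⁴] = 3 to machine precision; in a sparse grid construction, one-dimensional rules like this are tensorized and combined across parameter dimensions.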
Development of quadrilateral spline thin plate elements using the B-net method
NASA Astrophysics Data System (ADS)
Chen, Juan; Li, Chong-Jun
2013-08-01
The quadrilateral discrete Kirchhoff thin plate bending element DKQ is based on the isoparametric element Q8; however, the accuracy of the isoparametric quadrilateral elements will drop significantly due to mesh distortions. In a previous work, we constructed an 8-node quadrilateral spline element L8 using the triangular area coordinates and the B-net method, which can be insensitive to mesh distortions and possess the second order completeness in the Cartesian coordinates. In this paper, a thin plate spline element is developed based on the spline element L8 and the refined technique. Numerical examples show that the present element indeed possesses higher accuracy than the DKQ element for distorted meshes.
NASA Astrophysics Data System (ADS)
Blakely, Christopher D.
This dissertation has three main goals: (1) to explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) to demonstrate numerically that the meshless collocation method is an attractive alternative to standard finite-element methods due to the simplicity of its implementation and its high-order convergence properties; and (3) to propose a meshless collocation method for large-scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high-performance methods for approximating the shallow-water equations, such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model is given, along with the introduction of parallel algorithmic routines for high-performance simulation of the model. We analyze the programming and computational aspects of the model using Fortran 90 and the Message Passing Interface (MPI) library, along with software and hardware specifications and performance tests. Details of many aspects of the implementation with regard to performance, optimization, and stabilization are given. To verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on standardized test cases for the shallow-water equations on the sphere.
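A toy version of meshless (Kansa-type) RBF collocation can be sketched in a few lines. The 1D Poisson problem, the Gaussian basis, and the parameters n and eps below are illustrative assumptions for this sketch, not the scheme of the dissertation.

```python
import numpy as np

# Kansa-type RBF collocation for the 1D Poisson problem
# u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x). Gaussian RBFs are centered
# at the collocation points; n and the shape parameter eps are
# illustrative, untuned values.
def rbf(x, c, eps):
    return np.exp(-(eps * (x - c)) ** 2)

def rbf_xx(x, c, eps):
    d = x - c
    return np.exp(-(eps * d) ** 2) * (4 * eps**4 * d**2 - 2 * eps**2)

n, eps = 40, 14.0
x = np.linspace(0.0, 1.0, n)
C, X = np.meshgrid(x, x)            # X[i, j] = point x_i, C[i, j] = center x_j
A = rbf_xx(X, C, eps)               # interior rows enforce the PDE
A[0, :] = rbf(x[0], x, eps)         # first/last rows enforce boundary values
A[-1, :] = rbf(x[-1], x, eps)
rhs = -np.pi**2 * np.sin(np.pi * x)
rhs[0] = rhs[-1] = 0.0
coef = np.linalg.lstsq(A, rhs, rcond=None)[0]  # lstsq tolerates ill-conditioning

xe = np.linspace(0.0, 1.0, 201)
u = rbf(xe[:, None], x[None, :], eps) @ coef
err = np.max(np.abs(u - np.sin(np.pi * xe)))
```

No mesh or connectivity is needed: only point locations and a basis, which is the simplicity the dissertation emphasizes.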
NASA Astrophysics Data System (ADS)
Zou, Z.; Scott, M. A.; Borden, M. J.; Thomas, D. C.; Dornisch, W.; Brivadis, E.
2018-05-01
In this paper we develop the isogeometric Bézier dual mortar method. It is based on Bézier extraction and projection and is applicable to any spline space which can be represented in Bézier form (i.e., NURBS, T-splines, LR-splines, etc.). The approach weakly enforces the continuity of the solution at patch interfaces and the error can be adaptively controlled by leveraging the refineability of the underlying dual spline basis without introducing any additional degrees of freedom. We also develop weakly continuous geometry as a particular application of isogeometric Bézier dual mortaring. Weakly continuous geometry is a geometry description where the weak continuity constraints are built into properly modified Bézier extraction operators. As a result, multi-patch models can be processed in a solver directly without having to employ a mortaring solution strategy. We demonstrate the utility of the approach on several challenging benchmark problems. Keywords: Mortar methods, Isogeometric analysis, Bézier extraction, Bézier projection
A Novel Model to Simulate Flexural Complements in Compliant Sensor Systems
Tang, Hongyan; Zhang, Dan; Guo, Sheng; Qu, Haibo
2018-01-01
The main challenge in analyzing compliant sensor systems is how to calculate the large deformation of flexural complements. Our study proposes a new model that is called the spline pseudo-rigid-body model (spline PRBM). It combines dynamic spline and the pseudo-rigid-body model (PRBM) to simulate the flexural complements. The axial deformations of flexural complements are modeled by using dynamic spline. This makes it possible to consider the nonlinear compliance of the system using four control points. Three rigid rods connected by two revolute (R) pins with two torsion springs replace the three lines connecting the four control points. The kinematic behavior of the system is described using Lagrange equations. Both the optimization and the numerical fitting methods are used for resolving the characteristic parameters of the new model. An example of a compliant mechanism is given to verify the accuracy of the model. The spline PRBM is important in expanding the applications of the PRBM to the design and simulation of flexural force sensors. PMID:29596377
Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs
NASA Technical Reports Server (NTRS)
Howell, Lauren R.; Allen, B. Danette
2016-01-01
A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
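A minimal sketch of the curve-evaluation primitive such a planner builds on is de Casteljau's algorithm; the 3D waypoints below are invented for illustration.

```python
import numpy as np

# De Casteljau evaluation of a Bezier curve from its control points --
# the kind of primitive a trajectory planner layers waypoint and
# dynamics constraints on top of.
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1)."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # repeated linear interpolation
    return pts[0]

ctrl = [(0, 0, 0), (1, 2, 0), (3, 2, 1), (4, 0, 1)]  # cubic: 4 control points
p0 = de_casteljau(ctrl, 0.0)   # curve starts at the first control point
p1 = de_casteljau(ctrl, 1.0)   # and ends at the last one
```

The endpoint-interpolation property is what lets control points be placed so that a curve passes exactly through prescribed waypoints.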
Multivariate spline methods in surface fitting
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator); Schumaker, L. L.
1984-01-01
The use of spline functions in the development of classification algorithms is examined. In particular, a method is formulated for producing spline approximations to bivariate density functions, where the density function is described by a histogram of measurements. The resulting approximations are then incorporated into a Bayesian classification procedure for which the Bayes decision regions and the probability of misclassification are readily computed. Some preliminary numerical results are presented to illustrate the method.
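The idea can be sketched in 1D with a linear spline (np.interp) standing in for the report's bivariate spline fits; the Gaussian classes, priors, and bin layout below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic 1D classes; class-conditional densities are approximated
# from histograms by a linear spline through the bin midpoints -- a
# simplified stand-in for the bivariate spline fits of the report.
x1 = rng.normal(0.0, 1.0, 4000)
x2 = rng.normal(3.0, 1.0, 4000)
bins = np.linspace(-4, 7, 45)
mid = 0.5 * (bins[:-1] + bins[1:])
d1, _ = np.histogram(x1, bins, density=True)
d2, _ = np.histogram(x2, bins, density=True)

def classify(x):
    """Bayes rule with equal priors: pick the class of higher density."""
    p1 = np.interp(x, mid, d1)
    p2 = np.interp(x, mid, d2)
    return np.where(p1 >= p2, 1, 2)

test1 = rng.normal(0.0, 1.0, 1000)
test2 = rng.normal(3.0, 1.0, 1000)
acc = (np.mean(classify(test1) == 1) + np.mean(classify(test2) == 2)) / 2
```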
Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yi; Keller, Jonathan; Errichello, Robert
2013-12-01
Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.
Polynomials to model the growth of young bulls in performance tests.
Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B
2014-03-01
The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais+11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit with the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. The fitting of random regression models with different types of polynomials (Legendre polynomials or B-spline) affected neither the genetic parameters estimates nor the ranking of the Nellore young bulls. However, fitting different type of polynomials affected the genetic parameters estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.
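The basis comparison at the core of the study can be sketched with a fixed-effects least-squares fit; the random-regression (covariance function) machinery is omitted, and the ages and weights below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Least-squares fit of a growth trajectory with Legendre polynomial
# bases of increasing order. Ages are mapped to [-1, 1], the natural
# domain of the Legendre basis; the data are invented for illustration.
age = np.linspace(210, 420, 50)                              # days on test
std = 2 * (age - age.min()) / (age.max() - age.min()) - 1    # map to [-1, 1]
weight = 250 + 0.9 * (age - 210) + 15 * np.sin(2 * std) + rng.normal(0, 2, age.size)

def fit_rss(order):
    """Residual sum of squares for a Legendre fit of the given order."""
    B = np.polynomial.legendre.legvander(std, order)
    beta, *_ = np.linalg.lstsq(B, weight, rcond=None)
    r = weight - B @ beta
    return float(r @ r)

rss = [fit_rss(k) for k in (1, 2, 3, 5)]   # linear ... quintic
```

As in the paper, richer bases fit the trajectory better; model choice then trades fit against parsimony.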
Modeling respiratory mechanics in the MCAT and spline-based MCAT phantoms
NASA Astrophysics Data System (ADS)
Segars, W. P.; Lalush, D. S.; Tsui, B. M. W.
2001-02-01
Respiratory motion can cause artifacts in myocardial SPECT and computed tomography (CT). The authors incorporate models of respiratory mechanics into the current 4D MCAT and into the next generation spline-based MCAT phantoms. In order to simulate respiratory motion in the current MCAT phantom, the geometric solids for the diaphragm, heart, ribs, and lungs were altered through manipulation of parameters defining them. Affine transformations were applied to the control points defining the same respiratory structures in the spline-based MCAT phantom to simulate respiratory motion. The Non-Uniform Rational B-Spline (NURBS) surfaces for the lungs and body outline were constructed in such a way as to be linked to the surrounding ribs. Expansion and contraction of the thoracic cage then coincided with expansion and contraction of the lungs and body. The changes both phantoms underwent were spline-interpolated over time to create time continuous 4D respiratory models. The authors then used the geometry-based and spline-based MCAT phantoms in an initial simulation study of the effects of respiratory motion on myocardial SPECT. The simulated reconstructed images demonstrated distinct artifacts in the inferior region of the myocardium. It is concluded that both respiratory models can be effective tools for researching effects of respiratory motion.
NASA Astrophysics Data System (ADS)
Simpson, R. N.; Liu, Z.; Vázquez, R.; Evans, J. A.
2018-06-01
We outline the construction of compatible B-splines on 3D surfaces that satisfy the continuity requirements for electromagnetic scattering analysis with the boundary element method (method of moments). Our approach makes use of Non-Uniform Rational B-splines to represent model geometry and compatible B-splines to approximate the surface current, and adopts the isogeometric concept in which the basis for analysis is taken directly from CAD (geometry) data. The approach allows for high-order approximations and crucially provides a direct link with CAD data structures that allows for efficient design workflows. After outlining the construction of div- and curl-conforming B-splines defined over 3D surfaces we describe their use with the electric and magnetic field integral equations using a Galerkin formulation. We use Bézier extraction to accelerate the computation of NURBS and B-spline terms and employ H-matrices to provide accelerated computations and memory reduction for the dense matrices that result from the boundary integral discretization. The method is verified using the well known Mie scattering problem posed over a perfectly electrically conducting sphere and the classic NASA almond problem. Finally, we demonstrate the ability of the approach to handle models with complex geometry directly from CAD without mesh generation.
Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A
2016-10-01
Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. © The Author(s) 2013.
Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S.; Barros, Aluísio JD; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A
2013-01-01
Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. PMID:24108269
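A single-child, fixed-effects sketch of the linear spline (broken-stick) parameterization, with an invented knot at 12 months; the multilevel random-effects structure of the paper (and its Stata syntax) is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Broken-stick (linear spline) fit of a weight trajectory with a single
# knot: slope is b1 before the knot and b1 + b2 after it. Data and knot
# placement are illustrative.
knot = 12.0
age = np.linspace(0, 24, 25)                     # months
true = 4.0 + 0.60 * age - 0.35 * np.maximum(age - knot, 0.0)
y = true + rng.normal(0, 0.1, age.size)

X = np.column_stack([np.ones_like(age), age, np.maximum(age - knot, 0.0)])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
# b1 is the pre-knot slope; b1 + b2 is the post-knot slope.
```

The fitted slopes are exactly the "individual summary measures of growth" the method extracts, one pair per child in the multilevel version.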
Examination of wrist and hip actigraphy using a novel sleep estimation procedure
Ray, Meredith A.; Youngstedt, Shawn D.; Zhang, Hongmei; Robb, Sara Wagner; Harmon, Brook E.; Jean-Louis, Girardin; Cai, Bo; Hurley, Thomas G.; Hébert, James R.; Bogan, Richard K.; Burch, James B.
2014-01-01
Objective: Improving and validating sleep scoring algorithms for actigraphs enhances their usefulness in clinical and research applications. The MTI® device (ActiGraph, Pensacola, FL) had not been previously validated for sleep. The aims were to (1) compare the accuracy of sleep metrics obtained via wrist- and hip-mounted MTI® actigraphs with polysomnographic (PSG) recordings in a sample that included both normal sleepers and individuals with presumed sleep disorders; and (2) develop a novel sleep scoring algorithm using spline regression to improve the correspondence between the actigraphs and PSG. Methods: Original actigraphy data were amplified and their pattern was estimated using a penalized spline. The magnitude of amplification and the spline were estimated by minimizing the difference in sleep efficiency between wrist- (hip-) actigraphs and PSG recordings. Sleep measures using both the original and spline-modified actigraphy data were compared to PSG using the following: mean sleep summary measures; Spearman rank-order correlations of summary measures; percent of minute-by-minute agreement; sensitivity and specificity; and Bland–Altman plots. Results: The original wrist actigraphy data showed modest correspondence with PSG, and much less correspondence was found between hip actigraphy and PSG. The spline-modified wrist actigraphy produced better approximations of interclass correlations, sensitivity, and mean sleep summary measures relative to PSG than the original wrist actigraphy data. The spline-modified hip actigraphy provided improved correspondence, but sleep measures were still not representative of PSG. Discussion: The results indicate that with some refinement, the spline regression method has the potential to improve sleep estimates obtained using wrist actigraphy. PMID:25580202
Lux, C J; Rübel, J; Starke, J; Conradt, C; Stellzig, P A; Komposch, P G
2001-04-01
The aim of the present longitudinal cephalometric study was to evaluate the dentofacial shape changes induced by activator treatment between 9.5 and 11.5 years in male Class II patients. For a rigorous morphometric analysis, a thin-plate spline analysis was performed to assess and visualize dental and skeletal craniofacial changes. Twenty male patients with a skeletal Class II malrelationship and increased overjet who had been treated at the University of Heidelberg with a modified Andresen-Häupl-type activator were compared with a control group of 15 untreated male subjects of the Belfast Growth Study. The shape changes for each group were visualized on thin-plate splines with one spline comprising all 13 landmarks to show all the craniofacial shape changes, including skeletal and dento-alveolar reactions, and a second spline based on 7 landmarks to visualize only the skeletal changes. In the activator group, the grid deformation of the total spline pointed to a strong activator-induced reduction of the overjet that was caused both by a tipping of the incisors and by a moderation of sagittal discrepancies, particularly a slight advancement of the mandible. In contrast with this, in the control group, only slight localized shape changes could be detected. Both in the 7- and 13-landmark configurations, the shape changes between the groups differed significantly at P < .001. In the present study, the morphometric approach of thin-plate spline analysis turned out to be a useful morphometric supplement to conventional cephalometrics because the complex patterns of shape change could be suggestively visualized.
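The interpolation core of thin-plate spline analysis can be sketched directly; the landmark coordinates below are invented, not cephalometric data.

```python
import numpy as np

# Thin-plate spline warp taking source landmarks exactly onto target
# landmarks; the deformation between landmarks is the smoothest
# (minimum bending energy) interpolant, which is what the grid
# deformations in morphometric studies visualize.
def tps_kernel(r2):
    with np.errstate(divide="ignore", invalid="ignore"):
        u = 0.5 * r2 * np.log(r2)        # r^2 log r = 0.5 * r^2 * log(r^2)
    return np.nan_to_num(u)              # define U(0) = 0

def tps_fit(src, dst):
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, rhs)       # spline weights + affine part

def tps_map(coef, src, pts):
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    affine = np.hstack([np.ones((len(pts), 1)), pts]) @ coef[len(src):]
    return tps_kernel(d2) @ coef[: len(src)] + affine

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
dst = src + np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1], [0.05, 0.1]])
coef = tps_fit(src, dst)
```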
NASA Astrophysics Data System (ADS)
Buaria, D.; Yeung, P. K.
2017-12-01
A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions, such that all of the interpolation information needed for each particle is available either locally on its host process or on neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192^3 simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster than a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on major compilers suggests that this algorithm will be of wider applicability on most upcoming supercomputers.
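A serial sketch of the interpolation step (using SciPy's CubicSpline as an assumed stand-in for the production spline routines): coefficients are built once on the grid and evaluated at arbitrary particle positions; in the parallel algorithm only the coefficients a given particle needs would be communicated.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# 1D sketch: sample a periodic "velocity" field on a grid, build
# cubic-spline coefficients, and evaluate them at particle positions.
n = 64
xg = np.linspace(0, 2 * np.pi, n + 1)   # grid including the periodic endpoint
u = np.sin(xg)                          # synthetic field
u[-1] = u[0]                            # enforce exact periodicity for bc_type
spl = CubicSpline(xg, u, bc_type="periodic")

xp = np.random.default_rng(3).uniform(0, 2 * np.pi, 1000)  # particle positions
err = np.max(np.abs(spl(xp) - np.sin(xp)))
```

The O(h^4) accuracy of cubic splines is why the coefficients, not the raw field, are the natural unit of data to move between processes.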
Churpek, Matthew M; Yuen, Trevor C; Winslow, Christopher; Meltzer, David O; Kattan, Michael W; Edelson, Dana P
2016-02-01
Machine learning methods are flexible prediction algorithms that may be more accurate than conventional regression. We compared the accuracy of different techniques for detecting clinical deterioration on the wards in a large, multicenter database. Observational cohort study. Five hospitals, from November 2008 until January 2013. Hospitalized ward patients None Demographic variables, laboratory values, and vital signs were utilized in a discrete-time survival analysis framework to predict the combined outcome of cardiac arrest, intensive care unit transfer, or death. Two logistic regression models (one using linear predictor terms and a second utilizing restricted cubic splines) were compared to several different machine learning methods. The models were derived in the first 60% of the data by date and then validated in the next 40%. For model derivation, each event time window was matched to a non-event window. All models were compared to each other and to the Modified Early Warning score, a commonly cited early warning score, using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patients were admitted, and 424 cardiac arrests, 13,188 intensive care unit transfers, and 2,840 deaths occurred in the study. In the validation dataset, the random forest model was the most accurate model (AUC, 0.80 [95% CI, 0.80-0.80]). The logistic regression model with spline predictors was more accurate than the model utilizing linear predictors (AUC, 0.77 vs 0.74; p < 0.01), and all models were more accurate than the MEWS (AUC, 0.70 [95% CI, 0.70-0.70]). In this multicenter study, we found that several machine learning methods more accurately predicted clinical deterioration than logistic regression. Use of detection algorithms derived from these techniques may result in improved identification of critically ill patients on the wards.
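Restricted cubic spline predictors of the kind used in the second logistic model can be constructed with Harrell's basis: cubic between the knots and constrained to be linear beyond the boundary knots. The knot placements below are arbitrary illustrative values.

```python
import numpy as np

# Restricted cubic spline basis (Harrell's construction). For knots
# t_1..t_k the basis is x plus k-2 nonlinear terms, each a combination
# of truncated cubics whose cubic and quadratic parts cancel outside
# the boundary knots, leaving exactly linear tails.
def rcs_basis(x, knots):
    t = np.asarray(knots, float)
    k = len(t)
    pos3 = lambda v: np.maximum(v, 0.0) ** 3
    cols = [x]
    for j in range(k - 2):
        cols.append(
            pos3(x - t[j])
            - pos3(x - t[k - 2]) * (t[-1] - t[j]) / (t[-1] - t[k - 2])
            + pos3(x - t[-1]) * (t[k - 2] - t[j]) / (t[-1] - t[k - 2])
        )
    return np.column_stack(cols)

knots = [1.0, 2.0, 4.0, 7.0]
x = np.linspace(8, 12, 50)           # entirely beyond the last knot
B = rcs_basis(x, knots)
# Beyond the boundary knot every column is a straight line, so its
# numerical second differences vanish:
curvature = np.abs(np.diff(B, 2, axis=0)).max()
```

The linear tails are what make these splines safer than ordinary cubics for clinical predictors with sparse extreme values.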
Matthews, Charles E; Keadle, Sarah Kozey; Troiano, Richard P; Kahle, Lisa; Koster, Annemarie; Brychta, Robert; Van Domelen, Dane; Caserotti, Paolo; Chen, Kong Y; Harris, Tamara B; Berrigan, David
2016-11-01
Moderate-to-vigorous-intensity physical activity is recommended to maintain and improve health, but the mortality benefits of light activity and risk for sedentary time remain uncertain. Using accelerometer-based measures, we 1) described the mortality dose-response for sedentary time and light- and moderate-to-vigorous-intensity activity using restricted cubic splines, and 2) estimated the mortality benefits associated with replacing sedentary time with physical activity, accounting for total activity. US adults (n = 4840) from NHANES (2003-2006) wore an accelerometer for ≤7 d and were followed prospectively for mortality. Proportional hazards models were used to estimate adjusted HRs and 95% CIs for mortality associations with time spent sedentary and in light- and moderate-to-vigorous-intensity physical activity. Splines were used to graphically present behavior-mortality relation. Isotemporal models estimated replacement associations for sedentary time, and separate models were fit for low- (<5.8 h total activity/d) and high-active participants to account for nonlinear associations. Over a mean of 6.6 y, 700 deaths occurred. Compared with less-sedentary adults (6 sedentary h/d), those who spent 10 sedentary h/d had 29% greater risk (HR: 1.29; 95% CI: 1.1, 1.5). Compared with those who did less light activity (3 h/d), those who did 5 h of light activity/d had 23% lower risk (HR: 0.77; 95% CI: 0.6, 1.0). There was no association with mortality for sedentary time or light or moderate-to-vigorous activity in highly active adults. In less-active adults, replacing 1 h of sedentary time with either light- or moderate-to-vigorous-intensity activity was associated with 18% and 42% lower mortality, respectively. Health promotion efforts for physical activity have mostly focused on moderate-to-vigorous activity. 
However, our findings derived from accelerometer-based measurements suggest that increasing light-intensity activity and reducing sedentary time are also important, particularly for inactive adults. © 2016 American Society for Nutrition.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
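The starting point, spectral accuracy of interpolation at Chebyshev (Gauss-Lobatto-type) points for an analytic function, is easy to demonstrate; Runge's function is an illustrative choice.

```python
import numpy as np

# Interpolation of an analytic function at Chebyshev points converges
# geometrically -- the property the paper extends to functions with
# discontinuities via Gegenbauer reconstruction.
def cheb_interp_error(f, n):
    """Max error of degree-n interpolation at Chebyshev points on [-1, 1]."""
    xk = np.cos(np.pi * np.arange(n + 1) / n)      # Chebyshev-Lobatto points
    c = np.polynomial.chebyshev.chebfit(xk, f(xk), n)
    xe = np.linspace(-1, 1, 2001)
    return np.max(np.abs(np.polynomial.chebyshev.chebval(xe, c) - f(xe)))

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge's function, analytic on [-1, 1]
errs = [cheb_interp_error(f, n) for n in (8, 16, 32, 64)]
```

For a function with a jump this convergence collapses to O(1) near the discontinuity (Gibbs phenomenon), which is what the Gegenbauer post-processing repairs.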
On the anomaly of velocity-pressure decoupling in collocated mesh solutions
NASA Technical Reports Server (NTRS)
Kim, Sang-Wook; Vanoverbeke, Thomas
1991-01-01
The use of various pressure correction algorithms originally developed for fully staggered meshes can yield a velocity-pressure decoupled solution on collocated meshes. The mechanism that causes velocity-pressure decoupling is identified. It is shown that the use of a partial differential equation for the incremental pressure eliminates this mechanism and yields a velocity-pressure coupled solution. The example flows considered are a three-dimensional lid-driven cavity flow and a laminar flow through a 90-degree-bend square duct. Numerical results obtained using the collocated mesh are in good agreement with the measured data and other numerical results.
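The decoupling mechanism can be shown in a few lines: a second-order central difference on a collocated grid annihilates a checkerboard pressure mode, so the momentum equations never "see" it (a 1D periodic sketch).

```python
import numpy as np

# On a collocated grid the central difference of a checkerboard
# pressure field is identically zero: p[i+1] - p[i-1] = 0 for the
# alternating-sign mode, so this spurious pressure is invisible to
# the discrete momentum equations.
n = 16
p = (-1.0) ** np.arange(n)                      # 1, -1, 1, -1, ... checkerboard
grad = (np.roll(p, -1) - np.roll(p, 1)) / 2.0   # central difference, periodic
print(np.abs(grad).max())                       # the discrete gradient vanishes
```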
Pereira, Félix Monteiro; Oliveira, Samuel Conceição
2016-11-01
In this article, the occurrence of a dead core in catalytic particles containing immobilized enzymes is analyzed for Michaelis-Menten kinetics. An assessment of numerical methods is performed to solve the boundary value problem generated by the mathematical modeling of diffusion and reaction processes under steady-state and isothermal conditions. Two classes of numerical methods were employed: shooting and collocation. The shooting method used the ode function from the Scilab software. The collocation methods included the method implemented by the bvode function of Scilab, orthogonal collocation, and orthogonal collocation on finite elements. The methods were validated for simplified forms of the Michaelis-Menten equation (zero-order and first-order kinetics), for which analytical solutions are available. Among the methods covered in this article, orthogonal collocation on finite elements proved to be the most robust and efficient method for solving the boundary value problem concerning Michaelis-Menten kinetics. For this enzyme kinetics, it was found that a dead core can occur when certain diffusion-reaction conditions within the catalytic particle are satisfied. The application of the concepts and methods presented in this study will allow for a more generalized analysis and more accurate designs of heterogeneous enzymatic reactors.
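The boundary value problem can be sketched with SciPy's solve_bvp, itself a collocation scheme, standing in here for the Scilab routines of the article; the Thiele modulus and saturation parameter below are illustrative values.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Dimensionless diffusion-reaction in a slab with Michaelis-Menten
# kinetics: y'' = phi^2 * y / (1 + beta * y), with symmetry y'(0) = 0
# and surface condition y(1) = 1.
phi2, beta = 4.0, 1.0   # illustrative Thiele modulus^2 and saturation parameter

def ode(x, y):
    return np.vstack([y[1], phi2 * y[0] / (1.0 + beta * y[0])])

def bc(ya, yb):
    return np.array([ya[1], yb[0] - 1.0])

x0 = np.linspace(0.0, 1.0, 11)
y0 = np.vstack([np.ones(11), np.zeros(11)])   # initial guess
sol = solve_bvp(ode, bc, x0, y0)
center = float(sol.sol(0.0)[0])               # concentration at the center
```

A dead core corresponds to the center concentration reaching zero over a finite region; for these moderate parameters the profile stays strictly positive.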
Numerical study of the flow in a three-dimensional thermally driven cavity
NASA Astrophysics Data System (ADS)
Rauwoens, Pieter; Vierendeels, Jan; Merci, Bart
2008-06-01
Solutions for the fully compressible Navier-Stokes equations are presented for the flow and temperature fields in a cubic cavity with large horizontal temperature differences. The ideal-gas approximation for air is assumed and viscosity is computed using Sutherland's law. The three-dimensional case forms an extension of previous studies performed on a two-dimensional square cavity. The influence of imposed boundary conditions in the third dimension is investigated as a numerical experiment. Comparison is made between convergence rates in case of periodic and free-slip boundary conditions. Results with no-slip boundary conditions are presented as well. The effect of the Rayleigh number is studied. Results are computed using a finite volume method on a structured, collocated grid. An explicit third-order discretization for the convective part and an implicit central discretization for the acoustic part and for the diffusive part are used. To stabilize the scheme an artificial dissipation term for the pressure and the temperature is introduced. The discrete equations are solved using a time-marching method with restrictions on the timestep corresponding to the explicit parts of the solver. Multigrid is used as acceleration technique.
Enhancement of surface definition and gridding in the EAGLE code
NASA Technical Reports Server (NTRS)
Thompson, Joe F.
1991-01-01
Algorithms for smoothing of curves and surfaces for the EAGLE grid generation program are presented. The method uses an existing automated technique which detects undesirable geometric characteristics by using a local fairness criterion. The geometry entity is then smoothed by repeated removal and insertion of spline knots in the vicinity of the geometric irregularity. The smoothing algorithm is formulated for use with curves in Beta spline form and tensor product B-spline surfaces.
2014-10-26
From the parameterization results, we extract adaptive and anisotropic T-meshes for the further T-spline surface construction. Finally, a gradient flow-based method [7, 12] is used to generate adaptive and anisotropic quadrilateral meshes, which can serve as the control mesh for high-order T-spline surface construction.
Isogeometric Analysis of Boundary Integral Equations
2015-04-21
methods, IgA relies on Non-Uniform Rational B-splines (NURBS) [43, 46], T-splines [55, 53] or subdivision surfaces [21, 48, 51] rather than piecewise… structural dynamics [25, 26], plates and shells [15, 16, 27, 28, 37, 22, 23], phase-field models [17, 32, 33], and shape optimization [40, 41, 45, 59… polynomials for approximating the geometry and field variables. Thus, by replacing piecewise polynomials with NURBS or T-splines, one can develop
2014-02-01
installation based on a Euclidean distance allocation and assigned that installation's threshold values. The second approach used a thin-plate spline… installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the… the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND
A multidomain spectral collocation method for the Stokes problem
NASA Technical Reports Server (NTRS)
Landriani, G. Sacchi; Vandeven, H.
1989-01-01
A multidomain spectral collocation scheme is proposed for the approximation of the two-dimensional Stokes problem. It is shown that the discrete velocity vector field is exactly divergence-free, and error estimates are proved for both the velocity and the pressure.
Evaluation of assumptions in soil moisture triple collocation analysis
USDA-ARS?s Scientific Manuscript database
Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...
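The covariance-based estimator at the heart of triple collocation analysis can be sketched in a few lines. The following is a minimal synthetic illustration, assuming three products with additive, mutually independent, zero-mean errors (the variable names and error levels are invented for the example); it is not any of the operational implementations discussed in these studies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
truth = rng.normal(0.2, 0.08, n)          # synthetic soil-moisture "truth"

# three products with mutually independent zero-mean errors
sig = np.array([0.02, 0.03, 0.04])
x = truth + rng.normal(0, sig[0], n)
y = truth + rng.normal(0, sig[1], n)
z = truth + rng.normal(0, sig[2], n)

def triple_collocation(x, y, z):
    """Covariance-based TC error standard deviations.

    Assumes errors are mutually independent and independent of the truth,
    so every cross-covariance equals the variance of the truth."""
    C = np.cov(np.vstack([x, y, z]))
    ex2 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez2 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return np.sqrt([ex2, ey2, ez2])

print(triple_collocation(x, y, z))  # recovers approximately [0.02, 0.03, 0.04]
```

Violations of the independence and orthogonality assumptions (the topic of the first abstract above) bias exactly the cross-covariance terms this estimator relies on.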
Beyond triple collocation: Applications to satellite soil moisture
USDA-ARS?s Scientific Manuscript database
Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...
Evaluating Remotely-Sensed Surface Soil Moisture Estimates Using Triple Collocation
USDA-ARS?s Scientific Manuscript database
Recent work has demonstrated the potential of enhancing remotely-sensed surface soil moisture validation activities through the application of triple collocation techniques which compare time series of three mutually independent geophysical variable estimates in order to acquire the root-mean-square...
2009-08-01
the measurements of Jung et al [3], 'BSR' to the Breit-Pauli B-spline R-matrix method, and 'RDW' to the relativistic distorted wave method. low… excitation cross sections using both relativistic distorted wave and semi-relativistic Breit-Pauli B-spline R-matrix methods is presented. The model… population and line intensity enhancement. 15. SUBJECT TERMS Metastable xenon Electrostatic thruster Relativistic Breit-Pauli B-spline R-matrix
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
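The information-criterion part of this model-selection problem is easy to sketch. The following toy example, assuming a 1D least-squares spline fit with scipy (not the authors' point-cloud setting), scores fits with a varying number of interior knots by AIC and BIC, using the number of spline coefficients as the parameter count:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 400)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)

def aic_bic(n_interior):
    """AIC/BIC of a cubic least-squares spline with n_interior uniform knots."""
    t = np.linspace(0, 1, n_interior + 2)[1:-1]   # interior knots only
    spl = LSQUnivariateSpline(x, y, t, k=3)
    rss = float(np.sum((spl(x) - y) ** 2))
    k = len(spl.get_coeffs())                      # number of control points
    n = x.size
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

scores = {m: aic_bic(m) for m in range(1, 21)}
best_aic = min(scores, key=lambda m: scores[m][0])
best_bic = min(scores, key=lambda m: scores[m][1])
print(best_aic, best_bic)
```

BIC's heavier complexity penalty (k log n versus 2k) typically selects the same number of knots or fewer; the paper's structural-risk-minimization alternative replaces the parameter count k with a Vapnik-Chervonenkis dimension, which this sketch does not attempt.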
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1994-01-01
Scientific data often contain random errors that make plotting and curve-fitting difficult. The Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to a flexible, smooth representation of experimental data. The user sets the conditions for each consecutive pair of knots (knots are user-defined divisions in the data set): to apply no tension; to apply fixed tension; or to determine tension with a tension adjustment algorithm. The user also selects the number of knots, the knot abscissas, and the allowed maximum deviations from line segments. The selection of these quantities depends on the actual data and on the requirements of a particular application. This program differs from the usual spline under tension in that it allows the user to specify different tension values between each adjacent pair of knots rather than a constant tension over the entire data range. The subroutines use an automatic adjustment scheme that varies the tension parameter for each interval until the maximum deviation of the spline from the line joining the knots is less than or equal to a user-specified amount. This procedure frees the user from the drudgery of adjusting individual tension parameters while still giving control over the local behavior of the spline. The Rational Spline program was written completely in FORTRAN for implementation on a CYBER 850 operating under NOS. It has a central memory requirement of approximately 1500 words. The program was released in 1988.
Pseudospectral collocation methods for fourth order differential equations
NASA Technical Reports Server (NTRS)
Malek, Alaeddin; Phillips, Timothy N.
1994-01-01
Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
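The basic machinery behind such schemes is collocation differentiation at Gauss-Lobatto-type nodes. The sketch below builds the standard Chebyshev-Gauss-Lobatto differentiation matrix (the classical construction popularized by Trefethen, not the paper's generalized quadrature nodes) and checks its spectral accuracy; powers of this matrix, with boundary conditions imposed, discretize higher derivatives such as the fourth-order operator:

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and first-derivative collocation matrix."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # CGL points on [-1, 1]
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative row sums on diagonal
    return D, x

D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
print(err)  # spectrally small for smooth functions
```

For a fourth-order problem one would use `D @ D @ D @ D` restricted by the boundary conditions; the abstract's point is precisely that the node choice, not just the matrix construction, matters for such operators.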
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-25
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
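The classification step rests on computing a kinetic-energy expectation per orbital. As a hedged one-dimensional toy (the helper `orbital_kinetic_energy` is invented for illustration and is not the paper's 3D production scheme), the per-orbital kinetic energy of a periodic grid-sampled orbital can be evaluated in reciprocal space:

```python
import numpy as np

def orbital_kinetic_energy(psi, box=1.0):
    """<psi| -1/2 d²/dx² |psi> / <psi|psi> for an orbital on a periodic 1D grid,
    evaluated in reciprocal space (atomic units)."""
    n = psi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)   # angular wavenumbers
    psik = np.fft.fft(psi)
    num = np.sum(0.5 * k**2 * np.abs(psik) ** 2)
    den = np.sum(np.abs(psik) ** 2)
    return num / den

x = np.linspace(0, 1, 256, endpoint=False)
orbitals = [np.exp(2j * np.pi * m * x) for m in (1, 4, 16)]  # plane waves
ke = [orbital_kinetic_energy(p) for p in orbitals]
print(ke)  # 0.5 * (2*pi*m)**2 for m = 1, 4, 16
```

High kinetic energy flags sharply varying (e.g. semi-core) orbitals that dominate the B-spline memory budget; partitioning by these values is the idea the paper builds on.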
Thin-plate spline quadrature of geodetic integrals
NASA Technical Reports Server (NTRS)
Vangysen, Herman
1989-01-01
Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules, successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.
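The paper derives exact quadrature rules for interpolating thin-plate splines; as a numerical stand-in for that idea, one can interpolate irregularly distributed data with scipy's thin-plate-spline radial basis interpolator and integrate the interpolant over a circular zone on a polar grid. This is a sketch of the principle, not the paper's closed-form rules:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (200, 2))        # irregularly distributed data points
vals = pts[:, 0] ** 2 + pts[:, 1] ** 2    # test field; exact unit-disk integral = pi/2

# interpolating thin-plate spline (smoothing=0, so it passes through the data)
tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline", degree=1)

# integrate the interpolant over the unit disk on a polar grid
r = np.linspace(1e-3, 1.0, 200)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
R, T = np.meshgrid(r, t)
xy = np.column_stack([(R * np.cos(T)).ravel(), (R * np.sin(T)).ravel()])
integral = np.sum(tps(xy).reshape(R.shape) * R) * (r[1] - r[0]) * (t[1] - t[0])
print(integral)  # close to pi/2
```

For regularly distributed data, the weights implicit in this procedure could indeed be precomputed once and for all, which is the efficiency the abstract points to.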
SU-E-J-89: Deformable Registration Method Using B-TPS in Radiotherapy.
Xie, Y
2012-06-01
A novel deformable registration method for four-dimensional computed tomography (4DCT) images is developed for radiation therapy. The proposed method combines the thin plate spline (TPS) and B-spline together to achieve high accuracy and high efficiency. The method consists of two steps. First, TPS is used as a global registration method to deform large unfit regions in the moving image to match their counterparts in the reference image. Then B-spline is used for local registration; the previously deformed moving image is further deformed to match the reference image more accurately. Two clinical CT image sets, including one pair of lung and one pair of liver images, are registered using the proposed algorithm, which results in a tremendous improvement in both run-time and registration quality compared with conventional methods using either TPS or B-spline alone. The proposed method combines the efficiency of TPS and the accuracy of B-spline, performing adaptively and robustly in the registration of clinical 4DCT images. © 2012 American Association of Physicists in Medicine.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
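The explicitly constrained piecewise-polynomial basis is easy to illustrate in the fixed-effects-only case. The sketch below, assuming a single knot and quadratic segments (the random-effects and SAS/S-plus reparameterization machinery of the paper is not reproduced), builds a truncated power basis whose `(x - k)_+^order` terms enforce continuity of the function and its first `order-1` derivatives at each knot:

```python
import numpy as np

def tp_design(x, knots, order):
    """Truncated power basis: 1, x, ..., x^order plus (x - k)_+^order per knot.
    Each truncated term joins adjacent segments with order-1 continuous derivatives."""
    cols = [x**d for d in range(order + 1)]
    cols += [np.clip(x - k, 0.0, None) ** order for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 300))
# true curve: linear trend plus a quadratic segment switching on at x = 4 (C^1 join)
y = 0.5 * x + 0.1 * np.clip(x - 4.0, 0.0, None) ** 2 + rng.normal(0, 0.1, x.size)

X = tp_design(x, knots=[4.0], order=2)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
print(np.round(beta, 3), np.std(resid))  # beta approx [0, 0.5, 0, 0.1]
```

Varying the polynomial order per segment, as the paper proposes, amounts to dropping or adding columns of this design matrix subject to the continuity side conditions.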
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-01
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
Some spectral approximation of one-dimensional fourth-order problems
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Maday, Yvon
1989-01-01
Some spectral-type collocation methods well suited for the approximation of fourth-order systems are proposed. The model problem is the biharmonic equation, in one and two dimensions when the boundary conditions are periodic in one direction. It is proved that the standard Gauss-Lobatto nodes are not the best choice for the collocation points. Then, a new set of nodes related to some generalized Gauss-type quadrature formulas is proposed. A complete analysis of these formulas is also provided, including some new results on the asymptotic behavior of the weights, and these results are applied to the analysis of the collocation method.
Recent advances in (soil moisture) triple collocation analysis
USDA-ARS?s Scientific Manuscript database
To date, triple collocation (TC) analysis is one of the most important methods for the global scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method....
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.
1987-01-01
This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.
A Comparative Usage-Based Approach to the Reduction of the Spanish and Portuguese Preposition "Para"
ERIC Educational Resources Information Center
Gradoville, Michael Stephen
2013-01-01
This study examines the frequency effect of two-word collocations involving "para" "to," "for" (e.g. "fui para," "para que") on the reduction of "para" to "pa" (in Spanish) and "pra" (in Portuguese). Collocation frequency effects demonstrate that language speakers…
Quadratures with multiple nodes, power orthogonality, and moment-preserving spline approximation
NASA Astrophysics Data System (ADS)
Milovanovic, Gradimir V.
2001-01-01
Quadrature formulas with multiple nodes, power orthogonality, and some applications of such quadratures to moment-preserving approximation by defective splines are considered. An account of power orthogonality (s- and [sigma]-orthogonal polynomials) and generalized Gaussian quadratures with multiple nodes, including stable algorithms for the numerical construction of the corresponding polynomials and Cotes numbers, is given. In particular, the important case of the Chebyshev weight is analyzed. Finally, some applications to moment-preserving approximation of functions by defective splines are discussed.
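Generalized Gaussian quadratures with multiple nodes are not available in standard numerical libraries, but the simple-node Gauss rules they generalize are, including the Chebyshev-weight case the paper singles out. A minimal sketch of the exactness property these rules share:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebgauss
from numpy.polynomial.legendre import leggauss

# Gauss-Chebyshev: exact for ∫_{-1}^{1} p(x)/sqrt(1-x²) dx when deg p ≤ 2n-1
xc, wc = chebgauss(8)
m2 = np.sum(wc * xc**2)            # moment ∫ x²/sqrt(1-x²) dx = pi/2
print(m2, np.pi / 2)

# Gauss-Legendre for comparison: ∫_{-1}^{1} x² dx = 2/3
xl, wl = leggauss(8)
l2 = np.sum(wl * xl**2)
print(l2)
```

Moment-preserving spline approximation exploits exactly this kind of moment matching: the defective-spline parameters are chosen so that prescribed moments of the target function are reproduced, which leads to the s-orthogonality conditions discussed in the abstract.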
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
NASA Technical Reports Server (NTRS)
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
An Adaptive MR-CT Registration Method for MRI-guided Prostate Cancer Radiotherapy
Zhong, Hualiang; Wen, Ning; Gordon, James; Elshaikh, Mohamed A; Movsas, Benjamin; Chetty, Indrin J.
2015-01-01
Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ/cm3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. 
In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for development of high-quality MRI-guided radiation therapy. PMID:25775937
NASA Astrophysics Data System (ADS)
Bogunović, Igor; Pereira, Paulo; Đurđević, Boris
2017-04-01
Information on the spatial distribution of soil nutrients in agroecosystems is critical for improving productivity and reducing environmental pressures in intensively farmed soils. In this context, spatial prediction of soil properties should be accurate. In this study we analyse 704 measurements of soil available phosphorus (AP) and potassium (AK); the data derive from soil samples collected across three arable fields in the Baranja region (Croatia) on different soil types: Cambisols (169 samples), Chernozems (131 samples) and Gleysols (404 samples). The samples were collected on a regular sampling grid (225 x 225 m spacing). Several interpolation techniques were tested: Inverse Distance Weighting (IDW) with powers of 1, 2 and 3; the Radial Basis Functions (RBF) Inverse Multiquadratic (IMT), Multiquadratic (MTQ), Completely Regularized Spline (CRS), Spline with Tension (SPT) and Thin Plate Spline (TPS); Local Polynomial (LP) with powers of 1 and 2; and two geostatistical techniques, Ordinary Kriging (OK) and Simple Kriging (SK). The most accurate spatial variability maps were identified by the criterion of lowest RMSE under cross-validation. Soil parameters varied considerably throughout the studied fields, and their coefficients of variation ranged from 31.4% to 37.7% for soil AP and from 19.3% to 27.1% for AK. The experimental variograms indicate a moderate spatial dependence for AP and a strong spatial dependence for AK at all three locations. The best spatial predictor for AP at the Chernozem field was Simple Kriging (RMSE=61.711), and for AK the Inverse Multiquadratic (RMSE=44.689). The least accurate techniques were Thin Plate Spline (AP) and Inverse Distance Weighting with a power of 1 (AK). Radial Basis Function models (Spline with Tension for AP at the Gleysol and Cambisol fields, and Completely Regularized Spline for AK at the Gleysol field) were the best predictors, while Thin Plate Spline models were the least accurate in all three cases.
The best interpolator for AK at the Cambisol field was Local Polynomial with a power of 2 (RMSE=33.943), while the least accurate was Thin Plate Spline (RMSE=39.572).
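The cross-validation comparison underlying such studies is straightforward to sketch. The following toy example, on synthetic data invented for illustration (not the Baranja measurements), computes the leave-one-out RMSE of inverse-distance-weighted interpolation for several powers; kriging and RBF variants would slot into the same loop:

```python
import numpy as np

def idw_loo_rmse(coords, values, power):
    """Leave-one-out RMSE of inverse-distance-weighted interpolation."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude the held-out point itself
    w = 1.0 / d**power
    pred = (w @ values) / w.sum(axis=1)       # weighted mean of all other samples
    return float(np.sqrt(np.mean((pred - values) ** 2)))

rng = np.random.default_rng(4)
coords = rng.uniform(0, 1000, (150, 2))       # sample locations, metres
field = 50 + 0.05 * coords[:, 0] + 10 * np.sin(coords[:, 1] / 200)
values = field + rng.normal(0, 2.0, 150)      # e.g. available K, mg/kg

rmses = {p: idw_loo_rmse(coords, values, p) for p in (1, 2, 3)}
print(rmses)
```

Selecting the technique and power with the lowest cross-validation RMSE is exactly the criterion the study applies across its IDW, RBF, LP, and kriging candidates.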
An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.
2015-04-01
Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. 
In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for the development of high-quality MRI-guided radiation therapy.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of these methods have been applied in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum, exploiting its smoothness. A background-correction simulation experiment indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR), ahead of the polynomial fitting, Lorentz fitting and model-free methods. All of these background correction methods achieve larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method retains a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods yield improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
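The general idea of spline-based continuum removal can be sketched on a synthetic spectrum. The approach below anchors a cubic spline at the minimum of each spectral window (emission lines only add intensity, so window minima sit essentially on the continuum) and subtracts the interpolated baseline; it is a minimal sketch assuming an invented spectrum, not the authors' exact detection procedure:

```python
import numpy as np
from scipy.interpolate import CubicSpline

wav = np.linspace(300, 600, 3001)                       # wavelength grid, nm
continuum = 200 * np.exp(-(((wav - 450) / 120) ** 2))   # smooth background
lines = (500 * np.exp(-0.5 * ((wav - 324.7) / 0.3) ** 2)
         + 400 * np.exp(-0.5 * ((wav - 327.4) / 0.3) ** 2)
         + 300 * np.exp(-0.5 * ((wav - 510.5) / 0.3) ** 2))
spectrum = continuum + lines

# anchor points: the minimum of each window lies on the continuum,
# since narrow emission lines only ever add intensity
win = 150
idx = np.array([s + np.argmin(spectrum[s:s + win])
                for s in range(0, spectrum.size - win + 1, win)])
baseline = CubicSpline(wav[idx], spectrum[idx])(wav)
corrected = spectrum - baseline

print(np.max(np.abs(corrected - lines)))  # residual baseline error, small vs line heights
```

After subtraction, the line intensities stand on a near-zero baseline, which is what raises the signal-to-background ratio in the study's comparisons.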
Reports 10, The Yugoslav Serbo-Croatian-English Contrastive Project.
ERIC Educational Resources Information Center
Filipovic, Rudolf
The tenth volume in this series contains five articles dealing with various aspects of Serbo-Croatian-English contrastive analysis. They are: "The Infinitive as Subject in English and Serbo-Croatian," by Ljiljana Bibovic; "The Contrastive Analysis of Collocations: Collocational Ranges of "Make" and "Take" with Nouns and Their Serbo-Croatian…
No Silver Bullet: L2 Collocation Instruction in an Advanced Spanish Classroom
ERIC Educational Resources Information Center
Jensen, Eric Carl
2017-01-01
Many contemporary second language (L2) instructional materials feature collocation exercises; however, few studies have verified their effectiveness (Boers, Demecheleer, Coxhead, & Webb, 2014) or whether these exercises can be utilized for target languages beyond English (Higueras García, 2017). This study addresses these issues by…
Assessing Team Learning in Technology-Mediated Collaboration: An Experimental Study
ERIC Educational Resources Information Center
Andres, Hayward P.; Akan, Obasi H.
2010-01-01
This study examined the effects of collaboration mode (collocated versus non-collocated videoconferencing-mediated) on team learning and team interaction quality in a team-based problem solving context. Situated learning theory and the theory of affordances are used to provide a framework that describes how technology-mediated collaboration…
Collocation in Regional Development--The Peel Education and TAFE Response.
ERIC Educational Resources Information Center
Goff, Malcolm H.; Nevard, Jennifer
The collocation of services in regional Western Australia (WA) is an important strand of WA's regional development policy. The initiative is intended to foster working relationships among stakeholder groups with a view toward ensuring that regional WA communities have access to quality services. Clustering compatible services in smaller…
Interlanguage Development and Collocational Clash
ERIC Educational Resources Information Center
Shahheidaripour, Gholamabbass
2000-01-01
Background: Persian English learners committed mistakes and errors which were due to insufficient knowledge of different senses of the words and collocational structures they formed. Purpose: The study reported here was conducted for a thesis submitted in partial fulfillment of the requirements for The Master of Arts degree, School of Graduate…
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... unbundled network element if and only if the primary purpose and function of the equipment, as the... nondiscriminatory access to that unbundled network element, including any of its features, functions, or... must be a logical nexus between the additional functions the equipment would perform and the...
Testing ESL Learners' Knowledge of Collocations.
ERIC Educational Resources Information Center
Bonk, William J.
This study reports on the development, administration, and analysis of a test of collocational knowledge for English-as-a-Second-Language (ESL) learners of a wide range of proficiency levels. Through native speaker item validation and pilot testing, three subtests were developed and administered to 98 ESL learners of low-intermediate to advanced…
Modeling terminal ballistics using blending-type spline surfaces
NASA Astrophysics Data System (ADS)
Pedersen, Aleksander; Bratlie, Jostein; Dalmo, Rune
2014-12-01
We explore using GERBS, a blending-type spline construction, to represent deformable thin plates and model terminal ballistics. Strategies to construct geometry for different terminal ballistics scenarios are proposed.
Quadratic trigonometric B-spline for image interpolation using GA
Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah
2017-01-01
In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems in two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with a soft computing technique, the Genetic Algorithm (GA). The GA is used to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for the problems associated with image interpolation. PMID:28640906
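A minimal SciPy sketch of the kind of pipeline the abstract describes: cubic B-spline upscaling of an image patch plus a PSNR score. The test image, zoom factor, and metric are hypothetical stand-ins; the trigonometric spline construction and the GA optimization step are not reproduced here.

```python
import numpy as np
from scipy import ndimage

# Hypothetical 8-bit grayscale patch (stand-in for a real test image).
rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(32, 32)).astype(float)

# Upscale 2x; order=3 selects cubic B-spline interpolation in scipy.ndimage.
zoomed = ndimage.zoom(img, 2.0, order=3)

def psnr(ref, test, peak=255.0):
    """Traditional Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

The same `psnr` helper can score any of the interpolation schemes being compared against a ground-truth image.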
Illumination estimation via thin-plate spline interpolation.
Shi, Lilong; Xiong, Weihua; Funt, Brian
2011-05-01
Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
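The core interpolation step is available directly in SciPy; a minimal sketch with synthetic (hypothetical) training data standing in for the image thumbnails and measured illumination chromaticities. The k-medians pruning step is omitted.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical training set: 2-D thumbnail features -> illumination chromaticity (r, g).
rng = np.random.default_rng(0)
features = rng.uniform(0.0, 1.0, size=(40, 2))   # nonuniformly sampled input space
chroma = np.column_stack([
    0.3 + 0.2 * features[:, 0],                  # synthetic r-chromaticity
    0.4 - 0.1 * features[:, 1],                  # synthetic g-chromaticity
])

# Thin-plate spline interpolant over the scattered training points.
tps = RBFInterpolator(features, chroma, kernel="thin_plate_spline")

# Estimate illumination chromaticity for a new "image".
query = np.array([[0.5, 0.5]])
estimate = tps(query)[0]
```

Because the thin-plate spline includes a degree-1 polynomial term, it reproduces the affine synthetic mapping above exactly.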
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained, and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms, and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
Elastostatic stress analysis of orthotropic rectangular center-cracked plates
NASA Technical Reports Server (NTRS)
Gyekenyesi, G. S.; Mendelson, A.
1972-01-01
A mapping-collocation method was developed for the elastostatic stress analysis of finite, anisotropic plates with centrally located traction-free cracks. The method essentially consists of mapping the crack into the unit circle and satisfying the crack boundary conditions exactly with the help of Muskhelishvili's function extension concept. The conditions on the outer boundary are satisfied approximately by applying the method of least-squares boundary collocation. A parametric study of finite-plate stress intensity factors, employing this mapping-collocation method, is presented. It shows the effects of varying material properties, orientation angle, and crack-length-to-plate-width and plate-height-to-plate-width ratios for rectangular orthotropic plates under constant tensile and shear loads.
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise
2003-01-01
This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.
Locating CVBEM collocation points for steady state heat transfer problems
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.
Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.
2013-09-01
In this paper, we propose the shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which arise widely in fuel ignition models of combustion theory and in heat transfer. The spatial approximation is based on shifted Jacobi polynomials J_n^(α,β)(x) with α, β ∈ (-1, ∞), x ∈ [0, 1], and polynomial degree n. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples are discussed to demonstrate the validity and applicability of the proposed technique. Comparison of the numerical results of the proposed method with some well-known results shows that the method is efficient and gives excellent numerical results.
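The collocation nodes themselves are easy to generate with SciPy's Gauss-Jacobi quadrature; a sketch with illustrative parameter values (the Bratu solver built on these nodes is not shown).

```python
import numpy as np
from scipy.special import roots_jacobi

# Illustrative Jacobi parameters (alpha, beta > -1) and polynomial degree n.
alpha, beta, n = 0.5, 0.5, 8

# Gauss nodes and weights of the Jacobi polynomial J_n^(alpha,beta) on [-1, 1].
t, w = roots_jacobi(n, alpha, beta)

# Shift the nodes to [0, 1], the collocation interval used for the Bratu problem.
x = np.sort(0.5 * (t + 1.0))
```

The weights sum to the integral of the Jacobi weight function (1-t)^α (1+t)^β over [-1, 1], which for α = β = 1/2 equals π/2.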
Effect of interpolation on parameters extracted from seating interface pressure arrays.
Wininger, Michael; Crane, Barbara
2014-01-01
Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effects of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times sampling density), were analyzed. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolation (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) interpolation orders of 2, 4, and 8 times differ only nominally (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
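Recommendations (1) and (2) can be sketched in SciPy with a hypothetical 16x16 pressure map: low-pass filter first, then bicubic spline interpolation to 4x sampling density.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.interpolate import RectBivariateSpline

# Hypothetical 16x16 seating-interface pressure map (arbitrary units).
rng = np.random.default_rng(1)
pressure = rng.uniform(0.0, 100.0, size=(16, 16))

# Recommendation (1): low-pass filter FIRST, then interpolate.
smoothed = gaussian_filter(pressure, sigma=1.0)

# Recommendation (2): bicubic spline interpolation (kx = ky = 3) ...
rows, cols = np.arange(16), np.arange(16)
spline = RectBivariateSpline(rows, cols, smoothed, kx=3, ky=3)

# ... evaluated at 4x the original sampling density.
fine = np.linspace(0, 15, 64)
upsampled = spline(fine, fine)
```

With the default smoothing factor s=0, the spline passes exactly through the filtered samples, so feature extraction on the upsampled map is driven by the filtering choice, not the spline.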
On the geodetic applications of simultaneous range-differencing to LAGEOS
NASA Technical Reports Server (NTRS)
Pablis, E. C.
1982-01-01
The possibility of improving the accuracy of geodetic results by use of simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network satellite pass configurations. Least-squares approximation using monomials and Chebyshev polynomials is compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track, and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observations show that for baseline length estimation the most useful data are those collected in a direction parallel to the baseline and at low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but further reduces the effects of model biases on the results as compared with a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.
Empirical Mode Decomposition and Neural Networks on FPGA for Fault Diagnosis in Induction Motors
Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus
2014-01-01
Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems as the case of detection and classification of multiple faults in induction motors. In this work, a novel digital structure to implement the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function is presented; besides, it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feed forward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis during the startup transient of motor faults such as: one and two broken rotor bars, bearing defects, and unbalance. Moreover, the overall methodology implementation into a field-programmable gate array (FPGA) allows an online and real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; besides, the high precision and minimum resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications. PMID:24678281
NASA Astrophysics Data System (ADS)
Rura, Christopher; Stollberg, Mark
2018-01-01
The Astronomical Almanac is an annual publication of the US Naval Observatory (USNO) and contains a wide variety of astronomical data used by astronomers worldwide as a general reference or for planning observations. Included in this almanac are the times of greatest eastern and northern elongations of the natural satellites of the planets, accurate to 0.1 hour UT. The production code currently used to determine elongation times generates X and Y coordinates for each satellite (16 total) at 5-second intervals. This produces very large data files and makes the program that determines the elongation times computationally intensive. To make this program more efficient, we wrote a Python program to fit a cubic spline to data generated with a 6-minute time step. The resulting elongation times agree with those determined from the 5-second data in a large number of cases, as tested for 16 satellites between 2017 and 2019. The accuracy of this program is being tested for the years beyond 2019 and, if no problems are found, the code will be considered for production of this section of The Astronomical Almanac.
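The spline step can be sketched in a few lines of SciPy: fit a cubic spline to coarse samples, then locate the extremum from the roots of the spline's derivative. The satellite track below is a hypothetical cosine, not USNO data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical elongation geometry: apparent X-offset of a satellite (arcsec)
# peaking at t = 7.3 hours, sampled with a coarse 6-minute (0.1 h) step.
t_coarse = np.arange(0.0, 12.0, 0.1)
x_coarse = 30.0 * np.cos(2 * np.pi * (t_coarse - 7.3) / 24.0)

# Cubic spline through the coarse samples; greatest elongation is where
# the derivative of the spline crosses zero.
spline = CubicSpline(t_coarse, x_coarse)
crit = spline.derivative().roots(extrapolate=False)
t_peak = crit[np.argmax(spline(crit))]
```

Even from the coarse samples, the spline recovers the extremum time well inside the 0.1-hour tolerance quoted above.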
NASA Astrophysics Data System (ADS)
Gábor Hatvani, István; Kern, Zoltán; Leél-Őssy, Szabolcs; Demény, Attila
2018-01-01
Uneven spacing is a common feature of sedimentary paleoclimate records, in many cases causing difficulties in the application of classical statistical and time series methods. Although special statistical tools do exist to assess unevenly spaced data directly, the transformation of such data into a temporally equidistant time series which may then be examined using commonly employed statistical tools remains, however, an unachieved goal. The present paper, therefore, introduces an approach to obtain evenly spaced time series (using cubic spline fitting) from unevenly spaced speleothem records with the application of a spectral guidance to avoid the spectral bias caused by interpolation and retain the original spectral characteristics of the data. The methodology was applied to stable carbon and oxygen isotope records derived from two stalagmites from the Baradla Cave (NE Hungary) dating back to the late 18th century. To show the benefit of the equally spaced records to climate studies, their coherence with climate parameters is explored using wavelet transform coherence and discussed. The obtained equally spaced time series are available at https://doi.org/10.1594/PANGAEA.875917.
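The resampling step (without the spectral-guidance correction, which is the paper's actual contribution) can be sketched as follows; the proxy record is hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical unevenly spaced proxy record (e.g. delta-18O vs. age in years).
rng = np.random.default_rng(2)
age = np.sort(rng.uniform(1780.0, 2000.0, size=60))
d18o = -7.0 + 0.5 * np.sin(2 * np.pi * age / 50.0)

# Cubic spline through the uneven samples, evaluated on an equidistant grid
# (no extrapolation beyond the observed age range).
spline = CubicSpline(age, d18o)
age_even = np.arange(np.ceil(age[0]), np.floor(age[-1]) + 1.0, 2.0)  # 2-year step
d18o_even = spline(age_even)
```

The evenly spaced series can then be fed to classical time series tools; the spectral-bias correction discussed in the abstract would adjust this naive interpolation.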
Chemical Shift Encoded Water–Fat Separation Using Parallel Imaging and Compressed Sensing
Sharma, Samir D.; Hu, Houchun H.; Nayak, Krishna S.
2013-01-01
Chemical shift encoded techniques have received considerable attention recently because they can reliably separate water and fat in the presence of off-resonance. The insensitivity to off-resonance requires that data be acquired at multiple echo times, which increases the scan time as compared to a single echo acquisition. The increased scan time often requires that a compromise be made between the spatial resolution, the volume coverage, and the tolerance to artifacts from subject motion. This work describes a combined parallel imaging and compressed sensing approach for accelerated water–fat separation. In addition, the use of multiscale cubic B-splines for B0 field map estimation is introduced. The water and fat images and the B0 field map are estimated via an alternating minimization. Coil sensitivity information is derived from a calculated k-space convolution kernel and l1-regularization is imposed on the coil-combined water and fat image estimates. Uniform water–fat separation is demonstrated from retrospectively undersampled data in the liver, brachial plexus, ankle, and knee as well as from a prospectively undersampled acquisition of the knee at 8.6x acceleration. PMID:22505285
SPLASH program for three dimensional fluid dynamics with free surface boundaries
NASA Astrophysics Data System (ADS)
Yamaguchi, A.
1996-05-01
This paper describes a three-dimensional computer program, SPLASH, that solves the Navier-Stokes equations based on the Arbitrary Lagrangian Eulerian (ALE) finite element method. SPLASH has been developed for application to fluid dynamics problems involving the moving boundary of a liquid metal cooled Fast Breeder Reactor (FBR). To apply the SPLASH code to free surface behavior analysis, a capillary model using a cubic spline function has been developed. Several sample problems, e.g., free surface oscillation, vortex shedding development, and capillary tube phenomena, are solved to verify the computer program. In these analyses, the numerical results are in good agreement with theoretical values or experimental observations. The SPLASH code has also been applied to the analysis of a free surface sloshing experiment coupled with forced circulation flow in a rectangular tank, a simplified version of the flow field in a reactor vessel of the FBR. The computational simulation predicts well the general behavior of the fluid flow inside the tank and of the free surface. The analytical capability of the SPLASH code has been verified in this study, and its application to more practical problems such as FBR design and safety analysis is under way.
Distributed Sensing and Shape Control of Piezoelectric Bimorph Mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redmond, James M.; Barney, Patrick S.; Henson, Tammy D.
1999-07-28
As part of a collaborative effort between Sandia National Laboratories and the University of Kentucky to develop a deployable mirror for remote sensing applications, research in shape sensing and control algorithms that leverage the distributed nature of electron gun excitation for piezoelectric bimorph mirrors is summarized. A coarse shape sensing technique is developed that uses reflected light rays from the sample surface to provide discrete slope measurements. Estimates of surface profiles are obtained with a cubic spline curve fitting algorithm. Experiments on a PZT bimorph illustrate appropriate deformation trends as a function of excitation voltage. A parallel effort to effect desired shape changes through electron gun excitation is also summarized. A one-dimensional model-based algorithm is developed to correct profile errors in bimorph beams. A more useful two-dimensional algorithm is also developed that relies on measured voltage-curvature sensitivities to provide corrective excitation profiles for the top and bottom surfaces of bimorph plates. The two algorithms are illustrated using finite element models of PZT bimorph structures subjected to arbitrary disturbances. Corrective excitation profiles that yield desired parabolic forms are computed and are shown to provide the necessary corrective action.
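The slope-to-profile step can be sketched with SciPy: spline the discrete slope measurements, then take the antiderivative to estimate the surface. The parabolic mirror profile below is hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical mirror: parabolic profile y = 0.01 x^2 over a 50 mm span,
# known only through discrete slope measurements dy/dx = 0.02 x.
x_meas = np.linspace(0.0, 50.0, 11)
slopes = 0.02 * x_meas

# Spline the slope samples, then integrate (antiderivative) to estimate the
# surface profile, anchored at y(0) = 0.
slope_spline = CubicSpline(x_meas, slopes)
profile = slope_spline.antiderivative()
y_est = profile(x_meas) - profile(x_meas[0])
```

Because the synthetic slopes are linear in x, the spline and its antiderivative recover the parabolic profile exactly; real slope data would carry measurement noise.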
SEMIPARAMETRIC ZERO-INFLATED MODELING IN MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA)
Liu, Hai; Ma, Shuangge; Kronmal, Richard; Chan, Kung-Sik
2013-01-01
We analyze the Agatston score of coronary artery calcium (CAC) from the Multi-Ethnic Study of Atherosclerosis (MESA) using a semiparametric zero-inflated modeling approach, where the observed CAC scores from this cohort consist of a high frequency of zeroes and continuously distributed positive values. Both partially constrained and unconstrained models are considered to investigate the underlying biological processes of CAC development from zero to positive, and from small amounts to large amounts. Unlike existing studies, a model selection procedure based on likelihood cross-validation is adopted to identify the optimal model, which is justified by comparative Monte Carlo studies. A shrinkage version of the cubic regression spline is used for model estimation and variable selection simultaneously. When applying the proposed methods to the MESA data analysis, we show that the two biological mechanisms influencing the initiation of CAC and the magnitude of CAC when it is positive are better characterized by an unconstrained zero-inflated normal model. Our results are significantly different from those in published studies and may provide further insights into the biological mechanisms underlying CAC development in humans. This highly flexible statistical framework can be applied to zero-inflated data analyses in other areas. PMID:23805172
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
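The model-free photometric path can be sketched with a shape-preserving monotone cubic spline (PCHIP) of the inverse response plus a look-up table for correction; the logarithmic response curve below is hypothetical, not the paper's sensor data.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical logarithmic pixel: response y (DN) is a monotonic nonlinear
# function of stimulus x; calibration samples span the dynamic range.
x_cal = np.logspace(0, 4, 9)             # stimulus (arbitrary units)
y_cal = 120.0 + 40.0 * np.log10(x_cal)   # monotone response in DN, 120..280

# A shape-preserving (monotone) cubic spline of the INVERSE mapping y -> x
# gives photometric correction without any circuit-based model.
inverse = PchipInterpolator(y_cal, x_cal)

# Fixed-point-friendly correction: precompute a look-up table over the DN range.
dn = np.arange(120, 281)
lut = inverse(dn)
```

Monotonicity of the response is what makes the inverse well defined, and the PCHIP construction guarantees the interpolated inverse is monotone too, so the LUT never reorders stimulus estimates.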
Automatic control system for uniformly paving iron ore pellets
NASA Astrophysics Data System (ADS)
Wang, Bowen; Qian, Xiaolong
2014-05-01
In the iron and steelmaking industry, iron ore pellet quality is crucial to end-product properties, manufacturing costs, and waste emissions. Uniform pellet pavements on the grate machine are a fundamental prerequisite to ensure even heat transfer, and pellet induration in turn influences the performance of the subsequent metallurgical processes. This article presents an automatic control system for uniformly paving green pellets on the grate, via a mechanism mainly consisting of a mechanical linkage, a swinging belt, a conveyance belt, and a grate. Mechanism analysis illustrates that uniform pellet pavements demand that the front end of the swinging belt oscillate at a constant angular velocity. Subsequently, kinetic models are formulated to relate oscillatory movements of the swinging belt's front end to rotations of a crank link driven by a motor. On the basis of the kinetic analysis of the pellet feeding mechanism, a cubic B-spline model is built for numerically computing the discrete frequencies to be modulated during a motor rotation. The pellet feeding control system is then presented in terms of its hardware and software components and their functional relationships. Finally, pellet feeding experiments are carried out to demonstrate that the control system is effective, reliable, and superior to conventional methods.
StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.
2018-05-01
Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties - density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using a direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch to prevent unphysical interparticle penetration. The code also implements an artificial relaxation force to the equations of motion to add a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
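The cubic spline smoothing kernel mentioned above is the standard M4 kernel; a sketch of its 3-D form with support radius 2h (StarSmasher's internal normalization conventions may differ).

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard M4 cubic spline SPH kernel in 3-D with support radius 2h."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h ** 3)      # 3-D normalization constant
    w = np.where(
        q < 1.0,
        1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0),
    )
    return sigma * w
```

The compact support is what keeps SPH neighbor sums local; the normalization makes the kernel integrate to unity over 3-D space.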
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline, or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation, and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
3D Segmentation with an application of level set-method using MRI volumes for image guided surgery.
Bosnjak, A; Montilla, G; Villegas, R; Jara, I
2007-01-01
This paper proposes an innovation in image guided surgery based on a comparative study of three different segmentation methods. These methods are faster than manual segmentation of images, with the advantage of using the same patient as the anatomical reference, which is more precise than a generic atlas. The new methodology for 3D information extraction is based on a processing chain structured in the following modules: 1) 3D filtering: the purpose is to preserve the contours of the structures and to smooth the homogeneous areas; several filters were tested and finally an anisotropic diffusion filter was used. 2) 3D segmentation: this module compares three different methods: a region-growing algorithm, hand-assisted cubic splines, and the Level Set method. It then proposes a Level Set approach based on front propagation that allows reconstruction of the internal walls of the anatomical structures of the brain. 3) 3D visualization: the new contribution of this work is the visualization of the segmented model and its use in pre-surgical planning.
Zhang, Chao; Jia, Pengli; Yu, Liu; Xu, Chang
2018-05-01
Dose-response meta-analysis (DRMA) is widely applied to investigate the dose-specific relationship between independent and dependent variables. Such methods have been in use for over 30 years and are increasingly employed in healthcare and clinical decision-making. In this article, we give an overview of the methodology used in DRMA. We summarize the commonly used regression models and the pooling methods in DRMA, and use an example to illustrate how to carry out a DRMA with these methods. Five regression models are illustrated for fitting the dose-response relationship: linear regression, piecewise regression, natural polynomial regression, fractional polynomial regression, and restricted cubic spline regression. Two types of pooling approaches, the one-stage approach and the two-stage approach, are illustrated for pooling the dose-response relationship across studies. The example showed similar results among these models. Several dose-response meta-analysis methods can be used for investigating the relationship between exposure level and the risk of an outcome; however, the methodology of DRMA still needs to be improved. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
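Of the five models, the restricted cubic spline basis is the least standard to write down; a sketch using Harrell's parameterization, which forces linear tails beyond the boundary knots (the knot placement below is illustrative).

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's parameterization):
    a linear column plus k-2 nonlinear columns, linear beyond the last knot."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    scale = (t[-1] - t[0]) ** 2          # keeps columns on comparable scales
    plus3 = lambda u: np.maximum(u, 0.0) ** 3
    cols = [x]
    for j in range(k - 2):
        term = (plus3(x - t[j])
                - plus3(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
                + plus3(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(term / scale)
    return np.column_stack(cols)
```

Regressing an outcome on these columns (e.g. within a two-stage DRMA) yields a dose-response curve that is cubic between knots but, by construction, linear outside the observed dose range.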
NASA Astrophysics Data System (ADS)
Liu, Ying; Xu, Zhenhuan; Li, Yuguo
2018-04-01
We present a goal-oriented adaptive finite element (FE) modelling algorithm for 3-D magnetotelluric fields in generally anisotropic conductivity media. The model consists of a background layered structure containing anisotropic blocks. Each block and layer may be anisotropic, by assigning to them 3 × 3 conductivity tensors. The second-order partial differential equations are solved using the adaptive finite element method (FEM). The computational domain is subdivided into unstructured tetrahedral elements, which allow for complex geometries including bathymetry and dipping interfaces. The grid refinement process is guided by a global a posteriori error estimator and is performed iteratively. The system of linear FE equations for the electric field E is solved with the direct solver MUMPS. The magnetic field H can then be found, in which the required derivatives are computed numerically using cubic spline interpolation. The 3-D FE algorithm has been validated by comparisons with both a 3-D finite-difference solution and 2-D FE results. Two model types are used to demonstrate the effects of anisotropy on 3-D magnetotelluric responses: horizontal and dipping anisotropy. Finally, a 3-D sea hill model is simulated to study the effects of oblique interfaces and dipping anisotropy.
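The derivative step can be sketched with SciPy's CubicSpline, here applied to a known 1-D test field rather than actual FE output: spline the sampled field component, then evaluate the spline's analytic derivative where the curl terms are needed.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 1-D profile of an electric-field component sampled on a grid;
# computing H requires its spatial derivative (test function: sin z).
z = np.linspace(0.0, 2.0 * np.pi, 41)
e_field = np.sin(z)

# Cubic spline through the sampled field; its analytic first derivative
# stands in for the numerical derivatives feeding the H computation.
spline = CubicSpline(z, e_field)
dE_dz = spline(z, 1)    # evaluate first derivative at the sample points
```

For smooth fields the spline derivative converges rapidly with grid refinement, which is why it is preferred over low-order finite differences here.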
Wang, Wei; Albert, Jeffrey M
2017-08-01
An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.
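As a minimal numeric illustration of one of the effect scales above: the restricted mean survival time is the area under the survival curve up to a horizon tau. The hazard below is a hypothetical constant one chosen so a closed form exists, not the paper's fitted Jackson Heart Study model:

```python
import numpy as np
from scipy.integrate import trapezoid

lam, tau = 0.5, 5.0                       # hypothetical constant hazard, horizon
t = np.linspace(0.0, tau, 1001)
H = lam * t                               # cumulative hazard H(t)
S = np.exp(-H)                            # survival function S(t) = exp(-H(t))

rmst = trapezoid(S, t)                    # restricted mean survival time on [0, tau]
exact = (1.0 - np.exp(-lam * tau)) / lam  # closed form for the exponential case
```

In the paper's setting the log cumulative hazard is approximated by fractional polynomials or restricted cubic splines rather than being constant, but the integration step is the same.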
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Goodlet, Brent; Mazdiyasni, Siamack
2018-04-01
A case study is presented evaluating uncertainty in Resonance Ultrasound Spectroscopy (RUS) inversion for single-crystal (SX) Ni-based superalloy Mar-M247 cylindrical dog-bone specimens. A number of surrogate models were developed from FEM model solutions, using different sampling schemes (regular grid, Monte Carlo sampling, Latin hypercube sampling) and modelling approaches (N-dimensional cubic spline interpolation and Kriging). Repeated studies were used to quantify the well-posedness of the inversion problem, and the uncertainty in material property and crystallographic orientation estimates was assessed given typical geometric dimension variability in aerospace components. Surrogate model quality was found to be an important factor in inversion results when the model closely represents the test data. One important finding was that, when the model matched the test data well, a Kriging surrogate model built from unsorted Latin hypercube samples performed as well as the best results from an N-dimensional interpolation model using sorted data. However, both surrogate model quality and mode sorting were found to be less critical when inverting properties from either experimental data or simulated test cases with uncontrolled geometric variation.
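The sampling-plus-surrogate workflow can be sketched as follows. The forward model here is a hypothetical cheap stand-in for the expensive FEM resonance solver, and RBF interpolation stands in for the Kriging surrogate:

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def forward_model(p):
    # Hypothetical cheap stand-in for the expensive FEM resonance solver.
    return np.sin(p[:, 0]) + np.cos(p[:, 1])

# Latin hypercube sample of a 2-D parameter space (e.g. two elastic constants).
sampler = qmc.LatinHypercube(d=2, seed=0)
pts = qmc.scale(sampler.random(200), [0.0, 0.0], [3.0, 3.0])
vals = forward_model(pts)                      # "expensive" solver runs

# Surrogate fitted to the sampled solutions.
surrogate = RBFInterpolator(pts, vals, kernel="thin_plate_spline")

query = np.array([[1.5, 1.5]])                 # a point never run through the solver
pred = surrogate(query)[0]
```

The inversion then searches over the surrogate instead of re-running the solver; surrogate quality is exactly the question of how close `pred` stays to the true forward model across the parameter space.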
Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines
Tan, Yunhao; Hua, Jing; Qin, Hong
2009-01-01
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately represent the geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with ground truth obtained through intra-operative magnetic resonance imaging and real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636
A thin-plate spline analysis of the face and tongue in obstructive sleep apnea patients.
Pae, E K; Lowe, A A; Fleetham, J A
1997-12-01
The shape characteristics of the face and tongue in obstructive sleep apnea (OSA) patients were investigated using thin-plate (TP) splines. A relatively new analytic tool, the TP spline method provides a means of size normalization and image analysis. When shape is the main concern, the varying sizes of a biologic structure may be a source of statistical noise; more seriously, a strong size effect could mask the underlying, actual attributes of the disease. A set of size-normalized data in the form of coordinates was generated from cephalograms of 80 male subjects. The TP spline method visualized the differences in the shape of the face and tongue between OSA patients and nonapneic subjects, and between the upright and supine body positions. In accordance with OSA severity, the hyoid bone and the submental region were positioned more inferiorly, and the fourth vertebra was relocated posteriorly with respect to the mandible. This produced a fanlike configuration of the lower part of the face and neck in the sagittal plane in both upright and supine body positions. TP splines also revealed tongue deformations caused by a change in body position. Overall, the new morphometric tool adopted here was found to be viable for the analysis of morphologic changes.
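A 2-D thin-plate spline warp between landmark sets can be sketched with SciPy; the landmark coordinates here are hypothetical, not the study's cephalometric data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmarks: a reference shape and a "patient" shape.
source = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
target = np.array([[0.0, 0.1], [1.0, 0.0], [1.1, 1.0], [0.0, 0.9], [0.5, 0.4]])

# Thin-plate spline map; with smoothing=0 it interpolates the landmarks exactly.
tps = RBFInterpolator(source, target, kernel="thin_plate_spline", smoothing=0.0)

warped = tps(source)                    # landmark positions under the deformation
other = tps(np.array([[0.25, 0.75]]))  # any other point can be warped too
```

Because the TP spline minimizes bending energy subject to matching the landmarks, the fitted map reproduces the target landmarks exactly, and its behavior between landmarks is what visualizes the shape deformation.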
ERIC Educational Resources Information Center
Gyllstad, Henrik; Wolter, Brent
2016-01-01
The present study investigates whether two types of word combinations (free combinations and collocations) differ in terms of processing by testing Howarth's Continuum Model based on word combination typologies from a phraseological tradition. A visual semantic judgment task was administered to advanced Swedish learners of English (n = 27) and…
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... unbundled network elements. (1) Equipment is necessary for interconnection if an inability to deploy that... obtains within its own network or the incumbent provides to any affiliate, subsidiary, or other party. (2) Equipment is necessary for access to an unbundled network element if an inability to deploy that equipment...
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... unbundled network elements. (1) Equipment is necessary for interconnection if an inability to deploy that... obtains within its own network or the incumbent provides to any affiliate, subsidiary, or other party. (2) Equipment is necessary for access to an unbundled network element if an inability to deploy that equipment...
ERIC Educational Resources Information Center
Snoder, Per
2017-01-01
This article reports on a classroom-based experiment that tested the effects of three vocabulary teaching constructs (involvement load, spacing, and intentionality) on the learning of English verb-noun collocations--for example, "shelve a plan." Laufer and Hulstijn's (2001) "involvement load" predicts that the higher the…
Strategies in Translating Collocations in Religious Texts from Arabic into English
ERIC Educational Resources Information Center
Dweik, Bader S.; Shakra, Mariam M. Abu
2010-01-01
The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-02
... quarter substitution test. ``Collocated'' indicates that the collocated data was substituted for missing... 24-hour standard design value is greater than the level of the standard. EPA addresses missing data... substituted for the missing data. In the maximum quarter test, maximum recorded values are substituted for the...
Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1994-01-01
This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuators and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.
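The optimal-control ingredient can be illustrated on a toy system: the sketch below computes an LQR state-feedback gain for a double integrator (a generic stand-in for a single plate mode, not the paper's plate model or its full LQG compensator) via the continuous algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single-mode model (double integrator): position and rate states.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])               # actuator input channel
Q = np.diag([1.0, 1.0])            # state weighting (position, rate)
R = np.array([[1.0]])              # control-effort weighting

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation solution
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain
closed_loop = A - B @ K                # closed-loop dynamics matrix
```

For these weights the gain comes out as K = [1, sqrt(3)], and the closed-loop matrix is stable; in the collocated position/rate-sensor setting of the paper, both weighted states are directly measured.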
USDA-ARS?s Scientific Manuscript database
If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
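The classic (non-generalized) triple collocation estimator, which the proposed GTC extends, can be sketched on synthetic data; the products and error standard deviations below are assumptions for illustration, and the estimator requires independent, zero-mean errors across the three products:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

truth = rng.normal(0.0, 1.0, n)            # unknown true soil moisture signal
x = truth + rng.normal(0.0, 0.20, n)       # e.g. satellite retrieval
y = truth + rng.normal(0.0, 0.30, n)       # e.g. land-surface model estimate
z = truth + rng.normal(0.0, 0.10, n)       # e.g. in situ observations

C = np.cov(np.vstack([x, y, z]))
# Classic triple collocation: each product's error variance from the
# covariance matrix of the three collocated products.
err_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
err_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
err_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
```

With a large sample the estimates recover the true error variances (0.04, 0.09, 0.01); auto-correlated errors violate the independence assumption, which is the gap the generalized form addresses.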
Collocational Competence of Arabic Speaking Learners of English: A Study in Lexical Semantics.
ERIC Educational Resources Information Center
Zughoul, Muhammad Raji; Abdul-Fattah, Hussein S.
This study examined learners' productive competence in collocations and idioms by means of their performance on two interdependent tasks. Participants were two groups of English as a Foreign Language undergraduate and graduate students from the English department at Jordan's Yarmouk University. The two tasks included the following: a multiple…