Science.gov

Sample records for spline collocation method

  1. Schwarz and multilevel methods for quadratic spline collocation

    SciTech Connect

    Christara, C.C.; Smith, B.

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.

  2. Spline collocation method for linear singular hyperbolic systems

    NASA Astrophysics Data System (ADS)

    Gaidomak, S. V.

    2008-07-01

    Some classes of singular systems of partial differential equations with variable matrix coefficients and internal hyperbolic structure are considered. The spline collocation method is used to numerically solve such systems. Sufficient conditions for the convergence of the numerical procedure are obtained. Numerical results are presented.

  3. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  4. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    SciTech Connect

    Kim, Sang Dong

    1996-12-31

    In this talk we discuss finite element and finite difference preconditioning techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).
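
    For context, orthogonal collocation with C1 Hermite cubics places two Gauss-Legendre points in each subinterval; a standard statement of these points (quoted for illustration, not taken from the abstract itself) is

      \xi_i^{\pm} = \frac{x_i + x_{i+1}}{2} \pm \frac{h}{2\sqrt{3}}, \qquad h = x_{i+1} - x_i .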

  5. Parameter estimation technique for boundary value problems by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1988-01-01

    A parameter-estimation technique for boundary-integral equations of the second kind is developed. The output least-squares identification technique using the spline collocation method is considered. The convergence analysis for the numerical method is discussed. The results are applied to boundary parameter estimations for two-dimensional Laplace and Helmholtz equations.

  6. Orthogonal cubic spline collocation method for the extended Fisher-Kolmogorov equation

    NASA Astrophysics Data System (ADS)

    Danumjaya, P.; Pani, Amiya K.

    2005-02-01

    A second-order splitting combined with an orthogonal cubic spline collocation method is formulated and analysed for the extended Fisher-Kolmogorov equation. With the help of a Lyapunov functional, a bound in the maximum norm is derived for the semidiscrete solution. Optimal error estimates are established for the semidiscrete case. Finally, using monomial basis functions, we present numerical results in which the time integration is performed with the RADAU5 software library.

  7. An ADI extrapolated Crank-Nicolson orthogonal spline collocation method for nonlinear reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Fernandes, Ryan I.; Fairweather, Graeme

    2012-08-01

    An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.

  8. Higher Order B-Spline Collocation at the Greville Abscissae

    SciTech Connect

    Johnson, Richard Wayne

    2005-01-01

    Collocation methods are investigated because of their simplicity and inherent efficiency for application to a model problem with similarities to the equations of fluid dynamics. The model problem is a steady, one-dimensional convection-diffusion equation with constant coefficients. The objective of the present research is to compare the efficiency and accuracy of several collocation schemes as applied to the model problem for Peclet numbers of 15 and 50. The application of standard nodal and orthogonal collocation is compared to the use of the Greville abscissae for the collocation points, in conjunction with cubic and quartic B-splines. The continuity of the B-spline curve solution is varied from C1 continuity for traditional orthogonal collocation of cubic and quartic splines to C2-C3 continuity for cubic and quartic splines employing nodal, orthogonal and Greville point collocation. The application of nodal, one-point orthogonal, and Greville collocation for smoothest quartic B-splines is found to be as accurate as for traditional two-point orthogonal collocation using cubics, while having comparable or better efficiency based on operation count. Greville collocation is more convenient than nodal or one-point orthogonal collocation because exactly the correct number of collocation points is available.
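
    As a concrete illustration of Greville-point collocation, here is a minimal Python/SciPy sketch of the general technique (not the study's code; the Peclet number, knot vector, boundary treatment and problem normalization are assumptions made for the example):

      import numpy as np
      from scipy.interpolate import BSpline

      # Illustrative model problem:  Pe*u'(x) - u''(x) = 0 on [0, 1],
      # u(0) = 0, u(1) = 1, exact solution (exp(Pe*x) - 1)/(exp(Pe) - 1).
      Pe = 15.0
      k = 3                                      # cubic B-splines (assumed degree)
      t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 21), [1.0] * k]   # clamped knots
      n = len(t) - k - 1                         # number of basis functions

      # Greville abscissae: averages of k consecutive knots
      greville = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])

      def basis(x, deriv=0):
          """Matrix of all basis functions (or their derivatives) evaluated at x."""
          s = BSpline(t, np.eye(n), k)
          return (s.derivative(deriv) if deriv else s)(np.atleast_1d(x))

      A = np.zeros((n, n))
      rhs = np.zeros(n)
      A[0] = basis(greville[0])[0]               # boundary condition u(0) = 0
      A[-1] = basis(greville[-1])[0]             # boundary condition u(1) = 1
      rhs[-1] = 1.0
      xc = greville[1:-1]                        # interior Greville collocation points
      A[1:-1] = Pe * basis(xc, 1) - basis(xc, 2)

      u = BSpline(t, np.linalg.solve(A, rhs), k)
      x = np.linspace(0.0, 1.0, 201)
      print("max error:", np.abs(u(x) - np.expm1(Pe * x) / np.expm1(Pe)).max())

    Raising the degree to quartic or refining the knot vector follows the same pattern; only the knot array and degree change.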

  9. Cubic Spline Collocation Method for the Simulation of Turbulent Thermal Convection in Compressible Fluids

    SciTech Connect

    Castillo, Victor Manuel

    1999-01-01

    A collocation method using cubic splines is developed and applied to simulate steady and time-dependent, including turbulent, thermally convecting flows for two-dimensional compressible fluids. The state variables and the fluxes of the conserved quantities are approximated by cubic splines in both space directions. This method is shown to be numerically conservative and to have a local truncation error proportional to the fourth power of the grid spacing. A "dual-staggered" Cartesian grid, where energy and momentum are updated on one grid and mass density on the other, is used to discretize the flux form of the compressible Navier-Stokes equations. Each grid-line is staggered so that the fluxes, in each direction, are calculated at the grid midpoints. This numerical method is validated by simulating thermally convecting flows, from steady to turbulent, reproducing known results. Once validated, the method is used to investigate many aspects of thermal convection with high numerical accuracy. Simulations demonstrate that multiple steady solutions can coexist at the same Rayleigh number for compressible convection. As a system is driven further from equilibrium, a drop in the time-averaged dimensionless heat flux (and the dimensionless internal entropy production rate) occurs at the transition from laminar-periodic to chaotic flow. This observation is consistent with experiments of real convecting fluids. Near this transition, both harmonic and chaotic solutions may exist for the same Rayleigh number. The chaotic flow loses phase-space information at a greater rate, while the periodic flow transports heat (produces entropy) more effectively. A linear sum of the dimensionless forms of these rates connects the two flow morphologies over the entire range for which they coexist. For simulations of systems with higher Rayleigh numbers, a scaling relation exists relating the dimensionless heat flux to the two-sevenths power of the Rayleigh number, suggesting the

  10. Quartic B-spline collocation method applied to Korteweg de Vries equation

    NASA Astrophysics Data System (ADS)

    Zin, Shazalina Mat; Majid, Ahmad Abd; Ismail, Ahmad Izani Md

    2014-07-01

    The Korteweg de Vries (KdV) equation is known as a mathematical model of shallow water waves. The general form of this equation is u_t + ɛuu_x + μu_xxx = 0, where u(x,t) describes the elongation of the wave at displacement x and time t. In this work, the one-soliton solution of the KdV equation has been obtained numerically using a quartic B-spline collocation method for the displacement x and a finite difference approach for the time t. Two test problems have been identified and solved. Approximate solutions and errors for these two test problems were obtained for different values of t. In order to assess the accuracy of the method, the L2 and L∞ error norms have been calculated. The mass, energy and momentum of the KdV equation have also been calculated. The results obtained show that the present method can approximate the solution very well, but the L2 and L∞ error norms grow as time increases.
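
    For reference, the single-soliton solution and the three invariants usually monitored for this form of the KdV equation are standard textbook results (quoted here for context, not from the paper): with wave speed c and phase shift x_0,

      u(x,t) = \frac{3c}{\varepsilon}\,\operatorname{sech}^{2}\!\Big(\tfrac{1}{2}\sqrt{c/\mu}\,(x - ct - x_0)\Big),
      \qquad
      I_1 = \int u\,dx, \quad I_2 = \int u^{2}\,dx, \quad I_3 = \int\Big(u^{3} - \tfrac{3\mu}{\varepsilon}\,u_x^{2}\Big)\,dx,

    where I_1, I_2 and I_3 correspond to the mass, momentum and energy quantities mentioned in the abstract.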

  11. Progress on the Development of B-spline Collocation for the Solution of Differential Model Equations: A Novel Algorithm for Adaptive Knot Insertion

    SciTech Connect

    Johnson, Richard Wayne

    2003-05-01

    Collocation methods using spline basis functions to solve differential model equations have been in use for a few decades. However, the application of spline collocation to the solution of the nonlinear, coupled, partial differential equations (in primitive variables) that define the motion of fluids has only recently received much attention. The issues that affect the effectiveness and accuracy of B-spline collocation for solving differential equations include which points to use for collocation, what degree of B-spline to use and what level of continuity to maintain. The use of higher-degree B-spline curves having higher continuity at the knots, as opposed to more traditional approaches using orthogonal collocation, has recently been investigated, along with collocation at the Greville points, for linear (1D) and rectangular (2D) geometries. The development of automatic knot insertion techniques to provide sufficient accuracy for B-spline collocation has been under way. The present article reviews recent progress in the application of B-spline collocation to fluid motion equations as well as new work in developing a novel adaptive knot insertion algorithm for a 1D convection-diffusion model equation.

  12. Spectral collocation methods

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.

    1987-01-01

    This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.

  13. A Fourth-Order Spline Collocation Approach for the Solution of a Boundary Layer Problem

    NASA Astrophysics Data System (ADS)

    Sayfy; Khoury, S.

    2011-09-01

    A finite element approach, based on cubic B-spline collocation, is presented for the numerical solution of a class of singularly perturbed two-point boundary value problems that possess a boundary layer at one or both end points. Due to the existence of a layer, the problem is handled using an adaptive spline collocation approach constructed over a non-uniform Shishkin-like mesh, defined via a carefully selected generating function. To tackle nonlinearity, if it exists, an iterative scheme arising from Newton's method is employed. The rate of convergence is verified to be fourth order and is calculated using the double-mesh principle. The efficiency and applicability of the method are demonstrated by applying it to a number of linear and nonlinear examples. The numerical solutions are compared with both analytical and other existing numerical solutions in the literature. The numerical results confirm that this method is superior when contrasted with other accessible approaches and yields more accurate solutions.

  14. A B-Spline Method for Solving the Navier Stokes Equations

    SciTech Connect

    Johnson, Richard Wayne

    2005-01-01

    Collocation methods using piece-wise polynomials, including B-splines, have been developed to find approximate solutions to both ordinary and partial differential equations. Such methods are elegant in their simplicity and efficient in their application. The spline collocation method is typically more efficient than traditional Galerkin finite element methods, which are used to solve the equations of fluid dynamics. The collocation method avoids integration. Exact formulae are available to find derivatives on spline curves and surfaces. The primary objective of the present work is to determine the requirements for the successful application of B-spline collocation to solve the coupled, steady, 2D, incompressible Navier–Stokes and continuity equations for laminar flow. The successful application of B-spline collocation included the development of an ad hoc method dubbed the Boundary Residual method to deal with the presence of the pressure terms in the Navier–Stokes equations. Historically, other ad hoc methods have been developed to solve the incompressible Navier–Stokes equations, including the artificial compressibility, pressure correction and penalty methods. Convergence studies show that the ad hoc Boundary Residual method is convergent toward an exact (manufactured) solution for the 2D, steady, incompressible Navier–Stokes and continuity equations. C1 cubic and quartic B-spline schemes employing orthogonal collocation and C2 cubic and C3 quartic B-spline schemes with collocation at the Greville points are investigated. The C3 quartic Greville scheme is shown to be the most efficient scheme for a given accuracy, even though the C1 quartic orthogonal scheme is the most accurate for a given partition. Two solution approaches are employed, including a globally-convergent zero-finding Newton's method using an LU decomposition direct solver and the variable-metric minimization method using the BFGS update.

  15. Collocation and Galerkin Time-Stepping Methods

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2011-01-01

    We study the numerical solutions of ordinary differential equations by one-step methods where the solution at t_n is known and that at t_{n+1} is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_{n+1}), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
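
    As a concrete instance of the equivalence described above, the two-stage Radau IIA collocation method (collocation points c_1 = 1/3, c_2 = 1; order 3) has the standard Butcher tableau (a textbook tableau quoted for illustration, not reproduced from the report):

      \begin{array}{c|cc}
      1/3 & 5/12 & -1/12 \\
      1   & 3/4  & 1/4   \\ \hline
          & 3/4  & 1/4
      \end{array}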

  16. A multilevel stochastic collocation method for SPDEs

    SciTech Connect

    Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton

    2015-03-10

    We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity is used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages compared to standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.
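
    The multilevel idea can be summarized by the usual telescoping identity (a generic sketch, not necessarily the authors' exact estimator): writing u_0, u_1, ..., u_K for spatial approximations of increasing fidelity and \mathcal{I}_k for stochastic interpolants of increasing fidelity,

      u_K \;\approx\; \sum_{k=0}^{K} \mathcal{I}_{K-k}\big[u_k - u_{k-1}\big], \qquad u_{-1} := 0,

    so the small, expensive spatial corrections u_k - u_{k-1} are sampled with the coarse, cheap interpolants, which is the source of the cost reduction relative to single-level collocation.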

  17. Numerical Methods Using B-Splines

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Merriam, Marshal (Technical Monitor)

    1997-01-01

    The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.

  18. Optimization of dynamic systems using collocation methods

    NASA Astrophysics Data System (ADS)

    Holden, Michael Eric

    The time-based simulation is an important tool for the engineer. Often a time-domain simulation is the most expedient to construct, the most capable of handling complex modeling issues, or the most understandable with an engineer's physical intuition. Aeroelastic systems, for example, are often most easily solved with a nonlinear time-based approach to allow the use of high fidelity models. Simulations of automatic flight control systems can also be easier to model in the time domain, especially when nonlinearities are present. Collocation is an optimization method for systems that incorporate a time-domain simulation. Instead of integrating the equations of motion for each design iteration, the optimizer iteratively solves the simulation as it finds the optimal design. This forms a smooth, well-posed, sparse optimization problem, transforming the numerical integration's sequential calculation into a set of constraints that can be evaluated in any order, or even in parallel. The collocation method used in this thesis has been improved from existing techniques in several ways, in particular with a very simple and computationally inexpensive method of applying dynamic constraints, such as damping, that are more traditionally calculated with linear models in the frequency domain. This thesis applies the collocation method to a range of aircraft design problems, from minimizing the weight of a wing with a flutter constraint, to gain-scheduling the stability augmentation system of a small-scale flight control testbed, to aeroservoelastic design of a large aircraft concept. Collocation methods have not been applied to aeroelastic simulations in the past, although the combination of nonlinear aerodynamic analyses with structural dynamics and stability constraints is well-suited to collocation. The results prove the collocation method's worth as a tool for aircraft design, particularly when applied to the multidisciplinary numerical models used today.
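
    One standard way to see how the time integration becomes a set of constraints is the Hermite-Simpson transcription (quoted as a generic example; the thesis's particular scheme and its frequency-domain dynamic constraints may differ). With dynamics \dot{x} = f(x,u,t) and step h_k = t_{k+1} - t_k,

      x_c = \tfrac{1}{2}\,(x_k + x_{k+1}) + \tfrac{h_k}{8}\,(f_k - f_{k+1}),
      \qquad
      \zeta_k = x_{k+1} - x_k - \tfrac{h_k}{6}\,\big(f_k + 4 f_c + f_{k+1}\big) = 0,

    and the defects \zeta_k are handed to the optimizer as sparse equality constraints that can be evaluated in any order, or in parallel.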

  19. B-spline Method in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.

  20. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H^2_0(I) where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT transform will map discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT transform and apply the collocation method to solve linear and nonlinear PDE's.

  1. Stochastic Collocation Method for Three-dimensional Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Shi, L.; Zhang, D.

    2008-12-01

    The stochastic collocation method (SCM) has recently gained extensive attention in several disciplines. The numerical implementation of SCM only requires repetitive runs of an existing deterministic solver or code, as in Monte Carlo simulation, but it is generally much more efficient than the Monte Carlo method. In this paper, the stochastic collocation method is used to efficiently quantify the uncertainty of three-dimensional groundwater flow. We introduce the basic principles of common collocation methods, i.e., the tensor product collocation method (TPCM), Smolyak collocation method (SmCM), Stroud-2 collocation method (StCM), and probability collocation method (PCM). Their accuracy, computational cost, and limitations are discussed. Illustrative examples reveal that the seamless combination of collocation techniques and existing simulators makes it possible for the new framework to handle complex stochastic problems efficiently.

  2. A B-spline method used to calculate added resistance in waves

    NASA Astrophysics Data System (ADS)

    Zangeneh, Razieh; Ghiasi, Mahmood

    2017-03-01

    Accurate computation of added resistance in sea waves is of high interest due to its economic effects on ship design and operation. In this paper, a B-spline based method is developed for the computation of added resistance. Based on the potential flow assumption, the velocity potential is computed using Green's formula. The Kochin function is applied to compute added resistance using Maruo's far-field method. The body surface is described by a B-spline curve, and the potentials and the normal derivatives of the potentials are likewise described by B-spline basis functions and their derivatives. A collocation approach is applied for the numerical computation, and the integral equations are then evaluated by applying Gauss-Legendre quadrature. Computations are performed for a spheroid and different hull forms; the results are validated by comparison with experimental results. All results obtained with the present method show good agreement with the experimental results.

  3. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  4. Numerical method using cubic B-spline for a strongly coupled reaction-diffusion system.

    PubMed

    Abbas, Muhammad; Majid, Ahmad Abd; Md Ismail, Ahmad Izani; Rashid, Abdur

    2014-01-01

    In this paper, a numerical method for the solution of a strongly coupled reaction-diffusion system, with suitable initial and Neumann boundary conditions, using a cubic B-spline collocation scheme on a uniform grid is presented. The scheme is based on the usual finite difference scheme to discretize the time derivative, while the cubic B-spline is used as an interpolation function in the space dimension. The scheme is shown to be unconditionally stable using the von Neumann method. The accuracy of the proposed scheme is demonstrated by applying it to a test problem. The performance of this scheme is shown by computing the L∞ and L2 error norms for different time levels. The numerical results are found to be in good agreement with known exact solutions.
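
    A minimal single-equation sketch of the same ingredients (cubic B-spline collocation in space, a finite-difference step in time, Neumann boundary rows), written in Python with SciPy; the diffusion coefficient, grid, time step and initial condition are illustrative assumptions, and the paper's coupled system and exact time discretization are not reproduced here:

      import numpy as np
      from scipy.interpolate import BSpline

      # Sketch:  u_t = D u_xx on [0, 1],  u_x(0) = u_x(1) = 0,
      # cubic B-spline collocation in space, Crank-Nicolson step in time.
      D, dt, nsteps = 0.1, 1e-3, 200
      k, ncell = 3, 40
      t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, ncell + 1), [1.0] * k]
      n = len(t) - k - 1                          # number of B-spline coefficients

      def B(x, d=0):
          """Rows: evaluation points, columns: basis functions (d-th derivative)."""
          s = BSpline(t, np.eye(n), k)
          return (s.derivative(d) if d else s)(np.atleast_1d(x))

      nodes = np.linspace(0.0, 1.0, ncell + 1)    # collocate the PDE at grid nodes
      M, S = B(nodes), B(nodes, 2)                # u and u_xx collocation matrices
      bc = B([0.0, 1.0], 1)                       # Neumann rows: u_x = 0 at both ends

      lhs = np.vstack([M - 0.5 * dt * D * S, bc])
      rhs = np.vstack([M + 0.5 * dt * D * S, np.zeros_like(bc)])

      # Initial condition u(x, 0) = cos(pi x), interpolated at the Greville points
      greville = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])
      c = np.linalg.solve(B(greville), np.cos(np.pi * greville))

      for _ in range(nsteps):
          c = np.linalg.solve(lhs, rhs @ c)

      x = np.linspace(0.0, 1.0, 101)
      exact = np.exp(-D * np.pi ** 2 * nsteps * dt) * np.cos(np.pi * x)
      print("max error:", np.abs(BSpline(t, c, k)(x) - exact).max())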

  5. Comparison of Implicit Collocation Methods for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first is based on an explicit computation of the coefficients of polynomials and the second relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.

  6. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods here presented can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  7. Pseudospectral collocation methods for fourth order differential equations

    NASA Technical Reports Server (NTRS)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.

  8. Simplex-stochastic collocation method with improved scalability

    SciTech Connect

    Edeling, W.N.; Dwight, R.P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  9. Higher-order numerical solutions using cubic splines

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower-derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.

  10. The chain collocation method: A spectrally accurate calculus of forms

    NASA Astrophysics Data System (ADS)

    Rufat, Dzhelil; Mason, Gemma; Mullen, Patrick; Desbrun, Mathieu

    2014-01-01

    Preserving in the discrete realm the underlying geometric, topological, and algebraic structures at stake in partial differential equations has proven to be a fruitful guiding principle for numerical methods in a variety of fields such as elasticity, electromagnetism, or fluid mechanics. However, structure-preserving methods have traditionally used spaces of piecewise polynomial basis functions for differential forms. Yet, in many problems where solutions are smoothly varying in space, a spectral numerical treatment is called for. In an effort to provide structure-preserving numerical tools with spectral accuracy on logically rectangular grids over periodic or bounded domains, we present a spectral extension of the discrete exterior calculus (DEC), with resulting computational tools extending well-known collocation-based spectral methods. Its efficient implementation using fast Fourier transforms is provided as well.

  11. A B-spline Galerkin method for the Dirac equation

    NASA Astrophysics Data System (ADS)

    Froese Fischer, Charlotte; Zatsarinny, Oleg

    2009-06-01

    The B-spline Galerkin method is first investigated for the simple eigenvalue problem, y″ = -λ²y, which can also be written as a pair of first-order equations y′ = λz, z′ = -λy. Expanding both y(r) and z(r) in the B basis results in many spurious solutions such as those observed for the Dirac equation. However, when y(r) is expanded in the B basis and z(r) in the dB/dr basis, solutions of the well-behaved second-order differential equation are obtained. From this analysis, we propose a stable method, the (B, B′) basis, for the Dirac equation and evaluate its accuracy by comparing the computed and exact R-matrix for a wide range of nuclear charges Z and angular quantum numbers κ. When splines of the same order are used, many spurious solutions are found, whereas none are found for splines of different order. Excellent agreement is obtained for the R-matrix and energies for bound states for low values of Z. For high Z, accuracy requires the use of a grid with many points near the nucleus. We demonstrate the accuracy of the bound-state wavefunctions by comparing integrals arising in hyperfine interaction matrix elements with exact analytic expressions. We also show that the Thomas-Reiche-Kuhn sum rule is not a good measure of the quality of the solutions obtained by the B-spline Galerkin method, whereas the R-matrix is very sensitive to the appearance of pseudo-states.

  12. Multi-element probabilistic collocation method in high dimensions

    SciTech Connect

    Foo, Jasmine; Karniadakis, George Em

    2010-03-01

    We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method (MEPCM), and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order μ and the effective dimension ν, with ν ≤ μ for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.

  13. Splines and control theory

    NASA Technical Reports Server (NTRS)

    Zhang, Zhimin; Tomlinson, John; Martin, Clyde

    1994-01-01

    In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate properties of different spline approximations.

  14. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  15. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  16. A multidomain spectral collocation method for the Stokes problem

    NASA Technical Reports Server (NTRS)

    Landriani, G. Sacchi; Vandeven, H.

    1989-01-01

    A multidomain spectral collocation scheme is proposed for the approximation of the two-dimensional Stokes problem. It is shown that the discrete velocity vector field is exactly divergence-free and we prove error estimates both for the velocity and the pressure.

  17. An analytic reconstruction method for PET based on cubic splines

    NASA Astrophysics Data System (ADS)

    Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.

    2014-03-01

    PET imaging is an important nuclear medicine modality that measures in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D, reconstruction method called SRT, the Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom-made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction only within object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.

  18. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

    Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and on a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
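
    A small Python sketch of the sampling idea (inverse-transform sampling through a fitted quantile spline); here SciPy's monotone PCHIP interpolant stands in for the paper's B-spline and rational-spline fits, and the distribution, sample sizes and plotting positions are assumptions made for the example:

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      rng = np.random.default_rng(0)
      data = rng.gamma(shape=2.0, scale=1.5, size=500)   # stand-in "experimental" sample

      # Empirical quantile function: sorted data versus plotting positions p_i
      xs = np.sort(data)
      p = (np.arange(1, xs.size + 1) - 0.5) / xs.size
      q = PchipInterpolator(p, xs)          # monotone spline => valid inverse CDF

      # Inverse-transform sampling: evaluate the quantile spline at uniform deviates
      u = rng.uniform(p[0], p[-1], size=10000)
      samples = q(u)

      print("sample mean/var:", samples.mean(), samples.var())
      print("target mean/var:", 2.0 * 1.5, 2.0 * 1.5 ** 2)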

  1. Runge-Kutta collocation methods for differential-algebraic equations of indices 2 and 3

    NASA Astrophysics Data System (ADS)

    Skvortsov, L. M.

    2012-10-01

    Stiffly accurate Runge-Kutta collocation methods with explicit first stage are examined. The parameters of these methods are chosen so as to minimize the errors in the solutions to differential-algebraic equations of indices 2 and 3. This construction results in methods for solving such equations that are superior to the available Runge-Kutta methods.

  2. Fast Spectral Collocation Method for Surface Integral Equations of Potential Problems in a Spheroid

    PubMed Central

    Xu, Zhenli; Cai, Wei

    2009-01-01

    This paper proposes a new technique to speed up the computation of the matrix of spectral collocation discretizations of surface single and double layer operators over a spheroid. The layer densities are approximated by a spectral expansion of spherical harmonics and the spectral collocation method is then used to solve surface integral equations of potential problems in a spheroid. With the proposed technique, the computation cost of collocation matrix entries is reduced from 𝒪(M²N⁴) to 𝒪(MN⁴), where N² is the number of spherical harmonics (i.e., size of the matrix) and M is the number of one-dimensional integration quadrature points. Numerical results demonstrate the spectral accuracy of the method. PMID:20414359

  3. Shifted Jacobi spectral collocation method for solving two-sided fractional water wave models

    NASA Astrophysics Data System (ADS)

    Abdelkawy, M. A.; Alqahtani, Rubayyi T.

    2017-01-01

    This paper presents a spectral collocation technique to solve the two-sided fractional water wave models (TSF-WWMs). The shifted Jacobi-Gauss-Lobatto collocation (SJ-GL-C) and shifted Jacobi-Gauss-Radau collocation (SJ-GR-C) methods are developed to approximate the TSF-WWMs. The main idea of the new algorithm is to reduce the TSF-WWM to a system of algebraic equations. The applicability and accuracy of the present technique are examined through the numerical examples given in this paper. By means of these numerical examples, we show that the present technique is a simple and very accurate numerical scheme for solving TSF-WWMs.

  4. A spectral collocation method for a rotating Bose-Einstein condensation in optical lattices

    NASA Astrophysics Data System (ADS)

    Li, Z.-C.; Chen, S.-Y.; Chien, C.-S.; Chen, H.-S.

    2011-06-01

    We extend the study of spectral collocation methods (SCM) in Li et al. (2009) [1] for semilinear elliptic eigenvalue problems to that for a rotating Bose-Einstein condensation (BEC) and a rotating BEC in optical lattices. We apply the Lagrange interpolants using the Legendre-Gauss-Lobatto points to derive error bounds for the SCM. The optimal error bounds are derived for both H-norm and L-norm. Extensive numerical experiments on a rotating Bose-Einstein condensation and a rotating BEC in optical lattices are reported. Our numerical results show that the convergence rate of the SCM is exponential, and is independent of the collocation points we choose.

  5. Bicubic B-spline interpolation method for two-dimensional Laplace's equations

    NASA Astrophysics Data System (ADS)

    Abd Hamid, Nur Nadiah; Majid, Ahmad Abd.; Ismail, Ahmad Izani Md.

    2013-04-01

    Two-dimensional Laplace's equation is solved using the bicubic B-spline interpolation method. An arbitrary surface with some unknown coefficients is generated using the bicubic B-spline surface formula. This surface is presumed to be the solution of the equation. The values of the coefficients are calculated by a spline interpolation technique using the corresponding differential equations and boundary conditions. This method produces an approximate analytical solution for the equation. A numerical example is presented along with a comparison of the results with finite element and isogeometric methods.

  6. NOKIN1D: one-dimensional neutron kinetics based on a nodal collocation method

    NASA Astrophysics Data System (ADS)

    Verdú, G.; Ginestar, D.; Miró, R.; Jambrina, A.; Barrachina, T.; Soler, Amparo; Concejal, Alberto

    2014-06-01

    The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). The advantage compared with a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method developed for the neutron diffusion equation and applied to the kinetics resolution of TRAC-BF1 thermal-hydraulics is an adaptation of the traditional collocation methods for the discretization of partial differential equations, based on the development of the solution as a linear combination of analytical functions. A nodal collocation method based on an expansion of the neutron fluxes in Legendre polynomials in each cell has been chosen. The qualification is carried out by the analysis of the turbine trip transient from the NEA benchmark of the Peach Bottom NPP using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method.

  7. Bicubic B-spline interpolation method for two-dimensional heat equation

    NASA Astrophysics Data System (ADS)

    Hamid, Nur Nadiah Abd.; Majid, Ahmad Abd.; Ismail, Ahmad Izani Md.

    2015-10-01

    The two-dimensional heat equation was solved using the bicubic B-spline interpolation method. An arbitrary surface equation was generated by the bicubic B-spline formula. This equation was incorporated into the heat equation after discretizing the time using the finite difference method. An under-determined system of linear equations was obtained and solved to obtain the approximate analytical solution for the problem. This method was tested on one example.

  8. On numerical methods for Hamiltonian PDEs and a collocation method for the Vlasov-Maxwell equations

    SciTech Connect

    Holloway, J.P.

    1996-11-01

    Hamiltonian partial differential equations often have implicit conservation laws (constants of the motion) embedded within them. It is not, in general, possible to preserve these conservation laws simply by discretization in conservative form because there is frequently only one explicit conservation law. However, by using weighted residual methods and exploiting the Hamiltonian structure of the equations it is shown that at least some of the conservation laws are preserved in a method of lines (continuous in time). In particular, the Hamiltonian can always be exactly preserved as a constant of the motion. Other conservation laws, in particular linear and quadratic Casimirs and momenta, can sometimes be conserved too, depending on the details of the equations under consideration and the form of discretization employed. Collocation methods also offer automatic conservation of linear and quadratic Casimirs. Some standard discretization methods, when applied to Hamiltonian problems, are shown to be derived from a numerical approximation to the exact Poisson bracket of the system. A method for the Vlasov-Maxwell equations based on Legendre-Gauss-Lobatto collocation is presented as an example of these ideas. 22 refs.

  9. Improved collocation methods with application to six-degree-of-freedom trajectory optimization

    NASA Astrophysics Data System (ADS)

    Desai, Prasun N.

    2005-11-01

    An improved collocation method is developed for a class of problems that is intractable, or nearly so, by conventional collocation. These are problems in which there are two distinct timescales of the system states, that is, where a subset of the states have high-frequency variations while the remaining states vary comparatively slowly. In conventional collocation, the timescale for the discretization would be set by the need to capture the high-frequency dynamics. The problem then becomes very large and the solution of the corresponding nonlinear programming problem becomes geometrically more time consuming and difficult. A new two-timescale discretization method is developed for the solution of such problems using collocation. This improved collocation method allows the use of a larger time discretization for the low-frequency dynamics of the motion, and a second finer time discretization scheme for the higher-frequency dynamics of the motion. The accuracy of the new method is demonstrated first on an example problem, an optimal lunar ascent. The method is then applied to the type of challenging problem for which it is designed, the optimization of the approach to landing trajectory for a winged vehicle returning from space, the HL-20 lifting body vehicle. The converged solution shows a realistic landing profile and fully captures the higher-frequency rotational dynamics. A source code using the sparse optimizer SNOPT is developed for the use of this method which generates constraint equations, gradients, and the system Jacobian for problems of arbitrary size. This code constitutes a much-improved tool for aerospace vehicle design but has application to all two-timescale optimization problems.

  10. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

    We combine a high-order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.

  11. The rational Chebyshev of second kind collocation method for solving a class of astrophysics problems

    NASA Astrophysics Data System (ADS)

    Parand, K.; Khaleqi, S.

    2016-02-01

    The Lane-Emden equation has been used to model several phenomena in theoretical physics, mathematical physics and astrophysics, such as the theory of stellar structure. This study is an attempt to utilize the collocation method with the rational Chebyshev functions of the second kind (RCS) to solve the Lane-Emden equation over the semi-infinite interval [0, +∞). Comparison with well-known results and previous methods indicates that this method is efficient and applicable.
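
    For context, the Lane-Emden equation of index m referred to above is usually written as (a textbook statement, not quoted from the paper)

      y''(x) + \frac{2}{x}\,y'(x) + y^{m}(x) = 0, \qquad y(0) = 1, \; y'(0) = 0,

    posed on a semi-infinite domain, which is why basis functions defined on [0, +∞), such as the rational Chebyshev functions, are attractive for its collocation solution.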

  12. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.

  13. Numerical approximation of Lévy-Feller fractional diffusion equation via Chebyshev-Legendre collocation method

    NASA Astrophysics Data System (ADS)

    Sweilam, N. H.; Abou Hasan, M. M.

    2016-08-01

    This paper reports a new spectral algorithm for obtaining an approximate solution for the Lévy-Feller diffusion equation depending on Legendre polynomials and Chebyshev collocation points. The Lévy-Feller diffusion equation is obtained from the standard diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative. A new formula expressing explicitly any fractional-order derivatives, in the sense of Riesz-Feller operator, of Legendre polynomials of any degree in terms of Jacobi polynomials is proved. Moreover, the Chebyshev-Legendre collocation method together with the implicit Euler method are used to reduce these types of differential equations to a system of algebraic equations which can be solved numerically. Numerical results with comparisons are given to confirm the reliability of the proposed method for the Lévy-Feller diffusion equation.

  14. Regional Ionosphere Mapping with Kriging and B-spline Methods

    NASA Astrophysics Data System (ADS)

    Grynyshyna-Poliuga, O.; Stanislawska, I. M.

    2013-12-01

    This work demonstrates the concept and practical examples of regional ionosphere mapping, based on GPS observations from the EGNOS Ranging and Integrity Monitoring Stations (RIMS) network and permanent stations near them. Interpolation/prediction techniques, such as kriging (KR) and the cubic B-spline, which are suitable for handling multi-scale phenomena and unevenly distributed data, were used to create total electron content (TEC) maps. Their computational efficiency (especially the B-spline) and the ability to handle undersampled data (especially kriging) are particularly attractive. The data sets have been collected into seasonal bins representing the June and December solstices and the equinoxes (March, September). TEC maps have a spatial resolution of 2.5° in latitude and 2.5° in longitude and a 15-minute temporal resolution. The time series of the TEC maps can be used to derive average monthly maps describing major ionospheric trends as a function of time, season, and spatial location.

  15. A novel stochastic collocation method for uncertainty propagation in complex mechanical systems

    NASA Astrophysics Data System (ADS)

    Qi, WuChao; Tian, SuMei; Qiu, ZhiPing

    2015-02-01

    This paper presents a novel stochastic collocation method based on the equivalent weak form of multivariate function integral to quantify and manage uncertainties in complex mechanical systems. The proposed method, which combines the advantages of the response surface method and the traditional stochastic collocation method, only sets integral points at the guide lines of the response surface. The statistics, in an engineering problem with many uncertain parameters, are then transformed into a linear combination of simple functions' statistics. Furthermore, the issue of determining a simple method to solve the weight-factor sets is discussed in detail. The weight-factor sets of two commonly used probabilistic distribution types are given in table form. Studies of computational accuracy and effort show that the method achieves a good balance between accuracy and computational cost. It should be noted that the algorithm is non-gradient and non-intrusive, with strong portability. For the sake of validating the procedure, three numerical examples concerning a mathematical function with an analytical expression, the structural design of a straight wing, and the flutter analysis of a composite wing are used to show the effectiveness of the guided stochastic collocation method.

  16. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

    Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiplication. The effects of several factors on the performance of these methods, including the effect of different collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
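
    As an illustration of the derivative-matrix idea described above, the sketch below builds the standard Chebyshev differentiation matrix (in the style popularized by Trefethen) and differentiates a test function with a single matrix-vector multiply. The choice of Chebyshev points and the test function are illustrative assumptions, not taken from the report.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x (Trefethen-style)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)              # Chebyshev collocation points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))       # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                           # diagonal: negative row sums
    return D, x

# approximate derivative = one matrix-vector multiply
D, x = cheb(24)
f = np.exp(x) * np.sin(5.0 * x)
df_exact = np.exp(x) * (np.sin(5.0 * x) + 5.0 * np.cos(5.0 * x))
print("max error:", np.max(np.abs(D @ f - df_exact)))     # spectrally small
```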

  17. Adaptive-Anisotropic Wavelet Collocation Method on general curvilinear coordinate systems

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2017-03-01

    A new general framework for an Adaptive-Anisotropic Wavelet Collocation Method (A-AWCM) for the solution of partial differential equations is developed. This proposed framework addresses two major shortcomings of existing wavelet-based adaptive numerical methodologies, namely the reliance on a rectangular domain and the "curse of anisotropy", i.e. drastic over-resolution of sheet- and filament-like features arising from the inability of the wavelet refinement mechanism to distinguish highly correlated directional information in the solution. The A-AWCM addresses both of these challenges by incorporating coordinate transforms into the Adaptive Wavelet Collocation Method for the solution of PDEs. The resulting integrated framework leverages the advantages of both the curvilinear anisotropic meshes and wavelet-based adaptive refinement in a complementary fashion, resulting in a greatly reduced cost of resolution for anisotropic features. The proposed Adaptive-Anisotropic Wavelet Collocation Method retains the a priori error control of the solution and fully automated mesh refinement, while offering new abilities through the flexible mesh geometry, including body-fitting. The new A-AWCM is demonstrated for a variety of cases, including parabolic diffusion, acoustic scattering, and unsteady external flow.

  18. Feasibility of using Hybrid Wavelet Collocation - Brinkman Penalization Method for Shape and Topology Optimization

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Gazzola, Mattia; Koumoutsakos, Petros

    2009-11-01

    In this talk we discuss preliminary results for the use of a hybrid wavelet collocation - Brinkman penalization approach for shape and topology optimization of fluid flows. The adaptive wavelet collocation method tackles the problem of efficiently resolving a fluid flow on a dynamically adaptive computational grid in complex geometries (where grid resolution varies both in space and time), while Brinkman volume penalization allows easy variation of the flow geometry without using body-fitted meshes by simply changing the shape of the penalization region. The use of the Brinkman volume penalization approach allows a seamless transition from shape to topology optimization by combining it with a level set approach and increasing the size of the optimization space. The approach is demonstrated for shape optimization of a variety of fluid flows by optimizing a single cost function (the time-averaged drag coefficient) using the covariance matrix adaptation (CMA) evolutionary algorithm.

  19. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    SciTech Connect

    Avila, Gustavo; Carrington, Tucker

    2015-12-07

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  20. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Growing competition in car sales and purchases between companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One of the things that can be done to design such a plan is to forecast car sales for the next few periods, so that the inventory of cars to be sold is kept in proportion to the number of cars needed. To obtain correct forecasts, one of the methods that can be used is Adaptive Spline Threshold Autoregression (ASTAR). Therefore, this discussion focuses on the use of the ASTAR method in forecasting the volume of car sales at PT Srikandi Diamond Motors using time series data. In this research, the forecasts produced with the ASTAR method are found to be approximately correct.

  1. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting

    PubMed Central

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there have been some demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, which were generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications. PMID:28319131
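
    The final step of the strategy described above, computing B-spline coefficients by ordinary least squares once a knot vector is fixed, can be sketched with SciPy. The knot vector below is hand-chosen and locally refined near the sharp feature rather than produced by the paper's bisection-and-optimization procedure, and the test curve is an illustrative assumption.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# noisy samples of a curve with one sharp feature near x = 0.6
x = np.linspace(0.0, 1.0, 400)
y = np.sin(2.0 * np.pi * x) + 0.4 * np.exp(-200.0 * (x - 0.6) ** 2)
y_noisy = y + 0.01 * np.random.default_rng(0).standard_normal(x.size)

# hand-chosen interior knots, denser around the sharp feature
knots = np.concatenate([np.linspace(0.1, 0.5, 5),
                        np.linspace(0.55, 0.65, 7),
                        np.linspace(0.7, 0.9, 4)])

# with the knots fixed, the B-spline coefficients follow from ordinary least squares
spl = LSQUnivariateSpline(x, y_noisy, knots, k=3)
print("RMS error vs. noise-free curve:", np.sqrt(np.mean((spl(x) - y) ** 2)))
```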

  2. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there have been some demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, which were generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.

  3. Higher-order numerical solutions using cubic splines. [for partial differential equations

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    A cubic spline collocation procedure has recently been developed for the numerical solution of partial differential equations. In the present paper, this spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a non-uniform mesh and overall fourth-order accuracy for a uniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, will be presented for several model problems.

  4. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus, optimal adaptation is achieved. Error estimates and numerical examples will be used to compare the efficiency of the method with several other techniques.

  5. A Fourier collocation time domain method for numerically solving Maxwell's equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1991-01-01

    A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
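
    The core ingredient described above, spatial differentiation by a Fourier transform on a periodic grid, can be sketched as follows. The grid and test field are illustrative assumptions; this is not the full Maxwell solver.

```python
import numpy as np

# periodic grid on [0, 2*pi) and a smooth periodic field
N = 128
x = 2.0 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))

# differentiate in Fourier space, then transform back
k = np.fft.fftfreq(N, d=1.0 / N)                     # integer wavenumbers
du = np.fft.ifft(1j * k * np.fft.fft(u)).real

du_exact = np.cos(x) * np.exp(np.sin(x))
print("max error:", np.max(np.abs(du - du_exact)))   # near machine precision
```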

  6. A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation

    NASA Technical Reports Server (NTRS)

    Jones, Brandon A.; Anderson, Rodney L.

    2012-01-01

    Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.

  7. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    SciTech Connect

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-11-20

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L{sup 2} error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
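
    A single-element, one-dimensional special case of the probabilistic collocation idea can be sketched as follows: the model is evaluated at Gauss quadrature nodes and moments are recovered from the quadrature weights. The toy model and the node count are assumptions for illustration only; the ME-PCM additionally decomposes the parameter space into elements.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def model(xi):
    """Toy output of a 'PDE' depending on a standard normal parameter xi."""
    return np.exp(0.3 * xi) / (1.0 + xi ** 2)

# collocation at Gauss nodes of the probabilists' Hermite weight exp(-xi^2/2)
nodes, weights = hermegauss(12)
weights = weights / np.sqrt(2.0 * np.pi)          # normalize to the N(0,1) density
vals = model(nodes)
mean = np.sum(weights * vals)
var = np.sum(weights * (vals - mean) ** 2)
print("collocation:", mean, var)

# crude Monte Carlo check
xi = np.random.default_rng(0).standard_normal(200_000)
print("Monte Carlo:", model(xi).mean(), model(xi).var())
```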

  8. Some Optimal Runge-Kutta Collocation Methods for Stiff Problems and DAEs

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Hernández-Abreu, D.; Montijano, J. I.

    2008-09-01

    A new family of implicit Runge-Kutta methods was introduced at ICCAM 2008 (Gent) by the present authors. This family of methods is intended to solve numerically stiff problems and DAEs. The s-stage method (for s⩾3) has the following features: it is a collocation method depending on a real free parameter β, has classical convergence order 2s-3 and is strongly A-stable for β ranging in some nonempty open interval Is = (-γs,0). In addition, for β∈Is, all the collocation nodes fall in the interval [0,1]. Moreover, these methods also involve a similar computational cost as that of the corresponding counterpart in the Runge-Kutta Radau IIA family (the method having the same classical order) when solving for their stage values. However, our methods have the additional advantage of possessing a higher stage order than the respective Radau IIA counterparts. This circumstance is important when integrating stiff problems, in which case most numerical methods are affected by an order reduction. In this talk we discuss how to optimize the free parameter depending on the special features of the kind of stiff problems and DAEs to be solved. This point is highly important in order to make our methods competitive with those of the Radau IIA family.

  9. Legendre spectral-collocation method for solving some types of fractional optimal control problems.

    PubMed

    Sweilam, Nasser H; Al-Ajami, Tamer M

    2015-05-01

    In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented. In the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration, followed by the Rayleigh-Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques.

  10. Legendre spectral-collocation method for solving some types of fractional optimal control problems

    PubMed Central

    Sweilam, Nasser H.; Al-Ajami, Tamer M.

    2014-01-01

    In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented. In the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration, followed by the Rayleigh–Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937

  11. Numerical Algorithm Based on Haar-Sinc Collocation Method for Solving the Hyperbolic PDEs

    PubMed Central

    Javadi, H. H. S.; Navidi, H. R.

    2014-01-01

    The present study investigates the Haar-Sinc collocation method for the solution of the hyperbolic partial telegraph equations. The advantages of this technique are that not only is the convergence rate of Sinc approximation exponential but the computational speed also is high due to the use of the Haar operational matrices. This technique is used to convert the problem to the solution of linear algebraic equations via expanding the required approximation based on the elements of Sinc functions in space and Haar functions in time with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, we presented four examples through which our claim was confirmed. PMID:25485295

  12. Cubic Trigonometric B-spline Galerkin Methods for the Regularized Long Wave Equation

    NASA Astrophysics Data System (ADS)

    Irk, Dursun; Keskin, Pinar

    2016-10-01

    A numerical solution of the Regularized Long Wave (RLW) equation is obtained using Galerkin finite element method, based on Crank Nicolson method for the time integration and cubic trigonometric B-spline functions for the space integration. After two different linearization techniques are applied, the proposed algorithms are tested on the problems of propagation of a solitary wave and interaction of two solitary waves.

  13. Wave Propagation by Way of Exponential B-Spline Galerkin Method

    NASA Astrophysics Data System (ADS)

    Zorsahin Gorgulu, M.; Dag, I.; Irk, D.

    2016-10-01

    In this paper, the exponential B-spline Galerkin method is set up for getting the numerical solution of the Burgers’ equation. Two numerical examples related to shock wave propagation and travelling wave are studied to illustrate the accuracy and the efficiency of the method. Obtained results are compared with some early studies.

  14. Sinc-Chebyshev Collocation Method for a Class of Fractional Diffusion-Wave Equations

    PubMed Central

    Mao, Zhi; Xiao, Aiguo; Yu, Zuguo; Shi, Long

    2014-01-01

    This paper is devoted to investigating the numerical solution for a class of fractional diffusion-wave equations with a variable coefficient where the fractional derivatives are described in the Caputo sense. The approach is based on the collocation technique where the shifted Chebyshev polynomials in time and the sinc functions in space are utilized, respectively. The problem is reduced to the solution of a system of linear algebraic equations. Through the numerical example, the procedure is tested and the efficiency of the proposed method is confirmed. PMID:24977177

  15. Spline energy method and its application in the structural analysis of antenna reflector

    NASA Astrophysics Data System (ADS)

    Wang, Deman; Wu, Qinbao

    A method is proposed for analyzing combined structures consisting of shell and beam (rib) members. The cubic B-spline function is used to interpolate the displacements, and the total potential energy of the shell and the ribs is formulated. The simultaneous equilibrium equations can then be obtained according to the principle of minimum potential energy.

  16. Mixed meshless local Petrov-Galerkin collocation method for modeling of material discontinuity

    NASA Astrophysics Data System (ADS)

    Jalušić, Boris; Sorić, Jurica; Jarak, Tomislav

    2017-01-01

    A mixed meshless local Petrov-Galerkin (MLPG) collocation method is proposed for solving the two-dimensional boundary value problem of heterogeneous structures. The heterogeneous structures are defined by partitioning the total material domain into subdomains with different linear-elastic isotropic properties which define homogeneous materials. The discretization and approximation of unknown field variables is done for each homogeneous material independently, therein the interface of the homogeneous materials is discretized with overlapping nodes. For the approximation, the moving least square method with the imposed interpolation condition is utilized. The solution for the entire heterogeneous structure is obtained by enforcing displacement continuity and traction reciprocity conditions at the nodes representing the interface boundary. The accuracy and numerical efficiency of the proposed mixed MLPG collocation method is demonstrated by numerical examples. The obtained results are compared with a standard fully displacement (primal) meshless approach as well as with available analytical and numerical solutions. Excellent agreement of the solutions is exhibited and a more robust, superior and stable modeling of material discontinuity is achieved using the mixed method.

  17. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute F-PSDM of order μ ∈ (0 , 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0 , 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.

  18. Collocations in Language Learning: Corpus-Based Automatic Compilation of Collocations and Bilingual Collocation Concordancer.

    ERIC Educational Resources Information Center

    Kita, Kenji; Ogata, Hiroaki

    1997-01-01

    Presents an efficient method for extracting collocations from corpora, which uses the cost criteria measure and a tree-based data structure. Proposes a bilingual collocation concordancer, a tool that provides language learners with collocation correspondences between a native and foreign language. (Eight references) (Author/CK)

  19. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    PubMed

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.

  20. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    PubMed Central

    Motsa, S. S.; Magagula, V. M.; Sibanda, P.

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252

  1. Interpolation of Superconducting Gravity Observations Using Least-Squares Collocation Method

    NASA Astrophysics Data System (ADS)

    Habel, Branislav; Janak, Juraj

    2014-05-01

    Pre-processing of the gravity data measured by a superconducting gravimeter involves removing spikes, offsets and gaps. Their presence in the observations can limit the data analysis and degrade the quality of the obtained results. Short data gaps are filled with a theoretical signal in order to obtain continuous records of gravity. This requires an accurate tidal model and, possibly, atmospheric pressure at the observed site. The poster presents the design of an algorithm for the interpolation of gravity observations with a sampling rate of 1 min. The novel approach is based on least-squares collocation, which combines adjustment of trend parameters, filtering of noise and prediction. It allows the interpolation of missing data up to a few hours without the necessity of any other information. Appropriate parameters for the covariance function are found using Bayes' theorem in a modified optimization process. The accuracy of the method is improved by the rejection of outliers before interpolation. For filling longer gaps, the collocation model is combined with the theoretical tidal signal for the rigid Earth. Finally, the proposed method was tested on the superconducting gravity observations at several selected stations of the Global Geodynamics Project. Testing demonstrates its reliability and offers results comparable with the standard approach implemented in the ETERNA software package, without the necessity of an accurate tidal model.
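
    A minimal sketch of least-squares collocation used as a gap-filling predictor is given below, assuming a simple Gaussian covariance model with hand-picked parameters and a stand-in signal; the actual algorithm tunes its covariance parameters via Bayes' theorem and, for longer gaps, combines the prediction with a tidal model.

```python
import numpy as np

def lsc_predict(t_obs, y_obs, t_new, variance=1.0, corr_len=180.0, noise=0.05):
    """Least-squares collocation prediction with a Gaussian covariance model."""
    cov = lambda a, b: variance * np.exp(-((a[:, None] - b[None, :]) / corr_len) ** 2)
    C_oo = cov(t_obs, t_obs) + noise ** 2 * np.eye(t_obs.size)   # signal + noise
    C_no = cov(t_new, t_obs)                                     # cross-covariance
    return C_no @ np.linalg.solve(C_oo, y_obs)

# one day of 1-min residual gravity (stand-in signal) with a 2-hour gap
t = np.arange(0.0, 1440.0)
signal = np.sin(2.0 * np.pi * t / 720.0)
keep = (t < 600.0) | (t > 720.0)
filled = lsc_predict(t[keep], signal[keep], t[~keep])
print("max gap error:", np.max(np.abs(filled - signal[~keep])))
```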

  2. MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D

    2013-01-01

    Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.

  3. A Haar wavelet collocation method for coupled nonlinear Schrödinger-KdV equations

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer; Esen, Alaattin; Bulut, Fatih

    2016-04-01

    In this paper, to obtain accurate numerical solutions of coupled nonlinear Schrödinger-Korteweg-de Vries (KdV) equations, a Haar wavelet collocation method is proposed. An explicit time stepping scheme is used for the discretization of time derivatives; the nonlinear terms that appear in the equations are linearized by a linearization technique and space derivatives are discretized by Haar wavelets. In order to test the accuracy and reliability of the proposed method, the L2 and L∞ error norms and conserved quantities are used. The obtained results are also compared with previous ones obtained by the finite element method, the Crank-Nicolson method and radial basis function meshless methods. An error analysis of Haar wavelets is also given.

  4. A Chebyshev Collocation Method for Moving Boundaries, Heat Transfer, and Convection During Directional Solidification

    NASA Technical Reports Server (NTRS)

    Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.

    1994-01-01

    Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semiconductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method and a Picard-type iterative scheme.

  5. A fast collocation method for a variable-coefficient nonlocal diffusion model

    NASA Astrophysics Data System (ADS)

    Wang, Che; Wang, Hong

    2017-02-01

    We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
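
    A common building block for O(N log N) matrix-vector products of this kind is the FFT-based circulant embedding of a Toeplitz (constant-coefficient) kernel, sketched below with an illustrative kernel; handling the variable coefficients, as the paper does, requires machinery beyond this sketch.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """O(N log N) product of a Toeplitz matrix (first column c, first row r) with x,
    via embedding in a 2N-by-2N circulant matrix and the FFT."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])        # first column of the circulant
    pad = np.concatenate([x, np.zeros(n)])
    return np.fft.ifft(np.fft.fft(col) * np.fft.fft(pad))[:n].real

# check against the dense product for a kernel-type matrix A_ij = 1 / (1 + |i - j|)
n = 2000
k = 1.0 / (1.0 + np.arange(n))
x = np.random.default_rng(0).standard_normal(n)
A = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
print("max difference:", np.max(np.abs(toeplitz_matvec(k, k, x) - A @ x)))
```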

  6. A collocation method for high-frequency scattering by convex polygons

    NASA Astrophysics Data System (ADS)

    Arden, S.; Chandler-Wilde, S. N.; Langdon, S.

    2007-07-01

    We consider the problem of scattering of a time-harmonic acoustic incident plane wave by a sound soft convex polygon. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the computational cost required to achieve a prescribed level of accuracy grows linearly with respect to the frequency of the incident wave. Recently Chandler-Wilde and Langdon proposed a novel Galerkin boundary element method for this problem for which, by incorporating the products of plane wave basis functions with piecewise polynomials supported on a graded mesh into the approximation space, they were able to demonstrate that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency. Here we propose a related collocation method, using the same approximation space, for which we demonstrate via numerical experiments a convergence rate identical to that achieved with the Galerkin scheme, but with a substantially reduced computational cost.

  7. An iterative finite-element collocation method for parabolic problems using domain decomposition

    SciTech Connect

    Curran, M.C.

    1992-01-01

    Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.

  8. An iterative finite-element collocation method for parabolic problems using domain decomposition

    SciTech Connect

    Curran, M.C.

    1992-11-01

    Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.

  9. Numerical solution of the Rosenau-KdV-RLW equation by using RBFs collocation method

    NASA Astrophysics Data System (ADS)

    Korkmaz, Bahar; Dereli, Yilmaz

    2016-04-01

    In this study, a meshfree method based on collocation with radial basis functions (RBFs) is proposed to solve numerically an initial-boundary value problem of the Rosenau-KdV-regularized long-wave (RLW) equation. Numerical values of invariants of the motion are computed to examine the fundamental conservative properties of the equation. Computational experiments for the simulation of solitary waves examine the accuracy of the scheme in terms of the error norms L2 and L∞. Linear stability analysis is carried out to determine whether the present method is stable or unstable. The scheme is found to be unconditionally stable and second-order convergent. The obtained results are compared with the analytical solution and some other earlier works in the literature. The presented results indicate the accuracy and efficiency of the method.

  10. Numerical analysis of heat conduction problems on irregular domains by means of a collocation meshless method

    NASA Astrophysics Data System (ADS)

    Zamolo, R.; Nobile, E.

    2017-01-01

    A Least Squares Collocation Meshless Method based on Radial Basis Function (RBF) interpolation is used to solve steady-state heat conduction problems on 2D polygonal domains in the MATLAB® environment. The point distribution process required by the numerical method can be fully automated, taking account of the boundary conditions and geometry of the problem to get a higher point distribution density where needed. Several convergence tests have been carried out comparing the numerical results to the corresponding analytical solutions to outline the properties of this numerical approach, considering various combinations of parameters. These tests showed favorable convergence properties in the simple cases considered: along with the geometry flexibility, these features confirm that this peculiar numerical approach can be an effective tool in the numerical simulation of heat conduction problems.
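
    The following sketch illustrates RBF collocation for a steady conduction (Laplace) problem on the unit square with a multiquadric basis. It uses a square, pointwise-collocated (Kansa-type) system rather than the paper's least-squares formulation, and the grid and shape parameter are illustrative assumptions.

```python
import numpy as np

# multiquadric RBF and its 2-D Laplacian (eps is an illustrative shape parameter)
eps = 3.0
phi = lambda r: np.sqrt(1.0 + (eps * r) ** 2)
lap_phi = lambda r: eps ** 2 * ((eps * r) ** 2 + 2.0) / (1.0 + (eps * r) ** 2) ** 1.5

# collocation points / RBF centres on a uniform grid over the unit square
n = 12
g = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
on_bnd = (pts[:, 0] == 0.0) | (pts[:, 0] == 1.0) | (pts[:, 1] == 0.0) | (pts[:, 1] == 1.0)

# rows enforce either the Dirichlet condition or the PDE (Laplace u = 0)
r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
A = np.where(on_bnd[:, None], phi(r), lap_phi(r))
u_exact = pts[:, 0] ** 2 - pts[:, 1] ** 2            # harmonic test solution
rhs = np.where(on_bnd, u_exact, 0.0)
coef = np.linalg.solve(A, rhs)
print("max nodal error:", np.max(np.abs(phi(r) @ coef - u_exact)))
```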

  11. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

    A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.

  12. Collocation methods for index 1 DAEs with a singularity of the first kind

    NASA Astrophysics Data System (ADS)

    Koch, Othmar; Maerz, Roswitha; Praetorius, Dirk; Weinmueller, Ewa

    2010-01-01

    We study the convergence behavior of collocation schemes applied to approximate solutions of BVPs in linear index 1 DAEs which exhibit a critical point at the left boundary. Such a critical point of the DAE causes a singularity within the inherent ODE system. We focus our attention on the case when the inherent ODE system is singular with a singularity of the first kind, apply polynomial collocation to the original DAE system and consider different choices of the collocation points such as equidistant, Gaussian or Radau points. We show that for a well-posed boundary value problem for DAEs having a sufficiently smooth solution, the global error of the collocation scheme converges with the order O(h^s) , where s is the number of collocation points. Superconvergence cannot be expected in general due to the singularity, not even for the differential components of the solution. The theoretical results are illustrated by numerical experiments.

  13. A study of the radiative transfer equation using a spherical harmonics-nodal collocation method

    NASA Astrophysics Data System (ADS)

    Capilla, M. T.; Talavera, C. F.; Ginestar, D.; Verdú, G.

    2017-03-01

    Optical tomography has found many medical applications that need to know how the photons interact with the different tissues. The majority of photon transport simulations are done using the diffusion approximation, but this approximation has a limited validity when the optical properties of the different tissues present large gradients, when structures near the photon source are studied or when anisotropic scattering has to be taken into account. As an alternative to the diffusion model, the PL equations for the radiative transfer problem are studied. These equations are discretized in a rectangular mesh using a nodal collocation method. The performance of this model is studied by solving different 1D and 2D benchmark problems of light propagation in tissue having media with isotropic and anisotropic scattering.

  14. Spline-based semiparametric projected generalized estimating equation method for panel count data.

    PubMed

    Hua, Lei; Zhang, Ying

    2012-07-01

    We propose to analyze panel count data using a spline-based semiparametric projected generalized estimating equation (GEE) method with the proportional mean model E{N(t) | Z} = Λ_0(t) exp(β_0^T Z). The natural logarithm of the baseline mean function, log Λ_0(t), is approximated by a monotone cubic B-spline function. The estimates of regression parameters and spline coefficients are obtained by projecting the GEE estimates into the feasible domain using a weighted isotonic regression (IR). The proposed method avoids assuming any parametric structure of the baseline mean function or any stochastic model for the underlying counting process. Selection of the working covariance matrix that accounts for overdispersion improves the estimation efficiency and leads to less biased variance estimations. Simulation studies are conducted using different working covariance matrices in the GEE to investigate finite sample performance of the proposed method, to compare the estimation efficiency, and to explore the performance of different variance estimates in presence of overdispersion. Finally, the proposed method is applied to a real data set from a bladder tumor clinical trial.

  15. A higher order non-polynomial spline method for fractional sub-diffusion problems

    NASA Astrophysics Data System (ADS)

    Li, Xuhao; Wong, Patricia J. Y.

    2017-01-01

    In this paper we shall develop a numerical scheme for a fractional sub-diffusion problem using a parametric quintic spline. The solvability, convergence and stability of the scheme will be established, and it is shown that the convergence order is higher than that of some earlier work. We also present some numerical examples to illustrate the efficiency of the numerical scheme as well as to compare with other methods.

  16. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.

  17. [Baseline correction method for Raman spectroscopy based on B-spline fitting].

    PubMed

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wu, Jing-lin; Liang, Jun; Zuo, Yong

    2014-08-01

    Baseline drift is a widespread phenomenon in modern spectroscopic instrumentation that has a very negative impact on the feature extraction of spectral signals; baseline correction is an important means of solving this problem and an important part of Raman signal preprocessing. The general principle of baseline drift elimination is to fit the baseline with a fitting method. The traditional approach is polynomial fitting, but this method is prone to over-fitting and under-fitting, and the fitting order is difficult to determine. In this paper, the traditional method is improved: a B-spline fitting method is used to approach the baseline of the Raman signal through repeated iteration. The advantages of the B-spline, namely its low order and smoothness, help the method overcome the shortcomings of the polynomial approach. In the experiments, the Raman signals of malachite green and rhodamine B were measured, and both the proposed and the traditional method were applied to perform baseline correction. Experimental results showed that the proposed method can eliminate Raman baseline drift effectively without over- or under-fitting, and that the same order can be used in positions where either large or small baseline drift occurs. Therefore, the proposed method provides more accurate and reliable information for the further analysis of spectral data.
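
    A minimal sketch of the iterative spline-baseline idea is shown below: a smoothing B-spline is fitted repeatedly while samples above the current fit are clipped down to it, so the Raman peaks stop pulling the baseline upward. The synthetic spectrum, smoothing value and iteration count are assumptions, and this is not the authors' exact algorithm.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_baseline(x, y, n_iter=20, s=1e4):
    """Iteratively approach the baseline: samples above the current smoothing-spline
    fit are clipped down to it, so peaks stop influencing the next fit."""
    work = y.copy()
    for _ in range(n_iter):
        base = UnivariateSpline(x, work, k=3, s=s)(x)
        work = np.minimum(work, base)
    return base

# synthetic Raman-like spectrum: two peaks on a slowly drifting baseline
x = np.linspace(0.0, 2000.0, 1000)
baseline = 2e-4 * x + 50.0 * np.sin(x / 800.0)
peaks = 80.0 * np.exp(-0.5 * ((x - 600.0) / 8.0) ** 2) \
      + 40.0 * np.exp(-0.5 * ((x - 1400.0) / 12.0) ** 2)
y = baseline + peaks + np.random.default_rng(1).normal(0.0, 0.5, x.size)

corrected = y - spline_baseline(x, y)
```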

  18. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing

    NASA Technical Reports Server (NTRS)

    Mier Muth, A. M.; Willsky, A. S.

    1978-01-01

    In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.

  19. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  20. An Efficient Data-worth Analysis Framework via Probabilistic Collocation Method Based Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Xue, L.; Dai, C.; Zhang, D.; Guadagnini, A.

    2015-12-01

    It is critical to predict the contaminant plume in an aquifer under uncertainty, which can help assess environmental risk and design rational management strategies. An accurate prediction of the contaminant plume requires the collection of data to help characterize the system. Because of limited financial resources, one should estimate the expected value of the data collected from each optional monitoring scheme before it is carried out. Data-worth analysis is believed to be an effective approach to identifying the value of data in such problems: it quantifies the uncertainty reduction assuming that the plausible data have been collected. However, it is difficult to apply data-worth analysis to a dynamic simulation of a contaminant transport model owing to the large number of inverse-modeling runs it requires. In this study, a novel, efficient data-worth analysis framework is proposed by developing the Probabilistic Collocation Method based Ensemble Kalman Filter (PCKF). The PCKF constructs a polynomial chaos expansion surrogate model to replace the original complex numerical model. Consequently, the inverse modeling can be performed on the proxy rather than on the original model. An illustrative example, considering the dynamic change of the contaminant concentration, is employed to demonstrate the proposed approach. The results reveal that schemes with different sampling frequencies, monitoring network locations, and prior data content have a significant impact on the uncertainty reduction of the estimated contaminant plume. The proposed framework is validated to provide a reasonable estimate of the value of data from the various schemes.
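
    The surrogate-building step can be sketched in one random dimension as follows: the forward model is evaluated at collocation points and a polynomial chaos expansion in probabilists' Hermite polynomials is fitted by least squares. The toy forward model and expansion order are illustrative assumptions, and the ensemble Kalman filter step is not shown.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermevander

def forward_model(xi):
    """Stand-in for the expensive transport simulation, driven by one N(0,1) parameter."""
    return np.sin(1.5 * xi) + 0.1 * xi ** 2

# collocation points = Gauss nodes of the probabilists' Hermite weight
order = 6
nodes, _ = hermegauss(order + 1)
V = hermevander(nodes, order)                        # Hermite basis at the nodes
coef = np.linalg.lstsq(V, forward_model(nodes), rcond=None)[0]

# the surrogate is cheap to evaluate wherever the filter needs it
xi_test = np.linspace(-2.0, 2.0, 5)
surrogate = hermevander(xi_test, order) @ coef
print("surrogate error at test points:", surrogate - forward_model(xi_test))
```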

  1. A self-consistent estimate for linear viscoelastic polycrystals with internal variables inferred from the collocation method

    NASA Astrophysics Data System (ADS)

    Vu, Q. H.; Brenner, R.; Castelnau, O.; Moulinec, H.; Suquet, P.

    2012-03-01

    The correspondence principle is customarily used with the Laplace-Carson transform technique to tackle the homogenization of linear viscoelastic heterogeneous media. The main drawback of this method lies in the fact that the whole stress and strain histories have to be considered to compute the mechanical response of the material during a given macroscopic loading. Following a remark of Mandel (1966 Mécanique des Milieux Continus (Paris, France: Gauthier-Villars)), Ricaud and Masson (2009 Int. J. Solids Struct. 46 1599-1606) have shown the equivalence between the collocation method used to invert Laplace-Carson transforms and an internal variables formulation. In this paper, this new method is developed for the case of polycrystalline materials with general anisotropic properties for local and macroscopic behavior. Applications are provided for the case of constitutive relations accounting for glide of dislocations on particular slip systems. It is shown that the method yields accurate results that perfectly match the standard collocation method and reference full-field results obtained with a FFT numerical scheme. The formulation is then extended to the case of time- and strain-dependent viscous properties, leading to the incremental collocation method (ICM) that can be solved efficiently by a step-by-step procedure. Specifically, the introduction of isotropic and kinematic hardening at the slip system scale is considered.

  2. B-spline soliton solution of the fifth order KdV type equations

    NASA Astrophysics Data System (ADS)

    Zahra, W. K.; Ouf, W. A.; El-Azab, M. S.

    2013-10-01

    In this paper, we develop a numerical solution based on sextic B-spline collocation method for solving the generalized fifth-order nonlinear evolution equations. Applying Von-Neumann stability analysis, the proposed technique is shown to be unconditionally stable. The accuracy of the presented method is demonstrated by a test problem. The numerical results are found to be in good agreement with the exact solution.

  3. A Spectral Legendre-Gauss-Lobatto Collocation Method for a Space-Fractional Advection Diffusion Equations with Variable Coefficients

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.; Baleanu, D.

    2013-10-01

    An efficient Legendre-Gauss-Lobatto collocation (L-GL-C) method is applied to solve the space-fractional advection diffusion equation with nonhomogeneous initial-boundary conditions. The Legendre-Gauss-Lobatto points are used as collocation nodes for spatial fractional derivatives as well as the Caputo fractional derivative. This approach reduces the problem to a system of ordinary differential equations in time, which can be solved by any standard numerical technique. Comparison of the proposed numerical solutions with the exact solutions reveals that the obtained solutions are highly accurate. The results show that the proposed method has high accuracy and is efficient for solving the space-fractional advection diffusion equation.

  4. Extended cubic B-spline method for solving a linear system of second-order boundary value problems.

    PubMed

    Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md

    2016-01-01

    A method based on the extended cubic B-spline is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters is carried out and the truncation error is calculated. The method is tested on three examples, which suggest that it produces results comparable to or more accurate than the cubic B-spline and some other methods.

  5. Quantification of the spatial strain distribution of scoliosis using a thin-plate spline method.

    PubMed

    Kiriyama, Yoshimori; Watanabe, Kota; Matsumoto, Morio; Toyama, Yoshiaki; Nagura, Takeo

    2014-01-03

    The objective of this study was to quantify the three-dimensional spatial strain distribution of a scoliotic spine by nonhomogeneous transformation without using a statistically averaged reference spine. The shape of the scoliotic spine was determined from computed tomography images from a female patient with adolescent idiopathic scoliosis. The shape of the scoliotic spine was enclosed in a rectangular grid, and symmetrized using a thin-plate spline method according to the node positions of the grid. The node positions of the grid were determined by numerical optimization to satisfy symmetry. The obtained symmetric spinal shape was enclosed within a new rectangular grid and distorted back to the original scoliotic shape using a thin-plate spline method. The distorted grid was compared to the rectangular grid that surrounded the symmetrical spine. Cobb's angle was reduced from 35° in the scoliotic spine to 7° in the symmetrized spine, and the scoliotic shape was almost fully symmetrized. The scoliotic spine showed a complex Green-Lagrange strain distribution in three dimensions. The vertical and transverse compressive/tensile strains in the frontal plane were consistent with the major scoliotic deformation. The compressive, tensile and shear strains on the convex side of the apical vertebra were opposite to those on the concave side. These results indicate that the proposed method can be used to quantify the three-dimensional spatial strain distribution of a scoliotic spine, and may be useful in quantifying the deformity of scoliosis.
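
    The following sketch shows the kind of landmark-driven thin-plate spline warp the study relies on, with made-up 2D control points and scipy's RBFInterpolator (assumed available in scipy 1.7 or later) standing in for the authors' implementation.

      # Sketch: a 2D thin-plate spline warp driven by control-point pairs, the kind of
      # nonhomogeneous transformation used to map the grid of the scoliotic shape onto a
      # symmetric one. Landmarks are made up; scipy >= 1.7 is assumed for RBFInterpolator.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Source grid nodes (one node displaced, mimicking the distorted grid) and targets.
      src = np.array([[0., 0.], [1., 0.], [2., 0.],
                      [0., 1.], [1.2, 1.1], [2., 1.],
                      [0., 2.], [1., 2.], [2., 2.]])
      dst = np.array([[0., 0.], [1., 0.], [2., 0.],
                      [0., 1.], [1.0, 1.0], [2., 1.],
                      [0., 2.], [1., 2.], [2., 2.]])

      # One thin-plate spline per output coordinate; smoothing=0 gives exact interpolation.
      warp = RBFInterpolator(src, dst, kernel='thin_plate_spline', smoothing=0.0)

      # Apply the warp to arbitrary points, e.g. a dense grid enclosing the shape; the
      # displacement field is what strain measures would subsequently be computed from.
      xx, yy = np.meshgrid(np.linspace(0, 2, 5), np.linspace(0, 2, 5))
      pts = np.column_stack([xx.ravel(), yy.ravel()])
      displacement = warp(pts) - pts
      print(np.abs(displacement).max())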

  6. Thermal analysis of a fully wet porous radial fin with natural convection and radiation using the spectral collocation method

    NASA Astrophysics Data System (ADS)

    Khani, F.; Darvishi, M. T.; Gorla, R. S.. R.; Gireesha, B. J.

    2016-05-01

    Heat transfer with natural convection and radiation effects on a fully wet porous radial fin is considered. The radial velocity of the buoyancy-driven flow at any radial location is obtained by applying Darcy's law. The resulting non-dimensionalized ordinary differential equation, which involves three highly nonlinear terms, is solved numerically with the spectral collocation method. In this approach, the dimensionless temperature is approximated by Chebyshev polynomials and discretized at the Chebyshev-Gauss-Lobatto collocation points. A particular algorithm is used to reduce the nonlinearity of the energy conservation equation. The present analysis characterizes the effect of ambient temperature in different ways and thereby gives a clearer picture of its influence on the thermal performance of the fin. Profiles of the temperature distribution and the dimensionless base heat flow are obtained for the different parameters that influence the heat transfer rate.

  7. Shape Optimization for Drag Reduction in Linked Bodies using Evolution Strategies and the Hybrid Wavelet Collocation - Brinkman Penalization Method

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Gazzola, Mattia; Koumoutsakos, Petros

    2010-11-01

    In this talk we discuss preliminary results on the use of a hybrid wavelet collocation - Brinkman penalization approach for shape optimization for drag reduction in flows past linked bodies. The optimization relies on the Adaptive Wavelet Collocation Method together with the Brinkman penalization technique and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The adaptive wavelet collocation method efficiently resolves the fluid flow on a dynamically adaptive computational grid, while a level set approach describes the body shape and the Brinkman volume penalization allows the flow geometry to be varied easily without requiring body-fitted meshes. We perform 2D simulations of linked bodies in order to investigate whether flat geometries are optimal for drag reduction. To accelerate the costly cost-function evaluations, we exploit the inherent parallelism of evolution strategies and extend the CMA-ES implementation to a multi-host framework. This framework allows the cost-function evaluations to be distributed easily across several parallel architectures and is not limited to a single computing facility. The resulting optimal shapes are geometrically consistent with those obtained in the pioneering wind tunnel experiments on drag reduction using evolution strategies by Ingo Rechenberg.

  8. Numerical modeling of acoustic timescale detonation initiation using the Adaptive Wavelet-Collocation Method

    NASA Astrophysics Data System (ADS)

    Regele, Jonathan D.

    Multi-dimensional numerical modeling of detonation initiation is the primary goal of this thesis. The particular scenario under examination is initiating a detonation wave through acoustic timescale thermal power deposition. Physically this would correspond to igniting a reactive mixture with a laser pulse as opposed to a typical electric spark. Numerous spatial and temporal scales are involved, which makes these problems computationally challenging to solve. In order to model these problems, a shock capturing scheme is developed that utilizes the computational efficiency of the Adaptive Wavelet-Collocation Method (AWCM) to properly handle the multiple scales involved. With this technique, previous one-dimensional problems with unphysically small activation energies are revisited and simulated with the AWCM. The results demonstrate a qualitative agreement with previous work that used a uniform grid MacCormack scheme. Both sets of data show the basic sequence of events that are needed in order for a DDT process to occur. Instead of starting with a strong shock-coupled reaction zone as many other studies have done, the initial pulse is weak enough to allow the shock and the reaction zone to decouple. Reflected compression waves generated by the inertially confined reaction zone lead to localized reaction centers, which eventually explode and further accelerate the process. A shock-coupled reaction zone forms an initially overdriven detonation, which relaxes to a steady CJ wave. The one-dimensional problems are extended to two dimensions using a circular heat deposition in a channel. Two-dimensional results demonstrate the same sequence of events, which suggests that the concepts developed in the original one-dimensional work are applicable to multiple dimensions.

  9. Isogeometric Collocation: Cost Comparison with Galerkin Methods and Extension to Adaptive Hierarchical NURBS Discretizations (Preprint)

    DTIC Science & Technology

    2013-02-06

    levels, the ratio saturates to an asymptotic value acceptably “close” to its optimum reff=1.0. Second, we focus on the 2D advection benchmark discussed in...112] D. Forsey and R.H. Bartels. Hierarchical B-spline refinement. Computer Graphics (SIGGRAPH ’88 Proceedings), 22(4):205–212, 1988. [113] R. Kraft

  10. Graph coarsening methods for Karush-Kuhn-Tucker matrices arising in orthogonal collocation of optimal control problems

    NASA Astrophysics Data System (ADS)

    Senses, Begum

    A state-defect constraint pairing graph coarsening method is described for improving computational efficiency during the numerical factorization of large sparse Karush-Kuhn-Tucker matrices that arise from the discretization of optimal control problems via a Legendre-Gauss-Radau orthogonal collocation method. The method takes advantage of the particular sparse structure of the Karush-Kuhn-Tucker matrix that arises from the orthogonal collocation method. The state-defect constraint pairing graph coarsening method pairs each component of the state with its corresponding defect constraint and forces paired rows to be adjacent in the reordered Karush-Kuhn-Tucker matrix. Aggregate state-defect constraint pairing results are presented using a wide variety of benchmark optimal control problems where it is found that the proposed state-defect constraint pairing graph coarsening method significantly reduces both the number of delayed pivots and the number of floating point operations and increases the computational efficiency by performing more floating point operations per unit time. It is then shown that the state-defect constraint pairing graph coarsening method is less effective on Karush-Kuhn-Tucker matrices arising from Legendre-Gauss-Radau collocation when the optimal control problem contains state and control equality path constraints because such matrices may have delayed pivots that correspond to both defect and path constraints. An unweighted alternate graph coarsening method that employs maximal matching and a weighted alternate graph coarsening method that employs Hungarian algorithm on a weighting matrix are then used to attempt to further reduce the number of delayed pivots. It is found, however, that these alternate graph coarsening methods provide no further advantage over the state-defect constraint pairing graph coarsening method.

  11. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline

    PubMed Central

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases. PMID:27378903
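
    A minimal sketch of the closed-curve ingredient used above: a periodic cubic spline through ordered boundary samples, here synthetic points rather than contour points extracted from an image, with scipy's CubicSpline standing in for the paper's CCS model.

      # Sketch: a closed (periodic) cubic spline through ordered boundary samples, the
      # closed-curve ingredient of the CCS approach; the boundary points are synthetic
      # rather than extracted from a Nissl-stained image.
      import numpy as np
      from scipy.interpolate import CubicSpline

      t = np.linspace(0.0, 2.0 * np.pi, 13)                     # parameter along the contour
      pts = np.column_stack([np.cos(t) * (1.0 + 0.1 * np.sin(3 * t)),
                             np.sin(t) * (1.0 + 0.1 * np.sin(3 * t))])
      pts[-1] = pts[0]                                          # periodicity requires exact closure

      ccs = CubicSpline(t, pts, axis=0, bc_type='periodic')

      tt = np.linspace(0.0, 2.0 * np.pi, 400)
      contour = ccs(tt)                                         # densely resampled smooth boundary
      print(contour.shape)                                      # (400, 2)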

  12. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline.

    PubMed

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases.

  13. A direct multi-step Legendre-Gauss collocation method for high-order Volterra integro-differential equation

    NASA Astrophysics Data System (ADS)

    Kajani, M. Tavassoli; Gholampoor, I.

    2015-10-01

    The purpose of this study is to present a new direct method for approximating the solution, and its derivatives up to order k, of kth-order Volterra integro-differential equations with a regular kernel. The method is based on splitting the original problem into a sequence of subintervals. A Legendre-Gauss-Lobatto collocation method is proposed for solving the Volterra integro-differential equation. Numerical examples show that the approximate solutions have a good degree of accuracy.

  14. Characteristics method with cubic-spline interpolation for open channel flow computation

    NASA Astrophysics Data System (ADS)

    Tsai, Tung-Lin; Chiang, Shih-Wei; Yang, Jinn-Chuang

    2004-10-01

    In the framework of the specified-time-interval scheme, the accuracy of the characteristics method depends strongly on the form of the interpolation. Linear interpolation has commonly been coupled with the characteristics method (LI method) in open-channel flow computation. The LI method is easy to implement, but it leads to an inevitable smoothing of the solution. The characteristics method with Hermite cubic interpolation (HP method, originally developed by Holly and Preissmann, 1977) was then proposed to largely reduce the error induced by the LI method. In this paper, cubic-spline interpolation on the space line or on the time line is combined with the characteristics method (CS method) for unsteady flow computation in open channels. Two hypothetical examples, including gradually and rapidly varied flows, are used to examine the applicability of the CS method in comparison with the LI method, the HP method, and analytical solutions. The simulated results show that the CS method is comparable to the HP method and more accurate than the LI method. Because it does not require additional equations for the spatial or temporal derivatives, the CS method is easier to implement and more efficient than the HP method.
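
    The sketch below isolates the interpolation step being compared: values at the feet of the characteristics are recovered from grid values by linear interpolation (LI) and by cubic-spline interpolation (CS). The advected profile, celerity and time step are made up.

      # Sketch of the step the comparison is about: in the specified-time-interval scheme
      # the value advected to each grid node is taken from the foot of its characteristic,
      # which falls between grid points; recover it by linear interpolation (LI) and by
      # cubic-spline interpolation (CS). The profile and celerity are made up.
      import numpy as np
      from scipy.interpolate import CubicSpline

      x = np.linspace(0.0, 10.0, 41)                    # grid
      u = np.exp(-((x - 4.0) / 0.8) ** 2)               # profile at time level n

      c, dt = 1.3, 0.12                                 # wave celerity and time step
      x_foot = x - c * dt                               # feet of the characteristics

      li = np.interp(x_foot, x, u)                      # LI method
      cs = CubicSpline(x, u)(x_foot)                    # CS method (spline on the space line)

      exact = np.exp(-((x_foot - 4.0) / 0.8) ** 2)      # exact advected values for this profile
      print("LI max error:", np.abs(li - exact).max())
      print("CS max error:", np.abs(cs - exact).max())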

  15. High-order numerical solutions using cubic splines

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to Burgers' equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.

  16. Splines on fractals

    NASA Astrophysics Data System (ADS)

    Strichartz, Robert S.; Usher, Michael

    2000-09-01

    A general theory of piecewise multiharmonic splines is constructed for a class of fractals (post-critically finite) that includes the familiar Sierpinski gasket, based on Kigami's theory of Laplacians on these fractals. The spline spaces are the analogues of the spaces of piecewise C^j polynomials of degree 2j + 1 on an interval, with nodes at dyadic rational points. We give explicit algorithms for effectively computing multiharmonic functions (solutions of Δ^{j+1} u = 0) and for constructing bases for the spline spaces (for general fractals we need to assume that j is odd), and also for computing inner products of these functions. This enables us to give a finite element method for the approximate solution of fractal differential equations. We give the analogue of Simpson's method for numerical integration on the Sierpinski gasket. We use splines to approximate functions vanishing on the boundary by functions vanishing in a neighbourhood of the boundary.

  17. Chebyshev collocation spectral method for one-dimensional radiative heat transfer in linearly anisotropic-scattering cylindrical medium

    NASA Astrophysics Data System (ADS)

    Zhou, Rui-Rui; Li, Ben-Wen

    2017-03-01

    In this study, the Chebyshev collocation spectral method (CCSM) is developed to solve the radiative integro-differential transfer equation (RIDTE) for a one-dimensional absorbing, emitting and linearly anisotropic-scattering cylindrical medium. The general form of the quadrature formulas for Chebyshev collocation points is deduced. These formulas are proved to have the same accuracy as the Gauss-Legendre quadrature formula (GLQF) for the F-function (geometric function) in the RIDTE. The explicit expressions of the Lagrange basis polynomials and the differentiation matrices for Chebyshev collocation points are also given; these expressions are necessary for solving an integro-differential equation by the CCSM. Since the integrand in the RIDTE is continuous but non-smooth, it is treated by the segments integration method (SIM). The derivative terms in the RIDTE are carried out to improve the accuracy near the origin. In this way, fourth-order accuracy is achieved by the CCSM for the RIDTE, whereas only second-order accuracy is achieved by the finite difference method (FDM). Several benchmark problems (BPs) with various combinations of optical thickness, medium temperature distribution, degree of anisotropy, and scattering albedo are solved. The results show that the present CCSM efficiently yields highly accurate results, especially for optically thin media. The solutions, rounded to seven significant digits, are given in tabular form and show excellent agreement with the published data. Finally, the solutions of the RIDTE are used as benchmarks for the solution of the radiative integral transfer equations (RITEs) presented by Sutton and Chen (JQSRT 84 (2004) 65-103). A non-uniform grid refined near the wall is advised to improve the accuracy of the RITE solutions.
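
    For reference, the following fragment builds the standard Chebyshev-Gauss-Lobatto nodes and differentiation matrix (the construction popularized by Trefethen's cheb.m); it is the generic starting point of a CCSM, not the paper's radiative-transfer discretization.

      # Sketch: Chebyshev-Gauss-Lobatto collocation nodes and the associated differentiation
      # matrix (the standard construction, cf. Trefethen's cheb.m); generic machinery only,
      # not the paper's radiative-transfer discretization.
      import numpy as np

      def cheb(N):
          """Return the N+1 CGL nodes on [-1, 1] and the differentiation matrix D."""
          if N == 0:
              return np.array([1.0]), np.zeros((1, 1))
          x = np.cos(np.pi * np.arange(N + 1) / N)             # nodes, from +1 down to -1
          c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
          X = np.tile(x, (N + 1, 1)).T
          dX = X - X.T
          D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))      # off-diagonal entries
          D -= np.diag(D.sum(axis=1))                          # negative-sum trick for the diagonal
          return x, D

      x, D = cheb(16)
      f = np.sin(np.pi * x)
      print(np.abs(D @ f - np.pi * np.cos(np.pi * x)).max())   # spectrally small error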

  18. Spectral-collocation variational integrators

    NASA Astrophysics Data System (ADS)

    Li, Yiqun; Wu, Boying; Leok, Melvin

    2017-03-01

    Spectral methods are a popular choice for constructing numerical approximations for smooth problems, as they can achieve geometric rates of convergence and have a relatively small memory footprint. In this paper, we introduce a general framework to convert a spectral-collocation method into a shooting-based variational integrator for Hamiltonian systems. We also compare the proposed spectral-collocation variational integrators to spectral-collocation methods and Galerkin spectral variational integrators in terms of their ability to reproduce accurate trajectories in configuration and phase space, their ability to conserve momentum and energy, as well as the relative computational efficiency of these methods when applied to some classical Hamiltonian systems. In particular, we note that spectrally-accurate variational integrators, such as the Galerkin spectral variational integrators and the spectral-collocation variational integrators, combine the computational efficiency of spectral methods together with the geometric structure-preserving and long-time structural stability properties of symplectic integrators.

  19. Spline Histogram Method for Reconstruction of Probability Density Functions of Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Docenko, Dmitrijs; Berzins, Karlis

    We describe the spline histogram algorithm, which is useful for visualizing a probability density function when setting up a statistical hypothesis for a test. The spline histogram is constructed from discrete data measurements using tensioned cubic spline interpolation of the cumulative distribution function, which is then differentiated and smoothed using the Savitzky-Golay filter. The optimal width of the filter is determined by minimization of the integrated square error function. The current distribution of the TCSplin algorithm, written in f77 with IDL and Gnuplot visualization scripts, is available from www.virac.lv/en/soft.html.
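
    A compact sketch of the spline-histogram pipeline on synthetic data: interpolate the empirical cumulative distribution function with a cubic spline (a plain cubic spline stands in for the tensioned spline of TCSplin), differentiate it, and smooth the result with a Savitzky-Golay filter. The filter width here is fixed rather than chosen by minimizing the integrated square error.

      # Sketch of the spline-histogram pipeline on synthetic data: interpolate the empirical
      # CDF with a cubic spline (standing in for the tensioned spline of TCSplin),
      # differentiate it, and smooth the result with a Savitzky-Golay filter. The filter
      # width is fixed here instead of being chosen by minimising the integrated square error.
      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.signal import savgol_filter

      rng = np.random.default_rng(1)
      data = np.sort(rng.normal(0.0, 1.0, 300))               # e.g. velocities in a galaxy cluster

      cdf = (np.arange(1, data.size + 1) - 0.5) / data.size   # empirical CDF at the data points

      grid = np.linspace(data[0], data[-1], 400)
      pdf_raw = CubicSpline(data, cdf).derivative()(grid)     # differentiate the interpolated CDF
      pdf = savgol_filter(pdf_raw, window_length=51, polyorder=3)

      print(grid[np.argmax(pdf)])                             # density peak, near 0 for this sample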

  20. A collocated grid finite volume method for aeroacoustic computations of low-speed flows

    NASA Astrophysics Data System (ADS)

    Shen, Wen Zhong; Michelsen, Jess A.; Sørensen, Jens Nørkær

    2004-05-01

    A numerical algorithm for simulation of acoustic noise generation, based on collocated grids, is described. The approach, that was originally developed using a viscous/inviscid decomposition technique, involves two steps comprising a viscous incompressible flow part and an inviscid or viscous acoustic part. On collocated grids the inviscid solution is found to be mesh dependent due to unavoidable extrapolations of the acoustic pressure and density at walls, differing from the case on staggered grids where no extrapolation is needed. The situation is most pronounced when a sharp body is considered. A viscous acoustic algorithm is proposed to overcome the difficulty. Numerical computations of flows past a circular cylinder and a NACA 0015 airfoil show that a viscous/viscous coupling is more natural and gives excellent results as compared to those obtained in previous computations based on viscous/inviscid coupling on staggered grids. The model is applied to the problem of an airfoil exposed to a gust and results are compared to the numerical results of Lockard and Morris [AIAA J. 36(6) (1998) 907].

  1. A stochastic collocation method for uncertainty quantification and propagation in cardiovascular simulations.

    PubMed

    Sankaran, Sethuraman; Marsden, Alison L

    2011-03-01

    Simulations of blood flow in both healthy and diseased vascular models can be used to compute a range of hemodynamic parameters including velocities, time varying wall shear stress, pressure drops, and energy losses. The confidence in the data output from cardiovascular simulations depends directly on our level of certainty in simulation input parameters. In this work, we develop a general set of tools to evaluate the sensitivity of output parameters to input uncertainties in cardiovascular simulations. Uncertainties can arise from boundary conditions, geometrical parameters, or clinical data. These uncertainties result in a range of possible outputs which are quantified using probability density functions (PDFs). The objective is to systemically model the input uncertainties and quantify the confidence in the output of hemodynamic simulations. Input uncertainties are quantified and mapped to the stochastic space using the stochastic collocation technique. We develop an adaptive collocation algorithm for Gauss-Lobatto-Chebyshev grid points that significantly reduces computational cost. This analysis is performed on two idealized problems--an abdominal aortic aneurysm and a carotid artery bifurcation, and one patient specific problem--a Fontan procedure for congenital heart defects. In each case, relevant hemodynamic features are extracted and their uncertainty is quantified. Uncertainty quantification of the hemodynamic simulations is done using (a) stochastic space representations, (b) PDFs, and (c) the confidence intervals for a specified level of confidence in each problem.
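
    The fragment below is a minimal, non-adaptive stochastic-collocation sketch for a single uncertain input: an inexpensive stand-in model is evaluated at Chebyshev-Gauss-Lobatto collocation nodes, a polynomial surrogate is interpolated through the results, and the surrogate is then sampled cheaply to estimate the output PDF statistics and a confidence interval. The model, parameter range and node count are hypothetical.

      # A minimal, non-adaptive stochastic-collocation sketch for one uncertain input:
      # evaluate a stand-in model at Chebyshev-Gauss-Lobatto nodes, interpolate, then sample
      # the cheap surrogate for a PDF and confidence interval. `pressure_drop`, the parameter
      # range and the node count are hypothetical.
      import numpy as np
      from scipy.interpolate import BarycentricInterpolator

      def pressure_drop(resistance):
          # Hypothetical smooth response of an output quantity to an uncertain input.
          return 10.0 + 2.0 * np.tanh(0.5 * (resistance - 3.0))

      lo, hi, N = 2.0, 5.0, 8                                  # uniform input on [lo, hi]
      nodes = np.cos(np.pi * np.arange(N + 1) / N)             # CGL nodes on [-1, 1]
      params = 0.5 * (hi + lo) + 0.5 * (hi - lo) * nodes       # mapped collocation points

      surrogate = BarycentricInterpolator(params, pressure_drop(params))  # N+1 "expensive" runs

      rng = np.random.default_rng(2)
      draws = surrogate(rng.uniform(lo, hi, 100_000))          # cheap sampling of the surrogate
      print("mean:", draws.mean())
      print("95% interval:", np.percentile(draws, [2.5, 97.5]))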

  2. Solution of three-dimensional flow problems using a flux-spline method

    NASA Technical Reports Server (NTRS)

    Karki, K.; Mongia, H.; Patankar, S.

    1989-01-01

    This paper reports the application of a flux-spline scheme to three-dimensional fluid flow problems. The performance of this scheme is contrasted with that of the power-law differencing scheme. The numerical results are compared with reference solutions available in the literature. For the problems considered in this study, the flux-spline scheme is significantly more accurate than the power-law scheme.

  3. Evaluation of optimization methods for nonrigid medical image registration using mutual information and B-splines.

    PubMed

    Klein, Stefan; Staring, Marius; Pluim, Josien P W

    2007-12-01

    A popular technique for nonrigid registration of medical images is based on the maximization of their mutual information, in combination with a deformation field parameterized by cubic B-splines. The coordinate mapping that relates the two images is found using an iterative optimization procedure. This work compares the performance of eight optimization methods: gradient descent (with two different step size selection algorithms), quasi-Newton, nonlinear conjugate gradient, Kiefer-Wolfowitz, simultaneous perturbation, Robbins-Monro, and evolution strategy. Special attention is paid to computation time reduction by using fewer voxels to calculate the cost function and its derivatives. The optimization methods are tested on manually deformed CT images of the heart, on follow-up CT chest scans, and on MR scans of the prostate acquired using a BFFE, T1, and T2 protocol. Registration accuracy is assessed by computing the overlap of segmented edges. Precision and convergence properties are studied by comparing deformation fields. The results show that the Robbins-Monro method is the best choice in most applications. With this approach, the computation time per iteration can be lowered approximately 500 times without affecting the rate of convergence by using a small subset of the image, randomly selected in every iteration, to compute the derivative of the mutual information. From the other methods the quasi-Newton and the nonlinear conjugate gradient method achieve a slightly higher precision, at the price of larger computation times.

  4. Numerical solutions and error estimations for the space fractional diffusion equation with variable coefficients via Fibonacci collocation method.

    PubMed

    Bahşı, Ayşe Kurt; Yalçınbaş, Salih

    2016-01-01

    In this study, the Fibonacci collocation method, based on the Fibonacci polynomials, is presented to solve the fractional diffusion equations with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; with this treatment of the fractional derivative, the equation can be reduced to a set of linear algebraic equations. In addition, an error estimation algorithm based on the residual functions is presented for this method, and the approximate solutions are improved by using it. If the exact solution of the problem is not known, the absolute error function can be computed approximately by using the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions that are more efficient than direct numerical solutions. Numerical examples, figures, tables and comparisons are presented to show the efficiency and usability of the proposed method.

  5. B-spline methods and zonal grids for numerical simulations of turbulent flows

    NASA Astrophysics Data System (ADS)

    Kravchenko, Arthur Grigorievich

    1998-12-01

    A novel numerical technique is developed for simulations of complex turbulent flows on zonal embedded grids. This technique is based on the Galerkin method with basis functions constructed using B-splines. The technique permits fine meshes to be embedded in physically significant flow regions without placing a large number of grid points in the rest of the computational domain. The numerical technique has been tested successfully in simulations of a fully developed turbulent channel flow. Large eddy simulations of turbulent channel flow at Reynolds numbers up to Rec = 110,000 (based on centerline velocity and channel half-width) show good agreement with the existing experimental data. These tests indicate that the method provides an efficient information transfer between zones without accumulation of errors in the regions of sudden grid changes. The numerical solutions on multi-zone grids are of the same accuracy as those on a single-zone grid but require less computer resources. The performance of the numerical method in a generalized coordinate system is assessed in simulations of laminar flows over a circular cylinder at low Reynolds numbers and three-dimensional simulations at ReD = 300 (based on free-stream velocity and cylinder diameter). The drag coefficients, the size of the recirculation region, and the vortex shedding frequency all agree well with the experimental data and previous simulations of these flows. Large eddy simulations of a flow over a circular cylinder at a sub-critical Reynolds number, ReD = 3900, are performed and compared with previous upwind-biased and central finite-difference computations. In the very near-wake, all three simulations are in agreement with each other and agree fairly well with the PIV experimental data of Lourenco & Shih (1993). Farther downstream, the results of the B- spline computations are in better agreement with the hot- wire experiment of Ong & Wallace (1996) than those obtained in finite-difference simulations

  6. Examination of the Circle Spline Routine

    NASA Technical Reports Server (NTRS)

    Dolin, R. M.; Jaeger, D. L.

    1985-01-01

    The Circle Spline routine is currently being used for generating both two and three dimensional spline curves. It was developed for use in ESCHER, a mesh generating routine written to provide a computationally simple and efficient method for building meshes along curved surfaces. Circle Spline is a parametric linear blending spline. Because many computerized machining operations involve circular shapes, the Circle Spline is well suited for both the design and manufacturing processes and shows promise as an alternative to the spline methods currently supported by the Initial Graphics Specification (IGES).

  7. Structural break detection method based on the Adaptive Regression Splines technique

    NASA Astrophysics Data System (ADS)

    Kucharczyk, Daniel; Wyłomańska, Agnieszka; Zimroz, Radosław

    2017-04-01

    For many real data sets, a long-term observation consists of different processes that coexist or occur one after the other. Those processes very often exhibit different statistical properties, and thus the observed data should be segmented before further analysis. This problem arises in many applications, and therefore new segmentation techniques have appeared in the literature in recent years. In this paper we propose a new method of time series segmentation, i.e. the extraction from the analysed vector of observations of homogeneous parts with similar behaviour. The method is based on the absolute deviation about the median of the signal and is an extension of previously proposed techniques also based on simple statistics. We introduce a structural break point detection method based on the Adaptive Regression Splines technique, a form of regression analysis. Moreover, we also propose a statistical test that allows hypotheses about behaviour in different regimes to be tested. First, the methodology is applied to simulated signals with different distributions in order to show the effectiveness of the new technique. Next, in the application part, we analyse a real data set representing the vibration signal from a heavy-duty crusher used in a mineral processing plant.

  8. A two-parameter continuation algorithm using radial basis function collocation method for rotating Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Shih, Yin-Tzer; Tsai, Chih-Ching

    2013-11-01

    We present an efficient numerical algorithm to observe the dynamical formation of the vortex lattice of a rotating trapped Bose-Einstein condensate (BEC) by solving a two-dimensional Gross-Pitaevskii equation (GPE). We use a radial basis function collocation method (RBFCM) to discretize a two-dimensional coupled nonlinear Schrödinger equation (CNLSE) derived from the GPE. A two-parameter continuation algorithm is implemented to trace the solution curve of the CNLSE. The numerical experiments show the promise of the proposed method for observing a variety of vortices of a rotating BEC in optical lattices, and depict the superfluid densities and the solution curves as functions of the chemical potential and the rotation frequency for various vortex structures. Compared with existing numerical methods in the literature, this algorithm provides an efficient way to observe complicated vortex structures and dynamical phenomena of vortices in a rotating BEC.
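
    To illustrate the collocation technique itself rather than the Gross-Pitaevskii application, the sketch below applies multiquadric RBF collocation to a toy one-dimensional boundary-value problem; the shape parameter and the test problem are arbitrary choices.

      # Sketch of multiquadric (MQ) RBF collocation on a toy 1D boundary-value problem
      # u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0, with exact solution sin(pi x). This shows
      # the collocation technique only; the shape parameter and test problem are arbitrary.
      import numpy as np

      nodes = np.linspace(0.0, 1.0, 25)
      c = 0.15                                                 # MQ shape parameter (assumed)

      def mq(x, xj):
          return np.sqrt((x[:, None] - xj[None, :]) ** 2 + c ** 2)

      def mq_xx(x, xj):
          # second derivative of the MQ basis with respect to x: c^2 / (r^2 + c^2)^(3/2)
          r2 = (x[:, None] - xj[None, :]) ** 2
          return c ** 2 / (r2 + c ** 2) ** 1.5

      Phi = mq(nodes, nodes)                                   # evaluation matrix
      A = mq_xx(nodes, nodes)                                  # collocation of u'' at the nodes
      A[0, :], A[-1, :] = Phi[0, :], Phi[-1, :]                # boundary rows enforce u(0)=u(1)=0
      rhs = -np.pi ** 2 * np.sin(np.pi * nodes)
      rhs[0] = rhs[-1] = 0.0

      coef = np.linalg.solve(A, rhs)
      u = Phi @ coef
      print(np.abs(u - np.sin(np.pi * nodes)).max())           # small collocation error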

  9. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    NASA Astrophysics Data System (ADS)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates

  10. Mass and Momentum Conservation of the Least-Squares Spectral Collocation Method for the Time-Dependent Stokes Equations

    NASA Astrophysics Data System (ADS)

    Kattelans, Thorsten; Heinrichs, Wilhelm

    2009-09-01

    For Stokes problems, least-squares schemes have the major advantage that they require no stabilization and equal-order interpolation can be used. The disadvantage of the Least-Squares Finite Element Method (LSFEM) and of the Least-Squares Spectral Element Method (LSSEM) is that they perform poorly with respect to conservation of mass for internal flow problems, although the LSSEM compensates for this by a superior conservation of momentum. In the literature it has been shown that the Least-Squares Spectral Collocation Method (LSSCM) leads to superior conservation of mass and momentum for the steady Stokes equations. Here, we extend the study to the time-dependent Stokes equations for an internal flow problem, where the domain is decomposed into different elements using the transfinite mapping of Gordon and Hall. To minimize the influence of round-off errors, we use a QR decomposition for solving the resulting overdetermined algebraic systems instead of forming the normal equations.
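
    The linear-algebra point can be illustrated in a few lines: for an ill-conditioned overdetermined system, solving via a thin QR factorization avoids the squaring of the condition number that forming the normal equations incurs. The random test matrix below merely stands in for a least-squares spectral collocation system.

      # Sketch of the linear-algebra point only: solve an overdetermined system A x ~= b by a
      # thin QR factorization rather than via the normal equations A^T A x = A^T b, which
      # square the condition number. A random ill-conditioned matrix stands in for the
      # least-squares spectral collocation system.
      import numpy as np
      from scipy.linalg import solve_triangular

      rng = np.random.default_rng(3)
      m, n = 400, 60
      U, _ = np.linalg.qr(rng.standard_normal((m, n)))
      V, _ = np.linalg.qr(rng.standard_normal((n, n)))
      A = U @ np.diag(np.logspace(0, -7, n)) @ V.T             # condition number ~ 1e7
      x_true = rng.standard_normal(n)
      b = A @ x_true

      Q, R = np.linalg.qr(A)                                   # thin QR: R x = Q^T b
      x_qr = solve_triangular(R, Q.T @ b)

      x_ne = np.linalg.solve(A.T @ A, A.T @ b)                 # normal equations, cond ~ 1e14

      print("QR error:              ", np.linalg.norm(x_qr - x_true))
      print("normal-equations error:", np.linalg.norm(x_ne - x_true))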

  11. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle a physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations into a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides as a numerical example the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.

  12. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1985-01-01

    Rayleigh-Ritz methods using subspaces of piecewise polynomial spline functions are developed for the approximation of the natural modes of a class of vibration problems involving flexible beams with tip bodies. An abstract operator theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.

  13. A shifted Jacobi collocation algorithm for wave type equations with non-local conservation conditions

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohammed A.

    2014-09-01

    In this paper, we propose an efficient spectral collocation algorithm to numerically solve wave-type equations subject to initial, boundary and non-local conservation conditions. The shifted Jacobi pseudospectral approximation is investigated for the discretization of the spatial variable of such equations and possesses spectral accuracy in the spatial variable. The shifted Jacobi-Gauss-Lobatto (SJ-GL) quadrature rule is established for treating the non-local conservation conditions, and the problem with its initial and non-local boundary conditions is then reduced to a system of second-order ordinary differential equations in the temporal variable. This system is solved by a two-stage fourth-order A-stable implicit Runge-Kutta scheme. Five numerical examples with comparisons are given. The computational results demonstrate that the proposed algorithm is more accurate than the finite difference method, the method of lines and the spline collocation approach.

  14. A multi-domain Chebyshev collocation method for predicting ultrasonic field parameters in complex material geometries.

    PubMed

    Nielsen, S A; Hesthaven, J S

    2002-05-01

    The use of ultrasound to measure elastic field parameters as well as to detect cracks in solid materials has received much attention, and new important applications have been developed recently, e.g., the use of laser generated ultrasound in non-destructive evaluation (NDE). To model such applications requires a realistic calculation of field parameters in complex geometries with discontinuous, layered materials. In this paper we present an approach for solving the elastic wave equation in complex geometries with discontinuous layered materials. The approach is based on a pseudospectral elastodynamic formulation, giving a direct solution of the time-domain elastodynamic equations. A typical calculation is performed by decomposing the global computational domain into a number of subdomains. Every subdomain is then mapped onto a unit square using transfinite blending functions, and spatial derivatives are calculated efficiently by a Chebyshev collocation scheme. This enables the elastodynamic equations to be solved with spectral accuracy, and furthermore, complex interfaces can be approximated smoothly, hence avoiding staircasing. A global solution is constructed from the local solutions by means of characteristic variables. Finally, the global solution is advanced in time using a fourth order Runge-Kutta scheme. Examples of field prediction in discontinuous solids with complex geometries are given and related to ultrasonic NDE.

  15. Mercury vapor air-surface exchange measured by collocated micrometeorological and enclosure methods - Part I: Data comparability and method characteristics

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2015-01-01

    Reliable quantification of air-biosphere exchange flux of elemental mercury vapor (Hg0) is crucial for understanding the global biogeochemical cycle of mercury. However, there has not been a standard analytical protocol for flux quantification, and little attention has been devoted to characterize the temporal variability and comparability of fluxes measured by different methods. In this study, we deployed a collocated set of micrometeorological (MM) and dynamic flux chamber (DFC) measurement systems to quantify Hg0 flux over bare soil and low standing crop in an agricultural field. The techniques include relaxed eddy accumulation (REA), modified Bowen ratio (MBR), aerodynamic gradient (AGM) as well as dynamic flux chambers of traditional (TDFC) and novel (NDFC) designs. The five systems and their measured fluxes were cross-examined with respect to magnitude, temporal trend and correlation with environmental variables. Fluxes measured by the MM and DFC methods showed distinct temporal trends. The former exhibited a highly dynamic temporal variability while the latter had much more gradual temporal features. The diurnal characteristics reflected the difference in the fundamental processes driving the measurements. The correlations between NDFC and TDFC fluxes and between MBR and AGM fluxes were significant (R>0.8, p<0.05), but the correlation between DFC and MM fluxes were from weak to moderate (R=0.1-0.5). Statistical analysis indicated that the median of turbulent fluxes estimated by the three independent MM techniques were not significantly different. Cumulative flux measured by TDFC is considerably lower (42% of AGM and 31% of MBR fluxes) while those measured by NDFC, AGM and MBR were similar (<10% difference). This suggests that incorporating an atmospheric turbulence property such as friction velocity for correcting the DFC-measured flux effectively bridged the gap between the Hg0 fluxes measured by enclosure and MM techniques. Cumulated flux measured by REA

  16. Algebraic grid adaptation method using non-uniform rational B-spline surface modeling

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, B. K.

    1992-01-01

    An algebraic adaptive grid system based on equidistribution law and utilized by the Non-Uniform Rational B-Spline (NURBS) surface for redistribution is presented. A weight function, utilizing a properly weighted boolean sum of various flow field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.

  17. Jacobi-Gauss-Lobatto collocation method for the numerical solution of 1+1 nonlinear Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Abdelkawy, M. A.; Van Gorder, Robert A.

    2014-03-01

    A Jacobi-Gauss-Lobatto collocation (J-GL-C) method, used in combination with the implicit Runge-Kutta method of fourth order, is proposed as a numerical algorithm for the approximation of solutions to nonlinear Schrödinger equations (NLSE) with initial-boundary data in 1+1 dimensions. Our procedure is implemented in two successive steps. In the first one, the J-GL-C is employed for approximating the functional dependence on the spatial variable, using (N-1) nodes of the Jacobi-Gauss-Lobatto interpolation which depends upon two general Jacobi parameters. The resulting equations together with the two-point boundary conditions induce a system of 2(N-1) first-order ordinary differential equations (ODEs) in time. In the second step, the implicit Runge-Kutta method of fourth order is applied to solve this temporal system. The proposed J-GL-C method, used in combination with the implicit Runge-Kutta method of fourth order, is employed to obtain highly accurate numerical approximations to four types of NLSE, including the attractive and repulsive NLSE and a Gross-Pitaevskii equation with space-periodic potential. The numerical results obtained by this algorithm have been compared with various exact solutions in order to demonstrate the accuracy and efficiency of the proposed method. Indeed, for relatively few nodes used, the absolute error in our numerical solutions is sufficiently small.

  18. Galerkin method for unsplit 3-D Dirac equation using atomically/kinetically balanced B-spline basis

    SciTech Connect

    Fillion-Gourdeau, F.; Lorin, E.; Bandrauk, A.D.

    2016-02-15

    A Galerkin method is developed to solve the time-dependent Dirac equation in prolate spheroidal coordinates for an electron–molecular two-center system. The initial state is evaluated from a variational principle using a kinetic/atomic balanced basis, which allows for an efficient and accurate determination of the Dirac spectrum and eigenfunctions. B-spline basis functions are used to obtain high accuracy. This numerical method is used to compute the energy spectrum of the two-center problem and then the evolution of eigenstate wavefunctions in an external electromagnetic field.

  19. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images

    PubMed Central

    Quirós, Elia; Felicísimo, Ángel M.; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test. PMID:22291550

  20. A new TEC interpolation method based on the least squares collocation for high accuracy regional ionospheric maps

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Paweł; Jarmołowski, Wojciech

    2017-04-01

    The ionosphere plays a crucial role in space weather that affects satellite navigation as the ionospheric delay is one of the major errors in GNSS. On the other hand, GNSS observations are widely used to determine the amount of ionospheric total electron content (TEC). An important aspect in the electron content estimation at regional and global scale is adopting the appropriate interpolation strategy. In this paper we propose and validate a new method for regional TEC modeling based on least squares collocation (LSC) with noise variance estimation. This method allows for providing accurate TEC maps with high spatial and temporal resolution. Such maps may be used to support precise GNSS positioning and navigation, e.g. in RTK mode and also in the ionosphere studies. To test applicability of new TEC maps to positioning, double-difference ionospheric corrections were derived from the maps and their accuracy was analyzed. In addition, the corrections were applied to GNSS positioning and validated in ambiguity resolution domain. The tests were carried out during a strong ionospheric storm when the ionosphere is particularly difficult to model. The performance of the new approach was compared to IGS and UPC global, and CODE regional TEC maps. The results showed an advantage of our solution with resulting accuracy of the relative ionospheric corrections usually better than 10 cm, even during the ionospheric disturbances. This proves suitability of our regional TEC maps for, e.g. supporting fast ambiguity resolution in kinematic GNSS positioning.
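
    A minimal one-dimensional least-squares collocation sketch follows: noisy observations of a smooth signal are propagated to new locations through an assumed Gaussian signal covariance plus observation noise. The covariance model and its parameters are illustrative assumptions, not the values estimated in the paper.

      # Minimal least-squares collocation (LSC) sketch in one dimension: predict a signal
      # (think of a VTEC residual along a profile) at new locations from noisy observations
      # through an assumed Gaussian covariance model. The covariance function, its parameters
      # and the noise variance are illustrative assumptions, not the paper's estimates.
      import numpy as np

      rng = np.random.default_rng(4)

      def cov(a, b, c0=1.0, corr_len=15.0):
          """Gaussian signal covariance C(d) = c0 * exp(-(d / corr_len)^2)."""
          d = np.abs(a[:, None] - b[None, :])
          return c0 * np.exp(-(d / corr_len) ** 2)

      x_obs = np.sort(rng.uniform(0.0, 100.0, 30))             # irregular observation sites
      noise_var = 0.05 ** 2
      obs = np.sin(2 * np.pi * x_obs / 60.0) + np.sqrt(noise_var) * rng.standard_normal(x_obs.size)

      # LSC predictor: s_hat = C_new,obs (C_obs,obs + C_noise)^(-1) * observations
      x_new = np.linspace(0.0, 100.0, 201)
      C_oo = cov(x_obs, x_obs) + noise_var * np.eye(x_obs.size)
      s_hat = cov(x_new, x_obs) @ np.linalg.solve(C_oo, obs)

      print(np.abs(s_hat - np.sin(2 * np.pi * x_new / 60.0)).max())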

  1. A nonclassical Radau collocation method for solving the Lane-Emden equations of the polytropic index 4.75 ≤ α < 5

    NASA Astrophysics Data System (ADS)

    Tirani, M. D.; Maleki, M.; Kajani, M. T.

    2014-11-01

    A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and are utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.

  2. Split spline screw

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A split spline screw type payload fastener assembly, including three identical male and female type split spline sections, is discussed. The male spline sections are formed on the head of a male type spline driver. Each of the split male type spline sections has an outwardly projecting load bearing segment including a convex upper surface which is adapted to engage a complementary concave surface of a female spline receptor in the form of a hollow bolt head. Additionally, the male spline section also includes a horizontal spline releasing segment and a spline tightening segment below each load bearing segment. The spline tightening segment consists of a vertical web of constant thickness. The web has at least one flat vertical wall surface which is designed to contact a generally flat vertically extending wall surface tab of the bolt head. Mutual interlocking and unlocking of the male and female splines results upon clockwise and counterclockwise turning of the driver element.

  3. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects treated through diffuse-interface models [27]. Numerical simulations of this practical model can be used to evaluate it. The present paper investigates the solution of the tumor growth model with meshless techniques. Meshless methods based on the collocation technique are applied, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantage of these choices is the natural behavior of meshless approaches: a meshless method can easily be applied to find the solution of partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The paper considers a time-dependent system of partial differential equations that describes a four-species tumor growth model. To handle the time variable, two procedures are used: a semi-implicit finite difference method based on the Crank-Nicolson scheme, and explicit Runge-Kutta time integration. The first gives a linear system of algebraic equations to be solved at each time step; the second is efficient but only conditionally stable. The numerical results reported confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.

  4. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered due to the computational cost associated with numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional

  5. Conforming Chebyshev spectral collocation methods for the solution of laminar flow in a constricted channel

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1990-01-01

    The numerical simulation of steady planar two-dimensional, laminar flow of an incompressible fluid through an abruptly contracting channel using spectral domain decomposition methods is described. The key features of the method are the decomposition of the flow region into a number of rectangular subregions and spectral approximations which are pointwise C(1) continuous across subregion interfaces. Spectral approximations to the solution are obtained for Reynolds numbers in the range 0 to 500. The size of the salient corner vortex decreases as the Reynolds number increases from 0 to around 45. As the Reynolds number is increased further the vortex grows slowly. A vortex is detected downstream of the contraction at a Reynolds number of around 175 that continues to grow as the Reynolds number is increased further.

  6. Numerical Modelling of a Functional Differential Equation with Deviating Arguments Using a Collocation Method

    SciTech Connect

    Teodoro, M. F.; Ford, N. J.; Lumb, P.; Lima, P. M.

    2008-09-01

    This paper is concerned with the approximate solution of a functional differential equation of the form x'(t) = α(t)x(t) + β(t)x(t-1) + γ(t)x(t+1). We search for a solution x, defined for t ∈ [-1, k] (k ∈ N), which takes given values on the intervals [-1, 0] and (k-1, k]. Continuing the work started in [10], we introduce and analyse some new computational methods for the solution of this problem which are applicable both in the case of constant and of variable coefficients. Numerical results are presented and compared with the results obtained by other methods.

  7. Pseudospectral Collocation Methods for the Direct Transcription of Optimal Control Problems

    DTIC Science & Technology

    2003-04-01

    solving optimal control problems for trajectory optimization, spacecraft attitude control, jet thruster control, missile guidance and many other... optimal control problems using a pseudospectral direct transcription method. These problems are stated here so that they may be referred to elsewhere...e.g., [7]. 2.3 Prototypical Examples Throughout this thesis two example problems are used to demonstrate various prop- erties associated with solving

  8. Monotone and convex quadratic spline interpolation

    NASA Technical Reports Server (NTRS)

    Lam, Maria H.

    1990-01-01

    A method for producing interpolants that preserve the monotonicity and convexity of discrete data is described. It utilizes the quadratic spline proposed by Schumaker (1983) which was subsequently characterized by De Vore and Yan (1986). The selection of first order derivatives at the given data points is essential to this spline. An observation made by De Vore and Yan is generalized, and an improved method to select these derivatives is proposed. The resulting spline is completely local, efficient, and simple to implement.
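
    The practical effect of shape preservation is easy to demonstrate. The Python sketch below uses SciPy's monotone PCHIP interpolant as a readily available stand-in (it is not Schumaker's quadratic spline or the derivative-selection rule discussed above) and contrasts it with an unconstrained cubic spline on monotone data with an abrupt change, where an ordinary spline typically overshoots.

      import numpy as np
      from scipy.interpolate import CubicSpline, PchipInterpolator

      # Monotone increasing data with an abrupt change: a classic case where an
      # unconstrained cubic spline overshoots and loses monotonicity.
      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
      y = np.array([0.0, 0.05, 0.1, 4.0, 4.9, 5.0])

      shape_preserving = PchipInterpolator(x, y)   # monotone piecewise cubic
      unconstrained = CubicSpline(x, y)            # ordinary C2 cubic spline

      xx = np.linspace(0.0, 5.0, 501)

      def is_monotone(f):
          return bool(np.all(np.diff(f(xx)) >= -1e-12))

      print("PCHIP interpolant monotone on [0, 5]?", is_monotone(shape_preserving))
      print("Ordinary cubic spline monotone?      ", is_monotone(unconstrained))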

  9. Regularization of B-Spline Objects.

    PubMed

    Xu, Guoliang; Bajaj, Chandrajit

    2011-01-01

    By a d-dimensional B-spline object, we mean a B-spline curve (d = 1), a B-spline surface (d = 2) or a B-spline volume (d = 3). By regularization of a B-spline object we mean the process of relocating its control points such that the object approximates an isometric map of its definition domain in certain directions and is shape preserving. In this paper we develop an efficient regularization method for B-spline objects with d = 1, 2, 3, based on solving weak-form L²-gradient flows constructed from the minimization of certain regularizing energy functionals. These flows are integrated via the finite element method using B-spline basis functions. Our experimental results demonstrate that our new regularization method is very effective.

  10. LUPOD: Collocation in POD via LU decomposition

    NASA Astrophysics Data System (ADS)

    Rapún, M.-L.; Terragni, F.; Vega, J. M.

    2017-04-01

    A collocation method is developed for the (truncated) POD of a set of snapshots. In other words, POD computations are performed using only a set of collocation points, whose number is comparable to the number of retained modes, in a similar fashion as in collocation spectral methods. Intending to rely on simple ideas which, moreover, are consistent with the essence of POD, collocation points are computed via the LU decomposition with pivoting of the snapshot matrix. The new method is illustrated in simple applications in which POD is used as a data-processing method. The performance of the method is tested in the computationally efficient construction of reduced order models based on POD plus Galerkin projection for the complex Ginzburg-Landau equation in one and two space dimensions.
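
    A compact Python sketch of the basic idea, with details simplified relative to the paper: a row-pivoted LU factorization of the snapshot matrix selects the collocation points, the SVD is computed only on those rows, and spatial modes on the full grid are recovered by projection. The synthetic snapshot matrix and the choice of exactly as many collocation points as retained modes are assumptions made purely for illustration.

      import numpy as np
      from scipy.linalg import lu, svd

      rng = np.random.default_rng(0)

      # Synthetic snapshot matrix: rows are grid points, columns are snapshots.
      x = np.linspace(0.0, 1.0, 400)
      t = np.linspace(0.0, 1.0, 60)
      X = (np.sin(2 * np.pi * np.outer(x, t))
           + 0.5 * np.outer(x ** 2, np.cos(4 * np.pi * t))
           + 1e-3 * rng.standard_normal((x.size, t.size)))

      k = 3   # retained modes = collocation points (an illustrative choice)

      # Collocation points from the row-pivoted LU decomposition of the snapshots.
      P, _, _ = lu(X)
      collocation_rows = np.argmax(P, axis=0)[:k]

      # POD restricted to the collocation rows: right singular vectors of the
      # reduced matrix give temporal modes; spatial modes on the full grid are
      # then recovered by projecting the snapshots onto them.
      _, _, Vt_red = svd(X[collocation_rows, :], full_matrices=False)
      Phi = X @ Vt_red[:k].T
      Phi /= np.linalg.norm(Phi, axis=0)

      # Compare reconstruction errors against a full truncated SVD of X.
      coeffs = np.linalg.lstsq(Phi, X, rcond=None)[0]
      err_lupod = np.linalg.norm(X - Phi @ coeffs) / np.linalg.norm(X)
      U_full, s_full, Vt_full = svd(X, full_matrices=False)
      err_svd = np.linalg.norm(X - U_full[:, :k] * s_full[:k] @ Vt_full[:k]) / np.linalg.norm(X)
      print(f"relative error, collocation-based POD: {err_lupod:.2e}")
      print(f"relative error, full truncated SVD   : {err_svd:.2e}")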

  11. Shape Preserving Spline Interpolation

    NASA Technical Reports Server (NTRS)

    Gregory, J. A.

    1985-01-01

    A rational spline solution to the problem of shape preserving interpolation is discussed. The rational spline is represented in terms of first derivative values at the knots and provides an alternative to the spline-under-tension. The idea of making the shape control parameters dependent on the first derivative unknowns is then explored. The monotonic or convex shape of the interpolation data can then be preserved automatically through the solution of the resulting non-linear consistency equations of the spline.

  12. Shape identification technique for a two-dimensional elliptic system by boundary integral equation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1989-01-01

    The geometrical structure of the boundary shape for a two-dimensional boundary value problem is identified. The output least square identification method is considered for estimating partially unknown boundary shapes. A numerical parameter estimation technique using the spline collocation method is proposed.

  13. Exponential splines: A survey

    SciTech Connect

    McCartin, B.J.

    1996-12-31

    Herein, we discuss a generalization of the semiclassical cubic spline known in the literature as the exponential spline. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain "tension" parameters. We first outline the theoretical underpinnings of the exponential spline. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. We next discuss the numerical computation of the exponential spline. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape preserving approximant is developed. A sequence of selected curve-fitting examples is presented, clearly demonstrating the advantages of exponential splines over cubic splines. We conclude with a consideration of the broad spectrum of possible uses of exponential splines in applications. Our primary emphasis is on computational fluid dynamics although the imaginative reader will recognize the wider generality of the techniques developed.

  14. Quarter-sweep Gauss-Seidel method with quadratic spline scheme applied to fourth order two-point boundary value problems

    NASA Astrophysics Data System (ADS)

    Mohd Fauzi, Norizyan Izzati; Sulaiman, Jumat

    2013-04-01

    The aim of this paper is to describe the application of the Quarter-Sweep Gauss-Seidel (QSGS) iterative method with a quadratic spline scheme for solving fourth order two-point linear boundary value problems. To derive the approximation equations, the fourth order problems first need to be reduced to a system of second-order two-point boundary value problems. Then two linear systems have been constructed via a discretization process using the corresponding quarter-sweep quadratic spline approximation equations. The generated linear systems have been solved using the proposed QSGS iterative method to show its superiority over the Full-Sweep Gauss-Seidel (FSGS) and Half-Sweep Gauss-Seidel (HSGS) methods. Computational results are provided to illustrate that the proposed QSGS method is superior in terms of computational time and number of iterations compared to the other tested methods.

  15. A Full-Relativistic B-Spline R-Matrix Method for Electron and Photon Collisions with Atoms and Ions

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Bartschat, Klaus

    2008-05-01

    We have extended our B-spline R-matrix (close-coupling) method [1] to fully account for relativistic effects in a Dirac-Coulomb formulation. Our numerical implementation of the close-coupling method enables us to construct term-dependent, non-orthogonal sets of one-electron orbitals for the bound and continuum electrons. This is a critical aspect for complex targets, where individually optimized one-electron orbitals can significantly reduce the size of the multi-configuration expansions needed for an accurate target description. Furthermore, core-valence correlation effects are treated fully ab initio, rather than through semi-empirical, and usually local, model potentials. The method will be described in detail and illustrated by comparing our theoretical predictions for e-Cs collisions with benchmark experiments for angle-integrated and angle-differential cross sections [2], various spin-dependent scattering asymmetries [3], and Stokes parameters measured in superelastic collisions with laser-excited atoms [4]. [1] O. Zatsarinny, Comp. Phys. Commun. 174, 273 (2006). [2] W. Gehenn and E. Reichert, J. Phys. B 10, 3105 (1977). [3] G. Baum et al., Phys. Rev. A 66, 022705 (2002) and 70, 012707 (2004). [4] D.S. Slaughter et al., Phys. Rev. A 75, 062717 (2007).

  16. A Fully Relativistic B-Spline R-Matrix Method for Electron and Photon Collisions with Atoms and Ions

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Bartschat, Klaus

    2008-10-01

    We have extended our B-spline R-matrix (close-coupling) method [1] to fully account for relativistic effects in a Dirac-Coulomb formulation. Our numerical implementation of the close-coupling method enables us to construct term-dependent, non-orthogonal sets of one-electron orbitals for the bound and continuum electrons. This is a critical aspect for complex targets, where individually optimized one-electron orbitals can significantly reduce the size of the multi-configuration expansions needed for an accurate target description. Core-valence correlation effects are treated fully ab initio, rather than through semi-empirical model potentials. The method is described in detail and will be illustrated by comparing our theoretical predictions for e-Cs collisions [2] with benchmark experiments for angle-integrated and angle-differential cross sections [3], various spin-dependent scattering asymmetries [4], and Stokes parameters measured in superelastic collisions with laser-excited atoms [5]. [1] O. Zatsarinny, Comp. Phys. Commun. 174, 273 (2006). [2] O. Zatsarinny and K. Bartschat, Phys. Rev. A 77, 062701 (2008). [3] W. Gehenn and E. Reichert, J. Phys. B 10, 3105 (1977). [4] G. Baum et al., Phys. Rev. A 66, 022705 (2002) and 70, 012707 (2004). [5] D.S. Slaughter et al., Phys. Rev. A 75, 062717 (2007).

  17. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches.
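
    For a simple ODE the trapezoidal variant of these estimating equations reduces to a linear regression, which the following Python sketch illustrates. The one-parameter decay model, the noise level, and the use of SciPy's smoothing spline in place of the penalized spline of the paper are assumptions made for illustration only.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(1)

      # Synthetic data from x'(t) = -theta * x(t), x(0) = 2 (theta = 0.8 assumed).
      theta_true, x0 = 0.8, 2.0
      t_obs = np.linspace(0.0, 5.0, 41)
      x_obs = x0 * np.exp(-theta_true * t_obs) + 0.02 * rng.standard_normal(t_obs.size)

      # Step 1: smooth the noisy state with a spline (a stand-in for the
      # penalized spline) and evaluate it on a fine discretization grid.
      spl = UnivariateSpline(t_obs, x_obs, k=3, s=t_obs.size * 0.02 ** 2)
      t = np.linspace(0.0, 5.0, 201)
      xh = spl(t)
      h = t[1] - t[0]

      # Step 2: the trapezoidal discretization of the ODE turns parameter
      # estimation into linear regression:
      #   x_{i+1} - x_i = -(theta * h / 2) * (x_i + x_{i+1}).
      y = xh[1:] - xh[:-1]
      z = -(h / 2.0) * (xh[1:] + xh[:-1])
      theta_hat = (z @ y) / (z @ z)
      print(f"true theta = {theta_true}, estimated theta = {theta_hat:.3f}")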

  18. Numerical Discretization-Based Estimation Methods for Ordinary Differential Equation Models via Penalized Spline Smoothing with Applications in Biomedical Research

    PubMed Central

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-01-01

    Summary Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this paper, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators (DBE) are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. PMID:22376200

  19. Quantitative structure-activity relationship study on BTK inhibitors by modified multivariate adaptive regression spline and CoMSIA methods.

    PubMed

    Xu, A; Zhang, Y; Ran, T; Liu, H; Lu, S; Xu, J; Xiong, X; Jiang, Y; Lu, T; Chen, Y

    2015-01-01

    Bruton's tyrosine kinase (BTK) plays a crucial role in B-cell activation and development, and has emerged as a new molecular target for the treatment of autoimmune diseases and B-cell malignancies. In this study, two- and three-dimensional quantitative structure-activity relationship (2D and 3D-QSAR) analyses were performed on a series of pyridine and pyrimidine-based BTK inhibitors by means of genetic algorithm optimized multivariate adaptive regression spline (GA-MARS) and comparative molecular similarity index analysis (CoMSIA) methods. Here, we propose a modified MARS algorithm to develop 2D-QSAR models. The top ranked models showed satisfactory statistical results (2D-QSAR: Q² = 0.884, r² = 0.929, r²pred = 0.878; 3D-QSAR: q² = 0.616, r² = 0.987, r²pred = 0.905). Key descriptors selected by 2D-QSAR were in good agreement with the conclusions of 3D-QSAR, and the 3D-CoMSIA contour maps facilitated interpretation of the structure-activity relationship. A new molecular database was generated by molecular fragment replacement (MFR) and further evaluated with GA-MARS and CoMSIA prediction. Twenty-five pyridine and pyrimidine derivatives as novel potential BTK inhibitors were finally selected for further study. These results also demonstrated that our method can be a very efficient tool for the discovery of novel potent BTK inhibitors.

  20. COBS: COnstrained B-Splines

    NASA Astrophysics Data System (ADS)

    Ng, Pin T.; Maechler, Martin

    2015-05-01

    COBS (COnstrained B-Splines), written in R, creates constrained regression smoothing splines via linear programming and sparse matrices. The method has two important features: the number and location of knots for the spline fit are established using the likelihood-based Akaike Information Criterion (rather than a heuristic procedure); and fits can be made for quantiles (e.g. 25% and 75% as well as the usual 50%) in the response variable, which is valuable when the scatter is asymmetrical or non-Gaussian. This code is useful for, for example, estimating cluster ages when there is a wide spread in stellar ages at a chosen absorption, as a standard regression line does not give an effective measure of this relationship.

  1. A Locally Modal B-Spline Based Full-Vector Finite-Element Method with PML for Nonlinear and Lossy Plasmonic Waveguide

    NASA Astrophysics Data System (ADS)

    Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan

    2016-09-01

    In this paper, we develop, for the first time, a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs) to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches such as the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, wave propagation in various linear, nonlinear, lossless and lossy metal-insulator plasmonic structures is simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only computationally more accurate and efficient than conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.

  2. Weighted cubic and biharmonic splines

    NASA Astrophysics Data System (ADS)

    Kvasov, Boris; Kim, Tae-Wan

    2017-01-01

    In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with the successive over-relaxation method or with finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate the main features of this original approach.

  3. Uncertainty Quantification in Dynamic Simulations of Large-scale Power System Models using the High-Order Probabilistic Collocation Method on Sparse Grids

    SciTech Connect

    Lin, Guang; Elizondo, Marcelo A.; Lu, Shuai; Wan, Xiaoliang

    2014-01-01

    This paper proposes a probabilistic collocation method (PCM) to quantify the uncertainties in dynamic simulations of power systems. The approach was tested on a single-machine-infinite-bus system and on the over-15,000-bus Western Electricity Coordinating Council (WECC) system. Compared to the classic Monte Carlo (MC) method, the proposed PCM applies the Smolyak algorithm to reduce the number of simulations that have to be performed. Therefore, the computational cost can be greatly reduced using PCM. The algorithm and procedures are described in the paper. Comparisons were made with the MC method on the single-machine as well as the WECC system. The simulation results show that with PCM only a small number of sparse grid points need to be sampled, even when dealing with systems with a relatively large number of uncertain parameters. PCM is, therefore, computationally more efficient than the MC method.
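
    The collocation idea is easiest to see in its one-parameter form: the model is run only at a few Gauss-Hermite nodes and the output statistics are recovered from a weighted sum, here checked against brute-force Monte Carlo in a short Python sketch. The toy decay model and the assumed parameter distribution are illustrative, and the Smolyak sparse-grid construction needed for many uncertain parameters is not attempted.

      import numpy as np
      from numpy.polynomial.hermite_e import hermegauss

      rng = np.random.default_rng(6)

      def model(k, T=2.0):
          # Toy dynamic model standing in for an expensive transient simulation:
          # the state of x' = -k*x, x(0) = 1, evaluated at time T.
          return np.exp(-k * T)

      # Uncertain parameter: k ~ N(1.0, 0.15^2) (an assumed distribution).
      mu, sigma = 1.0, 0.15

      # Probabilistic collocation with a handful of Gauss-Hermite points.
      nodes, weights = hermegauss(5)            # probabilists' Hermite rule
      weights = weights / weights.sum()         # normalise to a probability rule
      vals = model(mu + sigma * nodes)
      pcm_mean = weights @ vals
      pcm_std = np.sqrt(weights @ (vals - pcm_mean) ** 2)

      # Brute-force Monte Carlo reference.
      mc_vals = model(rng.normal(mu, sigma, 200_000))
      print(f"PCM  (5 model runs)   : mean = {pcm_mean:.5f}, std = {pcm_std:.5f}")
      print(f"MC (200000 model runs): mean = {mc_vals.mean():.5f}, std = {mc_vals.std():.5f}")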

  4. Are Nonadjacent Collocations Processed Faster?

    ERIC Educational Resources Information Center

    Vilkaite, Laura

    2016-01-01

    Numerous studies have shown processing advantages for collocations, but they only investigated processing of adjacent collocations (e.g., "provide information"). However, in naturally occurring language, nonadjacent collocations ("provide" some of the "information") are equally, if not more frequent. This raises the…

  5. Rational-Spline Subroutines

    NASA Technical Reports Server (NTRS)

    Schiess, James R.; Kerr, Patricia A.; Smith, Olivia C.

    1988-01-01

    Smooth curves drawn among plotted data easily. Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to flexible, smooth representation of experimental data. "Tension" denotes mathematical analog of mechanical tension in spline or other mechanical curve-fitting tool, and "spline" denotes mathematical generalization of tool. Program differs from usual spline under tension, allows user to specify different values of tension between adjacent pairs of knots rather than constant tension over entire range of data. Subroutines use automatic adjustment scheme that varies tension parameter for each interval until maximum deviation of spline from line joining knots is less than or equal to amount specified by user. Procedure frees user from drudgery of adjusting individual tension parameters while still giving control over local behavior of spline.

  6. Fatigue Crack Detection at Gearbox Spline Component using Acoustic Emission Method

    DTIC Science & Technology

    2014-10-02

    analytical understanding of gearmesh stiffness change with the tooth crack (Chaari et al. 2009, Chen and Shao 2011). Debris monitoring does not require...that the AE method is not sensitive to gear wear while the method detects the tooth crack earlier than the vibration method. Typical parameters...11-22. Chaari, F., Fakhfakh, T. and Haddar, M. (2009). “Analytical Modelling of Spur Gear Tooth Crack and Influence on Gearmesh Stiffness

  7. A New Variational Method for Initial Value Problems, Using Piecewise Hermite Polynomial Spline Functions.

    DTIC Science & Technology

    1981-08-01

    R. Courant and D. Hilbert, "Methods of Mathematical Physics, Vol. I," Interscience Publishers Inc., 1953, p. 278. 3. J. J. Wu, "Solutions to Initial Value

  8. A Study on the Phenomenon of Collocations: Methodology of Teaching English and German Collocations to Russian Students

    ERIC Educational Resources Information Center

    Varlamova, Elena V.; Naciscione, Anita; Tulusina, Elena A.

    2016-01-01

    Relevance of the issue stated in the article is determined by the fact that there is a lack of research devoted to the methods of teaching English and German collocations. The aim of our work is to determine methods of teaching English and German collocations to Russian university students studying foreign languages through experimental testing.…

  9. Gear Spline Coupling Program

    SciTech Connect

    Guo, Yi; Errichello, Robert

    2013-08-29

    An analytical model is developed to evaluate the design of a spline coupling. For a given torque and shaft misalignment, the model calculates the number of teeth in contact, tooth loads, stiffnesses, stresses, and safety factors. The analytic model provides essential spline coupling design and modeling information and could be easily integrated into gearbox design and simulation tools.

  10. Biomechanical Analysis with Cubic Spline Functions

    ERIC Educational Resources Information Center

    McLaughlin, Thomas M.; And Others

    1977-01-01

    Results of experimentation suggest that the cubic spline is a convenient and consistent method for providing an accurate description of displacement-time data and for obtaining the corresponding time derivatives. (MJB)

  11. Air-surface exchange of Hg0 measured by collocated micrometeorological and enclosure methods - Part 1: Data comparability and method characteristics

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2014-09-01

    Reliable quantification of the air-biosphere exchange flux of elemental mercury vapor (Hg0) is crucial for understanding the global biogeochemical cycle of mercury. However, there has not been a standard analytical protocol for flux quantification, and little attention has been devoted to characterizing the temporal variability and comparability of fluxes measured by different methods. In this study, we deployed a collocated set of micro-meteorological (MM) and enclosure measurement systems to quantify Hg0 flux over bare soil and low standing crop in an agricultural field. The techniques include relaxed eddy accumulation (REA), modified Bowen-ratio (MBR), aerodynamic gradient (AGM) as well as dynamic flux chambers of traditional (TDFC) and novel (NDFC) designs. The five systems and their measured fluxes were cross-examined with respect to magnitude, temporal trend and sensitivity to environmental variables. Fluxes measured by the MM and DFC methods showed distinct temporal trends. The former exhibited a highly dynamic temporal variability while the latter had much more gradual temporal features. The diurnal characteristics reflected the difference in the fundamental processes driving the measurements. The correlations between NDFC and TDFC fluxes and between MBR and AGM fluxes were significant (R > 0.8, p < 0.05), but the correlation between DFC and MM instantaneous fluxes was weak to moderate (R = 0.1-0.5). Statistical analysis indicated that the medians of the turbulent fluxes estimated by the three independent MM-techniques were not significantly different. Cumulative flux measured by TDFC was considerably lower (42% of AGM and 31% of MBR fluxes) while those measured by NDFC, AGM and MBR were similar (< 10% difference). This indicates that the NDFC technique, which accounts for internal friction velocity, effectively bridged the gap in measured Hg0 flux compared to MM techniques. Cumulative flux measured by REA was ~60% higher than the gradient-based fluxes. Environmental

  12. Interactive natural image segmentation via spline regression.

    PubMed

    Xiang, Shiming; Nie, Feiping; Zhang, Chunxia; Zhang, Changshui

    2009-07-01

    This paper presents an interactive algorithm for segmentation of natural images. The task is formulated as a problem of spline regression, in which the spline is derived in Sobolev space and has a form of a combination of linear and Green's functions. Besides its nonlinear representation capability, one advantage of this spline in usage is that, once it has been constructed, no parameters need to be tuned to data. We define this spline on the user specified foreground and background pixels, and solve its parameters (the combination coefficients of functions) from a group of linear equations. To speed up spline construction, K-means clustering algorithm is employed to cluster the user specified pixels. By taking the cluster centers as representatives, this spline can be easily constructed. The foreground object is finally cut out from its background via spline interpolation. The computational complexity of the proposed algorithm is linear in the number of the pixels to be segmented. Experiments on diverse natural images, with comparison to existing algorithms, illustrate the validity of our method.
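
    A small Python sketch of the main ingredients described above: a spline score built from a linear part plus Green's functions, fitted to user-marked pixels and thresholded over the image grid. The thin-plate kernel r^2 log r, the toy seed pixels, and the omission of the K-means acceleration step are simplifications for illustration rather than the authors' exact formulation.

      import numpy as np

      def tps_kernel(r):
          # Green's function of the biharmonic operator in 2-D: U(r) = r^2 log r.
          out = np.zeros_like(r)
          nz = r > 0
          out[nz] = r[nz] ** 2 * np.log(r[nz])
          return out

      def fit_spline_score(pts, labels, lam=1e-3):
          # Fit a thin-plate-type spline (affine part + Green's functions) to the
          # labelled pixels; return a function scoring arbitrary pixel locations.
          n = len(pts)
          K = tps_kernel(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1))
          P = np.hstack([np.ones((n, 1)), pts])
          A = np.block([[K + lam * np.eye(n), P],
                        [P.T, np.zeros((3, 3))]])
          coef = np.linalg.solve(A, np.concatenate([labels, np.zeros(3)]))
          w, a = coef[:n], coef[n:]

          def score(q):
              Kq = tps_kernel(np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1))
              return Kq @ w + np.hstack([np.ones((len(q), 1)), q]) @ a

          return score

      # Toy example: a few user-marked foreground (+1) and background (-1) pixels.
      fg = np.array([[30.0, 30.0], [35.0, 28.0], [32.0, 36.0]])
      bg = np.array([[5.0, 5.0], [60.0, 10.0], [8.0, 58.0], [60.0, 60.0]])
      pts = np.vstack([fg, bg])
      labels = np.array([1.0] * len(fg) + [-1.0] * len(bg))
      score = fit_spline_score(pts, labels)

      # Score every pixel of a 64x64 grid; a positive score means foreground.
      yy, xx = np.mgrid[0:64, 0:64]
      grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
      mask = (score(grid) > 0).reshape(64, 64)
      print("foreground pixels:", int(mask.sum()), "of", mask.size)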

  13. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    SciTech Connect

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

    and delete redundant knots based on the estimation of a weight associated with each basis vector. The overall algorithm iterates by inserting and deleting knots and ends up with far fewer knots than pixels to represent the object, while the estimation error is within a certain tolerance. Thus, an efficient reconstruction can be obtained which significantly reduces the complexity of the problem. In this thesis, the adaptive B-Spline method is applied to a cross-well tomography problem. The problem comes from the application of finding underground pollution plumes. The cross-well tomography method is applied by placing arrays of electromagnetic transmitters and receivers along the boundaries of the region of interest. By utilizing the inverse scattering method, a linear inverse model is set up and furthermore the adaptive B-Spline method described above is applied. The simulation results show that the B-Spline method reduces the dimensional complexity by 90%, compared with that of a pixel-based method, and decreases time complexity by 50% without significantly degrading the estimation.

  14. Twelfth degree spline with application to quadrature.

    PubMed

    Mohammed, P O; Hamasalh, F K

    2016-01-01

    In this paper existence and uniqueness of twelfth degree spline is proved with application to quadrature. This formula is in the class of splines of degree 12 and continuity order [Formula: see text] that matches the derivatives up to order 6 at the knots of a uniform partition. Some mistakes in the literature are pointed out and corrected. Numerical examples are given to illustrate the applicability and efficiency of the new method.

  15. Lagrange interpolation and modified cubic B-spline differential quadrature methods for solving hyperbolic partial differential equations with Dirichlet and Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Jiwari, Ram

    2015-08-01

    In this article, the author proposed two differential quadrature methods to find the approximate solution of one- and two-dimensional hyperbolic partial differential equations with Dirichlet and Neumann boundary conditions. The methods are based on Lagrange interpolation and modified cubic B-splines, respectively. The proposed methods reduce the hyperbolic problem to a system of second order ordinary differential equations in the time variable. The obtained system is then changed into a system of first order ordinary differential equations, and finally the SSP-RK3 scheme is used to solve it. Well-known hyperbolic equations such as the telegraph, Klein-Gordon, sine-Gordon, dissipative nonlinear wave, and Van der Pol type nonlinear wave equations are solved to check the accuracy and efficiency of the proposed methods. The numerical results are reported in terms of L∞, RMS and L2 errors.
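
    The Lagrange-interpolation route to the weighting matrices is compact enough to sketch directly in Python. The formula below is Shu's standard explicit expression for the first-order differential quadrature weights, with the second-order matrix obtained as the square of the first; the node set and the test function used for the accuracy check are assumptions for illustration.

      import numpy as np

      def dq_weight_matrices(x):
          # First- and second-order differential quadrature weight matrices from
          # Lagrange interpolation on the nodes x (Shu's explicit formulas).
          diff = x[:, None] - x[None, :]
          np.fill_diagonal(diff, 1.0)             # placeholder to avoid 0-division
          M = diff.prod(axis=1)                   # M(x_i) = prod_{k != i} (x_i - x_k)
          A = M[:, None] / (diff * M[None, :])    # a_ij for i != j
          np.fill_diagonal(A, 0.0)
          np.fill_diagonal(A, -A.sum(axis=1))     # a_ii = -sum_{j != i} a_ij
          return A, A @ A                         # second order as the square

      # Accuracy check on a smooth function at clustered (Chebyshev-like) nodes.
      x = 0.5 * (1.0 - np.cos(np.pi * np.linspace(0.0, 1.0, 21)))
      f = np.sin(np.pi * x)
      A, B = dq_weight_matrices(x)
      print("max error in f' :", np.abs(A @ f - np.pi * np.cos(np.pi * x)).max())
      print("max error in f'':", np.abs(B @ f + np.pi ** 2 * np.sin(np.pi * x)).max())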

  16. RATIONAL SPLINE SUBROUTINES

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1994-01-01

    Scientific data often contains random errors that make plotting and curve-fitting difficult. The Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to a flexible, smooth representation of experimental data. The user sets the conditions for each consecutive pair of knots (knots are user-defined divisions in the data set): to apply no tension; to apply fixed tension; or to determine tension with a tension adjustment algorithm. The user also selects the number of knots, the knot abscissas, and the allowed maximum deviations from line segments. The selection of these quantities depends on the actual data and on the requirements of a particular application. This program differs from the usual spline under tension in that it allows the user to specify different tension values between each adjacent pair of knots rather than a constant tension over the entire data range. The subroutines use an automatic adjustment scheme that varies the tension parameter for each interval until the maximum deviation of the spline from the line joining the knots is less than or equal to a user-specified amount. This procedure frees the user from the drudgery of adjusting individual tension parameters while still giving control over the local behavior of the spline. The Rational Spline program was written completely in FORTRAN for implementation on a CYBER 850 operating under NOS. It has a central memory requirement of approximately 1500 words. The program was released in 1988.

  17. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    SciTech Connect

    M Ali, M. K. E-mail: eutoco@gmail.com; Ruslan, M. H. E-mail: eutoco@gmail.com; Muthuvalu, M. S. E-mail: jumat@ums.edu.my; Wong, J. E-mail: jumat@ums.edu.my; Sulaiman, J. E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md. E-mail: hafidzruslan@eng.ukm.my

    2014-06-19

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions in Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally the drying-rate curves need more smoothing than the moisture-content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives. The analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, and it permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models. The models were fitted to the raw data and evaluated using the coefficient of determination (R²) and the root mean square error (RMSE). The results showed that the Two Term model best describes the drying behaviour. In addition, smoothing the drying rate using the CS proves to be an effective method, giving good estimates of the moisture-time curves as well as of the missing moisture-content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  18. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions in Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally the drying-rate curves need more smoothing than the moisture-content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives. The analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, and it permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models. The models were fitted to the raw data and evaluated using the coefficient of determination (R²) and the root mean square error (RMSE). The results showed that the Two Term model best describes the drying behaviour. In addition, smoothing the drying rate using the CS proves to be an effective method, giving good estimates of the moisture-time curves as well as of the missing moisture-content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
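
    The smooth-then-differentiate step described above can be sketched in a few lines of Python. The synthetic moisture data, the noise level, and the smoothing factor below are assumed for illustration, and SciPy's smoothing spline stands in for the cubic spline regression of the paper.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(2)

      # Hypothetical moisture-content observations (%) over four days of drying,
      # roughly mimicking the reported 93.4% -> 8.2% reduction, plus noise.
      t_hours = np.linspace(0.0, 96.0, 25)
      moisture = 8.2 + (93.4 - 8.2) * np.exp(-t_hours / 22.0)
      moisture = moisture + rng.normal(0.0, 0.8, t_hours.size)

      # Cubic spline regression of the moisture-time curve; the smoothing factor
      # s is an assumed value balancing fidelity and smoothness.
      cs = UnivariateSpline(t_hours, moisture, k=3, s=t_hours.size * 0.8 ** 2)

      # Analytical differentiation of the spline gives the instantaneous rate.
      rate = cs.derivative()
      for tt in (0.0, 24.0, 48.0, 72.0, 96.0):
          print(f"t = {tt:5.1f} h: moisture = {float(cs(tt)):6.2f} %, "
                f"drying rate = {float(rate(tt)):7.3f} %/h")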

  19. Mercury vapor air-surface exchange measured by collocated micrometeorological and enclosure methods - Part II: Bias and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2015-05-01

    Dynamic flux chambers (DFCs) and micrometeorological (MM) methods are extensively deployed for gauging air-surface Hg0 gas exchange. However, a systematic evaluation of the precision of the contemporary Hg0 flux quantification methods is not available. In this study, the uncertainty in Hg0 flux measured by the relaxed eddy accumulation (REA) method, the aerodynamic gradient method (AGM), the modified Bowen ratio (MBR) method, as well as DFC of traditional (TDFC) and novel (NDFC) designs, is assessed using a robust data set from two field intercomparison campaigns. The absolute precision in Hg0 concentration difference (ΔC) measurements is estimated at 0.064 ng m⁻³ for the gradient-based MBR and AGM systems. For the REA system, the parameter is Hg0 concentration (C) dependent at 0.069 + 0.022C. During the campaigns, 57 and 62% of the individual vertical gradient measurements are found to be significantly different from 0, while for the REA technique, the percentage of significant observations is lower. For the chambers, non-significant fluxes are confined to a few night-time periods with varying ambient Hg0 concentrations. Relative bias for DFC-derived fluxes is estimated to be ~ ±10%, and ~ 85% of the flux bias is within ±2 ng m⁻² h⁻¹ in absolute terms. The DFC flux bias follows a diurnal cycle, which is largely affected by the forced temperature and irradiation bias in the chambers. Due to contrasting prevailing micrometeorological conditions, the relative uncertainty (median) in turbulent exchange parameters differs by nearly a factor of 2 between the campaigns, while that in ΔC measurement is fairly consistent. The estimated flux uncertainties for the triad of MM techniques are 16-27, 12-23 and 19-31% (interquartile range) for the AGM, MBR and REA methods, respectively. This study indicates that flux-gradient-based techniques (MBR and AGM) are preferable to REA in quantifying Hg0 flux over ecosystems with low vegetation height. A limitation of all Hg0 flux

  20. Mercury vapor air-surface exchange measured by collocated micrometeorological and enclosure methods - Part II: Bias and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2015-02-01

    Dynamic flux chambers (DFCs) and micrometeorological (MM) methods are extensively deployed for gauging air-surface Hg0 gas exchange. However, a systematic evaluation of the precision of the contemporary Hg0 flux quantification methods is not available. In this study, the uncertainty in Hg0 flux measured by the relaxed eddy accumulation (REA) method, the aerodynamic gradient method (AGM), the modified Bowen-ratio (MBR) method, as well as DFC of traditional (TDFC) and novel (NDFC) designs is assessed using a robust data set from two field intercomparison campaigns. The absolute precision in Hg0 concentration difference (ΔC) measurements is estimated at 0.064 ng m⁻³ for the gradient-based MBR and AGM systems. For the REA system, the parameter is Hg0 concentration (C) dependent at 0.069 + 0.022C. During the campaigns, 57 and 62% of the individual vertical gradient measurements were found to be significantly different from zero, while for the REA technique the percentage of significant observations was lower. For the chambers, non-significant fluxes are confined to a few nighttime periods with varying ambient Hg0 concentration. Relative bias for DFC-derived fluxes is estimated to be ~ ±10%, and ~ 85% of the flux bias is within ±2 ng m⁻² h⁻¹ in absolute terms. The DFC flux bias follows a diurnal cycle, which is largely dictated by temperature controls on the enclosed volume. Due to contrasting prevailing micrometeorological conditions, the relative uncertainty (median) in turbulent exchange parameters differs by nearly a factor of two between the campaigns, while that in ΔC measurements is fairly stable. The estimated flux uncertainties for the triad of MM techniques are 16-27, 12-23 and 19-31% (interquartile range) for the AGM, MBR and REA methods, respectively. This study indicates that flux-gradient based techniques (MBR and AGM) are preferable to REA in quantifying Hg0 flux over ecosystems with low vegetation height. A limitation of all Hg0 flux measurement systems investigated

  1. Smoothing spline ANOVA decomposition of arbitrary splines: an application to eye movements in reading.

    PubMed

    Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias

    2015-01-01

    The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared using an artificial example and analyses of fixation durations during reading.

  2. Complex Planar Splines.

    DTIC Science & Technology

    1981-05-01

    try to define a complex planar spline by holomorphic elements like polynomials, then by the well known identity theorem (e.g. Diederich-Remmert [9, p...R. Remmert: Funktionentheorie I, Springer, Berlin, Heidelberg, New York, 1972, 246 p. 10. O. Lehto - K.I. Virtanen: Quasikonforme Abbildungen, Springer

  3. Smoothing spline primordial power spectrum reconstruction

    SciTech Connect

    Sealfon, Carolyn; Verde, Licia; Jimenez, Raul

    2005-11-15

    We reconstruct the shape of the primordial power spectrum (PPS) using a smoothing spline. Our adapted smoothing spline technique provides a complementary method to existing efforts to search for smooth features in the PPS, such as a running spectral index. With this technique we find no significant indication with Wilkinson Microwave Anisotropy Probe first-year data that the PPS deviates from a Harrison-Zeldovich spectrum and no evidence for loss of power on large scales. We also examine the effect on the cosmological parameters of the additional PPS freedom. Smooth variations in the PPS are not significantly degenerate with other cosmological parameters, but the spline reconstruction greatly increases the errors on the optical depth and baryon fraction.

  4. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
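
    The least-squares spline fit at the heart of both algorithms is readily reproduced with SciPy. The data, the uniform interior knots, and the settings below are assumptions for illustration and are not the original explicit-variable or parametric-variable subroutines.

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      rng = np.random.default_rng(3)

      # Noisy single-valued data (the "explicit variable" situation).
      x = np.linspace(0.0, 10.0, 80)
      y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

      # Least-squares cubic spline fit with user-chosen interior knots; the fit
      # has continuous first and second derivatives by construction.
      interior_knots = np.linspace(1.0, 9.0, 7)
      spl = LSQUnivariateSpline(x, y, interior_knots, k=3)

      print("residual sum of squares :", float(spl.get_residual()))
      print("first derivative at x=5 :", float(spl.derivative(1)(5.0)))
      print("second derivative at x=5:", float(spl.derivative(2)(5.0)))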

  5. General spline filters for discontinuous Galerkin solutions

    PubMed Central

    Peters, Jörg

    2015-01-01

    The discontinuous Galerkin (dG) method outputs a sequence of polynomial pieces. Post-processing the sequence by Smoothness-Increasing Accuracy-Conserving (SIAC) convolution not only increases the smoothness of the sequence but can also improve its accuracy and yield superconvergence. SIAC convolution is considered optimal if the SIAC kernels, in the form of a linear combination of B-splines of degree d, reproduce polynomials of degree 2d. This paper derives simple formulas for computing the optimal SIAC spline coefficients for the general case including non-uniform knots. PMID:26594090

  6. General spline filters for discontinuous Galerkin solutions.

    PubMed

    Peters, Jörg

    2015-09-01

    The discontinuous Galerkin (dG) method outputs a sequence of polynomial pieces. Post-processing the sequence by Smoothness-Increasing Accuracy-Conserving (SIAC) convolution not only increases the smoothness of the sequence but can also improve its accuracy and yield superconvergence. SIAC convolution is considered optimal if the SIAC kernels, in the form of a linear combination of B-splines of degree d, reproduce polynomials of degree 2d. This paper derives simple formulas for computing the optimal SIAC spline coefficients for the general case including non-uniform knots.

  7. Clinical Trials: Spline Modeling is Wonderful for Nonlinear Effects.

    PubMed

    Cleophas, Ton J

    2016-01-01

    Traditionally, nonlinear relationships like the smooth shapes of airplanes, boats, and motor cars were constructed from scale models using stretched thin wooden strips, otherwise called splines. In the past decades, mechanical spline methods have been replaced with their mathematical counterparts. The objective of the study was to examine whether spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial, and whether it can detect patterns in a trial that are relevant but go unobserved with simpler regression models. A clinical trial assessing the effect of quantity of care on quality of care was used as an example. Spline curves consisting of 4 or 5 cubic functions were applied. SPSS statistical software was used for analysis. The spline curves of our data outperformed the traditional curves because (1) unlike the traditional curves, they did not miss the top quality of care given in either subgroup, (2) unlike the traditional curves, they, rightly, did not produce sinusoidal patterns, and (3) unlike the traditional curves, they provided a virtually 100% match of the original values. We conclude that (1) spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial; (2) spline modeling can detect patterns in a trial that are relevant but may go unobserved with simpler regression models; (3) in clinical research, spline modeling has great potential given the presence of many nonlinear effects in this field of research and given its sophisticated mathematical refinement to fit nonlinear effects in a highly accurate way; and (4) spline modeling should help improve predictions from clinical research for the benefit of health decisions and health care. We hope that this brief introduction to spline modeling will stimulate clinical investigators to start using this wonderful method.

  8. Spline screw payload fastening system

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the

  9. Mr. Stockdale's Dictionary of Collocations.

    ERIC Educational Resources Information Center

    Stockdale, Joseph Gagen, III

    This dictionary of collocations was compiled by an English-as-a-Second-Language (ESL) teacher in Saudi Arabia who teaches adult, native speakers of Arabic. The dictionary is practical in teaching English because it helps to focus on everyday events and situations. The dictionary works as follows: the teacher looks up a word, such as…

  10. Interlanguage Development and Collocational Clash

    ERIC Educational Resources Information Center

    Shahheidaripour, Gholamabbass

    2000-01-01

    Background: Persian English learners committed mistakes and errors which were due to insufficient knowledge of different senses of the words and collocational structures they formed. Purpose: The study reported here was conducted for a thesis submitted in partial fulfillment of the requirements for The Master of Arts degree, School of Graduate…

  11. K-matrix method with B-splines: σnell, βn and resonances in He photoionization below N = 4 threshold

    NASA Astrophysics Data System (ADS)

    Argenti, Luca; Moccia, Roberto

    2006-06-01

    A B-spline based K-matrix method has been implemented to investigate the photoionization of atoms with simple valence shells. With a particular choice of knots, the present method is able to reproduce all the essential features of the continuum wavefunctions including up to 20-25 resonant multiplets below each ionization threshold. A detailed study of the interval between the N = 3 and N = 4 thresholds where the state labelled [031]+5 (parabolic quantum numbers: [N1N2m]An), the first of a series converging to the higher N = 5 threshold, is known to fall, is presented. According to propensity rules this state cannot decay directly in the underlying continuum, but interacts strongly with the [021]+n series and appreciably with the [030]-n series. As a result all parameters of the two series are strongly modulated and, between 75.5 eV and 75.57 eV, partial cross section and asymmetry parameter patterns change dramatically.

  12. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  13. Single authentication: exposing weighted splining artifacts

    NASA Astrophysics Data System (ADS)

    Ciptasari, Rimba W.

    2016-05-01

    A common form of manipulation is to combine parts of an image fragment with another, different image, either to remove or to blend objects. Inspired by this situation, we propose a single authentication technique for detecting traces of the weighted average splining technique. In this paper, we assume that an image composite can be created by joining two images so that the edge between them is imperceptible. The weighted average technique is constructed from overlapped images so that it is possible to compute the gray level value of points within a transition zone. This approach works on the assumption that, although the splining process leaves the transition zone smooth, it may nevertheless alter the underlying statistics of the image. In other words, it introduces a specific correlation into the image. The proposed idea for identifying these correlations is to generate an original model of both weighting functions, the left and right functions, as references to their synthetic models. The overall process of the authentication is divided into two main stages, which are pixel predictive coding and weighting function estimation. In the former stage, the set of intensity pairs {Il, Ir} is computed by exploiting a pixel extrapolation technique. The least-squares estimation method is then employed to yield the weighted coefficients. We show the efficacy of the proposed scheme in revealing splining artifacts. We believe that this is the first work that exposes image splining artifacts as evidence of digital tampering.
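
    A minimal Python sketch of the weighted average splining operation itself, i.e. the manipulation whose statistical trace the method aims to expose: two sources are blended across a transition zone with complementary weighting functions. The toy gray-level rows, the overlap width, and the linear weights are assumptions made for illustration.

      import numpy as np

      # Two overlapping image strips (toy one-dimensional gray-level rows).
      width, overlap = 64, 16
      left = np.full(width, 120.0) + 5.0 * np.sin(np.arange(width) / 4.0)
      right = np.full(width, 180.0) + 5.0 * np.cos(np.arange(width) / 5.0)

      # Complementary weighting functions over the transition zone: w_l falls
      # from 1 to 0 and w_r = 1 - w_l, so the splined value is w_l*I_l + w_r*I_r.
      w_l = np.linspace(1.0, 0.0, overlap)
      w_r = 1.0 - w_l

      composite = np.concatenate([
          left[:width - overlap],                                   # pure left
          w_l * left[width - overlap:] + w_r * right[:overlap],     # transition
          right[overlap:],                                          # pure right
      ])

      # The blend is smooth in intensity, but each transition-zone pixel is an
      # exact linear combination of the two sources, which is the correlation
      # the authentication scheme looks for.
      print("composite length:", composite.size)
      print("transition-zone gray levels:", np.round(composite[width - overlap:width], 1))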

  14. Mathematical research on spline functions

    NASA Technical Reports Server (NTRS)

    Horner, J. M.

    1973-01-01

    One approach in spline functions is to grossly estimate the integrand in J and exactly solve the resulting problem. If the integrand in J is approximated by (Y″)², the resulting problem lends itself to exact solution, the familiar cubic spline. Another approach is to investigate various approximations to the integrand in J and attempt to solve the resulting problems. The results are described.

  15. Theory, computation, and application of exponential splines

    NASA Technical Reports Server (NTRS)

    Mccartin, B. J.

    1981-01-01

    A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape preserving approximant is developed. A sequence of selected curve-fitting examples is presented, clearly demonstrating the advantages of exponential splines over cubic splines.

  16. Fitting multidimensional splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  17. Analysis of crustal structure of Venus utilizing residual Line-of-Sight (LOS) gravity acceleration and surface topography data. A trial of global modeling of Venus gravity field using harmonic spline method

    NASA Technical Reports Server (NTRS)

    Fang, Ming; Bowin, Carl

    1992-01-01

    To construct Venus' gravity disturbance field (or gravity anomaly) from the spacecraft-observer line-of-sight (LOS) acceleration perturbation data, both a global and a local approach can be used. The global approach (e.g., spherical harmonic coefficients) and the local approach (e.g., the integral operator method) based on geodetic techniques are generally not the same, so they must be used separately for mapping long-wavelength and short-wavelength features. Harmonic spline, as an interpolation and extrapolation technique, is intrinsically suited to both global and local mapping of a potential field. Theoretically, it preserves the information of the potential field up to the bound set by the sampling theorem, regardless of whether the mapping is global or local, and is not subject to truncation errors. The improvement of the harmonic spline methodology for global mapping is reported. New basis functions, a singular value decomposition (SVD) based modification to Parker & Shure's numerical procedure, and preliminary results are presented.

  18. Triple collocation: beyond three estimates and separation of structural/non-structural errors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the “multiple” collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is s...

  19. C1 Hermite shape preserving polynomial splines in R3

    NASA Astrophysics Data System (ADS)

    Gabrielides, Nikolaos C.

    2012-06-01

    The C2 variable-degree splines [1-3] have been proven to be an efficient tool for solving the curve shape-preserving interpolation problem in two and three dimensions. Based on this representation, the current paper proposes a Hermite interpolation scheme to construct C1 shape-preserving splines of variable degree. After this, a slight modification of the method leads to a C1 shape-preserving Hermite cubic spline. Both methods can easily be developed within a CAD system, since they compute the B-spline control polygon directly (without iterations). They have been implemented and tested within the DNV Software CAD/CAE system GeniE.

  20. Quadratic spline subroutine package

    USGS Publications Warehouse

    Rasmussen, Lowell A.

    1982-01-01

    A continuous piecewise quadratic function with a continuous first derivative is devised for approximating a single-valued, but unknown, function represented by a set of discrete points. The quadratic is proposed as a treatment intermediate between using the angular (but reliable, easily constructed and manipulated) piecewise linear function and using the smoother (but occasionally erratic) cubic spline. Neither iteration nor the solution of a system of simultaneous equations is necessary for determining the coefficients. Several properties of the quadratic function are given. A set of five short FORTRAN subroutines is provided for generating the coefficients (QSC), finding function value and derivatives (QSY), integrating (QSI), finding extrema (QSE), and computing arc length and the curvature-squared integral (QSK). (USGS)
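    To make the "no simultaneous equations" claim concrete, the following minimal Python sketch builds a C1 piecewise-quadratic interpolant by propagating slopes sequentially across the knots. It illustrates the idea only; it is not the USGS QSC/QSY routines, whose slope-selection rules differ (the naive sequential rule used here can oscillate on noisy data).

```python
# Minimal sketch of a C1 piecewise-quadratic interpolant built sequentially,
# without solving a linear system. Illustration of the idea only, not the
# USGS QSC/QSY subroutines (whose slope selection differs).
import numpy as np

def quadratic_spline_coeffs(x, y, m0=None):
    """Per-interval coefficients (y_i, m_i, c_i) so that on [x_i, x_{i+1}]:
    q(t) = y_i + m_i*(t - x_i) + c_i*(t - x_i)**2, with q and q' continuous."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    m = np.empty(len(x))
    m[0] = (y[1] - y[0]) / h[0] if m0 is None else m0   # starting slope
    c = np.empty(len(h))
    for i in range(len(h)):
        c[i] = (y[i + 1] - y[i] - m[i] * h[i]) / h[i] ** 2   # match y_{i+1}
        m[i + 1] = m[i] + 2.0 * c[i] * h[i]                  # match slope
    return y[:-1], m[:-1], c

def quadratic_spline_eval(x, coeffs, t):
    y0, m, c = coeffs
    i = np.clip(np.searchsorted(x, t, side="right") - 1, 0, len(c) - 1)
    d = np.asarray(t) - x[i]
    return y0[i] + m[i] * d + c[i] * d * d

x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
y = np.array([0.0, 1.0, 0.5, 2.0, 1.5])
coeffs = quadratic_spline_coeffs(x, y)
print(quadratic_spline_eval(x, coeffs, np.array([0.5, 2.0, 4.5])))
```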

  1. Spline screw autochanger

    NASA Astrophysics Data System (ADS)

    Vranish, John M.

    1993-06-01

    A captured nut member is located within a tool interface assembly and is actuated by a spline screw member driven by a robot end effector. The nut member lowers and rises depending upon the directional rotation of the coupling assembly. The captured nut member further includes two winged segments which project outwardly in diametrically opposite directions so as to engage and disengage a clamping surface in the form of a chamfered notch respectively provided on the upper surface of a pair of parallel, forwardly extending arm members of a bifurcated tool stowage holster which is adapted to hold and store a robotic tool, including its end effector interface, when not in use. A forward and backward motion of the robot end effector operates to insert and remove the tool from the holster.

  2. Parametres de delimitation des collocations du francais courant (Parameters for Delimiting Collocations in Contemporary French).

    ERIC Educational Resources Information Center

    Bosse-Andrieu, J.; Mareschal, G.

    1998-01-01

    Discusses the definition of collocation, demonstrates that associative word combinations do form a continuum, and proposes some parameters to help delimit the scope of collocations in everyday contemporary French. (Author/VWL)

  3. Self-Aligning, Spline-Locking Fastener

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1992-01-01

    Self-aligning, spline-locking fastener is two-part mechanism operated by robot, using one tool and simple movements. Spline nut on springloaded screw passes through mating spline fitting. Operator turns screw until vertical driving surfaces on spline nut rest against corresponding surfaces of spline fitting. Nut rides upward, drawing pieces together. Used to join two parts of structure, to couple vehicles, or to mount payload in vehicle.

  4. Evaluation of assumptions in soil moisture triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...

  5. Density Deconvolution With EPI Splines

    DTIC Science & Technology

    2015-09-01

    to, three main categories: signal processing, image processing, and probability density estimation. Epi-spline technology has also been used in...refers to the unknown nature of the input signal. One major application of blind deconvolution algorithms is in image processing. In this field, an...literature, historical medical data, and a scenario in uncertainty quantification in fluid dynamics. Results show that deconvolution via epi-splines is

  6. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type of global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for a fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using a mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, enabled by assigning each coefficient its own penalty λ. We demonstrate the proposed approach by both simulation examples and a real data application.
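    The following is a hedged sketch of the basic P-spline ingredient referred to above: a B-spline design matrix combined with a ridge-type penalty on second-order coefficient differences. It illustrates the general Eilers-Marx-style idea rather than the paper's functional-coefficient estimator, and the smoothing parameter is simply fixed instead of being chosen by MCV, GCV, EBBS or REML.

```python
# Hedged sketch of a P-spline fit: B-spline basis + ridge-type penalty on
# second-order coefficient differences. General idea only, not the paper's
# functional-coefficient estimator; lambda is fixed here rather than selected.
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_basis, degree=3, lo=0.0, hi=1.0):
    """Cubic B-spline design matrix with equally spaced interior knots."""
    n_interior = n_basis - degree - 1
    interior = np.linspace(lo, hi, n_interior + 2)[1:-1]
    t = np.concatenate(([lo] * (degree + 1), interior, [hi] * (degree + 1)))
    return np.column_stack(
        [BSpline(t, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)]
    )

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

n_basis, lam = 20, 1.0
B = bspline_design(x, n_basis)
D = np.diff(np.eye(n_basis), n=2, axis=0)      # second-order difference matrix
beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ beta                                  # smoothed curve at the data x
print(np.round(beta[:5], 3))
```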

  7. Wavelets based on Hermite cubic splines

    NASA Astrophysics Data System (ADS)

    Cvejnová, Daniela; Černá, Dana; Finěk, Václav

    2016-06-01

    In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines were proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse in the sense that the number of nonzero elements in any column is bounded by a constant. Then, a matrix-vector multiplication in adaptive wavelet methods can be performed exactly with linear complexity for any second-order differential equation with constant coefficients. In this contribution, we shortly review these constructions and propose a new wavelet which leads to improved Riesz constants. The wavelets have four vanishing moments.

  8. Collocation points distributions for optimal spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    Fumenti, Federico; Circi, Christian; Romagnoli, Daniele

    2013-03-01

    The method of direct collocation with nonlinear programming (DCNLP) is a powerful tool for solving optimal control problems (OCP). In this method the solution time history is approximated with piecewise polynomials, which are constructed using interpolation points derived from the Jacobi polynomials. Within the Jacobi polynomial family, Legendre and Chebyshev polynomials are the most used, but there is no evidence that they offer the best performance with respect to other family members. By solving different OCPs with interpolation points taken both within and outside the Jacobi family, the behavior of the Jacobi polynomials in optimization problems is discussed. This paper focuses on spacecraft trajectory optimization problems. In particular, orbit transfers, interplanetary transfers and station keeping are considered.
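    As a small, hedged illustration of the node families discussed above, the sketch below generates two common Jacobi-family point sets on [-1, 1] with NumPy: Legendre-Gauss nodes and Chebyshev-Gauss-Lobatto nodes. It is not the paper's DCNLP implementation, and the node count is an arbitrary assumption.

```python
# Hedged sketch: two common Jacobi-family node sets used as collocation /
# interpolation points on [-1, 1]. Not the paper's DCNLP implementation.
import numpy as np

def legendre_gauss_nodes(n):
    """Legendre-Gauss nodes and weights (roots of P_n)."""
    return np.polynomial.legendre.leggauss(n)

def chebyshev_gauss_lobatto_nodes(n):
    """Chebyshev-Gauss-Lobatto nodes: extrema of T_{n-1}, endpoints included."""
    return np.cos(np.pi * np.arange(n) / (n - 1))[::-1]

nodes_lg, weights_lg = legendre_gauss_nodes(8)
nodes_cgl = chebyshev_gauss_lobatto_nodes(8)
print(np.round(nodes_lg, 4))
print(np.round(nodes_cgl, 4))
```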

  9. Semi-supervised classification via local spline regression.

    PubMed

    Xiang, Shiming; Nie, Feiping; Zhang, Changshui

    2010-11-01

    This paper presents local spline regression for semi-supervised classification. The core idea in our approach is to introduce splines developed in Sobolev space to map the data points directly to class labels. The spline is composed of polynomials and Green's functions. It is smooth, nonlinear, and able to interpolate the scattered data points with high accuracy. Specifically, in each neighborhood, an optimal spline is estimated via regularized least squares regression. With this spline, each of the neighboring data points is mapped to a class label. Then, the regularized loss is evaluated and further formulated in terms of the class label vector. Finally, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on the labeled and unlabeled data. To achieve the goal of semi-supervised classification, an objective function is constructed by combining the global loss of the local spline regressions with the squared errors of the class labels of the labeled data. In this way, a transductive classification algorithm is developed in which a globally optimal classification can finally be obtained. In the semi-supervised learning setting, the proposed algorithm is analyzed and placed within the Laplacian regularization framework. Comparative classification experiments on many public data sets and applications to interactive image segmentation and image matting illustrate the validity of our method.

  10. Should We Teach EFL Students Collocations?

    ERIC Educational Resources Information Center

    Bahns, Jens; Eldaw, Moira

    1993-01-01

    German advanced English-as-a-foreign-language (EFL) students' productive knowledge of English collocations consisting of a verb and a noun was investigated in a translation task and a cloze task. Results suggest that EFL students should concentrate on collocations that cannot readily be paraphrased. The tasks are appended. (32 references)…

  11. Supporting Collocation Learning with a Digital Library

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Franken, Margaret; Witten, Ian H.

    2010-01-01

    Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…

  12. A Localized Tau Method PDE Solver

    NASA Technical Reports Server (NTRS)

    Cottam, Russell

    2002-01-01

    In this paper we present a new form of the collocation method that allows one to find very accurate solutions to time-marching problems without the unwelcome appearance of Gibbs phenomenon oscillations. The basic method is applicable to any partial differential equation whose solution is a continuous, albeit possibly rapidly varying, function. Discontinuous functions are dealt with by replacing the function in a small neighborhood of the discontinuity with a spline that smoothly connects the function segments on either side of the discontinuity. This will be demonstrated when the solution to the inviscid Burgers equation is discussed.

  13. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    SciTech Connect

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  14. Asymmetric spline surfaces - Characteristics and applications. [in high quality optical systems design

    NASA Technical Reports Server (NTRS)

    Stacy, J. E.

    1984-01-01

    Asymmetric spline surfaces appear useful for the design of high-quality general optical systems (systems without symmetries). A spline influence function, defined as the actual surface resulting from a simple perturbation of the spline definition array, shows that a subarea is independent of others four or more points away. Optimization methods presented in this paper are used to vary a reflective spline surface near the focal plane of a decentered Schmidt-Cassegrain to reduce rms spot radii by a factor of 3 across the field.

  15. Entropy Stable Spectral Collocation Schemes for the Navier-Stokes Equations: Discontinuous Interfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.

    2013-01-01

    Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.

  16. Solution of two-dimensional problems of the statics of flexible shallow shells by spline approximation

    SciTech Connect

    Grigorenko, Ya.M.; Kryukov, N.N.; Ivanova, Yu.I.

    1995-10-01

    Spline functions have come into increasingly wide use recently in the solution of boundary-value problems of the theory of elasticity of plates and shells. This development stems from the advantages offered by spline approximations compared to other methods. Among the most important advantages are the following: (1) the behavior of the spline in the neighborhood of a point has no effect on the behavior of the spline as a whole; (2) spline interpolation converges well compared to polynomial interpolation; (3) algorithms for spline construction are simple and convenient to use. The use of spline functions to solve linear two-dimensional problems on the stress-strain state of shallow shells and plates that are rectangular in plan has proven their efficiency and made it possible to expand the range of problems that can be solved. The approach proposed in these investigations is based on reducing a linear two-dimensional problem to a one-dimensional problem by spline approximation in one coordinate direction and then solving the resulting one-dimensional problem by the method of discrete orthogonalization in the other coordinate direction. Such an approach makes it possible to account for local and edge effects in the stress state of plates and shells and to obtain reliable solutions with complex boundary conditions. In the present study, we take the above approach, employing spline functions to solve linear problems, and use it also to solve geometrically nonlinear problems of the statics of shallow shells and plates with variable parameters.

  17. Maternal MCG Interference Cancellation Using Splined Independent Component Subtraction

    PubMed Central

    Yu, Suhong

    2011-01-01

    Signal distortion is commonly observed when using independent component analysis (ICA) to remove maternal cardiac interference from the fetal magnetocardiogram. This can be seen even in the most conservative case where only the independent components dominated by maternal interference are subtracted from the raw signal, a procedure we refer to as independent component subtraction (ICS). Distortion occurs when the subspaces of the fetal and maternal signals have appreciable overlap. To overcome this problem, we employed splining to remove the fetal signal from the maternal source component. The maternal source components were downsampled and then interpolated to their original sampling rate using a cubic spline. A key aspect of the splining procedure is that the maternal QRS complexes are downsampled much less than the rest of the maternal signal so that they are not distorted, despite their higher bandwidth. The splined maternal source components were projected back onto the magnetic field measurement space and then subtracted from the raw signal. The method was evaluated using data from 24 subjects. We compared the results of conventional, i.e., unsplined, ICS with our method, splined ICS, using matched filtering as a reference. Correlation and subjective assessment of the P-wave and QRS complex were used to assess the performance. Using ICS, we found that the P-wave was adversely affected in 7 of 24 (29%) subjects, all having correlations less than 0.8. Splined ICS showed negligible distortion and improved the signal fidelity to some extent in all subjects. We also demonstrated that maternal T-wave interference could be problematic when the fetal and maternal heartbeats were synchronous. In these instances, splined ICS was more effective than matched filtering. PMID:21712157
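    A heavily simplified sketch of the splining step alone is given below: keep every k-th sample of a source component and rebuild it at the original rate with a cubic spline, so that the residual contains the higher-bandwidth content. The QRS-preserving variable downsampling, the ICA decomposition and the back-projection/subtraction steps of the published method are all omitted, and the sampling rate, downsampling factor and test signal are assumptions.

```python
# Hedged sketch of the "splining" step only: keep every k-th sample of a
# source component and rebuild it at the original rate with a cubic spline.
# The QRS-preserving variable downsampling and the back-projection /
# subtraction steps of the published method are omitted here.
import numpy as np
from scipy.interpolate import CubicSpline

fs = 500.0                                     # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
component = (np.sin(2 * np.pi * 1.5 * t)
             + 0.05 * np.random.default_rng(2).normal(size=t.size))

k = 10                                         # assumed uniform downsampling factor
t_coarse, y_coarse = t[::k], component[::k]
splined = CubicSpline(t_coarse, y_coarse)(t)   # smooth, spline-interpolated version

residual = component - splined                 # high-frequency content removed by splining
print(residual.std())
```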

  18. B-spline based image tracking by detection

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman

    2016-05-01

    Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-Spline based image tracking is implemented. The novel method models the background and foreground using the B-Spline method followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.

  19. L1 Splines with Locally Computed Coefficients

    DTIC Science & Technology

    2013-01-01

    Fang. Univariate Cubic L1 Interpolating Splines: Analytical Results for Linearity, Convexity and Oscillation on 5-Point Windows, Algorithms, (07 2010...0. doi: 10.3390/a3030276 07/21/2011 2.00 Lu Yu, Qingwei Jin, John E. Lavery, Shu-Cherng Fang. Univariate Cubic L1 Interpolating Splines: Spline ...Qingwei Jin, Lu Yu, John E. Lavery, Shu-Cherng Fang. Univariate cubic L1 interpolating splines based on the first derivative and on 5-point windows

  20. Validation of significant wave height product from Envisat ASAR using triple collocation

    NASA Astrophysics Data System (ADS)

    Wang, H.; Shi, C. Y.; Zhu, J. H.; Huang, X. Q.; Chen, C. T.

    2014-03-01

    Nowadays, spaceborne Synthetic Aperture Radar (SAR) has become a powerful tool for providing significant wave height. Traditionally, validation of SAR-derived ocean wave height has been carried out against buoy measurements or model outputs, which only yields an inter-comparison, not an 'absolute' validation. In this study, the triple collocation error model has been introduced in the validation of Envisat ASAR level 2 data. Significant wave height data from ASAR were validated against in situ buoy data and wave model hindcast results from WaveWatch III, covering a period of six years. The impact of the collocation distance on the error of the ASAR wave height was discussed. From the triple collocation validation analysis, it is found that the error of the Envisat ASAR significant wave height product is linear in the collocation distance and decreases with decreasing collocation distance. Using a linear regression fit, the absolute error of the Envisat ASAR wave height at zero collocation distance was obtained. From this triple collocation validation work, the absolute Envisat ASAR wave height error in the deep, open ocean is 0.49 m.
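    For reference, the sketch below implements the classical covariance-notation triple collocation estimator for three collocated data sets with mutually independent errors. It is the textbook three-source formula under unit scaling, not the extended multi-source or distance-dependent analyses described in the records above, and the synthetic error levels are assumptions.

```python
# Hedged sketch of the classical (covariance-notation) triple collocation
# estimator for three collocated data sets with mutually independent errors.
# Textbook three-source formula under unit scaling, not the extended
# multi-source or distance-dependent analyses described in the records above.
import numpy as np

def triple_collocation_error_variances(x, y, z):
    C = np.cov(np.vstack([x, y, z]))
    var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return var_x, var_y, var_z

rng = np.random.default_rng(3)
truth = rng.normal(size=5000)
x = truth + rng.normal(scale=0.2, size=truth.size)   # assumed error levels
y = truth + rng.normal(scale=0.4, size=truth.size)
z = truth + rng.normal(scale=0.6, size=truth.size)
print([round(v, 3) for v in triple_collocation_error_variances(x, y, z)])
```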

  1. Cubic spline functions for curve fitting

    NASA Technical Reports Server (NTRS)

    Young, J. D.

    1972-01-01

    FORTRAN cubic spline routine mathematically fits curve through given ordered set of points so that fitted curve nearly approximates curve generated by passing infinite thin spline through set of points. Generalized formulation includes trigonometric, hyperbolic, and damped cubic spline fits of third order.
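    A hedged modern equivalent of the basic cubic-spline curve fit is shown below using SciPy; the trigonometric, hyperbolic, and damped variants of the FORTRAN routine are not reproduced, and the data points and end conditions are illustrative assumptions.

```python
# Hedged modern equivalent of the basic cubic-spline curve fit through an
# ordered set of points (the trigonometric, hyperbolic and damped variants
# of the FORTRAN routine are not reproduced here).
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.5, 5.0])        # illustrative ordered points
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8])

spline = CubicSpline(x, y, bc_type="natural")  # natural end conditions (assumption)
xs = np.linspace(x[0], x[-1], 11)
print(np.round(spline(xs), 3))
```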

  2. An Areal Isotropic Spline Filter for Surface Metrology.

    PubMed

    Zhang, Hao; Tong, Mingsi; Chu, Wei

    2015-01-01

    This paper deals with the application of the spline filter as an areal filter for surface metrology. A profile (2D) filter is often applied in orthogonal directions to yield an areal filter for a three-dimensional (3D) measurement. Unlike the Gaussian filter, the spline filter presents an anisotropic characteristic when used as an areal filter. This disadvantage hampers the wide application of spline filters for evaluation and analysis of areal surface topography. An approximation method is proposed in this paper to overcome the problem. In this method, a profile high-order spline filter serial is constructed to approximate the filtering characteristic of the Gaussian filter. Then an areal filter with isotropic characteristic is composed by implementing the profile spline filter in the orthogonal directions. It is demonstrated that the constructed areal filter has two important features for surface metrology: an isotropic amplitude characteristic and no end effects. Some examples of applying this method on simulated and practical surfaces are analyzed.

  3. Mining visual collocation patterns via self-supervised subspace learning.

    PubMed

    Yuan, Junsong; Wu, Ying

    2012-04-01

    Traditional text data mining techniques are not directly applicable to image data which contain spatial information and are characterized by high-dimensional visual features. It is not a trivial task to discover meaningful visual patterns from images because the content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach to coping with these difficulties for mining visual collocation patterns. Specifically, the novelty of this work lies in the following new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining and 2) a self-supervised subspace learning method to refine the visual codebook by feeding back discovered patterns via subspace learning. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively.

  4. Simple adaptive cubic spline interpolation of fluorescence decay functions

    NASA Astrophysics Data System (ADS)

    Kuśba, J.; Czuper, A.

    2007-05-01

    A simple method allowing for adaptive cubic spline interpolation of fluorescence decay functions is proposed. In the first step of the method, the interpolated function is integrated using the known adaptive algorithm based on Newton-Cotes quadratures. It is shown that, in this step, application of Simpson's rule provides the smallest number of calls of the interpolated function. In the second step of the method, a typical cubic spline approximation is used to find values of the interpolated function between the points evaluated in the first step.
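    The two-step idea can be sketched as follows: an adaptive Simpson pass records every abscissa at which the decay function is evaluated, and a cubic spline is then fitted through the recorded points. The tolerance, interval and toy decay function are assumptions, and this is an illustration of the scheme rather than the authors' implementation.

```python
# Hedged sketch of the two-step idea: (1) integrate the decay function with
# adaptive Simpson quadrature, recording every abscissa that gets evaluated;
# (2) fit a cubic spline through the recorded points. Tolerance and the toy
# decay function are assumptions, not the authors' settings.
import numpy as np
from scipy.interpolate import CubicSpline

def adaptive_simpson(f, a, b, tol, nodes):
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m, lm, rm = 0.5 * (a + b), 0.25 * (3 * a + b), 0.25 * (a + 3 * b)
        flm, frm = f(lm), f(rm)
        nodes.update({lm: flm, rm: frm})
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right
        return (recurse(a, m, fa, flm, fm, left, tol / 2) +
                recurse(m, b, fm, frm, fb, right, tol / 2))

    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    nodes.update({a: fa, 0.5 * (a + b): fm, b: fb})
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

def decay(t):
    return np.exp(-t / 2.0) + 0.3 * np.exp(-t / 0.2)   # toy decay function

nodes = {}
integral = adaptive_simpson(decay, 0.0, 10.0, 1e-6, nodes)   # step 1
t_nodes = sorted(nodes)
spline = CubicSpline(t_nodes, [nodes[t] for t in t_nodes])   # step 2

print(len(t_nodes), round(integral, 6), round(float(spline(1.234)), 6))
```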

  5. A kernel representation for exponential splines with global tension

    NASA Astrophysics Data System (ADS)

    Barendt, Sven; Fischer, Bernd; Modersitzki, Jan

    2009-02-01

    Interpolation is a key ingredient in many imaging routines. In this note, we present a thorough evaluation of an interpolation method based on exponential splines in tension. They are based on so-called tension parameters, which allow for a tuning of their properties. As it turns out, these interpolants have very many nice features, which are, however, not borne out in the literature. We intend to close this gap. We present for the first time an analytic representation of their kernel, which enables a space and frequency domain analysis. It is shown that the exponential splines in tension, as a function of the tension parameter, bridge the gap between linear and cubic B-spline interpolation. For example, with a certain tension parameter, one is able to suppress ringing artefacts in the interpolant. On the other hand, the analysis in the frequency domain shows that one obtains the superior signal reconstruction quality known from cubic B-spline interpolation, which, however, suffers from ringing artifacts. With the ability to offer a trade-off between opposing features of interpolation methods, we advocate the use of the exponential spline in tension from a practical point of view and use the new kernel representation to qualify the trade-off.

  6. Connecting the Dots Parametrically: An Alternative to Cubic Splines.

    ERIC Educational Resources Information Center

    Hildebrand, Wilbur J.

    1990-01-01

    Discusses a method of cubic splines to determine a curve through a series of points and a second method for obtaining parametric equations for a smooth curve that passes through a sequence of points. Procedures for determining the curves and results of each of the methods are compared. (YP)

  7. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
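    A hedged one-dimensional sketch of the sampling-and-weighting idea follows: draw samples from the arcsine (equilibrium) density on [-1, 1], weight each sample by the Christoffel function of the orthonormal Legendre basis, and solve the resulting weighted least-squares problem. The degree, sample size and target function are assumptions, and the sketch omits the weighted pluripotential-equilibrium machinery of the general multivariate algorithm.

```python
# Hedged 1-D sketch of the Christoffel-weighted least-squares idea on [-1, 1]:
# draw samples from the arcsine (equilibrium) density, weight them by the
# Christoffel function of the Legendre basis, and solve weighted least squares.
# A simplified illustration, not the authors' full algorithm.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(4)
N = 15                                   # polynomial degrees 0..N-1 (assumption)
M = 200                                  # number of samples (assumption)

# Samples from the arcsine / Chebyshev (equilibrium) density on [-1, 1].
x = np.cos(np.pi * rng.uniform(size=M))

# Orthonormal Legendre design matrix: p_k(x) = sqrt((2k+1)/2) * P_k(x).
V = np.column_stack([
    np.sqrt((2 * k + 1) / 2.0) * legendre.legval(x, np.eye(N)[k]) for k in range(N)
])

# Christoffel-function weights: w_i proportional to 1 / sum_k p_k(x_i)^2.
w = N / np.sum(V**2, axis=1)

f = np.exp(x) * np.sin(3 * x)            # toy target function
sqrt_w = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sqrt_w[:, None] * V, sqrt_w * f, rcond=None)

xt = np.linspace(-1, 1, 5)
Vt = np.column_stack([
    np.sqrt((2 * k + 1) / 2.0) * legendre.legval(xt, np.eye(N)[k]) for k in range(N)
])
print(np.round(Vt @ coef - np.exp(xt) * np.sin(3 * xt), 6))   # approximation error
```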

  8. A Christoffel function weighted least squares algorithm for collocation approximations

    SciTech Connect

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  9. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  10. Hartree-Fock-Roothaan energies and expectation values for the neutral atoms He to Uuo: The B-spline expansion method

    SciTech Connect

    Saito, Shiro L.

    2009-11-15

    The ground state energies and expectation values of atoms are given by Hartree-Fock-Roothaan calculations with one B-spline set. For the neutral atoms He to Uuo, the total energies, kinetic energies, potential energies, and virial ratios are tabulated. Our total energies are in excellent agreement with the highly accurate 10-digit numerical Hartree-Fock energies given by Koga and Thakkar [T. Koga, A.J. Thakkar, J. Phys. B 29 (1996) 2973]. The virial ratios are in complete agreement to within 12-digits of the exact value -2. Orbital energies, electron densities at the nucleus, electron-nucleus cusp ratios, and radial expectation values (n = 2, 1, -1, -2, -3) are also given.

  11. A Christoffel function weighted least squares algorithm for collocation approximations [The Christoffel least squares algorithm for collocation approximations]

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  12. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.

  13. Analysis of Pairwise Preference Data Using Integrated B-SPLINES.

    ERIC Educational Resources Information Center

    Winsberg, Suzanne; Ramsay, James O.

    1981-01-01

    A general method of scaling pairwise preference data is presented that may be used without prior knowledge about the nature of the relationship between an observation and the process giving rise to it. The method involves a monotone transformation and is similar to the B-SPLINE approach. (Author/JKS)

  14. Bidirectional elastic image registration using B-spline affine transformation.

    PubMed

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C; Ma, Hongxia; Leader, Joseph; Kaminski, Naftali; Gur, David; Pu, Jiantao

    2014-06-01

    A registration scheme termed B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes with large deformation, a bidirectional instead of the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a subdivision strategy is used to achieve reasonable efficiency in registration. The performance of the developed scheme was assessed using both a two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy.

  15. Bidirectional Elastic Image Registration Using B-Spline Affine Transformation

    PubMed Central

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao

    2014-01-01

    A registration scheme termed B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes with large deformation, a bidirectional instead of the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a subdivision strategy is used to achieve reasonable efficiency in registration. The performance of the developed scheme was assessed using both a two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210

  16. A cubic spline approximation for problems in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.

  17. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.

  18. An alternative local collocation strategy for high-convergence meshless PDE solutions, using radial basis functions

    NASA Astrophysics Data System (ADS)

    Stevens, D.; Power, H.; Meng, C. Y.; Howard, D.; Cliffe, K. A.

    2013-12-01

    This work proposes an alternative decomposition for local scalable meshless RBF collocation. The proposed method operates on a dataset of scattered nodes that are placed within the solution domain and on the solution boundary, forming a small RBF collocation system around each internal node. Unlike other meshless local RBF formulations that are based on a generalised finite difference (RBF-FD) principle, in the proposed "finite collocation" method the solution of the PDE is driven entirely by collocation of PDE governing and boundary operators within the local systems. A sparse global collocation system is obtained not by enforcing the PDE governing operator, but by assembling the value of the field variable in terms of the field value at neighbouring nodes. In analogy to full-domain RBF collocation systems, communication between stencils occurs only over the stencil periphery, allowing the PDE governing operator to be collocated in an uninterrupted manner within the stencil interior. The local collocation of the PDE governing operator allows the method to operate on centred stencils in the presence of strong convective fields; the reconstruction weights assigned to nodes in the stencils being automatically adjusted to represent the flow of information as dictated by the problem physics. This "implicit upwinding" effect mitigates the need for ad-hoc upwinding stencils in convective dominant problems. Boundary conditions are also enforced within the local collocation systems, allowing arbitrary boundary operators to be imposed naturally within the solution construction. The performance of the method is assessed using a large number of numerical examples with two steady PDEs; the convection-diffusion equation, and the Lamé-Navier equations for linear elasticity. The method exhibits high-order convergence in each case tested (greater than sixth order), and the use of centred stencils is demonstrated for convective-dominant problems. In the case of linear elasticity

  19. Extensions of the Zwart-Powell box spline for volumetric data reconstruction on the cartesian lattice.

    PubMed

    Entezari, Alireza; Möller, Torsten

    2006-01-01

    In this article we propose a box spline and its variants for reconstructing volumetric data sampled on the Cartesian lattice. In particular we present a tri-variate box spline reconstruction kernel that is superior to tensor product reconstruction schemes in terms of recovering the proper Cartesian spectrum of the underlying function. This box spline produces a C2 reconstruction that can be considered as a three dimensional extension of the well known Zwart-Powell element in 2D. While its smoothness and approximation power are equivalent to those of the tri-cubic B-spline, we illustrate the superiority of this reconstruction on functions sampled on the Cartesian lattice and contrast it to tensor product B-splines. Our construction is validated through a Fourier domain analysis of the reconstruction behavior of this box spline. Moreover, we present a stable method for evaluation of this box spline by means of a decomposition. Through a convolution, this decomposition reduces the problem to evaluation of a four directional box spline that we previously published in its explicit closed form.

  20. B-LUT: Fast and low memory B-spline image interpolation.

    PubMed

    Sarrut, David; Vandemeulebroucke, Jef

    2010-08-01

    We propose a fast alternative to B-splines in image processing based on an approximate calculation using precomputed B-spline weights. During B-spline indirect transformation, these weights are efficiently retrieved in a nearest-neighbor fashion from a look-up table, greatly reducing overall computation time. Depending on the application, calculating a B-spline using a look-up table, called B-LUT, will result in an exact or approximate B-spline calculation. In the case of the latter, the obtained accuracy can be controlled by the user. The method is applicable to a wide range of B-spline applications and has very low memory requirements compared to other proposed accelerations. The performance of the proposed B-LUTs was compared to conventional B-splines as implemented in the popular ITK toolkit for the general case of image intensity interpolation. Experiments illustrated that highly accurate B-spline approximation can be obtained while computation time is reduced by a factor of 5-6. The B-LUT source code, compatible with the ITK toolkit, has been made freely available to the community.
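    The look-up-table idea can be sketched in one dimension as follows: tabulate the cubic B-spline kernel on a fine grid and fetch the four tap weights by nearest-neighbour lookup instead of evaluating the kernel each time. This is an illustration under assumptions (LUT resolution, toy coefficients, crude boundary clamping); the prefiltering step needed for exact B-spline interpolation and the ITK-compatible implementation are omitted.

```python
# Hedged 1-D sketch of the B-LUT idea: tabulate the cubic B-spline kernel on a
# fine grid and fetch the four weights by nearest-neighbour lookup instead of
# evaluating the kernel. The prefiltering step needed for exact B-spline
# interpolation and the ITK-compatible implementation are omitted.
import numpy as np

def cubic_bspline_kernel(u):
    u = np.abs(u)
    return np.where(u < 1, 2 / 3 - u**2 + 0.5 * u**3,
           np.where(u < 2, (2 - u) ** 3 / 6.0, 0.0))

LUT_STEPS = 1000                                    # LUT resolution per unit spacing
lut = cubic_bspline_kernel(np.arange(-2, 2, 1.0 / LUT_STEPS))

def blut_weight(u):
    """Approximate kernel value via nearest-neighbour LUT lookup."""
    idx = np.clip(np.rint((u + 2.0) * LUT_STEPS).astype(int), 0, lut.size - 1)
    return lut[idx]

def blut_reconstruct(coeffs, x):
    """Evaluate sum_k c_k * beta3(x - k) using LUT weights (4 taps per point)."""
    base = np.floor(x).astype(int)
    out = np.zeros_like(x, dtype=float)
    for offset in range(-1, 3):                     # the four contributing taps
        k = np.clip(base + offset, 0, len(coeffs) - 1)   # crude boundary clamping
        out += coeffs[k] * blut_weight(x - (base + offset))
    return out

coeffs = np.sin(np.linspace(0, 3, 32))              # toy B-spline coefficients
x = np.linspace(1.0, 30.0, 7)
print(np.round(blut_reconstruct(coeffs, x), 4))
```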

  1. Spline screw multiple rotations mechanism

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A system for coupling two bodies together and for transmitting torque from one body to another with mechanical timing and sequencing is reported. The mechanical timing and sequencing is handled so that the following criteria are met: (1) the bodies are handled in a safe manner and nothing floats loose in space, (2) electrical connectors are engaged as long as possible so that the internal processes can be monitored throughout by sensors, and (3) electrical and mechanical power and signals are coupled. The first body has a splined driver for providing the input torque. The second body has a threaded drive member capable of rotation and limited translation. The embedded drive member will mate with and fasten to the splined driver. The second body has an embedded bevel gear member capable of rotation and limited translation. This bevel gear member is coaxial with the threaded drive member. A compression spring provides a preload on the rotating threaded member, and a thrust bearing is used for limiting the translation of the bevel gear member so that when the bevel gear member reaches the upward limit of its translation the two bodies are fully coupled and the bevel gear member then rotates due to the input torque transmitted from the splined driver through the threaded drive member to the bevel gear member. An output bevel gear with an attached output drive shaft is embedded in the second body and meshes with the threaded rotating bevel gear member to transmit the input torque to the output drive shaft.

  2. Stochastic dynamic models and Chebyshev splines

    PubMed Central

    Fan, Ruzong; Zhu, Bin; Wang, Yuedong

    2015-01-01

    In this article, we establish a connection between a stochastic dynamic model (SDM) driven by a linear stochastic differential equation (SDE) and a Chebyshev spline, which enables researchers to borrow strength across fields both theoretically and numerically. We construct a differential operator for the penalty function and develop a reproducing kernel Hilbert space (RKHS) induced by the SDM and the Chebyshev spline. The general form of the linear SDE allows us to extend the well-known connection between an integrated Brownian motion and a polynomial spline to a connection between more complex diffusion processes and Chebyshev splines. One interesting special case is connection between an integrated Ornstein–Uhlenbeck process and an exponential spline. We use two real data sets to illustrate the integrated Ornstein–Uhlenbeck process model and exponential spline model and show their estimates are almost identical. PMID:26045632

  3. Improved Spline Coupling For Robotic Docking

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1995-01-01

    Robotic docking mechanism like one described in "Self-Aligning Mechanical and Electrical Coupling" (GSC-13430) improved. Spline coupling redesigned to reduce stresses, enhancing performance and safety of mechanism. Does not involve significant increase in size. Convex spherical surfaces on spline driver mate with concave spherical surfaces on undersides of splines in receptacle. Spherical surfaces distribute load stresses better and tolerate misalignments better than flat and otherwise shaped surfaces.

  4. Algorithms for spline and other approximations to functions and data

    NASA Astrophysics Data System (ADS)

    Phillips, G. M.; Taylor, P. J.

    1992-12-01

    A succinct introduction to splines, explaining how and why B-splines are used as a basis and how cubic and quadratic splines may be constructed, is followed by a brief account of Hermite interpolation and Padé approximations.

  5. Triangular bubble spline surfaces

    PubMed Central

    Kapl, Mario; Byrtus, Marek; Jüttler, Bert

    2011-01-01

    We present a new method for generating a Gn-surface from a triangular network of compatible surface strips. The compatible surface strips are given by a network of polynomial curves with an associated implicitly defined surface, which fulfill certain compatibility conditions. Our construction is based on a new concept, called bubble patches, to represent the single surface patches. The compatible surface strips provide a simple Gn-condition between two neighboring bubble patches, which are used to construct surface patches, connected with Gn-continuity. For n≤2, we describe the obtained Gn-condition in detail. It can be generalized to any n≥3. The construction of a single surface patch is based on Gordon–Coons interpolation for triangles. Our method is a simple local construction scheme, which works uniformly for vertices of arbitrary valency. The resulting surface is a piecewise rational surface, which interpolates the given network of polynomial curves. Several examples of G0, G1 and G2-surfaces are presented, which have been generated by using our method. The obtained surfaces are visualized with reflection lines to demonstrate the order of smoothness. PMID:22267872

  6. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    SciTech Connect

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.; Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.

    2008-07-14

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process requires both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting

  7. Gauging the Effects of Exercises on Verb-Noun Collocations

    ERIC Educational Resources Information Center

    Boers, Frank; Demecheleer, Murielle; Coxhead, Averil; Webb, Stuart

    2014-01-01

    Many contemporary textbooks for English as a foreign language (EFL) and books for vocabulary study contain exercises with a focus on collocations, with verb-noun collocations (e.g. "make a mistake") being particularly popular as targets for collocation learning. Common exercise formats used in textbooks and other pedagogic materials…

  8. Corpus-Based versus Traditional Learning of Collocations

    ERIC Educational Resources Information Center

    Daskalovska, Nina

    2015-01-01

    One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…

  9. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock price and its changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and the semi-parametric splines technique for predicting stock price in this study. The MARS model is an adaptive nonparametric regression method that is well suited to problems with high dimensionality and many variables. The semi-parametric splines technique used in this study is smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with the MARS model and with the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.

  10. Flexible coiled spline securely joins mating cylinders

    NASA Technical Reports Server (NTRS)

    Coppernol, R. W.

    1966-01-01

    Mating cylindrical members are joined by spline to form an integral structure. The spline is made of tightly coiled, high tensile-strength steel spiral wire that fits a groove between the mating members. It provides a continuous bearing surface for axial thrust between the members.

  11. Multicategorical Spline Model for Item Response Theory.

    ERIC Educational Resources Information Center

    Abrahamowicz, Michal; Ramsay, James O.

    1992-01-01

    A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)

  12. A fully spectral collocation approximation for multi-dimensional fractional Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.; Abdelkawy, M. A.

    2015-08-01

    A shifted Legendre collocation method in two consecutive steps is developed and analyzed to numerically solve one- and two-dimensional time fractional Schrödinger equations (TFSEs) subject to initial-boundary and non-local conditions. The first step depends mainly on shifted Legendre Gauss-Lobatto collocation (SL-GL-C) method for spatial discretization; an expansion in a series of shifted Legendre polynomials for the approximate solution and its spatial derivatives occurring in the TFSE is investigated. In addition, the Legendre-Gauss-Lobatto quadrature rule is established to treat the nonlocal conservation conditions. Thereby, the expansion coefficients are then determined by reducing the TFSE with its nonlocal conditions to a system of fractional differential equations (SFDEs) for these coefficients. The second step is to propose a shifted Legendre Gauss-Radau collocation (SL-GR-C) scheme, for temporal discretization, to reduce such system into a system of algebraic equations which is far easier to be solved. The proposed collocation scheme, both in temporal and spatial discretizations, is successfully extended to solve the two-dimensional TFSE. Numerical results are carried out to confirm the spectral accuracy and efficiency of the proposed algorithms. By selecting relatively limited Legendre Gauss-Lobatto and Gauss-Radau collocation nodes, we are able to get very accurate approximations, demonstrating the utility and high accuracy of the new approach over other numerical methods.

  13. A space-time collocation scheme for modified anomalous subdiffusion and nonlinear superdiffusion equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.

    2016-01-01

    This paper reports a new spectral collocation technique for solving the time-space modified anomalous subdiffusion equation with a nonlinear source term subject to Dirichlet and Neumann boundary conditions. This model equation governs the evolution of the probability density function that describes anomalously diffusing particles. Anomalous diffusion is ubiquitous in physical and biological systems where trapping and binding of particles can occur. A space-time Jacobi collocation scheme is investigated for solving this problem. The main advantage of the proposed scheme is that the shifted Jacobi Gauss-Lobatto collocation and shifted Jacobi Gauss-Radau collocation approximations are employed for the spatial and temporal discretizations, respectively. The problem is thereby reduced to a system of algebraic equations. The numerical results obtained by this algorithm have been compared with those of various other numerical methods in order to demonstrate the high accuracy and efficiency of the proposed method. Indeed, for a relatively small number of Gauss-Lobatto and Gauss-Radau collocation nodes, the absolute error in our numerical solutions is sufficiently small.

  14. Radial spline assembly for antifriction bearings

    NASA Technical Reports Server (NTRS)

    Moore, Jerry H. (Inventor)

    1993-01-01

    An outer race carrier is constructed for receiving an outer race of an antifriction bearing assembly. The carrier in turn is slidably fitted in an opening of a support wall to accommodate slight axial movements of a shaft. A plurality of longitudinal splines on the carrier are disposed to be fitted into matching slots in the opening. A deadband gap is provided between sides of the splines and slots, with a radial gap at ends of the splines and slots and a gap between the splines and slots sized larger than the deadband gap. With this construction, operational distortions (slope) of the support wall are accommodated by the larger radial gaps while the deadband gaps maintain a relatively high springrate of the housing. Additionally, side loads applied to the shaft are distributed between sides of the splines and slots, distributing such loads over a larger surface area than a race carrier of the prior art.

  15. Multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS) is a nonparametric method that estimates complex nonlinear relationships by a seri...

  16. Combined Spline and B-spline for an improved automatic skin lesion segmentation in dermoscopic images using optimal color channel.

    PubMed

    Abbas, A A; Guo, X; Tan, W H; Jalab, H A

    2014-08-01

    In a computerized image analysis environment, the irregularity of a lesion border has been used to differentiate between malignant melanoma and other pigmented skin lesions. The accuracy of automated lesion border detection is a significant step towards accurate classification at a later stage. In this paper, we propose the use of a combined Spline and B-spline in order to enhance the quality of dermoscopic images before segmentation. Morphological operations and a median filter were first used to remove noise from the original image during pre-processing. We then adjusted the image RGB values to the optimal color channel (the green channel). The combined Spline and B-spline method was subsequently adopted to enhance the image before segmentation. The lesion segmentation was completed based on a threshold value obtained empirically from the optimal color channel. Finally, morphological operations were used to merge smaller regions with the main lesion region. An improvement in average segmentation accuracy was observed in experiments conducted on 70 dermoscopic images: the average segmentation accuracy was 97.21%, with average sensitivity and specificity of 94% and 98.05%, respectively.
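    A minimal sketch of the preprocessing and thresholding pipeline described above (green channel, median filtering, empirical threshold, morphological cleanup), using only NumPy and SciPy; the threshold value, filter size and synthetic test image are assumptions, and the spline-based enhancement step of the paper is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def segment_lesion(rgb, threshold=0.5, median_size=5):
    """Rough lesion mask from an RGB image with values in [0, 1] (sketch only)."""
    green = rgb[..., 1]                                  # "optimal" color channel
    smoothed = ndimage.median_filter(green, size=median_size)
    mask = smoothed < threshold                          # lesions are darker than skin
    mask = ndimage.binary_opening(mask, iterations=2)    # remove small speckle
    mask = ndimage.binary_closing(mask, iterations=2)    # fill small holes
    labels, n = ndimage.label(mask)                      # keep the largest region
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

# Synthetic test image: a dark disc on a brighter background.
yy, xx = np.mgrid[0:128, 0:128]
img = np.ones((128, 128, 3)) * 0.8
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] = 0.2
print(segment_lesion(img).sum(), "pixels in the detected lesion")
```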

  17. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff error. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
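    One standard remedy for the accuracy problem described above is to compute the diagonal of the Chebyshev differentiation matrix as the negative sum of the off-diagonal entries in each row rather than from the analytic formula. The sketch below follows the textbook construction on Chebyshev-Gauss-Lobatto points with that correction; it is illustrative and not necessarily one of the exact algorithms compared in the paper.

```python
import numpy as np

def cheb_diff_matrix(n):
    """First-order Chebyshev collocation differentiation matrix on the n+1
    Gauss-Lobatto points x_j = cos(pi*j/n), with the diagonal obtained from
    the negative sum of each row's off-diagonal entries."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))    # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # negative-sum trick
    return D, x

D, x = cheb_diff_matrix(16)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print("max derivative error:", np.max(np.abs(D @ u - du_exact)))
```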

  18. Registration of multiple image sets with thin-plate spline

    NASA Astrophysics Data System (ADS)

    He, Liang; Houk, James C.

    1994-09-01

    A thin-plate spline method for spatial warping was used to register multiple image sets during 3D reconstruction of histological sections. In a neuroanatomical study, the same labeling method was applied to several turtle brains. Each case produced a set of microscopic sections. Spatial warping was employed to map data sets from multiple cases onto a template coordinate system. This technique enabled us to produce an anatomical reconstruction of a neural network that controls limb movement.
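    A hedged sketch of thin-plate-spline landmark warping with SciPy's RBFInterpolator (available in SciPy 1.7 and later); the landmark coordinates are invented for illustration and are not the study's histological data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks: source section -> template coordinate system.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = np.array([[0.1, 0.0], [1.1, 0.1], [0.0, 0.9], [1.0, 1.1], [0.55, 0.5]])

# Thin-plate-spline interpolant mapping source points to template points.
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# Map arbitrary points from the source section into the template space.
pts = np.array([[0.25, 0.75], [0.8, 0.2]])
print(warp(pts))
```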

  19. Registration of sliding objects using direction dependent B-splines decomposition

    NASA Astrophysics Data System (ADS)

    Delmon, V.; Rit, S.; Pinho, R.; Sarrut, D.

    2013-03-01

    Sliding motion is a challenge for deformable image registration because it leads to discontinuities in the sought deformation. In this paper, we present a method to handle sliding motion using multiple B-spline transforms. The proposed method decomposes the sought deformation into sliding regions to allow discontinuities at their interfaces, but prevents unrealistic solutions by forcing those interfaces to match. The method was evaluated on 16 lung cancer patients against a single B-spline transform approach and a multi B-spline transforms approach without the sliding constraint at the interface. The target registration error (TRE) was significantly lower with the proposed method (TRE = 1.5 mm) than with the single B-spline approach (TRE = 3.7 mm) and was comparable to the multi B-spline approach without the sliding constraint (TRE = 1.4 mm). The proposed method was also more accurate along region interfaces, with 37% fewer gaps and overlaps when compared to the multi B-spline transforms without the sliding constraint. This work was presented in part at the 4th International Workshop on Pulmonary Image Analysis during the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference in Toronto, Canada (2011).

  20. On Collocation Schemes for Quasilinear Singularly Perturbed Boundary Value Problems.

    DTIC Science & Technology

    1983-02-01

    Approved for public release; distribution unlimited. Sponsored by the U.S. Army Research Office and the National Science Foundation... Collocation schemes of Gauss, Radau and Lobatto type are considered. The standard theory for discretization methods on general grids is not applicable unless the maximal stepsize is smaller... For a solution (y, z) of (1.1) we construct a vector-spline function (p_y, p_z) which satisfies...

  1. Rational-spline approximation with automatic tension adjustment

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Kerr, P. A.

    1984-01-01

    An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.

  2. Adaptive solution of the biharmonic problem with shortly supported cubic spline-wavelets

    NASA Astrophysics Data System (ADS)

    Černá, Dana; Finěk, Václav

    2012-09-01

    In our contribution, we design a cubic spline-wavelet basis on the interval. The basis functions have small support and the wavelets have vanishing moments. We show that stiffness matrices arising from discretization of the two-dimensional biharmonic problem using the constructed wavelet basis have uniformly bounded condition numbers, and that these condition numbers are very small. We compare the quantitative behavior of the adaptive wavelet method using the constructed basis with that of other cubic spline-wavelet bases, and show the superiority of our construction.

  3. Data approximation using a blending type spline construction

    SciTech Connect

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

    Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.

  4. Heterogeneous modeling of medical image data using B-spline functions.

    PubMed

    Grove, Olya; Rajab, Khairan; Piegl, Les A.

    2012-10-01

    Biomedical data visualization and modeling rely predominantly on manual processing and the use of voxel- and facet-based homogeneous models. Biological structures are naturally heterogeneous, and it is important to incorporate properties such as material composition, size and shape into the modeling process. A method to approximate image density data with a continuous B-spline surface is presented. The proposed approach generates a density point cloud based on medical image data to reproduce heterogeneity across the image through point densities. The density point cloud is ordered and approximated with a set of B-spline curves. A B-spline surface is lofted through the cross-sectional B-spline curves, preserving the heterogeneity of the point cloud dataset. Preliminary results indicate that the proposed methodology produces a mathematical representation capable of capturing and preserving density variations with high fidelity.
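    As a rough illustration of the curve-fitting stage described above (not the authors' density point cloud generation or surface lofting), the sketch below fits a smoothing parametric cubic B-spline to one noisy cross-section with SciPy; the synthetic section and smoothing factor are assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# One noisy cross-section of a (synthetic) density point cloud.
rng = np.random.default_rng(1)
theta = np.linspace(0, np.pi, 60)
x = np.cos(theta) + rng.normal(0, 0.01, theta.size)
y = np.sin(theta) + rng.normal(0, 0.01, theta.size)

# Fit a cubic parametric B-spline curve; s controls the amount of smoothing.
tck, u = splprep([x, y], k=3, s=60 * 0.01**2)

# Evaluate the fitted curve densely, e.g. to loft a surface through sections.
xf, yf = splev(np.linspace(0, 1, 200), tck)
print(len(tck[1][0]), "B-spline coefficients along the section curve")
```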

  5. Conformal interpolating algorithm based on B-spline for aspheric ultra-precision machining

    NASA Astrophysics Data System (ADS)

    Li, Chenggui; Sun, Dan; Wang, Min

    2006-02-01

    Numerical control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic B-spline interpolating curve is first applied to fit the characteristic curve of an aspheric surface. The algorithm and its procedure are presented and simulated in Matlab 7.0. To evaluate the performance of the conformal B-spline interpolation, it was compared with linear and circular interpolation. The results verify that the method ensures the smoothness of the interpolating spline curve and preserves the original shape characteristics. The surface quality obtained with B-spline interpolation is higher than with linear or circular-arc interpolation, and the algorithm helps increase the surface form precision of the workpiece during ultra-precision machining.

  6. Spline-based procedures for dose-finding studies with active control

    PubMed Central

    Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim

    2015-01-01

    In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931

  7. Spline-based procedures for dose-finding studies with active control.

    PubMed

    Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim

    2015-01-30

    In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose-response relationship and to find the smallest target dose concentration d(*), which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose-response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose-response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs.
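    The core estimation idea, a cubic spline through the mean dose-response values intersected with the active-control efficacy, can be sketched as follows; the dose levels, mean responses and bracketing interval are invented for illustration, and the paper's bootstrap confidence interval and bias-minimizing designs are not reproduced.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

# Illustrative mean responses per dose and for the active control
# (invented numbers, not trial data).
doses = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
mean_response = np.array([0.05, 0.20, 0.42, 0.61, 0.70])
active_control_mean = 0.55

# Cubic spline through the dose-response means.
f = CubicSpline(doses, mean_response)

# Dose at which the spline-predicted response equals the control efficacy.
d_star = brentq(lambda d: f(d) - active_control_mean, doses[0], doses[-1])
print(f"estimated target dose d* ~ {d_star:.1f}")
```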

  8. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    NASA Astrophysics Data System (ADS)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a relatively large ensemble size to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and the active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on unsaturated flow numerical cases. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.

  9. A stochastic collocation approach for efficient integrated gear health prognosis

    NASA Astrophysics Data System (ADS)

    Zhao, Fuqiong; Tian, Zhigang; Zeng, Yong

    2013-08-01

    Uncertainty quantification in damage growth is critical in equipment health prognosis and condition based maintenance. Integrated health prognostics has recently drawn growing attention due to its capability to produce more accurate predictions through integrating physical models and real-time condition monitoring data. In the existing literature, simulation is commonly used to account for the uncertainty in prognostics, which is inefficient. In this paper, instead of using simulation, a stochastic collocation approach is developed for efficient integrated gear health prognosis. Based on generalized polynomial chaos expansion, the approach is utilized to evaluate the uncertainty in gear remaining useful life prediction as well as the likelihood function in Bayesian inference. The collected condition monitoring data are incorporated into prognostics via Bayesian inference to update the distributions of uncertainties at given inspection times. Accordingly, the distribution of the remaining useful life is updated. Compared to conventional simulation methods, the stochastic collocation approach is much more efficient, and is capable of dealing with high dimensional probability space. An example is used to demonstrate the effectiveness and efficiency of the proposed approach.
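    A minimal sketch of the stochastic-collocation idea for uncertainty propagation: a Gaussian uncertain parameter is pushed through a nonlinear response at Gauss-Hermite nodes and compared with Monte Carlo. The response function, its parameters and the node count are assumptions, not the paper's gear-damage model or its Bayesian updating.

```python
import numpy as np

# Toy "remaining useful life" response with one uncertain parameter m ~ N(mu, sigma^2).
def remaining_life(m):
    return 1.0e4 / np.exp(m)          # illustrative nonlinear model, not the paper's

mu, sigma = 2.0, 0.15

# Gauss-Hermite collocation: E[g(m)] ~ (1/sqrt(pi)) * sum_i w_i g(mu + sqrt(2)*sigma*x_i)
x, w = np.polynomial.hermite.hermgauss(7)            # 7 collocation nodes
vals = remaining_life(mu + np.sqrt(2.0) * sigma * x)
mean_sc = np.dot(w, vals) / np.sqrt(np.pi)
var_sc = np.dot(w, vals**2) / np.sqrt(np.pi) - mean_sc**2

# Monte Carlo reference (far more samples for comparable accuracy).
rng = np.random.default_rng(2)
mc = remaining_life(rng.normal(mu, sigma, 200_000))
print(f"collocation: mean={mean_sc:.1f}, std={np.sqrt(var_sc):.1f}")
print(f"Monte Carlo: mean={mc.mean():.1f}, std={mc.std():.1f}")
```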

  10. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and if it is impossible it subdivides the data set and reconsiders the subset. After accepting the subset the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm uses this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.

  11. High-frequency health data and spline functions.

    PubMed

    Martín-Rodríguez, Gloria; Murillo-Fort, Carlos

    2005-03-30

    Seasonal variations are highly relevant for health service organization. In general, short run movements of medical magnitudes are important features for managers in this field to make adequate decisions. Thus, the analysis of the seasonal pattern in high-frequency health data is an appealing task. The aim of this paper is to propose procedures that allow the analysis of the seasonal component in this kind of data by means of spline functions embedded into a structural model. In the proposed method, useful adaptions of the traditional spline formulation are developed, and the resulting procedures are capable of capturing periodic variations, whether deterministic or stochastic, in a parsimonious way. Finally, these methodological tools are applied to a series of daily emergency service demand in order to capture simultaneous seasonal variations in which periods are different.

  12. Hermite cubic spline multi-wavelets on the cube

    NASA Astrophysics Data System (ADS)

    Cvejnová, Daniela; Černá, Dana; Finěk, Václav

    2015-11-01

    In 2000, W. Dahmen et al. proposed a construction of Hermite cubic spline multi-wavelets adapted to the interval [0, 1]. Later, several more simple constructions of wavelet bases based on Hermite cubic splines were proposed. We focus here on wavelet basis with respect to which both the mass and stiffness matrices are sparse in the sense that the number of non-zero elements in each column is bounded by a constant. Then, a matrix-vector multiplication in adaptive wavelet methods can be performed exactly with linear complexity for any second order differential equation with constant coefficients. In this contribution, we shortly review these constructions, use an anisotropic tensor product to obtain bases on the cube [0, 1]3, and compare their condition numbers.

  13. Analysis of harmonic spline gravity models for Venus and Mars

    NASA Technical Reports Server (NTRS)

    Bowin, Carl

    1986-01-01

    Methodology utilizing harmonic splines for determining the true gravity field from Line-Of-Sight (LOS) acceleration data from planetary spacecraft missions was tested. As is well known, the LOS data incorporate errors in the zero reference level that appear to be inherent in the processing procedure used to obtain the LOS vectors. The proposed method offers a solution to this problem. The harmonic spline program was converted from the VAX 11/780 to the Ridge 32C computer. The problem with the matrix inversion routine that improved inversion of the data matrices used in the Optimum Estimate program for global Earth studies was solved. The problem of obtaining a successful matrix inversion for a single rev supplemented by data for the two adjacent revs still remains.

  14. ANALYSIS ON CENSORED QUANTILE RESIDUAL LIFE MODEL VIA SPLINE SMOOTHING.

    PubMed

    Ma, Yanyuan; Wei, Ying

    2012-01-01

    We propose a general class of quantile residual life models, where a specific quantile of the residual life time, conditional on an individual having survived up to time t, is a function of certain covariates with coefficients varying over time. The varying coefficients are assumed to be smooth unspecified functions of t. We propose to estimate the coefficient functions using spline approximation. Incorporating the spline representation directly into a set of unbiased estimating equations, we obtain a one-step estimation procedure, and we show that this leads to a uniformly consistent estimator. To obtain further computational simplification, we propose a two-step estimation approach in which we first estimate the coefficients on a series of time points and then apply spline smoothing. We compare the two methods in terms of their asymptotic efficiency and computational complexity. We further develop inference tools to test the significance of the covariate effect on residual life. The finite sample performance of the estimation and testing procedures is further illustrated through numerical experiments. We also apply the methods to a data set from a neurological study.

  15. Efficient spatial and temporal representations of global ionosphere maps over Japan using B-spline wavelets

    NASA Astrophysics Data System (ADS)

    Mautz, R.; Ping, J.; Heki, K.; Schaffrin, B.; Shum, C.; Potts, L.

    2005-05-01

    Wavelet expansion has been demonstrated to be suitable for the representation of spatial functions. Here we propose the so-called B-spline wavelets to represent spatial time-series of GPS-derived global ionosphere maps (GIMs) of the vertical total electron content (TEC) from the Earth’s surface to the mean altitudes of GPS satellites, over Japan. The scalar-valued B-spline wavelets can be defined in a two-dimensional, but not necessarily planar, domain. Generated by a sequence of knots, different degrees of B-splines can be implemented: degree 1 represents the Haar wavelet; degree 2, the linear B-spline wavelet, or degree 4, the cubic B-spline wavelet. A non-uniform version of these wavelets allows us to handle data on a bounded domain without any edge effects. B-splines are easily extended with great computational efficiency to domains of arbitrary dimensions, while preserving their properties. This generalization employs tensor products of B-splines, defined as linear superposition of products of univariate B-splines in different directions. The data and model may be identical at the locations of the data points if the number of wavelet coefficients is equal to the number of grid points. In addition, data compression is made efficient by eliminating the wavelet coefficients with negligible magnitudes, thereby reducing the observational noise. We applied the developed methodology to the representation of the spatial and temporal variations of GIM from an extremely dense GPS network, the GPS Earth Observation Network (GEONET) in Japan. Since the sampling of the TEC is registered regularly in time, we use a two-dimensional B-spline wavelet representation in space and a one-dimensional spline interpolation in time. Over the Japan region, the B-spline wavelet method can overcome the problem of bias for the spherical harmonic model at the boundary, caused by the non-compact support. The hierarchical decomposition not only allows an inexpensive calculation, but also
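    As a rough sketch of a tensor-product B-spline representation of gridded map data (not the authors' B-spline wavelet construction or data-compression scheme), the snippet below fits a cubic tensor-product spline to a synthetic TEC-like grid with SciPy; the grid, values and query point are assumptions.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic "TEC map" sampled on a regular latitude/longitude grid over Japan.
lat = np.linspace(30.0, 46.0, 17)
lon = np.linspace(128.0, 146.0, 19)
LON, LAT = np.meshgrid(lon, lat)
tec = 20 + 5 * np.sin(np.radians(LAT) * 4) * np.cos(np.radians(LON) * 3)

# Cubic tensor-product B-spline representation; s > 0 would also smooth noise.
surf = RectBivariateSpline(lat, lon, tec, kx=3, ky=3, s=0)

# The compact spline representation can be evaluated anywhere in the domain.
print(surf(35.68, 139.77))
```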

  16. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images

    PubMed Central

    Wang, Yangping; Wang, Song

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for a large amount of medical image data. To address the issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is considered as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, namely B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced; the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results of registration quality and execution efficiency on a large amount of medical images show that our algorithm achieves better registration accuracy in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU). PMID:28053653
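    The FFD deformation itself is a weighted sum of control-point displacements with uniform cubic B-spline weights, and the paper's lookup tables amount to precomputing those weights. A one-dimensional, hedged sketch of both ideas follows (not the authors' GPU implementation); the grid spacing, table resolution and control-point values are assumptions.

```python
import numpy as np

def cubic_bspline_weights(u):
    """Uniform cubic B-spline blending weights B0..B3 for a fractional offset u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u**3 - 6 * u**2 + 4) / 6.0,
        (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0,
        u**3 / 6.0,
    ])

# Lookup-table idea: precompute the weights on a fine grid of fractional
# offsets instead of re-evaluating the cubics for every voxel.
LUT = np.array([cubic_bspline_weights(u) for u in np.linspace(0, 1, 256, endpoint=False)])

def ffd_displacement(x, phi, h):
    """1-D FFD displacement at x, given control-point spacing h and displacements phi."""
    i, u = int(np.floor(x / h)), (x % h) / h
    w = LUT[int(u * 256)]
    return sum(w[m] * phi[i - 1 + m] for m in range(4))

phi = np.zeros(20)
phi[8] = 2.0                              # one displaced control point
print(ffd_displacement(7.3, phi, h=1.0))  # smooth displacement near that point
```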

  17. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    PubMed

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.

  18. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.

    PubMed

    Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for a large amount of medical image data. To address the issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is considered as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, namely B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced; the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results of registration quality and execution efficiency on a large amount of medical images show that our algorithm achieves better registration accuracy in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU).

  19. Incorporating Corpus Technology to Facilitate Learning of English Collocations in a Thai University EFL Writing Course

    ERIC Educational Resources Information Center

    Chatpunnarangsee, Kwanjira

    2013-01-01

    The purpose of this study is to explore ways of incorporating web-based concordancers for the purpose of teaching English collocations. A mixed-methods design utilizing a case study strategy was employed to uncover four specific dimensions of corpus use by twenty-four students in two classroom sections of a writing course at a university in…

  20. Using spline-enhanced ordinary differential equations for PK/PD model development.

    PubMed

    Wang, Yi; Eskridge, Kent; Zhang, Shunpu; Wang, Dong

    2008-10-01

    A spline-enhanced ordinary differential equation (ODE) method is proposed for developing a proper parametric kinetic ODE model and is shown to be a useful approach to PK/PD model development. The new method differs substantially from a previously proposed model development approach using a stochastic differential equation (SDE)-based method. In the SDE-based method, a Gaussian diffusion term is introduced into an ODE to quantify the system noise. In our proposed method, we assume an ODE system with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function vector that is estimated using penalized splines. B(t) is used to construct a quantitative measure of model uncertainty useful for finding the proper model structure for a given data set. By means of two examples with simulated data, we demonstrate that the spline-enhanced ODE method can provide model diagnostics and serve as a basis for systematic model development similar to the SDE-based method. We compare and highlight the differences between the SDE-based and the spline-enhanced ODE methods of model development. We conclude that the spline-enhanced ODE method can be useful for PK/PD modeling since it is based on a relatively uncomplicated estimation algorithm which can be implemented with readily available software, provides numerically stable, robust estimation for many models, is distribution-free and allows for identification and accommodation of model deficiencies due to model misspecification.

  1. A Jacobi collocation approximation for nonlinear coupled viscous Burgers' equation

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohamed A.; Hafez, Ramy M.

    2014-02-01

    This article presents a numerical approximation of the initial-boundary value problem for the nonlinear coupled viscous Burgers' equation based on spectral methods. A Jacobi-Gauss-Lobatto collocation (J-GL-C) scheme in combination with the implicit Runge-Kutta-Nyström (IRKN) scheme is employed to obtain highly accurate approximations to this problem. The J-GL-C method, based on Jacobi polynomials and Gauss-Lobatto quadrature, reduces the nonlinear coupled viscous Burgers' equation to a system of nonlinear ordinary differential equations which is far easier to solve. The given examples show, for relatively few J-GL-C points, the accuracy of the approximations and the utility of the approach compared with other analytical or numerical methods, and demonstrate the accuracy, efficiency, and versatility of the proposed algorithm.
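    SciPy exposes Gauss-Jacobi (interior) nodes and weights directly; the sketch below shows the node construction that underlies Jacobi collocation grids, noting that the paper uses the Gauss-Lobatto variant, which additionally includes the endpoints. The parameters are illustrative.

```python
import numpy as np
from scipy.special import roots_jacobi

# Gauss-Jacobi nodes and weights on [-1, 1] for the weight (1-x)^alpha (1+x)^beta.
# alpha = beta = 0 recovers the Legendre case often used for Burgers-type problems.
alpha, beta, n = 0.0, 0.0, 12
x, w = roots_jacobi(n, alpha, beta)

# Sanity check: the rule integrates low-degree polynomials exactly.
print(np.dot(w, x**4), 2.0 / 5.0)   # integral of x^4 over [-1, 1]
```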

  2. Spline trigonometric bases and their properties

    SciTech Connect

    Strelkov, N A

    2001-08-31

    A family of pairs of biorthonormal systems is constructed such that for each p in (1, ∞) one of these systems is a basis in the space L_p(a,b), while the other is the dual basis in L_q(a,b) (here 1/p + 1/q = 1). The functions in the first system are products of trigonometric and algebraic polynomials; the functions in the second are products of trigonometric polynomials and the derivatives of B-splines. The asymptotic behaviour of the Lebesgue functions of the constructed systems is investigated. In particular, it is shown that the dominant terms of pointwise asymptotic expansions for the Lebesgue functions have everywhere (except at certain singular points) the form (4/π²) ln n (that is, the same as in the case of an orthonormal trigonometric system). Interpolation representations with multiple nodes for entire functions of exponential type σ are obtained. These formulae involve a uniform grid; however, by contrast with Kotel'nikov's theorem, where the mesh of the grid is π/σ and decreases as the type of the entire function increases, in the representations obtained the nodes of interpolation can be kept independent of σ, and their multiplicity increases as the type of the interpolated function increases. One possible application of such representations (particularly, their multidimensional analogues) is an effective construction of asymptotically optimal approximation methods by means of scaling and argument shifts of a fixed function (wavelets, grid projection methods, and so on).

  3. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Appendix B to Part 1, Telecommunication, Federal Communications Commission: Nationwide Programmatic Agreement for the Collocation of Wireless Antennas, executed by the Federal Communications Commission, the National Conference of State...

  4. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Appendix B to Part 1, Telecommunication, Federal Communications Commission: Nationwide Programmatic Agreement for the Collocation of Wireless Antennas, executed by the Federal Communications Commission, the National Conference of State...

  5. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    Appendix B to Part 1, Telecommunication, Federal Communications Commission: Nationwide Programmatic Agreement for the Collocation of Wireless Antennas, executed by the Federal Communications Commission, the National Conference of State...

  6. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Appendix B to Part 1, Telecommunication, Federal Communications Commission: Nationwide Programmatic Agreement for the Collocation of Wireless Antennas, executed by the Federal Communications Commission, the National Conference of State...

  7. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Appendix B to Part 1, Telecommunication, Federal Communications Commission: Nationwide Programmatic Agreement for the Collocation of Wireless Antennas, executed by the Federal Communications Commission, the National Conference of State...

  8. Relative orbit control of collocated geostationary spacecraft

    NASA Astrophysics Data System (ADS)

    Rausch, Raoul R.

    A relative orbit control concept for collocated geostationary spacecraft is presented. One chief spacecraft, controlled from the ground, is responsible for the orbit determination and control of the remaining vehicles. Any orbit relative to the chief is described in terms of equinoctial orbit element differences, and a linear mapping is employed for quick transformation from relative orbit measurements to orbit element differences. It is demonstrated that the concept is well suited for spacecraft that are collocated using eccentricity-inclination vector separation, and this formulation still allows for the continued use of well established and currently employed stationkeeping schemes, such as the Sun-pointing-perigee strategy. The relative approach allows deterministic thruster cross-coupling effects to be taken into account in the computation of stationkeeping corrections. The control cost for the proposed concept is comparable to that of ground-based stationkeeping. A relative line-of-sight constraint between spacecraft separated in longitude is also considered, and an algorithm is developed to provide enforcement options. The proposed on-board control approach maintains the deputy spacecraft relative orbit, is competitive in terms of propellant consumption, allows enforcement of a relative line-of-sight constraint, and offers increased autonomy and flexibility for future missions.

  9. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
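    As a toy illustration of collocation applied to a boundary value problem (not the authors' trajectory optimizer or force models), the sketch below solves a simple two-point BVP with SciPy's collocation-based solve_bvp; the equation and boundary conditions are assumptions chosen so that the exact solution is known.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy two-point BVP solved by collocation: y'' = -y, y(0) = 0, y(pi/2) = 1.
# A stand-in illustrating collocation-based solvers, not SEPSPOT or its successor.
def rhs(t, y):
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

t = np.linspace(0.0, np.pi / 2, 11)
y0 = np.zeros((2, t.size))
sol = solve_bvp(rhs, bc, t, y0)

print("max error vs sin(t):", np.max(np.abs(sol.sol(t)[0] - np.sin(t))))
```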

  10. Electron impact excitation of N3+ using the B-spline R-matrix method: importance of the target structure description and the size of the close-coupling expansion

    NASA Astrophysics Data System (ADS)

    Fernández-Menchero, L.; Zatsarinny, O.; Bartschat, K.

    2017-03-01

    There are major discrepancies between recent intermediate coupling frame transformation (ICFT) and Dirac atomic R-matrix code (DARC) calculations (Fernández-Menchero et al 2014 Astron. Astrophys. 566 A104; Aggarwal et al 2016 Mon. Not. R. Astron. Soc. 461 3997) regarding electron-impact excitation rates for transitions in several Be-like ions, as well as claims that the DARC calculations are much more accurate and the ICFT results might even be wrong. To identify possible reasons for these discrepancies and to estimate the accuracy of the various results, we carried out independent B-spline R-matrix calculations for electron-impact excitation of the Be-like ion N3+. Our close-coupling (CC) expansions contain the same target states (238 levels overall) as the previous ICFT and DARC calculations, but the representation of the target wave functions is completely different. We find close agreement among all calculations for the strong transitions between low-lying states, whereas there remain serious discrepancies for the weak transitions as well as for transitions to highly excited states. The differences in the final results for the collision strengths are mainly due to differences in the structure description, specifically the inclusion of correlation effects, rather than the treatment of relativistic effects or problems with the validity of the three methods to describe the collision. Hence there is no indication that one approach is superior to another, until the convergence of both the target configuration and the CC expansions have been fully established.

  11. Design of a mechanical test to characterize sheet metals - Optimization using B-splines or cubic splines

    NASA Astrophysics Data System (ADS)

    Souto, Nelson; Thuillier, Sandrine; Andrade-Campos, A.

    2016-10-01

    Nowadays, full-field measurement methods are largely used to acquire the strain field developed by heterogeneous mechanical tests. Recent material parameters identification strategies based on a single heterogeneous test have been proposed considering that an inhomogeneous strain field can lead to a more complete mechanical characterization of the sheet metals. The purpose of this work is the design of a heterogeneous test promoting an enhanced mechanical behavior characterization of thin metallic sheets, under several strain paths and strain amplitudes. To achieve this goal, a design optimization strategy finding the appropriate specimen shape of the heterogeneous test by using either B-Splines or cubic splines was developed. The influence of using approximation or interpolation curves, respectively, was investigated in order to determine the most effective approach for achieving a better shape design. The optimization process is guided by an indicator criterion which evaluates, quantitatively, the strain field information provided by the mechanical test. Moreover, the design of the heterogeneous test is based on the resemblance with the experimental reality, since a rigid tool leading to uniaxial loading path is used for applying the displacement in a similar way as universal standard testing machines. The results obtained reveal that the optimization strategy using B-Splines curve approximation led to a heterogeneous test providing larger strain field information for characterizing the mechanical behavior of sheet metals.

  12. The Effect of Grouping and Presenting Collocations on Retention

    ERIC Educational Resources Information Center

    Akpinar, Kadriye Dilek; Bardakçi, Mehmet

    2015-01-01

    The aim of this study is two-fold. Firstly, it attempts to determine how presenting collocations organized by (i) keyword, (ii) topic, and (iii) grammatical aspect affects the retention of collocations. Secondly, it investigates the relationship between participants' general English proficiency and the presentation types…

  13. Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks

    ERIC Educational Resources Information Center

    Menon, Sujatha; Mukundan, Jayakaran

    2012-01-01

    This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…

  14. A Study of Strategy Use in Producing Lexical Collocations.

    ERIC Educational Resources Information Center

    Liu, Candi Chen-Pin

    This study examined strategy use in producing lexical collocations among freshman English majors at the Chinese Culture University. Divided into two groups by English writing proficiency, students completed three tasks: a collocation test, an optimal revision task, and a task-based structured questionnaire regarding their actions and mental…

  15. Profiling the Collocation Use in ELT Textbooks and Learner Writing

    ERIC Educational Resources Information Center

    Tsai, Kuei-Ju

    2015-01-01

    The present study investigates the collocational profiles of (1) three series of graded textbooks for English as a foreign language (EFL) commonly used in Taiwan, (2) the written productions of EFL learners, and (3) the written productions of native speakers (NS) of English. These texts were examined against a purpose-built collocation list. Based…

  16. Achieving high data reduction with integral cubic B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.

    1993-01-01

    During geometry processing, tangent directions at the data points are frequently readily available from the computation process that generates the points. It is desirable to utilize this information to improve the accuracy of curve fitting and to improve data reduction. This paper presents a curve fitting method which utilizes both position and tangent direction data. This method produces G^1 non-rational B-spline curves. From the examples, the method demonstrates very good data reduction rates while maintaining high accuracy in both position and tangent direction.
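    A hedged sketch of fitting a curve to combined position and tangent (derivative) data with SciPy's CubicHermiteSpline; this illustrates the use of tangent information, not the paper's non-rational B-spline data-reduction algorithm, and the sample points are invented.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Position samples and tangent (derivative) data produced by an upstream
# geometry-processing step; values here are illustrative only.
x = np.array([0.0, 1.0, 2.5, 4.0])
y = np.sin(x)
dydx = np.cos(x)            # tangent data supplied alongside the positions

curve = CubicHermiteSpline(x, y, dydx)

xf = np.linspace(0, 4, 9)
print(np.max(np.abs(curve(xf) - np.sin(xf))))   # small error from only 4 samples
```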

  17. On the B-splines effective completeness

    NASA Astrophysics Data System (ADS)

    Argenti, Luca; Colle, Renato

    2009-09-01

    Effective completeness of B-splines, defined as the capability of approaching completeness without compromising the positive definite character of the corresponding superposition matrix, is investigated. A general result on the limit solution of the spectrum of B-spline superposition matrices has been obtained for a large class of knot grids. The result has been tested on finite-dimensional cases using both constant and random knot spacings (uniform distribution in [0,1]). The eigenvalue distribution for random spacings is found not to exhibit any large deviation from that for constant spacings. As an example of a system which takes huge advantage of a non-uniform grid of knots, we have computed a few hundred hydrogen Rydberg states, obtaining accuracy comparable to machine accuracy. The obtained results give solid ground to the recognized efficiency and accuracy of B-spline sets when used in atomic physics calculations.

  18. New version of hex-ecs, the B-spline implementation of exterior complex scaling method for solution of electron-hydrogen scattering

    NASA Astrophysics Data System (ADS)

    Benda, Jakub; Houfek, Karel

    2016-07-01

    We provide an updated version of the program hex-ecs originally presented in Comput. Phys. Commun. 185 (2014) 2903-2912. The original version used an iterative method preconditioned by the incomplete LU factorization (ILU), which, though very stable and predictable, requires a large amount of working memory. In the new version we implemented a "separated electrons" (or "Kronecker product approximation", KPA) preconditioner as suggested by Bar-On et al., Appl. Num. Math. 33 (2000) 95-104. This preconditioner has much lower memory requirements, though in return it requires more iterations to reach converged results. By careful choice between the ILU and KPA preconditioners one is able to extend the computational feasibility to larger calculations. Secondly, we added the option to run the KPA preconditioner on an OpenCL device (e.g. a GPU). GPUs generally have better memory access times, which particularly speeds up the sparse matrix multiplication.
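    The appeal of a Kronecker-product ("separated") preconditioner is that a system with matrix kron(A, B) can be inverted through two small solves instead of one huge one. The sketch below demonstrates that identity numerically with NumPy; the matrices are random and diagonally shifted for invertibility, and this is not the hex-ecs code.

```python
import numpy as np

# Solving kron(A, B) @ x = b without ever forming the Kronecker product:
# with X the m-by-n matrix such that x = X.ravel(), one has
#   np.kron(A, B) @ X.ravel() == (A @ X @ B.T).ravel(),
# so x follows from one solve with A and one with B.
rng = np.random.default_rng(3)
m, n = 40, 50
A = rng.standard_normal((m, m)) + m * np.eye(m)
B = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(m * n)

C = b.reshape(m, n)
Y = np.linalg.solve(A, C)                 # A @ Y = C
X = np.linalg.solve(B, Y.T).T             # X @ B.T = Y
x = X.ravel()

print(np.allclose(np.kron(A, B) @ x, b))  # True: x solves the full system
```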

  19. Choosing the optimal number of B-spline control points (Part 2: Approximation of surfaces and applications)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2017-03-01

    Freeform surfaces like B-splines have proven to be a suitable tool to model laser scanner point clouds and to form the basis for an areal data analysis, for example an areal deformation analysis. A variety of parameters determine the B-spline's appearance, with the B-spline's complexity mostly determined by the number of control points. Usually, this parameter type is chosen by intuitive trial-and-error procedures. In [10] the problem of finding an alternative to these trial-and-error procedures was addressed for the case of B-spline curves: the task of choosing the optimal number of control points was interpreted as a model selection problem. Two model selection criteria, the Akaike and the Bayesian Information Criterion, were used to identify the B-spline curve with the optimal number of control points from a set of candidate B-spline models. In order to overcome the drawbacks of the information criteria, an alternative approach based on statistical learning theory was developed. The criteria were evaluated by means of simulated data sets. The present paper continues these investigations. If necessary, the methods proposed in [10] are extended to areal approaches so that they can be used to determine the optimal number of B-spline surface control points. Furthermore, the methods are evaluated by means of real laser scanner data sets rather than simulated ones. The application of these methods to B-spline surfaces reveals the datum problem of such surfaces, meaning that the location and number of control points of two B-spline surfaces are only comparable if they are based on the same parameterization. First investigations to solve this problem are presented.
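    A simplified one-dimensional analogue of the model-selection idea (not the paper's surface fitting, statistical-learning criterion or laser-scanner data): least-squares splines with increasing numbers of interior knots are fitted and the candidate minimizing a Gaussian-error AIC is kept. The data, knot layouts and AIC form are assumptions.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Noisy 1-D profile standing in for a laser-scanner section (synthetic data).
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 400)
y = np.sin(6 * x) + 0.3 * np.exp(-((x - 0.7) / 0.05) ** 2) + rng.normal(0, 0.05, x.size)

best = None
for n_interior in range(2, 30):
    t = np.linspace(0, 1, n_interior + 2)[1:-1]       # interior knots only
    spl = LSQUnivariateSpline(x, y, t, k=3)
    rss = float(np.sum((spl(x) - y) ** 2))
    n_coef = n_interior + 4                           # number of cubic spline coefficients
    aic = x.size * np.log(rss / x.size) + 2 * n_coef  # Gaussian-error AIC (up to a constant)
    if best is None or aic < best[0]:
        best = (aic, n_coef)

print("AIC-selected number of control points:", best[1])
```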

  20. Spline-Screw Multiple-Rotation Mechanism

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Mechanism functions like combined robotic gripper and nut runner. Spline-screw multiple-rotation mechanism related to spline-screw payload-fastening system described in (GSC-13454). Incorporated as subsystem in alternative version of system. Mechanism functions like combination of robotic gripper and nut runner; provides both secure grip and rotary actuation of other parts of system. Used in system in which no need to make or break electrical connections to payload during robotic installation or removal of payload. More complicated version needed to make and break electrical connections. Mechanism mounted in payload.

  1. Procedure for converting a Wilson-Fowler spline to a cubic B-spline with double knots

    SciTech Connect

    Fritsch, F.N.

    1987-10-14

    The Wilson-Fowler spline (WF-spline) has been used by the DOE Weapons Complex for over twenty years to represent point-defined smooth curves. Most modern CADCAM systems use parametric B-spline curves (or, more recently, rational B-splines) for this same purpose. The WF-spline is a parametric piecewise cubic curve. It has been shown that a WF-spline can be reparametrized so that its components are C^1 piecewise cubic functions (functions that are cubic polynomials on each parameter interval, joined so the function and first derivative are continuous). The purpose of these notes is to show explicitly how to convert a given WF-spline to a cubic B-spline with double knots. 7 refs.

  2. Multivariate Epi-splines and Evolving Function Identification Problems

    DTIC Science & Technology

    2015-04-15

    MULTIVARIATE EPI-SPLINES AND EVOLVING FUNCTION IDENTIFICATION PROBLEMS. Johannes O. Royset and Roger J-B Wets, Operations Research Department... fitting, and estimation. The paper develops piecewise polynomial functions, called epi-splines, that approximate any lsc function to an arbitrary level of accuracy. Epi-splines provide the foundation for the solution of a rich class of function identification problems that incorporate general...

  3. Trigonometric quadratic B-spline subdomain Galerkin algorithm for the Burgers' equation

    NASA Astrophysics Data System (ADS)

    Ay, Buket; Dag, Idris; Gorgulu, Melis Zorsahin

    2015-12-01

    A variant of the subdomain Galerkin method has been set up to find numerical solutions of the Burgers' equation. The approximate function consists of a combination of trigonometric B-splines. Integration of the Burgers' equation has been achieved with the aid of the subdomain Galerkin method based on trigonometric B-splines as approximating functions. The resulting first order ordinary differential system has been converted into an iterative algebraic equation by use of the Crank-Nicolson method at two successive time levels. The suggested algorithm is tested on some well-known problems for the Burgers' equation.

  4. A multiresolution analysis for tensor-product splines using weighted spline wavelets

    NASA Astrophysics Data System (ADS)

    Kapl, Mario; Jüttler, Bert

    2009-09-01

    We construct biorthogonal spline wavelets for periodic splines which extend the notion of "lazy" wavelets for linear functions (where the wavelets are simply a subset of the scaling functions) to splines of higher degree. We then use the lifting scheme in order to improve the approximation properties with respect to a norm induced by a weighted inner product with a piecewise constant weight function. Using the lifted wavelets we define a multiresolution analysis of tensor-product spline functions and apply it to the compression of black-and-white images. With this model problem we demonstrate that the use of a weight function allows the norm to be adapted to the specific problem.

  5. Spherical DCB-spline surfaces with hierarchical and adaptive knot insertion.

    PubMed

    Cao, Juan; Li, Xin; Chen, Zhonggui; Qin, Hong

    2012-08-01

    This paper develops a novel surface fitting scheme for automatically reconstructing a genus-0 object into a continuous parametric spline surface. A key contribution for making such a fitting method both practical and accurate is our spherical generalization of the Delaunay configuration B-spline (DCB-spline), a new non-tensor-product spline. In this framework, we efficiently compute Delaunay configurations on sphere by the union of two planar Delaunay configurations. Also, we develop a hierarchical and adaptive method that progressively improves the fitting quality by new knot-insertion strategies guided by surface geometry and fitting error. Within our framework, a genus-0 model can be converted to a single spherical spline representation whose root mean square error is tightly bounded within a user-specified tolerance. The reconstructed continuous representation has many attractive properties such as global smoothness and no auxiliary knots. We conduct several experiments to demonstrate the efficacy of our new approach for reverse engineering and shape modeling.

  6. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.

  7. A Spline Regression Model for Latent Variables

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.

    2014-01-01

    Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…

  8. Shaft Coupler With Friction and Spline Clutches

    NASA Technical Reports Server (NTRS)

    Thebert, Glenn W.

    1987-01-01

    Coupling, developed for rotor of lift/cruise aircraft, employs two clutches for smooth transmission of power from gas-turbine engine to rotor. Prior to ascent, coupling applies friction-type transition clutch that accelerates rotor shaft to speeds matching those of engine shaft. Once shafts synchronized, spline coupling engaged and friction clutch released to provide positive mechanical drive.

  9. Spline smoothing of histograms by linear programming

    NASA Technical Reports Server (NTRS)

    Bennett, J. O.

    1972-01-01

    An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. A histogram is first made from the data. Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are then obtained by linear programming. The approximating function is nonnegative and has area one.
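
    The following is a minimal sketch of the kind of linear-programming fit this abstract describes: a nonnegative, unit-area cubic spline fitted to histogram heights by minimizing the maximum absolute deviation with scipy.optimize.linprog. The knot layout, bin count, and use of a standard (rather than central) B-spline basis are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: fit a nonnegative, unit-area cubic spline to histogram heights
# by linear programming, in the spirit of the approach described above.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

rng = np.random.default_rng(0)
sample = rng.normal(size=500)

# Histogram of the sample (density-normalized heights at bin centers).
heights, edges = np.histogram(sample, bins=20, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Cubic B-spline basis on an open-uniform knot vector over the data range.
k = 3
inner = np.linspace(edges[0], edges[-1], 8)
t = np.r_[[edges[0]] * k, inner, [edges[-1]] * k]
n_coef = len(t) - k - 1
B = BSpline.design_matrix(centers, t, k).toarray()      # basis at bin centers

# Integral of each basis function (for the unit-area constraint).
areas = np.array([BSpline(t, np.eye(n_coef)[i], k).integrate(edges[0], edges[-1])
                  for i in range(n_coef)])

# Variables: spline coefficients a (a >= 0 gives a nonnegative spline) and the
# maximum absolute residual r.  Minimize r subject to |B a - heights| <= r and
# the unit-area constraint sum_i a_i * area_i = 1.
c = np.r_[np.zeros(n_coef), 1.0]
A_ub = np.block([[ B, -np.ones((len(centers), 1))],
                 [-B, -np.ones((len(centers), 1))]])
b_ub = np.r_[heights, -heights]
A_eq = np.r_[areas, 0.0].reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * n_coef + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
density = BSpline(t, res.x[:n_coef], k)                  # smoothed density estimate
```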

  10. An algorithm for surface smoothing with rational splines

    NASA Technical Reports Server (NTRS)

    Schiess, James R.

    1987-01-01

    Discussed is an algorithm for smoothing surfaces with spline functions containing tension parameters. The bivariate spline functions used are tensor products of univariate rational-spline functions. A distinct tension parameter corresponds to each rectangular strip defined by a pair of consecutive spline knots along either axis. Equations are derived for writing the bivariate rational spline in terms of functions and derivatives at the knots. Estimates of these values are obtained via weighted least squares subject to continuity constraints at the knots. The algorithm is illustrated on a set of terrain elevation data.
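
    Below is a hedged illustration of tensor-product surface smoothing on scattered terrain elevations, using SciPy's bivariate smoothing spline as a stand-in for the rational splines with tension described above; the tension parameters and constrained weighted least squares of the paper are not reproduced, and the terrain data are synthetic.

```python
# Hedged stand-in: bicubic tensor-product smoothing of scattered elevation data.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(6)
x = rng.uniform(0.0, 10.0, 400)
y = rng.uniform(0.0, 10.0, 400)
elevation = 50.0 + 10.0 * np.sin(x) * np.cos(0.5 * y) + rng.normal(0.0, 0.5, x.size)

# Bicubic smoothing spline; a larger s gives a smoother surface.
surface = SmoothBivariateSpline(x, y, elevation, kx=3, ky=3, s=200.0)

# Evaluate the smoothed surface on a regular grid for contouring or display.
xi = np.linspace(0.5, 9.5, 50)
yi = np.linspace(0.5, 9.5, 50)
smoothed = surface(xi, yi)        # array of shape (50, 50)
print(smoothed.shape)
```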

  11. Refinable C(1) spline elements for irregular quad layout.

    PubMed

    Nguyen, Thien; Peters, Jörg

    2016-03-01

    Building on a result of U. Reif on removable singularities, we construct C(1) bi-3 splines that may include irregular points where fewer or more than four tensor-product patches meet. The resulting space complements PHT splines, is refinable, and the refined spaces are nested, preserving, for example, surfaces constructed from the splines. As in the regular case, each quadrilateral has four degrees of freedom, each associated with one spline, and the splines are linearly independent. Examples of use for surface construction and isogeometric analysis are provided.

  12. Application of adaptive hierarchical sparse grid collocation to the uncertainty quantification of nuclear reactor simulators

    SciTech Connect

    Yankov, A.; Downar, T.

    2013-07-01

    Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective, they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity, using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant has converged to within a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared with current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)

  13. Partial and interaction spline models for the semiparametric estimation of functions of several variables

    NASA Technical Reports Server (NTRS)

    Wahba, Grace

    1987-01-01

    A partial spline model is a model for a response as a function of several variables, which is the sum of a smooth function of several variables and a parametric function of the same plus possibly some other variables. Partial spline models in one and several variables, with direct and indirect data, with Gaussian errors and as an extension of GLIM to partially penalized GLIM models are described. Application to the modeling of change of regime in several variables is described. Interaction splines are introduced and described and their potential use for modeling non-linear interactions between variables by semiparametric methods is noted. Reference is made to recent work in efficient computational methods.

  14. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
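
    A minimal sketch of the central idea follows, under assumptions of our own choosing (knot counts, noise level, and the use of the raw difference between a coarse and a fine fit as the F-error proxy); it is not the paper's exact recipe.

```python
# Hedged sketch: estimate the signal-dependent (F-) error of a least-squares
# cubic-spline fit from the difference of two fits with different mesh sizes.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 2000)
signal = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)
data = signal + 0.05 * rng.standard_normal(t.size)

def lsq_cubic_fit(x, y, n_intervals):
    """Least-squares cubic spline on a uniform interior-knot mesh."""
    knots = np.linspace(x[0], x[-1], n_intervals + 1)[1:-1]
    return LSQUnivariateSpline(x, y, knots, k=3)

fit_coarse = lsq_cubic_fit(t, data, 10)   # mesh size h
fit_fine = lsq_cubic_fit(t, data, 20)     # mesh size h/2

# Difference between the two fits, used as a pointwise proxy for the F-error of
# the coarse fit (the R-error would be estimated separately from the noise
# statistics, as the abstract notes).
f_error_estimate = fit_coarse(t) - fit_fine(t)
print("max |F-error estimate|:", np.abs(f_error_estimate).max())
print("max true coarse-fit error:", np.abs(fit_coarse(t) - signal).max())
```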

  15. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    SciTech Connect

    Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.

    2012-07-01

    PL equations are classical high-order approximations to the transport equation, based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested by solving two 1D problems with an analytical solution for the transport equation and a classical 2D problem. (authors)

  16. Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting

    2016-01-01

    The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…

  17. The Learning Burden of Collocations: The Role of Interlexical and Intralexical Factors

    ERIC Educational Resources Information Center

    Peters, Elke

    2016-01-01

    This study investigates whether congruency (+/- literal translation equivalent), collocate-node relationship (adjective-noun, verb-noun, phrasal-verb-noun collocations), and word length influence the burden of learning collocations for EFL learners at the initial stage of form-meaning mapping. Eighteen collocations were selected on the basis…

  18. A trans-dimensional polynomial-spline parameterization for gradient-based geoacoustic inversion.

    PubMed

    Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan

    2014-10-01

    This paper presents a polynomial spline-based parameterization for trans-dimensional geoacoustic inversion. The parameterization is demonstrated for both simulated and measured data and shown to be an effective method of representing sediment geoacoustic profiles dominated by gradients, as typically occur, for example, in muddy seabeds. Specifically, the spline parameterization is compared using the deviance information criterion (DIC) to the standard stack-of-homogeneous-layers parameterization for the inversion of bottom-loss data measured at a muddy seabed experiment site on the Malta Plateau. The DIC is an information criterion that is well suited to trans-dimensional Bayesian inversion and is introduced to geoacoustics in this paper. Inversion results for both parameterizations are in good agreement with measurements on a sediment core extracted at the site. However, the spline parameterization more accurately resolves the power-law-like structure of the core density profile and provides smaller overall uncertainties in geoacoustic parameters. In addition, the spline parameterization is found to be more parsimonious, and hence preferred, according to the DIC. The trans-dimensional polynomial spline approach is general and applicable to any inverse problem for gradient-based profiles. [Work supported by ONR.]

  19. A space-time spectral collocation algorithm for the variable order fractional wave equation.

    PubMed

    Bhrawy, A H; Doha, E H; Alzaidy, J F; Abdelkawy, M A

    2016-01-01

    The variable order wave equation plays a major role in acoustics, electromagnetics, and fluid dynamics. In this paper, we consider the space-time variable order fractional wave equation with variable coefficients. We propose an effective numerical method for solving this problem in a bounded domain. The shifted Jacobi polynomials are used as basis functions, and the variable-order fractional derivative is described in the Caputo sense. The proposed method is a combination of the shifted Jacobi-Gauss-Lobatto collocation scheme for the spatial discretization and the shifted Jacobi-Gauss-Radau collocation scheme for the temporal discretization. The problem is then reduced to a system of easily solvable algebraic equations. Finally, numerical examples are presented to show the effectiveness of the proposed numerical method.
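
    As a small, hedged illustration of one building block of such schemes, the snippet below computes shifted Jacobi-Gauss nodes and weights with SciPy and maps them from [-1, 1] to [0, L]; the Gauss-Lobatto and Gauss-Radau variants used in the paper add fixed endpoint nodes and are not constructed here.

```python
# Hedged sketch: shifted Jacobi-Gauss collocation nodes and weights.
import numpy as np
from scipy.special import roots_jacobi

def shifted_jacobi_gauss(n, alpha, beta, L):
    """Jacobi-Gauss nodes/weights mapped from [-1, 1] to [0, L]."""
    x, w = roots_jacobi(n, alpha, beta)                  # nodes/weights on [-1, 1]
    return 0.5 * L * (x + 1.0), w * (0.5 * L) ** (alpha + beta + 1)

nodes, weights = shifted_jacobi_gauss(8, 0.0, 0.0, L=1.0)  # Legendre special case
print(nodes)
print(weights.sum())   # ~1.0 for alpha = beta = 0 on [0, 1]
```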

  20. The Effect of Taper Angle and Spline Geometry on the Initial Stability of Tapered, Splined Modular Titanium Stems.

    PubMed

    Pierson, Jeffery L; Small, Scott R; Rodriguez, Jose A; Kang, Michael N; Glassman, Andrew H

    2015-07-01

    Design parameters affecting the initial mechanical stability of tapered, splined modular titanium stems (TSMTSs) are not well understood. Furthermore, there is considerable variability in contemporary designs. We asked whether spline geometry and stem taper angle could be optimized in TSMTSs to improve mechanical stability, resisting axial subsidence and increasing torsional stability. Initial stability was quantified with stems of varied taper angle and spline geometry implanted in a foam model replicating 2 cm of diaphyseal engagement. Increased taper angle and a broad spline geometry exhibited significantly greater axial stability (+21%-269%) than other design combinations. Neither taper angle nor spline geometry significantly altered initial torsional stability.

  1. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

    This paper presents a bicubic uniform B-spline wavefront fitting technique for obtaining the analytical expression of the object wavefront used in Computer-Generated Holograms (CGHs). In many cases, to decrease the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, we have to fit an analytical expression for the object wavefront. Zernike polynomials are well suited to fitting the wavefronts of centrosymmetric optical systems, but not of axisymmetric ones. Although a high-degree polynomial fit achieves high precision at all fitting nodes, its greatest shortcoming is that any departure from the fitting nodes can result in large fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time in coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the characteristic mesh of the bicubic uniform B-spline wavefront, the bicubic uniform B-spline wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront are fitted by bicubic uniform B-splines as well as by high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is a more competitive method for fitting the analytical expression of the object wavefront used in off-axis CGHs, owing to its higher fitting precision and C2 continuity.
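
    A minimal sketch of bicubic spline wavefront fitting follows, using SciPy's tensor-product spline as a stand-in for the paper's uniform B-spline matrix formulation; the sampled wavefront is a synthetic example, not one of the four cases from the paper.

```python
# Hedged sketch: bicubic spline fit of a sampled wavefront, evaluated off-grid.
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(-1.0, 1.0, 64)
y = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, y, indexing="ij")

# Synthetic off-axis wavefront (defocus + tilt + a mild aspheric term), in waves.
wavefront = 0.8 * (X**2 + Y**2) + 0.3 * X + 0.05 * (X**2 + Y**2) ** 2

# Bicubic (kx = ky = 3) spline fit; s = 0 interpolates, s > 0 smooths noisy data.
fit = RectBivariateSpline(x, y, wavefront, kx=3, ky=3, s=0)

# The analytical spline can be evaluated anywhere off the sampling grid,
# e.g. when encoding fringe positions for a computer-generated hologram.
print(fit.ev(0.123, -0.456))            # wavefront value at an arbitrary point
print(fit.ev(0.123, -0.456, dx=1))      # local slope in x (C2-continuous model)
```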

  2. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  3. Usability Study of Two Collocated Prototype System Displays

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2007-01-01

    Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the 2 new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.

  4. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are applied to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, in which the accuracy of models and forecasts is the main concern of the industry. The original idea of P-splines is extended here to two-dimensional mortality data, indexed by age of death and year of death, supplied by the Department of Statistics Malaysia. This extension constructs the best-fitted surface and provides sensible predictions of the underlying mortality rates for Malaysia.
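
    A hedged one-dimensional sketch of the Eilers-Marx P-spline idea referenced above (a rich B-spline basis plus a difference penalty on the coefficients) is given below; the method in the abstract is two-dimensional (age by year, via tensor products) and uses real Malaysian mortality data, whereas the data here are synthetic.

```python
# Hedged 1D P-spline sketch: B-spline basis + second-order difference penalty.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
age = np.linspace(20.0, 90.0, 71)
log_rate = -9.0 + 0.09 * age + 0.1 * rng.standard_normal(age.size)  # toy log-mortality

# Cubic B-spline basis with equally spaced knots extended slightly past the data.
k, n_seg = 3, 20
inner = np.linspace(age[0] - 0.5, age[-1] + 0.5, n_seg + 1)
t = np.r_[[inner[0]] * k, inner, [inner[-1]] * k]
B = BSpline.design_matrix(age, t, k).toarray()
n_coef = B.shape[1]

# Second-order difference penalty matrix D and smoothing parameter lam.
D = np.diff(np.eye(n_coef), n=2, axis=0)
lam = 10.0

# Penalized least squares: (B'B + lam * D'D) a = B'y.
a = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ log_rate)
smooth = BSpline(t, a, k)
print(smooth(65.0))   # smoothed log-mortality at age 65
```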

  5. [Application of spline-based Cox regression on analyzing data from follow-up studies].

    PubMed

    Dong, Ying; Yu, Jin-ming; Hu, Da-yi

    2012-09-01

    With R, this study applied spline-based Cox regression to analyze data from follow-up studies when the two basic assumptions of Cox proportional hazards regression were not satisfied. Results showed that most of the continuous covariates contributed nonlinearly to mortality risk, while the effects of three covariates were time-dependent. After considering multiple covariates in the spline-based Cox regression, when the ankle brachial index (ABI) decreased by 0.1, the hazard ratio (HR) for all-cause death was 1.071. The spline-based Cox regression method can be applied to analyze data from follow-up studies when the assumptions of Cox proportional hazards regression are violated.

  6. Semisupervised feature selection via spline regression for video semantic recognition.

    PubMed

    Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang

    2015-02-01

    To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. When the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms to better identify the relevant video features, which are discriminative to target classes, by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework of video semantic recognition by semisupervised feature selection via spline regression (S(2)FS(2)R). Two scatter matrices are combined to capture both the discriminative information and the local geometry structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos and a spline scatter output from a local spline regression encoding data distribution. An l2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To efficiently solve S(2)FS(2)R, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, namely video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S(2)FS(2)R achieves better performance compared with state-of-the-art methods.

  7. Adaptive Predistortion Using Cubic Spline Nonlinearity Based Hammerstein Modeling

    NASA Astrophysics Data System (ADS)

    Wu, Xiaofang; Shi, Jianghong

    In this paper, a new Hammerstein predistorter model for power amplifier (PA) linearization is proposed. The key feature of the model is that cubic splines, instead of conventional high-order polynomials, are utilized as the static nonlinearities, because splines are able to represent hard nonlinearities accurately while circumventing the numerical instability problem. Furthermore, according to the amplifier's AM/AM and AM/PM characteristics, real-valued cubic spline functions are utilized to compensate the nonlinear distortion of the amplifier, and the following finite impulse response (FIR) filters are utilized to eliminate the memory effects of the amplifier. In addition, the identification algorithm of the Hammerstein predistorter is discussed. The predistorter is implemented on the indirect learning architecture, and the separable nonlinear least squares (SNLS) Levenberg-Marquardt algorithm is adopted because the separation reduces the dimension of the nonlinear search space and thus greatly simplifies the identification procedure. However, the convergence performance of the iterative SNLS algorithm is sensitive to the initial estimate, so an effective normalization strategy is presented to solve this problem. Simulation experiments were carried out on a single-carrier WCDMA signal. Results show that, compared to conventional polynomial predistorters, the proposed Hammerstein predistorter has higher linearization performance when the PA is near saturation and comparable linearization performance when the PA is mildly nonlinear. Furthermore, the proposed predistorter is numerically more stable in all input back-off cases. The results also demonstrate the validity of the convergence scheme.
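
    The snippet below is a minimal sketch of the Hammerstein structure just described: a cubic-spline static nonlinearity followed by an FIR filter. The spline knot values, FIR taps, and test signal are invented placeholders, and the SNLS identification on the indirect learning architecture is not reproduced.

```python
# Hedged sketch of a Hammerstein predistorter forward pass:
# spline static nonlinearity -> FIR memory filter.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import lfilter

# Static nonlinearity: spline through assumed AM/AM correction points
# (normalized input amplitude -> predistorted amplitude).
amp_in = np.linspace(0.0, 1.0, 9)
amp_out = np.array([0.0, 0.13, 0.27, 0.42, 0.58, 0.76, 0.97, 1.22, 1.55])
am_am = CubicSpline(amp_in, amp_out)

# Memory part: a short FIR filter with assumed (identified) taps.
fir_taps = np.array([1.0, -0.15, 0.05])

def hammerstein_predistorter(x):
    """Apply the static spline nonlinearity, then the FIR memory filter."""
    r = np.abs(x)
    # Scale each complex sample by the spline AM/AM correction (phase kept).
    gain = am_am(np.clip(r, 0.0, 1.0)) / np.maximum(r, 1e-12)
    y_static = np.where(r > 0, gain, 1.0) * x
    return lfilter(fir_taps, [1.0], y_static)

# Example: predistort a toy complex baseband signal.
n = np.arange(256)
x = 0.7 * np.exp(1j * 2 * np.pi * 0.05 * n)
y = hammerstein_predistorter(x)
```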

  8. Single-grid spectral collocation for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte

    1988-01-01

    The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.

  9. Subcell resolution in simplex stochastic collocation for spatial discontinuities

    NASA Astrophysics Data System (ADS)

    Witteveen, Jeroen A. S.; Iaccarino, Gianluca

    2013-10-01

    Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC-SR method is based on resolving the discontinuity location in the probability space explicitly as a function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers' equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC-SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.

  10. Spline-Screw Payload-Fastening System

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Payload handed off securely between robot and vehicle or structure. Spline-screw payload-fastening system includes mating female and male connector mechanisms. Clockwise (or counter-clockwise) rotation of splined male driver on robotic end effector causes connection between robot and payload to tighten (or loosen) and simultaneously causes connection between payload and structure to loosen (or tighten). Includes mechanisms like those described in "Tool-Changing Mechanism for Robot" (GSC-13435) and "Self-Aligning Mechanical and Electrical Coupling" (GSC-13430). Designed for use in outer space, also useful on Earth in applications needed for secure handling and secure mounting of equipment modules during storage, transport, and/or operation. Particularly useful in machine or robotic applications.

  11. Representing flexible endoscope shapes with hermite splines

    NASA Astrophysics Data System (ADS)

    Chen, Elvis C. S.; Fowler, Sharyle A.; Hookey, Lawrence C.; Ellis, Randy E.

    2010-02-01

    Navigation of a flexible endoscope is a challenging surgical task: the shape of the end effector of the endoscope, interacting with surrounding tissues, determines the surgical path along which the endoscope is pushed. We present a navigational system that visualizes the shape of the flexible endoscope tube to assist gastrointestinal surgeons in performing Natural Orifice Translumenal Endoscopic Surgery (NOTES). The system used an electromagnetic positional tracker, a catheter embedded with multiple electromagnetic sensors, and a graphical user interface for visualization. Hermite splines were used to interpret the position and direction outputs of the endoscope sensors. We conducted NOTES experiments on live swine involving 6 gastrointestinal and 6 general surgeons. Participants who used the device first were 14.2% faster than when not using the device. Participants who used the device second were 33.6% faster than in their first session. The trend suggests that spline-based visualization is a promising adjunct during NOTES procedures.

  12. Curvilinear bicubic spline fit interpolation scheme

    NASA Technical Reports Server (NTRS)

    Chi, C.

    1973-01-01

    Modification of the rectangular bicubic spline fit interpolation scheme so as to make it suitable for use with a polar grid pattern. In the proposed modified scheme the interpolation function is expressed in terms of the radial length and the arc length, and the shape of the patch, which is a wedge or a truncated wedge, is taken into account implicitly. Examples are presented in which the proposed interpolation scheme was used to reproduce the equations of a hemisphere.

  13. Marginal longitudinal semiparametric regression via penalized splines

    PubMed Central

    Kadiri, M. Al; Carroll, R.J.; Wand, M.P.

    2010-01-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models. PMID:21037941

  14. The Benard problem: A comparison of finite difference and spectral collocation eigen value solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd-order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than the finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded that of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd-order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher-order finite difference schemes.
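
    For readers unfamiliar with the machinery used in such comparisons, here is a hedged sketch of the standard Chebyshev-Gauss-Lobatto points and first-derivative collocation matrix (after Trefethen's cheb.m); the Marangoni-Benard eigenvalue problem itself is not assembled here.

```python
# Hedged sketch: Chebyshev collocation points and differentiation matrix.
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and first-derivative collocation matrix."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal
    return D, x

# Spectral accuracy check: differentiate exp(x) on [-1, 1] with N = 15.
D, x = cheb(15)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(f"max derivative error with 16 points: {err:.2e}")   # roughly 1e-10 or smaller
```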

  15. Continuous Groundwater Monitoring Collocated at USGS Streamgages

    NASA Astrophysics Data System (ADS)

    Constantz, J. E.; Eddy-Miller, C.; Caldwell, R.; Wheeer, J.; Barlow, J.

    2012-12-01

    The USGS Office of Groundwater funded a 2-year pilot study collocating groundwater wells for monitoring water level and temperature at several existing continuous streamgages in Montana and Wyoming, while the U.S. Army Corps of Engineers funded enhancement of streamgages in Mississippi. To increase spatial relevance within a given watershed, study sites were selected where near-stream groundwater was in connection with an appreciable aquifer, and where the logistics and cost of well installations were considered representative. After each well installation and surveying, groundwater level and temperature were easily either radio-transmitted or hardwired to the existing data acquisition system located in the streamgaging shelter. Since USGS field personnel regularly visit streamgages during routine streamflow measurements and streamgage maintenance, the close proximity of observation wells resulted in minimal extra time to verify electronically transmitted measurements. After the field protocol was tuned, stream and nearby groundwater information were concurrently acquired at streamgages and transmitted to satellite from seven pilot-study sites extending over nearly 2,000 miles (3,200 km) of the central US from October 2009 until October 2011, for evaluating the scientific and engineering add-on value of the enhanced streamgage design. Examination of the four-parameter transmission from the seven pilot-study groundwater gaging stations reveals an internally consistent, dynamic data suite of continuous groundwater elevation and temperature in tandem with ongoing stream stage and temperature data. Qualitatively, the graphical information provides an appreciation of seasonal trends in stream exchanges with shallow groundwater, as well as thermal issues of concern for topics ranging from ice hazards to the suitability of fish refugia, while quantitatively this information provides a means for estimating flux exchanges through the streambed via heat-based inverse-type groundwater modeling. In June…

  16. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    SciTech Connect

    Ruberti, M.; Averbukh, V.; Decleva, P.

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for Gaussian basis calculations, such as Cooper minima and high-energy tails, are reproduced by the B-spline ADC in very good agreement with experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  17. A Tensor B-Spline Approach for Solving the Diffusion PDE With Application to Optical Diffusion Tomography.

    PubMed

    Shulga, Dmytro; Morozov, Oleksii; Hunziker, Patrick

    2016-12-19

    Optical Diffusion Tomography (ODT) is a modern non-invasive medical imaging modality which requires mathematical modelling of near-infrared light propagation in tissue. Solving the ODT forward problem equation accurately and efficiently is crucial. Typically, the forward problem is represented by a Diffusion PDE and is solved using the Finite Element Method (FEM) on a mesh, which is often unstructured. Tensor B-spline signal processing has the attractive features of excellent interpolation and approximation properties, multiscale properties, and fast algorithms, and it does not require meshing. This paper introduces a Tensor B-spline methodology with arbitrary spline degree tailored to solve the ODT forward problem in an accurate and efficient manner. We show that our Tensor B-spline formulation induces efficient and highly parallelizable computational algorithms. Exploitation of B-spline properties for integration over irregular domains proved valuable. The Tensor B-spline solver was tested on standard problems and on synthetic medical data and compared to FEM, including state-of-the-art ODT forward solvers. Results show that 1) a significantly higher accuracy can be achieved with the same number of nodes, 2) fewer nodes are required to achieve a prespecified accuracy, and 3) the algorithm converges in significantly fewer iterations to a given error. These findings support the value of the Tensor B-spline methodology for high-performance ODT implementations. This may translate into advances in ODT imaging for biomedical research and clinical application.

  18. B-spline algebraic diagrammatic construction: application to photoionization cross-sections and high-order harmonic generation.

    PubMed

    Ruberti, M; Averbukh, V; Decleva, P

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for Gaussian basis calculations, such as Cooper minima and high-energy tails, are reproduced by the B-spline ADC in very good agreement with experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  19. Local Adaptive Calibration of the GLASS Surface Incident Shortwave Radiation Product Using Smoothing Spline

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Liang, S.; Wang, G.

    2015-12-01

    Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: direct surface measurements provide accurate values but sparse spatial coverage, whereas other global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally for calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation is also considered in the proposed method. The point-based surface-reconstructed ISR was used as the response variable, and the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables, to train the thin-plate spline model. We evaluated the performance of the approach using cross-validation at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicate that the thin-plate smoothing spline method can be effectively used to calibrate satellite-derived ISR products against ground measurements to achieve better accuracy.
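
    A hedged sketch of the calibration idea follows: a thin-plate smoothing spline with the station-reconstructed ISR as the response and the collocated GLASS ISR plus surface elevation as explanatory variables. SciPy's RBFInterpolator with a thin-plate-spline kernel stands in for the paper's improved thin-plate smoothing spline, and all data below are synthetic.

```python
# Hedged sketch: thin-plate smoothing spline calibration of a gridded product.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
n_sites = 200

glass_isr = rng.uniform(80.0, 300.0, n_sites)        # satellite product (W m-2)
elevation = rng.uniform(0.0, 4000.0, n_sites)        # site elevation (m)
# Synthetic "ground truth": a mild, elevation-dependent bias plus noise.
station_isr = (0.9 * glass_isr + 0.004 * elevation + 5.0
               + 3.0 * rng.standard_normal(n_sites))

# Train the smoothing spline in the (GLASS ISR, elevation) predictor space.
predictors = np.column_stack([glass_isr, elevation])
calibrator = RBFInterpolator(predictors, station_isr,
                             kernel="thin_plate_spline", smoothing=10.0)

# Calibrate new GLASS pixels given their product value and elevation.
new_pixels = np.array([[150.0, 500.0], [250.0, 3000.0]])
print(calibrator(new_pixels))
```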

  20. Classification by means of B-spline potential functions with applications to remote sensing

    NASA Technical Reports Server (NTRS)

    Bennett, J. O.; Defigueiredo, R. J. P.; Thompson, J. R.

    1974-01-01

    A method is presented for using B-splines as potential functions in the estimation of likelihood functions (probability density functions conditioned on pattern classes), or the resulting discriminant functions. The consistency of this technique is discussed. Experimental results of using the likelihood functions in the classification of remotely sensed data are given.

  1. Local Refinement of Analysis-Suitable T-splines

    DTIC Science & Technology

    2011-03-01

    …an important alternative to traditional engineering design and analysis methodologies. Isogeometric analysis was introduced in [1] and later described in… surfaces, although the concepts generalize to arbitrary odd degree. T-splines of arbitrary degree are discussed in [42, 50]. The fundamental object of interest underlying T-spline technology is the T-mesh, denoted by T. For surfaces, the T-mesh is a mesh of polygonal…

  2. On the spline-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a spline-based wavelet basis is constructed. Given an n-th order spline basis it is proved that the differentiation matrix is accurate of order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.

  3. Density Estimation of Simulation Output Using Exponential EPI-Splines

    DTIC Science & Technology

    2013-12-01

    …We define the pointwise Fisher information of an exponential epi-spline density h at x to… are required to obtain meaningful results. All exponential epi-splines are computed under the assumptions of continuity, smoothness, and pointwise Fisher… In the exponential epi-spline estimates, we include continuity, differentiability, and pointwise Fisher information constraints…

  4. I-spline Smoothing for Calibrating Predictive Models.

    PubMed

    Wu, Yuan; Jiang, Xiaoqian; Kim, Jihoon; Ohno-Machado, Lucila

    2012-01-01

    We proposed the I-spline Smoothing approach for calibrating predictive models by solving a nonlinear monotone regression problem. We took advantage of I-spline properties to obtain globally optimal solutions while keeping the computational cost low. Numerical studies based on three data sets showed empirical evidence of I-spline Smoothing improving calibration (i.e., 1.6x, 1.4x, and 1.4x on the three datasets compared to the average of the competitors: Binning, Platt Scaling, Isotonic Regression, Monotone Spline Smoothing, and Smooth Isotonic Regression) without deterioration of discrimination.

  5. A B-spline approach to phase unwrapping in tagged cardiac MRI for motion tracking.

    PubMed

    Chiang, Patricia; Cai, Yiyu; Mak, Koon Hou; Zheng, Jianmin

    2013-05-01

    A novel B-spline based approach to phase unwrapping in tagged magnetic resonance images is proposed for cardiac motion tracking. A bicubic B-spline surface is used to model the absolute phase. The phase unwrapping problem is formulated as a mixed integer optimization problem that minimizes the sum of the difference between the spatial gradients of the absolute and wrapped phases, and the difference between the rewrapped and wrapped phases. In contrast to existing techniques for motion tracking, the proposed approach can overcome the limitation of interframe half-tag displacement and increase the robustness of motion tracking. The article further presents a hybrid harmonic phase imaging-B-spline method to take advantage of the harmonic phase imaging method for small motion and the efficiency of the B-spline approach for large motion. The proposed approach has been successfully applied to a full set of cardiac MRI scans in both long- and short-axis slices, with superior performance when compared with the harmonic phase imaging and quality-guided path-following methods.

  6. Collocation and Pattern Recognition Effects on System Failure Remediation

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Press, Hayes N.

    2007-01-01

    Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning the system displays to find a problem and correct it using the provided checklist. Results indicate that large parameter movement aided detection, that pattern recognition is needed for diagnosis, and that the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization because of the pattern recognition developed with training and use.

  7. Fatigue crack growth monitoring of idealized gearbox spline component using acoustic emission

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Ozevin, Didem; Hardman, William; Kessler, Seth; Timmons, Alan

    2016-04-01

    The spline component of a gearbox structure is a non-redundant element that requires early detection of flaws to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting active flaws; however, the method suffers from the influence of background noise and location/sensor-based pattern recognition. It is important to identify the source mechanism and adapt it to different test conditions and sensors. In this paper, the fatigue crack growth of a notched and flattened gearbox spline component is monitored using the AE method in a laboratory environment. The test sample has the major details of the spline component on a flattened geometry. The AE data are continuously collected together with strain gauges strategically positioned on the structure. The fatigue test characteristics are a 4 Hz frequency and a ratio of minimum to maximum loading of 0.1 in the tensile regime. It is observed that a significant amount of continuous emission is released from the notch tip due to the formation of plastic deformation and slow crack growth. The frequency spectra of continuous emissions and burst emissions are compared to understand the difference between sudden and gradual crack growth. The predicted crack growth rate is compared with the AE data using the cumulative AE events at the notch tip. The source mechanism of sudden crack growth is obtained by solving the inverse mathematical problem from output signal to input signal.

  8. Advantage of collocating research facilities The administrator's point of view

    NASA Astrophysics Data System (ADS)

    Spilker, H.-M.; Blomeyer, C.

    1995-02-01

    Research facilities are collocated in order to create a maximum of synergy. This also requires close cooperation between the administrations concerned, leading to advantages in particular with regard to infrastructure and cost effectiveness. Faced with the specificities of the research facilities involved, administrators feel challenged to find appropriate solutions. The successive establishment of research institutes on the Polygone Scientifique in Grenoble is described. The forms and content of administrative collaboration between the Institut Max von Laue-Paul Langevin and the European Synchrotron Radiation Facility are analysed, where collocation has led to intensive cooperation.

  9. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Xue, Junpeng; Gao, Bo; Zuo, Chao; Idir, Mourad

    2017-04-01

    In this work, we present a novel method to handle two-dimensional shape or wavefront reconstruction from slope data. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes over a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm error than two other existing methods used for comparison. Especially at the boundaries, the proposed method performs better. The influence of noise is studied by adding white Gaussian noise to the slope data. Experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
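
    A hedged one-dimensional illustration of the spline-integration idea follows: fit a cubic spline to measured slopes and integrate it analytically to recover the height profile. The actual method is two-dimensional and assembles both slope components into a linear least-squares system, which is not reproduced here.

```python
# Hedged 1D sketch: spline fit of slope data, then analytic antidifferentiation.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 101)
true_height = np.sin(x) + 0.05 * x**2
true_slope = np.cos(x) + 0.1 * x

# Measured slopes with a little noise (e.g. from deflectometry).
slope_meas = true_slope + 1e-3 * rng.standard_normal(x.size)

# Piecewise-polynomial fit of the slopes, then analytic antidifferentiation.
slope_spline = CubicSpline(x, slope_meas)
height_spline = slope_spline.antiderivative()

# Heights are recovered up to an unknown constant; anchor at the first point.
height = height_spline(x) - height_spline(x[0]) + true_height[0]
print("max reconstruction error:", np.max(np.abs(height - true_height)))
```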

  10. B-spline parametrization of the dielectric function applied to spectroscopic ellipsometry on amorphous carbon

    SciTech Connect

    Weber, J. W.; Hansen, T. A. R.; Sanden, M. C. M. van de; Engeln, R.

    2009-12-15

    The remote plasma deposition of hydrogenated amorphous carbon (a-C:H) thin films is investigated by in situ spectroscopic ellipsometry (SE). The dielectric function of the a-C:H film is in this paper parametrized by means of B-splines. In contrast with the commonly used Tauc-Lorentz oscillator, B-splines are a purely mathematical description of the dielectric function. We will show that the B-spline parametrization, which requires no prior knowledge about the film or its interaction with light, is a fast and simple-to-apply method that accurately determines thickness, surface roughness, and the dielectric constants of hydrogenated amorphous carbon thin films. Analysis of the deposition process provides us with information about the high deposition rate, the nucleation stage, and the homogeneity in depth of the deposited film. Finally, we show that the B-spline parametrization can serve as a stepping stone to physics-based models, such as the Tauc-Lorentz oscillator.

  11. Optimization and dynamics of protein-protein complexes using B-splines.

    PubMed

    Gillilan, Richard E; Lilien, Ryan H

    2004-10-01

    A moving-grid approach for optimization and dynamics of protein-protein complexes is introduced, which utilizes cubic B-spline interpolation for rapid energy and force evaluation. The method allows for the efficient use of full electrostatic potentials joined smoothly to multipoles at long distance so that multiprotein simulation is possible. Using a recently published benchmark of 58 protein complexes, we examine the performance and quality of the grid approximation, refining cocrystallized complexes to within 0.68 Å RMSD of interface atoms, close to the optimum 0.63 Å produced by the underlying MMFF94 force field. We quantify the theoretical statistical advantage of using minimization in a stochastic search in the case of two rigid bodies, and contrast it with the underlying cost of conjugate gradient minimization using B-splines. The volumes of conjugate gradient minimization basins of attraction in cocrystallized systems are generally orders of magnitude larger than well volumes based on energy thresholds needed to discriminate native from nonnative states; nonetheless, computational cost is significant. Molecular dynamics using B-splines is doubly efficient due to the combined advantages of rapid force evaluation and large simulation step sizes. Large basins localized around the native state and other possible binding sites are identifiable during simulations of protein-protein motion. In addition to providing increased modeling detail, B-splines offer new algorithmic possibilities that should be valuable in refining docking candidates and studying global complex behavior.
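
    Below is a hedged sketch of the grid idea reduced to two dimensions: tabulate a potential on a grid once, fit a cubic tensor-product spline, and then read off energies and forces (negative gradients) from the spline's analytic derivatives during optimization or dynamics. The toy potential stands in for the paper's precomputed electrostatic grids.

```python
# Hedged 2D sketch: spline-interpolated energies and forces from a tabulated grid.
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(-5.0, 5.0, 81)
y = np.linspace(-5.0, 5.0, 81)
X, Y = np.meshgrid(x, y, indexing="ij")
potential = np.exp(-(X**2 + Y**2) / 4.0) * np.cos(X) * np.sin(Y)   # toy grid

grid_spline = RectBivariateSpline(x, y, potential, kx=3, ky=3, s=0)

def energy_and_force(p):
    """Spline-interpolated energy and force at position p = (px, py)."""
    px, py = p
    e = grid_spline.ev(px, py)
    f = -np.array([grid_spline.ev(px, py, dx=1), grid_spline.ev(px, py, dy=1)])
    return float(e), f

e, f = energy_and_force((1.3, -0.7))
print(e, f)
```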

  12. Modeling dose-dependent neural processing responses using mixed effects spline models: with application to a PET study of ethanol.

    PubMed

    Guo, Ying; Bowman, F DuBois

    2008-04-01

    For functional neuroimaging studies that involve experimental stimuli measuring dose levels, e.g. of an anesthetic agent, typical statistical techniques include correlation analysis, analysis of variance or polynomial regression models. These standard approaches have limitations: correlation analysis only provides a crude estimate of the linear relationship between dose levels and brain activity; ANOVA is designed to accommodate a few specified dose levels; polynomial regression models have limited capacity to model varying patterns of association between dose levels and measured activity across the brain. These shortcomings prompt the need to develop methods that more effectively capture dose-dependent neural processing responses. We propose a class of mixed effects spline models that analyze the dose-dependent effect using either regression or smoothing splines. Our method offers flexible accommodation of different response patterns across various brain regions, controls for potential confounding factors, and accounts for subject variability in brain function. The estimates from the mixed effects spline model can be readily incorporated into secondary analyses, for instance, targeting spatial classifications of brain regions according to their modeled response profiles. The proposed spline models are also extended to incorporate interaction effects between the dose-dependent response function and other factors. We illustrate our proposed statistical methodology using data from a PET study of the effect of ethanol on brain function. A simulation study is conducted to compare the performance of the proposed mixed effects spline models and a polynomial regression model. Results show that the proposed spline models more accurately capture varying response patterns across voxels, especially at voxels with complex response shapes. Finally, the proposed spline models can be used in more general settings as a flexible modeling tool for investigating the effects of any

  13. Color management with a hammer: the B-spline fitter

    NASA Astrophysics Data System (ADS)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation, a degree-one spline, can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.

  14. Multiresolution Analysis of UTAT B-spline Curves

    NASA Astrophysics Data System (ADS)

    Lamnii, A.; Mraoui, H.; Sbibih, D.; Zidna, A.

    2011-09-01

    In this paper, we describe a multiresolution curve representation based on periodic uniform tension algebraic trigonometric (UTAT) spline wavelets of class C2 and order four. Then we determine the decomposition and the reconstruction vectors corresponding to UTAT-spline spaces. Finally, we give some applications in order to illustrate the efficiency of the proposed approach.

  15. Convexity preserving C2 rational quadratic trigonometric spline

    NASA Astrophysics Data System (ADS)

    Dube, Mridula; Tiwari, Preeti

    2012-09-01

    A C2 rational quadratic trigonometric spline interpolation has been studied using two kinds of rational quadratic trigonometric splines. It is shown that under some natural conditions the solution of the problem exists and is unique. The necessary and sufficient conditions that constrain the interpolation curves to be convex in the interpolating interval or subintervals are derived.

  16. The solution of singular optimal control problems using direct collocation and nonlinear programming

    NASA Astrophysics Data System (ADS)

    Downey, James R.; Conway, Bruce A.

    1992-08-01

    This paper describes work on the determination of optimal rocket trajectories which may include singular arcs. In recent years, direct collocation with nonlinear programming has proven to be a powerful method for solving optimal control problems. Difficulties in the application of this method can occur if the problem is singular. Techniques exist for solving singular problems indirectly using the associated adjoint formulation. Unfortunately, the adjoints are not a part of the direct formulation. It is shown how adjoint information can be obtained from the direct method to allow the solution of singular problems.

  17. An exponential spline solution of nonlinear Schrödinger equations with constant and variable coefficients

    NASA Astrophysics Data System (ADS)

    Mohammadi, Reza

    2014-03-01

    In this study, the exponential spline scheme is implemented to find a numerical solution of the nonlinear Schrödinger equations with constant and variable coefficients. The method is based on the Crank-Nicolson formulation for time integration and exponential spline functions for space integration. The error analysis, existence, stability, uniqueness and convergence properties of the method are investigated using the energy method. We show that the method is unconditionally stable and accurate of orders O(k + kh + h^2) and O(k + kh + h^4). This method is tested on three examples by using the cubic nonlinear Schrödinger equation with constant and variable coefficients and the Gross-Pitaevskii equation. The computed results are compared wherever possible with those already available in the literature. The results show that the derived method is easily implemented and approximates the exact solution very well.

  18. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX*

    PubMed Central

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    2015-01-01

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local smoothing kernels to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work, even in the independent data case. PMID:26283801

  19. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX.

    PubMed

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local smoothing kernels to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work, even in the independent data case.

  20. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
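
    A minimal sketch of the information-criterion part of this idea, assuming Gaussian errors and SciPy's make_lsq_spline: candidate cubic B-spline fits with different numbers of control points are scored by AIC and BIC. The test curve, knot layout and complexity range are illustrative, not the paper's point-cloud setting.

      import numpy as np
      from scipy.interpolate import make_lsq_spline

      # Pick the number of cubic B-spline control points for a noisy curve by
      # AIC/BIC (Gaussian-error form); a 1D curve stands in for scanner data.
      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 200)
      y = np.exp(-4 * x) * np.sin(6 * np.pi * x) + rng.normal(scale=0.02, size=x.size)

      k, n = 3, x.size
      results = []
      for n_ctrl in range(5, 26):                    # candidate model complexities
          n_interior = n_ctrl - (k + 1)              # control points = interior knots + k + 1
          interior = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]
          t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]
          rss = float(np.sum((make_lsq_spline(x, y, t, k)(x) - y) ** 2))
          aic = n * np.log(rss / n) + 2 * n_ctrl
          bic = n * np.log(rss / n) + n_ctrl * np.log(n)
          results.append((n_ctrl, aic, bic))

      print("control points chosen by AIC:", min(results, key=lambda r: r[1])[0],
            "and by BIC:", min(results, key=lambda r: r[2])[0])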

  1. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for the sampling-based scheme to outperform the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. This proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with the standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
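
    The down/up-sampling ingredient can be sketched with SciPy's cubic-spline zoom (ndimage.zoom with order=3); the synthetic image and the omitted codec stage are placeholders, and this is not the paper's CSI-modified JPEG or its ρ-domain rate model.

      import numpy as np
      from scipy import ndimage

      def psnr(a, b, peak=255.0):
          """Peak signal-to-noise ratio in dB."""
          mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
          return 10.0 * np.log10(peak ** 2 / mse)

      # Synthetic 8-bit test image (stands in for a natural image).
      yy, xx = np.mgrid[0:256, 0:256]
      img = (127.5 * (1 + np.sin(xx / 9.0) * np.cos(yy / 13.0))).astype(np.uint8)

      # Downsample by 2 before "coding", then upsample after "decoding" with a
      # cubic spline; a real JPEG encode/decode step would sit in between.
      low = ndimage.zoom(img, 0.5, order=3)
      rec = ndimage.zoom(low, 2.0, order=3)[:img.shape[0], :img.shape[1]]

      print(f"PSNR of spline down/up-sampling alone: {psnr(img, rec):.2f} dB")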

  2. Multiquadric Spline-Based Interactive Segmentation of Vascular Networks

    PubMed Central

    Meena, Sachin; Surya Prasath, V. B.; Kassim, Yasmin M.; Maude, Richard J.; Glinskii, Olga V.; Glinsky, Vladislav V.; Huxley, Virginia H.; Palaniappan, Kannappan

    2016-01-01

    Commonly used drawing tools for interactive image segmentation and labeling include active contours or boundaries, scribbles, rectangles and other shapes. Thin vessel shapes in images of vascular networks are difficult to segment using automatic or interactive methods. This paper introduces the novel use of a sparse set of user-defined seed points (supervised labels) for precisely, quickly and robustly segmenting complex biomedical images. A multiquadric spline-based binary classifier is proposed as a unique approach for interactive segmentation, using as features the color values and locations of the seed points. Epifluorescence imagery of the dura mater microvasculature is difficult to segment for quantitative applications due to challenging tissue preparation, imaging conditions, and thin, faint structures. Experimental results based on twenty epifluorescence images are used to illustrate the benefits of using a set of seed points to obtain fast and accurate interactive segmentation compared to four interactive and automatic segmentation approaches. PMID:28227856
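
    A small sketch of the core idea, assuming SciPy's multiquadric Rbf interpolant as the classifier: seed pixels carrying +1/-1 labels and (row, column, intensity) features define an interpolant whose sign labels every remaining pixel. The image, seed locations and feature scaling are invented for illustration.

      import numpy as np
      from scipy.interpolate import Rbf

      rng = np.random.default_rng(2)
      img = rng.random((64, 64))                     # stand-in for an epifluorescence image
      img[30:34, :] += 1.0                           # a bright horizontal "vessel"

      # Hypothetical user clicks: three seeds on the vessel, three on background.
      seeds = np.array([[31, 10], [32, 40], [33, 55],    # vessel
                        [5, 5], [50, 20], [15, 60]])     # background
      labels = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

      r, c = seeds[:, 0].astype(float), seeds[:, 1].astype(float)
      f = Rbf(r, c, img[seeds[:, 0], seeds[:, 1]], labels, function="multiquadric")

      # Evaluate the classifier on every pixel and threshold at zero.
      rr, cc = np.mgrid[0:64, 0:64].astype(float)
      score = f(rr.ravel(), cc.ravel(), img.ravel()).reshape(img.shape)
      print("vessel pixels found:", int((score > 0).sum()))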

  3. Multiquadric Spline-Based Interactive Segmentation of Vascular Networks.

    PubMed

    Meena, Sachin; Surya Prasath, V B; Kassim, Yasmin M; Maude, Richard J; Glinskii, Olga V; Glinsky, Vladislav V; Huxley, Virginia H; Palaniappan, Kannappan

    2016-08-01

    Commonly used drawing tools for interactive image segmentation and labeling include active contours or boundaries, scribbles, rectangles and other shapes. Thin vessel shapes in images of vascular networks are difficult to segment using automatic or interactive methods. This paper introduces the novel use of a sparse set of user-defined seed points (supervised labels) for precisely, quickly and robustly segmenting complex biomedical images. A multiquadric spline-based binary classifier is proposed as a unique approach for interactive segmentation, using as features the color values and locations of the seed points. Epifluorescence imagery of the dura mater microvasculature is difficult to segment for quantitative applications due to challenging tissue preparation, imaging conditions, and thin, faint structures. Experimental results based on twenty epifluorescence images are used to illustrate the benefits of using a set of seed points to obtain fast and accurate interactive segmentation compared to four interactive and automatic segmentation approaches.

  4. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    NASA Technical Reports Server (NTRS)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  5. Beyond Single Words: The Most Frequent Collocations in Spoken English

    ERIC Educational Resources Information Center

    Shin, Dongkwang; Nation, Paul

    2008-01-01

    This study presents a list of the highest frequency collocations of spoken English based on carefully applied criteria. In the literature, more than forty terms have been used for designating multi-word units, which are generally not well defined. To avoid this confusion, six criteria are strictly applied. The ten million word BNC spoken section…

  6. Redefining Creativity--Analyzing Definitions, Collocations, and Consequences

    ERIC Educational Resources Information Center

    Kampylis, Panagiotis G.; Valtanen, Juri

    2010-01-01

    How holistically is human creativity defined, investigated, and understood? Until recently, most scientific research on creativity has focused on its positive side. However, creativity might not only be a desirable resource but also be a potential threat. In order to redefine creativity we need to analyze and understand definitions, collocations,…

  7. Beyond triple collocation: Applications to satellite soil moisture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...
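
    For reference, the covariance-notation form of triple collocation can be sketched in a few lines, assuming three collocated estimates of the same variable with linear calibration and mutually independent zero-mean errors; the synthetic soil-moisture-like series below are illustrative only.

      import numpy as np

      def triple_collocation(x, y, z):
          """Error variances of three collocated estimates, covariance notation."""
          c = np.cov(np.vstack([x, y, z]))
          var_ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
          var_ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
          var_ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
          return var_ex, var_ey, var_ez

      # Synthetic "truth" plus three measurement systems with different noise.
      rng = np.random.default_rng(3)
      truth = 0.25 + 0.05 * np.sin(np.linspace(0.0, 20.0, 5000))
      x = truth + rng.normal(scale=0.010, size=truth.size)        # e.g. satellite A
      y = 1.1 * truth + rng.normal(scale=0.020, size=truth.size)  # e.g. satellite B (gain bias)
      z = truth + rng.normal(scale=0.005, size=truth.size)        # e.g. in situ network

      # Recovered error variances should be close to 1e-4, 4e-4 and 2.5e-5.
      print([round(float(v), 6) for v in triple_collocation(x, y, z)])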

  8. The radial basis function finite collocation approach for capturing sharp fronts in time dependent advection problems

    NASA Astrophysics Data System (ADS)

    Stevens, D.; Power, H.

    2015-10-01

    We propose a node-based local meshless method for advective transport problems that is capable of operating on centrally defined stencils and is suitable for shock-capturing purposes. High spatial convergence rates can be achieved; in excess of eighth-order in some cases. Strongly-varying smooth profiles may be captured at infinite Péclet number without instability, and for discontinuous profiles the solution exhibits neutrally stable oscillations that can be damped by introducing a small artificial diffusion parameter, allowing a good approximation to the shock-front to be maintained for long travel times without introducing spurious oscillations. The proposed method is based on local collocation with radial basis functions (RBFs) in a "finite collocation" configuration. In this approach the PDE governing and boundary equations are enforced directly within the local RBF collocation systems, rather than being reconstructed from fixed interpolating functions as is typical of finite difference, finite volume or finite element methods. In this way the interpolating basis functions naturally incorporate information from the governing PDE, including the strength and direction of the convective velocity field. By using these PDE-enhanced interpolating functions an "implicit upwinding" effect is achieved, whereby the flow of information naturally respects the specifics of the local convective field. This implicit upwinding effect allows high-convergence solutions to be obtained on centred stencils for advection problems. The method is formulated using a high-convergence implicit timestepping algorithm based on Richardson extrapolation. The spatial and temporal convergence of the proposed approach is demonstrated using smooth functions with large gradients. The capture of discontinuities is then investigated, showing how the addition of a dynamic stabilisation parameter can damp the neutrally stable oscillations with limited smearing of the shock front.

  9. Temi firthiani di linguistica applicata: "Restricted Languages" e "Collocation" (Firthian Themes in Applied Linguistics: "Restricted Languages" and "Collocation")

    ERIC Educational Resources Information Center

    Leonardi, Magda

    1977-01-01

    Discusses the importance of two Firthian themes for language teaching. The first theme, "Restricted Languages," concerns the "microlanguages" of every language (e.g., literary language, scientific, etc.). The second theme, "Collocation," shows that equivalent words in two languages rarely have the same position in…

  10. Spline-locking screw fastening strategy

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1992-01-01

    A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotics or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced space manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.

  11. Spline-Locking Screw Fastening Strategy (SLSFS)

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1991-01-01

    A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotic or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.

  12. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts.

    PubMed

    Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2016-10-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models.

  13. Techniques to improve the accuracy of presampling MTF measurement in digital X-ray imaging based on constrained spline regression.

    PubMed

    Zhou, Zhongxing; Zhu, Qingzhen; Zhao, Huijuan; Zhang, Lixin; Ma, Wenjuan; Gao, Feng

    2014-04-01

    To develop an effective curve-fitting algorithm with a regularization term for measuring the modulation transfer function (MTF) of digital radiographic imaging systems, in comparison with representative prior methods, a C-spline regression technique based upon the monotonicity and convex/concave shape restrictions of the edge spread function (ESF) was proposed for ESF estimation in this study. Two types of oversampling techniques and following four curve-fitting algorithms including the C-spline regression technique were considered for ESF estimation. A simulated edge image with a known MTF was used for accuracy determination of algorithms. Experimental edge images from two digital radiography systems were used for statistical evaluation of each curve-fitting algorithm on MTF measurements uncertainties. The simulation results show that the C-spline regression algorithm obtained a minimum MTF measurement error (an average error of 0.12% ± 0.11% and 0.18% ± 0.17% corresponding to two types of oversampling techniques, respectively, up to the cutoff frequency) among all curve-fitting algorithms. In the case of experimental edge images, the C-spline regression algorithm obtained the best uncertainty performance of MTF measurement among four curve-fitting algorithms for both the Pixarray-100 digital specimen radiography system and Hologic full-field digital mammography system. Comparisons among MTF estimates using four curve-fitting algorithms revealed that the proposed C-spline regression technique outperformed other algorithms on MTF measurements accuracy and uncertainty performance.

  14. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    PubMed

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize the phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which adopts the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.

  15. A review on the solution of Grad-Shafranov equation in the cylindrical coordinates based on the Chebyshev collocation technique

    NASA Astrophysics Data System (ADS)

    Amerian, Z.; Salem, M. K.; Salar Elahi, A.; Ghoranneviss, M.

    2017-03-01

    Equilibrium reconstruction consists of identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. Numerous methods exist to solve the Grad-Shafranov equation, which describes the equilibrium of a plasma confined by an axisymmetric magnetic field. In this paper, we propose a new numerical solution of the Grad-Shafranov equation, written in cylindrical coordinates for an axisymmetric magnetic field and solved with the Chebyshev collocation method, for the case where the source term (the current density function) on the right-hand side is linear. The Chebyshev collocation method computes highly accurate numerical solutions of differential equations. We describe a circular cross-section of the tokamak, present numerical results for the magnetic surfaces of the IR-T1 tokamak, and then compare the results with an analytical solution.
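
    The collocation ingredient can be illustrated on a much simpler model problem: the sketch below builds the standard Chebyshev differentiation matrix and solves a 1D Poisson problem with Dirichlet boundaries, a stand-in for (not a solution of) the Grad-Shafranov problem treated in the paper.

      import numpy as np

      def cheb(n):
          """Chebyshev differentiation matrix D and points x on [-1, 1]
          (standard construction, cf. Trefethen, Spectral Methods in MATLAB)."""
          if n == 0:
              return np.zeros((1, 1)), np.array([1.0])
          x = np.cos(np.pi * np.arange(n + 1) / n)
          c = np.r_[2.0, np.ones(n - 1), 2.0] * (-1.0) ** np.arange(n + 1)
          dX = x[:, None] - x[None, :]
          D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
          return D - np.diag(D.sum(axis=1)), x

      # Model problem u'' = f on [-1, 1] with u(-1) = u(1) = 0.
      n = 24
      D, x = cheb(n)
      D2 = D @ D
      f = -np.pi ** 2 * np.sin(np.pi * x)      # exact solution is sin(pi * x)

      u = np.zeros(n + 1)
      u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], f[1:-1])   # enforce Dirichlet BCs
      print("max error:", float(np.max(np.abs(u - np.sin(np.pi * x)))))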

  16. Quartic Box-Spline Reconstruction on the BCC Lattice.

    PubMed

    Kim, Minho

    2013-02-01

    This paper presents an alternative box-spline filter for the body-centered cubic (BCC) lattice, the seven-direction quartic box-spline M7 that has the same approximation order as the eight-direction quintic box-spline M8 but a lower polynomial degree, smaller support, and is computationally more efficient. When applied to reconstruction with quasi-interpolation prefilters, M7 shows less aliasing, which is verified quantitatively by integral filter metrics and frequency error kernels. To visualize and analyze distributional aliasing characteristics, each spectrum is evaluated on the planes and lines with various orientations.

  17. A Simple and Fast Spline Filtering Algorithm for Surface Metrology.

    PubMed

    Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei

    2015-01-01

    Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement.
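
    In a similar non-iterative spirit (though not with the ISO spline-filter weights), a smoothing-spline-type filter can be applied directly in the DCT domain, since a second-difference penalty with reflective boundaries is diagonal in the DCT-II basis; the profile, smoothing parameter and cut-off behaviour below are illustrative assumptions.

      import numpy as np
      from scipy.fft import dct, idct

      def dct_smoothing_filter(profile, smoothing):
          """Penalized least squares: minimize ||w - z||^2 + smoothing * ||second
          difference of w||^2. With reflective boundaries the second-difference
          operator has DCT-II eigenvalues 2*cos(pi*j/n) - 2, so the filter is a
          per-coefficient scaling rather than an iterative matrix solve."""
          n = profile.size
          lam = 2.0 * np.cos(np.pi * np.arange(n) / n) - 2.0
          gain = 1.0 / (1.0 + smoothing * lam ** 2)
          return idct(gain * dct(profile, norm="ortho"), norm="ortho")

      # Waviness + roughness + noise as a stand-in for a measured surface profile.
      x = np.linspace(0.0, 8.0, 1000)
      profile = 0.5 * np.sin(2 * np.pi * x / 8.0) + 0.05 * np.sin(2 * np.pi * x / 0.25)
      profile += np.random.default_rng(4).normal(scale=0.01, size=x.size)

      waviness = dct_smoothing_filter(profile, smoothing=1e6)   # long-wave component
      roughness = profile - waviness                            # short-wave residual
      print("RMS roughness:", float(np.sqrt(np.mean(roughness ** 2))))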

  18. An Examination of New Paradigms for Spline Approximations.

    PubMed

    Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A

    2006-01-01

    Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation-based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case.

  19. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.

  20. Modelling Of Displacement Washing Of Pulp Bed Using Orthogonal Collocation On Finite Elements

    NASA Astrophysics Data System (ADS)

    Arora, Shelly; Potůček, František; Dhaliwal, S. S.; Kukreja, V. K.

    2009-07-01

    The mechanism of displacement washing of a packed bed of porous, compressible, cylindrical particles (e.g., fibers) is presented with the help of an axial dispersion model involving the Peclet number (Pe) and Biot number (Bi). Bulk fluid concentration and intra-pore solute concentration are related by a Langmuir adsorption isotherm. Model equations have been solved using orthogonal collocation on finite elements with Lagrange interpolating polynomials as basis functions. Displacement washing has been simulated using a laboratory washing cell, and experiments have been performed on pulp beds formed from unbeaten, unbleached kraft fibers. Model-predicted values have been compared with experimental values to check the applicability of the method.

  1. Analysis of myocardial motion using generalized spline models and tagged magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Chen, Fang; Rose, Stephen E.; Wilson, Stephen J.; Veidt, Martin; Bennett, Cameron J.; Doddrell, David M.

    2000-06-01

    Heart wall motion abnormalities are the very sensitive indicators of common heart diseases, such as myocardial infarction and ischemia. Regional strain analysis is especially important in diagnosing local abnormalities and mechanical changes in the myocardium. In this work, we present a complete method for the analysis of cardiac motion and the evaluation of regional strain in the left ventricular wall. The method is based on the generalized spline models and tagged magnetic resonance images (MRI) of the left ventricle. The whole method combines dynamical tracking of tag deformation, simulating cardiac movement and accurately computing the regional strain distribution. More specifically, the analysis of cardiac motion is performed in three stages. Firstly, material points within the myocardium are tracked over time using a semi-automated snake-based tag tracking algorithm developed for this purpose. This procedure is repeated in three orthogonal axes so as to generate a set of one-dimensional sample measurements of the displacement field. The 3D-displacement field is then reconstructed from this sample set by using a generalized vector spline model. The spline reconstruction of the displacement field is explicitly expressed as a linear combination of a spline kernel function associated with each sample point and a polynomial term. Finally, the strain tensor (linear or nonlinear) with three direct components and three shear components is calculated by applying a differential operator directly to the displacement function. The proposed method is computationally effective and easy to perform on tagged MR images. The preliminary study has shown potential advantages of using this method for the analysis of myocardial motion and the quantification of regional strain.

  2. Semi-parametric analysis of dynamic contrast-enhanced MRI using Bayesian P-splines.

    PubMed

    Schmid, Volker J; Whitcher, Brandon; Yang, Guang-Zhong

    2006-01-01

    Current approaches to quantitative analysis of DCE-MRI with non-linear models involve the convolution of an arterial input function (AIF) with the contrast agent concentration at a voxel or regional level. Full quantification provides meaningful biological parameters but is complicated by the issues related to convergence, (de-)convolution of the AIF, and goodness of fit. To overcome these problems, this paper presents a penalized spline smoothing approach to model the data in a semi-parametric way. With this method, the AIF is convolved with a set of B-splines to produce the design matrix, and modeling of the resulting deconvolved biological parameters is obtained in a way that is similar to the parametric models. Further kinetic parameters are obtained by fitting a non-linear model to the estimated response function and detailed validation of the method, both with simulated and in vivo data is

  3. High Accuracy Spline Explicit Group (SEG) Approximation for Two Dimensional Elliptic Boundary Value Problems

    PubMed Central

    Goh, Joan; Hj. M. Ali, Norhashidah

    2015-01-01

    Over the last few decades, cubic splines have been widely used to approximate differential equations due to their ability to produce highly accurate solutions. In this paper, the numerical solution of a two-dimensional elliptic partial differential equation is treated by a specific cubic spline approximation in the x-direction and finite differences in the y-direction. A four point explicit group (EG) iterative scheme with an acceleration tool is then applied to the obtained system. The formulation and implementation of the method for solving physical problems are presented in detail. The computational complexity is also discussed, and comparative results are tabulated to illustrate the efficiency of the proposed method. PMID:26182211

  4. Higher-order numerical methods derived from three-point polynomial interpolation

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and Hermitian finite-difference discretization. The equations generally apply for both uniform and variable meshes. Hybrid schemes resulting from different polynomial approximations for first and second derivatives lead to the nonuniform mesh extension of the so-called compact or Pade difference techniques. A variety of fourth-order methods are described and this concept is extended to sixth-order. Solutions with these procedures are presented for the similar and non-similar boundary layer equations with and without mass transfer, the Burgers equation, and the incompressible viscous flow in a driven cavity. Finally, the interpolation procedure is used to derive higher-order temporal integration schemes and results are shown for the diffusion equation.
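
    The tridiagonally coupled idea can be illustrated with the classical fourth-order compact (Pade) scheme for the first derivative on a uniform periodic grid; the sketch below is a generic textbook version, not one of the paper's variable-mesh or sixth-order variants.

      import numpy as np

      # Compact scheme:  f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h.
      # The unknown derivatives are coupled through a (here cyclic) tridiagonal
      # system instead of widening the finite-difference stencil.
      n = 64
      h = 2.0 * np.pi / n
      x = h * np.arange(n)
      f = np.sin(x)

      A = 4.0 * np.eye(n) + np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
      rhs = 3.0 * (np.roll(f, -1) - np.roll(f, 1)) / h
      dfdx = np.linalg.solve(A, rhs)

      print("max error vs cos(x):", float(np.max(np.abs(dfdx - np.cos(x)))))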

  5. Classifier calibration using splined empirical probabilities in clinical risk prediction.

    PubMed

    Gaudoin, René; Montana, Giovanni; Jones, Simon; Aylin, Paul; Bottle, Alex

    2015-06-01

    The aims of supervised machine learning (ML) applications fall into three broad categories: classification, ranking, and calibration/probability estimation. Many ML methods and evaluation techniques relate to the first two. Nevertheless, there are many applications where having an accurate probability estimate is of great importance. Deriving accurate probabilities from the output of a ML method is therefore an active area of research, resulting in several methods to turn a ranking into class probability estimates. In this manuscript we present a method, splined empirical probabilities, based on the receiver operating characteristic (ROC) to complement existing algorithms such as isotonic regression. Unlike most other methods it works with a cumulative quantity, the ROC curve, and as such can be tagged onto an ROC analysis with minor effort. On a diverse set of measures of the quality of probability estimates (Hosmer-Lemeshow, Kullback-Leibler divergence, differences in the cumulative distribution function) using simulated and real health care data, our approach compares favourably with the standard calibration method, the pool adjacent violators algorithm used to perform isotonic regression.

  6. A Constrained Spline Estimator of a Hazard Function.

    ERIC Educational Resources Information Center

    Bloxom, Bruce

    1985-01-01

    A constrained quadratic spline is proposed as an estimator of the hazard function of a random variable. A maximum penalized likelihood procedure is used to fit the estimator to a sample of psychological response times. (Author/LMO)

  7. Detail view of redwood spline joinery of woodframe section against ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail view of redwood spline joinery of wood-frame section against adobe addition (measuring tape denotes plumb line from center of top board) - First Theatre in California, Southwest corner of Pacific & Scott Streets, Monterey, Monterey County, CA

  8. Construction of spline functions in spreadsheets to smooth experimental data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A previous manuscript detailed how spreadsheet software can be programmed to smooth experimental data via cubic splines. This addendum corrects a few errors in the previous manuscript and provides additional necessary programming steps. ...

  9. Radiation energy budget studies using collocated AVHRR and ERBE observations

    SciTech Connect

    Ackerman, S.A.; Inoue, Toshiro

    1994-03-01

    Changes in the energy balance at the top of the atmosphere are specified as a function of atmospheric and surface properties using observations from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner. By collocating the observations from the two instruments, flown on NOAA-9, the authors take advantage of the remote-sensing capabilities of each instrument. The AVHRR spectral channels were selected based on regions that are strongly transparent to clear sky conditions and are therefore useful for characterizing both surface and cloud-top conditions. The ERBE instruments make broadband observations that are important for climate studies. The approach of collocating these observations in time and space is used to study the radiative energy budget of three geographic regions: oceanic, savanna, and desert. 25 refs., 8 figs.
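
    A simple way to sketch the collocation step itself is a radius search on the unit sphere with a k-d tree; the footprint counts, matching radius and random geometry below are invented for illustration and are not the NOAA-9 processing.

      import numpy as np
      from scipy.spatial import cKDTree

      def to_unit_vectors(lat_deg, lon_deg):
          """Convert latitude/longitude to 3D unit vectors so a k-d tree can use
          Euclidean (chord) distance as a proxy for great-circle distance."""
          lat, lon = np.radians(lat_deg), np.radians(lon_deg)
          return np.c_[np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)]

      rng = np.random.default_rng(5)
      avhrr = to_unit_vectors(rng.uniform(-30, 30, 20000), rng.uniform(0, 60, 20000))
      erbe = to_unit_vectors(rng.uniform(-30, 30, 500), rng.uniform(0, 60, 500))

      radius_km = 25.0                                   # assumed matching radius
      chord = 2.0 * np.sin(radius_km / 6371.0 / 2.0)     # chord length on unit sphere

      matches = cKDTree(avhrr).query_ball_point(erbe, r=chord)
      print("mean AVHRR pixels per ERBE footprint:",
            float(np.mean([len(m) for m in matches])))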

  10. Isogeometric Divergence-conforming B-splines for the Darcy-Stokes-Brinkman Equations

    DTIC Science & Technology

    2012-01-01

    Keywords: generalized Stokes equations, B-splines, isogeometric analysis, divergence-conforming discretizations. No-slip boundary conditions are enforced weakly using Nitsche's method.

  11. Volumetric T-spline Construction Using Boolean Operations

    DTIC Science & Technology

    2013-07-01

    Trivariate T-splines are constructed from input triangular meshes using a polycube and the surface geometry, with mapping, subdivision and pillowing techniques providing a consistent parameterization at shared boundaries while preserving detected sharp features. The main steps are mapping, sharp feature preservation, pillowing and quality improvement, handling of irregular nodes, trivariate T-spline construction and Bézier extraction.

  12. A Corpus-Based Study of the Linguistic Features and Processes Which Influence the Way Collocations Are Formed: Some Implications for the Learning of Collocations

    ERIC Educational Resources Information Center

    Walker, Crayton Phillip

    2011-01-01

    In this article I examine the collocational behaviour of groups of semantically related verbs (e.g., "head, run, manage") and nouns (e.g., "issue, factor, aspect") from the domain of business English. The results of this corpus-based study show that much of the collocational behaviour exhibited by these lexical items can be explained by examining…

  13. Immersogeometric cardiovascular fluid-structure interaction analysis with divergence-conforming B-splines.

    PubMed

    Kamensky, David; Hsu, Ming-Chen; Yu, Yue; Evans, John A; Sacks, Michael S; Hughes, Thomas J R

    2017-02-01

    This paper uses a divergence-conforming B-spline fluid discretization to address the long-standing issue of poor mass conservation in immersed methods for computational fluid-structure interaction (FSI) that represent the influence of the structure as a forcing term in the fluid subproblem. We focus, in particular, on the immersogeometric method developed in our earlier work, analyze its convergence for linear model problems, then apply it to FSI analysis of heart valves, using divergence-conforming B-splines to discretize the fluid subproblem. Poor mass conservation can manifest as effective leakage of fluid through thin solid barriers. This leakage disrupts the qualitative behavior of FSI systems such as heart valves, which exist specifically to block flow. Divergence-conforming discretizations can enforce mass conservation exactly, avoiding this problem. To demonstrate the practical utility of immersogeometric FSI analysis with divergence-conforming B-splines, we use the methods described in this paper to construct and evaluate a computational model of an in vitro experiment that pumps water through an artificial valve.

  14. A family of fourth-order entropy stable nonoscillatory spectral collocation schemes for the 1-D Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2017-02-01

    High-order numerical methods that satisfy a discrete analog of the entropy inequality are uncommon. Indeed, no proofs of nonlinear entropy stability currently exist for high-order weighted essentially nonoscillatory (WENO) finite volume or weak-form finite element methods. Herein, a new family of fourth-order WENO spectral collocation schemes is developed, that are nonlinearly entropy stable for the one-dimensional compressible Navier-Stokes equations. Individual spectral elements are coupled using penalty type interface conditions. The resulting entropy stable WENO spectral collocation scheme achieves design order accuracy, maintains the WENO stencil biasing properties across element interfaces, and satisfies the summation-by-parts (SBP) operator convention, thereby ensuring nonlinear entropy stability in a diagonal norm. Numerical results demonstrating accuracy and nonoscillatory properties of the new scheme are presented for the one-dimensional Euler and Navier-Stokes equations for both continuous and discontinuous compressible flows.

  15. B-spline parameterization of spatial response in a monolithic scintillation camera

    NASA Astrophysics Data System (ADS)

    Solovov, V.; Morozov, A.; Chepel, V.; Domingos, V.; Martins, R.

    2016-09-01

    A framework for parameterization of the light response functions (LRFs) in a scintillation camera is presented. It is based on approximation of the measured or simulated photosensor response with weighted sums of uniform cubic B-splines or their tensor products. The LRFs represented in this way are smooth, computationally inexpensive to evaluate and require much less computer memory than non-parametric alternatives. The parameters are found in a straightforward way by the linear least squares method. Several techniques were developed to reduce the storage and processing-power requirements. A software library for fitting simulated and measured light response with spline functions was developed and integrated into the open-source software package ANTS2, designed for simulation and data processing for Anger camera type detectors.

  16. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    SciTech Connect

    Gu, Renliang; Dogandžić, Aleksandar

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  17. [Calculation of radioimmunochemical determinations by "spline approximation" (author's transl)].

    PubMed

    Nolte, H; Mühlen, A; Hesch, R D; Pape, J; Warnecke, U; Jüppner, H

    1976-06-01

    A simplified method, based on the "spline approximation", is reported for the calculation of the standard curves of radioimmunochemical determinations. It is possible to manipulate the mathematical function with a pocket calculator, thus making it available for a large number of users. It was shown that, in contrast to the usual procedures, it is possible to achieve optimal quality control in the preparation of the standard curves and in the interpolation of unknown plasma samples. The recalculation of interpolated values from their own standard curve revealed an error of 4.9%, which would normally be an error of interpolation. The new method was compared with two established methods for 8 different radioimmunochemical determinations. The measured values of the standard curve showed a weighting, and there was a resulting quality control of these values, which, according to their statistical evaluation, were more accurate than those of the other models (Ekins et al. and Yalow et al. (1968), in: Radioisotopes in Medicine: in vitro studies (Hayes, R. L., Goswitz, F. A. & Murphy, B. E. P., eds.), USAEC, Oak Ridge; and Rodbard et al. (1971), in: Competitive Protein Binding Assays (Odell, W. D. & Daughaday, W. H., eds.), Lippincott, Philadelphia and Toronto). In contrast with these other models, the described method makes no mathematical or kinetic preconditions with respect to the dose-response relationship. To achieve optimal reaction conditions, experimentally determined reaction data are preferable to model theories.

  18. Validation and comparison of geostatistical and spline models for spatial stream networks.

    PubMed

    Rushworth, A M; Peterson, E E; Ver Hoef, J M; Bowman, A W

    2015-08-01

    Scientists need appropriate spatial-statistical models to account for the unique features of stream network data. Recent advances provide a growing methodological toolbox for modelling these data, but general-purpose statistical software has only recently emerged, with little information about when to use different approaches. We implemented a simulation study to evaluate and validate geostatistical models that use continuous distances, and penalised spline models that use a finite discrete approximation for stream networks. Data were simulated from the geostatistical model, with performance measured by empirical prediction and fixed effects estimation. We found that both models were comparable in terms of squared error, with a slight advantage for the geostatistical models. Generally, both methods were unbiased and had valid confidence intervals. The most marked differences were found for confidence intervals on fixed-effect parameter estimates, where, for small sample sizes, the spline models underestimated variance. However, the penalised spline models were always more computationally efficient, which may be important for real-time prediction and estimation. Thus, decisions about which method to use must be influenced by the size and format of the data set, in addition to the characteristics of the environmental process and the modelling goals. ©2015 The Authors. Environmetrics published by John Wiley & Sons, Ltd.

  19. On developing B-spline registration algorithms for multi-core processors

    NASA Astrophysics Data System (ADS)

    Shackleford, J. A.; Kandasamy, N.; Sharp, G. C.

    2010-11-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.
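
    The per-point interpolation that such registrations accelerate can be sketched in plain NumPy: a uniform cubic B-spline basis weights the 4x4 neighbourhood of control points around each evaluation point. This shows only the evaluation step, not the paper's grid-aligned data structures or GPU gradient computation.

      import numpy as np

      def bspline_basis(t):
          """The four uniform cubic B-spline weights for local coordinate t in [0, 1)."""
          return np.array([(1 - t) ** 3 / 6.0,
                           (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
                           (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
                           t ** 3 / 6.0])

      def ffd_displacement(x, y, control, spacing):
          """Displacement at (x, y) from a (rows, cols, 2) control-point grid."""
          ix, tx = divmod(x / spacing, 1.0)
          iy, ty = divmod(y / spacing, 1.0)
          bx, by = bspline_basis(tx), bspline_basis(ty)
          patch = control[int(iy):int(iy) + 4, int(ix):int(ix) + 4]   # 4x4 neighbourhood
          return np.einsum("i,j,ijk->k", by, bx, patch)

      rng = np.random.default_rng(6)
      control = rng.normal(scale=2.0, size=(20, 20, 2))   # random control-point offsets
      print("displacement at (17.3, 42.8):",
            ffd_displacement(17.3, 42.8, control, spacing=8.0))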

  20. Noise correction on LANDSAT images using a spline-like algorithm

    NASA Technical Reports Server (NTRS)

    Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.

    1985-01-01

    Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, that of using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost) the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the necessity of solving a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by the data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
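
    A minimal sketch of the idea with an ordinary (unweighted) cubic spline in place of the paper's weighted variant: each image column is interpolated vertically through the uncorrupted rows to rebuild the lost lines. The synthetic image and the choice of bad rows are illustrative.

      import numpy as np
      from scipy.interpolate import CubicSpline

      rng = np.random.default_rng(7)
      rows, cols = 100, 120
      image = np.sin(np.linspace(0, 4, rows))[:, None] * np.cos(np.linspace(0, 6, cols))
      image += rng.normal(scale=0.02, size=image.shape)

      bad_rows = np.array([40, 41, 42, 75])              # striped / lost scan lines
      good_rows = np.setdiff1d(np.arange(rows), bad_rows)

      repaired = image.copy()
      for col in range(cols):
          # Fit a cubic spline to the good rows of this column, then fill the gaps.
          repaired[bad_rows, col] = CubicSpline(good_rows, image[good_rows, col])(bad_rows)

      print("max change on repaired lines:",
            float(np.abs(repaired - image)[bad_rows].max()))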

  1. A marginal approach to reduced-rank penalized spline smoothing with application to multilevel functional data.

    PubMed

    Chen, Huaihou; Wang, Yuanjia; Paik, Myunghee Cho; Choi, H Alex

    2013-10-01

    Multilevel functional data is collected in many biomedical studies. For example, in a study of the effect of Nimodipine on patients with subarachnoid hemorrhage (SAH), patients underwent multiple 4-hour treatment cycles. Within each treatment cycle, subjects' vital signs were reported every 10 minutes. This data has a natural multilevel structure with treatment cycles nested within subjects and measurements nested within cycles. Most literature on nonparametric analysis of such multilevel functional data focus on conditional approaches using functional mixed effects models. However, parameters obtained from the conditional models do not have direct interpretations as population average effects. When population effects are of interest, we may employ marginal regression models. In this work, we propose marginal approaches to fit multilevel functional data through penalized spline generalized estimating equation (penalized spline GEE). The procedure is effective for modeling multilevel correlated generalized outcomes as well as continuous outcomes without suffering from numerical difficulties. We provide a variance estimator robust to misspecification of correlation structure. We investigate the large sample properties of the penalized spline GEE estimator with multilevel continuous data and show that the asymptotics falls into two categories. In the small knots scenario, the estimated mean function is asymptotically efficient when the true correlation function is used and the asymptotic bias does not depend on the working correlation matrix. In the large knots scenario, both the asymptotic bias and variance depend on the working correlation. We propose a new method to select the smoothing parameter for penalized spline GEE based on an estimate of the asymptotic mean squared error (MSE). We conduct extensive simulation studies to examine property of the proposed estimator under different correlation structures and sensitivity of the variance estimation to the choice

  2. Composite multi-modal vibration control for a stiffened plate using non-collocated acceleration sensor and piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Li, Shengquan; Li, Juan; Mo, Yueping; Zhao, Rong

    2014-01-01

    A novel active method for multi-mode vibration control of an all-clamped stiffened plate (ACSP) is proposed in this paper, using the extended-state-observer (ESO) approach based on non-collocated acceleration sensors and piezoelectric actuators. Considering the estimated capacity of ESO for system state variables, output superposition and control coupling of other modes, external excitation, and model uncertainties simultaneously, a composite control method, i.e., the ESO based vibration control scheme, is employed to ensure the lumped disturbances and uncertainty rejection of the closed-loop system. The phenomenon of phase hysteresis and time delay, caused by non-collocated sensor/actuator pairs, degrades the performance of the control system, even inducing instability. To solve this problem, a simple proportional differential (PD) controller and acceleration feed-forward with an output predictor design produce the control law for each vibration mode. The modal frequencies, phase hysteresis loops and phase lag values due to non-collocated placement of the acceleration sensor and piezoelectric patch actuator are experimentally obtained, and the phase lag is compensated by using the Smith Predictor technology. In order to improve the vibration control performance, the chaos optimization method based on logistic mapping is employed to auto-tune the parameters of the feedback channel. The experimental control system for the ACSP is tested using the dSPACE real-time simulation platform. Experimental results demonstrate that the proposed composite active control algorithm is an effective approach for suppressing multi-modal vibrations.

  3. Inversion of ellipsometry data using constrained spline analysis.

    PubMed

    Gilliot, Mickaël

    2017-02-01

    Ellipsometry is a highly sensitive and powerful optical technique of thin film characterization. However, the indirect and nonlinear character of the ellipsometric equations requires numerical extraction of interesting information, such as thicknesses and optical constants of unknown layers. A method is described to perform the inversion of ellipsometric spectra for the simultaneous determination of thickness and optical constants without requiring particular assumptions about the shape of a model dielectric function like in the traditional method of data fitting. The method is based on a Kramers-Kronig consistent description of the imaginary part of the dielectric function using a set of points joined by pieces of third-degree polynomials. Particular connection relations constrain the shape of the constructed curve to a physically meaningful curve avoiding oscillations of natural cubic splines. The connection ordinates conditioning the shape of the dielectric function can be used, together with unknown thickness or roughness, as fitting parameters with no restriction on the material nature. Typical examples are presented concerning metal and semiconductors.

  4. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    NASA Astrophysics Data System (ADS)

    Man, Jun; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a sufficiently large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the polynomial chaos expansion (PCE) to represent and propagate the uncertainties in parameters and states. However, PCKF suffers from the so-called "curse of dimensionality". Its computational cost increases drastically with the increasing number of parameters and system nonlinearity. Furthermore, PCKF may fail to provide accurate estimations due to the joint updating scheme for strongly nonlinear models. Motivated by recent developments in uncertainty quantification and EnKF, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected at each assimilation step; the "restart" scheme is utilized to eliminate the inconsistency between updated model parameters and states variables. The performance of RAPCKF is systematically tested with numerical cases of unsaturated flow models. It is shown that the adaptive approach and restart scheme can significantly improve the performance of PCKF. Moreover, RAPCKF has been demonstrated to be more efficient than EnKF with the same computational cost.

  5. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    SciTech Connect

    Man, Jun; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and number of parameters is large, PCKF could be even more computationally expensive than EnKF. Motivated by most recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technology is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable in strongly nonlinear and high dimensional problems.

  6. [Baseline Correction Algorithm for Raman Spectroscopy Based on Non-Uniform B-Spline].

    PubMed

    Fan, Xian-guang; Wang, Hai-tao; Wang, Xin; Xu, Ying-jie; Wang, Xiu-fen; Que, Jing

    2016-03-01

    As one of the necessary steps in the data processing of Raman spectroscopy, baseline correction is commonly used to eliminate the interference of fluorescence spectra. The traditional baseline correction algorithm based on polynomial fitting is simple and easy to implement, but its flexibility is poor due to the uncertain fitting order. In this paper, instead of using polynomial fitting, a non-uniform B-spline is proposed to overcome the shortcomings of the traditional method. Building on the advantages of the traditional algorithm, the node vector of the non-uniform B-spline is fixed adaptively using the peak positions of the original Raman spectrum, and the baseline is then fitted with a fixed order. In order to verify this algorithm, the Raman spectra of parathion-methyl and colza oil are detected, their baselines are corrected using this algorithm, and the results are compared with two other baseline correction algorithms. The experimental results show that the effect of baseline correction is improved by using this algorithm with a fixed fitting order and fewer parameters, and there is no over- or under-fitting phenomenon. Therefore, the non-uniform B-spline is proved to be an effective baseline correction algorithm for Raman spectroscopy.
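
    A rough sketch of spline baseline fitting, assuming a simple threshold-based peak mask and uniform interior knots in place of the paper's adaptive, peak-position-based node vector; the synthetic spectrum and all parameters are invented.

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      rng = np.random.default_rng(8)
      shift = np.linspace(400.0, 1800.0, 1400)                  # Raman shift axis
      baseline = 1e-4 * (shift - 400.0) ** 1.3                  # slowly varying fluorescence
      peaks = (50 * np.exp(-0.5 * ((shift - 1001) / 6) ** 2)
               + 30 * np.exp(-0.5 * ((shift - 1450) / 8) ** 2))
      spectrum = baseline + peaks + rng.normal(scale=0.5, size=shift.size)

      # Crude peak mask: points far above the median are treated as peaks and
      # excluded, so the spline is fitted to baseline-dominated regions only.
      mask = spectrum < np.median(spectrum) + 3 * np.std(spectrum)
      knots = np.linspace(shift[1], shift[-2], 8)               # interior knots
      fit = LSQUnivariateSpline(shift[mask], spectrum[mask], knots, k=3)

      corrected = spectrum - fit(shift)                         # baseline-corrected spectrum
      print("residual baseline RMS:",
            float(np.sqrt(np.mean((fit(shift) - baseline) ** 2))))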

  7. Algebraic grid generation using tensor product B-splines. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Saunders, B. V.

    1985-01-01

    Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This work develops an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.

  8. Spline modelling electron insert factors using routine measurements.

    PubMed

    Biggs, S; Sobolewski, M; Murry, R; Kenny, J

    2016-01-01

    There are many methods available to predict electron output factors; however, many centres still measure the factors for each irregular electron field. Creating an electron output factor prediction model that approaches measurement accuracy--but uses already available data and is simple to implement--would be advantageous in the clinical setting. This work presents an empirical spline model for output factor prediction that requires only the measured factors for arbitrary insert shapes. Equivalent ellipses of the insert shapes are determined and then parameterised by width and ratio of perimeter to area. This takes into account changes in lateral scatter, bremsstrahlung produced in the insert material, and scatter from the edge of the insert. Agreement between prediction and measurement for the 12 MeV validation data had an uncertainty of 0.4% (1SD). The maximum recorded deviation between measurement and prediction over the range of energies was 1.0%. The validation methodology showed that one may expect an approximate uncertainty of 0.5% (1SD) when as few as eight data points are used. The level of accuracy combined with the ease with which this model can be generated demonstrates its suitability for clinical use. Implementation of this method is freely available for download at https://github.com/SimonBiggs/electronfactors.
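
    The following sketch illustrates the general shape of such a model: measured output factors are parameterised by equivalent-ellipse width and perimeter-to-area ratio and interpolated with a low-order bivariate smoothing spline. The numbers are hypothetical and the choice of SciPy's SmoothBivariateSpline is an assumption for illustration; the authors' actual implementation is available at the URL given in the record.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Hypothetical measured electron output factors, parameterised by the
# equivalent-ellipse width (cm) and the insert perimeter/area ratio (1/cm).
width = np.array([3, 4, 5, 6, 7, 8, 9, 10, 4, 6, 8, 10], dtype=float)
p_over_a = np.array([1.40, 1.10, 0.90, 0.80, 0.70, 0.60, 0.55, 0.50,
                     1.20, 0.85, 0.65, 0.52])
factor = np.array([0.955, 0.970, 0.981, 0.988, 0.993, 0.996, 0.998, 1.000,
                   0.965, 0.985, 0.994, 0.999])

# Low-order smoothing spline over the two shape parameters.
model = SmoothBivariateSpline(width, p_over_a, factor, kx=2, ky=2)

# Predict the output factor for a new insert shape.
print(model.ev(5.5, 0.85))
```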

  9. Assessment of adequate quality and collocation of reference measurements with space borne hyperspectral infrared instruments to validate retrievals of temperature and water vapour

    NASA Astrophysics Data System (ADS)

    Calbet, X.

    2015-06-01

    A method is presented to assess whether a given reference ground based point observation, typically a radiosonde measurement, is adequately collocated and sufficiently representative of space borne hyperspectral infrared instrument measurements. Once this assessment is made, the ground based data can be used to validate and potentially calibrate, with a high degree of accuracy, the hyperspectral retrievals of temperature and water vapour.

  10. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  11. A spectral collocation algorithm for two-point boundary value problem in fiber Raman amplifier equations

    NASA Astrophysics Data System (ADS)

    Tarman, Hakan I.; Berberoğlu, Halil

    2009-04-01

    A novel algorithm implementing Chebyshev spectral collocation (pseudospectral) method in combination with Newton's method is proposed for the nonlinear two-point boundary value problem (BVP) arising in solving propagation equations in fiber Raman amplifier. Moreover, an algorithm to train the known linear solution for use as a starting solution for the Newton iteration is proposed and successfully implemented. The exponential accuracy obtained by the proposed Chebyshev pseudospectral method is demonstrated on a case of the Raman propagation equations with strong nonlinearities. This is in contrast to algebraic accuracy obtained by typical solvers used in the literature. The resolving power and the efficiency of the underlying Chebyshev grid are demonstrated in comparison to a known BVP solver.
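
    A minimal Chebyshev-collocation/Newton sketch is shown below for a simple model boundary value problem (u'' = exp(u) on [-1, 1] with homogeneous Dirichlet conditions), not for the fiber Raman amplifier equations themselves. The differentiation matrix follows the standard construction popularised by Trefethen; the grid size and tolerance are assumptions for illustration.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and grid x on [-1, 1] (N >= 2)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Solve u'' = exp(u) on [-1, 1] with u(-1) = u(1) = 0 by collocation + Newton.
N = 32
D, x = cheb(N)
D2_in = (D @ D)[1:N, 1:N]        # second-derivative matrix at interior points

u = np.zeros(N - 1)              # initial guess at interior collocation points
for _ in range(25):
    F = D2_in @ u - np.exp(u)            # collocation residual
    J = D2_in - np.diag(np.exp(u))       # Jacobian of the residual
    du = np.linalg.solve(J, -F)
    u += du
    if np.linalg.norm(du) < 1e-12:
        break

u_full = np.concatenate([[0.0], u, [0.0]])   # append the boundary values
```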

  12. Optimization of aspheric multifocal contact lens by spline curve

    NASA Astrophysics Data System (ADS)

    Lien, Vu. T.; Chen, Chao-Chang A.; Qiu, Yu-Ting

    2016-10-01

    This paper presents a solution for designing aspheric multifocal contact lenses with various add powers. The multi-aspheric curve on the optical surface profile is replaced by a single freeform spline curve. A cubic spline curve is optimized to remove all unsmooth transitions between different vision correction zones while still satisfying the power distribution of the aspheric multifocal contact lens. The result shows that a contact lens using a cubic spline curve can provide not only a smooth lens surface profile but also a smooth power distribution that is difficult to obtain with an aspheric multifocal contact lens. The proposed contact lens is easily transferred to CAD format for further analysis or manufacture. Results of this study can be further applied to progressive contact lens design.

  13. Simple spline-function equations for fracture mechanics calculations

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1979-01-01

    The paper presents simple spline-function equations for fracture mechanics calculations. A spline function is a sequence of piecewise polynomials of degree n greater than 1 whose coefficients are such that the function and its first n-1 derivatives are continuous. Second-degree spline equations are presented for the compact, three-point bend, and crack-line wedge-loaded specimens. Some expressions can be used directly, so that for a cyclic crack propagation test using a compact specimen, the equation given allows the crack length to be calculated from the slope of the load-displacement curve. For an R-curve test, equations allow the crack length and stress intensity factor to be calculated from the displacement and the displacement ratio.

  14. Error bounded conic spline approximation for NC code

    NASA Astrophysics Data System (ADS)

    Shen, Liyong

    2012-01-01

    Curve fitting is an important preliminary work for data compression and path interpolation in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm consists of three main steps: divide the G01 code into subsets by discrete curvature detection, find the polygon line segment approximation for each subset within a given error and, finally, fit each polygon line segment approximation with a conic Bezier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is straightforward and efficient, requiring no global equation system or optimization problem to be solved. It is completed by the selection of the curve's weight. To make the curve more suitable for NC, we present an interval for the weight selection, and the error is then computed.

  15. Error bounded conic spline approximation for NC code

    NASA Astrophysics Data System (ADS)

    Shen, Liyong

    2011-12-01

    Curve fitting is an important preliminary work for data compression and path interpolation in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm consists of three main steps: divide the G01 code into subsets by discrete curvature detection, find the polygon line segment approximation for each subset within a given error and, finally, fit each polygon line segment approximation with a conic Bezier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is straightforward and efficient, requiring no global equation system or optimization problem to be solved. It is completed by the selection of the curve's weight. To make the curve more suitable for NC, we present an interval for the weight selection, and the error is then computed.

  16. BOX SPLINE BASED 3D TOMOGRAPHIC RECONSTRUCTION OF DIFFUSION PROPAGATORS FROM MRI DATA.

    PubMed

    Ye, Wenxing; Portnoy, Sharon; Entezari, Alireza; Vemuri, Baba C; Blackband, Stephen J

    2011-06-09

    This paper introduces a tomographic approach for reconstruction of diffusion propagators, P( r ), in a box spline framework. Box splines are chosen as basis functions for high-order approximation of P( r ) from the diffusion signal. Box splines are a generalization of B-splines to the multivariate setting that are particularly useful in the context of tomographic reconstruction. The X-Ray or Radon transform of a (tensor-product B-spline or a non-separable) box spline is a box spline: the space of box splines is closed under the Radon transform. We present synthetic and real multi-shell diffusion-weighted MR data experiments that demonstrate the increased accuracy of P( r ) reconstruction as the order of basis functions is increased.

  17. An L1 smoothing spline algorithm with cross validation

    NASA Astrophysics Data System (ADS)

    Bosworth, Ken W.; Lall, Upmanu

    1993-08-01

    We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces W_M(D). We assume one is given data of the form y_i = f(t_i) + ε_i, i = 1, ..., N, with {t_i} ⊂ D, where the ε_i are errors with E(ε_i) = 0 and f is assumed to belong to W_M. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, s_λ, of the optimization problem: minimize (1/N) Σ_{i=1}^{N} |y_i − g(t_i)| + λ J_M(g), where J_M(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, s_λ, would be expected to give robust smoothed estimates of f in situations where the ε_i come from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing s_λ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 taken on the above SQPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
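
    A discrete one-dimensional analogue of the LAD smoothing objective (L1 data fit plus a squared second-difference roughness penalty) can be written down and solved directly with a convex-optimization package, as sketched below. This is not the thin-plate/SQP continuation algorithm of the paper, CVXPY is an assumed dependency, and in practice λ would be chosen by a cross-validation score as the record describes.

```python
import numpy as np
import cvxpy as cp

# hypothetical noisy data with heavy-tailed errors
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_t(df=2, size=t.size)

lam = 50.0                       # smoothing parameter lambda (fixed here)
g = cp.Variable(t.size)

# (1/N) * sum |y_i - g_i|  +  lambda * sum (second differences of g)^2
objective = cp.Minimize(cp.norm1(y - g) / t.size
                        + lam * cp.sum_squares(cp.diff(g, 2)))
cp.Problem(objective).solve()

g_hat = g.value                  # robust (LAD) smoothed estimate of f at t_i
```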

  18. Collocated electrodynamic FDTD schemes using overlapping Yee grids and higher-order Hodge duals

    NASA Astrophysics Data System (ADS)

    Deimert, C.; Potter, M. E.; Okoniewski, M.

    2016-12-01

    The collocated Lebedev grid has previously been proposed as an alternative to the Yee grid for electromagnetic finite-difference time-domain (FDTD) simulations. While it performs better in anisotropic media, it performs poorly in isotropic media because it is equivalent to four overlapping, uncoupled Yee grids. We propose to couple the four Yee grids and fix the Lebedev method using discrete exterior calculus (DEC) with higher-order Hodge duals. We find that higher-order Hodge duals do improve the performance of the Lebedev grid, but they also improve the Yee grid by a similar amount. The effectiveness of coupling overlapping Yee grids with a higher-order Hodge dual is thus questionable. However, the theoretical foundations developed to derive these methods may be of interest in other problems.

  19. Spectral collocation and a two-level continuation scheme for dipolar Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Jeng, B.-W.; Chien, C.-S.; Chern, I.-L.

    2014-01-01

    We exploit the high accuracy of spectral collocation methods in the context of a two-level continuation scheme for computing ground state solutions of dipolar Bose-Einstein condensates (BEC), where the first kind Chebyshev polynomials and Fourier sine functions are used as the basis functions for the trial function space. The governing Gross-Pitaevskii equation (or Schrödinger equation) can be reformulated as a Schrödinger-Poisson (SP) type system [13]. The two-level continuation scheme is developed for tracing the first solution curves of the SP system, which in turn provide an appropriate initial guess for the Newton method to compute ground state solutions for 3D dipolar BEC. Extensive numerical experiments on 3D dipolar BEC and dipolar BEC in optical lattices are reported.

  20. How to fly an aircraft with control theory and splines

    NASA Technical Reports Server (NTRS)

    Karlsson, Anders

    1994-01-01

    When trying to fly an aircraft as smoothly as possible it is a good idea to use the derivatives of the pilot command instead of using the actual control. This idea was implemented with splines and control theory, in a system that tries to model an aircraft. Computer calculations in Matlab show that it is not possible to obtain sufficiently smooth control signals in this way. This is due to the fact that the splines not only try to approximate the test function, but also its derivatives. Perfect tracking is achieved, but the price is very peaky control signals and accelerations.

  1. Near minimally normed spline quasi-interpolants on uniform partitions

    NASA Astrophysics Data System (ADS)

    Barrera, D.; Ibanez, M. J.; Sablonniere, P.; Sbibih, D.

    2005-09-01

    Spline quasi-interpolants (QIs) are local approximating operators for functions or discrete data. We consider the construction of discrete and integral spline QIs on uniform partitions of the real line having small infinity norms. We call them near minimally normed QIs: they are exact on polynomial spaces and minimize a simple upper bound of their infinity norms. We give precise results for cubic and quintic QIs. Also the QI error is considered, as well as the advantage that these QIs present when approximating functions with isolated discontinuities.

  2. Investigating the Impact of Explicit Collocation Instruction on ESL Learners' Writing Ability

    ERIC Educational Resources Information Center

    Adhami-O'Brian, Soolmaz

    2014-01-01

    The present study was conducted to explore the impact of explicit collocation instruction on the ESL learners' writing ability. Furthermore, this study was an attempt to find if there is any significant difference between male and female learners on their use of collocations in writing tasks. In so doing, 63 advanced English as a Second Language…

  3. Corpora and Collocations in Chinese-English Dictionaries for Chinese Users

    ERIC Educational Resources Information Center

    Xia, Lixin

    2015-01-01

    The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…

  4. English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information

    ERIC Educational Resources Information Center

    Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji

    2012-01-01

    We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…

  5. Study on the Causes and Countermeasures of the Lexical Collocation Mistakes in College English

    ERIC Educational Resources Information Center

    Yan, Hansheng

    2010-01-01

    Lexical collocation in English is an important topic in linguistic theory, and also a research area that is increasingly emphasized in English teaching practice in China. Collocation ability in English determines whether learners can use real English masterfully in effective communication. In many years' English teaching practice,…

  6. Symmetrical and Asymmetrical Scaffolding of L2 Collocations in the Context of Concordancing

    ERIC Educational Resources Information Center

    Rezaee, Abbas Ali; Marefat, Hamideh; Saeedakhtar, Afsaneh

    2015-01-01

    Collocational competence is recognized to be integral to native-like L2 performance, and concordancing can be of assistance in gaining this competence. This study reports on an investigation into the effect of symmetrical and asymmetrical scaffolding on the collocational competence of Iranian intermediate learners of English in the context of…

  7. Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations

    ERIC Educational Resources Information Center

    Liu, Dilin

    2010-01-01

    Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,…

  8. Accuracy enhancement of digital image correlation with B-spline interpolation

    NASA Astrophysics Data System (ADS)

    Luu, Long; Wang, Zhaoyang; Vo, Minh; Hoang, Thang; Ma, Jun

    2011-08-01

    The interpolation algorithm plays an essential role in the digital image correlation (DIC) technique for shape, deformation, and motion measurements with subpixel accuracies. At present, little effort has been made to improve the interpolation methods used in DIC. In this Letter, a family of recursive interpolation schemes based on B-spline representation and its inverse gradient weighting version is employed to enhance the accuracy of DIC analysis. Theories are introduced, and simulation results are presented to illustrate the effectiveness of the method as compared with the common bicubic interpolation.
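
    For readers who want to experiment with B-spline interpolation in a DIC-like setting, SciPy's map_coordinates samples an image at arbitrary subpixel positions using a cubic B-spline interpolant, as sketched below. The image and sampling points are hypothetical; the Letter's recursive and inverse-gradient-weighted schemes are not reproduced here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical grayscale image (e.g., a speckle pattern used in DIC).
rng = np.random.default_rng(0)
image = rng.random((128, 128))

# Subpixel sampling positions (row, column) of a deformed subset.
rows = np.array([10.25, 10.25, 10.25, 11.75])
cols = np.array([20.10, 20.60, 21.10, 20.10])

# order=3 selects cubic B-spline interpolation; SciPy prefilters the image so
# that the spline interpolant passes through the original pixel values.
values = map_coordinates(image, np.vstack([rows, cols]), order=3, mode='reflect')
```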

  9. Coefficient of restitution in fractional viscoelastic compliant impacts using fractional Chebyshev collocation

    NASA Astrophysics Data System (ADS)

    Dabiri, Arman; Butcher, Eric A.; Nazari, Morad

    2017-02-01

    Compliant impacts can be modeled using linear viscoelastic constitutive models. While such impact models for realistic viscoelastic materials using integer order derivatives of force and displacement usually require a large number of parameters, compliant impact models obtained using fractional calculus, however, can be advantageous since such models use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for numerical simulation of several linear fractional viscoelastic compliant impact models in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, penetration and separation depths are also studied.

  10. Power-scalable video encoder for mobile devices based on collocated motion estimation

    NASA Astrophysics Data System (ADS)

    Jung, Joel; Bourge, Arnaud

    2004-01-01

    In this paper, a method for designing low-power video schemes is presented. Algorithms that imply a very low dissipation are required for new applications where the energy source is limited, e.g. mobile phones including a camera and video features. Whereas it can be observed that video standards are mainly designed around coding efficiency, we propose to take power consumption characteristics into account directly when designing the algorithm. More precisely, we give some guidelines for the design of low-power video codecs in the scope of modern hardware architectures and we introduce the notion of power scalability. We present an original encoder based on so-called 'Collocated Motion Estimation' designed using the proposed methodology. Experimental results show that we remain close to the coding efficiency of the reference H.264 baseline encoder while the power consumption is largely reduced in our solution. Moreover, this encoder is scalable in memory transfer and computational complexity.

  11. A stable interface element scheme for the p-adaptive lifting collocation penalty formulation

    NASA Astrophysics Data System (ADS)

    Cagnone, J. S.; Nadarajah, S. K.

    2012-02-01

    This paper presents a procedure for adaptive polynomial refinement in the context of the lifting collocation penalty (LCP) formulation. The LCP scheme is a high-order unstructured discretization method unifying the discontinuous Galerkin, spectral volume, and spectral difference schemes in single differential formulation. Due to the differential nature of the scheme, the treatment of inter-cell fluxes for spatially varying polynomial approximations is not straightforward. Specially designed elements are proposed to tackle non-conforming polynomial approximations. These elements are constructed such that a conforming interface between polynomial approximations of different degrees is recovered. The stability and conservation properties of the scheme are analyzed and various inviscid compressible flow calculations are performed to demonstrate the potential of the proposed approach.

  12. Thin-plate spline quadrature of geodetic integrals

    NASA Technical Reports Server (NTRS)

    Vangysen, Herman

    1989-01-01

    Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once-and-for-all. Examples are given of some fixed-point rules. With such rules successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.
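
    As a small illustration of interpolating thin-plate splines on scattered data (the building block behind these quadrature rules, not the geodetic quadrature itself), SciPy's RBFInterpolator (SciPy >= 1.7) supports a thin-plate-spline kernel. The data and region below are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical scattered data in a small planar region (x, y in km).
rng = np.random.default_rng(0)
pts = rng.uniform(-10.0, 10.0, size=(60, 2))
vals = np.sin(pts[:, 0] / 4.0) * np.cos(pts[:, 1] / 5.0)

# smoothing=0 gives an interpolating thin-plate spline, matching the
# "exact for interpolating thin-plate splines" setting of the quadrature rules.
tps = RBFInterpolator(pts, vals, kernel='thin_plate_spline', smoothing=0.0)

# Evaluate on a regular grid around the evaluation point.
xi = np.stack(np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21)), axis=-1)
grid_vals = tps(xi.reshape(-1, 2)).reshape(21, 21)
```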

  13. Radial Splines Would Prevent Rotation Of Bearing Race

    NASA Technical Reports Server (NTRS)

    Kaplan, Ronald M.; Chokshi, Jaisukhlal V.

    1993-01-01

    Interlocking fine-pitch ribs and grooves formed on otherwise flat mating end faces of housing and outer race of rolling-element bearing to be mounted in housing, according to proposal. Splines bear large torque loads and impose minimal distortion on raceway.

  14. On the Permanence Property in Spherical Spline Interpolation,

    DTIC Science & Technology

    1982-11-01

    aspects. Bollettino di Geodesia e Scienze Affini, vol. 1, 105-120. Freeden, W., Reuter, R. (1982): Spherical harmonic splines. Methoden und Verfahren der...least squares problems. Bollettino di Geodesia e Scienze Affini, vol. XXXV, No. 1, 181-210. Meissl, P. (1981): The use of finite elements in Physical

  15. Cubic generalized B-splines for interpolation and nonlinear filtering of images

    NASA Astrophysics Data System (ADS)

    Tshughuryan, Heghine

    1997-04-01

    This paper introduces and applies generalized (parametric) B-splines, namely cubic generalized B-splines, in various signal processing applications. The theory of generalized B-splines is briefly reviewed and some of their important properties are investigated. The paper shows the use of generalized B-splines as a tool to solve the quasioptimal algorithm problem for nonlinear filtering. Finally, experimental results are presented for oscillatory and other signals and images.

  16. Robust Stabilizing Compensators for Flexible Structures with Collocated Controls

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1996-01-01

    For flexible structures with collocated rate and attitude sensors/actuators, we characterize compensator transfer functions which guarantee modal stability even when stiffness/inertia parameters are uncertain. While the compensators are finite-dimensional, the structure models are allowed to be infinite-dimensional (continuum models), with attendant complexity of the notion of stability; thus exponential stability is not possible and the best we can obtain is strong stability. Robustness is interpreted essentially as maintaining stability in the worst case. The conditions require that the compensator transfer functions be positive real and use is made of the Kalman-Yakubovic lemma to characterize them further. The concept of positive realness is shown to be equivalent to dissipativity in infinite dimensions. In particular we show that for a subclass of compensators it is possible to make the system strongly stable as well as dissipative in an appropriate energy norm.

  17. Local validation of EU-DEM using Least Squares Collocation

    NASA Astrophysics Data System (ADS)

    Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

    2016-04-01

    In the present study we are dealing with the evaluation of the European Digital Elevation Model (EU-DEM) in a limited area, covering a few kilometers. We compare EU-DEM derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and we initially fit a surface model, in order to quantify the existing biases and outliers. Finally, we implement a methodology for orthometric height prediction, using Least Squares Collocation for the residuals remaining from the first step (after the fitted surface application). Our results, taking into account cross validation points, reveal a local consistency between EU-DEM and official heights, which is better than 1.4 meters.
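
    The basic least squares collocation prediction used in such residual modelling has the form s_hat = C_sp (C_pp + C_nn)^(-1) l. A minimal sketch is given below with an assumed Gaussian covariance model; the covariance function, its parameters, and the toy residuals are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def lsc_predict(x_obs, l_obs, x_new, c0=1.0, d0=2.0, noise=0.05):
    """Least squares collocation with an assumed Gaussian covariance model.

    Signal covariance C(d) = c0 * exp(-(d/d0)^2);
    prediction s_hat = C_sp (C_pp + noise^2 I)^(-1) l.
    """
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return c0 * np.exp(-(d / d0) ** 2)

    Cpp = cov(x_obs, x_obs) + noise ** 2 * np.eye(len(x_obs))
    Csp = cov(x_new, x_obs)
    return Csp @ np.linalg.solve(Cpp, l_obs)

# Hypothetical height residuals (m) after the fitted surface, at planar points (km).
x_obs = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.5], [0.5, 2.0], [1.8, 0.2]])
l_obs = np.array([0.12, -0.05, 0.08, 0.02, -0.10])
x_new = np.array([[1.0, 1.0]])
print(lsc_predict(x_obs, l_obs, x_new))
```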

  18. Robustness properties of LQG optimized compensators for collocated rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    In this paper we study the robustness with respect to stability of the closed-loop system with collocated rate sensor using LQG (mean square rate) optimized compensators. Our main result is that the transmission zeros of the compensator are precisely the structure modes when the actuator/sensor locations are 'pinned' and/or 'clamped': i.e., motion in the direction sensed is not allowed. We have stability even under parameter mismatch, except in the unlikely situation where such a mode frequency of the assumed system coincides with an undamped mode frequency of the real system and the corresponding mode shape is an eigenvector of the compensator transfer function matrix at that frequency. For a truncated modal model - such as that of the NASA LaRC Phase Zero Evolutionary model - the transmission zeros of the corresponding compensator transfer function can be interpreted as the structure modes when motion in the directions sensed is prohibited.

  19. Free-form deformation using lower-order B-spline for nonrigid image registration.

    PubMed

    Sun, Wei; Niessen, Wiro J; Klein, Stefan

    2014-01-01

    In traditional free-form deformation (FFD) based registration, a B-spline basis function is commonly utilized to build the transformation model. As the B-spline order increases, the corresponding B-spline function becomes smoother. However, the higher-order B-spline has a larger support region, which means higher computational cost. For a given D-dimensional nth-order B-spline, an mth-order B-spline (m ≤ n) has ((m+1)/(n+1))^D times lower computational complexity. Generally, the third-order B-spline is regarded as keeping a good balance between smoothness and computation time. A lower-order function is seldom used to construct the deformation field for registration since it is less smooth. In this research, we investigated whether lower-order B-spline functions can be utilized for efficient registration, by using a novel stochastic perturbation technique in combination with a technique that postpones smoothing to a higher B-spline order. Experiments were performed with 3D lung and brain scans, demonstrating that the lower-order B-spline FFD in combination with the proposed perturbation and postponed smoothing techniques even results in better accuracy and smoothness than the traditional third-order B-spline registration, while substantially reducing computational costs.
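
    The complexity ratio quoted above is easy to evaluate; a short sketch (assuming, as the record states, that the per-sample cost scales with the (order+1)^D support of the B-spline) is:

```python
# Relative cost of an mth-order versus an nth-order B-spline FFD in D dimensions:
# each evaluation involves (order + 1)**D control points, so the ratio is
# ((m + 1) / (n + 1))**D.
def bspline_cost_ratio(m, n, D):
    return ((m + 1) / (n + 1)) ** D

print(bspline_cost_ratio(1, 3, 3))   # first- vs third-order B-spline in 3-D -> 0.125
```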

  20. B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms

    SciTech Connect

    Bueno, G.; Ruiz, M.; Sanchez, S

    2006-10-04

    Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce women's mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study the usefulness of B-spline filtering based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection and as part of CAD systems. The technique has been applied to different density tissues. A qualitative validation shows the success of the method.

  1. B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms

    NASA Astrophysics Data System (ADS)

    Bueno, G.; Sánchez, S.; Ruiz, M.

    2006-10-01

    Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce women's mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study the usefulness of B-spline filtering based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection and as part of CAD systems. The technique has been applied to different density tissues. A qualitative validation shows the success of the method.

  2. Uniform B-Spline Curve Interpolation with Prescribed Tangent and Curvature Vectors.

    PubMed

    Okaniwa, Shoichi; Nasri, Ahmad; Lin, Hongwei; Abbas, Abdulwahed; Kineri, Yuki; Maekawa, Takashi

    2012-09-01

    This paper presents a geometric algorithm for the generation of uniform cubic B-spline curves interpolating a sequence of data points under tangent and curvature vectors constraints. To satisfy these constraints, knot insertion is used to generate additional control points which are progressively repositioned using corresponding geometric rules. Compared to existing schemes, our approach is capable of handling plane as well as space curves, has local control, and avoids the solution of the typical linear system. The effectiveness of the proposed algorithm is illustrated through several comparative examples. Applications of the method in NC machining and shape design are also outlined.

  3. Evaluation of the spline reconstruction technique for PET

    SciTech Connect

    Kastis, George A.; Kyriakopoulou, Dimitra; Gaitanis, Anastasios; Fernández, Yolanda; Hutton, Brian F.; Fokas, Athanasios S.

    2014-04-15

    Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of "custom made" cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an {sup 18}F-FDG injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an {sup 18}F-FDG injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real

  4. The determination of gravity anomalies from geoid heights using the inverse Stokes' formula, Fourier transforms, and least squares collocation

    NASA Technical Reports Server (NTRS)

    Rummel, R.; Sjoeberg, L.; Rapp, R. H.

    1978-01-01

    A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.

  5. Fast simulation of x-ray projections of spline-based surfaces using an append buffer

    NASA Astrophysics Data System (ADS)

    Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca

    2012-10-01

    Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, the simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps from tessellation of the splines, projection onto the detector and drawing are implemented in OpenCL. We introduced a special append buffer for increased performance in order to store the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate. Projections of size 640 × 480 can be generated within 254 ms. Reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task. Even in the absence of noise, they result in errors up to 9 HU on average, although projection images appear to be correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically.

  6. [Identification of protoporphyrin IX fluorescence spectrum in human blood serum by biorthogonal spline wavelet].

    PubMed

    Zhu, Dian-ming; Jin, Wan-xiang; Luo, Xiao-sen; Liu, Ying; Shen, Zhong-hua; Lu, Jian; Ni, Xiao-wu

    2008-08-01

    Due to its low content and weak fluorescence intensity, usually presenting shoulder peaks, it is often hard to locate protoporphyrin IX and identify its fluorescence intensity in human blood serum. A biorthogonal spline wavelet may work for the identification of this weak signal. By superimposing the protoporphyrin IX fluorescence signal on the background of the blood serum spectrum, a series of varied fluorescence spectra can be obtained. The protoporphyrin IX fluorescence signal is separated from the blood serum background, and the fluorescence spectrum can be divided into corresponding discrete approximation signals (a1-a7) and discrete detail signals (d1-d7) by a seven-level decomposition with the biorthogonal spline wavelet bior5.5. The signal frequency shows a gradual decrease with increasing decomposition level. The protoporphyrin IX fluorescence peak emerges at the 7th decomposition. The signal peak shifts about 2.5 nm downwards as the signal intensity decreases, whereas the signal peak from the wavelet filter remains where it was. As the synchronization between signal intensity and signal peak disappears, it is usually hard to determine the fluorescence intensity and peak location. However, the signal from the wavelet filter may ignore this effect and identify the protoporphyrin IX in human blood serum with the help of the biorthogonal spline wavelet. As the linear combination of wavelet and discrete detail signals maintains their inherent linear relations, the authors can carry out qualitative and quantitative analysis of the precise content of protoporphyrin IX in blood serum, which provides a feasible method for the application of blood serum fluorescence spectra to early tumor diagnosis.

  7. Nonlinear identification using a B-spline neural network and chaotic immune approaches

    NASA Astrophysics Data System (ADS)

    dos Santos Coelho, Leandro; Pessôa, Marcelo Wicthoff

    2009-11-01

    One of the important applications of the B-spline neural network (BSNN) is to approximate nonlinear functions defined on a compact subset of a Euclidean space in a highly parallel manner. Recently, the BSNN, a type of basis function neural network, has received increasing attention and has been applied in the field of nonlinear identification. BSNNs have the potential to "learn" the process model from input-output data or "learn" fault knowledge from past experience. BSNNs can also be used as function approximators to construct the analytical model for residual generation. However, a BSNN is trained by gradient-based methods that may fall into local minima during the learning procedure. When using feed-forward BSNNs, the quality of approximation depends on the placement of the control points (knots) of the spline functions. This paper describes the application of a modified artificial immune network inspired optimization method - the opt-aiNet - combined with sequences generated by the Hénon map to provide a stochastic search to adjust the control points of a BSNN. The numerical results presented here indicate that artificial immune network optimization methods are useful for building a good BSNN model for the nonlinear identification of two case studies: (i) the benchmark of the Box and Jenkins gas furnace, and (ii) an experimental ball-and-tube system.

  8. Defining window-boundaries for genomic analyses using smoothing spline techniques

    DOE PAGES

    Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; ...

    2015-04-17

    High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.

  9. Defining window-boundaries for genomic analyses using smoothing spline techniques

    SciTech Connect

    Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia

    2015-04-17

    High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.
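
    A minimal sketch of the boundary-finding step is given below: fit a cubic smoothing spline to a per-marker statistic and take sign changes of its second derivative as window boundaries. The synthetic data and the smoothing parameter are assumptions; the paper determines the degree of smoothing from the data rather than by the ad hoc value used here.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical genome-wide statistic: marker position (kb) and a test statistic.
rng = np.random.default_rng(0)
pos = np.linspace(0, 1000, 2000)
stat = np.sin(pos / 60.0) + 0.3 * rng.standard_normal(pos.size)

# Cubic smoothing spline along the genome; s is roughly N * noise variance
# (a tuning choice for this sketch).
spline = UnivariateSpline(pos, stat, k=3, s=pos.size * 0.18)

# Window boundaries = inflection points, i.e. sign changes of the 2nd derivative.
second = spline.derivative(n=2)(pos)
boundaries = pos[1:][np.sign(second[1:]) != np.sign(second[:-1])]
```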

  10. Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines

    NASA Astrophysics Data System (ADS)

    Kuter, S.; Akyürek, Z.; Weber, G.-W.

    2016-10-01

    Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely-sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between temporal and spatial resolution of satellite imageries. Soft or subpixel classification of low or moderate resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied on Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on the state-of-the-art nonparametric spline regression method, namely, Multivariate Adaptive Regression Splines (MARS). MARS models were trained by using MODIS top of atmospheric reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than th

  11. BSR: B-spline atomic R-matrix codes

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg

    2006-02-01

    BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summaryTitle of program: BSR Catalogue identifier: ADWY Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWY Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC Operating systems under which the new version has been tested: UNIX, Windows XP Programming language used: FORTRAN 95 Memory required to execute with typical data: Typically 256-512 Mwords. Since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem No. of bits in a word: 8 No. of processors used: 1 Has the code been vectorized or parallelized?: no No. of lines in distributed program, including test data, etc.: 69 943 No. of bytes in distributed program, including test data, etc.: 746 450 Peripherals used: scratch disk store; permanent disk store Distribution format: tar.gz Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput

  12. Collocated comparisons of continuous and filter-based PM2.5 measurements at Fort McMurray, Alberta, Canada

    PubMed Central

    Hsu, Yu-Mei; Wang, Xiaoliang; Chow, Judith C.; Watson, John G.; Percy, Kevin E.

    2016-01-01

    Collocated comparisons for three PM2.5 monitors were conducted from June 2011 to May 2013 at an air monitoring station in the residential area of Fort McMurray, Alberta, Canada, a city located in the Athabasca Oil Sands Region. Extremely cold winters (down to approximately −40°C) coupled with low PM2.5 concentrations present a challenge for continuous measurements. Both the tapered element oscillating microbalance (TEOM), operated at 40°C (i.e., TEOM40), and the Synchronized Hybrid Ambient Real-time Particulate (SHARP, a Federal Equivalent Method [FEM]) monitor were compared with a Partisol PM2.5 U.S. Federal Reference Method (FRM) sampler. While hourly TEOM40 PM2.5 concentrations were consistently ~20–50% lower than those of SHARP, no statistically significant differences were found between the 24-hr averages for FRM and SHARP. Orthogonal regression (OR) equations derived from FRM and TEOM40 were used to adjust the TEOM40 (i.e., TEOMadj) and improve its agreement with FRM, particularly for the cold season. The 12-year-long hourly TEOMadj measurements from 1999 to 2011 were derived from the OR equations between SHARP and TEOM40 obtained from the 2-year (2011–2013) collocated measurements. The trend analysis combining both TEOMadj and SHARP measurements showed a statistically significant decrease in PM2.5 concentrations, with a seasonal slope of −0.15 μg m⁻³ yr⁻¹ from 1999 to 2014. Implications: Consistency in PM2.5 measurements is needed for trend analysis. Collocated comparison among the three PM2.5 monitors demonstrated the difference between FRM and TEOM, as well as between SHARP and TEOM. The orthogonal regression equations can be applied to correct historical TEOM data to examine long-term trends within the network. PMID:26727574
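
    Orthogonal regression with equal error variances (the simplest Deming/total-least-squares case) can be computed from the principal axis of the centred data, as sketched below. The collocated values are hypothetical, and the paper's OR equations may assume a different error-variance ratio.

```python
import numpy as np

def orthogonal_regression(x, y):
    """Orthogonal (equal-error-variance Deming) regression: y ~ a + b*x.

    The slope is taken from the first principal axis of the data, i.e. the
    direction that minimises perpendicular distances to the fitted line.
    """
    cov = np.cov(x, y)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, np.argmax(eigvals)]       # principal axis
    b = v[1] / v[0]
    a = y.mean() - b * x.mean()
    return a, b

# Hypothetical collocated 24-h averages: FRM vs TEOM PM2.5 (ug/m3).
frm  = np.array([4.1, 6.8, 9.5, 12.3, 15.0, 20.4, 25.7])
teom = np.array([3.0, 5.1, 7.6,  9.8, 12.1, 16.5, 21.0])
a, b = orthogonal_regression(teom, frm)
teom_adj = a + b * teom                      # TEOM adjusted toward the FRM scale
```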

  13. Genetic evaluation of growth in a multibreed beef cattle population using random regression-linear spline models.

    PubMed

    Sánchez, J P; Misztal, I; Aguilar, I; Bertrand, J K

    2008-02-01

    The objective of this study was to examine the feasibility of using random regression-spline (RR-spline) models for fitting growth traits in a multibreed beef cattle population. To meet the objective, the results from the RR-spline model were compared with the widely used multitrait (MT) model when both were fit to a data set (1.8 million records and 1.1 million animals) provided by the American Gelbvieh Association. The effect of prior information on the EBV of sires was also investigated. In both RR-spline and MT models, the following effects were considered: individual direct and maternal additive genetic effects, contemporary group, age of the animal at measurement, direct and maternal heterosis, and direct and maternal additive genetic mean effect of the breed. Additionally, the RR-spline model included an individual direct permanent environmental effect. When both MT and RR-spline models were applied to a data set containing records for weaning weight (WWT) and yearling weight (YWT) within specified age ranges, the rankings of bulls' direct EBV (as measured via Pearson correlations) provided by both models were comparable, with slightly greater differences in the reranking of bulls observed for YWT evaluations (≥0.99 for BWT and WWT and ≥0.98 for YWT); also, some bulls dropped from the top 100 list when these lists were compared across methods. For maternal effects, the estimated correlations were slightly smaller, particularly for YWT; again, some drops from the top 100 animals were observed. As in regular MT multibreed genetic evaluations, the heterosis effects and the additive genetic effects of the breed could not be estimated from field data, because there were not enough contemporary groups with the proper composition of purebred and crossbred animals; thus, prior information based on literature values had to be included. The inclusion of prior information had a negligible effect in the overall ranking for bulls with greater than 20 birth weight

  14. On the role of exponential splines in image interpolation.

    PubMed

    Kirshner, Hagai; Porat, Moshe

    2009-10-01

    A Sobolev reproducing-kernel Hilbert space approach to image interpolation is introduced. The underlying kernels are exponential functions and are related to stochastic autoregressive image modeling. The corresponding image interpolants can be implemented effectively using compactly-supported exponential B-splines. A tight ℓ2 upper bound on the interpolation error is then derived, suggesting that the proposed exponential functions are optimal in this regard. Experimental results indicate that the proposed interpolation approach with properly-tuned, signal-dependent weights outperforms currently available polynomial B-spline models of comparable order. Furthermore, a unified approach to image interpolation by ideal and nonideal sampling procedures is derived, suggesting that the proposed exponential kernels may have a significant role in image modeling as well. Our conclusion is that the proposed Sobolev-based approach could be instrumental and a preferred alternative in many interpolation tasks.

  15. Spline function approximation for velocimeter Doppler frequency measurement

    NASA Technical Reports Server (NTRS)

    Savakis, Andreas E.; Stoughton, John W.; Kanetkar, Sharad V.

    1989-01-01

    A spline function approximation approach for measuring the Doppler spectral peak frequency in a laser Doppler velocimeter system is presented. The processor is designed for signal bursts with mean Doppler shift frequencies up to 100 MHz, input turbulence up to 20 percent, and photon counts as low as 300. The frequency-domain processor uses a bank of digital bandpass filters for the capture of the energy spectrum of each signal burst. The average values of the filter output energies, as a function of normalized frequency, are modeled as deterministic spline functions which are linearly weighted to evaluate the spectral peak location associated with the Doppler shift. The weighting coefficients are chosen to minimize the mean square error. Performance evaluation by simulation yields average errors in estimating mean Doppler frequencies within 0.5 percent for poor signal-to-noise conditions associated with a low photon count of 300 photons/burst.

  16. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples given demonstrate the efficiency and speed of these techniques.
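
    The computational saving from Toeplitz structure can be illustrated with SciPy's Levinson-type solver, which solves a symmetric Toeplitz system from its first column without forming or factorising the full matrix. The covariance model and right-hand side below are assumptions for illustration; the paper's block-Toeplitz algorithm is not reproduced.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Hypothetical stationary covariance on a regular track: a Toeplitz system C x = b.
n = 512
lag = np.arange(n)
first_col = np.exp(-lag / 25.0)          # covariance depending only on the lag
b = np.random.default_rng(0).standard_normal(n)

# Levinson-type solve: O(n^2) work and O(n) extra storage, versus O(n^3) work
# and O(n^2) storage for a dense factorisation of the full covariance matrix.
x = solve_toeplitz(first_col, b)
```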

  17. Collocation Schemes for Nonlinear Index 1 DAEs with a Singular Point

    NASA Astrophysics Data System (ADS)

    Dick, A.; Koch, O.; März, R.; Weinmüller, E.

    2011-09-01

    We discuss the convergence behavior of collocation schemes applied to approximate solutions of BVPs in nonlinear index 1 DAEs, which exhibit a critical point at the left boundary. Such a critical point of the DAE causes a singularity in the inherent nonlinear ODE system. In particular, we focus on the case when the inherent ODE system is singular with a singularity of the first kind and apply polynomial collocation to the original DAE system. We show that for a certain class of well-posed boundary value problems in DAEs having a sufficiently smooth solution, the global error of the collocation scheme converges in the collocation points with the so-called stage order. The theoretical results are supported by numerical experiments.

  18. Predicting pregnancy outcomes using longitudinal information: a penalized splines mixed-effects model approach.

    PubMed

    De la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Lee, Dae-Jin; Arribas-Gil, Ana

    2017-02-19

    We propose a semiparametric nonlinear mixed-effects model (SNMM) using penalized splines to classify longitudinal data and improve the prediction of a binary outcome. The work is motivated by a study in which different hormone levels were measured during the early stages of pregnancy, and the challenge is using this information to predict normal versus abnormal pregnancy outcomes. The aim of this paper is to compare models and estimation strategies on the basis of alternative formulations of SNMMs depending on the characteristics of the data set under consideration. For our motivating example, we address the classification problem using a particular case of the SNMM in which the parameter space has a finite dimensional component (fixed effects and variance components) and an infinite dimensional component (unknown function) that need to be estimated. The nonparametric component of the model is estimated using penalized splines. For the parametric component, we compare the advantages of using random effects versus direct modeling of the correlation structure of the errors. Numerical studies show that our approach improves over other existing methods for the analysis of this type of data. Furthermore, the results obtained using our method support the idea that explicit modeling of the serial correlation of the error term improves the prediction accuracy with respect to a model with random effects, but independent errors. Copyright © 2017 John Wiley & Sons, Ltd.
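
    A minimal sketch of the penalized-spline ingredient only (cubic B-spline basis plus a second-order difference penalty), with synthetic data standing in for a hormone trajectory; the helper names, basis size, and smoothing parameter lam are assumptions, and the full mixed-effects machinery of the paper is not reproduced here.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_basis=12, degree=3):
    """Clamped B-spline design matrix on [min(x), max(x)] (illustrative helper)."""
    xl, xr = x.min(), x.max()
    interior = np.linspace(xl, xr, n_basis - degree + 1)[1:-1]
    knots = np.r_[[xl] * (degree + 1), interior, [xr] * (degree + 1)]
    return np.column_stack([BSpline(knots, np.eye(n_basis)[j], degree)(x)
                            for j in range(n_basis)])

def pspline_fit(x, y, n_basis=12, degree=3, lam=10.0):
    """P-spline fit: ridge-type solve with a second-difference roughness penalty."""
    B = bspline_basis(x, n_basis, degree)
    D = np.diff(np.eye(n_basis), n=2, axis=0)          # second-order differences
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef

# Synthetic noisy trajectory (stand-in for a longitudinal hormone profile).
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 1.0, 80))
y = np.sin(2 * np.pi * t) + 0.2 * rng.normal(size=t.size)
y_smooth = pspline_fit(t, y)
```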

  19. From Data to Assessments and Decisions: Epi-Spline Technology

    DTIC Science & Technology

    2014-05-08

    studies, additional information derives from a much wider range of sources. Data is never obtained in a vacuum, but rather in a context that provides... examine whether the solutions of the approximate problems are indeed approximations of solutions of the actual problem, study associated convergence... studies of this area, especially in higher dimensions. 4 Background on Epi-Convergence The examination of epi-splines and the relationship between the

  20. Uncertainty Quantification using Epi-Splines and Soft Information

    DTIC Science & Technology

    2012-06-01

    prediction of the behavior of constructed models of phenomena in physics, biology, chemistry, ecology, engineered systems, politics, etc. ... Results... soft information is often more qualitative in nature, coming from a human understanding of characteristics of the system output. Engineered systems are... engineering column example illustrates the ability of the epi-spline framework to perform well under a more complex system function where we have

  1. Control theory and splines, applied to signature storage

    NASA Technical Reports Server (NTRS)

    Enqvist, Per

    1994-01-01

    In this report, the problem we study is the interpolation of a set of points in the plane using control theory. We show how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problem. In fact, the important parameters turn out to be the two eigenvalues of the control matrix.

  2. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGES

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  3. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    SciTech Connect

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  4. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  5. Explicit B-spline regularization in diffeomorphic image registration.

    PubMed

    Tustison, Nicholas J; Avants, Brian B

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline "flavored" diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools.

  6. Fast space-variant elliptical filtering using box splines.

    PubMed

    Chaudhury, Kunal Narayan; Munoz-Barrutia, Arrate; Unser, Michael

    2010-09-01

    The efficient realization of linear space-variant (non-convolution) filters is a challenging computational problem in image processing. In this paper, we demonstrate that it is possible to filter an image with a Gaussian-like elliptic window of varying size, elongation and orientation using a fixed number of computations per pixel. The associated algorithm, which is based upon a family of smooth compactly supported piecewise polynomials, the radially-uniform box splines, is realized using preintegration and local finite-differences. The radially-uniform box splines are constructed through the repeated convolution of a fixed number of box distributions, which have been suitably scaled and distributed radially in a uniform fashion. The attractive features of these box splines are their asymptotic behavior, their simple covariance structure, and their quasi-separability. They converge to Gaussians with the increase of their order, and are used to approximate anisotropic Gaussians of varying covariance simply by controlling the scales of the constituent box distributions. Based upon the second feature, we develop a technique for continuously controlling the size, elongation and orientation of these Gaussian-like functions. Finally, the quasi-separable structure, along with a certain scaling property of box distributions, is used to efficiently realize the associated space-variant elliptical filtering, which requires O(1) computations per pixel irrespective of the shape and size of the filter.

  7. Variability analysis of device-level photonics using stochastic collocation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xing, Yufei; Spina, Domenico; Li, Ang; Dhaene, Tom; Bogaerts, Wim

    2016-05-01

    Integrated photonics, and especially silicon photonics, has rapidly expanded its catalog of building blocks and functionalities. It is now maturing fast towards circuit-level integration to serve more complex applications in industry. However, performance variability due to the fabrication process and operational conditions can limit the yield of large-scale circuits. It is essential to assess this impact at the design level with an efficient variability analysis: how variations in geometrical, electrical and optical parameters propagate into component performance. In particular, when implementing wavelength-selective filters, many primary functional parameters are affected by fabrication-induced variability. The key functional parameters that we assess in this paper are the waveguide propagation constant (the effective index, essential to define the exact length of a delay line) and the coupling coefficients in coupling structures (necessary to set the power distribution over different delay lines). The Monte Carlo (MC) method is the standard method for variability analysis, thanks to its accuracy and easy implementation. However, due to its slow convergence, it requires a large set of samples (simulations or measurements), making it computationally or experimentally expensive. More efficient methods to assess such variability can be used, such as generalized polynomial chaos (gPC) expansion or stochastic collocation. In this paper, we demonstrate stochastic collocation (SC) as an efficient alternative to MC or gPC to characterize photonic devices under the effect of uncertainty. The idea of SC is to interpolate stochastic solutions in the random space by interpolation polynomials. After sampling the deterministic problem at a pre-defined set of nodes in random space, the interpolation is constructed. SC drastically reduces computation and measurement cost. Also, like the MC method, sampling-based SC is easy to implement. Its computation cost can be
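
    A minimal one-parameter sketch of stochastic collocation in the spirit described above: the deterministic model is sampled at Gauss-Hermite collocation nodes, statistics follow from the quadrature weights, and a polynomial surrogate interpolates the samples. The effective-index model and the Normal width distribution below are stand-ins, not measured device data.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def effective_index(width_nm):
    """Hypothetical deterministic solver output (stand-in for a mode solver)."""
    return 2.4 + 1.0e-3 * (width_nm - 450.0) - 2.0e-6 * (width_nm - 450.0) ** 2

# Uncertain waveguide width, assumed Normal(450 nm, 5 nm).
mu, sigma = 450.0, 5.0
nodes, weights = hermgauss(7)                          # 7 collocation nodes

# Evaluate the deterministic model only at the collocation nodes.
widths = mu + np.sqrt(2.0) * sigma * nodes
samples = effective_index(widths)

# Moments via Gauss-Hermite quadrature (change of variables for a Normal input).
mean = np.sum(weights * samples) / np.sqrt(np.pi)
var = np.sum(weights * samples**2) / np.sqrt(np.pi) - mean**2

# Polynomial surrogate interpolating the samples, cheap to re-evaluate.
surrogate = np.polynomial.Polynomial(
    np.polynomial.polynomial.polyfit(widths, samples, deg=6))
```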

  8. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    SciTech Connect

    Brown, C; Adcock, A; Azevedo, S; Liebman, J; Bond, E

    2010-12-28

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
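
    SciPy exposes the GCV-based smoothing-level selection for a single channel: according to its documentation, make_smoothing_spline picks the regularization parameter by generalized cross-validation when lam is None. The sketch below covers only that single-channel ingredient on synthetic data, not the multi-channel stitching with redundant samples handled by the adapted Hutchinson-de Hoog algorithm.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 400)                    # strictly increasing sample times
y = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=t.size)

# lam=None lets SciPy choose the smoothing level by minimizing the GCV score,
# analogous to the automatic smoothing-level selection described above.
spline = make_smoothing_spline(t, y, lam=None)
y_smooth = spline(t)
```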

  9. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Brown, Charles G., Jr.; Adcock, Aaron B.; Azevedo, Stephen G.; Liebman, Judith A.; Bond, Essex J.

    2011-03-01

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.

  10. Separating Non-Linear Deformation And Atmospheric Phase Screen (APS) For INSAR Time Series Analysis Using Least-Square Collocation

    NASA Astrophysics Data System (ADS)

    Liu, S.; Hanssen, R. F.; Samiei-Esfahany, S.; Hooper, A.; Van Leijen, F. J.

    2012-01-01

    We present a new method for separating ground deformation from atmospheric phase screen (APS) based on PSInSAR. By stochastic modeling of ground deformation and APS via their variance-covariance functions, we can not only estimate the signals with the best accuracy but also assess the estimation accuracy using least-squares collocation [5]. We evaluate the APS estimated by our method and the APS obtained from a commonly used window-based filtering method [6] by comparing them to repeat-pass interferograms over ground surfaces outside the subsiding region of Mexico City. The comparison shows that our method results in a better estimation of APS than the filtering method, which ignores the temporal variability of the APS variance. Our method is particularly useful when there are temporal gaps in a SAR time series; in such a case, the filtering method needs a large temporal window to suppress APS, which may lead to leakage from ground deformation into APS.
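
    A minimal sketch of the least-squares collocation step for a single pixel's displacement time series, assuming an exponential temporal covariance for deformation and temporally uncorrelated APS and noise; all variances, correlation times, and acquisition epochs are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 3.0, 40)                     # acquisition epochs in years (assumed)

def exp_cov(t, variance, corr_time):
    """Exponential covariance model (one of several admissible choices)."""
    return variance * np.exp(-np.abs(t[:, None] - t[None, :]) / corr_time)

C_defo = exp_cov(t, variance=4.0, corr_time=1.0)  # temporally correlated deformation
C_aps = 9.0 * np.eye(t.size)                      # APS: assumed uncorrelated in time
C_noise = 1.0 * np.eye(t.size)                    # decorrelation noise

# Synthetic observations drawn from the assumed stochastic model.
y = rng.multivariate_normal(np.zeros(t.size), C_defo + C_aps + C_noise)

# Least-squares collocation: each component is estimated by projecting the
# observations through its own covariance.
W = np.linalg.inv(C_defo + C_aps + C_noise)
defo_hat = C_defo @ W @ y
aps_hat = C_aps @ W @ y
```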

  11. Fast and stable evaluation of box-splines via the BB-form

    NASA Astrophysics Data System (ADS)

    Kim, Minho; Peters, Jörg

    2009-04-01

    To repeatedly evaluate linear combinations of box-splines in a fast and stable way, in particular along knot planes, the box-spline is converted to and tabulated as piecewise polynomial in BB-form (Bernstein-Bézier-form). We show that the BB-coefficients can be derived and stored as integers plus a rational scale factor and derive a hash table for efficiently accessing the polynomial pieces. This pre-processing, the resulting evaluation algorithm and use in a widely-used ray-tracing package are illustrated for splines based on two trivariate box-splines: the seven-directional box-spline on the Cartesian lattice and the six-directional box-spline on the face-centered cubic lattice.

  12. An improved algorithm of three B-spline curve interpolation and simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Wanjun; Xu, Dongmei; Meng, Xinhong; Zhang, Feng

    2017-03-01

    As a key interpolation technique in CNC machine tool systems, the three B-spline curve interpolator has been proposed to remedy the drawbacks of linear and circular interpolators, such as longer interpolation times and step errors that are difficult to control. In this paper, an improved algorithm for three B-spline curve interpolation and its simulation is proposed. A simulation of the three B-spline curve interpolation is developed in MATLAB 7.0 to verify the proposed modified algorithm experimentally. The simulation results show that the algorithm is correct and consistent with the requirements of three B-spline curve interpolation.

  13. Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression

    NASA Astrophysics Data System (ADS)

    Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.

    2015-12-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information on planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of changing climate, it is possible that the assumption of stationarity would no longer be valid and the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-Splines quantile regression, which allows the modeling of non-stationary data that have a dependence on covariates. Such covariates may have linear or nonlinear dependence. A Markov Chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimation of the proposed model for each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities. Keywords: Quantile regression, B-Splines functions, MCMC, Streamflow, Climate indices, non-stationarity.
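
    A minimal sketch of quantile regression on a B-spline basis of a single covariate, using statsmodels; the synthetic "climate index" and annual-maximum data, the basis size, and the quantile level are assumptions, and the Bayesian MCMC estimation of the paper is replaced here by the standard frequentist QuantReg solver.

```python
import numpy as np
from scipy.interpolate import BSpline
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-2.0, 2.0, 300))                  # hypothetical climate index
y = np.exp(0.5 * x) + rng.gumbel(scale=0.5 + 0.3 * (x + 2.0), size=x.size)

def bspline_design(x, n_basis=8, degree=3):
    """Clamped cubic B-spline design matrix (illustrative helper)."""
    xl, xr = x.min(), x.max()
    interior = np.linspace(xl, xr, n_basis - degree + 1)[1:-1]
    knots = np.r_[[xl] * (degree + 1), interior, [xr] * (degree + 1)]
    return np.column_stack([BSpline(knots, np.eye(n_basis)[j], degree)(x)
                            for j in range(n_basis)])

# Non-stationary 0.99 quantile as a smooth function of the covariate.
X = bspline_design(x)
fit = sm.QuantReg(y, X).fit(q=0.99)
q99_curve = X @ fit.params
```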

  14. Railroad inspection based on ACFM employing a non-uniform B-spline approach

    NASA Astrophysics Data System (ADS)

    Chacón Muñoz, J. M.; García Márquez, F. P.; Papaelias, M.

    2013-11-01

    The stresses sustained by rails have increased in recent years due to the use of higher train speeds and heavier axle loads. For this reason, surface and near-surface defects generated by Rolling Contact Fatigue (RCF) have become particularly significant as they can cause unexpected structural failure of the rail, resulting in severe derailments. The accident that took place in Hatfield, UK (2000), is an example of a derailment caused by the structural failure of a rail section due to RCF. Early detection of RCF rail defects is therefore of paramount importance to the rail industry. The performance of existing ultrasonic and magnetic flux leakage techniques in detecting rail surface-breaking defects, such as head checks and gauge corner cracking, is inadequate during high-speed inspection, while eddy current sensors suffer from lift-off effects. The results obtained through rail inspection experiments under simulated conditions using Alternating Current Field Measurement (ACFM) probes suggest that this technique can be applied for the accurate and reliable detection of surface-breaking defects at high inspection speeds. This paper presents the B-spline approach used for accurately filtering the noise of the raw ACFM signal obtained during high-speed tests, in order to improve the reliability of the measurements. A non-uniform B-spline approximation is employed to calculate the exact positions and the dimensions of the defects. This method generates a smooth approximation similar to the ACFM dataset points related to the rail surface-breaking defect.

  15. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation.

    PubMed

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software is developed and a series of validation experiments is carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.

  16. Generalized B-spline subdivision-surface wavelets for geometry compression.

    PubMed

    Bertram, Martin; Duchaineau, Mark A; Hamann, Bernd; Joy, Kenneth I

    2004-01-01

    We present a new construction of lifted biorthogonal wavelets on surfaces of arbitrary two-manifold topology for compression and multiresolution representation. Our method combines three approaches: subdivision surfaces of arbitrary topology, B-spline wavelets, and the lifting scheme for biorthogonal wavelet construction. The simple building blocks of our wavelet transform are local lifting operations performed on polygonal meshes with subdivision hierarchy. Starting with a coarse, irregular polyhedral base mesh, our transform creates a subdivision hierarchy of meshes converging to a smooth limit surface. At every subdivision level, geometric detail can be expanded from wavelet coefficients and added to the surface. We present wavelet constructions for bilinear, bicubic, and biquintic B-Spline subdivision. While the bilinear and bicubic constructions perform well in numerical experiments, the biquintic construction turns out to be unstable. For lossless compression, our transform can be computed in integer arithmetic, mapping integer coordinates of control points to integer wavelet coefficients. Our approach provides a highly efficient and progressive representation for complex geometries of arbitrary topology.

  17. Assessment of adequate quality and collocation of reference measurements with space-borne hyperspectral infrared instruments to validate retrievals of temperature and water vapour

    NASA Astrophysics Data System (ADS)

    Calbet, X.

    2016-01-01

    A method is presented to assess whether a given reference ground-based point observation, typically a radiosonde measurement, is adequately collocated and sufficiently representative of space-borne hyperspectral infrared instrument measurements. Once this assessment is made, the ground-based data can be used to validate and potentially calibrate, with a high degree of accuracy, the hyperspectral retrievals of temperature and water vapour.

  18. Estimates of Mode-S EHS aircraft-derived wind observation errors using triple collocation

    NASA Astrophysics Data System (ADS)

    de Haan, Siebren

    2016-08-01

    Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated observation errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained utilizing information from air traffic control surveillance radar with Selective Mode Enhanced Surveillance capabilities (Mode-S EHS). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind (zonal and meridional) observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
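
    A minimal sketch of the classical covariance-based triple-collocation estimator, assuming zero-mean, mutually uncorrelated errors and a common calibration (extended variants rescale the data sets first); the synthetic series only mimic the three wind data sources named above.

```python
import numpy as np

def triple_collocation_errors(a, b, c):
    """Error standard deviations of three collocated data sets sharing one truth,
    under the classical assumptions (independent, zero-mean errors)."""
    C = np.cov(np.vstack([a, b, c]))
    var_a = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    var_b = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    var_c = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return np.sqrt(np.maximum([var_a, var_b, var_c], 0.0))

# Synthetic collocated triplet (stand-ins for Mode-S EHS, radar, and NWP winds).
rng = np.random.default_rng(6)
truth = rng.normal(size=5000)
mode_s = truth + 0.5 * rng.normal(size=truth.size)
radar = truth + 1.0 * rng.normal(size=truth.size)
nwp = truth + 0.8 * rng.normal(size=truth.size)
print(triple_collocation_errors(mode_s, radar, nwp))   # approx. [0.5, 1.0, 0.8]
```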

  19. Spectral collocation for multiparameter eigenvalue problems arising from separable boundary value problems

    NASA Astrophysics Data System (ADS)

    Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.

    2015-10-01

    In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
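
    As a reminder of the basic ingredient only, the sketch below assembles a Chebyshev collocation differentiation matrix (in the style popularized by Trefethen) and solves a scalar two-point boundary value problem; it does not implement the multiparameter eigenvalue machinery or the MATLAB toolbox mentioned above.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and first-derivative collocation matrix."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Model problem u'' = exp(x) on (-1, 1) with u(-1) = u(1) = 0 (illustrative only).
N = 32
D, x = cheb(N)
D2 = D @ D
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], np.exp(x[1:-1]))   # impose Dirichlet BCs
```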

  20. A box spline calculus for the discretization of computed tomography reconstruction problems.

    PubMed

    Entezari, Alireza; Nilchian, Masih; Unser, Michael

    2012-08-01

    B-splines are attractive basis functions for the continuous-domain representation of biomedical images and volumes. In this paper, we prove that the extended family of box splines are closed under the Radon transform and derive explicit formulae for their transforms. Our results are general; they cover all known brands of compactly-supported box splines (tensor-product B-splines, separable or not) in any number of dimensions. The proposed box spline approach extends to non-Cartesian lattices used for discretizing the image space. In particular, we prove that the 2-D Radon transform of an N-direction box spline is generally a (nonuniform) polynomial spline of degree N-1. The proposed framework allows for a proper discretization of a variety of tomographic reconstruction problems in a box spline basis. It is of relevance for imaging modalities such as X-ray computed tomography and cryo-electron microscopy. We provide experimental results that demonstrate the practical advantages of the box spline formulation for improving the quality and efficiency of tomographic reconstruction algorithms.

  1. Image Quality Improvement in Adaptive Optics Scanning Laser Ophthalmoscopy Assisted Capillary Visualization Using B-spline-based Elastic Image Registration

    PubMed Central

    Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa

    2013-01-01

    Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
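
    For reference, one common definition of the contrast-to-noise ratio used in such comparisons is sketched below; the paper's exact definition and region selection may differ.

```python
import numpy as np

def contrast_to_noise_ratio(vessel_pixels, background_pixels):
    """CNR = (mean vessel intensity - mean background intensity) / background std.
    One common definition, assumed here for illustration, not taken from the paper."""
    vessel = np.asarray(vessel_pixels, dtype=float)
    background = np.asarray(background_pixels, dtype=float)
    return (vessel.mean() - background.mean()) / background.std()
```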

  2. Tensor splines for interpolation and approximation of DT-MRI with applications to segmentation of isolated rat hippocampi.

    PubMed

    Barmpoutis, Angelos; Vemuri, Baba C; Shepherd, Timothy M; Forder, John R

    2007-11-01

    In this paper, we present novel algorithms for statistically robust interpolation and approximation of diffusion tensors-which are symmetric positive definite (SPD) matrices-and use them in developing a significant extension to an existing probabilistic algorithm for scalar field segmentation, in order to segment diffusion tensor magnetic resonance imaging (DT-MRI) datasets. Using the Riemannian metric on the space of SPD matrices, we present a novel and robust higher order (cubic) continuous tensor product of B-splines algorithm to approximate the SPD diffusion tensor fields. The resulting approximations are appropriately dubbed tensor splines. Next, we segment the diffusion tensor field by jointly estimating the label (assigned to each voxel) field, which is modeled by a Gauss Markov measure field (GMMF) and the parameters of each smooth tensor spline model representing the labeled regions. Results of interpolation, approximation, and segmentation are presented for synthetic data and real diffusion tensor fields from an isolated rat hippocampus, along with validation. We also present comparisons of our algorithms with existing methods and show significantly improved results in the presence of noise as well as outliers.

  3. Optimal aeroassisted orbital transfer with plane change using collocation and nonlinear programming

    NASA Technical Reports Server (NTRS)

    Shi, Yun. Y.; Nelson, R. L.; Young, D. H.

    1990-01-01

    The fuel optimal control problem arising in the non-planar orbital transfer employing aeroassisted technology is addressed. The mission involves the transfer from high energy orbit (HEO) to low energy orbit (LEO) with orbital plane change. The basic strategy here is to employ a combination of propulsive maneuvers in space and aerodynamic maneuvers in the atmosphere. The basic sequence of events for the aeroassisted HEO to LEO transfer consists of three phases. In the first phase, the orbital transfer begins with a deorbit impulse at HEO which injects the vehicle into an elliptic transfer orbit with perigee inside the atmosphere. In the second phase, the vehicle is optimally controlled by lift and bank angle modulations to perform the desired orbital plane change and to satisfy heating constraints. Because of the energy loss during the turn, an impulse is required to initiate the third phase to boost the vehicle back to the desired LEO orbital altitude. The third impulse is then used to circularize the orbit at LEO. The problem is solved by a direct optimization technique which uses piecewise polynomial representation for the state and control variables and collocation to satisfy the differential equations. This technique converts the optimal control problem into a nonlinear programming problem which is solved numerically. Solutions were obtained for cases with and without heat constraints and for cases of different orbital inclination changes. The method appears to be more powerful and robust than other optimization methods. In addition, the method can handle complex dynamical constraints.
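
    For orientation, a generic trapezoidal direct-collocation transcription is written out below; the paper's piecewise-polynomial scheme may use different collocation points, but the structure is the same: the dynamics are enforced through defect constraints on each mesh interval, which enter the nonlinear program together with the boundary, heating-path, and impulse conditions.

```latex
% Trapezoidal collocation defects (a generic transcription, not necessarily the
% exact scheme of the paper): with states x_k, controls u_k on a mesh of widths h_k,
% the dynamics \dot{x} = f(x, u) become the equality constraints
\zeta_k \;=\; x_{k+1} - x_k
  - \frac{h_k}{2}\,\bigl[\, f(x_k, u_k) + f(x_{k+1}, u_{k+1}) \,\bigr] \;=\; 0,
\qquad k = 0, \dots, N-1,
% which the NLP solver drives to zero while minimizing the total impulsive
% \Delta v and respecting the heating-rate path constraints.
```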

  4. Semi-cardinal interpolation and difference equations: From cubic B-splines to a three-direction box-spline construction

    NASA Astrophysics Data System (ADS)

    Bejancu, Aurelian

    2006-12-01

    This paper considers the problem of interpolation on a semi-plane grid from a space of box-splines on the three-direction mesh. Building on a new treatment of univariate semi-cardinal interpolation for natural cubic splines, the solution is obtained as a Lagrange series with suitable localization and polynomial reproduction properties. It is proved that the extension of the natural boundary conditions to box-spline semi-cardinal interpolation attains half of the approximation order of the cardinal case.

  5. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance. CFACS has been
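
    One concrete way to read the deviation-from-smoothness measure described above is sketched here with SciPy's cubic-spline interpolant, whose stored piecewise coefficients give the third derivative on each interval directly; the airfoil-like data are synthetic and the weighting and tolerance handling of CFACS are not reproduced.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def third_derivative_jump_measure(x, y):
    """Sum of squared jumps in the third derivative of a cubic-spline interpolant
    at its interior breakpoints (one reading of the smoothness measure above)."""
    cs = CubicSpline(x, y)
    d3 = 6.0 * cs.c[0]                     # third derivative on each interval
    return float(np.sum(np.diff(d3) ** 2))

# Hypothetical airfoil-like upper-surface coordinates with a little noise.
xs = np.linspace(0.0, 1.0, 60)
ys = 0.12 * (np.sqrt(xs) - xs) + 1e-4 * np.random.default_rng(7).normal(size=xs.size)
print(third_derivative_jump_measure(xs, ys))
```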

  6. Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.

    PubMed

    Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei

    2014-02-01

    Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-Commerce websites. In those applications where the requirement of response time is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced because of the quantization process, where the visual words are considered individually, ignoring the contextual relations among words. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by a factor of 1,000, and, under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by a color-moment feature, which reduces the time cost by 9202% while maintaining comparable performance to the state-of-the-art methods.

  7. Thin-plate spline analysis of mandibular growth.

    PubMed

    Franchi, L; Baccetti, T; McNamara, J A

    2001-04-01

    The analysis of mandibular growth changes around the pubertal spurt in humans has several important implications for the diagnosis and orthopedic correction of skeletal disharmonies. The purpose of this study was to evaluate mandibular shape and size growth changes around the pubertal spurt in a longitudinal sample of subjects with normal occlusion by means of an appropriate morphometric technique (thin-plate spline analysis). Ten mandibular landmarks were identified on lateral cephalograms of 29 subjects at 6 different developmental phases. The 6 phases corresponded to 6 different maturational stages in cervical vertebrae during accelerative and decelerative phases of the pubertal growth curve of the mandible. Differences in shape between average mandibular configurations at the 6 developmental stages were visualized by means of thin-plate spline analysis and subjected to permutation test. Centroid size was used as the measure of the geometric size of each mandibular specimen. Differences in size at the 6 developmental phases were tested statistically. The results of graphical analysis indicated a statistically significant change in mandibular shape only for the growth interval from stage 3 to stage 4 in cervical vertebral maturation. Significant increases in centroid size were found at all developmental phases, with evidence of a prepubertal minimum and of a pubertal maximum. The existence of a pubertal peak in human mandibular growth, therefore, is confirmed by thin-plate spline analysis. Significant morphological changes in the mandible during the growth interval from stage 3 to stage 4 in cervical vertebral maturation may be described as an upward-forward direction of condylar growth determining an overall "shrinkage" of the mandibular configuration along the measurement of total mandibular length. This biological mechanism is particularly efficient in compensating for major increments in mandibular size at the adolescent spurt.

  8. A Bayesian-optimized spline representation of the electrocardiogram.

    PubMed

    Guilak, F G; McNames, J

    2013-11-01

    We introduce an implementation of a novel spline framework for parametrically representing electrocardiogram (ECG) waveforms. This implementation enables a flexible means to study ECG structure in large databases. Our algorithm allows researchers to identify key points in the waveform and optimally locate them in long-term recordings with minimal manual effort, thereby permitting analysis of trends in the points themselves or in metrics derived from their locations. In the work described here we estimate the location of a number of commonly-used characteristic points of the ECG signal, defined as the onsets, peaks, and offsets of the P, QRS, T, and R' waves. The algorithm applies Bayesian optimization to a linear spline representation of the ECG waveform. The locations of the knots, which are the endpoints of the piecewise linear segments used in the spline representation of the signal, serve as the estimate of the waveform's characteristic points. We obtained prior information on knot times, amplitudes, and curvature from a large manually-annotated training dataset and used the priors to optimize a Bayesian figure of merit based on estimated knot locations. In cases where morphologies vary or are subject to noise, the algorithm relies more heavily on the estimated priors for its estimate of knot locations. We compared optimized knot locations from our algorithm to two sets of manual annotations on a prospective test data set comprising 200 beats from 20 subjects not in the training set. Mean errors of characteristic point locations were less than four milliseconds, and standard deviations of errors compared favorably against reference values. This framework can easily be adapted to include additional points of interest in the ECG signal or for other biomedical detection problems on quasi-periodic signals.

  9. A numerical optimization approach to generate smoothing spherical splines

    NASA Astrophysics Data System (ADS)

    Machado, L.; Monteiro, M. Teresa T.

    2017-01-01

    Approximating data in curved spaces is a common procedure required by many modern applications arising, for instance, in the aerospace and robotics industries. Here, we are particularly interested in finding smoothing cubic splines that best fit given data in the Euclidean sphere. To achieve this aim, a least squares optimization problem based on the minimization of a certain cost functional is formulated. To solve the problem, a numerical algorithm is implemented using several routines from MATLAB toolboxes. The proposed algorithm is shown to be easy to implement, very accurate and precise for spherical data chosen randomly.

  10. Use of tensor product splines in magnet optimization

    SciTech Connect

    Davey, K.R. )

    1999-05-01

    Variational metrics and other direct search techniques have proved useful in magnetic optimization. One technique used in magnetic optimization is to first fit a smooth function to the data of the desired optimization parameter. If this fit is smoothly differentiable, a number of powerful techniques become available for the optimization. The author shows the usefulness of tensor product splines in accomplishing this end. Proper choice of augmented knot placement not only makes the fit very accurate, but also allows for differentiation. Thus the gradients required for direct optimization in bivariate and trivariate applications are robustly generated.
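
    A minimal sketch of the idea: fit a bicubic tensor-product spline to gridded samples of an objective and hand its analytic partial derivatives to a gradient-based optimizer. The design parameters, the synthetic objective, and the bounds are all assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import minimize

# Hypothetical figure of merit sampled on a regular grid of two design
# parameters (e.g. coil radius and current); values are synthetic.
radius = np.linspace(0.05, 0.10, 21)
current = np.linspace(100.0, 200.0, 21)
R, C = np.meshgrid(radius, current, indexing="ij")
F = np.sin(40.0 * R) * np.log(C)

# Interpolating bicubic tensor-product spline (s=0 forces interpolation).
spl = RectBivariateSpline(radius, current, F, kx=3, ky=3, s=0)

def objective(p):
    return float(spl.ev(p[0], p[1]))

def gradient(p):
    return np.array([float(spl.ev(p[0], p[1], dx=1)),
                     float(spl.ev(p[0], p[1], dy=1))])

res = minimize(objective, x0=[0.07, 150.0], jac=gradient,
               bounds=[(0.05, 0.10), (100.0, 200.0)])
```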

  11. An Executive System for Modeling with Rational B-Splines

    DTIC Science & Technology

    1989-05-01

    capabilities; and isophote line calculation and reflection lines. Curve modules interfaced include entering and editing points in the parametric space of a B-spline surface.

  12. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
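
    A minimal sketch of the regularized output-least-squares idea with a conjugate-gradient minimizer; a hypothetical linearized forward model G stands in for the reservoir simulator, and a second-difference operator plays the role of the regularizing term, so this shows only the structure of the method, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n_params, n_data = 36, 120                     # spline coefficients, pressure data
G = rng.normal(size=(n_data, n_params))        # hypothetical linearized forward model
m_true = rng.normal(size=n_params)
d_obs = G @ m_true + 0.05 * rng.normal(size=n_data)

beta = 1.0                                     # regularization parameter
L = np.diff(np.eye(n_params), n=2, axis=0)     # smoothness (second-difference) operator

def objective(m):
    r = G @ m - d_obs
    return r @ r + beta * np.sum((L @ m) ** 2)

def gradient(m):
    return 2.0 * G.T @ (G @ m - d_obs) + 2.0 * beta * (L.T @ (L @ m))

result = minimize(objective, np.zeros(n_params), jac=gradient, method="CG")
m_hat = result.x
```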

  13. Full-turn symplectic map from a generator in a Fourier-spline basis

    SciTech Connect

    Berg, J.S.; Warnock, R.L.; Ruth, R.D.; Forest, E.

    1993-04-01

    Given an arbitrary symplectic tracking code, one can construct a full-turn symplectic map that approximates the result of the code to high accuracy. The map is defined implicitly by a mixed-variable generating function. The implicit definition is no great drawback in practice, thanks to an efficient use of Newton's method to solve for the explicit map at each iteration. The generator is represented by a Fourier series in angle variables, with coefficients given as B-spline functions of action variables. It is constructed by using results of single-turn tracking from many initial conditions. The method has been applied to a realistic model of the SSC in three degrees of freedom. Orbits can be mapped symplectically for 10^7 turns on an IBM RS6000 model 320 workstation, in a run of about one day.

  14. A spectral collocation time-domain solver for Maxwell's equations of electromagnetics with application to radar cross-section computation

    NASA Astrophysics Data System (ADS)

    Kabakian, Adour Vahe

    1998-12-01

    Most time-domain solvers of Maxwell's equations that are applied to electromagnetic wave scattering problems are based on second- or third-order finite-difference and finite-volume schemes. Since linear wave propagation phenomena tend to be very susceptible to numerical dissipation and dispersion errors, they place high accuracy demands on the numerical methods employed. Starting with the premise that the required accuracy can be achieved more efficiently with high-order methods, a new numerical scheme based on spectral collocation is developed for solving Maxwell's equations in the time domain. The three-dimensional method is formulated over generalized curvilinear coordinates. It employs Fourier and Chebyshev spectral collocation for the spatial derivatives, while time advancement is achieved by the explicit third-order Adams-Bashforth scheme. A domain decomposition method supplementing the spectral solver is also developed, extending its range of applications to geometries more complex than those traditionally associated with spectral methods. Reflective and absorbing boundary conditions are developed specifically for the spectral scheme. Finally, a grid stretching function is incorporated into the solver, which can be used, when needed, to relieve the stability restriction associated with the Chebyshev spacing of the collocation points, at the expense of only moderate loss in accuracy. The numerical method is applied to solve electromagnetic wave scattering problems from perfectly conducting solid targets, using both single and multi-domain grids. The geometries considered are the circular cylinder, the square cylinder, and the sphere. Solutions are evaluated and validated by the accuracy of the radar cross-section and, in some instances, the surface currents. Compared to commonly used finite-difference and finite-volume solvers, the spectral scheme produces results that are one to two orders of magnitude more accurate, using grids that are an order of

  15. A few remarks on recurrence relations for geometrically continuous piecewise Chebyshevian B-splines

    NASA Astrophysics Data System (ADS)

    Mazure, Marie-Laurence

    2009-08-01

    This work complements a recent article (Mazure, J. Comp. Appl. Math. 219(2):457-470, 2008) in which we showed that T. Lyche's recurrence relations for Chebyshevian B-splines (Lyche, Constr. Approx. 1:155-178, 1985) naturally emerged from blossoms and their properties via de Boor type algorithms. Based on Chebyshevian divided differences, T. Lyche's approach concerned splines with all sections in the same Chebyshev space and with ordinary connections at the knots. Here, we consider geometrically continuous piecewise Chebyshevian splines, namely, splines with sections in different Chebyshev spaces, and with geometric connections at the knots. In this general framework, we proved in (Mazure, Constr. Approx. 20:603-624, 2004) that existence of B-spline bases could not be separated from existence of blossoms. The present paper further demonstrates the power of blossoms, in which not only B-splines are inherent, but also their recurrence relations. We compare this fact with the work by G. Mühlbach and Y. Tang (Mühlbach and Tang, Num. Alg. 41:35-78, 2006), who obtained the same recurrence relations via generalised Chebyshevian divided differences, but only under some total positivity assumption on the connection matrices. We illustrate this comparison with splines with four-dimensional sections. The general situation addressed here also highlights the differences in behaviour between B-splines and the functions of smaller and smaller supports involved in the recurrence relations.

  16. Application of the operator spline technique to nonlinear estimation and control of moving elastic systems

    NASA Technical Reports Server (NTRS)

    Karray, Fakhreddine; Dwyer, Thomas A. W., III

    1990-01-01

    A bilinear model of the vibrational dynamics of a deformable maneuvering body is described. Estimates of the deformation state are generated through a low dimensional operator spline interpolator of bilinear systems combined with a feedback linearized based observer. Upper bounds on error estimates are also generated through the operator spline, and potential application to shaping control purposes is highlighted.

  17. Vibration suppression in cutting tools using collocated piezoelectric sensors/actuators with an adaptive control algorithm

    SciTech Connect

    Radecki, Peter P; Farinholt, Kevin M; Park, Gyuhae; Bement, Matthew T

    2008-01-01

    The machining process is very important in many engineering applications. In high precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties can vary in real-time, resulting in unexpected or undesirable effects on the surface finish of the machining product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses from machining processes. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control versus baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern day machining.
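
    A minimal sketch of a fixed-gain positive position feedback compensator for a single structural mode, built with SciPy's signal tools; the filter frequency, damping ratio, gain, and sampling rate are assumptions, and the adaptive tuning described above is not included.

```python
import numpy as np
from scipy import signal

# Second-order PPF filter targeting one tool mode (illustrative parameters).
wf = 2.0 * np.pi * 1200.0     # filter natural frequency [rad/s]
zf = 0.3                      # filter damping ratio
gain = 0.5                    # positive feedback gain

ppf = signal.TransferFunction([gain * wf**2], [1.0, 2.0 * zf * wf, wf**2])

# Discretize for a 50 kHz digital controller and filter a (synthetic) collocated
# piezoelectric sensor signal to obtain the actuator command.
dt = 1.0 / 50e3
ppf_d = ppf.to_discrete(dt, method="bilinear")
t = np.arange(0.0, 0.05, dt)
y_sensor = np.sin(2.0 * np.pi * 1200.0 * t)
_, u_actuator = signal.dlsim(ppf_d, y_sensor, t=t)
```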

  18. Computational Framework for a Fully-Coupled, Collocated-Arrangement Flow Solver Applicable at all Speeds

    NASA Astrophysics Data System (ADS)

    Xiao, Cheng-Nian; Denner, Fabian; van Wachem, Berend

    2015-11-01

    A pressure-based Navier-Stokes solver which is applicable to fluid flow problems of a wide range of speeds is presented. The novel solver is based on collocated variable arrangement and uses a modified Rhie-Chow interpolation method to assure implicit pressure-velocity coupling. A Mach number biased modification to the continuity equation as well as coupling of flow and thermodynamic variables via an energy equation and equation of state enable the simulation of compressible flows belonging to transonic or supersonic Mach number regimes. The flow equation systems are all solved simultaneously, thus guaranteeing strong coupling between pressure and velocity at each iteration step. Shock-capturing is accomplished via nonlinear spatial discretisation schemes which adaptively apply an appropriate blending of first-order upwind and second-order central schemes depending on the local smoothness of the flow field. A selection of standard test problems will be presented to demonstrate the solver's capability of handling incompressible as well as compressible flow fields of vastly different speed regimes on structured as well as unstructured meshes. The authors are grateful for the financial support of Shell.

  19. Miniaturized Multi-Band Antenna via Element Collocation

    SciTech Connect

    Martin, R P

    2012-06-01

    The resonant frequency of a microstrip patch antenna may be reduced through the addition of slots in the radiating element. Expanding upon this concept in favor of a significant reduction in the tuned width of the radiator, nearly 60% of the antenna metallization is removed, as seen in the top view of the antenna’s radiating element (shown in red, below, left). To facilitate an increase in the gain of the antenna, the radiator is suspended over the ground plane (green) by an air substrate at a height of 0.250" while being mechanically supported by 0.030" thick Rogers RO4003 laminate in the same profile as the element. Although the entire surface of the antenna (red) provides 2.45 GHz operation with insignificant negative effects on performance after material removal, the smaller square microstrip in the middle must be isolated from the additional aperture in order to afford higher frequency operation. A low insertion loss path centered at 2.45 GHz may simultaneously provide considerable attenuation at additional frequencies through the implementation of a series-parallel, resonant reactive path. However, an inductive reactance alone will not permit lower frequency energy to propagate across the intended discontinuity. To mitigate this, a capacitance is introduced in series with the inductor, generating a resonance at 2.45 GHz with minimum forward transmission loss. Four of these reactive pairs are placed between the coplanar elements as shown. Therefore, the aperture of the lower-frequency outer segment includes the smaller radiator while the higher frequency section is isolated from the additional material. In order to avoid cross-polarization losses due to the orientation of a transmitter or receiver in reference to the antenna, circular polarization is realized by a quadrature coupler for each collocated antenna as seen in the bottom view of the antenna (right). To generate electromagnetic radiation concentrically rotating about the direction of propagation

  20. Reconstruction of irregularly-sampled volumetric data in efficient box spline spaces.

    PubMed

    Xu, Xie; Alvarado, Alexander Singh; Entezari, Alireza

    2012-07-01

    We present a variational framework for the reconstruction of irregularly-sampled volumetric data in non-tensor-product spline spaces. Motivated by the sampling-theoretic advantages of the body-centered cubic (BCC) lattice, this paper examines the BCC lattice and its associated box spline spaces in a variational setting. We introduce a regularization scheme for box splines that allows us to utilize the BCC lattice in a variational reconstruction framework. We demonstrate that by choosing the BCC lattice over the commonly-used Cartesian lattice as the shift-invariant representation, one can increase the quality of signal reconstruction. Moreover, the computational cost of the reconstruction process is reduced in the BCC framework due to the smaller bandwidth of the system matrix in the box spline space compared to the corresponding tensor-product B-spline space. The improvements in accuracy are quantified numerically and visualized in our experiments with synthetic as well as real biomedical datasets.