Science.gov

Sample records for spline collocation method

  1. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  2. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    SciTech Connect

    Kim, Sang Dong

    1996-12-31

    In this talk we discuss finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).
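
    The abstract above collocates Hermite cubics at Gauss points in two dimensions. As a deliberately simplified sketch of the same idea (an illustrative scheme, not the paper's), the following solves a 1D model problem by collocating a uniform cubic B-spline expansion at the knots:

```python
import numpy as np

def solve_bvp_cubic_spline(f, N):
    """Solve -u'' = f on [0,1] with u(0) = u(1) = 0 by collocating a
    uniform cubic B-spline expansion s(x) = sum_j c_j B_j(x) at the knots.
    Uses the cardinal values B(x_i) = 4/6, B(x_{i+-1}) = 1/6 and
    B''(x_i) = -2/h^2, B''(x_{i+-1}) = 1/h^2."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    n = N + 3                                  # coefficients c_{-1}, ..., c_{N+1}
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, :3] = [1/6, 4/6, 1/6]                 # boundary condition s(0) = 0
    A[-1, -3:] = [1/6, 4/6, 1/6]               # boundary condition s(1) = 0
    for i in range(N + 1):                     # collocate -s''(x_i) = f(x_i)
        A[i + 1, i:i + 3] = np.array([-1.0, 2.0, -1.0]) / h**2
        rhs[i + 1] = f(x[i])
    c = np.linalg.solve(A, rhs)
    u = (c[:-2] + 4 * c[1:-1] + c[2:]) / 6     # spline values at the knots
    return x, u

x, u = solve_bvp_cubic_spline(lambda t: np.pi**2 * np.sin(np.pi * t), 32)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

    With the exact solution sin(πx), the knot errors are at the scheme's expected second-order level.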

  3. Quintic B-spline collocation method for second order mixed boundary value problem

    NASA Astrophysics Data System (ADS)

    Lang, Feng-Gong; Xu, Xiao-Ping

    2012-04-01

    In this paper, we study a new quintic B-spline collocation method for linear and nonlinear second-order mixed boundary value problems. Convergence is analyzed and the method is shown to be fourth-order convergent. Numerical examples are also given to demonstrate the high accuracy and efficiency of our method.

  4. Parameter estimation technique for boundary value problems by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1988-01-01

    A parameter-estimation technique for boundary-integral equations of the second kind is developed. The output least-squares identification technique using the spline collocation method is considered. The convergence analysis for the numerical method is discussed. The results are applied to boundary parameter estimations for two-dimensional Laplace and Helmholtz equations.

  5. Exponential B-spline collocation method for numerical solution of the generalized regularized long wave equation

    NASA Astrophysics Data System (ADS)

    Reza, Mohammadi

    2015-05-01

    This paper presents a numerical algorithm for the time-dependent generalized regularized long wave equation with boundary conditions. We semi-discretize the continuous problem by means of the Crank-Nicolson finite difference method in the temporal direction and an exponential B-spline collocation method in the spatial direction. The method is shown to be unconditionally stable and convergent. Our scheme leads to a tri-diagonal nonlinear system and has a lower computational cost than the Sinc-collocation method. Finally, numerical examples demonstrate the stability and accuracy of this method.
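
    The abstract above pairs Crank-Nicolson time stepping with exponential B-splines in space. The sketch below isolates the Crank-Nicolson step on the linear heat equation, with an ordinary finite-difference Laplacian standing in for the spline spatial operator (an assumption made purely for brevity; the paper's GRLW scheme is nonlinear and more involved):

```python
import numpy as np

def crank_nicolson_heat(N=50, dt=1e-3, T=0.1):
    """Crank-Nicolson time stepping for u_t = u_xx on [0,1] with u = 0 at
    both ends and u(x,0) = sin(pi x); second-order in both dt and h."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    u = np.sin(np.pi * x)
    m = N - 1                                  # interior unknowns
    L = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    I = np.eye(m)
    A = I - 0.5 * dt * L                       # implicit half of the step
    B = I + 0.5 * dt * L                       # explicit half of the step
    for _ in range(round(T / dt)):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    return x, u

x, u = crank_nicolson_heat()
exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
```

    Averaging the spatial operator over the old and new time levels is what gives the method its unconditional stability.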

  6. Quintic B-spline collocation method for numerical solution of the Kuramoto-Sivashinsky equation

    NASA Astrophysics Data System (ADS)

    Mittal, R. C.; Arora, Geeta

    2010-10-01

    In this paper, the quintic B-spline collocation scheme is implemented to find numerical solution of the Kuramoto-Sivashinsky equation. The scheme is based on the Crank-Nicolson formulation for time integration and quintic B-spline functions for space integration. The accuracy of the proposed method is demonstrated by four test problems. The numerical results are found to be in good agreement with the exact solutions. Results are also shown graphically and are compared with results given in the literature.

  7. The application of cubic B-spline collocation method in impact force identification

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Chen, Xuefeng; Xue, Xiaofeng; Luo, Xinjie; Liu, Ruonan

    2015-12-01

    The accurate real-time characterization of impact events is vital during the lifetime of a mechanical product. However, the identified impact force may diverge seriously from the real one due to unknown noise contaminating the measured data, as well as the ill-conditioned system matrix. In this paper, a regularized cubic B-spline collocation (CBSC) method is developed for identifying the impact force time history, overcoming the deficiency of the ill-posed problem. By controlling the mesh size at the collocation points, the cubic B-spline function matches the profile of a typical impact event. The unknown impact force is approximated by a set of translated cubic B-spline functions, reducing the original governing equation of force identification to finding the coefficient of the basis function at each collocation point. Moreover, a modified regularization parameter selection criterion, derived from the generalized cross validation (GCV) criterion for the truncated singular value decomposition (TSVD), is introduced for the CBSC method to determine the optimum number of cubic B-spline functions. In a numerical simulation of a two degrees-of-freedom (DOF) system, the regularized CBSC method is validated under different noise levels and frequency bands of exciting forces. Finally, an impact experiment is performed on a clamped-free shell structure to confirm the performance of the regularized CBSC method. Experimental results demonstrate that the peak relative errors of impact forces identified by the regularized CBSC method are below 8%, while those of the TSVD method are approximately 30%.
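
    The TSVD regularization with a GCV-style parameter choice described above can be illustrated generically. In the sketch below, the Hilbert matrix, the noise level, and the plain GCV form are hypothetical stand-ins, not the paper's transfer matrix or its modified criterion:

```python
import numpy as np

def tsvd_solve(A, b, k):
    # invert A keeping only its k largest singular values
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

def gcv_pick_k(A, b):
    # generalized cross validation score ||A x_k - b||^2 / (m - k)^2
    m = A.shape[0]
    scores = [np.sum((A @ tsvd_solve(A, b, k) - b)**2) / (m - k)**2
              for k in range(1, m)]
    return 1 + int(np.argmin(scores))

# ill-conditioned Hilbert matrix as a hypothetical stand-in for the
# transfer matrix of the force identification problem
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # noisy measurement

x_naive = np.linalg.solve(A, b)                  # noise is wildly amplified
k = gcv_pick_k(A, b)
x_reg = tsvd_solve(A, b, k)                      # truncation suppresses noise
```

    Discarding the smallest singular values trades a small bias for a large reduction in noise amplification.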

  8. Numerical solution of fractional differential equations using cubic B-spline wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Li, Xinxiu

    2012-10-01

    Physical processes with memory and hereditary properties can be best described by fractional differential equations due to the memory effect of fractional derivatives. For that reason, reliable and efficient techniques for the solution of fractional differential equations are needed. Our aim is to generalize the wavelet collocation method to fractional differential equations using cubic B-spline wavelets. Analytical expressions of fractional derivatives in the Caputo sense for cubic B-spline functions are presented. The main characteristic of the approach is that it converts such problems into a system of algebraic equations, which is suitable for computer programming. It not only simplifies the problem but also speeds up the computation. Numerical results demonstrate the validity and applicability of the method for solving fractional differential equations.
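
    The paper above derives Caputo derivatives of cubic B-splines analytically. As an independent sanity check of the Caputo definition (not the paper's method), the standard L1 finite-difference scheme can be compared against the closed form for a power function:

```python
import math

def caputo_l1(f, t, alpha, n=2000):
    """L1 finite-difference approximation of the Caputo derivative of
    order 0 < alpha < 1 of f at time t, on a uniform grid of n steps."""
    dt = t / n
    total = 0.0
    for j in range(n):
        b = (j + 1)**(1 - alpha) - j**(1 - alpha)
        total += b * (f((n - j) * dt) - f((n - j - 1) * dt))
    return total * dt**(-alpha) / math.gamma(2.0 - alpha)

alpha = 0.5
approx = caputo_l1(lambda s: s * s, 1.0, alpha)
# closed form: D^alpha t^p = Gamma(p+1) / Gamma(p+1-alpha) * t^(p-alpha)
exact = 2.0 / math.gamma(3.0 - alpha)
```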

  9. A collocation method with cubic B-splines for solving the MRLW equation

    NASA Astrophysics Data System (ADS)

    Khalifa, A. K.; Raslan, K. R.; Alzubaidi, H. M.

    2008-03-01

    The modified regularized long wave (MRLW) equation is solved numerically by a collocation method using cubic B-spline finite elements. A linear stability analysis shows the scheme to be marginally stable. Three invariants of motion are evaluated to determine the conservation properties of the algorithm, and the numerical scheme is found to be accurate and efficient. Moreover, the interaction of two and three solitary waves is studied through computer simulation, and the development of a Maxwellian initial condition into solitary waves is also shown.

  10. Solitary wave solutions of the CMKdV equation by using the quintic B-spline collocation method

    NASA Astrophysics Data System (ADS)

    Irk, Dursun; Dağ, İdris

    2008-06-01

    In this paper, the method based on the collocation method with quintic B-spline finite elements is set up to simulate the solitary wave solution of the complex modified Korteweg-de Vries (CMKdV) equation. The Crank-Nicolson central differencing scheme has been used for the time integration and quintic B-spline functions have been used for the space integration. Propagation of the solitary wave and the interaction of two solitary waves are studied.

  11. An ADI extrapolated Crank-Nicolson orthogonal spline collocation method for nonlinear reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Fernandes, Ryan I.; Fairweather, Graeme

    2012-08-01

    An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.

  12. Quartic B-spline collocation method applied to Korteweg de Vries equation

    NASA Astrophysics Data System (ADS)

    Zin, Shazalina Mat; Majid, Ahmad Abd; Ismail, Ahmad Izani Md

    2014-07-01

    The Korteweg-de Vries (KdV) equation is known as a mathematical model of shallow water waves. The general form of this equation is u_t + εuu_x + μu_xxx = 0, where u(x,t) describes the elongation of the wave at displacement x and time t. In this work, a one-soliton solution of the KdV equation has been obtained numerically using a quartic B-spline collocation method for the displacement x and a finite difference approach for the time t. Two test problems have been identified to be solved. Approximate solutions and errors for these two test problems were obtained for different values of t. In order to assess the accuracy of the method, the L2-norm and L∞-norm have been calculated. The mass, energy and momentum of the KdV equation have also been calculated. The results obtained show that the present method can approximate the solution very well, but as time increases, the L2-norm and L∞-norm also increase.
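
    The discrete L2 and L∞ error norms reported in studies like the one above are typically computed as below; the grid and the perturbed "numerical" solution here are invented for demonstration:

```python
import numpy as np

def error_norms(u_num, u_exact, h):
    # discrete L2 norm sqrt(h * sum e_i^2) and L-infinity norm max |e_i|
    e = u_num - u_exact
    return np.sqrt(h * np.sum(e**2)), np.max(np.abs(e))

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
u_exact = np.sin(np.pi * x)
u_num = u_exact + 1e-3 * np.cos(x)      # invented "numerical" solution
l2, linf = error_norms(u_num, u_exact, h)
```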

  13. An ADI extrapolated Crank-Nicolson orthogonal spline collocation method for nonlinear reaction-diffusion systems on evolving domains

    NASA Astrophysics Data System (ADS)

    Fernandes, Ryan I.; Bialecki, Bernard; Fairweather, Graeme

    2015-10-01

    We consider the approximate solution of nonlinear reaction-diffusion systems on evolving domains that arise in a variety of areas including biology, chemistry, ecology and physics. By mapping a fixed domain onto the evolving domain at each time level, we generalize to evolving domains the ADI extrapolated Crank-Nicolson orthogonal spline collocation technique developed in [8,9] for fixed domains. The new method is tested on the Schnakenberg model and we demonstrate numerically that it preserves the second-order accuracy in time and optimal accuracy in space for piecewise Hermite cubics in various norms. Moreover, the efficacy of the method is demonstrated on several test problems from the literature which involve various types of domain evolution but for which exact solutions are not known.

  14. Numerical solutions of the reaction diffusion system by using exponential cubic B-spline collocation algorithms

    NASA Astrophysics Data System (ADS)

    Ersoy, Ozlem; Dag, Idris

    2015-12-01

    The solutions of the reaction-diffusion system are obtained by a collocation method based on exponential B-splines. The reaction-diffusion system thus turns into an iterative banded algebraic matrix equation, which is solved by way of the Thomas algorithm. The present methods are tested on both linear and nonlinear problems. The results are documented and compared with some earlier studies by use of the L∞ and relative error norms for the respective problems.

  15. A fourth order spline collocation approach for a business cycle model

    NASA Astrophysics Data System (ADS)

    Sayfy, A.; Khoury, S.; Ibdah, H.

    2013-10-01

    A collocation approach based on fourth-order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated by a nonlinear delay differential equation. The equation is approximated, and the nonlinearity is handled by an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally stable dynamic cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.
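
    The abstract above handles the nonlinearity with an iterative scheme arising from Newton's method. The sketch below applies the same Newton linearization to a finite-difference discretization of the Bratu problem, a hypothetical stand-in for the Kaleckian delay model:

```python
import numpy as np

def newton_bratu(N=64, tol=1e-12, max_iter=50):
    """Newton iteration for the finite-difference discretization of the
    Bratu problem u'' + exp(u) = 0 on (0,1) with u(0) = u(1) = 0."""
    h = 1.0 / N
    m = N - 1
    L = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    u = np.zeros(m)                      # initial guess
    for _ in range(max_iter):
        F = L @ u + np.exp(u)            # nonlinear residual
        if np.max(np.abs(F)) < tol:
            break
        J = L + np.diag(np.exp(u))       # Jacobian of the residual
        u -= np.linalg.solve(J, F)
    return u

u = newton_bratu()                       # lower-branch peak is about 0.1405
```

    Each iteration solves a linear system with the Jacobian of the discrete residual, exactly the structure a Newton-linearized collocation scheme produces.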

  16. Sextic B-spline collocation algorithm for the modified equal width equation

    NASA Astrophysics Data System (ADS)

    Hassan, Saleh M.; Alamery, D. G.

    2011-11-01

    A sextic B-spline collocation algorithm based on the fourth-order Runge-Kutta method has been developed for numerically solving the modified equal width wave equation (MEW). The conservation properties of the migration and interaction of solitary waves have been investigated. A Maxwellian initial condition pulse is also studied. This algorithm not only reduces and simplifies the computations but also yields much more accurate results.

  17. Spectral collocation methods

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.

    1989-01-01

    This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.

  18. Spectral collocation methods

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.

    1987-01-01

    This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.

  19. On the stability of an implicit spline collocation difference scheme for linear partial differential algebraic equations

    NASA Astrophysics Data System (ADS)

    Gaidomak, S. V.

    2013-09-01

    A boundary value problem for linear partial differential algebraic systems of equations with multiple characteristic curves is examined. It is assumed that the pencil of matrix functions associated with this system is smoothly equivalent to a special canonical form. Spline collocation is used to construct for this problem a difference scheme of arbitrary approximation order with respect to each independent variable. Sufficient conditions are found for this scheme to be absolutely stable.

  20. A B-Spline Method for Solving the Navier Stokes Equations

    SciTech Connect

    Johnson, Richard Wayne

    2005-01-01

    Collocation methods using piecewise polynomials, including B-splines, have been developed to find approximate solutions to both ordinary and partial differential equations. Such methods are elegant in their simplicity and efficient in their application. The spline collocation method is typically more efficient than traditional Galerkin finite element methods, which are used to solve the equations of fluid dynamics: the collocation method avoids integration, and exact formulae are available to find derivatives on spline curves and surfaces. The primary objective of the present work is to determine the requirements for the successful application of B-spline collocation to solve the coupled, steady, 2D, incompressible Navier-Stokes and continuity equations for laminar flow. The successful application of B-spline collocation included the development of an ad hoc method, dubbed the Boundary Residual method, to deal with the presence of the pressure terms in the Navier-Stokes equations. Historically, other ad hoc methods have been developed to solve the incompressible Navier-Stokes equations, including the artificial compressibility, pressure correction and penalty methods. Convergence studies show that the ad hoc Boundary Residual method converges toward an exact (manufactured) solution for the 2D, steady, incompressible Navier-Stokes and continuity equations. C1 cubic and quartic B-spline schemes employing orthogonal collocation, and C2 cubic and C3 quartic B-spline schemes with collocation at the Greville points, are investigated. The C3 quartic Greville scheme is shown to be the most efficient scheme for a given accuracy, even though the C1 quartic orthogonal scheme is the most accurate for a given partition. Two solution approaches are employed: a globally convergent zero-finding Newton's method using an LU decomposition direct solver, and the variable-metric minimization method using the BFGS update.
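
    Collocation at the Greville points, mentioned above, places the collocation sites at averages of consecutive knots. A minimal sketch (the clamped knot vector below is a made-up example):

```python
def greville_abscissae(knots, p):
    # Greville point i is the average of the p knots t_{i+1}, ..., t_{i+p}
    n = len(knots) - p - 1                 # number of B-spline basis functions
    return [sum(knots[i + 1:i + p + 1]) / p for i in range(n)]

# clamped cubic knot vector on [0, 1] with one interior knot at 0.5
pts = greville_abscissae([0, 0, 0, 0, 0.5, 1, 1, 1, 1], 3)
```

    For a clamped knot vector this yields one collocation point per basis function, with the first and last points landing on the interval ends.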

  1. Numerical Methods Using B-Splines

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Merriam, Marshal (Technical Monitor)

    1997-01-01

    The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.

  2. ABCs and fourth-order spline collocation for the solution of two-point boundary value problems over an infinite domain

    NASA Astrophysics Data System (ADS)

    Khoury, S.; Ibdah, H.; Sayfy, A.

    2013-10-01

    A mixed approach, based on cubic B-spline collocation and asymptotic boundary conditions (ABCs), is presented for the numerical solution of an extended class of two-point linear boundary value problems (BVPs) over an infinite interval as well as a system of BVPs. The condition at infinity is reduced to an asymptotic boundary condition that approaches the required value at infinity over a large finite interval. The resulting problem is handled using an adaptive spline collocation approach constructed over uniform meshes. The rate of convergence is verified numerically to be of fourth-order. The efficiency and applicability of the method are demonstrated by applying the strategy to a number of examples. The numerical solutions are compared with existing analytical solutions.

  3. Collocation and Galerkin Time-Stepping Methods

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2011-01-01

    We study the numerical solutions of ordinary differential equations by one-step methods where the solution at t_n is known and that at t_{n+1} is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_{n+1}), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
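
    The equivalence noted above means the Butcher tableau of a collocation method follows directly from its collocation points, via a_ij = ∫ from 0 to c_i of l_j and b_j = ∫ from 0 to 1 of l_j. The sketch below recovers the 2-stage Radau IIA tableau from the right-Radau points c = (1/3, 1):

```python
import numpy as np

def collocation_tableau(c):
    """Butcher tableau of the collocation method with abscissae c:
    a_ij = integral from 0 to c_i of l_j, b_j = integral from 0 to 1 of l_j,
    where l_j is the Lagrange basis polynomial on the points c."""
    s = len(c)
    A = np.zeros((s, s))
    b = np.zeros(s)
    for j in range(s):
        others = [c[k] for k in range(s) if k != j]
        lj = np.poly(others) / np.prod([c[j] - ck for ck in others])
        Lj = np.polyint(lj)                # antiderivative with Lj(0) = 0
        b[j] = np.polyval(Lj, 1.0)
        for i in range(s):
            A[i, j] = np.polyval(Lj, c[i])
    return A, b

A, b = collocation_tableau([1/3, 1.0])     # 2-stage right-Radau points
```

    The result matches the known Radau IIA tableau A = [[5/12, -1/12], [3/4, 1/4]], b = (3/4, 1/4).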

  4. A multilevel stochastic collocation method for SPDEs

    SciTech Connect

    Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton

    2015-03-10

    We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity is used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages over standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.

  5. B-spline Method in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made in the development and application of B-spline numerical methods for computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex-geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.

  6. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  7. Numerical solution of complex modified Korteweg-de Vries equation by collocation method

    NASA Astrophysics Data System (ADS)

    Ismail, M. S.

    2009-03-01

    A collocation method using quintic B-splines is derived for solving the complex modified Korteweg-de Vries (CMKdV) equation. The method is based on the Crank-Nicolson formulation for time integration and quintic B-spline functions for space integration. Von Neumann stability analysis is used to prove that the scheme is unconditionally stable. Newton's method is used to solve the nonlinear block pentadiagonal system obtained. Numerical tests for single, two, and three solitons are used to assess the performance of the proposed scheme.

  8. A B-Spline-Based Colocation Method to Approximate the Solutions to the Equations of Fluid Dynamics

    SciTech Connect

    M. D. Landon; R. W. Johnson

    1999-07-01

    The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve complex curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.

  9. A B-Spline-Based Colocation Method to Approximate the Solutions to the Equations of Fluid Dynamics

    SciTech Connect

    Johnson, Richard Wayne; Landon, Mark Dee

    1999-07-01

    The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.

  10. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H^2_0(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This transform maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for initial value boundary problems of nonlinear PDE's. We then test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDE's.

  11. Isogeometric methods for computational electromagnetics: B-spline and T-spline discretizations

    NASA Astrophysics Data System (ADS)

    Buffa, A.; Sangalli, G.; Vázquez, R.

    2014-01-01

    In this paper we introduce methods for electromagnetic wave propagation based on splines and T-splines. We define spline spaces which form a De Rham complex and, following the isogeometric paradigm, map them onto domains which are (piecewise) spline or NURBS geometries. We analyze their geometric and topological structure, as related to the connectivity of the underlying mesh, and we present degrees of freedom together with their physical interpretation. The theory is then extended to the case of meshes with T-junctions, leveraging the recent theory of T-splines. The use of T-splines enhances our spline methods with local refinement capability, and numerical tests show the efficiency and accuracy of the techniques we propose.

  12. A tensor product B-spline method for numerical grid generation

    SciTech Connect

    Manke, J.

    1989-10-01

    We present a tensor product B-spline method for fast elliptic grid generation. The Cartesian coordinate functions for a block are represented as a sum of tensor product B-spline basis functions defined on the computational domain for the block. The tensor product B-spline basis functions are constructed so that the basis functions and their first partials are continuous on the computational domain for the block. The coordinate functions inherit this smoothness: a grid computed by evaluating the coordinate functions along constant parameter lines leads to smooth grid lines with smoothly varying tangents. The expansion coefficients for the coordinate functions are computed by solving the elliptic grid generation equations using collocation. This assures that the computed grid has the smoothness and resolution expected of an elliptic grid with appropriate control. The collocation equations are solved with an ADI solution algorithm analogous to the ADI solution algorithm for the finite difference method. The speed of the method derives from the smoothness of the B-spline basis functions; in effect, a fine grid in the physical domain is obtained by constructing a smooth expansion of the coordinate functions on a coarse grid of knots in the computational domain. Thus, the tensor product B-spline method will be faster than the finite difference method if a sufficiently smooth fine grid in the physical domain can be obtained for an appropriately coarse grid of knots in the computational domain.
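
    The coordinate functions above are tensor products of 1D B-spline bases. A minimal sketch of that representation (evaluation only, not the ADI grid-generation solver; the knot vector and coefficients are invented):

```python
import numpy as np

def bspline_basis(knots, p, x):
    """All degree-p B-spline basis values N_i(x) by the Cox-de Boor
    recursion (x must lie strictly inside the knot range)."""
    knots = np.asarray(knots, dtype=float)
    n = len(knots) - p - 1                 # number of basis functions
    N = np.array([1.0 if knots[i] <= x < knots[i + 1] else 0.0
                  for i in range(len(knots) - 1)])
    for k in range(1, p + 1):
        for i in range(len(knots) - k - 1):
            left = right = 0.0
            if knots[i + k] > knots[i]:
                left = (x - knots[i]) / (knots[i + k] - knots[i]) * N[i]
            if knots[i + k + 1] > knots[i + 1]:
                right = ((knots[i + k + 1] - x)
                         / (knots[i + k + 1] - knots[i + 1]) * N[i + 1])
            N[i] = left + right
    return N[:n]

def tensor_surface(coeffs, knots, p, x, y):
    # coordinate function F(x, y) = sum_ij coeffs[i, j] N_i(x) N_j(y)
    return bspline_basis(knots, p, x) @ coeffs @ bspline_basis(knots, p, y)

knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]      # clamped cubic, one interior knot
N = bspline_basis(knots, 3, 0.3)
val = tensor_surface(np.ones((5, 5)), knots, 3, 0.3, 0.7)
```

    With all coefficients equal to one, partition of unity makes the surface identically one, a quick consistency check.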

  13. A Semi-Implicit, Fourier-Galerkin/B-Spline Collocation Approach for DNS of Compressible, Reacting, Wall-Bounded Flow

    NASA Astrophysics Data System (ADS)

    Oliver, Todd; Ulerich, Rhys; Topalian, Victor; Malaya, Nick; Moser, Robert

    2013-11-01

    A discretization of the Navier-Stokes equations appropriate for efficient DNS of compressible, reacting, wall-bounded flows is developed and applied. The spatial discretization uses a Fourier-Galerkin/B-spline collocation approach. Because of the algebraic complexity of the constitutive models involved, a flux-based approach is used where the viscous terms are evaluated using repeated application of the first derivative operator. In such an approach, a filter is required to achieve appropriate dissipation at high wavenumbers. We formulate a new filter source operator based on the viscous operator. Temporal discretization is achieved using the SMR91 hybrid implicit/explicit scheme. The linear implicit operator is chosen to eliminate wall-normal acoustics from the CFL constraint while also decoupling the species equations from the remaining flow equations, which minimizes the cost of the required linear algebra. Results will be shown for a mildly supersonic, multispecies boundary layer case inspired by the flow over the ablating surface of a space capsule entering Earth's atmosphere. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

  14. Aerodynamic influence coefficient method using singularity splines

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Weber, J. A.; Lesferd, E. P.

    1974-01-01

    A numerical lifting surface formulation, including computed results for planar wing cases, is presented. This formulation, referred to as the vortex spline scheme, combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise. Boundary conditions are satisfied in a least-square-error sense over the surface using a finite summing technique to approximate the integral. The current formulation uses the elementary horseshoe vortex as the basic singularity and is therefore restricted to linearized potential flow. As part of the study, a nonplanar development was considered, but the numerical evaluation of the lifting surface concept was restricted to planar configurations. Also, a second-order sideslip analysis based on an asymptotic expansion was investigated using the singularity spline formulation.

  15. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of partial differential equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  16. Collocation Method for Numerical Solution of Coupled Nonlinear Schroedinger Equation

    SciTech Connect

    Ismail, M. S.

    2010-09-30

    The coupled nonlinear Schroedinger equation models several interesting physical phenomena and serves as a model equation for optical fibers with linear birefringence. In this paper we use the collocation method to solve this equation, and we test the method for stability and accuracy. Numerical tests using a single soliton and the interaction of three solitons are used to assess the resulting scheme.

  17. Comparison of Implicit Collocation Methods for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    We combine a high-order compact finite difference scheme for approximating spatial derivatives with collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first one is based on an explicit computation of the coefficients of polynomials and the second one relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.

  18. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or more of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian-Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (the public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported basis functions, they exactly reproduce algebraic polynomials, and they enable multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones.
According to our recent results, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid at each time step is obtained from the solution of the previous time step (or the initial conditions) and an advective Lagrangian step in the current time step according to the velocity field and continuous streamlines. On the other hand, we implement the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the resulting spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. This new Eulerian-Lagrangian-Collocation scheme also resolves all of the aforementioned numerical problems due to its adaptive nature and its ability to control numerical errors in space and time. The proposed method treats advection in a Lagrangian way, eliminating problems found in Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the need for a large number of particles and other problems found in Lagrangian methods. Finally, numerical tests show that this approach yields not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  19. Higher-order numerical solutions using cubic splines

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
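
    As a concrete (and deliberately simplified) illustration of cubic spline collocation, the sketch below solves a model two-point boundary value problem u'' = f with cubic B-splines, collocating the second derivative at the knots; the mesh size, model problem, and NumPy/SciPy implementation are illustrative choices, not the authors' reformulated third-order procedure:

```python
import numpy as np
from scipy.interpolate import BSpline

# Model problem: u'' = -pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x).
N = 16                                          # number of mesh intervals
xk = np.linspace(0.0, 1.0, N + 1)               # knots, also collocation points
t = np.concatenate([[0.0] * 3, xk, [1.0] * 3])  # clamped cubic knot vector
n = len(t) - 4                                  # number of cubic B-splines
A = np.zeros((n, n))
for j in range(n):
    c = np.zeros(n); c[j] = 1.0
    b = BSpline(t, c, 3)                        # j-th basis spline
    A[:N + 1, j] = b.derivative(2)(xk)          # collocate u'' at every knot
    A[N + 1, j] = b(0.0)                        # boundary condition u(0) = 0
    A[N + 2, j] = b(1.0)                        # boundary condition u(1) = 0
rhs = np.concatenate([-np.pi**2 * np.sin(np.pi * xk), [0.0, 0.0]])
coef = np.linalg.solve(A, rhs)
u = BSpline(t, coef, 3)
xs = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(u(xs) - np.sin(np.pi * xs)))
```

    On a smooth problem this classical knot-collocation scheme is second-order accurate; the reformulation described in the record is aimed at improving exactly this kind of second-derivative approximation.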

  20. Collocation and Least Residuals Method and Its Applications

    NASA Astrophysics Data System (ADS)

    Shapeev, Vasily

    2016-02-01

    The collocation and least residuals (CLR) method combines the method of collocations (CM) with a least-residuals approach. Unlike the CM, in the CLR method an approximate solution of the problem is found from an overdetermined system of linear algebraic equations (SLAE). The solution of this system is sought under the requirement of minimizing a functional involving the residuals of all its equations. On the one hand, this added complication of the numerical algorithm expands the capabilities of the CM for solving boundary value problems with singularities. On the other hand, the CLR method inherits to a considerable extent some convenient features of the CM. In the present paper, the CLR capabilities are illustrated on benchmark problems for 2D and 3D Navier-Stokes equations, the modeling of the laser welding of metal plates of similar and dissimilar metals, problems investigating the strength of loaded parts made of composite materials, and boundary-value problems for hyperbolic equations.
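
    The distinguishing step of the CLR method, forming an overdetermined collocation system and minimizing a residual functional, can be mimicked on a toy problem; the Chebyshev basis, node counts, and model equation below are illustrative stand-ins, not the method's actual machinery:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Model problem: u'' = -sin(x) on [0, pi], u(0) = u(pi) = 0,
# with exact solution u(x) = sin(x).
n = 10                       # number of basis coefficients
m = 40                       # collocation points: m + 2 equations > n unknowns
xs = np.linspace(0.0, np.pi, m)
A = np.zeros((m + 2, n))
for j in range(n):
    c = np.zeros(n); c[j] = 1.0
    p = Chebyshev(c, domain=[0.0, np.pi])
    A[:m, j] = p.deriv(2)(xs)        # interior residual rows: u''
    A[m, j] = p(0.0)                 # boundary residual u(0)
    A[m + 1, j] = p(np.pi)           # boundary residual u(pi)
rhs = np.concatenate([-np.sin(xs), [0.0, 0.0]])
# minimize the sum of squared residuals of the overdetermined system
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
u = Chebyshev(coef, domain=[0.0, np.pi])
err = np.max(np.abs(u(xs) - np.sin(xs)))
```

    The least-squares solve plays the role of the residual-minimizing functional: no single collocation equation is enforced exactly, but the residual over all of them is made as small as possible.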

  1. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods presented here can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  2. Pseudospectral collocation methods for fourth order differential equations

    NASA Technical Reports Server (NTRS)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
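
    A common building block for pseudospectral collocation schemes like these is the Chebyshev-Gauss-Lobatto differentiation matrix; the sketch below follows the standard construction popularized by Trefethen (not this paper's scheme) and applies it four times to approximate a fourth derivative:

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto differentiation matrix and nodes on [-1, 1]
    (the standard construction from Trefethen's Spectral Methods in MATLAB)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))      # "negative sum trick" for the diagonal
    return D, x

D, x = cheb(16)
D4 = np.linalg.matrix_power(D, 4)    # crude fourth-derivative operator
err = np.max(np.abs(D4 @ np.sin(x) - np.sin(x)))   # (sin)'''' = sin
```

    Repeatedly applying D is known to amplify roundoff, which is one reason fourth-order schemes like the one in this record construct dedicated discretizations and preconditioners instead.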

  3. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  4. Splines and control theory

    NASA Technical Reports Server (NTRS)

    Zhang, Zhimin; Tomlinson, John; Martin, Clyde

    1994-01-01

    In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We construct the traditional spline functions, including polynomial splines and the classical exponential spline, and we also derive some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate the properties of different spline approximations.

  5. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  6. An analytic reconstruction method for PET based on cubic splines

    NASA Astrophysics Data System (ADS)

    Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.

    2014-03-01

    PET imaging is an important nuclear medicine modality that measures the in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D, reconstruction method called SRT, the Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction to object pixels only. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.

  7. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

    Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are found to be more accurate for multimodal and skewed samples and to require much less time to generate samples than the analytic representation.
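
    The workflow described, fitting a spline to the empirical quantile function and pushing uniform variates through it to generate samples, can be sketched with SciPy; the distribution, sample sizes, and plotting positions below are illustrative:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=2000)   # stand-in "experimental" sample

# Empirical quantile function: sorted data against plotting positions.
q = np.sort(data)
p = (np.arange(1, q.size + 1) - 0.5) / q.size

# Cubic B-spline representation of the quantile function (inverse CDF).
Q = make_interp_spline(p, q, k=3)

# Inverse-transform sampling: push uniform variates through the spline.
u = rng.uniform(p[0], p[-1], size=5000)
samples = Q(u)
```

    Because the spline is evaluated directly, generating each sample costs only a spline evaluation rather than an inversion of the analytic CDF, which is the speed advantage the record describes.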

  8. Multi-element probabilistic collocation method in high dimensions

    SciTech Connect

    Foo, Jasmine; Karniadakis, George Em

    2010-03-01

    We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method MEPCM and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order {mu} and the effective dimension {nu}, with {nu} <= N, where N is the nominal dimension; numerical tests suggest that {nu} <= {mu} is required for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.

  9. Multi-element probabilistic collocation method in high dimensions

    NASA Astrophysics Data System (ADS)

    Foo, Jasmine; Karniadakis, George Em

    2010-03-01

    We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method MEPCM [1] and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order μ and the effective dimension ν, with ν ≤ N, and N the nominal dimension. Numerical tests for multi-dimensional integration and for stochastic elliptic problems suggest that ν ≤ μ for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.

  10. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  11. Three-dimensional distorted black holes: Using the Galerkin-Collocation method

    NASA Astrophysics Data System (ADS)

    de Oliveira, H. P.; Rodrigues, E. L.

    2014-12-01

    We present an implementation of the Galerkin-Collocation method to determine the initial data for nonrotating distorted three-dimensional black holes in the inversion and puncture schemes. The numerical method combines the key features of the Galerkin and collocation methods, which produces accurate initial data. We evaluated the ADM mass of the initial data sets, and we have provided the angular structure of the gravitational wave distribution at the initial hypersurface by evaluating the scalar Ψ4 for asymptotic observers.

  12. A B-Spline Modal Method in Comparison to the Fourier Modal Method

    NASA Astrophysics Data System (ADS)

    Walz, Michael; Zebrowski, Thomas; Küchenmeister, Jens; Busch, Kurt

    2011-10-01

    The Fourier modal method (FMM) is a common tool when transmittance or reflectance spectra of periodic structures are needed. In this work, we investigate an approach one could call the B-spline modal method (BMM). We use an S-matrix algorithm for connecting different layers but instead of a Fourier basis we use B-splines to solve Maxwell's equations in each layer. The advantage of B-splines compared to plane waves is that they can represent discontinuities accurately. These discontinuities (naturally arising at interfaces between different materials) are problematic for the FMM since a finite Fourier series will always be smooth and one has to deal with Gibbs' phenomenon. In this work, we present a comparison of the convergence behavior between FMM and BMM.

  13. A multidomain spectral collocation method for the Stokes problem

    NASA Technical Reports Server (NTRS)

    Landriani, G. Sacchi; Vandeven, H.

    1989-01-01

    A multidomain spectral collocation scheme is proposed for the approximation of the two-dimensional Stokes problem. It is shown that the discrete velocity vector field is exactly divergence-free, and error estimates are proved for both the velocity and the pressure.

  14. Numerical solution of first order initial value problem using quartic spline method

    NASA Astrophysics Data System (ADS)

    Ala'yed, Osama; Ying, Teh Yuan; Saaban, Azizan

    2015-12-01

    Any first order initial value problem can be integrated numerically by discretizing the interval of integration into a number of subintervals, with either equally or unequally spaced grid points. As the integration advances, the numerical solutions at the grid points are calculated and become known; however, the numerical solutions between the grid points remain unknown. This poses a difficulty for anyone who wishes to study the solution at a point that does not fall on the grid. Therefore, some sort of interpolation technique is needed to deal with this difficulty. Spline interpolation remains a well-known approach for approximating the numerical solution of a first order initial value problem, not only at the grid points but also everywhere between them. In this short article, a new quartic spline method is derived to obtain the numerical solution of a first order initial value problem. The key idea of the derivation is to treat the third derivative of the quartic spline function as a linear polynomial, and to obtain the quartic spline function with undetermined coefficients after three integrations. The new quartic spline function is ready to be used once all unknown coefficients are found. We also describe an algorithm for the new quartic spline method when it is used to obtain the numerical solution of any first order initial value problem. Two test problems are used for numerical experimentation. The numerical results indicate that the new quartic spline method is reliable for solving first order initial value problems. We have compared the numerical results generated by the new quartic spline method with those obtained from an existing spline method; both methods are found to have comparable accuracy.
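
    The key derivation step, treating the third derivative as a linear polynomial and integrating three times, can be verified on a single subinterval with NumPy's polynomial arithmetic; the coefficients and integration constants below are arbitrary stand-ins for the undetermined coefficients:

```python
import numpy as np

# On one subinterval, take s'''(x) = a + b*x (a linear polynomial) and
# integrate three times; each integration introduces one free constant,
# and the result is a quartic spline piece.
a, b = 2.0, -3.0
third = np.polynomial.Polynomial([a, b])   # s'''(x) = a + b x
second = third.integ(1, k=[1.5])           # s''(x), one free constant
first = second.integ(1, k=[0.5])           # s'(x), another constant
s = first.integ(1, k=[-1.0])               # s(x): a degree-4 polynomial
recovered = s.deriv(3)                     # differentiating back gives s'''
```

    In the actual method, the free constants (and the two coefficients of the linear third derivative) are the undetermined coefficients fixed by the spline's matching conditions.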

  15. Lunar soft landing rapid trajectory optimization using direct collocation method and nonlinear programming

    NASA Astrophysics Data System (ADS)

    Tu, Lianghui; Yuan, Jianping; Luo, Jianjun; Ning, Xin; Zhou, Ruiwu

    2007-11-01

    The direct collocation method has been widely used for trajectory optimization. In this paper, the application of a direct optimization method (direct collocation combined with nonlinear programming (NLP)) to lunar probe soft-landing trajectory optimization is introduced. First, the trajectory optimization problem for lunar probe soft landing is established and the equations of motion are simplified based on some reasonable hypotheses. The performance index is chosen to minimize fuel consumption. The control variables are the thrust attack angle and the engine thrust. The terminal state constraints are velocity and altitude constraints. The optimal control problem is then transformed into a nonlinear programming problem using the direct collocation method, with the state and control variables at all nodes and collocation nodes selected as optimization parameters. The parameter optimization problem is solved using the SNOPT software package. The simulation results demonstrate that the direct collocation method is not sensitive to the initial conditions of lunar soft landing; they also show that fairly good optimal solutions are obtained in near real time. Therefore, the direct collocation method is a viable approach to the lunar probe soft-landing trajectory optimization problem.
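
    The transcription idea, converting an optimal control problem into an NLP by treating the states and controls at the nodes as optimization parameters, can be shown on a much simpler problem than lunar descent. The sketch below applies trapezoidal direct collocation to a double-integrator transfer with a minimum-control-energy objective, using SciPy's SLSQP solver in place of SNOPT; all problem data are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Double integrator: x1' = x2, x2' = u, from (0, 0) at t=0 to (1, 0) at t=1,
# minimizing the integral of u^2 (a stand-in for fuel consumption).
N = 20                       # number of collocation intervals
h = 1.0 / N

def unpack(z):
    x1, x2, u = np.split(z, 3)
    return x1, x2, u

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum((u[:-1]**2 + u[1:]**2) / 2.0)   # trapezoidal cost

def defects(z):
    # trapezoidal collocation defects plus boundary conditions
    x1, x2, u = unpack(z)
    d1 = x1[1:] - x1[:-1] - h / 2.0 * (x2[:-1] + x2[1:])
    d2 = x2[1:] - x2[:-1] - h / 2.0 * (u[:-1] + u[1:])
    bc = [x1[0], x2[0], x1[-1] - 1.0, x2[-1]]
    return np.concatenate([d1, d2, bc])

z0 = np.concatenate([np.linspace(0, 1, N + 1), np.zeros(N + 1), np.zeros(N + 1)])
res = minimize(objective, z0, method='SLSQP',
               constraints={'type': 'eq', 'fun': defects})
```

    The continuous-time optimum for this toy problem is u(t) = 6 - 12t with cost 12, which the discrete solution approaches as the number of intervals grows.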

  16. Exponential time differencing methods with Chebyshev collocation for polymers confined by interacting surfaces

    SciTech Connect

    Liu, Yi-Xin; Zhang, Hong-Dong

    2014-06-14

    We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in the temporal domain and spectral accuracy in the spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique of the conventional approach. The performance of ETDRK4 is examined in comparison with operator splitting methods using either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces.

  17. The double exponential sinc collocation method for singular Sturm-Liouville problems

    NASA Astrophysics Data System (ADS)

    Gaudreau, P.; Slevinsky, R.; Safouhi, H.

    2016-04-01

    Sturm-Liouville problems are abundant in the numerical treatment of scientific and engineering problems. In the present contribution, we present an efficient and highly accurate method for computing eigenvalues of singular Sturm-Liouville boundary value problems. The proposed method uses the double exponential formula coupled with the sinc collocation method. This method produces a symmetric positive-definite generalized eigenvalue system and has an exponential convergence rate. Numerical examples are presented, and comparisons with the single exponential sinc collocation method clearly illustrate the advantage of using the double exponential formula.
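
    The sinc basis underlying the method can be illustrated with a plain uniform-grid sinc expansion (the double exponential variable transformation that gives the paper's method its name is omitted here); the grid spacing, truncation, and test function below are arbitrary:

```python
import numpy as np

# Sinc expansion on a uniform grid of spacing h:
#   f(x) ~ sum_j f(j*h) * sinc((x - j*h) / h),  np.sinc(t) = sin(pi t)/(pi t)
h = 0.5
j = np.arange(-20, 21)                 # truncated index set
f = lambda x: np.exp(-x**2)            # rapidly decaying test function
x = np.linspace(-2.0, 2.0, 101)
approx = f(j * h) @ np.sinc((x[None, :] - (j * h)[:, None]) / h)
err = np.max(np.abs(approx - f(x)))
```

    For functions that decay fast enough, the error of such expansions decreases exponentially as h shrinks; the double exponential transformation is what extends this behavior to the singular problems treated in the record.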

  18. A Chebyshev spectral collocation method using a staggered grid for the stability of cylindrical flows

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.

    1991-01-01

    A staggered spectral collocation method for the stability of cylindrical flows is developed. In this method the pressure is evaluated at different nodal points than the three velocity components. These modified nodal points do not include the two boundary nodes; therefore the need for the two artificial pressure boundary conditions employed by Khorrami et al. is eliminated. It is shown that the method produces very accurate results and has a better convergence rate than the spectral tau formulation. However, through extensive convergence tests it was found that elimination of the artificial pressure boundary conditions does not result in any significant change in the convergence behavior of spectral collocation methods.

  19. THE LOSS OF ACCURACY OF STOCHASTIC COLLOCATION METHOD IN SOLVING NONLINEAR DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Tran, Hoang A; Trenchea, Catalin S

    2013-01-01

    In this paper we show how the stochastic collocation method (SCM) can fail to converge for nonlinear differential equations with random coefficients. First, we consider the Navier-Stokes equation with uncertain viscosity and derive error estimates for the stochastic collocation discretization. Our analysis gives some indicators of how the nonlinearity negatively affects the accuracy of the method. The stochastic collocation method is then applied to the noisy Lorenz system. Simulation results demonstrate that the solution of a nonlinear equation can be highly irregular with respect to the random data, and in such cases the stochastic collocation method cannot capture the correct solution.

  20. A spectral collocation method for compressible, non-similar boundary layers

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Streett, Craig L.

    1991-01-01

    An efficient and highly accurate algorithm based on a spectral collocation method is developed for numerical solution of the compressible, two-dimensional and axisymmetric boundary layer equations. The numerical method incorporates a fifth-order, fully implicit marching scheme in the streamwise (timelike) dimension and a spectral collocation method based on Chebyshev polynomial expansions in the wall-normal (spacelike) dimension. The spectral collocation algorithm is used to derive the nonsimilar mean velocity and temperature profiles in the boundary layer of a 'fuselage' (cylinder) in a high-speed (Mach 5) flow parallel to its axis. The stability of the flow is shown to be sensitive to the gradual streamwise evolution of the mean flow and it is concluded that the effects of transverse curvature on stability should not be ignored routinely.

  1. A Critical Evaluation of the Resolution Properties of B-spline and Compact Finite Difference Methods

    NASA Astrophysics Data System (ADS)

    Kwok, Wai Yip; Moser, Robert D.

    1999-11-01

    There is a great need for turbulence simulations in complex geometries. Such simulations require spatial discretization schemes that not only have good resolution properties (like spectral methods), but can also provide flexibility with respect to geometry, boundary conditions and resolution distribution. Schemes based on B-spline expansions meet these requirements. In this research we compare the resolution properties of B-spline methods and compact finite difference methods, based on Fourier analysis in periodic domains, and on tests based on the solution of the wave equation and heat equation in finite domains, with uniform and non-uniform grids. Results suggest that compact finite difference schemes have better resolution on uniform grids, while B-spline methods offer more consistent resolution regardless of grid distribution. Also, the functional expansion nature of spline methods provides an easy formulation procedure, particularly near boundaries. The spline methods discussed are implemented in a code to solve the compressible Navier-Stokes equations. Results of sample problems such as viscous shock waves and stability waves in a laminar boundary layer will be presented.
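
    The Fourier resolution analysis mentioned above can be reproduced for the simplest possible scheme, a second-order central difference; B-spline and compact schemes are analyzed the same way, just with different modified-wavenumber expressions:

```python
import numpy as np

# For u(x) = exp(i k x) on a uniform grid of spacing h, the central
# difference (u[j+1] - u[j-1]) / (2h) returns i*sin(kh)/h * u, so the
# scheme "sees" the modified wavenumber k' = sin(kh)/h instead of k.
h = 1.0
k = np.linspace(0.0, np.pi, 200)       # kh from 0 (resolved) to pi (Nyquist)
k_mod = np.sin(k * h) / h
resolution_error = np.abs(k_mod - k)   # small at low k, large near Nyquist
```

    Plotting k_mod against k is the standard way to compare how much of the wavenumber range each scheme resolves accurately.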

  2. Uncertainty quantification for unsaturated flow in porous media: a stochastic collocation method

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, D. M.

    2011-12-01

    We present a stochastic collocation (SC) method to quantify epistemic uncertainty in predictions of unsaturated flow in porous media. SC provides a non-intrusive framework for uncertainty propagation in models based on the non-linear Richards' equation with arbitrary constitutive laws describing soil properties (relative conductivity and retention curve). To illustrate the approach, we use the Richards' equation with the van Genuchten-Mualem model for water retention and relative conductivity to describe infiltration into an initially dry soil whose uncertain parameters are treated as random fields. These parameters are represented using a truncated Karhunen-Loève expansion; the Smolyak algorithm is used to construct a structured set of collocation points from univariate Gauss quadrature rules. The resulting deterministic problem is solved for each collocation point, and together with the collocation weights, the statistics of hydraulic head and infiltration rate are computed. The results are in agreement with Monte Carlo simulations. We demonstrate that highly heterogeneous soils (large variances of hydraulic parameters) require cubature formulas of a high degree of exactness, while their short correlation lengths increase the dimensionality of the problem. Both effects increase the number of collocation points, and thus of deterministic problems to solve, affecting the overall computational cost of uncertainty quantification.
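
    In one stochastic dimension, the collocation workflow reduces to Gauss quadrature: solve the deterministic model at each node, then combine the outputs with the quadrature weights. The sketch below propagates a lognormal conductivity through a simple algebraic stand-in for the flow model (the model, parameters, and node count are illustrative, not from the paper):

```python
import numpy as np

# Uncertain log-conductivity Y ~ N(mu, sigma^2), K = exp(Y); the "model"
# is a single algebraic relation q = -K * grad_h with a fixed gradient.
mu, sigma, grad_h = 0.0, 0.5, -0.1
nodes, weights = np.polynomial.hermite_e.hermegauss(10)  # probabilists' rule

# Collocation: evaluate the deterministic model at each node...
Y = mu + sigma * nodes
q = -np.exp(Y) * grad_h

# ...then form statistics with the normalized quadrature weights.
w = weights / np.sqrt(2.0 * np.pi)
mean_q = np.sum(w * q)

# For this toy model the exact mean is known (lognormal moment).
exact = np.exp(mu + sigma**2 / 2.0) * (-grad_h)
```

    The non-intrusive character of the method is visible here: the deterministic model is only ever evaluated, never modified, and the multidimensional (Smolyak) construction changes only how the nodes and weights are generated.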

  3. Understanding a reference-free impedance method using collocated piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Kim, Eun Jin; Kim, Min Koo; Sohn, Hoon; Park, Hyun Woo

    2010-03-01

    A new concept of a reference-free impedance method, which does not require direct comparison with a baseline impedance signal, is proposed for damage detection in a plate-like structure. A single pair of piezoelectric (PZT) wafers collocated on both surfaces of a plate is utilized for extracting electro-mechanical signatures (EMS) associated with mode conversion due to damage. A numerical simulation is conducted to investigate the EMS of collocated PZT wafers in the frequency domain in the presence of damage through spectral element analysis. Then, the EMS due to mode conversion induced by damage are extracted using a signal decomposition technique based on the polarization characteristics of the collocated PZT wafers. The effects of the size and the location of damage on the decomposed EMS are investigated as well. Finally, the applicability of the decomposed EMS to reference-free damage diagnosis is discussed.

  4. NOKIN1D: one-dimensional neutron kinetics based on a nodal collocation method

    NASA Astrophysics Data System (ADS)

    Verdú, G.; Ginestar, D.; Miró, R.; Jambrina, A.; Barrachina, T.; Soler, Amparo; Concejal, Alberto

    2014-06-01

    The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). Its advantage over a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method, developed for the neutron diffusion equation and applied here to the kinetics of the TRAC-BF1 thermal-hydraulics code, is an adaptation of traditional collocation methods for the discretization of partial differential equations, based on expanding the solution as a linear combination of analytical functions. We chose a nodal collocation method based on an expansion of the neutron fluxes in Legendre polynomials in each cell. The qualification is carried out by analyzing the turbine trip transient of the NEA Peach Bottom NPP benchmark using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method.

  5. Galerkin method for the numerical solution of the RLW equation using quintic B-splines

    NASA Astrophysics Data System (ADS)

    Dag, Idris; Saka, Bulent; Irk, Dursun

    2006-06-01

    The regularized long wave (RLW) equation is solved numerically by using the quintic B-spline Galerkin finite element method. The same method is applied to the time-split RLW equation. Comparison is made with both analytical solutions and some previous results. Propagation of solitary waves and the interaction of two solitons are studied.

  6. The extended cubic B-spline algorithm for a modified regularized long wave equation

    NASA Astrophysics Data System (ADS)

    Dağ, İ.; Irk, D.; Sarı, M.

    2013-04-01

    A collocation method based on an extended cubic B-spline function is introduced for the numerical solution of the modified regularized long wave equation. The accuracy of the method is illustrated by studying the single solitary wave propagation and the interaction of two solitary waves of the modified regularized long wave equation.
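    As a concrete illustration of B-spline collocation (using the ordinary cubic B-spline basis rather than the extended basis of this paper, and a steady two-point problem rather than the MRLW equation), the sketch below solves u'' = -π² sin(πx) with u(0) = u(1) = 0, whose exact solution is sin(πx), by collocating at the Greville abscissae:

```python
import numpy as np
from scipy.interpolate import BSpline

k, n = 3, 20                                    # cubic basis, 20 B-splines
t = np.concatenate([np.zeros(k), np.linspace(0.0, 1.0, n - k + 1), np.ones(k)])
tau = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])  # Greville points

def design(x, nu=0):
    """B[i, j] = nu-th derivative of the j-th B-spline at x[i]."""
    B = np.empty((len(x), n))
    for j in range(n):
        c = np.zeros(n); c[j] = 1.0
        B[:, j] = BSpline(t, c, k)(x, nu=nu)
    return B

A = design(tau, nu=2)              # collocate u'' at interior Greville points
A[0, :] = design(tau[:1])[0]       # boundary condition u(0) = 0
A[-1, :] = design(tau[-1:])[0]     # boundary condition u(1) = 0
rhs = -np.pi**2 * np.sin(np.pi * tau)
rhs[0] = rhs[-1] = 0.0
spline = BSpline(t, np.linalg.solve(A, rhs), k)
xx = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(spline(xx) - np.sin(np.pi * xx)))
```

    Collocating at the Greville points gives roughly second-order accuracy here; the higher-order spline collocation schemes in the literature collocate at Gauss points instead.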

  7. Direct collocation meshless method for vector radiative transfer in scattering media

    NASA Astrophysics Data System (ADS)

    Ben, Xun; Yi, Hong-Liang; Yin, Xun-Bo; Tan, He-Ping

    2015-09-01

    A direct collocation meshless method based on a moving least-squares approximation is presented to solve polarized radiative transfer in scattering media. Contrasted with methods such as the finite volume and finite element methods that rely on mesh structures (e.g. elements, faces and sides), meshless methods utilize an approximation space based only on the scattered nodes, and no predefined nodal connectivity is required. Several classical cases are examined to verify the numerical performance of the method, including polarized radiative transfer in atmospheric aerosols and clouds with phase functions that are highly elongated in the forward direction. Numerical results show that the collocation meshless method is accurate, flexible and effective in solving one-dimensional polarized radiative transfer in scattering media. Finally, a two-dimensional case of polarized radiative transfer is investigated and analyzed.

  8. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.

  9. A novel stochastic collocation method for uncertainty propagation in complex mechanical systems

    NASA Astrophysics Data System (ADS)

    Qi, WuChao; Tian, SuMei; Qiu, ZhiPing

    2015-02-01

    This paper presents a novel stochastic collocation method based on the equivalent weak form of a multivariate function integral to quantify and manage uncertainties in complex mechanical systems. The proposed method, which combines the advantages of the response surface method and the traditional stochastic collocation method, sets integral points only at the guide lines of the response surface. The statistics of an engineering problem with many uncertain parameters are then transformed into a linear combination of the statistics of simple functions. Furthermore, a simple way of determining the weight-factor sets is discussed in detail, and the weight-factor sets of two commonly used probability distribution types are given in table form. Studies of computational accuracy and effort show that the method strikes a good balance between the two. It should be noted that this is a gradient-free, non-intrusive algorithm with strong portability. To validate the procedure, three numerical examples concerning a mathematical function with an analytical expression, the structural design of a straight wing, and the flutter analysis of a composite wing are used to show the effectiveness of the guided stochastic collocation method.

  10. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

    Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiply. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
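    The derivative-matrix construction described above can be made concrete with the standard Chebyshev-Gauss-Lobatto collocation matrix (the recipe popularized by Trefethen, used here as a representative choice of collocation points, not necessarily the one studied in the report):

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points x and first-derivative matrix D (N >= 1),
    so that D @ f(x) approximates f'(x) with spectral accuracy."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal: negative row sums
    return D, x

D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))  # derivative of e^x is e^x
```

    For a smooth function the error decays geometrically in N, which is the "resolution" behavior the abstract studies; steep gradients or discontinuities degrade this rate.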

  11. Spurious Modes in Spectral Collocation Methods with Two Non-Periodic Directions

    NASA Technical Reports Server (NTRS)

    Balachandar, S.; Madabhushi, Ravi K.

    1992-01-01

    Collocation implementation of the Kleiser-Schumann's method in geometries with two non-periodic directions is shown to suffer from three spurious modes - line, column and checkerboard - contaminating the computed pressure field. The corner spurious modes are also present but they do not affect evaluation of pressure related quantities. A simple methodology in the inversion of the influence matrix will efficiently filter out these spurious modes.

  12. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

    A numerical method based on B-spline polynomials was developed to study incompressible flows in cylindrical geometries. A B-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because B-splines have compact support, it is also possible to remove B-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic (so-called time-evolving flow) greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near-wall region are physical; regions of high radial velocity were detected and appear to be related to high-speed streaks in the boundary layer. Budgets of the Reynolds stress transport equations showed close similarity with those of channel flow. However, contrary to channel flow, the log layer of pipe flow is not homogeneous at the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used; plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.

  13. Numerical experiments involving Galerkin and collocation methods for linear integral equations of the first kind

    SciTech Connect

    Allen, R.C. Jr.; Boland, W.R.; Wing, G.M.

    1983-03-01

    Recently, efforts have been made to quantify the difficulties inherent in numerically solving linear Fredholm integral equations of the first kind (J. Integral Equations, to appear). In particular, the classical quadrature approach, collocation methods, and Galerkin schemes that make use of various orthonormal basis functions have been shown to lead to matrices with high condition numbers. In fact, it has been possible to obtain explicit lower bounds on these condition numbers as a function of the smoothness of the kernel, essentially independent of the choice of orthonormal basis. These bounds all approach infinity as the number of basis functions increases. In this article we present a numerical study of condition numbers arising from collocation and Galerkin methods with step-function and Legendre polynomial bases. The condition number for each kernel and basis set studied is exhibited as a function of the number of basis functions used. The effect that these ill-conditioned matrices have on the accuracy of solutions is demonstrated computationally. The information obtained gives an indication of the efficacy, and the dangers, of the collocation and Galerkin schemes in practical situations.
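    The ill-conditioning discussed above is easy to reproduce. The sketch below collocates a first-kind equation with the hypothetical smooth kernel K(x, t) = exp(xt) on [0, 1] (chosen for illustration, not taken from the article) using a step-function basis, and watches the condition number blow up as the basis grows:

```python
import numpy as np

def collocation_matrix(n):
    """Collocation matrix for the first-kind equation
    int_0^1 exp(x*t) u(t) dt = g(x) with n step functions;
    collocation points are the midpoints of the steps."""
    t = (np.arange(n) + 0.5) / n
    # integral of the kernel over each step, approximated by midpoint * width
    return np.exp(np.outer(t, t)) / n

conds = [np.linalg.cond(collocation_matrix(n)) for n in (4, 8, 12)]
```

    The smoother the kernel, the faster the singular values decay and the faster the condition number grows, which is consistent with the lower bounds referenced in the abstract.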

  14. A spectral collocation method for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Zang, T. A.; Hussaini, M. Y.

    1984-01-01

    A Fourier-Chebyshev spectral method for the incompressible Navier-Stokes equations is described. It is applicable to a variety of problems including some with fluid properties which vary strongly both in the normal direction and in time. In this fully spectral algorithm, a preconditioned iterative technique is used for solving the implicit equations arising from semi-implicit treatment of pressure, mean advection and vertical diffusion terms. The algorithm is tested by applying it to hydrodynamic stability problems in channel flow and in external boundary layers with both constant and variable viscosity.

  15. A force identification method using cubic B-spline scaling functions

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Luo, Xinjie; Chen, Xuefeng

    2015-02-01

    For force identification, the solution may differ from the desired force seriously due to the unknown noise included in the measured data, as well as the ill-posedness of inverse problem. In this paper, an efficient basis function expansion method based on wavelet multi-resolution analysis using cubic B-spline scaling functions as basis functions is proposed for identifying force history with high accuracy, which can overcome the deficiency of the ill-posed problem. The unknown force is approximated by a set of translated cubic B-spline scaling functions at a certain level and thereby the original governing equation of force identification is reformulated to find the coefficients of scaling functions, which yields a well-posed problem. The proposed method based on wavelet multi-resolution analysis has inherent numerical regularization for inverse problem by changing the level of scaling functions. The number of basis functions employed to approximate the identified force depends on the level of scaling functions. A regularization method for selecting the optimal level of cubic B-spline scaling functions by virtue of condition number of matrix is proposed. In this paper, the validity and applicability of the proposed method are illustrated by two typical examples of Volterra-Fredholm integral equations that are both typical ill-posed problems. Force identification experiments including impact and harmonic forces are conducted on a cantilever beam to compare the accuracy and efficiency of the proposed method with that of the truncated singular value decomposition (TSVD) technique.

  16. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.

  17. A Fourier collocation time domain method for numerically solving Maxwell's equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1991-01-01

    A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
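    The core of a Fourier collocation scheme, computing spatial derivatives by transform, multiplication by ik, and inverse transform on a single (non-interleaved) grid, can be sketched in a few lines; this uses a generic periodic test function rather than the waveguide problem of the paper:

```python
import numpy as np

N = 64
x = 2.0 * np.pi * np.arange(N) / N            # periodic grid on [0, 2*pi)
k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers
u = np.exp(np.sin(x))
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))   # spectral derivative
err = np.max(np.abs(du - np.cos(x) * np.exp(np.sin(x))))
```

    Time integration then advances the collocated fields with any ODE scheme, such as the second-order semi-implicit step used in the paper.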

  18. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  19. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    SciTech Connect

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-11-20

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L² error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.

  20. Some Optimal Runge-Kutta Collocation Methods for Stiff Problems and DAEs

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Hernández-Abreu, D.; Montijano, J. I.

    2008-09-01

    A new family of implicit Runge-Kutta methods was introduced at ICCAM 2008 (Gent) by the present authors. This family of methods is intended for the numerical solution of stiff problems and DAEs. The s-stage method (for s⩾3) has the following features: it is a collocation method depending on a real free parameter β, has classical convergence order 2s-3 and is strongly A-stable for β ranging in some nonempty open interval Is = (-γs,0). In addition, for β∈Is, all the collocation nodes fall in the interval [0,1]. Moreover, these methods involve a computational cost similar to that of the corresponding counterpart in the Runge-Kutta Radau IIA family (the method having the same classical order) when solving for their stage values. However, our methods have the additional advantage of possessing a higher stage order than the respective Radau IIA counterparts. This is important when integrating stiff problems, for which most numerical methods suffer an order reduction. In this talk we discuss how to optimize the free parameter depending on the special features of the kind of stiff problems and DAEs to be solved. This point is highly important in order to make our methods competitive with those of the Radau IIA family.

  1. Numerical solution of differential-difference equations in large intervals using a Taylor collocation method

    NASA Astrophysics Data System (ADS)

    Tirani, M. Dadkhah; Sohrabi, F.; Almasieh, H.; Kajani, M. Tavassoli

    2015-10-01

    In this paper, a collocation method based on Taylor polynomials is developed for solving systems of linear differential-difference equations with variable coefficients defined on large intervals. By using Taylor polynomials and their properties to obtain operational matrices, the solution of the differential-difference equation system with given conditions is reduced to the solution of a system of linear algebraic equations. We first divide the large interval into M equal subintervals and then obtain Taylor polynomial solutions in each subinterval separately. Some numerical examples are given and the results are compared with analytical solutions and other techniques in the literature to demonstrate the validity and applicability of the proposed method.

  2. Legendre spectral-collocation method for solving some types of fractional optimal control problems

    PubMed Central

    Sweilam, Nasser H.; Al-Ajami, Tamer M.

    2014-01-01

    In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh–Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937

  3. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for spatial derivatives) and collocation techniques (for the time component) to numerically solve the two-dimensional heat equation. We employ second-order and fourth-order schemes, respectively, for the spatial derivatives, and the discretization method gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
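    The spatial half of such a scheme, a second-order five-point Laplacian on the unit square, can be assembled with Kronecker products. The sketch below replaces the paper's collocation-in-time component with a single implicit-Euler step (purely for illustration) and checks the decay of the lowest Fourier mode, which is an exact eigenvector of the discrete operator:

```python
import numpy as np

n = 20                                   # interior grid points per direction
h = 1.0 / (n + 1)
T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
L = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))   # 2D Laplacian (Dirichlet)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.sin(np.pi * X) * np.sin(np.pi * Y)          # discrete eigenmode
dt = 0.01
u1 = np.linalg.solve(np.eye(n * n) - dt * L, u0.ravel())  # one implicit-Euler step
ratio = np.max(u1) / np.max(u0)   # approx 1/(1 + 2*pi^2*dt) for the lowest mode
```

    Because implicit Euler damps every mode of the discrete Laplacian, the step is unconditionally stable, in line with the stability observation in the abstract.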

  4. Numerical simulation of compressible multiphase flows using the Parallel Adaptive Wavelet-Collocation Method

    NASA Astrophysics Data System (ADS)

    Aslani, Mohamad; Regele, Jonathan

    2015-11-01

    Numerical simulation of incompressible multiphase flows to describe fluid atomization is becoming more common. However, compressible multiphase flow simulations are mostly limited to shock-bubble interactions with only a few studies involving shock waves impacting liquid droplets. A methodology for simulating compressible multiphase flow is developed from existing approaches for the Parallel Adaptive Wavelet-Collocation Method. The method uses an interface capturing function with a steepening procedure for the fluid interface. Simulations of shock waves impacting liquid droplets illustrate the numerical capabilities.

  5. Numerical Algorithm Based on Haar-Sinc Collocation Method for Solving the Hyperbolic PDEs

    PubMed Central

    Javadi, H. H. S.; Navidi, H. R.

    2014-01-01

    The present study investigates the Haar-Sinc collocation method for the solution of the hyperbolic telegraph equations. The advantages of this technique are that not only is the convergence rate of the Sinc approximation exponential, but the computational speed is also high due to the use of the Haar operational matrices. This technique is used to convert the problem to the solution of linear algebraic equations via expanding the required approximation based on the elements of Sinc functions in space and Haar functions in time with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, we presented four examples through which our claim was confirmed. PMID:25485295

  6. Subdomain finite element method with quartic B-splines for the modified equal width wave equation

    NASA Astrophysics Data System (ADS)

    Geyikli, T.; Karakoc, S. B. G.

    2015-03-01

    In this paper, a numerical solution of the modified equal width wave (MEW) equation has been obtained by a numerical technique based on the subdomain finite element method with quartic B-splines. Test problems including the motion of a single solitary wave and the interaction of two solitary waves are studied to validate the suggested method. Accuracy and efficiency of the proposed method are discussed by computing the numerical conserved laws and the error norms L₂ and L∞. A linear stability analysis based on a Fourier method shows that the numerical scheme is unconditionally stable.

  7. Sinc-Chebyshev Collocation Method for a Class of Fractional Diffusion-Wave Equations

    PubMed Central

    Mao, Zhi; Xiao, Aiguo; Yu, Zuguo; Shi, Long

    2014-01-01

    This paper is devoted to investigating the numerical solution for a class of fractional diffusion-wave equations with a variable coefficient where the fractional derivatives are described in the Caputo sense. The approach is based on the collocation technique where the shifted Chebyshev polynomials in time and the sinc functions in space are utilized, respectively. The problem is reduced to the solution of a system of linear algebraic equations. Through the numerical example, the procedure is tested and the efficiency of the proposed method is confirmed. PMID:24977177

  8. A semi-implicit collocation method - Application to thermal convection in 2D compressible fluids

    NASA Astrophysics Data System (ADS)

    Gauthier, Serge

    1991-06-01

    A semi-implicit pseudo-spectral collocation method using a third-order Runge-Kutta numerical scheme for the full Navier-Stokes equations is described. The Courant-Friedrichs-Lewy condition is overcome by the implicit handling of a diffusive term. All such terms are solved with an iterative scheme in Fourier space. Simulation of thermal convection in 2D compressible fluids is carried out by expanding variables on a Fourier-Chebyshev basis. Examples of subsonic and supersonic steady solutions are given for the case where the heat flux at the upper boundary is governed by a black body.

  9. Simulating the focusing of light onto 1D nanostructures with a B-spline modal method

    NASA Astrophysics Data System (ADS)

    Bouchon, P.; Chevalier, P.; Héron, S.; Pardo, F.; Pelouard, J.-L.; Haïdar, R.

    2015-03-01

    Focusing light onto nanostructures with spherical lenses is a first step toward enhancing the field, and is widely used in applications, in particular for enhancing non-linear effects like second harmonic generation. Nonetheless, the electromagnetic response of such nanostructures, which have subwavelength patterns, to a focused beam cannot be described by the simple ray-tracing formalism. Here, we present a method to compute the response to a focused beam, based on the B-spline modal method. The simulation of a Gaussian focused beam is obtained via a truncated decomposition into plane waves computed on a single period, which limits the computational burden.

  10. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing

    NASA Technical Reports Server (NTRS)

    Mier Muth, A. M.; Willsky, A. S.

    1978-01-01

    In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.
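    A batch analogue of this idea is available off the shelf: scipy's smoothing-spline fit chooses the number and positions of knots automatically from a smoothness target. This is only a rough stand-in for the sequential, estimation-based knot placement described above, with illustrative noise levels:

```python
import numpy as np
from scipy.interpolate import splev, splrep

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + 0.01 * rng.standard_normal(x.size)  # noisy waveform
# s ~ (number of points) * (noise variance) is the usual smoothing target
tck = splrep(x, y, s=x.size * 0.01**2)
rms = np.sqrt(np.mean((splev(x, tck) - np.sin(2.0 * np.pi * x)) ** 2))
```

    For signals such as electrocardiograms, the interior knots returned in tck[0] play the role of the change points that carry diagnostic information in the paper's formulation.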

  11. A meshfree local RBF collocation method for anti-plane transverse elastic wave propagation analysis in 2D phononic crystals

    NASA Astrophysics Data System (ADS)

    Zheng, Hui; Zhang, Chuanzeng; Wang, Yuesheng; Sladek, Jan; Sladek, Vladimir

    2016-01-01

    In this paper, a meshfree or meshless local radial basis function (RBF) collocation method is proposed to calculate the band structures of two-dimensional (2D) anti-plane transverse elastic waves in phononic crystals. Three new techniques are developed for calculating the normal derivative of the field quantity required by the treatment of the boundary conditions, which improve the stability of the local RBF collocation method significantly. The general form of the local RBF collocation method for a unit-cell with periodic boundary conditions is proposed, where the continuity conditions on the interface between the matrix and the scatterer are taken into account. The band structures or dispersion relations can be obtained by solving the eigenvalue problem and sweeping the boundary of the irreducible first Brillouin zone. The proposed local RBF collocation method is verified by using the corresponding results obtained with the finite element method. For different acoustic impedance ratios, various scatterer shapes, scatterer arrangements (lattice forms) and material properties, numerical examples are presented and discussed to show the performance and the efficiency of the developed local RBF collocation method compared to the FEM for computing the band structures of 2D phononic crystals.
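    A global Gaussian-RBF collocation solve for a 1D Poisson problem shows the basic mechanics; the paper's method is local, stencil-based, and aimed at 2D band structures, so the problem, shape parameter, and node count below are all illustrative:

```python
import numpy as np

# solve u'' = -pi^2 sin(pi x), u(0) = u(1) = 0 with Gaussian RBFs
n, eps = 15, 6.0
xc = np.linspace(0.0, 1.0, n)                 # centers double as collocation points
r = xc[:, None] - xc[None, :]
phi = np.exp(-(eps * r) ** 2)                 # Gaussian RBF values
phi_xx = (4.0 * eps**4 * r**2 - 2.0 * eps**2) * phi   # second x-derivatives
A = phi_xx.copy()
A[0], A[-1] = phi[0], phi[-1]                 # Dirichlet boundary rows
b = -np.pi**2 * np.sin(np.pi * xc)
b[0] = b[-1] = 0.0
lam = np.linalg.lstsq(A, b, rcond=None)[0]    # lstsq guards against flat-RBF ill-conditioning
err = np.max(np.abs(phi @ lam - np.sin(np.pi * xc)))
```

    Localizing the collocation to small stencils, as in the paper, tames the ill-conditioning that global RBF matrices develop as the shape parameter shrinks or the node count grows.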

  12. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute F-PSDM of order μ ∈ (0 , 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0 , 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulted linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of the Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters, is essential for our algorithm development.
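    As a sanity check on the Caputo derivative these matrices discretize, one can compare the textbook first-order L1 finite-difference formula (not the paper's Birkhoff-based pseudospectral construction) against the closed form D^μ t² = 2 t^(2-μ)/Γ(3-μ):

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, mu):
    """L1 approximation of the Caputo derivative of order mu in (0,1)
    at the final grid point, from samples f_vals on a uniform grid."""
    n = len(f_vals) - 1
    j = np.arange(n)
    b = (j + 1.0) ** (1.0 - mu) - j ** (1.0 - mu)   # L1 weights
    diffs = f_vals[n - j] - f_vals[n - j - 1]       # backward differences
    return dt ** (-mu) / gamma(2.0 - mu) * np.sum(b * diffs)

mu, N = 0.5, 2000
t = np.linspace(0.0, 1.0, N + 1)
approx = caputo_l1(t**2, 1.0 / N, mu)
exact = 2.0 / gamma(3.0 - mu)        # Caputo D^0.5 of t^2, evaluated at t = 1
```

    The L1 scheme converges only at rate O(dt^(2-μ)), which is one motivation for the well-conditioned spectral collocation matrices the paper constructs.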

  13. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    PubMed

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method, and the numerical results agree with the exact solutions to a high order of accuracy. Tables present the order of accuracy of the method, convergence graphs verify the convergence of the method, and error graphs show the excellent agreement between the results of this study and known results from the literature. PMID:25254252

  15. Free vibration analysis of stiffened plates with arbitrary planform by the general spline finite strip method

    NASA Astrophysics Data System (ADS)

    Sheikh, A. H.; Mukhopakhyay, M.

    1993-03-01

    The spline finite strip method, which has long been applied to the vibration analysis of bare plates, is extended in this paper to stiffened plates of arbitrary shape. Both concentrically and eccentrically stiffened plates have been analyzed. The main elegance of the formulation lies in the treatment of the stiffeners: the stiffeners can be placed anywhere within the plate strip and need not lie along the nodal lines. Stiffened plates with various shapes, boundary conditions, and dispositions of stiffeners have been analyzed by the proposed approach. Comparison with published results indicates excellent agreement.

  16. A Chebyshev Collocation Method for Moving Boundaries, Heat Transfer, and Convection During Directional Solidification

    NASA Technical Reports Server (NTRS)

    Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.

    1994-01-01

    Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semiconductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference-preconditioned conjugate minimum residual method, and a Picard-type iterative scheme.

  17. A novel monocular visual navigation method for cotton-picking robot based on horizontal spline segmentation

    NASA Astrophysics Data System (ADS)

    Xu, ShengYong; Wu, JuanJuan; Zhu, Li; Li, WeiHao; Wang, YiTian; Wang, Na

    2015-12-01

    Visual navigation is a fundamental technique for an intelligent cotton-picking robot. The many plant components and ground cover in the cotton field make furrow recognition and trajectory extraction difficult. In this paper, a new field navigation path extraction method is presented. Firstly, the color image in RGB color space is pre-processed by the OTSU threshold algorithm and noise filtering. Secondly, the binary image is divided into numerous horizontal spline areas. In each area, connected regions near the image's vertical center line are identified by the two-pass algorithm, and the center points of these connected regions are taken as candidate points for the navigation path. Thirdly, a series of navigation points is determined iteratively on the principle of the nearest distance between two candidate points in neighboring splines. Finally, the navigation path equation is fitted to the navigation points using the least squares method. Experiments prove that this method is accurate and effective, and it is suitable for visual navigation in the complex environment of a cotton field in different phases.

  18. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
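
    The essence of the approach, estimating the background as a cubic spline through nodes placed in peak-free regions and subtracting it, can be sketched on synthetic data. The node-selection rule below (excluding fixed windows around known peak positions) is a simplified stand-in for the paper's smoothing and discriminant formulas.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic EDXRF-like spectrum: smooth continuum plus two Gaussian lines.
E = np.linspace(0.0, 20.0, 400)                       # energy axis (keV)
background = 50.0 * np.exp(-E / 8.0)
peaks = 200*np.exp(-(E - 6)**2/0.05) + 120*np.exp(-(E - 14)**2/0.08)
spectrum = background + peaks

# Interpolation nodes restricted to peak-free regions, thinned to
# every 20th channel (stand-in for the discriminant-based node search).
mask = (np.abs(E - 6) > 1.5) & (np.abs(E - 14) > 1.5)
nodes = E[mask][::20]
node_vals = spectrum[mask][::20]

est_bg = CubicSpline(nodes, node_vals)(E)             # spline background
net = spectrum - est_bg                               # net (peak) spectrum
```

    Because the nodes sample only the slowly varying continuum, the spline reproduces the background under the peaks and the subtraction leaves the characteristic lines nearly intact.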

  19. Radial collocation methods for the onset of convection in rotating spheres

    NASA Astrophysics Data System (ADS)

    Sánchez, J.; Garcia, F.; Net, M.

    2016-03-01

    The viability of using collocation methods in radius and spherical harmonics in the angular variables to calculate convective flows in full spherical geometry is examined. As a test problem, the stability of the conductive state of a self-gravitating fluid sphere subject to rotation and internal heating is considered. A study of the behavior of different radial meshes previously used by several authors in polar coordinates, with and without the origin included, is first performed. The presence of spurious modes due to the treatment of the singularity at the origin, to the spherical harmonics truncation, and to the initialization of the eigenvalue solver is shown, and ways to eliminate them are presented. Finally, to show the usefulness of the method, the neutral stability curves at very high Taylor and moderate and small Prandtl numbers are calculated and shown.

  20. A new method for solving fixed geodetic boundary-value problem based on harmonic splines

    NASA Astrophysics Data System (ADS)

    Safari, Abdolreza; Sharifi, Mohammad Ali

    2010-05-01

    Nowadays, the determination of the Earth's gravity field has various applications in geodesy and geophysics. The gravity field of the Earth is determined via the solution of a boundary value problem (BVP). High-quality gravimetric data at the Earth's surface are available for gravity field determination and provide the necessary boundary data for the BVP. Because of precise GNSS-based positioning at the gravimetric stations, the boundary is treated as a fixed surface. In this paper, a new method to solve the fixed BVP based on harmonic splines is discussed. The algorithmic steps to solve the fixed boundary value problem on the Earth's surface are as follows: (i) remove the effect of a high degree/order ellipsoidal harmonic expansion and of the centrifugal field at the observation point on the Earth's surface; (ii) remove the effect of residual terrain at the observation point; (iii) approximate the gravity disturbances at the Earth's surface using harmonic splines; (iv) restore the effect of the reference field and residual terrain at the surface of the Earth. The new methodology is successfully tested by computing the surface potential over the western area of Iran.

  1. Linear B-spline finite element method for the improved Boussinesq equation

    NASA Astrophysics Data System (ADS)

    Lin, Qun; Wu, Yong Hong; Loxton, Ryan; Lai, Shaoyong

    2009-02-01

    In this paper, we develop and validate a numerical procedure for solving a class of initial boundary value problems for the improved Boussinesq equation. The finite element method with linear B-spline basis functions is used to discretize the nonlinear partial differential equation in space and derive a second order system involving only ordinary derivatives. It is shown that the coefficient matrix for the second order term in this system is invertible. Consequently, for the first time, the initial boundary value problem can be reduced to an explicit initial value problem to which many accurate numerical methods are readily applicable. Various examples are presented to validate this technique and demonstrate its capacity to simulate wave splitting, wave interaction and blow-up behavior.
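
    The invertibility claim rests on the structure of the linear B-spline (hat function) mass matrix, which on a uniform mesh is the tridiagonal matrix with rows h/6·[1, 4, 1]; strict diagonal dominance makes it nonsingular and well conditioned, so the semi-discrete system can be turned into an explicit ODE system. A minimal numerical check (the mesh size here is an arbitrary choice):

```python
import numpy as np

# Mass matrix of the linear B-spline (hat) basis on a uniform mesh of [0,1]:
# tridiagonal with rows h/6 * [1, 4, 1].
n = 50                                   # number of interior nodes
h = 1.0 / (n + 1)
M = (h / 6.0) * (4.0*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))

# Strict diagonal dominance (|4| > |1| + |1|) implies invertibility.
offdiag = np.abs(M).sum(axis=1) - np.abs(np.diag(M))
dominant = bool(np.all(np.abs(np.diag(M)) > offdiag))

cond = np.linalg.cond(M)                 # ~3, independent of the mesh size
```

    Because the condition number stays bounded as the mesh is refined, inverting this matrix to obtain an explicit system is numerically harmless.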

  2. The spline-Laplacian in clinical neurophysiology: a method to improve EEG spatial resolution.

    PubMed

    Nunez, P L; Pilgreen, K L

    1991-10-01

    An important goal of EEG research is to obtain practical methods to improve the spatial resolution of scalp-recorded potentials, i.e., to make surface data more accurately represent local underlying brain sources. This goal may be somewhat different from that of "localizing brain activity with EEG," since the latter approach often involves prior assumptions about the nature of sources. In this paper, we demonstrate that the spline-Laplacian, a relatively new approach, can yield dramatic improvement in spatial resolution when the average electrode spacing is less than about 3 cm. This approach is mostly independent of assumptions about sources and models of the head. The demonstration involves computer simulations, evoked potentials, normal spontaneous EEG, and epileptic spikes. PMID:1761706

  3. A boundary collocation meshfree method for the treatment of Poisson problems with complex morphologies

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Mai, Weijie; Liang, Bowen; Buchheit, Rudolph G.

    2015-01-01

    A new meshfree method based on a discrete transformation of Green's basis functions is introduced to simulate Poisson problems with complex morphologies. The proposed Green's Discrete Transformation Method (GDTM) uses source points that are located along a virtual boundary outside the problem domain to construct the basis functions needed to approximate the field. The optimal number of Green's function source points and their relative distances with respect to the problem boundaries are evaluated to obtain the best approximation of the partition of unity condition. A discrete transformation technique together with the boundary point collocation method is employed to evaluate the unknown coefficients of the solution series by satisfying the problem boundary conditions. A comprehensive convergence study is presented to investigate the accuracy and convergence rate of the GDTM. We also demonstrate the application of this meshfree method to simulating conductive heat transfer in a heterogeneous material system and the dissolved aluminum ion concentration in the electrolyte solution formed near a passive corrosion pit.

  4. Chebyshev collocation spectral lattice Boltzmann method for simulation of low-speed flows.

    PubMed

    Hejranfar, Kazem; Hajihassanpour, Mahya

    2015-01-01

    In this study, the Chebyshev collocation spectral lattice Boltzmann method (CCSLBM) is developed and assessed for the computation of low-speed flows. Both steady and unsteady flows are considered here. The discrete Boltzmann equation with the Bhatnagar-Gross-Krook approximation based on the pressure distribution function is considered, and the space discretization is performed by the Chebyshev collocation spectral method to achieve a highly accurate flow solver. To provide accurate unsteady solutions, the time integration of the temporal term in the lattice Boltzmann equation is performed with the fourth-order Runge-Kutta scheme. To achieve numerical stability and accuracy, physical boundary conditions based on the spectral solution of the governing equations implemented on the boundaries are used. An iterative procedure is applied to provide consistent initial conditions for the distribution function and the pressure field for the simulation of unsteady flows. The main advantage of using the CCSLBM over other high-order accurate lattice Boltzmann method (LBM)-based flow solvers is the decay of the error at exponential rather than at polynomial rates. Note also that the CCSLBM does not need any numerical dissipation or filtering for the solution to be stable, leading to highly accurate solutions. Three two-dimensional (2D) test cases are simulated herein: a regularized cavity, the Taylor vortex problem, and doubly periodic shear layers. The results obtained for these test cases are thoroughly compared with the analytical and available numerical results and show excellent agreement. The computational efficiency of the proposed solution methodology based on the CCSLBM is also examined by comparison with the standard streaming-collision (classical) LBM and two finite-difference LBM solvers. The study indicates that the CCSLBM provides more accurate and efficient solutions than these LBM solvers in terms of CPU and memory usage, and an exponential convergence is achieved rather than polynomial rates. The solution methodology proposed, the CCSLBM, is also extended to three dimensions and a 3D regularized cavity is simulated; the corresponding results are presented and validated. Indications are that the CCSLBM developed and applied herein is robust, efficient, and accurate for computing 2D and 3D low-speed flows. Note also that high-accuracy solutions obtained by applying the CCSLBM can be used as benchmark solutions for the assessment of other LBM-based flow solvers. PMID:25679733
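
    The exponential (spectral) convergence underlying such Chebyshev collocation solvers is easy to demonstrate with the standard Chebyshev-Gauss-Lobatto differentiation matrix; this sketch differentiates exp(x) rather than solving the lattice Boltzmann equation.

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto points and differentiation matrix
    (Trefethen's construction); assumes n >= 2."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0)**np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))      # diagonal = negative row sums
    return D, x

D, x = cheb(16)
# The derivative of exp(x) is exp(x); 17 points already give near
# machine-precision accuracy on a smooth function.
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
```

    Doubling the number of points in a finite-difference scheme roughly divides the error by a fixed factor; here adding a handful of points multiplies the accuracy by orders of magnitude, which is the "exponential rather than polynomial" decay the abstract refers to.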

  5. Data assimilation and uncertainty assessment for watershed water quality models: Probabilistic Collocation Method (PCM) based approaches

    NASA Astrophysics Data System (ADS)

    Wu, B.; Zheng, Y.

    2012-12-01

    Watershed water quality models are increasingly used in management practices. However, simulations by such complex models often involve significant uncertainty, and observational data for the models to assimilate are usually scarce. In one of our previous studies, a Probabilistic Collocation Method (PCM) based approach was developed to efficiently conduct uncertainty analysis (UA) and global parameter sensitivity analysis (SA), with no data assimilation considered. In this study, the PCM approach was coupled with Shuffled Complex Evolution (SCE-UA) and Markov chain Monte Carlo (MCMC) methods to perform model calibration and UA, respectively, with data assimilation. The PCM-based approaches were applied to a SWAT (Soil and Water Assessment Tool) model of sediment and total nitrogen pollution in the Newport Bay watershed (Southern California). Different error assumptions were tested. The major findings are: 1) under certain error scenarios, the PCM-based approaches can provide adequate calibration and uncertainty results with much less computational cost, compared to traditional methods; 2) the PCM-based global SA can generate critical information on the relative importance of model parameters, which helps accelerate the data assimilation process; and 3) based on a systematic assessment of uncertainty using computationally efficient techniques like the PCM-based approach, cost-effective strategies of water quality data acquisition can be appropriately designed.
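
    The computational appeal of collocation-based uncertainty propagation, obtaining statistics from a handful of deterministic model runs, can be illustrated with a toy scalar model in place of SWAT; the lognormal model y = exp(k) and the 7-node rule below are arbitrary choices for the demonstration.

```python
import numpy as np

# Propagate k ~ N(mu, sigma^2) through y = exp(k) with Gauss-Hermite
# collocation: the "model" is evaluated at only 7 deterministic nodes.
mu, sigma = 0.0, 0.5
t, w = np.polynomial.hermite.hermgauss(7)     # nodes/weights for e^{-t^2}
k = mu + np.sqrt(2.0) * sigma * t             # map nodes to N(mu, sigma^2)
y = np.exp(k)                                 # the 7 model runs

mean = np.sum(w * y) / np.sqrt(np.pi)         # output mean
var = np.sum(w * y**2) / np.sqrt(np.pi) - mean**2

# Lognormal moments give the exact answer for comparison.
exact_mean = np.exp(sigma**2 / 2)
exact_var = (np.exp(sigma**2) - 1.0) * np.exp(sigma**2)
```

    Where a Monte Carlo estimate of the same moments would need thousands of model runs, the collocation rule matches them to many digits with seven; this is the efficiency gain PCM-based approaches exploit when each model run is an expensive watershed simulation.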

  6. Spline Histogram Method for Reconstruction of Probability Density Functions of Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Docenko, Dmitrijs; Berzins, Karlis

    We describe the spline histogram algorithm, which is useful for visualizing the probability density function when setting up a statistical hypothesis for a test. The spline histogram is constructed from discrete data measurements using tensioned cubic spline interpolation of the cumulative distribution function, which is then differentiated and smoothed using the Savitzky-Golay filter. The optimal width of the filter is determined by minimization of the integrated square error function. The current distribution of the TCSplin algorithm, written in f77 with IDL and Gnuplot visualization scripts, is available from www.virac.lv/en/soft.html.

  7. Estimation of river pollution source using the space-time radial basis collocation method

    NASA Astrophysics Data System (ADS)

    Li, Zi; Mao, Xian-Zhong; Li, Tak Sing; Zhang, Shiyan

    2016-02-01

    River contaminant source identification problems can be formulated as an inverse model to estimate the missing source release history from the observed contaminant plume. In this study, the identification of pollution sources in rivers, where strong advection is dominant, is solved by the global space-time radial basis collocation method (RBCM). To search for the optimal shape parameter and scaling factor, which strongly determine the accuracy of the RBCM, a new cost function based on the residual errors of not only the observed data but also the specified governing equation and the initial and boundary conditions was constructed for the k-fold cross-validation technique. The performance of three global radial basis functions, Hardy's multiquadric, inverse multiquadric and Gaussian, was also compared in the test cases. The numerical results illustrate that the new cost function is a good indicator in the search for near-optimal solutions. Application to a real polluted river shows that the source release history is reasonably recovered, demonstrating that the RBCM with k-fold cross-validation is a powerful tool for source identification problems in advection-dominated rivers.

  8. Membrane covered duct lining for high-frequency noise attenuation: prediction using a Chebyshev collocation method.

    PubMed

    Huang, Lixi

    2008-11-01

    A spectral method of Chebyshev collocation with domain decomposition is introduced for linear interaction between sound and structure in a duct lined with flexible walls backed by cavities with or without a porous material. The spectral convergence is validated by a one-dimensional problem with a closed-form analytical solution, and is then extended to the two-dimensional configuration and compared favorably against a previous method based on the Fourier-Galerkin procedure and a finite element modeling. The nonlocal, exact Dirichlet-to-Neumann boundary condition is embedded in the domain decomposition scheme without imposing extra computational burden. The scheme is applied to the problem of high-frequency sound absorption by duct lining, which is normally ineffective when the wavelength is comparable with or shorter than the duct height. When a tensioned membrane covers the lining, however, it scatters the incident plane wave into higher-order modes, which then penetrate the duct lining more easily and get dissipated. For the frequency range of f=0.3-3 studied here, f=0.5 being the first cut-on frequency of the central duct, the membrane cover is found to offer an additional 0.9 dB attenuation per unit axial distance equal to half of the duct height. PMID:19045780

  9. High-order numerical solutions using cubic splines

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. The technique was applied to Burgers' equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.
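
    The fourth-order behavior on a uniform mesh can be checked directly for cubic spline interpolation of a smooth function; this is a sketch of such a convergence test, not the paper's collocation procedure for PDEs.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interp_error(n):
    """Max interpolation error of a periodic cubic spline of sin on n cells."""
    x = np.linspace(0.0, 2*np.pi, n + 1)
    y = np.sin(x)
    y[-1] = y[0]                          # enforce exact periodicity
    s = CubicSpline(x, y, bc_type='periodic')
    xf = np.linspace(0.0, 2*np.pi, 1000)
    return np.max(np.abs(s(xf) - np.sin(xf)))

e16, e32 = interp_error(16), interp_error(32)
order = np.log2(e16 / e32)                # ~4 on a uniform mesh
```

    Halving the mesh spacing reduces the error by roughly a factor of 16, the signature of fourth-order accuracy.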

  10. The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data.

    SciTech Connect

    Webster, Clayton; Tempone, Raul; Nobile, Fabio

    2007-12-01

    This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.

  11. A Critical Evaluation of the Resolution Properties of B-Spline and Compact Finite Difference Methods

    NASA Astrophysics Data System (ADS)

    Kwok, Wai Yip; Moser, Robert D.; Jiménez, Javier

    2001-12-01

    Resolution properties of B-spline and compact finite difference schemes are compared using Fourier analysis in periodic domains, and tests based on solution of the wave and heat equations in finite domains, with uniform and nonuniform grids. Results show that compact finite difference schemes have a higher convergence rate and in some cases better resolution. However, B-spline schemes have a more straightforward and robust formulation, particularly near boundaries on nonuniform meshes.
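
    Such resolution comparisons are usually phrased in terms of modified wavenumbers: applying each first-derivative scheme to exp(ikx) on a unit-spacing grid yields an effective wavenumber k', and better resolution means k' stays close to k further into the high-wavenumber range. A sketch for three classical finite-difference schemes (the B-spline curves of the paper are omitted):

```python
import numpy as np

# Effective (modified) wavenumber k' of first-derivative schemes applied to
# exp(i k x) on a unit-spacing grid; an ideal scheme would give k' = k.
k = np.linspace(0.01, np.pi, 200)

kp_central2 = np.sin(k)                         # 2nd-order central
kp_central4 = (8*np.sin(k) - np.sin(2*k)) / 6   # 4th-order central
kp_compact4 = 3*np.sin(k) / (2 + np.cos(k))     # 4th-order Pade (compact)
```

    Plotting k' against k shows the compact scheme tracking the exact line furthest before rolling off, which is the "better resolution" attributed to compact schemes in the Fourier analysis above.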

  12. A self-consistent estimate for linear viscoelastic polycrystals with internal variables inferred from the collocation method

    NASA Astrophysics Data System (ADS)

    Vu, Q. H.; Brenner, R.; Castelnau, O.; Moulinec, H.; Suquet, P.

    2012-03-01

    The correspondence principle is customarily used with the Laplace-Carson transform technique to tackle the homogenization of linear viscoelastic heterogeneous media. The main drawback of this method lies in the fact that the whole stress and strain histories have to be considered to compute the mechanical response of the material during a given macroscopic loading. Following a remark of Mandel (1966 Mécanique des Milieux Continus (Paris, France: Gauthier-Villars)), Ricaud and Masson (2009 Int. J. Solids Struct. 46 1599-1606) have shown the equivalence between the collocation method used to invert Laplace-Carson transforms and an internal variables formulation. In this paper, this new method is developed for the case of polycrystalline materials with general anisotropic properties for local and macroscopic behavior. Applications are provided for the case of constitutive relations accounting for glide of dislocations on particular slip systems. It is shown that the method yields accurate results that perfectly match the standard collocation method and reference full-field results obtained with an FFT numerical scheme. The formulation is then extended to the case of time- and strain-dependent viscous properties, leading to the incremental collocation method (ICM), which can be solved efficiently by a step-by-step procedure. Specifically, the introduction of isotropic and kinematic hardening at the slip system scale is considered.

  13. Assessing leakage detectability at geologic CO2 sequestration sites using the probabilistic collocation method

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Zeidouni, Mehdi; Nicot, Jean-Philippe; Lu, Zhiming; Zhang, Dongxiao

    2013-06-01

    We present an efficient methodology for assessing leakage detectability at geologic carbon sequestration sites under parameter uncertainty. Uncertainty quantification (UQ) and risk assessment are integral and, in many countries, mandatory components of geologic carbon sequestration projects. A primary goal of risk assessment is to evaluate leakage potential from anthropogenic and natural features, which constitute one of the greatest threats to the integrity of carbon sequestration repositories. The backbone of our detectability assessment framework is the probabilistic collocation method (PCM), an efficient, nonintrusive uncertainty-quantification technique that enables large-scale stochastic simulations based on results from only a small number of forward-model runs. The metric for detectability is expressed through an extended signal-to-noise ratio (SNR), which incorporates epistemic uncertainty associated with both reservoir and aquifer parameters. The spatially heterogeneous aquifer hydraulic conductivity is parameterized using a Karhunen-Loève (KL) expansion. Our methodology is demonstrated numerically for generating probability maps of pressure anomalies and for calculating SNRs. Results indicate that the likelihood of detecting anomalies depends on the level of uncertainty and the location of monitoring wells. A monitoring well located close to leaky locations may not always yield the strongest signal of leakage when the level of uncertainty is high. Therefore, our results highlight the need for closed-loop site characterization, monitoring network design, and leakage source detection.

  14. Accuracy of the collocated transfer standard method for wind instrument auditing

    NASA Astrophysics Data System (ADS)

    Lockhart, Thomas J.

    1989-08-01

    The application of collocated data collection for the purpose of estimating the accuracy of an operating wind instrument requires some baseline demonstrating the best agreement which can be expected. A series of data were carefully taken in 1982 from six different collocated wind instruments. The published reports of these data suggest that the best agreement from averaged wind-speed measurements will be between 0.3 and 0.5 m/s, and for wind direction it will be 4 to 6 degrees. A new analysis of the same data reduces the best expected agreement to about 0.2 m/s and 2 degrees. The several reasons for claiming the better potential accuracy for collocated measurement (auditing) with calibrated transfer standard instruments are discussed.

  15. Simulation of 2-D nonlinear waves using finite element method with cubic spline approximation

    NASA Astrophysics Data System (ADS)

    Sriram, V.; Sannasiraj, S. A.; Sundar, V.

    2006-07-01

    The estimation of forces and responses due to nonlinearities in ocean waves is vital in the design of offshore structures, as these forces and responses produce the extreme loads. Simulation of such events in a laboratory is quite laborious, and even the preparation of the driving signals for the wave boards requires numerical models. To achieve this task, the two-dimensional time-domain nonlinear problem has received considerable attention in recent years, in which a mixed Eulerian-Lagrangian (MEL) method is used. Most conventional methods need the free surface to be smoothed or regridded at particular, or even every, time steps of the simulation because of the Lagrangian character of the motion, even over short times; this causes numerical diffusion of energy in the system after a long time. To minimize this effect, the present study fits the free surface using a cubic spline approximation within a finite element approach for discretizing the domain, so that the requirement for smoothing/regridding becomes minimal. The efficiency of the present simulation procedure is shown for the standing wave problem, and the method is applied to the problems of sloshing and wave interaction with a submerged obstacle.

  16. Using the Stochastic Collocation Method for the Uncertainty Quantification of Drug Concentration Due to Depot Shape Variability

    PubMed Central

    Preston, J. Samuel; Tasdizen, Tolga; Terry, Christi M.; Cheung, Alfred K.

    2010-01-01

    Numerical simulations entail modeling assumptions that impact outcomes. Therefore, characterizing, in a probabilistic sense, the relationship between the variability of model selection and the variability of outcomes is important. Under certain assumptions, the stochastic collocation method offers a computationally feasible alternative to traditional Monte Carlo approaches for assessing the impact of model and parameter variability. We propose a framework that combines component shape parameterization with the stochastic collocation method to study the effect of drug depot shape variability on the outcome of drug diffusion simulations in a porcine model. We use realistic geometries segmented from MR images and employ level-set techniques to create two alternative univariate shape parameterizations. We demonstrate that once the underlying stochastic process is characterized, quantification of the introduced variability is quite straightforward and provides an important step in the validation and verification process. PMID:19272865

  17. Boundary element method using B-splines with applications to groundwater flow

    SciTech Connect

    Cabral, J.J.S.P.

    1992-01-01

    The Boundary Element Method (BEM) is now established as a suitable and efficient technique for the analysis of engineering problems. However, as in other discretization procedures, inaccuracies can be introduced as a result of the lack of derivative continuity between adjacent elements. A new element formulation has been developed for BEM analysis using uniform cubic B-splines. These functions can be employed to provide higher degrees of continuity along the geometric boundary of the region, and also as interpolation functions for the problem variables. The formulation was then extended to include multiple knots and non-uniform blending functions. In this way, it is possible to lower the degree of continuity of the main variable at points of geometric discontinuity. Initially, applications are presented related to potential problems governed by Laplace's equation, but there are no restrictions in the formulation regarding its extension to other physical problems. Continuity of the derivatives of the main variable is important to obtain a good representation of moving boundaries with iterative or time-marching schemes. This formulation is applied to steady-state and transient unconfined flow in homogeneous and inhomogeneous porous media. Finally, the formulation is applied to saltwater intrusion problems in confined, leaky and unconfined aquifers.

  18. Cardiac Position Sensitivity Study in the Electrocardiographic Forward Problem Using Stochastic Collocation and Boundary Element Methods

    PubMed Central

    Swenson, Darrell J.; Geneser, Sarah E.; Stinstra, Jeroen G.; Kirby, Robert M.; MacLeod, Rob S.

    2012-01-01

The electrocardiogram (ECG) is ubiquitously employed as a diagnostic and monitoring tool for patients experiencing cardiac distress and/or disease. It is widely known that changes in heart position resulting from, for example, posture of the patient (sitting, standing, lying) and respiration significantly affect the body-surface potentials; however, few studies have quantitatively and systematically evaluated the effects of heart displacement on the ECG. The goal of this study was to evaluate the impact of positional changes of the heart on the ECG in the specific clinical setting of myocardial ischemia. To carry out the necessary comprehensive sensitivity analysis, we applied a relatively novel and highly efficient statistical approach, the generalized polynomial chaos-stochastic collocation method, to a boundary element formulation of the electrocardiographic forward problem, and we drove these simulations with measured epicardial potentials from whole-heart experiments. Results of the analysis identified regions on the body-surface where the potentials were especially sensitive to realistic heart motion. The standard deviation (STD) of ST-segment voltage changes caused by the apex of a normal heart swinging forward and backward or side-to-side was approximately 0.2 mV. Variations were even larger, 0.3 mV, for a heart exhibiting elevated ischemic potentials. These variations could be large enough to mask or to mimic signs of ischemia in the ECG. Our results suggest possible modifications to ECG protocols that could reduce the diagnostic error related to postural changes in patients possibly suffering from myocardial ischemia. PMID:21909818

  19. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1986-01-01

    Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator-theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.

  20. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1985-01-01

Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator-theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.

  1. Ray-tracing method for creeping waves on arbitrarily shaped nonuniform rational B-splines surfaces.

    PubMed

    Chen, Xi; He, Si-Yuan; Yu, Ding-Feng; Yin, Hong-Cheng; Hu, Wei-Dong; Zhu, Guo-Qiang

    2013-04-01

    An accurate creeping ray-tracing algorithm is presented in this paper to determine the tracks of creeping waves (or creeping rays) on arbitrarily shaped free-form parametric surfaces [nonuniform rational B-splines (NURBS) surfaces]. The main challenge in calculating the surface diffracted fields on NURBS surfaces is due to the difficulty in determining the geodesic paths along which the creeping rays propagate. On one single parametric surface patch, the geodesic paths need to be computed by solving the geodesic equations numerically. Furthermore, realistic objects are generally modeled as the union of several connected NURBS patches. Due to the discontinuity of the parameter between the patches, it is more complicated to compute geodesic paths on several connected patches than on one single patch. Thus, a creeping ray-tracing algorithm is presented in this paper to compute the geodesic paths of creeping rays on the complex objects that are modeled as the combination of several NURBS surface patches. In the algorithm, the creeping ray tracing on each surface patch is performed by solving the geodesic equations with a Runge-Kutta method. When the creeping ray propagates from one patch to another, a transition method is developed to handle the transition of the creeping ray tracing across the border between the patches. This creeping ray-tracing algorithm can meet practical requirements because it can be applied to the objects with complex shapes. The algorithm can also extend the applicability of NURBS for electromagnetic and optical applications. The validity and usefulness of the algorithm can be verified from the numerical results. PMID:23595326
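The per-patch step of the algorithm, solving the geodesic equations with a Runge-Kutta method, can be illustrated on a simpler parametric surface than a NURBS patch. A minimal sketch on the unit sphere (an assumption for illustration only), where the equator is a known geodesic:

```python
import numpy as np

# Geodesic equations on the unit sphere r(u, v) = (sin u cos v, sin u sin v, cos u):
#   u'' =  sin(u) cos(u) (v')^2
#   v'' = -2 cot(u) u' v'
def rhs(y):
    u, v, du, dv = y
    return np.array([du, dv,
                     np.sin(u) * np.cos(u) * dv**2,
                     -2.0 / np.tan(u) * du * dv])

def rk4_step(y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Start on the equator (u = pi/2) moving in the v direction; the exact
# geodesic is the equator itself, so u should stay at pi/2.
y = np.array([np.pi / 2, 0.0, 0.0, 1.0])
for _ in range(1000):
    y = rk4_step(y, 0.01)
print(abs(y[0] - np.pi / 2))  # deviation from the equator stays near machine zero
```

On a NURBS patch the right-hand side would instead involve the Christoffel symbols computed from the patch's first fundamental form, and the transition method of the paper handles crossing patch borders; neither is reproduced here.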

  2. Algebraic grid adaptation method using non-uniform rational B-spline surface modeling

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, B. K.

    1992-01-01

    An algebraic adaptive grid system based on equidistribution law and utilized by the Non-Uniform Rational B-Spline (NURBS) surface for redistribution is presented. A weight function, utilizing a properly weighted boolean sum of various flow field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.

  3. A meshless scheme for partial differential equations based on multiquadric trigonometric B-spline quasi-interpolation

    NASA Astrophysics Data System (ADS)

    Gao, Wen-Wu; Wang, Zhi-Gang

    2014-11-01

    Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. This scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of the previous schemes based on quasi-interpolation (requiring some additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries in the spatial domain). Moreover, the scheme also overcomes the difficulty of the meshless collocation methods (i.e., yielding a notorious ill-conditioned linear system of equations for large collocation points). The numerical examples that are presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions.

  4. Splines for diffeomorphisms.

    PubMed

    Singh, Nikhil; Vialard, François-Xavier; Niethammer, Marc

    2015-10-01

    This paper develops a method for higher order parametric regression on diffeomorphisms for image regression. We present a principled way to define curves with nonzero acceleration and nonzero jerk. This work extends methods based on geodesics which have been developed during the last decade for computational anatomy in the large deformation diffeomorphic image analysis framework. In contrast to previously proposed methods to capture image changes over time, such as geodesic regression, the proposed method can capture more complex spatio-temporal deformations. We take a variational approach that is governed by an underlying energy formulation, which respects the nonflat geometry of diffeomorphisms. Such an approach of minimal energy curve estimation also provides a physical analogy to particle motion under a varying force field. This gives rise to the notion of the quadratic, the cubic and the piecewise cubic splines on the manifold of diffeomorphisms. The variational formulation of splines also allows for the use of temporal control points to control spline behavior. This necessitates the development of a shooting formulation for splines. The initial conditions of our proposed shooting polynomial paths in diffeomorphisms are analogous to the Euclidean polynomial coefficients. We experimentally demonstrate the effectiveness of using the parametric curves both for synthesizing polynomial paths and for regression of imaging data. The performance of the method is compared to geodesic regression. PMID:25980676

  5. Interchangeable spline reference guide

    SciTech Connect

    Dolin, R.M.

    1994-05-01

The WX-Division Integrated Software Tools (WIST) Team evolved from two previous committees. First was the W78 Solid Modeling Pilot Project's Spline Subcommittee, which later evolved into the WX-Division Spline Committee. The mission of the WIST team is to investigate current CAE engineering processes relating to complex geometry and to develop methods for improving those processes. Specifically, the WIST team is developing technology that allows the Division to use multiple spline representations. We are also updating the contour system (CONSYS) data base to take full advantage of the Division's expanding electronic engineering process. Both of these efforts involve developing interfaces to commercial CAE systems and writing new software. The WIST team is comprised of members from WX-11, -12, and -13. This "cross-functional" approach to software development is somewhat new in the Division, so an effort is being made to formalize our processes and assure quality at each phase of development. Chapter one represents a theory manual and is one phase of the formal process. The theory manual is followed by a software requirements document, a specification document, and software verification and validation documents. The purpose of this guide is to present the theory underlying the interchangeable spline technology and application. Verification and validation test results are also presented for proof of principle.

  6. Application of collocation spectral domain decomposition method to solve radiative heat transfer in 2D partitioned domains

    NASA Astrophysics Data System (ADS)

    Chen, Shang-Shang; Li, Ben-Wen

    2014-12-01

    A collocation spectral domain decomposition method (CSDDM) based on the influence matrix technique is developed to solve radiative transfer problems within a participating medium of 2D partitioned domains. In this numerical approach, the spatial domains of interest are decomposed into rectangular sub-domains. The radiative transfer equation (RTE) in each sub-domain is angularly discretized by the discrete ordinates method (DOM) with the SRAPN quadrature scheme and then is solved by the CSDDM directly. Three test geometries that include square enclosure and two enclosures with one baffle and one centered obstruction are used to validate the accuracy of the developed method and their numerical results are compared to the data obtained by other researchers. These comparisons indicate that the CSDDM has a good accuracy for all solutions. Therefore this method can be considered as a useful approach for the solution of radiative heat transfer problems in 2D partitioned domains.

  7. Split spline screw

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

A split spline screw type payload fastener assembly, including three identical male and female type split spline sections, is discussed. The male spline sections are formed on the head of a male type spline driver. Each of the split male type spline sections has an outwardly projecting load bearing segment including a convex upper surface which is adapted to engage a complementary concave surface of a female spline receptor in the form of a hollow bolt head. Additionally, the male spline section includes a horizontal spline releasing segment and a spline tightening segment below each load bearing segment. The spline tightening segment consists of a vertical web of constant thickness. The web has at least one flat vertical wall surface which is designed to contact a generally flat vertically extending wall surface tab of the bolt head. Mutual interlocking and unlocking of the male and female splines results upon clockwise and counterclockwise turning of the driver element.

  8. Split spline screw

    NASA Astrophysics Data System (ADS)

    Vranish, John M.

    1993-11-01

A split spline screw type payload fastener assembly, including three identical male and female type split spline sections, is discussed. The male spline sections are formed on the head of a male type spline driver. Each of the split male type spline sections has an outwardly projecting load bearing segment including a convex upper surface which is adapted to engage a complementary concave surface of a female spline receptor in the form of a hollow bolt head. Additionally, the male spline section includes a horizontal spline releasing segment and a spline tightening segment below each load bearing segment. The spline tightening segment consists of a vertical web of constant thickness. The web has at least one flat vertical wall surface which is designed to contact a generally flat vertically extending wall surface tab of the bolt head. Mutual interlocking and unlocking of the male and female splines results upon clockwise and counterclockwise turning of the driver element.

  9. A new version of the map of recent vertical crustal movements in the Carpatho-Balkan region based on the collocation method

    NASA Astrophysics Data System (ADS)

    Nikonov, A. A.; Skryl, V. A.; Lisovets, A. G.

    1987-12-01

The paper is concerned with the problem of adequately mapping the resulting discrete movement velocity values as a field. The authors used a statistical method (collocation) to represent measurement results for the Carpatho-Balkan region as a field of recent crustal movement velocity. The collocation method makes it possible to avoid the strong influence that the available geomorphological and geological knowledge of the area, and the individual notions of the map-makers, would otherwise exert on the drawing of velocity isolines. The new version of the map of recent vertical movements in the Carpatho-Balkan region shows some notable differences from the 1979 version.

  10. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

This report describes a stochastic collocation method to adequately handle a physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method allows those summations to be collapsed into a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides as a numerical example the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
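The collocation idea can be sketched in one stochastic dimension: run the deterministic model only at Gauss quadrature nodes of the input density and recover moments by weighted summation. A hedged toy example (the model here is a stand-in, not the report's Riemann problem):

```python
import numpy as np

# Deterministic "simulation": a stand-in nonlinear model of one uncertain
# input xi ~ N(0, 1).  (Illustrative only; not the report's Riemann problem.)
model = np.exp

# Stochastic collocation: evaluate the model only at Gauss-Hermite nodes
# and combine the outputs with the quadrature weights.
nodes, weights = np.polynomial.hermite_e.hermegauss(20)  # weight e^{-x^2/2}
w = weights / np.sqrt(2 * np.pi)                         # normalize to the N(0,1) pdf
vals = model(nodes)
mean = np.dot(w, vals)
var = np.dot(w, vals**2) - mean**2

# Exact moments of exp(xi): mean = e^{1/2}, variance = e^2 - e.
print(mean, np.exp(0.5))
print(var, np.exp(2) - np.exp(1))
```

Each node is an ordinary deterministic run, which is what makes the approach non-intrusive; in several stochastic dimensions the nodes become a tensor or sparse grid.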

  11. a High-Order B-Spline Based Panel Method for Unsteady, Nonlinear, Three-Dimensional Free-Surface Flows.

    NASA Astrophysics Data System (ADS)

    Coaxley, Philip Scott

    1995-11-01

A novel high-order panel method is developed for potential flows. These Uniform Bi-Cubic B-Spline (UBCBS) panels are intended for the simulation of unsteady, nonlinear, three-dimensional, free-surface waves. The integral-equation-based method is formulated using Green's theorem, with singular-kernel integrals which are evaluated numerically using a series of variable transformations to desingularize the kernels. The issue of edge conditions for surfaces defined by B-splines is addressed, and the edge conditions are integrated into the panel method. A novel implementation of the familiar Orlanski radiation conditions is developed specifically for use with these panels. Unlike existing panel methods, UBCBS panels guarantee a continuous normal vector everywhere on the discretized free-surface. They are high-order, allowing representation of complex wave forms with relatively few grid points. Analytical expressions are presented for the spatial derivatives of the potential, allowing evaluation of the velocity vector at any point on the problem boundary, without finite differencing. Free-surface boundary conditions are imposed without the introduction of numerical damping, yet the computations appear stable. Validation and convergence studies are conducted by comparing computed results with analytical solutions for several test cases, including the well-known Cauchy-Poisson wave elevation problem. Computations are presented for the nonlinear Cauchy-Poisson problem and for the nonlinear waves generated by a translating submerged spheroid. The results provide a high level of detail using a relatively small number of gridpoints. Pressure is easily and accurately evaluated. Symmetric edge conditions are shown to be effective for modeling a plane of symmetry, and the radiation condition effectively suppresses spurious reflections at the open boundary.

  12. Discretely conservative, non-dissipative, and stable collocated method for solving the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Ranjan, Reetesh; Pantano, Carlos

    2010-11-01

We present a new method for solving the incompressible Navier-Stokes equations. The method utilizes a collocated arrangement of all variables in space. It uses centered second-order accurate finite-difference approximations for all spatial derivatives and a third-order IMEX approach for time integration. The proposed method ensures discrete conservation of mass and momentum by discretizing the conservative form of the equations from the outset and never relying on continuum relations afterward. This ensures uniform high order of accuracy in time for all fields, including pressure. The pressure-momentum coupled equations can be easily segregated and solved sequentially, as in the pressure projection method but without a splitting error. In this approach there are no spurious kernel (checkerboard) modes in the embedded elliptic pressure problem. The method has been applied to different canonical problems, including a fully periodic box, a periodic channel, an inflow-outflow channel and a lid-driven cavity flow. Near wall boundaries, spatial derivatives are obtained using the weak form of the conservation equations, similar to a finite element approach. The results from some of the sample cases will be presented to illustrate the features of the method.

  13. A pseudo-spectral collocation method applied to the problem of convective diffusive transport in fluids subject to unsteady residual accelerations

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan; Ouazzani, Jalil

    1989-01-01

The problem of determining the sensitivity of Bridgman-Stockbarger directional solidification experiments to residual accelerations of the type associated with spacecraft in low earth orbit is analyzed numerically using a pseudo-spectral collocation method. The approach employs a novel iterative scheme combining the method of artificial compressibility and a generalized ADI method. The results emphasize the importance of the consideration of residual accelerations and careful selection of the operating conditions in order to take full advantage of the low gravity conditions.

  14. A fractional factorial probabilistic collocation method for uncertainty propagation of hydrologic model parameters in a reduced dimensional space

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.

    2015-10-01

    In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.

  15. A nonclassical Radau collocation method for solving the Lane-Emden equations of the polytropic index 4.75 ≤ α < 5

    NASA Astrophysics Data System (ADS)

    Tirani, M. D.; Maleki, M.; Kajani, M. T.

    2014-11-01

    A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and are utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.
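A minimal sketch of the domain transformation step, assuming the common algebraic map t = x/(L + x) (the paper's actual smooth, strictly monotonic transformation may differ):

```python
import numpy as np

# One smooth, strictly monotonic map from x in [0, inf) onto t in [0, 1),
# with a scale parameter L; an assumed illustrative choice.
L = 1.0
to_t = lambda x: x / (L + x)
to_x = lambda t: L * t / (1.0 - t)

# The map is monotone and inverts cleanly, so collocation points placed
# in [0, 1) correspond to points covering the whole half-line.
t = np.linspace(0.0, 0.9, 10)
x = to_x(t)
assert np.all(np.diff(x) > 0)   # strictly increasing
assert np.allclose(to_t(x), t)  # exact round trip
print(np.round(x, 3))
```

After the map, derivatives in x pick up the chain-rule factor dt/dx = L/(L + x)², and the transformed problem on [0, 1) is discretized at the chosen collocation points.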

  16. Simulation of non-linear free surface motions in a cylindrical domain using a Chebyshev-Fourier spectral collocation method

    NASA Astrophysics Data System (ADS)

    Chern, M. J.; Borthwick, A. G. L.; Eatock Taylor, R.

    2001-06-01

When a liquid is perturbed, its free surface may experience highly non-linear motions in response. This paper presents a numerical model of the three-dimensional hydrodynamics of an inviscid liquid with a free surface. The mathematical model is based on potential theory in cylindrical co-ordinates with a σ-transformation applied between the bed and free surface in the vertical direction. Chebyshev spectral elements discretize space in the vertical and radial directions; Fourier spectral elements are used in the angular direction. Higher derivatives are approximated using a collocation (or pseudo-spectral) matrix method. The numerical scheme is validated for non-linear transient sloshing waves in a cylindrical tank containing a circular surface-piercing cylinder at its centre. Excellent agreement is obtained with Ma and Wu's [Second order transient waves around a vertical cylinder in a tank. Journal of Hydrodynamics 1995; Ser. B4: 72-81] second-order potential theory. Further evidence for the capability of the scheme to predict complicated three-dimensional, and highly non-linear, free surface motions is given by the evolution of an impulse wave in a cylindrical tank and in an open domain.

  17. Modeling diffusion in random heterogeneous media: Data-driven models, stochastic collocation and the variational multiscale method

    SciTech Connect

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2007-09-10

    In recent years, there has been intense interest in understanding various physical phenomena in random heterogeneous media. Any accurate description/simulation of a process in such media has to satisfactorily account for the twin issues of randomness as well as the multilength scale variations in the material properties. An accurate model of the material property variation in the system is an important prerequisite towards complete characterization of the system response. We propose a general methodology to construct a data-driven, reduced-order model to describe property variations in realistic heterogeneous media. This reduced-order model then serves as the input to the stochastic partial differential equation describing thermal diffusion through random heterogeneous media. A decoupled scheme is used to tackle the problems of stochasticity and multilength scale variations in properties. A sparse-grid collocation strategy is utilized to reduce the solution of the stochastic partial differential equation to a set of deterministic problems. A variational multiscale method with explicit subgrid modeling is used to solve these deterministic problems. An illustrative example using experimental data is provided to showcase the effectiveness of the proposed methodology.

  18. Systematic assessment of the uncertainty in integrated surface water-groundwater modeling based on the probabilistic collocation method

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Zheng, Yi; Tian, Yong; Wu, Xin; Yao, Yingying; Han, Feng; Liu, Jie; Zheng, Chunmiao

    2014-07-01

    Systematic uncertainty analysis (UA) has rarely been conducted for integrated modeling of surface water-groundwater (SW-GW) systems, which is subject to significant uncertainty, especially at a large basin scale. The main objective of this study was to explore an innovative framework in which a systematic UA can be effectively and efficiently performed for integrated SW-GW models of large river basins and to illuminate how process understanding, model calibration, data collection, and management can benefit from such a systematic UA. The framework is based on the computationally efficient Probabilistic Collocation Method (PCM) linked with a complex simulation model. The applicability and advantages of the framework were evaluated and validated through an integrated SW-GW model for the Zhangye Basin in the middle Heihe River Basin, northwest China. The framework for systematic UA allows for a holistic assessment of the modeling uncertainty, yielding valuable insights into the hydrological processes, model structure, data deficit, and potential effectiveness of management. The study shows that, under the complex SW-GW interactions, the modeling uncertainty has great spatial and temporal variabilities and is highly output-dependent. Overall, this study confirms that a systematic UA should play a critical role in integrated SW-GW modeling of large river basins, and the PCM-based approach is a promising option to fulfill this role.
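The PCM machinery can be sketched for a single uncertain input: project the model output onto probabilists' Hermite polynomials using Gauss-Hermite quadrature. A toy stand-in for the SW-GW model, chosen because exp has known polynomial chaos coefficients e^{1/2}/n!:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Toy model with one standard-normal input (a stand-in for the SW-GW model).
model = np.exp

# Probabilistic collocation: project the output onto probabilists' Hermite
# polynomials He_n via Gauss-Hermite quadrature; c_n = E[f He_n] / n!.
order = 6
x, w = He.hermegauss(16)
w = w / np.sqrt(2 * np.pi)  # normalize to the N(0,1) density
fx = model(x)

coef = []
for n in range(order + 1):
    e = np.zeros(n + 1); e[n] = 1.0  # coefficient vector selecting He_n
    coef.append(np.dot(w, fx * He.hermeval(x, e)) / factorial(n))

# For exp(xi) the exact PCE coefficients are e^{1/2} / n!; c_0 is the mean.
print(coef[0], np.exp(0.5))
```

Once the coefficients are known, moments and output distributions come from the cheap polynomial surrogate rather than from repeated runs of the full model, which is the efficiency the study exploits.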

  19. A Fully Relativistic B-Spline R-Matrix Method for Electron and Photon Collisions with Atoms and Ions

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Bartschat, Klaus

    2008-10-01

We have extended our B-spline R-matrix (close-coupling) method [1] to fully account for relativistic effects in a Dirac-Coulomb formulation. Our numerical implementation of the close-coupling method enables us to construct term-dependent, non-orthogonal sets of one-electron orbitals for the bound and continuum electrons. This is a critical aspect for complex targets, where individually optimized one-electron orbitals can significantly reduce the size of the multi-configuration expansions needed for an accurate target description. Core-valence correlation effects are treated fully ab initio, rather than through semi-empirical model potentials. The method is described in detail and will be illustrated by comparing our theoretical predictions for e-Cs collisions [2] with benchmark experiments for angle-integrated and angle-differential cross sections [3], various spin-dependent scattering asymmetries [4], and Stokes parameters measured in superelastic collisions with laser-excited atoms [5]. [1] O. Zatsarinny, Comp. Phys. Commun. 174, 273 (2006). [2] O. Zatsarinny and K. Bartschat, Phys. Rev. A 77, 062701 (2008). [3] W. Gehenn and E. Reichert, J. Phys. B 10, 3105 (1977). [4] G. Baum et al., Phys. Rev. A 66, 022705 (2002) and 70, 012707 (2004). [5] D.S. Slaughter et al., Phys. Rev. A 75, 062717 (2007).

  20. A Full-Relativistic B-Spline R-Matrix Method for Electron and Photon Collisions with Atoms and Ions

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Bartschat, Klaus

    2008-05-01

We have extended our B-spline R-matrix (close-coupling) method [1] to fully account for relativistic effects in a Dirac-Coulomb formulation. Our numerical implementation of the close-coupling method enables us to construct term-dependent, non-orthogonal sets of one-electron orbitals for the bound and continuum electrons. This is a critical aspect for complex targets, where individually optimized one-electron orbitals can significantly reduce the size of the multi-configuration expansions needed for an accurate target description. Furthermore, core-valence correlation effects are treated fully ab initio, rather than through semi-empirical, and usually local, model potentials. The method will be described in detail and illustrated by comparing our theoretical predictions for e-Cs collisions with benchmark experiments for angle-integrated and angle-differential cross sections [2], various spin-dependent scattering asymmetries [3], and Stokes parameters measured in superelastic collisions with laser-excited atoms [4]. [1] O. Zatsarinny, Comp. Phys. Commun. 174, 273 (2006). [2] W. Gehenn and E. Reichert, J. Phys. B 10, 3105 (1977). [3] G. Baum et al., Phys. Rev. A 66, 022705 (2002) and 70, 012707 (2004). [4] D.S. Slaughter et al., Phys. Rev. A 75, 062717 (2007).

  1. Spline approximation of quantile functions

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1983-01-01

    The study reported here explored the development and utility of a spline representation of the sample quantile function of a continuous probability distribution in providing a functional description of a random sample and a method of generating random variables. With a spline representation, the random samples are generated by transforming a sample of uniform random variables to the interval of interest. This is useful, for example, in simulation studies in which a random sample represents the only known information about the distribution. The spline formulation considered here consists of a linear combination of cubic basis splines (B-splines) fit in a least squares sense to the sample quantile function using equally spaced knots. The following discussion is presented in five parts. The first section highlights major results realized from the study. The second section further details the results obtained. The methodology used is described in the third section, followed by a brief discussion of previous research on quantile functions. Finally, the results of the study are evaluated.
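The procedure described, a least-squares cubic B-spline fit to the sample quantile function followed by transformation of uniform variates, can be sketched with SciPy. A hedged illustration (the sample, knot count, and knot placement here are assumptions, not the study's choices):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
sample = np.sort(rng.normal(size=2000))  # distribution "known" only via the sample
p = (np.arange(1, sample.size + 1) - 0.5) / sample.size  # plotting positions

# Least-squares cubic B-spline fit of the sample quantile function Q(p)
# using equally spaced interior knots, as in the study.
knots = np.linspace(0.05, 0.95, 9)
Q = LSQUnivariateSpline(p, sample, knots, k=3)

# Generate new variates by pushing uniform draws through the fitted spline.
u = rng.uniform(0.01, 0.99, size=5)
print(Q(u))           # approximately N(0, 1) draws
print(float(Q(0.5)))  # fitted median, near 0
```

Evaluating the spline is cheap, so once fitted it serves both as a functional summary of the sample and as a random-variate generator for simulation studies.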

  2. Error estimation in high dimensional space for stochastic collocation methods on arbitrary sparse samples

    NASA Astrophysics Data System (ADS)

    Archibald, Rick

    2013-10-01

We have developed a fast method that gives high-order error estimates of piecewise smooth functions in high dimensions at low computational cost. The method uses polynomial annihilation to estimate the smoothness of local regions of arbitrary samples in stochastic simulations. We compare the error estimation of this method to Gaussian process error estimation techniques.

  3. Application of Collocation Spectral Method for Irregular Convective-Radiative Fins with Temperature-Dependent Internal Heat Generation and Thermal Properties

    NASA Astrophysics Data System (ADS)

    Sun, Ya-Song; Ma, Jing; Li, Ben-Wen

    2015-11-01

    A collocation spectral method (CSM) is developed to solve the fin heat transfer in triangular, trapezoidal, exponential, concave parabolic, and convex geometries. In the thermal process of fin heat transfer, the fin dissipates heat to the environment by convection and radiation; the internal heat generation, thermal conductivity, heat transfer coefficient, and surface emissivity are functions of temperature; the ambient fluid temperature and radiative sink temperature are considered to be nonzero. The temperature in the fin is approximated by Chebyshev polynomials at spectral collocation points. Thus, the differential form of the energy equation is transformed into the matrix form of an algebraic equation. In order to test the efficiency and accuracy of the developed method, five types of convective-radiative fins are examined. Results obtained by the CSM are assessed by comparison with available results in the literature. These comparisons indicate that the CSM can be recommended as a good option to simulate and predict the thermal performance of convective-radiative fins.
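
    To make the collocation idea concrete, here is a minimal sketch of Chebyshev collocation with NumPy, applied to a simple linear two-point boundary value problem rather than the paper's nonlinear fin equation; the test problem and the truncation order N are illustrative assumptions.

```python
# Chebyshev collocation for u''(x) = -pi^2 sin(pi x), u(-1) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
import numpy as np
from numpy.polynomial import chebyshev as C

N = 16
x = np.cos(np.pi * np.arange(N + 1) / N)  # Chebyshev-Gauss-Lobatto points

# Column j of V holds T_j(x); D2 maps series coefficients to u'' at the points.
V = C.chebvander(x, N)
D2 = np.zeros((N + 1, N + 1))
for j in range(N + 1):
    c = np.zeros(N + 1)
    c[j] = 1.0
    D2[:, j] = C.chebval(x, C.chebder(c, 2))

# Collocate the ODE at interior points; impose boundary conditions at the ends.
A = D2.copy()
b = -np.pi**2 * np.sin(np.pi * x)
A[0], b[0] = V[0], 0.0   # x[0] = +1
A[N], b[N] = V[N], 0.0   # x[N] = -1
coef = np.linalg.solve(A, b)

err = np.max(np.abs(C.chebval(x, coef) - np.sin(np.pi * x)))
print(err < 1e-6)  # spectral accuracy for a smooth solution: True
```

The same recipe, with temperature-dependent coefficients and an iterative linearization, yields the matrix-form algebraic equation the abstract describes.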

  4. Combining geodesic interpolating splines and affine transformations.

    PubMed

    Younes, Laurent

    2006-05-01

    Geodesic spline interpolation is a simple and efficient approach for landmark matching by nonambiguous mappings (diffeomorphisms), combining classic spline interpolation and flows of diffeomorphisms. Here, we extend the method to incorporate the estimation of an affine transformation, yielding a consistent and numerically stable algorithm. A theoretical justification is provided by studying the existence of the global minimum of the energy. PMID:16671292

  5. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered by the computational cost associated with the numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use the hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for a PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional MCMC simulations. The computational efficiency is expected to be even more beneficial for more computationally expensive groundwater problems.

  6. Conforming Chebyshev spectral collocation methods for the solution of laminar flow in a constricted channel

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1990-01-01

    The numerical simulation of steady planar two-dimensional, laminar flow of an incompressible fluid through an abruptly contracting channel using spectral domain decomposition methods is described. The key features of the method are the decomposition of the flow region into a number of rectangular subregions and spectral approximations which are pointwise C(1) continuous across subregion interfaces. Spectral approximations to the solution are obtained for Reynolds numbers in the range 0 to 500. The size of the salient corner vortex decreases as the Reynolds number increases from 0 to around 45. As the Reynolds number is increased further the vortex grows slowly. A vortex is detected downstream of the contraction at a Reynolds number of around 175 that continues to grow as the Reynolds number is increased further.

  7. Incidental Learning of Collocation

    ERIC Educational Resources Information Center

    Webb, Stuart; Newton, Jonathan; Chang, Anna

    2013-01-01

    This study investigated the effects of repetition on the learning of collocation. Taiwanese university students learning English as a foreign language simultaneously read and listened to one of four versions of a modified graded reader that included different numbers of encounters (1, 5, 10, and 15 encounters) with a set of 18 target collocations.…

  8. High Order Continuous Approximation for the Top Order Methods

    NASA Astrophysics Data System (ADS)

    Mazzia, Francesca; Sestini, Alessandra; Trigiante, Donato

    2007-09-01

    The Top Order Methods are a class of linear multistep schemes to be used as Boundary Value Methods and with the feature of having maximal order (2k if k is the number of steps). This often implies that accurate numerical approximations of general BVPs can be produced just using the 3-step TOM. In this work, we consider two different possibilities for defining a continuous approximation of the numerical solution, the standard C1 cubic spline collocating the differential equation at the knots and a C2k-1 spline of degree 2k. The computation of the B-spline coefficients of this higher degree spline requires the solution of N+2k banded linear systems of size 4k×4k. The resulting B-spline function is convergent of order 2k to the exact solution of the continuous BVPs.

  9. Computer program for fitting low-order polynomial splines by method of least squares

    NASA Technical Reports Server (NTRS)

    Smith, P. J.

    1972-01-01

    FITLOS is a computer program which implements a new curve-fitting technique. The main program reads input data, calls the appropriate subroutines for curve fitting, calculates a statistical analysis, and writes output data. The method was devised as a result of the need to suppress noise in the calibration of multiplier phototube capacitors.

  10. Shape identification technique for a two-dimensional elliptic system by boundary integral equation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1989-01-01

    The geometrical structure of the boundary shape for a two-dimensional boundary value problem is identified. The output least square identification method is considered for estimating partially unknown boundary shapes. A numerical parameter estimation technique using the spline collocation method is proposed.

  11. Stability Analysis of Parametrically Excited Systems Using Spectral Collocation

    NASA Astrophysics Data System (ADS)

    Lee, K.-Y.; Renshaw, A. A.

    2002-12-01

    The spectral collocation method is used to determine the stability of parametrically excited systems and compared with the traditional transition matrix approach. Results from a series of test problems demonstrate that spectral collocation converges rapidly. In addition, the spectral collocation method preserves the sparsity of the underlying system matrices, a property not shared by the transition matrix approach. As a result, spectral collocation can be used for very large systems and can utilize sparse eigensolvers to reduce computational memory and time. For the large-scale system studied (up to 40 degrees of freedom), the spectral collocation method was on average an order of magnitude faster than the transition matrix approach using Matlab. This computational advantage is implementation specific; in a C implementation of the algorithm, the transition matrix method is faster than the spectral collocation. Overall, the method proves to be simple, efficient, reliable, and generally competitive with the transition matrix method.

  12. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    SciTech Connect

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

    A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising e.g. from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-splines are used to approximate the object. If a ''proper'' collection of B-splines is chosen such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, the intuition being that a relatively dense set of fine scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy search algorithm to find and delete redundant knots based on the estimation of a weight associated with each basis vector. The overall algorithm iterates by inserting and deleting knots and ends up with far fewer knots than pixels to represent the object, while the estimation error stays within a certain tolerance. Thus, an efficient reconstruction can be obtained which significantly reduces the complexity of the problem. In this thesis, the adaptive B-spline method is applied to a cross-well tomography problem. The problem comes from the application of finding underground pollution plumes. The cross-well tomography method is applied by placing arrays of electromagnetic transmitters and receivers along the boundaries of the region of interest. By utilizing an inverse scattering method, a linear inverse model is set up, and furthermore the adaptive B-spline method described above is applied. The simulation results show that the B-spline method reduces the dimensional complexity by 90% compared with that of a pixel-based method, and decreases the time complexity by 50% without significantly degrading the estimation.

  13. An efficient, high-order probabilistic collocation method on sparse grids for three-dimensional flow and solute transport in randomly heterogeneous porous media

    SciTech Connect

    Lin, Guang; Tartakovsky, Alexandre M.

    2009-05-01

    In this study, a probabilistic collocation method (PCM) on sparse grids was used to solve stochastic equations describing flow and transport in three-dimensional, saturated, randomly heterogeneous porous media. Karhunen-Lo\`{e}ve (KL) decomposition was used to represent the three-dimensional log hydraulic conductivity $Y=\ln K_s$. The hydraulic head $h$ and average pore-velocity $\bf v$ were obtained by solving the three-dimensional continuity equation coupled with Darcy's law with a random hydraulic conductivity field. The concentration was computed by solving a three-dimensional stochastic advection-dispersion equation with the stochastic average pore-velocity $\bf v$ computed from Darcy's law. PCM is an extension of the generalized polynomial chaos (gPC) that couples gPC with probabilistic collocation. By using sparse grid points, PCM can handle a random process with a large number of random dimensions at relatively low computational cost compared to full tensor products. Monte Carlo (MC) simulations have also been conducted to verify the accuracy of the PCM. By comparing the MC and PCM results for the mean and standard deviation of concentration, it is evident that the PCM approach is computationally more efficient than Monte Carlo simulations. Unlike the conventional moment-equation approach, there is no limitation on the amplitude of the random perturbation in PCM. Furthermore, PCM on sparse grids can efficiently simulate solute transport in randomly heterogeneous porous media with large variances.

  14. Uncertainty Quantification in Dynamic Simulations of Large-scale Power System Models using the High-Order Probabilistic Collocation Method on Sparse Grids

    SciTech Connect

    Lin, Guang; Elizondo, Marcelo A.; Lu, Shuai; Wan, Xiaoliang

    2014-01-01

    This paper proposes a probabilistic collocation method (PCM) to quantify the uncertainties in dynamic simulations of power systems. The approach was tested on a single-machine-infinite-bus system and on the over-15,000-bus Western Electricity Coordinating Council (WECC) system. Compared to the classic Monte Carlo (MC) method, the proposed PCM applies the Smolyak algorithm to reduce the number of simulations that have to be performed, so the computational cost can be greatly reduced. The algorithm and procedures are described in the paper, and comparisons with the MC method are made for the single-machine system as well as the WECC system. The simulation results show that with PCM only a small number of sparse grid points need to be sampled, even for systems with a relatively large number of uncertain parameters. PCM is, therefore, computationally more efficient than the MC method.

  15. Non polynomial B-splines

    NASA Astrophysics Data System (ADS)

    Laksâ, Arne

    2015-11-01

    B-splines are the de facto industrial standard for surface modelling in Computer-Aided Design. Spline modelling is comparable to bending flexible rods of wood or metal: a flexible rod minimizes its energy when bending, and a third-degree polynomial spline curve minimizes its second derivatives. The B-spline form is a convenient way of representing polynomial splines; it connects polynomial splines to corner-cutting techniques, which induces many nice and useful properties. However, the B-spline representation can be expanded to what we may call general B-splines, i.e. both polynomial and non-polynomial splines. We will show how this expansion can be done, describe the properties it induces, and give examples of non-polynomial B-splines.

  16. Splines: a perfect fit for medical imaging

    NASA Astrophysics Data System (ADS)

    Unser, Michael A.

    2002-05-01

    Splines, which were invented by Schoenberg more than fifty years ago, constitute an elegant framework for dealing with interpolation and discretization problems. They are widely used in computer-aided design and computer graphics, but have been neglected in medical imaging applications, mostly as a consequence of what one may call the bad press phenomenon. Thanks to some recent research efforts in signal processing and wavelet-related techniques, the virtues of splines have been revived in our community. There is now compelling evidence (several independent studies) that splines offer the best cost-performance tradeoff among available interpolation methods. In this presentation, we will argue that the spline representation is ideally suited for all processing tasks that require a continuous model of signals or images. We will show that most forms of spline fitting (interpolation, least squares approximation, smoothing splines) can be performed most efficiently using recursive digital filters. We will also have a look at their multiresolution properties which make them prime candidates for constructing wavelet bases and computing image pyramids. Typical application areas where these techniques can be useful are: image reconstruction from projection data, sampling grid conversion, geometric correction, visualization, rigid or elastic image registration, and feature extraction including edge detection and active contour models.
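
    As a small, hedged sketch of one task mentioned above (sampling-grid conversion), the following uses SciPy's `CubicSpline` rather than the recursive digital filters described in the abstract; the signal and grid sizes are illustrative assumptions.

```python
# Spline-based resampling: build a C2 piecewise-cubic model of a coarsely
# sampled signal and evaluate it on a denser grid.
import numpy as np
from scipy.interpolate import CubicSpline

x_coarse = np.linspace(0, 1, 11)
signal = np.sin(2 * np.pi * x_coarse)

cs = CubicSpline(x_coarse, signal)   # continuous model of the samples
x_fine = np.linspace(0, 1, 101)      # new, denser sampling grid
resampled = cs(x_fine)

err = np.max(np.abs(resampled - np.sin(2 * np.pi * x_fine)))
print(err < 1e-2)  # cubic interpolation of a smooth signal: True
```

The same continuous model supports the other tasks listed (geometric correction, registration, derivative-based feature extraction) by evaluating the spline, or its derivatives, at arbitrary positions.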

  17. RATIONAL SPLINE SUBROUTINES

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1994-01-01

    Scientific data often contains random errors that make plotting and curve fitting difficult. The Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to a flexible, smooth representation of experimental data. The user sets the conditions for each consecutive pair of knots (knots are user-defined divisions in the data set): to apply no tension; to apply fixed tension; or to determine tension with a tension adjustment algorithm. The user also selects the number of knots, the knot abscissas, and the allowed maximum deviations from line segments. The selection of these quantities depends on the actual data and on the requirements of a particular application. This program differs from the usual spline under tension in that it allows the user to specify different tension values between each adjacent pair of knots rather than a constant tension over the entire data range. The subroutines use an automatic adjustment scheme that varies the tension parameter for each interval until the maximum deviation of the spline from the line joining the knots is less than or equal to a user-specified amount. This procedure frees the user from the drudgery of adjusting individual tension parameters while still giving control over the local behavior of the spline. The Rational Spline program was written completely in FORTRAN for implementation on a CYBER 850 operating under NOS. It has a central memory requirement of approximately 1500 words. The program was released in 1988.

  18. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) has been found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having continuous first and second derivatives. Analytical differentiation of the spline regression permits the instantaneous drying rate to be obtained directly from the experimental data; the method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, which were assessed using the coefficient of determination (R2) and the root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. In addition, the drying rate smoothed using the CS proves to be an effective estimator for moisture-time curves, as well as for the missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
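
    As an illustrative sketch of the model-fitting step (not the paper's data), the two-term thin-layer drying model MR(t) = a*exp(-k1*t) + b*exp(-k2*t) can be fitted by nonlinear least squares and scored with R2 and RMSE; the moisture-ratio values and rate constants below are synthetic assumptions.

```python
# Fit the two-term thin-layer drying model to synthetic moisture-ratio data
# and score the fit with RMSE and the coefficient of determination.
import numpy as np
from scipy.optimize import curve_fit

def two_term(t, a, k1, b, k2):
    return a * np.exp(-k1 * t) + b * np.exp(-k2 * t)

t = np.linspace(0, 96, 25)                              # drying time, hours
mr = 0.7 * np.exp(-0.08 * t) + 0.3 * np.exp(-0.02 * t)  # synthetic moisture ratio

popt, _ = curve_fit(two_term, t, mr, p0=[0.5, 0.1, 0.5, 0.01])
pred = two_term(t, *popt)

rmse = np.sqrt(np.mean((mr - pred) ** 2))
r2 = 1 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
print(rmse, r2)
```

With real experimental data, the six candidate models would be fitted the same way and ranked by R2 and RMSE, as in the abstract.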

  19. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    SciTech Connect

    M Ali, M. K. E-mail: eutoco@gmail.com; Ruslan, M. H. E-mail: eutoco@gmail.com; Muthuvalu, M. S. E-mail: jumat@ums.edu.my; Wong, J. E-mail: jumat@ums.edu.my; Sulaiman, J. E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md. E-mail: hafidzruslan@eng.ukm.my

    2014-06-19

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m{sup 2} and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) has been found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having continuous first and second derivatives. Analytical differentiation of the spline regression permits the instantaneous drying rate to be obtained directly from the experimental data; the method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, which were assessed using the coefficient of determination (R{sup 2}) and the root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. In addition, the drying rate smoothed using the CS proves to be an effective estimator for moisture-time curves, as well as for the missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  20. SURVIVAL ESTIMATION USING SPLINES

    EPA Science Inventory

    A nonparametric maximum likelihood procedure is given for estimating the survivor function from right-censored data. It approximates the hazard rate by a simple function such as a spline, with different approximations yielding different estimators. A special case is that proposed by...

  1. Smoothing spline ANOVA decomposition of arbitrary splines: an application to eye movements in reading.

    PubMed

    Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias

    2015-01-01

    The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading. PMID:25816246

  2. Efficient Temperature-Dependent Green's Function Methods for Realistic Systems: Using Cubic Spline Interpolation to Approximate Matsubara Green's Functions.

    PubMed

    Kananenka, Alexei A; Welden, Alicia Rae; Lan, Tran Nguyen; Gull, Emanuel; Zgid, Dominika

    2016-05-10

    The popular, stable, robust, and computationally inexpensive cubic spline interpolation algorithm is adopted and used for finite temperature Green's function calculations of realistic systems. We demonstrate that with appropriate modifications the temperature dependence can be preserved while the Green's function grid size can be reduced by about 2 orders of magnitude by replacing the standard Matsubara frequency grid with a sparser grid and a set of interpolation coefficients. We benchmarked the accuracy of our algorithm as a function of a single parameter sensitive to the shape of the Green's function. Through numerous examples, we confirmed that our algorithm can be utilized in a systematically improvable, controlled, and black-box manner and highly accurate one- and two-body energies and one-particle density matrices can be obtained using only around 5% of the original grid points. Additionally, we established that to improve accuracy by an order of magnitude, the number of grid points needs to be doubled, whereas for the Matsubara frequency grid, an order of magnitude more grid points must be used. This suggests that realistic calculations with large basis sets that were previously out of reach because they required enormous grid sizes may now become feasible. PMID:27049642
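
    The grid-reduction idea can be illustrated with a toy model (this is a hedged sketch, not the authors' implementation): tabulate a single-pole Green's function G(iw) = 1/(iw - eps) on a sparse subset of the fermionic Matsubara frequencies and recover the full grid by cubic-spline interpolation of its real and imaginary parts. The inverse temperature, pole position, and sparse-grid construction are assumptions for demonstration.

```python
# Recover a dense Matsubara-frequency grid from a sparse one by cubic splines.
import numpy as np
from scipy.interpolate import CubicSpline

beta, eps = 10.0, 0.5
n = np.arange(0, 2048)
w = (2 * n + 1) * np.pi / beta        # fermionic Matsubara frequencies
G = 1.0 / (1j * w - eps)              # single-pole model Green's function

# Sparse grid: logarithmically spaced frequency indices (~3% of the points).
sparse = np.unique(np.geomspace(1, n.size - 1, 60).astype(int))
sparse = np.concatenate(([0], sparse))

re = CubicSpline(w[sparse], G[sparse].real)
im = CubicSpline(w[sparse], G[sparse].imag)
G_interp = re(w) + 1j * im(w)

err = np.max(np.abs(G_interp - G))
print(err < 1e-3)  # the sparse grid reproduces the full grid: True
```

The smooth high-frequency decay of the Green's function is what makes such aggressive grid reduction possible, consistent with the ~5% figure reported in the abstract.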

  3. Accuracy of a Mitral Valve Segmentation Method Using J-Splines for Real-Time 3D Echocardiography Data

    PubMed Central

    Siefert, Andrew W.; Icenogle, David A.; Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Rossignac, Jarek; Lerakis, Stamatios; Yoganathan, Ajit P.

    2013-01-01

    Patient-specific models of the heart’s mitral valve (MV) exhibit potential for surgical planning. While advances in 3D echocardiography (3DE) have provided adequate resolution to extract MV leaflet geometry, no study has quantitatively assessed the accuracy of their modeled leaflets versus a ground-truth standard for temporal frames beyond systolic closure or for differing valvular dysfunctions. The accuracy of a 3DE-based segmentation methodology based on J-splines was assessed for porcine MVs with known 4D leaflet coordinates within a pulsatile simulator during closure, peak closure, and opening for a control, prolapsed, and billowing MV model. For all time points, the mean distance error between the segmented models and ground-truth data were 0.40±0.32 mm, 0.52±0.51 mm, and 0.74±0.69 mm for the control, flail, and billowing models. For all models and temporal frames, 95% of the distance errors were below 1.64 mm. When applied to a patient data set, segmentation was able to confirm a regurgitant orifice and post-operative improvements in coaptation. This study provides an experimental platform for assessing the accuracy of an MV segmentation methodology at phases beyond systolic closure and for differing MV dysfunctions. Results demonstrate the accuracy of a MV segmentation methodology for the development of future surgical planning tools. PMID:23460042

  4. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
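
    A minimal modern analogue of the explicit variable algorithm (a sketch using SciPy's smoothing-spline routine, not the 1974 NASA program; the data and smoothing factor are assumptions):

```python
# Least-squares cubic-spline smoothing of noisy data; the fitted spline has
# continuous first and second derivatives, as noted in the abstract.
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + rng.normal(scale=0.1, size=x.size)

# s bounds the sum of squared residuals; set it near the expected noise energy.
tck = splrep(x, noisy, k=3, s=x.size * 0.1**2)
smooth = splev(x, tck)

rough = np.mean((noisy - np.sin(x)) ** 2)
resid = np.mean((smooth - np.sin(x)) ** 2)
print(resid < rough)  # the smoothed curve is closer to the true signal: True
```

Adjusting `s` plays the role of manipulating the junction values interactively: larger values give smoother fits, smaller values track the data more closely.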

  5. L1 Control Theoretic Smoothing Splines

    NASA Astrophysics Data System (ADS)

    Nagahara, Masaaki; Martin, Clyde F.

    2014-11-01

    In this paper, we propose control theoretic smoothing splines with L1 optimality for reducing the number of parameters that describe the fitted curve, as well as for removing outlier data. A control theoretic spline is a smoothing spline that is generated as an output of a given linear dynamical system. Conventional design requires exactly as many basis functions as given data points, and the result is not robust against outliers. To solve these problems, we propose to use L1 optimality; that is, we use the L1 norm for the regularization term and/or the empirical risk term. The optimization is described by a convex optimization problem, which can be efficiently solved via numerical optimization software. A numerical example shows the effectiveness of the proposed method.

  6. Spline screw payload fastening system

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member.
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.

  7. Spline screw payload fastening system

    NASA Astrophysics Data System (ADS)

    Vranish, John M.

    1993-09-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member.
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.

  8. Spline screw payload fastening system

    NASA Astrophysics Data System (ADS)

    Vranish, John M.

    1992-09-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member.
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.

  9. Clinical Trials: Spline Modeling is Wonderful for Nonlinear Effects.

    PubMed

    Cleophas, Ton J

    2016-01-01

    Traditionally, smooth nonlinear shapes such as those of airplanes, boats, and motor cars were drawn from scale models using stretched thin wooden strips, otherwise called splines. In the past decades, mechanical spline methods have been replaced with their mathematical counterparts. The objective of the study was to determine whether spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial, and whether it can detect patterns in a trial that are relevant but go unobserved with simpler regression models. A clinical trial assessing the effect of quantity of care on quality of care was used as an example. Spline curves consisting of 4 or 5 cubic functions were applied. SPSS statistical software was used for analysis. The spline curves of our data outperformed the traditional curves because (1) unlike the traditional curves, they did not miss the top quality of care given in either subgroup, (2) they, rightly, did not produce sinusoidal patterns, and (3) they provided a virtually 100% match of the original values. We conclude that (1) spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial; (2) spline modeling can detect patterns in a trial that are relevant but may go unobserved with simpler regression models; (3) in clinical research, spline modeling has great potential, given the many nonlinear effects in this field of research and its mathematical refinement to fit virtually any nonlinear effect accurately; and (4) spline modeling should improve the predictions made from clinical research for the benefit of health decisions and health care. We hope that this brief introduction to spline modeling will stimulate clinical investigators to use this method. PMID:23689089
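
    As a rough illustration of the idea (not the SPSS analysis in the study), a spline built from a few cubic pieces can be fitted to hypothetical nonlinear exposure-outcome data and compared with a straight-line model; all data and knot positions below are invented:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Hypothetical exposure/outcome data with a nonlinear, non-monotone pattern,
# standing in for the trial's quantity-of-care vs. quality-of-care variables.
x = np.linspace(0.0, 10.0, 50)
y = np.sin(x) * x * 0.3 + 0.1 * x

# A spline made of a few cubic pieces: 4 interior knots -> 5 cubic segments.
knots = [2.0, 4.0, 6.0, 8.0]
spline = LSQUnivariateSpline(x, y, t=knots, k=3)

# A straight-line fit for comparison (the "simpler regression model").
slope, intercept = np.polyfit(x, y, 1)
line = slope * x + intercept

sse_spline = float(np.sum((spline(x) - y) ** 2))
sse_line = float(np.sum((line - y) ** 2))
print(sse_spline, sse_line)  # the piecewise-cubic fit captures the wiggle
```

    The spline's residual sum of squares is far smaller than the line's, mirroring the abstract's point that simple regression models miss relevant nonlinear patterns.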

  10. Mathematical research on spline functions

    NASA Technical Reports Server (NTRS)

    Horner, J. M.

    1973-01-01

    One approach in spline functions is to grossly estimate the integrand in J and exactly solve the resulting problem. If the integrand in J is approximated by Y" squared, the resulting problem lends itself to exact solution, the familiar cubic spline. Another approach is to investigate various approximations to the integrand in J and attempt to solve the resulting problems. The results are described.
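
    The "familiar cubic spline" that exactly solves the problem with the integrand approximated by Y" squared can be sketched with SciPy's natural cubic spline (example data invented):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Interpolate sample data with a natural cubic spline: among all C^2
# interpolants, it minimizes the linearized bending energy  J ~ integral of (y'')^2.
x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
y = np.array([0.0, 0.8, -0.2, 1.1, 0.5])
spl = CubicSpline(x, y, bc_type="natural")

# The spline reproduces the data exactly, and its second derivative
# vanishes at both ends (the "natural" boundary conditions).
d2 = spl(x, 2)
print(spl(x), d2[0], d2[-1])
```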

  11. Deflation-accelerated preconditioning of the Poisson-Neumann Schur problem on long domains with a high-order discontinuous element-based collocation method

    NASA Astrophysics Data System (ADS)

    Joshi, Sumedh M.; Thomsen, Greg N.; Diamessis, Peter J.

    2016-05-01

    A combination of block-Jacobi and deflation preconditioning is used to solve a high-order discontinuous element-based collocation discretization of the Schur complement of the Poisson-Neumann system as arises in the operator splitting of the incompressible Navier-Stokes equations. The preconditioners and deflation vectors are chosen to mitigate the effects of ill-conditioning due to highly-elongated domains typical of simulations of strongly non-hydrostatic environmental flows, and to achieve Generalized Minimum RESidual method (GMRES) convergence independent of the number of elements in the long direction. The ill-posedness of the Poisson-Neumann system manifests as an inconsistency of the Schur complement problem, but it is shown that this can be accounted for with appropriate projections out of the null space of the Schur complement matrix without affecting the accuracy of the solution. The block-Jacobi preconditioner is shown to yield GMRES convergence independent of the polynomial order and only weakly dependent on the number of elements within a subdomain in the decomposition. The combined deflation and block-Jacobi preconditioning is compared with two-level non-overlapping block-Jacobi preconditioning of the Schur problem, and while both methods achieve convergence independent of the grid size, deflation is shown to require half as many GMRES iterations and 25% less wall-clock time for a variety of grid sizes and domain aspect ratios. The deflation methods shown to be effective for the two-dimensional Poisson-Neumann problem are extensible to the three-dimensional problem assuming a Fourier discretization in the third dimension. A Fourier discretization results in a two-dimensional Helmholtz problem for each Fourier component that is solved using deflated block-Jacobi preconditioning on its Schur complement.
Here again deflation is shown to be superior to two-level non-overlapping block-Jacobi preconditioning, requiring about half as many GMRES iterations and 15% less time.
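
    A toy analogue of the two key ingredients, sketched under strong simplifications: the 1-D Neumann Laplacian shares the essential feature of the Poisson-Neumann problem, a constant null space that makes the system inconsistent unless the right-hand side is projected out of it, and a simple Jacobi (diagonal) preconditioner stands in for the paper's block-Jacobi/deflation machinery:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, LinearOperator

# 1-D Neumann Laplacian: singular, with the constant vector as null space.
n = 100
h = 1.0 / n
main = 2.0 * np.ones(n); main[0] = main[-1] = 1.0
A = sp.diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1]) / h**2

ones = np.ones(n) / np.sqrt(n)          # normalized null-space vector
b = np.cos(np.pi * np.linspace(0.5 * h, 1 - 0.5 * h, n))
b = b - ones * (ones @ b)               # project RHS: restores consistency

M = LinearOperator((n, n), matvec=lambda v: v / A.diagonal())  # Jacobi
x, info = gmres(A, b, M=M, restart=n)   # full (non-restarted) GMRES
x = x - ones * (ones @ x)               # fix the arbitrary additive constant
print(info, np.linalg.norm(A @ x - b))  # info == 0 on convergence
```

    Without the projection of b, GMRES would stall on the inconsistent system; with it, the solution is accurate up to the (removed) constant, which is the point the abstract makes about the Schur complement's null space.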

  12. Examination of Absorbing Boundary Condition Using Perfectly Matched Layer for Collocated Orthogonal Grid in Method of Characteristics

    NASA Astrophysics Data System (ADS)

    Adachi, Junpei; Okubo, Kan; Tagawa, Norio; Tsuchiya, Takao; Ishizuka, Takashi

    2013-07-01

    Time-domain numerical analysis of sound wave propagation has become widespread with the development of computers. The method of characteristics (MOC) is one such time-domain numerical analysis method. In multidimensional MOC sound wave analysis, the so-called automatically absorbing boundary (without additional outer boundary treatment) does not offer excellent absorbing performance. To overcome this problem, we introduce the perfectly matched layer (PML) technique into MOC simulation. This study clarifies that the PML (L = 16) reflection at vertical incidence is approximately 22 dB lower than that of the automatically absorbing boundary (without PML) in a simulation by the QUICKEST method.

  13. Mercury vapor air-surface exchange measured by collocated micrometeorological and enclosure methods - Part II: Bias and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2015-02-01

    Dynamic flux chambers (DFCs) and micrometeorological (MM) methods are extensively deployed for gauging air-surface Hg0 gas exchange. However, a systematic evaluation of the precision of the contemporary Hg0 flux quantification methods is not available. In this study, the uncertainty in Hg0 flux measured by the relaxed eddy accumulation (REA) method, the aerodynamic gradient method (AGM), the modified Bowen-ratio (MBR) method, and DFCs of traditional (TDFC) and novel (NDFC) designs is assessed using a robust data set from two field intercomparison campaigns. The absolute precision in Hg0 concentration difference (ΔC) measurements is estimated at 0.064 ng m-3 for the gradient-based MBR and AGM systems. For the REA system, the parameter is Hg0 concentration (C) dependent, at 0.069+0.022C. During the campaigns, 57 and 62% of the individual vertical gradient measurements were found to be significantly different from zero, while for the REA technique the percentage of significant observations was lower. For the chambers, non-significant fluxes are confined to a few nighttime periods with varying ambient Hg0 concentration. The relative bias for DFC-derived fluxes is estimated to be ~±10%, and ~85% of the flux biases are within ±2 ng m-2 h-1 in absolute terms. The DFC flux bias follows a diurnal cycle, which is largely dictated by temperature controls on the enclosed volume. Due to contrasting prevailing micrometeorological conditions, the relative uncertainty (median) in turbulent exchange parameters differs by nearly a factor of two between the campaigns, while that in ΔC measurements is fairly stable. The estimated flux uncertainties for the triad of MM techniques are 16-27, 12-23 and 19-31% (interquartile range) for the AGM, MBR and REA methods, respectively. This study indicates that flux-gradient-based techniques (MBR and AGM) are preferable to REA in quantifying Hg0 flux over ecosystems with low vegetation height.
A limitation of all Hg0 flux measurement systems investigated is their inability to obtain synchronous samples for the calculation of ΔC. This reduces the precision of flux quantification, particularly for the MM systems under non-stationary ambient Hg0 concentrations. For future applications, it is recommended that ΔC be derived from simultaneously collected samples.

  14. Theory, computation, and application of exponential splines

    NASA Technical Reports Server (NTRS)

    Mccartin, B. J.

    1981-01-01

    A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape-preserving approximant is developed. A sequence of selected curve-fitting examples is presented which clearly demonstrates the advantages of exponential splines over cubic splines.

  15. Spline screw autochanger

    NASA Astrophysics Data System (ADS)

    Vranish, John M.

    1993-06-01

    A captured nut member is located within a tool interface assembly and is actuated by a spline screw member driven by a robot end effector. The nut member lowers and rises depending upon the direction of rotation of the coupling assembly. The captured nut member further includes two winged segments which project outwardly in diametrically opposite directions so as to engage and disengage a clamping surface in the form of a chamfered notch respectively provided on the upper surface of a pair of parallel, forwardly extending arm members of a bifurcated tool stowage holster which is adapted to hold and store a robotic tool, including its end effector interface, when not in use. A forward and backward motion of the robot end effector inserts and removes the tool from the holster.

  16. Fitting multidimensional splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
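
    A minimal sketch of backward knot elimination in the spirit described (an illustrative re-sketch, not the report's FORTRAN programs): starting from a dense candidate set, repeatedly drop the knot whose removal degrades the least-squares spline fit the least, stopping once any further removal hurts the fit noticeably. The data, candidate knots, and stopping rule below are all invented:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

def sse(knots):
    # Residual sum of squares of a cubic least-squares spline on these knots.
    s = LSQUnivariateSpline(x, y, t=list(knots), k=3)
    return float(np.sum((s(x) - y) ** 2))

knots = list(np.linspace(0.1, 0.9, 9))     # dense candidate knot set
full_sse = sse(knots)
while len(knots) > 1:
    # Try removing each knot in turn; keep the least damaging removal.
    trials = [(sse(knots[:i] + knots[i + 1:]), i) for i in range(len(knots))]
    new_sse, i = min(trials)
    if new_sse > 1.5 * full_sse:           # ad hoc stopping criterion
        break
    knots.pop(i)
print(len(knots), knots)
```

    A statistical package would use an F-test or information criterion instead of the ad hoc 1.5x threshold, which is the "statistical variable selection" angle of the report.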

  17. Monotonicity preserving using GC1 rational quartic spline

    NASA Astrophysics Data System (ADS)

    Karim, Samsul Ariffin Abdul; Pang, Kong Voon

    2012-09-01

    This paper proposes a GC1 rational quartic spline (quartic numerator and linear denominator) with two parameters to preserve the shape of monotone data. Simple data-dependent constraints are derived on one of the parameters, while the other is free to modify and refine the resultant shape of the data. The two parameters are independent of each other. The method under consideration avoids modifying the derivative when the sufficient conditions for monotonicity are violated, as happens in the original construction of the C1 rational quartic spline with linear denominator. A numerical comparison between the proposed scheme and the C1 rational quartic spline is given.
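
    The problem such schemes address can be illustrated with a readily available shape-preserving interpolant (PCHIP, standing in here for the paper's GC1 rational quartic) against an ordinary cubic spline on invented monotone data:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Monotone data with an abrupt slope change: a plain cubic spline overshoots
# between the points, while a shape-preserving interpolant stays monotone.
x = np.array([0.0, 1.0, 2.0, 3.0, 8.0, 10.0])
y = np.array([0.0, 0.1, 0.2, 0.9, 1.0, 1.0])

xs = np.linspace(0, 10, 1001)
mono = PchipInterpolator(x, y)(xs)   # shape-preserving
cubic = CubicSpline(x, y)(xs)        # ordinary C^2 cubic spline

print(np.all(np.diff(mono) >= -1e-9),   # monotone everywhere
      np.all(np.diff(cubic) >= -1e-9))  # overshoots, so not monotone
```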

  18. An experimental determination of the effects of blending collocated and non-collocated sensor measurements to control a flexible structure

    NASA Astrophysics Data System (ADS)

    Cobb, Richard G.

    1992-12-01

    An experimental investigation was performed controlling a cantilevered beam in bending using proportional feedback on a blend of collocated and non-collocated sensor measurements using a single actuator. Exact transfer functions between the control input and the measurements were developed and compared to the finite element method. By analyzing the open loop pole-zero locations as a function of the measurement blending, insight into the closed loop behavior is obtained. Minimum phase behavior can be maintained for a range of the blending ratio. Results using blended control were compared to both proportional collocated feedback and compensated collocated feedback using LQG methods. The comparison was then extended to a theoretical investigation using the blended measurements on a free-free torsional model analogous to a gimbal. Advantages and limitations of using blended measurements are presented.

  19. Analysis of crustal structure of Venus utilizing residual Line-of-Sight (LOS) gravity acceleration and surface topography data. A trial of global modeling of Venus gravity field using harmonic spline method

    NASA Technical Reports Server (NTRS)

    Fang, Ming; Bowin, Carl

    1992-01-01

    To construct Venus' gravity disturbance field (or gravity anomaly) from the spacecraft-observer line-of-sight (LOS) acceleration perturbation data, both a global and a local approach can be used. The global approach (e.g., spherical harmonic coefficients) and the local approach (e.g., the integral operator method) based on geodetic techniques are generally not the same, so they must be used separately for mapping long-wavelength and short-wavelength features. Harmonic spline, as an interpolation and extrapolation technique, is intrinsically suited to both global and local mapping of a potential field. Theoretically, it preserves the information of the potential field up to the bound set by the sampling theorem, regardless of whether the mapping is global or local, and it avoids truncation errors. The improvement of harmonic spline methodology for global mapping is reported. New basis functions, a singular value decomposition (SVD) based modification to Parker & Shure's numerical procedure, and preliminary results are presented.
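
    The SVD-based stabilization mentioned at the end can be sketched on a toy 1-D basis-function fit; the Gaussian basis, data, and truncation threshold below are illustrative assumptions, not the harmonic-spline basis of the abstract:

```python
import numpy as np

# Least-squares fit of basis functions with a truncated SVD: tiny singular
# values, which would amplify noise, are discarded before back-substitution.
rng = np.random.default_rng(1)
centers = np.linspace(0, 1, 25)
x = np.sort(rng.uniform(0, 1, 100))
y = np.exp(-x) * np.sin(6 * x)          # smooth "field" to be modeled

G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.01)  # design matrix
U, s, Vt = np.linalg.svd(G, full_matrices=False)

keep = s > 1e-8 * s[0]                  # truncate small singular values
coef = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
print(int(keep.sum()), np.max(np.abs(G @ coef - y)))
```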

  20. Adaptive regression splines in the Cox model.

    PubMed

    LeBlanc, M; Crowley, J

    1999-03-01

    We develop a method for constructing adaptive regression spline models for the exploration of survival data. The method combines Cox's (1972, Journal of the Royal Statistical Society, Series B 34, 187-200) regression model with a weighted least-squares version of the multivariate adaptive regression spline (MARS) technique of Friedman (1991, Annals of Statistics 19, 1-141) to adaptively select the knots and covariates. The new technique can automatically fit models with terms that represent nonlinear effects and interactions among covariates. Applications based on simulated data and data from a clinical trial for myeloma are presented. Results from the myeloma application identified several important prognostic variables, including a possible nonmonotone relationship with survival in one laboratory variable. Results are compared to those from the adaptive hazard regression (HARE) method of Kooperberg, Stone, and Truong (1995, Journal of the American Statistical Association 90, 78-94). PMID:11318156
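
    The MARS building block is the hinge function max(0, x - t). A minimal sketch with fixed knots (real MARS selects knots adaptively, and the paper embeds this in Cox regression rather than plain least squares; the data here are invented):

```python
import numpy as np

# Fit a piecewise-linear target with hinge ("broken stick") basis functions.
x = np.linspace(0, 10, 200)
y = np.where(x < 4, 0.5 * x, 2.0 + 0.1 * (x - 4))   # slope change at x = 4

knots = [2.0, 4.0, 6.0]
X = np.column_stack([np.ones_like(x), x] +
                    [np.maximum(0.0, x - t) for t in knots])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = X @ coef
print(np.max(np.abs(fitted - y)))   # target lies in the span: exact fit
```

    Because the basis contains a hinge at the true breakpoint x = 4, the least-squares fit reproduces the kinked target exactly; MARS's contribution is finding such knots automatically.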

  1. Surface deformation over flexible joints using spline blending techniques

    NASA Astrophysics Data System (ADS)

    Haavardsholm, Birgitte; Bratlie, Jostein; Dalmo, Rune

    2014-12-01

    Skinning over a skeleton joint is the process of skin deformation based on joint transformation. Popular geometric skinning techniques include implicit linear blending and dual quaternions. Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended by Ck-smooth basis functions. A smooth skinning surface can be constructed over a transformable skeleton joint by combining various types of local surface constructions and applying local Hermite interpolation. Compared to traditional spline methods, increased flexibility and local control with respect to surface deformation can be achieved using the GERBS blending construction. We present a method using a blending-type spline surface for skinning over a flexible joint, where local geometry is individually adapted to achieve natural skin deformation based on skeleton transformations.

  2. C1 Hermite shape preserving polynomial splines in R3

    NASA Astrophysics Data System (ADS)

    Gabrielides, Nikolaos C.

    2012-06-01

    The C2 variable-degree splines [1-3] have been proven to be an efficient tool for solving the curve shape-preserving interpolation problem in two and three dimensions. Based on this representation, the current paper proposes a Hermite interpolation scheme to construct C1 shape-preserving splines of variable degree. After this, a slight modification of the method leads to a C1 shape-preserving Hermite cubic spline. Both methods can easily be developed within a CAD system, since they compute directly (without iterations) the B-spline control polygon. They have been implemented and tested within the DNV Software CAD/CAE system GeniE.
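
    Generic C1 Hermite interpolation, the starting point of such schemes, can be sketched with SciPy (this is the plain cubic Hermite case with invented data, not the paper's variable-degree construction):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hermite interpolation: the spline matches prescribed values AND first
# derivatives at every knot, giving C^1 continuity by construction.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.5, 2.0])
dy = np.array([1.0, 0.0, -0.5, 1.5])   # prescribed tangents at the knots

h = CubicHermiteSpline(x, y, dy)
print(h(x), h(x, 1))   # reproduces both the values and the tangents
```

    Shape-preserving schemes like the paper's constrain how the tangents dy are chosen (or the segment degrees) so that convexity and monotonicity of the data survive in the curve.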

  3. Splines, contours and SVD subroutines

    SciTech Connect

    Solano, E.R. (Fusion Research Center); St. John, H.

    1993-06-01

    Portability of Fortran code is a major concern these days, since hardware and commercial software change faster than the codes themselves. Hence, using public-domain, portable, mathematical subroutines is imperative. Here we present a collection of subroutines we have used in the past and found to be particularly useful. They are: 2-dimensional splines, contour tracing of flux surfaces (based on the 2-D spline), and Singular Value Decomposition (for chi-square minimization).

  4. Evaluation of Least-Squares Collocation and the Reduced Point Mass method using the International Association of Geodesy, Joint Study Group 0.3 test data.

    NASA Astrophysics Data System (ADS)

    Tscherning, Carl Christian; Herceg, Matija

    2014-05-01

    The methods of Least-Squares Collocation (LSC) and the Reduced Point Mass (RPM) method both use radial basis functions for the representation of the anomalous gravity potential (T). LSC uses as many basis functions as the number of observations, while the RPM method uses as many as deemed necessary. Both methods have been evaluated, and for some tests compared, in two areas (Central Europe and the South-East Pacific). For both areas, test data had been generated using EGM2008. As observational data, (a) ground gravity disturbances, (b) airborne gravity disturbances, (c) GOCE-like second-order radial derivatives, and (d) GRACE along-track potential differences were available. The use of these data for the computation of values of (e) T in a grid was the target of the evaluation and comparison. Because T can in principle only be computed using global data, the remove-restore procedure was used, with EGM2008 subtracted (and later added to T) up to degree 240 for datasets (a) and (b) and up to degree 36 for datasets (c) and (d). The estimated coefficient error was accounted for when using LSC and in the calculation of error estimates. The main result is that T was estimated with an error (computed minus control data, (e), from which EGM2008 to degree 240 or 36 had been subtracted) as given in the table (LSC used; units mgal):

    Area Europe          (e)-240    (a)     (b)   (e)-36    (c)     (d)
    Mean                    -0.0    0.0    -0.1     -0.1   -0.3    -1.8
    Standard deviation       4.1    0.8     2.7     32.6    6.0    19.2
    Max. difference         19.9   10.4    16.9     69.9   31.3    47.0
    Min. difference        -16.2   -3.7   -15.5    -92.1  -27.8   -65.5

    Area Pacific         (e)-240    (a)     (b)   (e)-36    (c)     (d)
    Mean                    -0.1   -0.1    -0.1      4.6   -0.2     0.2
    Standard deviation       4.8    0.2     1.9     49.1    6.7    18.6
    Max. difference         22.2    1.8    13.4    115.5   26.9    26.5
    Min. difference        -28.7   -3.1   -15.7   -106.4  -33.6    22.1

    The results using RPM with datasets (a), (b), and (c) were comparable. The use of (d) with the RPM method is being implemented.
Tests were also done computing dataset (a) from the other datasets. The results here may serve as a benchmark for other radial basis-function implementations for computing approximations to T. Improvements are certainly possible, e.g. by taking the topography and bathymetry into account.

  5. Learning Collocations: Do the Number of Collocates, Position of the Node Word, and Synonymy Affect Learning?

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2011-01-01

    This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates…

  6. Collocations: A Neglected Variable in EFL.

    ERIC Educational Resources Information Center

    Farghal, Mohammed; Obiedat, Hussein

    1995-01-01

    Addresses the issue of collocations as an important and neglected variable in English-as-a-Foreign-Language classes. Two questionnaires, in English and Arabic, involving common collocations relating to food, color, and weather were administered to English majors and English language teachers. Results show both groups deficient in collocations. (36…

  7. Multi-quadric collocation model of horizontal crustal movement

    NASA Astrophysics Data System (ADS)

    Chen, G.; Zeng, A. M.; Ming, F.; Jing, Y. F.

    2015-11-01

    To establish the horizontal crustal movement velocity field of the Chinese mainland, a Hardy multi-quadric fitting model and collocation are usually used, but the kernel function, nodes, and smoothing factor are difficult to determine in the Hardy function interpolation, and in the collocation model the covariance function of the stochastic signal must be carefully constructed. In this paper, a new combined estimation method for establishing the velocity field, based on collocation and multi-quadric equation interpolation, is presented. The crustal movement estimation simultaneously takes into consideration an Euler vector as the crustal movement trend and the local distortions as the stochastic signals, and a kernel function of the multi-quadric fitting model substitutes for the covariance function of collocation. The velocities of a set of 1070 reference stations were obtained from the Crustal Movement Observation Network of China (CMONOC), and the corresponding velocity field established using the new combined estimation method. A total of 85 reference stations were used as check points, and the precision in the north and east directions was 1.25 and 0.80 mm yr-1, respectively. The result obtained by the new method corresponds with the collocation method and multi-quadric interpolation without requiring the covariance equation for the signals.

  8. Interlanguage Development and Collocational Clash

    ERIC Educational Resources Information Center

    Shahheidaripour, Gholamabbass

    2000-01-01

    Background: Persian English learners committed mistakes and errors which were due to insufficient knowledge of different senses of the words and collocational structures they formed. Purpose: The study reported here was conducted for a thesis submitted in partial fulfillment of the requirements for The Master of Arts degree, School of Graduate…

  9. Analysis of chromatograph systems using orthogonal collocation

    NASA Technical Reports Server (NTRS)

    Woodrow, P. T.

    1974-01-01

    Research is generating fundamental engineering design techniques and concepts for the chromatographic separator of a chemical analysis system for an unmanned, Martian roving vehicle. A chromatograph model is developed which incorporates previously neglected transport mechanisms. The numerical technique of orthogonal collocation is studied. To establish the utility of the method, three models of increasing complexity are considered, the latter two being limiting cases of the derived model: (1) a simple, diffusion-convection model; (2) a rate of adsorption limited, inter-intraparticle model; and (3) an inter-intraparticle model with negligible mass transfer resistance.
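
    Orthogonal collocation in its simplest form can be sketched on a toy boundary-value problem (illustrative only; the chromatograph models above are far more involved): enforce the ODE at the Gauss-Legendre points and the boundary conditions at the ends.

```python
import numpy as np

# Solve -u'' = 1 on (0, 1) with u(0) = u(1) = 0 by polynomial collocation
# at Gauss-Legendre points. Exact solution: u = x (1 - x) / 2.
n = 6                                        # polynomial degree
xg, _ = np.polynomial.legendre.leggauss(n - 1)
xc = 0.5 * (xg + 1.0)                        # Gauss points mapped to (0, 1)

# Unknowns: monomial coefficients a_0..a_n of u(x) = sum_k a_k x^k.
rows, rhs = [], []
for xp in xc:                                # ODE residual at each point
    rows.append([-k * (k - 1) * xp ** (k - 2) if k >= 2 else 0.0
                 for k in range(n + 1)])
    rhs.append(1.0)
rows.append([0.0 ** k for k in range(n + 1)]); rhs.append(0.0)  # u(0) = 0
rows.append([1.0] * (n + 1));                  rhs.append(0.0)  # u(1) = 0

a = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
xt = np.linspace(0, 1, 11)
u = sum(a[k] * xt ** k for k in range(n + 1))
print(np.max(np.abs(u - xt * (1 - xt) / 2)))   # ~ machine precision
```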

  10. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    SciTech Connect

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  11. Asymmetric spline surfaces - Characteristics and applications. [in high quality optical systems design

    NASA Technical Reports Server (NTRS)

    Stacy, J. E.

    1984-01-01

    Asymmetric spline surfaces appear useful for the design of high-quality general optical systems (systems without symmetries). A spline influence function defined as the actual surface resulting from a simple perturbation in the spline definition array shows that a subarea is independent of others four or more points away. Optimization methods presented in this paper are used to vary a reflective spline surface near the focal plane of a decentered Schmidt-Cassegrain to reduce rms spot radii by a factor of 3 across the field.

  12. Polynomial Interpolation Methods for Viscous Flow Calculations

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and by Hermitian (Taylor series) finite-difference discretization. The similarities and special features of these different developments are discussed. The governing systems apply for both uniform and variable meshes. Hybrid schemes resulting from two different polynomial approximations for the first and second derivatives lead to a nonuniform mesh extension of the so-called compact or Padé difference technique (Hermite 4). A variety of fourth-order methods are described and the Hermitian approach is extended to sixth-order (Hermite 6). The appropriate spline boundary conditions are derived for all procedures. For central finite differences, this leads to a two-point, second-order accurate generalization of the commonly used three-point end-difference formula. Solutions with several spline and Hermite procedures are presented for the boundary layer equations, with and without mass transfer, and for the incompressible viscous flow in a driven cavity. Divergence and nondivergence equations are considered for the cavity. Among the fourth-order techniques, it is shown that spline 4 has the smallest truncation error. The spline 4 procedure generally requires one-quarter the number of mesh points in a given coordinate direction as a central finite-difference calculation of equal accuracy. The Hermite 6 procedure leads to remarkably accurate boundary layer solutions.
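
    The compact (Padé) fourth-order first-derivative scheme of the Hermite 4 type can be sketched on a uniform mesh; for brevity the example is periodic and the cyclic tridiagonal system is solved densely, whereas the paper derives special boundary formulas:

```python
import numpy as np

# Compact fourth-order first-derivative scheme on a uniform mesh:
#   f'_{i-1} + 4 f'_i + f'_{i+1} = (3/h) (f_{i+1} - f_{i-1})
# One (here cyclic) tridiagonal solve yields all derivatives at once.
n = 64
h = 2 * np.pi / n
x = h * np.arange(n)
f = np.sin(x)

A = 4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A[0, -1] = A[-1, 0] = 1                       # periodic wrap-around
b = (3.0 / h) * (np.roll(f, -1) - np.roll(f, 1))

dfdx = np.linalg.solve(A, b)
print(np.max(np.abs(dfdx - np.cos(x))))       # O(h^4) error level
```

    The error is orders of magnitude below the O(h^2) central-difference result on the same mesh, which is the accuracy-per-point advantage the abstract quantifies.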

  13. An Areal Isotropic Spline Filter for Surface Metrology

    PubMed Central

    Zhang, Hao; Tong, Mingsi; Chu, Wei

    2015-01-01

    This paper deals with the application of the spline filter as an areal filter for surface metrology. A profile (2D) filter is often applied in orthogonal directions to yield an areal filter for a three-dimensional (3D) measurement. Unlike the Gaussian filter, the spline filter presents an anisotropic characteristic when used as an areal filter. This disadvantage hampers the wide application of spline filters for evaluation and analysis of areal surface topography. An approximation method is proposed in this paper to overcome the problem. In this method, a series of high-order profile spline filters is constructed to approximate the filtering characteristic of the Gaussian filter. Then an areal filter with an isotropic characteristic is composed by implementing the profile spline filter in the orthogonal directions. It is demonstrated that the constructed areal filter has two important features for surface metrology: an isotropic amplitude characteristic and no end effects. Examples of applying this method to simulated and practical surfaces are analyzed.
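    The separability argument at the heart of this construction, that a 1-D Gaussian applied along rows and then columns reproduces the isotropic 2-D Gaussian, can be checked directly with SciPy. This sketch illustrates only the Gaussian case, not the paper's spline-filter approximation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

rng = np.random.default_rng(0)
surface = rng.standard_normal((64, 64))   # stand-in for a measured topography

sigma = 3.0  # cut-off expressed in samples
# profile (1-D) filter applied in the two orthogonal directions
rows = gaussian_filter1d(surface, sigma, axis=0, mode="reflect")
areal = gaussian_filter1d(rows, sigma, axis=1, mode="reflect")
# reference: the 2-D (isotropic) Gaussian filter
reference = gaussian_filter(surface, sigma, mode="reflect")
```

    The Gaussian is the only kernel that is both separable and rotationally symmetric, so the same two-pass construction with a spline transmission characteristic is anisotropic; that is exactly the problem the paper's approximation addresses.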

  14. Collocating satellite-based radar and radiometer measurements - methodology and usage examples

    NASA Astrophysics Data System (ADS)

    Holl, G.; Buehler, S. A.; Rydberg, B.; Jiménez, C.

    2010-02-01

    Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) onboard NOAA-18 and the Cloud Profiling Radar (CPR) onboard CloudSat. First, a simple method is presented to obtain those collocations and this method is compared with a more complicated approach found in the literature. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relation between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations described in the article are available for public use.
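    A minimal version of the collocation step, pairing footprints of two sensors by great-circle distance and time difference, might look as follows; the 7.5 km and 15 min thresholds are placeholders, not the criteria used in the paper:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def collocate(lat_a, lon_a, t_a, lat_b, lon_b, t_b,
              max_km=7.5, max_s=900.0):
    """Return index pairs (i, j) where sensor-B pixel j lies within max_km
    and max_s of sensor-A footprint centre i (brute force)."""
    d = haversine_km(lat_a[:, None], lon_a[:, None],
                     lat_b[None, :], lon_b[None, :])
    dt = np.abs(t_a[:, None] - t_b[None, :])
    return np.argwhere((d <= max_km) & (dt <= max_s))

# toy example: one MHS-like footprint against three radar pixels
pairs = collocate(np.array([0.0]), np.array([0.0]), np.array([0.0]),
                  np.array([0.0, 0.0, 0.0]),
                  np.array([0.05, 1.0, 0.0]),
                  np.array([0.0, 0.0, 3600.0]))
```

    The brute-force comparison is O(N·M); for a full year of data one would first bin by orbit time or use a spatial index so only plausible candidates are compared.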

  16. Blending type spline constructions: A brief overview

    NASA Astrophysics Data System (ADS)

    Pedersen, Aleksander; Bang, Børre

    2015-11-01

    In this paper we present a brief overview of research on blending splines from 2004 to 2015. We discuss some of the properties which can be interesting to investigate when blending splines are used both for finite element analysis and for geometry. Blending splines are constructions where local geometry is blended together by a blending function to create global geometry. The different basis functions have different properties, which can be related to different application areas. Example application areas where blending splines are utilized are listed, together with a focus on the basis and future work towards utilizing parts of blending splines in an isogeometric analysis (IGA) context.

  17. Recent advances in (soil moisture) triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To date, triple collocation (TC) analysis is one of the most important methods for the global scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method....

  19. Collocation analysis for UMLS knowledge-based word sense disambiguation

    PubMed Central

    2011-01-01

    Background The effectiveness of knowledge-based word sense disambiguation (WSD) approaches depends in part on the information available in the reference knowledge resource. Off the shelf, these resources are not optimized for WSD and might lack terms to model the context properly. In addition, they might include noisy terms which contribute to false positives in the disambiguation results. Methods We analyzed some collocation types which could improve the performance of knowledge-based disambiguation methods. Collocations are obtained by extracting candidate collocations from MEDLINE and then assigning them to one of the senses of an ambiguous word. We performed this assignment either using semantic group profiles or a knowledge-based disambiguation method. In addition to collocations, we used second-order features from a previously implemented approach. Specifically, we measured the effect of these collocations in two knowledge-based WSD methods. The first method, AEC, uses the knowledge from the UMLS to collect examples from MEDLINE which are used to train a Naïve Bayes approach. The second method, MRD, builds a profile for each candidate sense based on the UMLS and compares the profile to the context of the ambiguous word. We have used two WSD test sets which contain disambiguation cases which are mapped to UMLS concepts. The first one, the NLM WSD set, was developed manually by several domain experts and contains words with high frequency occurrence in MEDLINE. The second one, the MSH WSD set, was developed automatically using the MeSH indexing in MEDLINE. It contains a larger set of words and covers a larger number of UMLS semantic types. Results The results indicate an improvement after the use of collocations, although the approaches have different performance depending on the data set. In the NLM WSD set, the improvement is larger for the MRD disambiguation method using second-order features. 
Assignment of collocations to a candidate sense based on UMLS semantic group profiles is more effective in the AEC method. In the MSH WSD set, the increment in performance is modest for all the methods. Collocations combined with the MRD disambiguation method have the best performance. The MRD disambiguation method and second-order features provide an insignificant change in performance. The AEC disambiguation method gives a modest improvement in performance. Assignment of collocations to a candidate sense based on knowledge-based methods has better performance. Conclusions Collocations improve the performance of knowledge-based disambiguation methods, although results vary depending on the test set and method used. Generally, the AEC method is sensitive to query drift. Using AEC, just a few selected terms provide a large improvement in disambiguation performance. The MRD method handles noisy terms better but requires a larger set of terms to improve performance. PMID:21658291

  20. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.

  1. Evaluation of assumptions in soil moisture triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...

  2. Spline screw multiple rotations mechanism

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A system for coupling two bodies together and for transmitting torque from one body to another with mechanical timing and sequencing is reported. The mechanical timing and sequencing is handled so that the following criteria are met: (1) the bodies are handled in a safe manner and nothing floats loose in space, (2) electrical connectors are engaged as long as possible so that the internal processes can be monitored throughout by sensors, and (3) electrical and mechanical power and signals are coupled. The first body has a splined driver for providing the input torque. The second body has a threaded drive member capable of rotation and limited translation. The embedded drive member will mate with and fasten to the splined driver. The second body has an embedded bevel gear member capable of rotation and limited translation. This bevel gear member is coaxial with the threaded drive member. A compression spring provides a preload on the rotating threaded member, and a thrust bearing is used for limiting the translation of the bevel gear member so that when the bevel gear member reaches the upward limit of its translation the two bodies are fully coupled and the bevel gear member then rotates due to the input torque transmitted from the splined driver through the threaded drive member to the bevel gear member. An output bevel gear with an attached output drive shaft is embedded in the second body and meshes with the threaded rotating bevel gear member to transmit the input torque to the output drive shaft.

  4. A cubic spline approximation for problems in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.

  5. Reforming Triple Collocation: Beyond Three Estimates and Separation of Structural/Non-structural Errors

    NASA Astrophysics Data System (ADS)

    Pan, M.; Zhan, W.; Fisher, C. K.; Crow, W. T.; Wood, E. F.

    2014-12-01

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the multiple collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is slightly different from the original inner product solution but easier to extend to multiple collocation cases. The Pythagorean solution is fully equivalent to the original inner product solution for the triple collocation case. The multiple collocation turns out to be an over-constrained problem and a least squared solution is presented. As the most critical assumption of uncorrelated errors will almost for sure fail in multiple collocation problems, we propose to divide the source estimates into structural categories and treat the structural and non-structural errors separately. Such error separation allows the source estimates to have their structural errors fully correlated within the same structural category, which is much more realistic than the original assumption. A new error assessment procedure is developed which performs the collocation twice, each for one type of errors, and then sums up the two types of errors. The new procedure is also fully backward compatible with the original triple collocation. Error assessment experiments are carried out for surface soil moisture data from multiple remote sensing models, land surface models, and in situ measurements.
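    For reference, the classical triple collocation solution that this work generalizes can be written in covariance form. The sketch below implements the standard three-estimate case (assuming unbiased estimates with mutually uncorrelated errors), not the paper's Pythagorean/least-squares extension:

```python
import numpy as np

def triple_collocation(x, y, z):
    """Covariance-based triple collocation: error variances of three
    estimates of the same truth, assuming mutually uncorrelated errors."""
    C = np.cov(np.vstack((x, y, z)))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

# synthetic check: shared truth plus independent noise of known variance
rng = np.random.default_rng(0)
truth = rng.standard_normal(200_000)
ex, ey, ez = triple_collocation(truth + 0.3 * rng.standard_normal(200_000),
                                truth + 0.5 * rng.standard_normal(200_000),
                                truth + 0.4 * rng.standard_normal(200_000))
```

    The uncorrelated-error assumption baked into the covariance ratios above is precisely what fails with more than three sources, motivating the structural/non-structural error separation proposed in the abstract.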

  6. Investigating ESL Learners' Lexical Collocations: The Acquisition of Verb + Noun Collocations by Japanese Learners of English

    ERIC Educational Resources Information Center

    Miyakoshi, Tomoko

    2009-01-01

    Although it is widely acknowledged that collocations play an important part in second language learning, especially at intermediate-advanced levels, learners' difficulties with collocations have not been investigated in much detail so far. The present study examines ESL learners' use of verb-noun collocations, such as "take notes," "place an…

  7. A Localized Tau Method PDE Solver

    NASA Technical Reports Server (NTRS)

    Cottam, Russell

    2002-01-01

    In this paper we present a new form of the collocation method that allows one to find very accurate solutions to time marching problems without the unwelcome appearance of Gibbs phenomenon oscillations. The basic method is applicable to any partial differential equation whose solution is a continuous, albeit possibly rapidly varying, function. Discontinuous functions are dealt with by replacing the function in a small neighborhood of the discontinuity with a spline that smoothly connects the function segments on either side of the discontinuity. This will be demonstrated when the solution to the inviscid Burgers equation is discussed.

  8. Stochastic dynamic models and Chebyshev splines

    PubMed Central

    Fan, Ruzong; Zhu, Bin; Wang, Yuedong

    2015-01-01

    In this article, we establish a connection between a stochastic dynamic model (SDM) driven by a linear stochastic differential equation (SDE) and a Chebyshev spline, which enables researchers to borrow strength across fields both theoretically and numerically. We construct a differential operator for the penalty function and develop a reproducing kernel Hilbert space (RKHS) induced by the SDM and the Chebyshev spline. The general form of the linear SDE allows us to extend the well-known connection between an integrated Brownian motion and a polynomial spline to a connection between more complex diffusion processes and Chebyshev splines. One interesting special case is the connection between an integrated Ornstein–Uhlenbeck process and an exponential spline. We use two real data sets to illustrate the integrated Ornstein–Uhlenbeck process model and the exponential spline model and show that their estimates are almost identical. PMID:26045632

  9. Covariance modeling in geodetic applications of collocation

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko

    2014-05-01

    The collocation method is widely applied in geodesy for estimating and interpolating gravity-related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for obtaining reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions is addressed with an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized), where the unknown variables or their linear combinations are subject to constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit to the corresponding empirical values. Further constraints are applied to maintain coherence with the model degree variances and to prevent solutions with no physical meaning. The fitting procedure is iterative: in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to assess the impact that improved covariance modeling has on the collocation estimate.
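    The core idea, fitting empirical covariance values by a non-negative combination of degree-variance terms with a simplex-type solver, can be sketched as a linear program. Everything below (the Legendre basis, the synthetic "empirical" values, the minimax objective) is illustrative, not the authors' actual formulation:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import linprog

# synthetic empirical covariance values on a grid of spherical distances,
# generated from known positive degree variances plus noise (illustrative)
psi = np.linspace(0.0, 0.5, 25)                     # spherical distance, rad
P = np.column_stack([legendre.Legendre.basis(n)(np.cos(psi))
                     for n in range(15)])           # Legendre basis P_n(cos psi)
rng = np.random.default_rng(3)
true_c = np.zeros(15)
true_c[[2, 5, 9]] = [0.5, 0.3, 0.2]                 # "true" degree variances
emp = P @ true_c + 0.005 * rng.standard_normal(len(psi))

# minimax fit with non-negative coefficients (positive degree variances),
# cast as a linear program:  min t  s.t.  -t <= P @ c - emp <= t,  c >= 0
m, n = P.shape
cost = np.r_[np.zeros(n), 1.0]                      # variables: [c, t]
A_ub = np.block([[P, -np.ones((m, 1))],
                 [-P, -np.ones((m, 1))]])
b_ub = np.r_[emp, -emp]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, method="highs")  # bounds default >= 0
coeffs, max_dev = res.x[:n], res.x[n]
```

    Keeping the coefficients non-negative is the "physical meaning" constraint mentioned above: the recovered degree variances can never go negative, whatever the noise in the empirical values.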

  10. A simplified package for calculating with splines

    NASA Technical Reports Server (NTRS)

    Smith, P. W.

    1974-01-01

    This package is designed to solve some of the elementary problems of spline interpolation and least squares fitting. The subroutines fall into three basic categories. The first category involves computations with a given spline and/or knot sequence; the second category involves routines which calculate the coefficients of splines which perform a certain task (such as interpolation or least squares fitting); and the last category is a banded equation solver for specific linear equations.
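    The three categories map naturally onto a modern spline library. A sketch with SciPy (which postdates this 1974 package; the original Fortran routines are not shown in the abstract):

```python
import numpy as np
from scipy.interpolate import make_interp_spline, make_lsq_spline

x = np.linspace(0.0, 2.0 * np.pi, 25)
y = np.sin(x)

# category 2: compute coefficients of a spline that interpolates the data
interp = make_interp_spline(x, y, k=3)

# category 2: least-squares cubic fit on a coarser, prescribed knot sequence
knots = np.r_[(x[0],) * 4, np.linspace(1.0, 5.0, 5), (x[-1],) * 4]
lsq = make_lsq_spline(x, y, knots, k=3)

# category 1: computations with a given spline and knot sequence
value = interp(np.pi / 2)
deriv = interp.derivative()(np.pi / 2)
```

    Category 3, the banded solver, is hidden inside `make_interp_spline`, which assembles and solves a banded collocation system rather than a dense one.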

  11. Supporting Collocation Learning with a Digital Library

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Franken, Margaret; Witten, Ian H.

    2010-01-01

    Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…

  13. Collocation and Technicality in EAP Engineering

    ERIC Educational Resources Information Center

    Ward, Jeremy

    2007-01-01

    This article explores how collocation relates to lexical technicality, and how the relationship can be exploited for teaching EAP to second-year engineering students. First, corpus data are presented to show that complex noun phrase formation is a ubiquitous feature of engineering text, and that these phrases (or collocations) are highly…

  14. Radiation heat transfer model using Monte Carlo ray tracing method on hierarchical ortho-Cartesian meshes and non-uniform rational basis spline surfaces for description of boundaries

    NASA Astrophysics Data System (ADS)

    Kuczyński, Paweł; Białecki, Ryszard

    2014-06-01

    The paper deals with the solution of radiation heat transfer problems in enclosures filled with a nonparticipating medium, using ray tracing on hierarchical ortho-Cartesian meshes. The idea behind the approach is that radiative heat transfer problems can be solved on much coarser grids than their counterparts from computational fluid dynamics (CFD). The resulting code is designed as an add-on to OpenFOAM, an open-source CFD program. An ortho-Cartesian mesh involving boundary elements is created based upon the CFD mesh. Parametric non-uniform rational basis spline (NURBS) surfaces are used to define the boundaries of the enclosure, allowing the method to deal with domains of complex shapes. An algorithm for determining random, uniformly distributed locations of rays leaving NURBS surfaces is described. The paper presents results of test cases assuming gray diffusive walls. In the current version of the model the radiation is not absorbed within gases. However, the ultimate aim of the work is to extend the model to problems in absorbing, emitting and scattering media, projecting iteratively the results of the radiative analysis on the CFD mesh and the CFD solution on the radiative mesh.

  15. Entropy Stable Spectral Collocation Schemes for the Navier-Stokes Equations: Discontinuous Interfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.

    2013-01-01

    Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.

  16. Beamforming with collocated microphone arrays

    NASA Astrophysics Data System (ADS)

    Lockwood, Michael E.; Jones, Douglas L.; Su, Quang; Miles, Ronald N.

    2003-10-01

    A collocated microphone array, including three gradient microphones with different orientations and one omnidirectional microphone, was used to acquire data in a sound-treated room and in an outdoor environment. This arrangement of gradient microphones represents an acoustic vector sensor used in air. Beamforming techniques traditionally associated with much larger uniformly spaced arrays of omnidirectional sensors are extended to this compact array (1 cm³) with encouraging results. A frequency-domain minimum-variance beamformer was developed to work with this array. After a calibration of the array, the recovery of sources from any direction is achieved with high fidelity, even in the presence of multiple interferers. SNR gains of 5-12 dB with up to four speech sources were obtained with both indoor and outdoor recordings. This algorithm has been developed for new MEMS-type microphones that further reduce the size of the sensor array.
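    For a collocated array the steering vector encodes directivity rather than inter-element phase delays. A minimal narrowband sketch of the minimum-variance (MVDR) weights for one omnidirectional and two orthogonal gradient channels follows; the 2-D geometry, channel count, and all signal levels are invented for illustration and do not reproduce the four-microphone system in the abstract:

```python
import numpy as np

def steering(theta):
    """Response of a collocated omni + two orthogonal gradient mics (2-D
    sketch): the arrival angle shows up in amplitude, not in phase."""
    return np.array([1.0, np.cos(theta), np.sin(theta)])

# simulate one narrowband frequency bin: target at 0 rad, interferer at 2 rad
rng = np.random.default_rng(1)
n = 4000
s = rng.standard_normal(n)                 # target signal
i = 3.0 * rng.standard_normal(n)           # stronger interferer
noise = 0.1 * rng.standard_normal((3, n))  # per-channel sensor noise
x = np.outer(steering(0.0), s) + np.outer(steering(2.0), i) + noise

R = x @ x.T / n                            # sample covariance of the bin
d = steering(0.0)                          # look direction
w = np.linalg.solve(R, d)
w /= d @ w                                 # MVDR: w = R^-1 d / (d^H R^-1 d)

gain_target = abs(w @ steering(0.0))       # unity by construction
gain_interf = abs(w @ steering(2.0))       # driven toward a null
```

    The distortionless constraint fixes unit gain toward the target while the covariance inverse pushes a null onto the interferer, which is the behaviour the reported 5-12 dB SNR gains reflect.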

  17. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    SciTech Connect

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.; Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.

    2008-07-14

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, simultaneously predicts the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions; especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. 
The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
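    The monotone-constrained least-squares idea (without the penalty term of the full PCLS formulation) can be sketched by bounding the increments of the B-spline coefficients, which is a sufficient condition for a monotone spline. The sigmoidal test curve below is invented; it only mimics an assay intensity-versus-concentration standard curve:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import lsq_linear

def fit_monotone_spline(x, y, n_knots=8, k=3):
    """Monotone-constrained least-squares B-spline fit.

    Nondecreasing B-spline coefficients guarantee a nondecreasing spline,
    so we reparameterise c = T @ theta (cumulative sums of increments)
    and bound the increments theta[1:] >= 0."""
    t = np.r_[(x.min(),) * (k + 1),
              np.linspace(x.min(), x.max(), n_knots)[1:-1],
              (x.max(),) * (k + 1)]
    nc = len(t) - k - 1
    B = BSpline.design_matrix(x, t, k).toarray()
    T = np.tril(np.ones((nc, nc)))            # c = cumulative sum of theta
    lb = np.r_[-np.inf, np.zeros(nc - 1)]     # first coefficient free
    theta = lsq_linear(B @ T, y, bounds=(lb, np.inf)).x
    return BSpline(t, T @ theta, k)

# noisy sigmoidal "standard curve" (illustrative, not assay data)
rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 80)
y = 1.0 / (1.0 + np.exp(-(x - 5.0))) + 0.02 * rng.standard_normal(80)
fitted = fit_monotone_spline(x, y)(x)
```

    Unlike a four-parameter logistic, the constrained spline keeps its flexibility for asymmetric or censored curves while the monotonicity constraint prevents the non-physical wiggles an unconstrained spline would fit to the noise.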

  18. Validation of significant wave height product from Envisat ASAR using triple collocation

    NASA Astrophysics Data System (ADS)

    Wang, H.; Shi, C. Y.; Zhu, J. H.; Huang, X. Q.; Chen, C. T.

    2014-03-01

Nowadays, spaceborne Synthetic Aperture Radar (SAR) has become a powerful tool for providing significant wave height. Traditionally, validation of SAR-derived ocean wave height has been carried out against buoy measurements or model outputs, which yields only an inter-comparison, not an 'absolute' validation. In this study, the triple collocation error model has been introduced in the validation of Envisat ASAR level 2 data. Significant wave height data from ASAR were validated against in situ buoy data and wave model hindcast results from WaveWatch III, covering a period of six years. The impact of the collocation distance on the error of ASAR wave height was discussed. From the triple collocation validation analysis, it is found that the error of the Envisat ASAR significant wave height product is linear in the collocation distance and decreases with decreasing collocation distance. Using a linear regression fit, the absolute error of Envisat ASAR wave height was obtained at zero collocation distance. From this triple collocation validation work, the absolute Envisat ASAR wave height error in the deep, open ocean is 0.49 m.
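The triple collocation error model applied in this record has a compact covariance form: given three collocated measurements of the same quantity with mutually independent zero-mean errors, each system's error variance follows directly from sample covariances. The sketch below checks the formula on synthetic wave heights; the error levels (0.30, 0.20, 0.40 m) are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
truth = rng.normal(2.0, 0.8, n)          # synthetic significant wave height
sar   = truth + rng.normal(0, 0.30, n)   # SAR-like measurement
buoy  = truth + rng.normal(0, 0.20, n)   # buoy-like measurement
model = truth + rng.normal(0, 0.40, n)   # hindcast-like measurement

def tc_error_std(a, b, c):
    """Error std of `a`, assuming mutually independent errors in the triplet:
    var_err(a) = cov(a,a) - cov(a,b) * cov(a,c) / cov(b,c)."""
    A, B, C = a - a.mean(), b - b.mean(), c - c.mean()
    var = (A * A).mean() - (A * B).mean() * (A * C).mean() / (B * C).mean()
    return np.sqrt(var)

sigma_sar = tc_error_std(sar, buoy, model)   # should recover roughly 0.30
```

No dataset is treated as truth: the same formula with arguments rotated yields the buoy and model error estimates as well.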

  19. Radial spline assembly for antifriction bearings

    NASA Technical Reports Server (NTRS)

    Moore, Jerry H. (Inventor)

    1993-01-01

An outer race carrier is constructed for receiving an outer race of an antifriction bearing assembly. The carrier in turn is slidably fitted in an opening of a support wall to accommodate slight axial movements of a shaft. A plurality of longitudinal splines on the carrier are disposed to be fitted into matching slots in the opening. A deadband gap is provided between sides of the splines and slots, with a radial gap at ends of the splines and slots and a gap between the splines and slots sized larger than the deadband gap. With this construction, operational distortions (slope) of the support wall are accommodated by the larger radial gaps while the deadband gaps maintain a relatively high spring rate of the housing. Additionally, side loads applied to the shaft are distributed between sides of the splines and slots, distributing such loads over a larger surface area than a race carrier of the prior art.

  20. Modelling Childhood Growth Using Fractional Polynomials and Linear Splines

    PubMed Central

    Tilling, Kate; Macdonald-Wallis, Corrie; Lawlor, Debbie A.; Hughes, Rachael A.; Howe, Laura D.

    2014-01-01

    Background There is increasing emphasis in medical research on modelling growth across the life course and identifying factors associated with growth. Here, we demonstrate multilevel models for childhood growth either as a smooth function (using fractional polynomials) or a set of connected linear phases (using linear splines). Methods We related parental social class to height from birth to 10 years of age in 5,588 girls from the Avon Longitudinal Study of Parents and Children (ALSPAC). Multilevel fractional polynomial modelling identified the best-fitting model as being of degree 2 with powers of the square root of age, and the square root of age multiplied by the log of age. The multilevel linear spline model identified knot points at 3, 12 and 36 months of age. Results Both the fractional polynomial and linear spline models show an initially fast rate of growth, which slowed over time. Both models also showed that there was a disparity in length between manual and non-manual social class infants at birth, which decreased in magnitude until approximately 1 year of age and then increased. Conclusions Multilevel fractional polynomials give a more realistic smooth function, and linear spline models are easily interpretable. Each can be used to summarise individual growth trajectories and their relationships with individual-level exposures. PMID:25413651

  1. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock price and its changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and the semi-parametric splines technique for predicting stock price in this study. The MARS model is an adaptive nonparametric regression method that suits problems with high dimensions and several variables. The semi-parametric splines technique is based on smoothing splines, a nonparametric regression method. In this study, we used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price using the MARS model and the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
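MARS builds its fit from hinge functions max(x − t, 0), choosing knot locations t adaptively. A single one-variable forward step — scan candidate knots and keep the one that most reduces the residual sum of squares — can be sketched as follows; the data, coefficients, and knot grid are invented.

```python
import numpy as np

def hinge(x, t):
    return np.maximum(x - t, 0.0)

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 10.0, 400)
# synthetic response whose slope changes at x = 4
y = 1.0 + 0.5 * x + 2.0 * hinge(x, 4.0) + rng.normal(0.0, 0.2, 400)

# One MARS-style forward step: try candidate knots, keep the best by RSS.
best_rss, best_knot = np.inf, None
for t in np.linspace(0.5, 9.5, 91):
    X = np.column_stack([np.ones_like(x), x, hinge(x, t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    if rss < best_rss:
        best_rss, best_knot = rss, t
```

Full MARS repeats this over all candidate variables and existing basis functions, adds hinge pairs, and then prunes backward; the sketch shows only the core knot-selection step.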

  2. An adaptive three-dimensional RHT-splines formulation in linear elasto-statics and elasto-dynamics

    NASA Astrophysics Data System (ADS)

    Nguyen-Thanh, N.; Muthu, J.; Zhuang, X.; Rabczuk, T.

    2014-02-01

An adaptive three-dimensional isogeometric formulation based on rational splines over hierarchical T-meshes (RHT-splines) for problems in elasto-statics and elasto-dynamics is presented. RHT-splines avoid some shortcomings of NURBS-based formulations; in particular, they allow for adaptive h-refinement with ease. In order to drive the adaptive refinement, we present a recovery-based error estimator for RHT-splines. The method is applied to several problems in elasto-statics and elasto-dynamics, including three-dimensional modeling of thin structures. The results are compared to analytical solutions and results of NURBS-based isogeometric formulations.

  3. Multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS), is a nonparametric method that estimates complex nonlinear relationships by a seri...

  4. The Effect of Input Enhancement of Collocations in Reading on Collocation Learning and Retention of EFL Learners

    ERIC Educational Resources Information Center

    Goudarzi, Zahra; Moini, M. Raouf

    2012-01-01

Collocation is one of the most problematic areas in second language learning, and it seems that anyone who wants to improve his or her communication in another language should improve his or her collocation competence. This study attempts to determine the effect of applying three different kinds of collocation on collocation learning and retention of…

  5. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
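The backward-elimination idea for knot selection can be sketched with SciPy's least-squares B-spline fitting: start from a generous set of interior knots and repeatedly drop the knot whose removal degrades the residual sum of squares least, stopping when any removal would hurt the fit noticeably. The data, initial knot count, and stopping factor below are all invented; the paper's statistical elimination criterion is replaced by a simple RSS ratio.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(4)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.05, 200)

def rss(interior_knots):
    """Residual sum of squares of a cubic LSQ B-spline with these knots."""
    s = LSQUnivariateSpline(x, y, interior_knots, k=3)
    return float(np.sum((s(x) - y) ** 2))

knots = list(np.linspace(0.5, 2.0 * np.pi - 0.5, 10))  # generous initial set
while len(knots) > 2:
    candidates = [(rss(knots[:i] + knots[i + 1:]), i) for i in range(len(knots))]
    best_rss, i = min(candidates)
    if best_rss > 1.5 * rss(knots):   # removing any knot hurts: stop
        break
    del knots[i]
```

A statistically principled version would use an F-test or an information criterion instead of the fixed 1.5 factor, which is the spirit of the variable-selection approach the record describes.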

  6. Calculating the 2D motion of lumbar vertebrae using splines.

    PubMed

    McCane, Brendan; King, Tamara I; Abbott, J Haxby

    2006-01-01

In this study we investigate the use of splines and the ICP method [Besl, P., McKay, N., 1992. A method for registration of 3d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 239-256.] for calculating the transformation parameters for a rigid body undergoing planar motion parallel to the image plane. We demonstrate the efficacy of the method by estimating the finite centre of rotation and angle of rotation from lateral flexion/extension radiographs of the lumbar spine. In an in vitro error study, the method displayed an average error of rotation of 0.44 +/- 0.45 degrees, and an average error in FCR calculation of 7.6 +/- 8.5 mm. The method was shown to be superior to that of Crisco et al. [Two-dimensional rigid-body kinematics using image contour registration. Journal of Biomechanics 28(1), 119-124.] and Brinckmann et al. [Quantification of overload injuries of the thoracolumbar spine in persons exposed to heavy physical exertions or vibration at the workplace: Part I - the shape of vertebrae and intervertebral discs - study of a young, healthy population and a middle-aged control group. Clinical Biomechanics Supplement 1, S5-S83.] for the tests performed here. In general, we believe the use of splines to represent planar shapes to be superior to using digitised curves or landmarks for several reasons. First, with appropriate software, splines require less effort to define and are a compact representation, with most vertebra outlines using less than 30 control points. Second, splines are inherently sub-pixel representations of curves, even if the control points are limited to pixel resolutions. Third, there is a well-defined method (the ICP algorithm) for registering shapes represented as splines. 
Finally, like digitised curves, splines are able to represent a large class of shapes with little effort, but reduce potential segmentation errors from two dimensions (parallel and perpendicular to the image gradient) to just one (parallel to the image gradient). We have developed an application for performing all the necessary computations which can be downloaded from http://www.claritysmart.com. PMID:16325826
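The core computation here — recovering a planar rigid transform between two point sets and then its finite centre of rotation — can be sketched with the standard SVD (Kabsch/Procrustes) solution, which is what ICP applies at each iteration once correspondences are fixed. Synthetic points stand in for spline-sampled vertebra outlines; the 10-degree rotation and centre are invented test values.

```python
import numpy as np

def rigid_2d(P, Q):
    """Least-squares planar rotation R and translation t with Q ≈ P @ R.T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(5)
P = rng.normal(0.0, 10.0, (30, 2))         # stand-in for a sampled outline
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
centre = np.array([5.0, -3.0])             # known centre of rotation
Q = (P - centre) @ R_true.T + centre       # rotate outline about the centre

R, t = rigid_2d(P, Q)
angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
fcr = np.linalg.solve(np.eye(2) - R, t)    # fixed point: c = R c + t
```

The finite centre of rotation is the fixed point of the recovered transform, obtained by solving (I − R)c = t, which is well conditioned whenever the rotation angle is not tiny.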

  7. An empirical understanding of triple collocation evaluation measure

    NASA Astrophysics Data System (ADS)

    Scipal, Klaus; Doubkova, Marcela; Hegyova, Alena; Dorigo, Wouter; Wagner, Wolfgang

    2013-04-01

The triple collocation method is an advanced evaluation method that has been used in the soil moisture field for only about half a decade. The method requires three datasets with independent error structures that represent an identical phenomenon. The main advantages of the method are that it a) doesn't require a reference dataset that has to be considered to represent the truth, b) limits the effect of random and systematic errors of the other two datasets, and c) simultaneously assesses the error of three datasets. The objective of this presentation is to assess the triple collocation error (Tc) of the ASAR Global Mode Surface Soil Moisture (GM SSM) 1 km dataset and highlight problems of the method related to its ability to cancel the effect of errors in ancillary datasets. In particular, the goal is a) to investigate trends in Tc related to the change in spatial resolution from 5 to 25 km, b) to investigate trends in Tc related to the choice of a hydrological model, and c) to study the relationship between Tc and other absolute evaluation methods (namely RMSE and error propagation, EP). The triple collocation method is implemented using ASAR GM, AMSR-E, and a model (either AWRA-L, GLDAS-NOAH, or ERA-Interim). First, the significance of the relationship between the three soil moisture datasets, a prerequisite for the triple collocation method, was tested. Second, the trends in Tc related to the choice of the third reference dataset and scale were assessed. For this purpose the triple collocation was repeated, replacing AWRA-L with two different globally available model reanalysis datasets operating at different spatial resolutions (ERA-Interim and GLDAS-NOAH). Finally, the retrieved results were compared to the results of the RMSE and EP evaluation measures. Our results demonstrate that the Tc method does not eliminate the random and time-variant systematic errors of the second and the third dataset used in the Tc. 
The possible reasons include the fact a) that the TC method could not fully function with datasets acting at very different spatial resolutions, or b) that the errors were not fully independent as initially assumed.

  8. A Two-Timescale Discretization Scheme for Collocation

    NASA Technical Reports Server (NTRS)

    Desai, Prasun; Conway, Bruce A.

    2004-01-01

The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement, with differences of less than 0.5 percent observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

  9. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V., II

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. ?? 1985.

  10. The Impact of Corpus-Based Collocation Instruction on Iranian EFL Learners' Collocation Learning

    ERIC Educational Resources Information Center

    Ashouri, Shabnam; Arjmandi, Masoume; Rahimi, Ramin

    2014-01-01

    Over the past decades, studies of EFL/ESL vocabulary acquisition have identified the significance of collocations in language learning. Due to the fact that collocations have been regarded as one of the major concerns of both EFL teachers and learners for many years, the present study attempts to shed light on the impact of corpus-based…

  11. Frequency of Input and L2 Collocational Processing: A Comparison of Congruent and Incongruent Collocations

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2013-01-01

    This study investigated the influence of frequency effects on the processing of congruent (i.e., having an equivalent first language [L1] construction) collocations and incongruent (i.e., not having an equivalent L1 construction) collocations in a second language (L2). An acceptability judgment task was administered to native and advanced…

  12. Data approximation using a blending type spline construction

    SciTech Connect

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C{sup k}-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.

  13. An alternative local collocation strategy for high-convergence meshless PDE solutions, using radial basis functions

    NASA Astrophysics Data System (ADS)

    Stevens, D.; Power, H.; Meng, C. Y.; Howard, D.; Cliffe, K. A.

    2013-12-01

    This work proposes an alternative decomposition for local scalable meshless RBF collocation. The proposed method operates on a dataset of scattered nodes that are placed within the solution domain and on the solution boundary, forming a small RBF collocation system around each internal node. Unlike other meshless local RBF formulations that are based on a generalised finite difference (RBF-FD) principle, in the proposed "finite collocation" method the solution of the PDE is driven entirely by collocation of PDE governing and boundary operators within the local systems. A sparse global collocation system is obtained not by enforcing the PDE governing operator, but by assembling the value of the field variable in terms of the field value at neighbouring nodes. In analogy to full-domain RBF collocation systems, communication between stencils occurs only over the stencil periphery, allowing the PDE governing operator to be collocated in an uninterrupted manner within the stencil interior. The local collocation of the PDE governing operator allows the method to operate on centred stencils in the presence of strong convective fields; the reconstruction weights assigned to nodes in the stencils being automatically adjusted to represent the flow of information as dictated by the problem physics. This "implicit upwinding" effect mitigates the need for ad-hoc upwinding stencils in convective dominant problems. Boundary conditions are also enforced within the local collocation systems, allowing arbitrary boundary operators to be imposed naturally within the solution construction. The performance of the method is assessed using a large number of numerical examples with two steady PDEs; the convection-diffusion equation, and the Lamé-Navier equations for linear elasticity. The method exhibits high-order convergence in each case tested (greater than sixth order), and the use of centred stencils is demonstrated for convective-dominant problems. 
In the case of linear elasticity, the stress fields are reproduced to the same degree of accuracy as the displacement field, and exhibit the same order of convergence. The method is also highly stable towards variations in basis function flatness, demonstrating significantly improved stability in comparison to finite-difference type RBF collocation methods.
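As a reduced illustration of RBF collocation (a global, Kansa-style scheme rather than the paper's local "finite collocation" stencils), the sketch below solves u″ = f on [0, 1] with multiquadric basis functions, enforcing the PDE operator at interior nodes and Dirichlet conditions at boundary nodes. The node count and shape parameter are arbitrary choices for illustration.

```python
import numpy as np

# Global multiquadric RBF collocation for u''(x) = f(x), u(0) = u(1) = 0,
# manufactured so the exact solution is u(x) = sin(pi x).
n, c = 25, 0.1                              # nodes and MQ shape parameter
x = np.linspace(0.0, 1.0, n)
r = np.abs(x[:, None] - x[None, :])
phi = np.sqrt(r**2 + c**2)                  # multiquadric basis phi(r)
phi_xx = c**2 / (r**2 + c**2) ** 1.5        # its second derivative in 1D

A = phi_xx.copy()
A[0, :], A[-1, :] = phi[0, :], phi[-1, :]   # boundary rows enforce Dirichlet
rhs = -np.pi**2 * np.sin(np.pi * x)         # f(x) for the exact solution
rhs[0] = rhs[-1] = 0.0
lam = np.linalg.solve(A, rhs)               # RBF expansion coefficients
u = phi @ lam                               # solution values at the nodes
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

The paper's local formulation assembles many small systems of exactly this shape around each node instead of one dense global matrix, which is what makes it scalable; the trade-offs around shape-parameter flatness the abstract mentions apply to both variants.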

  14. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.

  15. An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations

    SciTech Connect

    Ma Xiang; Zabaras, Nicholas

    2009-05-01

    In recent years, there has been a growing interest in analyzing and quantifying the effects of random inputs in the solution of ordinary/partial differential equations. To this end, the spectral stochastic finite element method (SSFEM) is the most popular method due to its fast convergence rate. Recently, the stochastic sparse grid collocation method has emerged as an attractive alternative to SSFEM. It approximates the solution in the stochastic space using Lagrange polynomial interpolation. The collocation method requires only repetitive calls to an existing deterministic solver, similar to the Monte Carlo method. However, both the SSFEM and current sparse grid collocation methods utilize global polynomials in the stochastic space. Thus when there are steep gradients or finite discontinuities in the stochastic space, these methods converge very slowly or even fail to converge. In this paper, we develop an adaptive sparse grid collocation strategy using piecewise multi-linear hierarchical basis functions. Hierarchical surplus is used as an error indicator to automatically detect the discontinuity region in the stochastic space and adaptively refine the collocation points in this region. Numerical examples, especially for problems related to long-term integration and stochastic discontinuity, are presented. Comparisons with Monte Carlo and multi-element based random domain decomposition methods are also given to show the efficiency and accuracy of the proposed method.
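The hierarchical-surplus refinement this abstract describes is easiest to see in one dimension: interpolate with nested piecewise-linear hat functions, use each node's surplus (function value minus the current interpolant there) as the error indicator, and spawn children only where the surplus exceeds a tolerance. A minimal sketch, assuming the target function vanishes at the domain ends:

```python
import numpy as np

def adaptive_hier_interp(f, tol=1e-3, max_level=15):
    """1D adaptive piecewise-linear hierarchical interpolation.
    Returns a list of (point, halfwidth, surplus) triples."""
    nodes = []

    def interp(xq):
        return sum(w * max(0.0, 1.0 - abs(xq - p) / h) for p, h, w in nodes)

    active = [(0.5, 0.5, 1)]              # level-1 node; assumes f(0) = f(1) = 0
    while active:
        p, h, lvl = active.pop()
        w = f(p) - interp(p)              # hierarchical surplus = error indicator
        nodes.append((p, h, w))
        if abs(w) > tol and lvl < max_level:
            active.append((p - h / 2, h / 2, lvl + 1))
            active.append((p + h / 2, h / 2, lvl + 1))
    return nodes

f = lambda x: np.sin(np.pi * x)
nodes = adaptive_hier_interp(f, tol=1e-3)

def interp_at(xq, nodes):
    return sum(w * max(0.0, 1.0 - abs(xq - p) / h) for p, h, w in nodes)

grid = np.linspace(0.0, 1.0, 257)
err = max(abs(interp_at(g, nodes) - f(g)) for g in grid)
```

In the stochastic collocation setting of the paper, x is a random input dimension, f(p) is a deterministic solver call at that collocation point, and the same surplus test concentrates points around discontinuities in the stochastic space.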

  16. Heterogeneous modeling of medical image data using B-spline functions.

    PubMed

Grove, Olya; Rajab, Khairan; Piegl, Les A.

    2012-10-01

Biomedical data visualization and modeling rely predominantly on manual processing and utilization of voxel- and facet-based homogeneous models. Biological structures are naturally heterogeneous and it is important to incorporate properties, such as material composition, size and shape, into the modeling process. A method to approximate image density data with a continuous B-spline surface is presented. The proposed approach generates a density point cloud, based on medical image data to reproduce heterogeneity across the image, through point densities. The density point cloud is ordered and approximated with a set of B-spline curves. A B-spline surface is lofted through the cross-sectional B-spline curves preserving the heterogeneity of the point cloud dataset. Preliminary results indicate that the proposed methodology produces a mathematical representation capable of capturing and preserving density variations with high fidelity. PMID:23157075

  17. Analysis of harmonic spline gravity models for Venus and Mars

    NASA Technical Reports Server (NTRS)

    Bowin, Carl

    1986-01-01

    Methodology utilizing harmonic splines for determining the true gravity field from Line-Of-Sight (LOS) acceleration data from planetary spacecraft missions was tested. As is well known, the LOS data incorporate errors in the zero reference level that appear to be inherent in the processing procedure used to obtain the LOS vectors. The proposed method offers a solution to this problem. The harmonic spline program was converted from the VAX 11/780 to the Ridge 32C computer. The problem with the matrix inversion routine that improved inversion of the data matrices used in the Optimum Estimate program for global Earth studies was solved. The problem of obtaining a successful matrix inversion for a single rev supplemented by data for the two adjacent revs still remains.

  18. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and if it is impossible it subdivides the data set and reconsiders the subset. After accepting the subset the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm uses this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
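One building block of such an algorithm — least-squares fitting a single cubic Bezier segment to ordered points under chord-length parameterization, with the endpoints interpolated — can be sketched as follows; a tolerance check on the residuals then decides whether to accept the segment or subdivide the data. The parameterization and the quarter-circle test data are simplifications invented for this sketch.

```python
import numpy as np

def bezier3(ctrl, t):
    """Evaluate a cubic Bezier curve (Bernstein form) at parameters t."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])

def fit_bezier3(pts):
    """LS-fit the two interior control points; endpoints interpolate."""
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.r_[0.0, np.cumsum(seg)] / seg.sum()       # chord-length parameters
    A = np.column_stack([3 * (1 - t) ** 2 * t, 3 * (1 - t) * t ** 2])
    rhs = pts - np.outer((1 - t) ** 3, pts[0]) - np.outer(t ** 3, pts[-1])
    inner, *_ = np.linalg.lstsq(A, rhs, rcond=None)  # rows are P1 and P2
    return np.vstack([pts[0], inner, pts[-1]]), t

theta = np.linspace(0.0, np.pi / 2, 25)              # quarter-circle test data
pts = np.column_stack([np.cos(theta), np.sin(theta)])
ctrl, t = fit_bezier3(pts)
max_dev = float(np.max(np.linalg.norm(bezier3(ctrl, t) - pts, axis=1)))
```

If `max_dev` exceeded the tolerance, a subdivide-and-refit step on the two halves of the point run would follow, which is the recursive structure the abstract describes.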

  19. Results of laser ranging collocations during 1983

    NASA Technical Reports Server (NTRS)

    Kolenkiewicz, R.

    1984-01-01

    The objective of laser ranging collocations is to compare the ability of two satellite laser ranging systems, located in the vicinity of one another, to measure the distance to an artificial Earth satellite in orbit over the sites. The similar measurement of this distance is essential before a new or modified laser system is deployed to worldwide locations in order to gather the data necessary to meet the scientific goals of the Crustal Dynamics Project. In order to be certain the laser systems are operating properly, they are periodically compared with each other. These comparisons or collocations are performed by locating the lasers side by side when they track the same satellite during the same time or pass. The data is then compared to make sure the lasers are giving essentially the same range results. Results of the three collocations performed during 1983 are given.

  20. Spline-based procedures for dose-finding studies with active control

    PubMed Central

    Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim

    2015-01-01

    In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
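The target-dose computation reduces to intersecting a fitted spline with a horizontal line at the active control's expected efficacy. A minimal SciPy sketch (the dose-response numbers and control level are invented, and the paper's bootstrap confidence interval step is omitted):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

doses = np.array([0.0, 10.0, 25.0, 50.0, 100.0, 150.0])
response = np.array([0.00, 0.21, 0.40, 0.62, 0.78, 0.83])  # invented means
active_control = 0.70                                      # invented level

cs = CubicSpline(doses, response)          # smooth dose-response curve
# smallest dose d* with cs(d*) = active-control efficacy, by root bracketing
d_star = brentq(lambda d: float(cs(d)) - active_control, doses[0], doses[-1])
```

A bootstrap interval for d* would repeat the fit-and-intersect step on resampled or perturbed dose-group means, mirroring the CI construction the abstract describes.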

  1. White matter fiber tracking directed by interpolating splines and a methodological framework for evaluation.

    PubMed

    Losnegrd, Are; Lundervold, Arvid; Hodneland, Erlend

    2013-01-01

    Image-based tractography of white matter (WM) fiber bundles in the brain using diffusion weighted MRI (DW-MRI) has become a useful tool in basic and clinical neuroscience. However, proper tracking is challenging due to the anatomical complexity of fiber pathways, the coarse resolution of clinically applicable whole-brain in vivo imaging techniques, and the difficulties associated with verification. In this study we introduce a new tractography algorithm using splines (denoted Spline). Spline reconstructs smooth fiber trajectories iteratively, in contrast to most other tractography algorithms that create piecewise linear fiber tract segments, followed by spline fitting. Using DW-MRI recordings from eight healthy elderly people participating in a longitudinal study of cognitive aging, we compare our Spline algorithm to two state-of-the-art tracking methods from the TrackVis software suite. The comparison is done quantitatively using diffusion metrics (fractional anisotropy, FA), with both (1) tract averaging, (2) longitudinal linear mixed-effects model fitting, and (3) detailed along-tract analysis. Further validation is done on recordings from a diffusion hardware phantom, mimicking a coronal brain slice, with a known ground truth. Results from the longitudinal aging study showed high sensitivity of Spline tracking to individual aging patterns of mean FA when combined with linear mixed-effects modeling, moderately strong differences in the along-tract analysis of specific tracts, whereas the tract-averaged comparison using simple linear OLS regression revealed less differences between Spline and the two other tractography algorithms. In the brain phantom experiments with a ground truth, we demonstrated improved tracking ability of Spline compared to the two reference tractography algorithms being tested. PMID:23898264

  2. Corpus-Based versus Traditional Learning of Collocations

    ERIC Educational Resources Information Center

    Daskalovska, Nina

    2015-01-01

    One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…

  3. Gauging the Effects of Exercises on Verb-Noun Collocations

    ERIC Educational Resources Information Center

    Boers, Frank; Demecheleer, Murielle; Coxhead, Averil; Webb, Stuart

    2014-01-01

    Many contemporary textbooks for English as a foreign language (EFL) and books for vocabulary study contain exercises with a focus on collocations, with verb-noun collocations (e.g. "make a mistake") being particularly popular as targets for collocation learning. Common exercise formats used in textbooks and other pedagogic materials…

  4. Is "Absorb Knowledge" an Improper Collocation?

    ERIC Educational Resources Information Center

    Su, Yujie

    2010-01-01

    Collocation is practically very tough to Chinese English learners. The main reason lies in the fact that English and Chinese belong to two distinct language systems. And the deep reason is that learners tend to develop different metaphorical concept in accordance with distinct ways of thinking in Chinese. The paper, taking "absorb…

  5. A space-time collocation scheme for modified anomalous subdiffusion and nonlinear superdiffusion equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.

    2016-01-01

    This paper reports a new spectral collocation technique for solving the time-space modified anomalous subdiffusion equation with a nonlinear source term subject to Dirichlet and Neumann boundary conditions. This model equation governs the evolution of the probability density function that describes anomalously diffusing particles. Anomalous diffusion is ubiquitous in physical and biological systems where trapping and binding of particles can occur. A space-time Jacobi collocation scheme is investigated for solving this problem. The main advantage of the proposed scheme is that the shifted Jacobi Gauss-Lobatto collocation and shifted Jacobi Gauss-Radau collocation approximations are employed for spatial and temporal discretizations, respectively. Thereby, the problem is successfully reduced to a system of algebraic equations. The numerical results obtained by this algorithm have been compared with those of various other numerical methods in order to demonstrate the high accuracy and efficiency of the proposed method. Indeed, even for a relatively limited number of Gauss-Lobatto and Gauss-Radau collocation nodes, the absolute error in the numerical solutions is sufficiently small.

  6. Spline-Screw Multiple-Rotation Mechanism

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Mechanism functions like combined robotic gripper and nut runner. Spline-screw multiple-rotation mechanism related to spline-screw payload-fastening system described in (GSC-13454). Incorporated as subsystem in alternative version of system. Mechanism functions like combination of robotic gripper and nut runner; provides both secure grip and rotary actuation of other parts of system. Used in system in which no need to make or break electrical connections to payload during robotic installation or removal of payload. More complicated version needed to make and break electrical connections. Mechanism mounted in payload.

  7. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in their governing equations, where a subset of the states has high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.

  8. Restricted trivariate polycube splines for volumetric data modeling.

    PubMed

    Wang, Kexiang; Li, Xin; Li, Bo; Xu, Huanhuan; Qin, Hong

    2012-05-01

    This paper presents a volumetric modeling framework to construct a novel spline scheme called restricted trivariate polycube splines (RTP-splines). The RTP-spline aims to generalize both trivariate T-splines and tensor-product B-splines; it uses solid polycube structure as underlying parametric domains and strictly bounds blending functions within such domains. We construct volumetric RTP-splines in a top-down fashion in four steps: 1) Extending the polycube domain to its bounding volume via space filling; 2) building the B-spline volume over the extended domain with restricted boundaries; 3) inserting duplicate knots by adding anchor points and performing local refinement; and 4) removing exterior cells and anchors. Besides local refinement inherited from general T-splines, the RTP-splines have a few attractive properties as follows: 1) They naturally model solid objects with complicated topologies/bifurcations using a one-piece continuous representation without domain trimming/patching/merging. 2) They have guaranteed semistandardness so that the functions and derivatives evaluation is very efficient. 3) Their restricted support regions of blending functions prevent control points from influencing other nearby domain regions that stay opposite to the immediate boundaries. These features are highly desirable for certain applications such as isogeometric analysis. We conduct extensive experiments on converting complicated solid models into RTP-splines, and demonstrate the proposed spline to be a powerful and promising tool for volumetric modeling and other scientific/engineering applications where data sets with multiattributes are prevalent. PMID:22442125

  9. A multiresolution analysis for tensor-product splines using weighted spline wavelets

    NASA Astrophysics Data System (ADS)

    Kapl, Mario; Jüttler, Bert

    2009-09-01

    We construct biorthogonal spline wavelets for periodic splines which extend the notion of "lazy" wavelets for linear functions (where the wavelets are simply a subset of the scaling functions) to splines of higher degree. We then use the lifting scheme in order to improve the approximation properties with respect to a norm induced by a weighted inner product with a piecewise constant weight function. Using the lifted wavelets we define a multiresolution analysis of tensor-product spline functions and apply it, as a model problem, to the compression of black-and-white images, demonstrating that the use of a weight function allows the norm to be adapted to the specific problem.

  10. Shaft Coupler With Friction and Spline Clutches

    NASA Technical Reports Server (NTRS)

    Thebert, Glenn W.

    1987-01-01

    Coupling, developed for rotor of lift/cruise aircraft, employs two clutches for smooth transmission of power from gas-turbine engine to rotor. Prior to ascent, coupling applies friction-type transition clutch that accelerates rotor shaft to speeds matching those of engine shaft. Once shafts synchronized, spline coupling engaged and friction clutch released to provide positive mechanical drive.

  11. A Spline Regression Model for Latent Variables

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.

    2014-01-01

    Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…

  12. Spline smoothing of histograms by linear programming

    NASA Technical Reports Server (NTRS)

    Bennett, J. O.

    1972-01-01

    An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is first made from the data. Next, Euclidean space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
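
    A rough modern analogue of this histogram-smoothing step can be sketched with SciPy's least-squares B-spline fit; note that the original work instead solves a linear program enforcing nonnegativity and unit area, which plain least squares does not guarantee (the sample, bins, and knots below are illustrative).

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
sample = rng.normal(size=2000)

# Histogram of the sample, normalized to a density.
counts, edges = np.histogram(sample, bins=40, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Cubic B-spline least-squares fit through the bin heights.
# (The paper solves a linear program that additionally enforces
# nonnegativity and unit area; least squares is used here for brevity.)
k = 3
interior = np.linspace(-3, 3, 7)
t = np.r_[(centers[0],) * (k + 1), interior, (centers[-1],) * (k + 1)]
spl = make_lsq_spline(centers, counts, t, k=k)

grid = np.linspace(centers[0], centers[-1], 400)
density = spl(grid)
area = density.sum() * (grid[1] - grid[0])  # Riemann-sum estimate of the area
print(round(area, 3))  # close to 1 for a well-behaved fit
```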

  13. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We study several algorithms for computing the Chebyshev spectral derivative and compare their roundoff error. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
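
    The accuracy issue can be illustrated by building the Chebyshev differentiation matrix two ways: with the textbook analytic diagonal, and with the "negative sum trick", which chooses each diagonal entry so that the matrix exactly annihilates constants. This is a standard remedy sketched for illustration, not necessarily the paper's specific method.

```python
import numpy as np

def cheb_matrix(N, negative_sum=True):
    # Chebyshev-Gauss-Lobatto points and differentiation matrix.
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)
    c = np.where((j == 0) | (j == N), 2.0, 1.0) * (-1.0) ** j
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    if negative_sum:
        # Diagonal from the identity D @ ones = 0: roundoff-consistent rows.
        D -= np.diag(D.sum(axis=1))
    else:
        # Textbook analytic diagonal, more sensitive to roundoff.
        d = np.zeros(N + 1)
        d[1:N] = -x[1:N] / (2.0 * (1.0 - x[1:N] ** 2))
        d[0] = (2.0 * N**2 + 1.0) / 6.0
        d[N] = -(2.0 * N**2 + 1.0) / 6.0
        np.fill_diagonal(D, d)
    return D, x

N = 64
for flag in (False, True):
    D, x = cheb_matrix(N, negative_sum=flag)
    err = np.abs(D @ np.exp(x) - np.exp(x)).max()
    print(flag, err)
```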

  14. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.

  15. Trigonometric quadratic B-spline subdomain Galerkin algorithm for the Burgers' equation

    NASA Astrophysics Data System (ADS)

    Ay, Buket; Dag, Idris; Gorgulu, Melis Zorsahin

    2015-12-01

    A variant of the subdomain Galerkin method has been set up to find numerical solutions of the Burgers' equation. The approximate function consists of a combination of trigonometric B-splines. Integration of the Burgers' equation has been achieved with the aid of the subdomain Galerkin method based on trigonometric B-splines as approximate functions. The resulting first-order ordinary differential system has been converted into an iterative algebraic equation by use of the Crank-Nicolson method at two successive time levels. The suggested algorithm is tested on some well-known problems for the Burgers' equation.

  16. Integral resonant control of collocated smart structures

    NASA Astrophysics Data System (ADS)

    Aphale, Sumeet S.; Fleming, Andrew J.; Moheimani, S. O. Reza

    2007-04-01

    This paper introduces integral resonant control, IRC, a simple, robust and well-performing technique for vibration control in smart structures with collocated sensors and actuators. By adding a direct feed-through to a collocated system, the transfer function can be modified from containing resonant poles followed by interlaced zeros, to zeros followed by interlaced poles. It is shown that this modification permits the direct application of integral feedback and results in good performance and stability margins. By slightly increasing the controller complexity from first to second order, low-frequency gain can be curtailed, alleviating problems due to unnecessarily high controller gain below the first mode. Experimental application to a piezoelectric laminate cantilever beam demonstrates up to 24 dB modal amplitude reduction over the first eight modes.
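
    The pole-zero pattern change described above can be checked numerically for a toy collocated transfer function with three modes; the modal data and feed-through value below are illustrative, not from the paper.

```python
import numpy as np

# Collocated resonant structure: G(s) = sum_i a_i / (s^2 + w_i^2),
# whose poles and zeros interlace on the imaginary axis. Adding a
# feed-through d places a zero below the first resonance, giving the
# "zeros followed by interlaced poles" pattern exploited by IRC.
w = np.array([1.0, 2.0, 3.0])   # modal frequencies (illustrative)
a = np.array([1.0, 1.0, 1.0])   # modal gains (illustrative)

def imag_zeros(d):
    """Positive imaginary-axis zero frequencies of G(s) + d."""
    num = np.poly1d([0.0])
    den = np.poly1d([1.0])
    for wi in w:
        den = den * np.poly1d([1.0, 0.0, wi**2])
    for i in range(len(w)):
        other = np.poly1d([1.0])
        for j in range(len(w)):
            if j != i:
                other = other * np.poly1d([1.0, 0.0, w[j]**2])
        num = num + a[i] * other
    num = num + d * den
    r = np.roots(num.coeffs)
    return np.sort(r.imag[r.imag > 1e-9])

print(imag_zeros(0.0))   # zeros interlace between the poles at 1, 2, 3
print(imag_zeros(-2.0))  # feed-through: a zero now lies below the first pole
```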

  17. Integral control of collocated smart structures

    NASA Astrophysics Data System (ADS)

    Aphale, Sumeet S.; Fleming, Andrew J.; Reza Moheimani, S. O.

    2007-04-01

    This paper introduces a simple and robust technique for vibration control in smart structures with collocated sensors and actuators. The technique is called Integral Resonant Control (IRC). We show that by adding a direct feed-through to a collocated system, the transfer function can be modified from containing resonant poles followed by interlaced zeros, to zeros followed by interlaced poles. This structure permits the direct application of integral feedback and results in good stability and damping performance. To alleviate the problems due to unnecessarily high controller gain below the first mode, a slightly more complex second-order controller is also discussed. A piezoelectric laminate cantilever beam used to test the proposed control scheme exhibits up to 24 dB modal damping over the first eight modes.

  18. Applications of the spline filter for areal filtration

    NASA Astrophysics Data System (ADS)

    Tong, Mingsi; Zhang, Hao; Ott, Daniel; Chu, Wei; Song, John

    2015-12-01

    This paper proposes a general use isotropic areal spline filter. This new areal spline filter can achieve isotropy by approximating the transmission characteristic of the Gaussian filter. It can also eliminate the effect of void areas using a weighting factor, and resolve end-effect issues by applying new boundary conditions, which replace the first order finite difference in the traditional spline formulation. These improvements make the spline filter widely applicable to 3D surfaces and extend the applications of the spline filter in areal filtration.

  19. The Effect of Taper Angle and Spline Geometry on the Initial Stability of Tapered, Splined Modular Titanium Stems.

    PubMed

    Pierson, Jeffery L; Small, Scott R; Rodriguez, Jose A; Kang, Michael N; Glassman, Andrew H

    2015-07-01

    Design parameters affecting initial mechanical stability of tapered, splined modular titanium stems (TSMTSs) are not well understood. Furthermore, there is considerable variability in contemporary designs. We asked if spline geometry and stem taper angle could be optimized in TSMTS to improve mechanical stability to resist axial subsidence and increase torsional stability. Initial stability was quantified with stems of varied taper angle and spline geometry implanted in a foam model replicating 2cm diaphyseal engagement. Increased taper angle and a broad spline geometry exhibited significantly greater axial stability (+21%-269%) than other design combinations. Neither taper angle nor spline geometry significantly altered initial torsional stability. PMID:25754255

  20. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
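
    The F-error idea, estimating the signal-dependent error from the difference of two spline fits on different meshes, can be sketched as follows; the signal, noise level, and knot choices are our own, and this is not the paper's exact estimator.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Noiseless test signal with four derivatives, plus its noisy measurement.
x = np.linspace(0.0, 1.0, 400)
signal = np.sin(2 * np.pi * x)
rng = np.random.default_rng(1)
y = signal + 0.05 * rng.standard_normal(x.size)

def lsq_fit(interior_knots):
    """Cubic least-squares spline fit to the noisy data."""
    k = 3
    t = np.r_[(x[0],) * (k + 1), interior_knots, (x[-1],) * (k + 1)]
    return make_lsq_spline(x, y, t, k=k)

coarse = lsq_fit(np.linspace(0, 1, 5)[1:-1])   # 3 interior knots
fine = lsq_fit(np.linspace(0, 1, 9)[1:-1])     # 7 interior knots (nested mesh)

# The difference between the two fits tracks the mesh-dependent F-error of
# the coarser fit (the R-error would need the noise statistics instead).
f_error_estimate = np.abs(coarse(x) - fine(x)).max()
print(f_error_estimate)
```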

  1. Sparse and Efficient Estimation for Partial Spline Models with Increasing Dimension

    PubMed Central

    Zhang, Hao Helen; Shang, Zuofeng

    2014-01-01

    We consider model selection and estimation for partial spline models and propose a new regularization method in the context of smoothing splines. The regularization method has a simple yet elegant form, consisting of roughness penalty on the nonparametric component and shrinkage penalty on the parametric components, which can achieve function smoothing and sparse estimation simultaneously. We establish the convergence rate and oracle properties of the estimator under weak regularity conditions. Remarkably, the estimated parametric components are sparse and efficient, and the nonparametric component can be estimated with the optimal rate. The procedure also has attractive computational properties. Using the representer theory of smoothing splines, we reformulate the objective function as a LASSO-type problem, enabling us to use the LARS algorithm to compute the solution path. We then extend the procedure to situations when the number of predictors increases with the sample size and investigate its asymptotic properties in that context. Finite-sample performance is illustrated by simulations. PMID:25620808

  2. MAP recovery of polynomial splines from compressive samples and its application to vehicular signals

    NASA Astrophysics Data System (ADS)

    Hirabayashi, Akira; Makido, Satoshi; Condat, Laurent

    2013-09-01

    We propose a stable reconstruction method for polynomial splines from compressive samples based on maximum a posteriori (MAP) estimation. Polynomial splines are one of the most powerful tools for modeling signals in real applications. Since such signals are not band-limited, the classical sampling theorem cannot be applied to them. However, splines can be regarded as signals with a finite rate of innovation and can therefore be perfectly reconstructed from noiseless samples acquired at, approximately, the rate of innovation. In the noisy case, the conventional approach exploits Cadzow denoising. Our approach based on MAP estimation reconstructs the signals more stably than both the conventional approach and maximum likelihood estimation. We show the effectiveness of the proposed method by applying it to compressive sampling of vehicular signals.

  3. A trans-dimensional polynomial-spline parameterization for gradient-based geoacoustic inversion.

    PubMed

    Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan

    2014-10-01

    This paper presents a polynomial spline-based parameterization for trans-dimensional geoacoustic inversion. The parameterization is demonstrated for both simulated and measured data and shown to be an effective method of representing sediment geoacoustic profiles dominated by gradients, as typically occur, for example, in muddy seabeds. Specifically, the spline parameterization is compared using the deviance information criterion (DIC) to the standard stack-of-homogeneous layers parameterization for the inversion of bottom-loss data measured at a muddy seabed experiment site on the Malta Plateau. The DIC is an information criterion that is well suited to trans-D Bayesian inversion and is introduced to geoacoustics in this paper. Inversion results for both parameterizations are in good agreement with measurements on a sediment core extracted at the site. However, the spline parameterization more accurately resolves the power-law like structure of the core density profile and provides smaller overall uncertainties in geoacoustic parameters. In addition, the spline parameterization is found to be more parsimonious, and hence preferred, according to the DIC. The trans-dimensional polynomial spline approach is general, and applicable to any inverse problem for gradient-based profiles. [Work supported by ONR.]. PMID:25324060

  4. Spline for blade grids design

    NASA Astrophysics Data System (ADS)

    Korshunov, Andrei; Shershnev, Vladimir; Korshunova, Ksenia

    2015-08-01

    Methods of designing blade grids for power machines, such as equal-thickness shapes built on a middle-line arc, or methods based on a target stress distribution, were developed long ago, are well described, and are still in use. Science and technology have moved far since then, and the laboriousness of experimental research, which involved unique equipment, calls for the development of new robust and flexible design methods that determine the optimal geometry of the flow passage. This investigation provides a simple and universal method of designing blades which, in comparison to currently used methods, requires significantly less input data but still provides accurate results. The described method is purely analytical for both the concave and convex sides of the blade, and therefore describes the curve behavior along the flow path at any point. Compared with the blade grid designs currently used in industry, the geometric parameters of designs constructed with this method show a maximum deviation below 0.4%.

  5. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

    This paper presents a bicubic uniform B-spline wavefront fitting technique to derive an analytical expression for the object wavefront used in computer-generated holograms (CGHs). In many cases, to decrease the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, we have to fit an analytical expression for the object wavefront. Zernike polynomials are well suited to fitting the wavefronts of centrosymmetric optical systems, but not those of axisymmetric optical systems. Although a high-degree polynomial fitting method would achieve higher fitting precision at all fitting nodes, its greatest shortcoming is that any departure from the fitting nodes results in large fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time in coding the computer-generated hologram and solving the basic equation. Based on the basis functions of cubic uniform B-splines and the character mesh of the bicubic uniform B-spline wavefront, the bicubic uniform B-spline wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront are fitted by bicubic uniform B-splines as well as by high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is a more competitive method for fitting the analytical expression of the object wavefront used in an off-axis CGH, owing to its higher fitting precision and C2 continuity.

  6. Adaptive Predistortion Using Cubic Spline Nonlinearity Based Hammerstein Modeling

    NASA Astrophysics Data System (ADS)

    Wu, Xiaofang; Shi, Jianghong

    In this paper, a new Hammerstein predistorter model for power amplifier (PA) linearization is proposed. The key feature of the model is that cubic splines, instead of conventional high-order polynomials, are utilized as the static nonlinearities, because splines can represent hard nonlinearities accurately while circumventing the numerical instability problem. Furthermore, according to the amplifier's AM/AM and AM/PM characteristics, real-valued cubic spline functions are utilized to compensate the nonlinear distortion of the amplifier, and the following finite impulse response (FIR) filters are utilized to eliminate the memory effects of the amplifier. In addition, the identification algorithm of the Hammerstein predistorter is discussed. The predistorter is implemented on the indirect learning architecture, and the separable nonlinear least squares (SNLS) Levenberg-Marquardt algorithm is adopted because the separation method reduces the dimension of the nonlinear search space and thus greatly simplifies the identification procedure. However, the convergence performance of the iterative SNLS algorithm is sensitive to the initial estimate, so an effective normalization strategy is presented to solve this problem. Simulation experiments were carried out on a single-carrier WCDMA signal. Results show that, compared to conventional polynomial predistorters, the proposed Hammerstein predistorter has higher linearization performance when the PA is near saturation and comparable linearization performance when the PA is mildly nonlinear. Furthermore, the proposed predistorter is numerically more stable in all input back-off cases. The results also demonstrate the validity of the convergence scheme.
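
    The basic Hammerstein structure, a static spline nonlinearity followed by an FIR filter, can be sketched as below; the AM/AM points and FIR taps are invented for illustration, and no identification (SNLS) step is shown.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Static AM/AM nonlinearity represented by a cubic spline through a few
# hypothetical compression points (input amplitude -> output amplitude).
amp_in = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
amp_out = np.array([0.0, 0.26, 0.49, 0.66, 0.75])  # soft saturation
static_nl = CubicSpline(amp_in, amp_out)

# Short FIR filter modeling memory effects (illustrative taps).
fir = np.array([0.9, 0.08, 0.02])

def hammerstein(x):
    """Static spline nonlinearity followed by a linear FIR filter."""
    v = static_nl(np.abs(x)) * np.sign(x)  # odd extension of the AM/AM curve
    return np.convolve(v, fir)[: len(x)]

x = 0.8 * np.sin(2 * np.pi * 0.01 * np.arange(200))
y = hammerstein(x)
print(y.shape)  # (200,)
```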

  7. Spline-Screw Payload-Fastening System

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Payload handed off securely between robot and vehicle or structure. Spline-screw payload-fastening system includes mating female and male connector mechanisms. Clockwise (or counter-clockwise) rotation of splined male driver on robotic end effector causes connection between robot and payload to tighten (or loosen) and simultaneously causes connection between payload and structure to loosen (or tighten). Includes mechanisms like those described in "Tool-Changing Mechanism for Robot" (GSC-13435) and "Self-Aligning Mechanical and Electrical Coupling" (GSC-13430). Designed for use in outer space, also useful on Earth in applications needed for secure handling and secure mounting of equipment modules during storage, transport, and/or operation. Particularly useful in machine or robotic applications.

  8. Representing flexible endoscope shapes with hermite splines

    NASA Astrophysics Data System (ADS)

    Chen, Elvis C. S.; Fowler, Sharyle A.; Hookey, Lawrence C.; Ellis, Randy E.

    2010-02-01

    Navigation of a flexible endoscope is a challenging surgical task: the shape of the end effector of the endoscope, interacting with surrounding tissues, determines the surgical path along which the endoscope is pushed. We present a navigational system that visualizes the shape of the flexible endoscope tube to assist gastrointestinal surgeons in performing Natural Orifice Translumenal Endoscopic Surgery (NOTES). The system used an electromagnetic positional tracker, a catheter embedded with multiple electromagnetic sensors, and a graphical user interface for visualization. Hermite splines were used to interpret the position and direction outputs of the endoscope sensors. We conducted NOTES experiments on live swine involving 6 gastrointestinal and 6 general surgeons. Participants who used the device first were 14.2% faster than when not using the device. Participants who used the device second were 33.6% faster than in their first session. The trend suggests that spline-based visualization is a promising adjunct during NOTES procedures.
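
    Interpolating position and direction outputs with Hermite splines can be sketched with SciPy's CubicHermiteSpline, which matches both values and first derivatives at the sensor locations; the sensor readings below are invented.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hypothetical sensor readings along the endoscope: arc-length parameter,
# 3D positions, and direction (tangent) vectors from each EM sensor.
s = np.array([0.0, 5.0, 10.0, 15.0])                     # cm along the tube
pos = np.array([[0.0, 0.0, 0.0], [4.8, 1.0, 0.5],
                [9.0, 3.5, 1.5], [12.0, 7.0, 2.0]])
tan = np.array([[1.0, 0.0, 0.0], [0.95, 0.3, 0.1],
                [0.8, 0.55, 0.2], [0.6, 0.75, 0.15]])

# One cubic Hermite spline per coordinate, matching positions and tangents.
shape = CubicHermiteSpline(s, pos, tan, axis=0)

dense = shape(np.linspace(0, 15, 100))  # smooth tube centerline
print(dense.shape)  # (100, 3)
```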

  9. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are applied to one-dimensional mortality data. Modeling and forecasting the mortality rate is a problem of fundamental importance in insurance calculations, in which the accuracy of models and forecasts is the main concern of the industry. The original idea of P-splines is extended here to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. This extension constructs the best-fitted surface and provides sensible predictions of the underlying mortality rate in the Malaysian case.
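
    A one-dimensional P-spline fit in the sense of Eilers and Marx, a B-spline basis with a second-order difference penalty on the coefficients, can be sketched as follows; the data and penalty weight are illustrative, not Malaysian mortality rates.

```python
import numpy as np
from scipy.interpolate import splev

# Noisy one-dimensional data standing in for a smooth rate curve.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 120)
truth = np.sin(2 * np.pi * x)
y = truth + 0.2 * rng.standard_normal(x.size)

k = 3          # cubic B-splines
n_basis = 20
# Open uniform knot vector giving n_basis B-splines on [0, 1].
t = np.r_[(0.0,) * k, np.linspace(0.0, 1.0, n_basis - k + 1), (1.0,) * k]

# Design matrix: each column is one B-spline evaluated at the data points.
B = np.column_stack([
    splev(x, (t, np.eye(n_basis)[i], k)) for i in range(n_basis)
])

# Second-order difference penalty on adjacent coefficients (Eilers & Marx).
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 1.0
coef = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
fit = B @ coef
```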

  10. Modeling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines

    NASA Astrophysics Data System (ADS)

    Kammann, P.

    2005-12-01

    Our intention is the modeling of seismic wave propagation from displacement measurements by seismographs at the Earth's surface. The elastic behaviour of the Earth is usually described by the Cauchy-Navier equation. A system of fundamental solutions for the Fourier-transformed Cauchy-Navier equation is given by the Hansen vectors L, M, and N. We apply an inverse Fourier transform to obtain an orthonormal function system depending on time and space. By means of this system we construct certain splines, which are then used for interpolating the given data. Compared to polynomial interpolation, splines have the advantage that they minimize some curvature measure and are, therefore, smoother. First, we test this method on a synthetic wave function. Afterwards, we apply it to realistic earthquake data. (P. Kammann, Modelling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines, Diploma Thesis, Geomathematics Group, Department of Mathematics, University of Kaiserslautern, 2005)

  11. B-spline design of digital FIR filter using evolutionary computation techniques

    NASA Astrophysics Data System (ADS)

    Swain, Manorama; Panda, Rutuparna

    2011-10-01

    Digital filters are increasingly becoming a true replacement for analog filter designs. In this paper we examine a design method for FIR filters using global search optimization techniques known as evolutionary computation, via the genetic algorithm and bacterial foraging, in which the filter design is treated as an optimization problem. An effort is made to design maximally flat filters using a generalized B-spline window. The key to our success is the fact that the bandwidth of the filter response can be modified by changing tuning parameters incorporated within the B-spline function. A direct approach has been deployed to design B-spline-window-based FIR digital filters. Four parameters (order, width, length, and tuning parameter) have been optimized by using GA and EBFS. It is observed that the desired response can be obtained with lower-order FIR filters with optimal width and tuning parameters.
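
    A B-spline window itself is easy to construct without any optimizer, since the degree-n cardinal B-spline is the n-fold convolution of a boxcar; the sketch below shapes a truncated-sinc lowpass with such a window (tap count, cutoff, and spline order are illustrative, and no GA/EBFS search is performed).

```python
import numpy as np

def bspline_window(order, numtaps):
    """Sample a degree-`order` cardinal B-spline (repeated boxcar
    convolution), resampled to `numtaps` points, unit peak."""
    w = np.ones(256)
    for _ in range(order):
        w = np.convolve(w, np.ones(256))
    w = w / w.max()
    return np.interp(np.linspace(0, 1, numtaps),
                     np.linspace(0, 1, w.size), w)

numtaps = 41   # odd length -> symmetric taps, linear phase
fc = 0.1       # cutoff in cycles per sample
n = np.arange(numtaps)
ideal = 2 * fc * np.sinc(2 * fc * (n - (numtaps - 1) / 2))  # truncated sinc
h = ideal * bspline_window(3, numtaps)
h /= h.sum()   # normalize to unit DC gain

H = np.abs(np.fft.rfft(h, 1024))
print(H[0])    # unity gain at DC
```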

  12. Incorporating Corpus Technology to Facilitate Learning of English Collocations in a Thai University EFL Writing Course

    ERIC Educational Resources Information Center

    Chatpunnarangsee, Kwanjira

    2013-01-01

    The purpose of this study is to explore ways of incorporating web-based concordancers for the purpose of teaching English collocations. A mixed-methods design utilizing a case study strategy was employed to uncover four specific dimensions of corpus use by twenty-four students in two classroom sections of a writing course at a university in…

  13. B-splines in variational atomic structure theory

    NASA Astrophysics Data System (ADS)

    Froese Fischer, Charlotte

    2007-06-01

    Many of the problems associated with the use of finite differences for the solution of variational Hartree-Fock or Dirac-Hartree-Fock equations are related to the orthogonality requirement and the need for node counting to control the computed solution of a two-point boundary value problem with many solutions. By expanding radial functions in a B-spline basis, the differential equations can be replaced by non-linear systems of equations of eigenvalue type. Hartree-Fock orbitals become solutions of generalized eigenvalue problems where orthogonality requirements can be dealt with through projection operators applied to the matrix that preserve the symmetry of the matrix. When expressed as banded systems of equations, all orbitals may be improved simultaneously using singular value decomposition or the Newton-Raphson method for faster convergence. Computational procedures will be outlined for non-relativistic multiconfiguration Hartree-Fock variational methods and extensions to the calculation of Rydberg series. It will also be shown how tensor products of B-splines can be applied to the calculation of two-electron pair-correlation functions where high-order partial waves improve the short-range electron-electron cusp condition at r1= r2.
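
    The "differential equation to generalized eigenvalue problem" step can be sketched for the simplest possible case, a particle in a box, using a cubic B-spline basis, an overlap matrix, and SciPy's generalized symmetric eigensolver; this is a toy analogue, not a Hartree-Fock calculation, and all sizes are illustrative.

```python
import numpy as np
from scipy.interpolate import splev
from scipy.linalg import eigh

# B-spline basis on [0, 1] for -(1/2) u'' = E u with u(0) = u(1) = 0.
k = 3
n_all = 30
t = np.r_[(0.0,) * k, np.linspace(0.0, 1.0, n_all - k + 1), (1.0,) * k]

xq = np.linspace(0.0, 1.0, 4001)   # quadrature grid (trapezoidal rule)
w = np.full(xq.size, xq[1] - xq[0])
w[0] = w[-1] = 0.5 * (xq[1] - xq[0])

def basis(i, der=0):
    """Evaluate the i-th B-spline (or its derivative) on the grid."""
    c = np.zeros(n_all); c[i] = 1.0
    return splev(xq, (t, c, k), der=der)

# Drop the first and last B-splines to enforce the boundary conditions.
idx = range(1, n_all - 1)
Bv = np.array([basis(i) for i in idx])
Bd = np.array([basis(i, der=1) for i in idx])

S = (Bv * w) @ Bv.T          # overlap matrix
H = 0.5 * (Bd * w) @ Bd.T    # kinetic energy (after integration by parts)

E, _ = eigh(H, S)            # generalized eigenvalue problem H c = E S c
print(E[:3])                 # approx n**2 * pi**2 / 2 for n = 1, 2, 3
```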

  14. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.

  15. On the spline-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a spline-based wavelet basis is constructed. Given an nth-order spline basis, it is proved that the differentiation matrix is accurate to order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.
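The flavor of this superconvergence result can be checked numerically; a minimal sketch, assuming SciPy's `CubicSpline` with periodic boundary conditions (not the wavelet construction of the record), showing that the spline derivative at the nodes converges faster than the global derivative rate:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def node_derivative_error(n):
    """Max error of the cubic-spline derivative at the nodes
    for f(x) = sin(x) on a periodic grid with n intervals."""
    x = np.linspace(0.0, 2.0 * np.pi, n + 1)
    y = np.sin(x)
    y[-1] = y[0]  # exact periodicity required by bc_type='periodic'
    s = CubicSpline(x, y, bc_type='periodic')
    return np.max(np.abs(s(x, 1) - np.cos(x)))

# Halving h should shrink the node error by far more than the
# factor of 8 that plain third-order accuracy would give.
e32, e64 = node_derivative_error(32), node_derivative_error(64)
```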

  16. Development and flight tests of vortex-attenuating splines

    NASA Technical Reports Server (NTRS)

    Hastings, E. C., Jr.; Patterson, J. C., Jr.; Shanks, R. E.; Champine, R. A.; Copeland, W. L.; Young, D. C.

    1975-01-01

    The ground tests and full-scale flight tests conducted during development of the vortex-attenuating spline are described. The flight tests were conducted using a vortex generating aircraft with and without splines; a second aircraft was used to probe the vortices generated in both cases. The results showed that splines significantly reduced the vortex effects, but resulted in some noise and climb performance penalties on the generating aircraft.

  17. Inversion of the Poisson transform using proportionally spaced cubic B-splines

    NASA Astrophysics Data System (ADS)

    Earnshaw, J. C.; Haughey, D.

    1996-12-01

    A simple but effective approach to inversion of the Poisson transform is described. The use of cubic splines to approximate the inverse function automatically provides a degree of regularization which helps to overcome the ill-conditioned nature of the problem. Representative tests of the method are presented.

  18. Adaptive collocation with application in height system transformation

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Zeng, A.; Zhang, J.

    2009-05-01

    In collocation applications, the prior covariance matrices or weight matrices between the signals and the observations should be consistent with their uncertainties; otherwise, the solution of the collocation will be distorted. To balance the covariance matrices of the signals and the observations, a new adaptive collocation estimator is derived in which the corresponding adaptive factor is constructed from the ratio of the variance components of the signals and the observations. A maximum likelihood estimator of the variance components is derived based on the collocation functional model and stochastic model. A simplified Helmert-type estimator of the variance components for the collocation is also introduced and compared to the derived maximum likelihood type estimator. Reasonable and consistent covariance matrices of the signals and the observations are arrived at through the adjustment of the adaptive factor. The new adaptive collocation, with the adaptive factor constructed from the derived variance components, is applied to a transformation between geodetic heights derived by GPS and orthometric heights. It is shown that the adaptive collocation is not only simple in calculation but also effective in balancing the contributions of the observations and the signals in the collocation model.

  19. The Repetition of Collocations in EFL Textbooks: A Corpus Study

    ERIC Educational Resources Information Center

    Wang, Jui-hsin Teresa; Good, Robert L.

    2007-01-01

    The importance of repetition in the acquisition of lexical items has been widely acknowledged in single-word vocabulary research but has been relatively neglected in collocation studies. Since collocations are considered one key to achieving language fluency, and because learners spend a great amount of time interacting with their textbooks, the…

  20. Profiling the Collocation Use in ELT Textbooks and Learner Writing

    ERIC Educational Resources Information Center

    Tsai, Kuei-Ju

    2015-01-01

    The present study investigates the collocational profiles of (1) three series of graded textbooks for English as a foreign language (EFL) commonly used in Taiwan, (2) the written productions of EFL learners, and (3) the written productions of native speakers (NS) of English. These texts were examined against a purpose-built collocation list. Based…

  1. Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks

    ERIC Educational Resources Information Center

    Menon, Sujatha; Mukundan, Jayakaran

    2012-01-01

    This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…

  2. The Effect of Grouping and Presenting Collocations on Retention

    ERIC Educational Resources Information Center

    Akpinar, Kadriye Dilek; Bardakçi, Mehmet

    2015-01-01

    The aim of this study is two-fold. Firstly, it attempts to determine the role of presenting collocations by organizing them based on (i) the keyword, (ii) topic related and (iii) grammatical aspect on retention of collocations. Secondly, it investigates the relationship between participants' general English proficiency and the presentation types…

  3. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    SciTech Connect

    Ruberti, M.; Averbukh, V.; Decleva, P.

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for the Gaussian basis calculations, such as Cooper minima and high-energy tails, are reproduced by the B-spline ADC in very good agreement with experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  4. Direct optimization method for reentry trajectory design

    NASA Astrophysics Data System (ADS)

    Jallade, S.; Huber, P.; Potti, J.; Dutruel-Lecohier, G.

    The software package called `Reentry and Atmospheric Transfer Trajectory' (RATT) was developed under ESA contract for the design of atmospheric trajectories. It includes four programs: 6FD and 3FD (6 and 3 degrees of freedom Flight Dynamics) are devoted to the simulation of the trajectory; SCA (Sensitivity and Covariance Analysis) performs covariance analysis on a given trajectory with respect to different uncertainties and error sources; and TOP (Trajectory OPtimization) provides the optimum guidance law for a three-degree-of-freedom reentry or aeroassisted transfer (AAOT) trajectory. Deorbit and reorbit impulses (if necessary) can be taken into account in the optimization. A wide choice of cost functions is available to the user, such as the integrated heat flux, the sum of the velocity impulses, or a linear combination of both, for trajectory and vehicle design. The crossrange and the downrange can be maximized during the reentry trajectory. Path constraints are available on the load factor, the heat flux and the dynamic pressure. Results on these proposed options are presented. TOPPHY is the part of the TOP software corresponding to the definition and computation of the optimization problem physics. TOPPHY can interface with several optimizers and dynamic solvers: TOPOP and TROPIC using direct collocation methods, and PROMIS using a direct multiple shooting method. TOPOP, developed in the frame of this contract, uses Hermite polynomials for the collocation method and the NPSOL optimizer from the NAG library. Both TROPIC and PROMIS were developed by the DLR (Deutsche Forschungsanstalt fuer Luft und Raumfahrt) and use the SLSQP optimizer. For the dynamic equation resolution, TROPIC uses a collocation method with splines and PROMIS uses a multiple shooting method with finite differences.
The three optimizers, including dynamics, were tested on the reentry trajectory of the ACRV (Assured Crew Return Vehicle), the emergency capsule of the Space Station Freedom. Conclusions are drawn from the comparison of the different optimization methods. The collocation method (TOPOP and TROPIC) is robust and preferable for obtaining an initial guess of the optimum trajectory, but not accurate enough. A refinement of the optimum trajectory is performed with PROMIS, which is more accurate for the dynamic equation resolution. The collocation and multiple shooting methods were found to be very complementary.
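To illustrate the direct collocation idea referenced here (not TOPOP or TROPIC themselves), the sketch below applies Hermite-Simpson collocation to a toy optimal control problem: steer a double integrator from rest at 0 to rest at 1 in unit time while minimizing the control energy. The analytic optimum is u(t) = 6 - 12t with cost 12; SciPy's SLSQP is assumed as the NLP solver, and N, the defect formulas, and variable layout are illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize the integral of u^2 for x'' = u, (x,v): (0,0) -> (1,0), T = 1.
N, T = 10, 1.0
h = T / N

def unpack(z):
    return z[:N+1], z[N+1:2*N+2], z[2*N+2:]   # x, v, u at the N+1 nodes

def defects(z):
    """Hermite-Simpson collocation defects for x' = v, v' = u."""
    x, v, u = unpack(z)
    xm = 0.5*(x[:-1] + x[1:]) + h/8.0*(v[:-1] - v[1:])   # midpoint states
    vm = 0.5*(v[:-1] + v[1:]) + h/8.0*(u[:-1] - u[1:])
    um = 0.5*(u[:-1] + u[1:])                            # linear control interp
    dx = x[1:] - x[:-1] - h/6.0*(v[:-1] + 4.0*vm + v[1:])
    dv = v[1:] - v[:-1] - h/6.0*(u[:-1] + 4.0*um + u[1:])
    return np.concatenate([dx, dv])

def boundary(z):
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

def cost(z):
    _, _, u = unpack(z)
    um = 0.5*(u[:-1] + u[1:])
    return h/6.0 * np.sum(u[:-1]**2 + 4.0*um**2 + u[1:]**2)  # Simpson rule

t = np.linspace(0.0, T, N+1)
z0 = np.concatenate([t, np.ones(N+1), np.zeros(N+1)])  # crude initial guess
res = minimize(cost, z0, method='SLSQP',
               constraints=[{'type': 'eq', 'fun': defects},
                            {'type': 'eq', 'fun': boundary}])
```

Because the dynamics are linear and the optimal control is linear in time, this discretization recovers the continuous optimum to solver tolerance.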

  5. Polynomial Spline Estimation for A Generalized Additive Coefficient Model

    PubMed Central

    Xue, Lan; Liang, Hua

    2010-01-01

    We study a semiparametric generalized additive coefficient model, in which the linear predictors in conventional generalized linear models are generalized to unknown functions depending on certain covariates, and approximate the nonparametric functions by polynomial splines. The asymptotic expansion with optimal rates of convergence for the estimators of the nonparametric part is established. A semiparametric generalized likelihood ratio test is also proposed to check whether a nonparametric coefficient can be simplified to a parametric one. A conditional bootstrap version is suggested to approximate the distribution of the test under the null hypothesis. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed methods. We further apply the proposed model and methods to a data set from a human visceral Leishmaniasis (HVL) study conducted in Brazil from 1994 to 1997. Numerical results show that the proposed generalized additive coefficient model outperforms the traditional generalized linear model and is preferable. PMID:20216928

  6. Trajectory control of an articulated robot with a parallel drive arm based on splines under tension

    NASA Astrophysics Data System (ADS)

    Yi, Seung-Jong

    Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace and on following the specified path, which requires the trajectory planner. In addition, the consideration of joint velocity, acceleration, and jerk trajectories is essential for trajectory planning of industrial robots to obtain smooth operation. A newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce arm inertia and to increase load capacity and stiffness, is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., certain degree polynomial functions, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations.
The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and motors to produce combined arc and straight-line motion. The simulation and experiment show interesting results by demonstrating smooth motion in both acceleration and jerk and significant improvements of positioning accuracy in trajectory planning.
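Tension splines are not available in common scientific libraries, but the basic spline-trajectory idea can be sketched with SciPy's ordinary `CubicSpline` (an assumption, not the planner above): a clamped spline through joint-angle waypoints gives continuous position, velocity, and acceleration with zero joint velocity at the endpoints. The waypoint values and timings are illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Joint-angle waypoints (degrees) at given times; values are illustrative.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
q = np.array([0.0, 30.0, 45.0, 40.0, 60.0])

# Clamped spline: zero joint velocity at the start and end of the motion.
traj = CubicSpline(t, q, bc_type='clamped')

# Sample position, velocity, and acceleration along the trajectory.
tt = np.linspace(0.0, 4.0, 401)
pos, vel, acc = traj(tt), traj(tt, 1), traj(tt, 2)
```

A tension spline would add a parameter pulling the interpolant toward piecewise-linear motion between waypoints, damping the overshoot a plain cubic spline can exhibit.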

  7. A direct collocation meshless approach with upwind scheme for radiative transfer in strongly inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Luo, Kang; Cao, Zhi-Hong; Yi, Hong-Liang; Tan, He-Ping

    2014-03-01

    A direct collocation meshless (DCM) method with upwind scheme is employed for solving the radiative transfer equation (RTE) for strongly inhomogeneous media. The trial function is constructed by a moving least-squares (MLS) approximation. The upwind scheme is implemented by moving the support domain of MLS approximation to the opposite direction of each streamline. To test computational accuracy and efficiency of the upwind direct collocation meshless (named UPCM) method, various problems in 1-D and 2-D geometries are analyzed. For the comparison, we also present cases of both the DCM method for the first-order RTE (employed by Tan et al. [1]) and the DCM for the MSORTE (a new second-order form of radiative transfer equation proposed by Zhao et al. [2]). The results show that the proposed method is more accurate and stable than the DCM method (no upwinding) based on both the RTE and MSORTE. Computationally, it is also faster.

  8. Analysis of the boundary conditions of the spline filter

    NASA Astrophysics Data System (ADS)

    Tong, Mingsi; Zhang, Hao; Ott, Daniel; Zhao, Xuezeng; Song, John

    2015-09-01

    The spline filter is a standard linear profile filter recommended by ISO/TS 16610-22 (2006). The main advantage of the spline filter is that no end-effects occur as a result of the filter. The ISO standard also provides the tension parameter β = 0.62524 to make the transmission characteristic of the spline filter approximately similar to the Gaussian filter. However, when the tension parameter β is not zero, end-effects appear. To resolve this problem, we analyze 14 different combinations of boundary conditions of the spline filter and propose a set of new boundary conditions in this paper. The new boundary conditions can provide satisfactory end portions of the output form without end-effects for the spline filter while still maintaining the value β = 0.62524.

  9. Application of adaptive hierarchical sparse grid collocation to the uncertainty quantification of nuclear reactor simulators

    SciTech Connect

    Yankov, A.; Downar, T.

    2013-07-01

    Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity by using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant is converged to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
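The hierarchical hat-basis refinement described above can be sketched in one dimension (the record's method is multidimensional via the Smolyak algorithm; this 1D version, with illustrative names and tolerances, only shows the surplus-driven refinement idea):

```python
import numpy as np
from collections import deque

def hat(l, i, x):
    # Linear hat function centered at i/2^l with support width 2/2^l.
    return np.maximum(0.0, 1.0 - np.abs(x * 2.0**l - i))

def adaptive_interpolant(f, tol=1e-3, max_level=12):
    """Hierarchical hat-basis interpolant of f on [0, 1]; a node's
    children are added only while its hierarchical surplus exceeds tol."""
    surplus = {}  # (level, index) -> hierarchical surplus

    def interp(x):
        return sum(w * hat(l, i, x) for (l, i), w in surplus.items())

    # Seed: the two boundary nodes (level 0) and the midpoint (level 1).
    queue = deque([(0, 0), (0, 1), (1, 1)])
    while queue:
        l, i = queue.popleft()  # breadth-first: parents before children
        w = f(i / 2.0**l) - interp(i / 2.0**l)
        surplus[(l, i)] = w
        if l >= 1 and abs(w) > tol and l < max_level:
            queue.extend([(l + 1, 2*i - 1), (l + 1, 2*i + 1)])
    return interp, surplus

interp, surplus = adaptive_interpolant(lambda x: np.sin(np.pi * x))
```

Because finer hat functions vanish at all coarser dyadic points, each surplus measures exactly the correction the new node contributes, and refinement stops where the surrogate already matches the model to within the threshold.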

  10. A mixed basis density functional approach for low dimensional systems with B-splines

    NASA Astrophysics Data System (ADS)

    Ren, Chung-Yuan; Hsue, Chen-Shiung; Chang, Yia-Chung

    2015-03-01

    A mixed basis approach based on density functional theory is employed for low dimensional systems. The basis functions are taken to be plane waves for the periodic direction multiplied by B-spline polynomials in the non-periodic direction. B-splines have the following advantages: (1) the associated matrix elements are sparse, (2) B-splines possess a superior treatment of derivatives, (3) B-splines are not associated with atomic positions when the geometry structure is optimized, making the geometry optimization easy to implement. With this mixed basis set we can directly calculate the total energy of the system instead of using the conventional supercell model with a slab sandwiched between vacuum regions. A generalized Lanczos-Krylov iterative method is implemented for the diagonalization of the Hamiltonian matrix. To demonstrate the present approach, we apply it to study the C(001)-(2×1) surface with the norm-conserving pseudopotential, the n-type δ-doped graphene, and graphene nanoribbon with Vanderbilt's ultra-soft pseudopotentials. All the resulting electronic structures were found to be in good agreement with those obtained by the VASP code, but with a reduced number of basis functions.

  11. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been investigated that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for the sampling-based scheme to outperform the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on the cubic-spline interpolation (CSI) is proposed. This proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that compared with the standard JPEG, the proposed algorithm can show better performance at low bit rates and maintain the same performance at high bit rates.
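The downsample-then-upsample step at the heart of such schemes can be sketched with cubic-spline resampling; this is only an illustration of the interpolation stage, assuming SciPy's `ndimage.zoom` and a synthetic smooth image rather than the record's modified JPEG pipeline:

```python
import numpy as np
from scipy.ndimage import zoom

# Synthetic smooth test image with values in [0, 1].
x = np.linspace(0.0, 1.0, 128)
img = 0.5 + 0.5 * np.outer(np.sin(2*np.pi*x), np.cos(2*np.pi*x))

small = zoom(img, 0.5, order=3)   # downsample via cubic-spline interpolation
rec = zoom(small, 2.0, order=3)   # cubic-spline upsample back to full size

mse = np.mean((img - rec) ** 2)
psnr = 10.0 * np.log10(1.0 / mse)  # peak signal-to-noise ratio, peak = 1
```

For smooth content the round trip loses little, which is why spending the bit budget on the quarter-size image can win at low rates; on detailed images the resampling loss dominates, motivating the adaptive switch described above.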

  12. Metal flowing of involute spline cold roll-beating forming

    NASA Astrophysics Data System (ADS)

    Cui, Fengkui; Wang, Xiaoqiang; Zhang, Fengshou; Xu, Hongyu; Quan, Jianhui; Li, Yan

    2013-09-01

    The present research on involute spline cold roll-beating forming is mainly about the principles and motion relations of cold roll-beating, the theory of roller design, and the stress and strain field analysis of cold roll-beating, etc. However, research on the law of metal flow in the forming process of involute spline cold roll-beating is rare. According to the principle of involute spline cold roll-beating, the contact model between the rollers and the spline shaft blank in the process of cold roll-beating forming is established, and a theoretical analysis of metal flow in the cold roll-beating deforming region is carried out. A finite element model of the spline cold roll-beating process is established, the formation mechanism of the involute spline tooth profile in the cold roll-beating forming process is studied, and the node flow tracks of the deformation area are analyzed. Experimental research on the metal flow of cold roll-beating splines is conducted, and the metallographic structure variation, grain characteristics and metal flow lines of the different tooth profile areas are analyzed. The experimental results show that the particle flow directions of the deformable bodies in the cold roll-beating deformation area are determined by the minimum moving resistance. There are five types of metal flow rules of the deforming region in the process of cold roll-beating forming. The characteristics of involute spline cold roll-beating forming are given, and the forming mechanism of involute spline cold roll-beating is revealed. This paper researches the law of metal flow in the forming process of involute spline cold roll-beating, which provides theoretical support for solving the tooth profile forming quality problem.

  13. Locally Refined Splines Representation for Geospatial Big Data

    NASA Astrophysics Data System (ADS)

    Dokken, T.; Skytt, V.; Barrowclough, O.

    2015-08-01

    When viewed from distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-Splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing bathymetry of the smooth sea and ocean floor. A drastic reduction in the size of the data representation, compared to the input point clouds, is demonstrated. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection as new sensor data can efficiently be compared to the compact LR B-spline representation.

  14. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    SciTech Connect

    Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.

    2012-07-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  15. Polynomial interpolation methods for viscous flow calculations

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1977-01-01

    Higher-order collocation procedures which result in block-tridiagonal matrix systems are derived from (1) Taylor series expansions and from (2) polynomial interpolation, and the relationships between the two formulations, called respectively Hermite and spline collocation, are investigated. A Hermite block-tridiagonal system for a nonuniform mesh is derived, and the Hermite approach is extended in order to develop a variable-mesh sixth-order block-tridiagonal procedure. It is shown that all results obtained by the Hermite development can be recovered by appropriate spline polynomial interpolation. The additional boundary conditions required for these higher-order procedures are also given. Comparative solutions using second-order accurate finite-difference, spline, and Hermite formulations are presented for the boundary layer on a flat plate, boundary layers with uniform and variable mass transfer, and the viscous incompressible Navier-Stokes equations describing flow in a driven cavity.

  16. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    NASA Technical Reports Server (NTRS)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  17. Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting

    2016-01-01

    The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…

  18. Developing and Evaluating a Web-Based Collocation Retrieval Tool for EFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Hao-Jan Howard

    2011-01-01

    The development of adequate collocational knowledge is important for foreign language learners; nonetheless, learners often have difficulties in producing proper collocations in the target language. Among the various ways of learning collocations, the DDL (data-driven learning) approach encourages independent learning of collocations and allows…

  19. The Learning Burden of Collocations: The Role of Interlexical and Intralexical Factors

    ERIC Educational Resources Information Center

    Peters, Elke

    2016-01-01

    This study investigates whether congruency (+/- literal translation equivalent), collocate-node relationship (adjective-noun, verb-noun, phrasal-verb-noun collocations), and word length influence the learning burden of EFL learners' learning collocations at the initial stage of form-meaning mapping. Eighteen collocations were selected on the basis…

  20. The Use of Verb Noun Collocations in Writing Stories among Iranian EFL Learners

    ERIC Educational Resources Information Center

    Bazzaz, Fatemeh Ebrahimi; Samad, Arshad Abd

    2011-01-01

    An important aspect of native speakers' communicative competence is collocational competence which involves knowing which words usually come together and which do not. This paper investigates the possible relationship between knowledge of collocations and the use of verb noun collocation in writing stories because collocational knowledge…

  1. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometric polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  2. Spline-locking screw fastening strategy

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1992-01-01

    A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotics or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced space manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.

  3. Spline-Locking Screw Fastening Strategy (SLSFS)

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1991-01-01

    A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotic or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.

  4. The spline probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Sithiravel, Rajiv; Tharmarasa, Ratnasingham; McDonald, Mike; Pelletier, Michel; Kirubarajan, Thiagalingam

    2012-06-01

    The Probability Hypothesis Density (PHD) filter is a multitarget tracker for recursively estimating the number of targets and their state vectors from a set of observations. The PHD filter is capable of working well in scenarios with false alarms and missed detections. Two distinct PHD filter implementations are available in the literature: the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filters. The SMC-PHD filter uses particles to provide target state estimates, which can lead to a high computational load, whereas the GM-PHD filter does not use particles but is restricted to linear Gaussian mixture models. The SMC-PHD filter technique provides only weighted samples at discrete points in the state space instead of a continuous estimate of the probability density function of the system state and thus suffers from the well-known degeneracy problem. This paper proposes a B-spline based Probability Hypothesis Density (S-PHD) filter, which has the capability to model any arbitrary probability density function. The resulting algorithm can handle linear, non-linear, Gaussian, and non-Gaussian models, and the S-PHD filter can also provide continuous estimates of the probability density function of the system state. In addition, by moving the knots dynamically, the S-PHD filter ensures that the splines cover only the region where the probability of the system state is significant, hence the high efficiency of the S-PHD filter is maintained at all times. Also, unlike the SMC-PHD filter, the S-PHD filter is immune to the degeneracy problem due to its continuous nature. The S-PHD filter derivations and simulations are provided in this paper.
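The core representational idea of the S-PHD filter, evaluating a probability density continuously through a B-spline expansion, can be sketched in isolation. The toy example below (not the authors' filter; the knot count and the standard normal target are arbitrary choices) fits a cubic B-spline to a Gaussian density with SciPy:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Toy illustration: approximate a standard normal pdf with a cubic B-spline
# so it can be evaluated continuously. This sketches the representation
# behind the S-PHD filter; it is NOT the filter itself.
x = np.linspace(-4.0, 4.0, 400)
target = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

k = 3                                            # cubic splines
t_inner = np.linspace(-4.0, 4.0, 15)[1:-1]       # 13 interior knots (arbitrary)
t = np.r_[[x[0]] * (k + 1), t_inner, [x[-1]] * (k + 1)]
spline_pdf = make_lsq_spline(x, target, t, k=k)  # least-squares B-spline fit

max_err = float(np.max(np.abs(spline_pdf(x) - target)))
```

The fitted `spline_pdf` object can then be evaluated at any point of the interval, which is the property the abstract emphasizes over particle representations.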

  5. Usability Study of Two Collocated Prototype System Displays

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2007-01-01

    Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the two new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.

  6. Numerical solution of fourth order boundary value problem using sixth degree spline functions

    NASA Astrophysics Data System (ADS)

    Kalyani, P.; Madhusudhan Rao, A. S.; Rao, P. S. Rama Chandra

    2015-12-01

    In this communication, we developed sixth degree spline functions by using Bickley's method for obtaining the numerical solution of linear fourth order differential equations of the form y(4)(x)+f(x)y(x) = r(x) with the given boundary conditions, where f(x) and r(x) are given functions. Numerical illustrations are tabulated to demonstrate the practical usefulness of the method.
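Problems of this class can be cross-checked with a general-purpose collocation BVP solver. Below is a hedged sketch using SciPy's `solve_bvp` (not the sixth-degree spline scheme of the paper), with f(x) = 1 and r(x) = 2 sin x chosen so that the exact solution is y = sin x:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Reference solver for y'''' + f(x) y = r(x), written as a first-order
# system. Here f(x) = 1 and r(x) = 2 sin x, so y = sin x is exact.
def ode(x, Y):
    # Y rows hold (y, y', y'', y''').
    return np.vstack([Y[1], Y[2], Y[3], 2 * np.sin(x) - Y[0]])

def bc(Ya, Yb):
    # y(0) = 0, y'(0) = 1, y(pi) = 0, y'(pi) = -1
    return np.array([Ya[0], Ya[1] - 1, Yb[0], Yb[1] + 1])

x = np.linspace(0, np.pi, 40)
Y0 = np.zeros((4, x.size))        # zero initial guess; problem is linear
sol = solve_bvp(ode, bc, x, Y0)
max_err = float(np.max(np.abs(sol.sol(x)[0] - np.sin(x))))
```

Because the problem is linear, the solver's Newton iteration converges immediately, and the collocation solution matches the exact sine to well below the default tolerance.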

  7. A Simple and Fast Spline Filtering Algorithm for Surface Metrology

    PubMed Central

    Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei

    2015-01-01

    Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement.
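The transform idea can be sketched generically: a penalized-least-squares smoother of the form s = (I + lam QᵀQ)⁻¹ z, with Q a second-difference operator and reflecting ends, is diagonal in the DCT-II basis, so the whole filter reduces to a forward DCT, a pointwise gain, and an inverse DCT. This is a simplified stand-in for the paper's algorithm, not a reproduction of it:

```python
import numpy as np
from scipy.fft import dct, idct

# Spline-type smoothing in one shot via the DCT (simplified stand-in for
# the paper's method): the second-difference matrix with reflecting ends
# is diagonalized by the DCT-II, with eigenvalues 2 - 2*cos(pi*k/n).
def dct_spline_filter(z, lam):
    n = z.size
    lap = 2.0 - 2.0 * np.cos(np.pi * np.arange(n) / n)  # Laplacian eigenvalues
    gain = 1.0 / (1.0 + lam * lap**2)   # curvature (2nd-difference) penalty
    return idct(dct(z, norm='ortho') * gain, norm='ortho')

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)
clean = np.cos(2 * np.pi * x)                        # DCT-friendly test profile
noisy = clean + 0.2 * rng.standard_normal(x.size)
smooth = dct_spline_filter(noisy, lam=100.0)

rms_before = float(np.sqrt(np.mean((noisy - clean)**2)))
rms_after = float(np.sqrt(np.mean((smooth - clean)**2)))
```

Larger `lam` smooths more aggressively; the filter is applied without any iteration or matrix factorization, which is the convenience the abstract highlights.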

  8. A Simple and Fast Spline Filtering Algorithm for Surface Metrology.

    PubMed

    Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei

    2015-01-01

    Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement. PMID:26958443

  9. Subcell resolution in simplex stochastic collocation for spatial discontinuities

    SciTech Connect

    Witteveen, Jeroen A.S.; Iaccarino, Gianluca

    2013-10-15

    Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC–SR method is based on resolving the discontinuity location in the probability space explicitly as a function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers’ equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC–SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.

  10. Subcell resolution in simplex stochastic collocation for spatial discontinuities

    NASA Astrophysics Data System (ADS)

    Witteveen, Jeroen A. S.; Iaccarino, Gianluca

    2013-10-01

    Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC-SR method is based on resolving the discontinuity location in the probability space explicitly as a function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers' equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC-SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.

  11. Comparing measures of model selection for penalized splines in Cox models

    PubMed Central

    Malloy, Elizabeth J.; Spiegelman, Donna; Eisen, Ellen A.

    2009-01-01

    This article presents an application and a simulation study of model fit criteria for selecting the optimal degree of smoothness for penalized splines in Cox models. The criteria considered were the Akaike information criterion, the corrected AIC, two formulations of the Bayesian information criterion, and a generalized cross-validation method. The estimated curves selected by the five methods were compared to each other in a study of rectal cancer mortality in autoworkers. In the simulation study, we estimated the fit of the penalized spline models in six exposure-response scenarios, using the five model fit criteria. The methods were compared based on a mean squared-error score and the power and size of hypothesis tests for any effect and for detecting nonlinearity. All comparisons were made across a range in the total sample size and number of cases. PMID:20161167
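For a flavor of how such criteria drive smoothness selection, here is a generic penalized linear-spline example (not the paper's Cox-model setting): AIC is taken as n log(RSS/n) + 2 edf, with the effective degrees of freedom given by the trace of the hat matrix. The knot grid, penalty, and test function are illustrative assumptions:

```python
import numpy as np

# Select the ridge penalty for a penalized linear-spline fit by AIC.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 120)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(x.size)

knots = np.linspace(0.1, 0.9, 9)
# Truncated-power basis: intercept, slope, and one hinge per knot.
X = np.column_stack([np.ones_like(x), x] + [np.maximum(x - t, 0.0) for t in knots])
P = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize only the hinge terms

def aic(lam):
    H = X @ np.linalg.solve(X.T @ X + lam * P, X.T)  # hat matrix
    resid = y - H @ y
    edf = np.trace(H)                                # effective dof
    n = y.size
    return n * np.log(np.sum(resid**2) / n) + 2 * edf

lams = 10.0 ** np.arange(-6, 4)
best_lam = float(lams[int(np.argmin([aic(l) for l in lams]))])
```

Smaller penalties inflate edf and chase noise; larger penalties shrink the fit toward a straight line. AIC trades the two off, which is the same mechanism the paper's comparison probes in the Cox setting.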

  12. Classifier calibration using splined empirical probabilities in clinical risk prediction.

    PubMed

    Gaudoin, René; Montana, Giovanni; Jones, Simon; Aylin, Paul; Bottle, Alex

    2015-06-01

    The aims of supervised machine learning (ML) applications fall into three broad categories: classification, ranking, and calibration/probability estimation. Many ML methods and evaluation techniques relate to the first two. Nevertheless, there are many applications where having an accurate probability estimate is of great importance. Deriving accurate probabilities from the output of an ML method is therefore an active area of research, resulting in several methods to turn a ranking into class probability estimates. In this manuscript we present a method, splined empirical probabilities, based on the receiver operating characteristic (ROC) to complement existing algorithms such as isotonic regression. Unlike most other methods it works with a cumulative quantity, the ROC curve, and as such can be tagged onto an ROC analysis with minor effort. On a diverse set of measures of the quality of probability estimates (Hosmer-Lemeshow, Kullback-Leibler divergence, differences in the cumulative distribution function) using simulated and real health care data, our approach compares favourably with the standard calibration method, the pool adjacent violators algorithm used to perform isotonic regression. PMID:24557734
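The isotonic-regression baseline mentioned above, the pool adjacent violators (PAV) algorithm, is compact enough to sketch directly. This is a minimal NumPy version for illustration, not the authors' code:

```python
import numpy as np

# Pool adjacent violators: the nondecreasing sequence closest to y in
# least squares, via a stack of merged blocks.
def pav(y):
    y = np.asarray(y, dtype=float)
    vals, wts, blocks = [], [], []
    for i, v in enumerate(y):
        vals.append(v)
        wts.append(1.0)
        blocks.append([i])
        # Merge while the monotonicity constraint is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2, b2 = vals.pop(), wts.pop(), blocks.pop()
            v1, w1, b1 = vals.pop(), wts.pop(), blocks.pop()
            wm = w1 + w2
            vals.append((w1 * v1 + w2 * v2) / wm)  # weighted block mean
            wts.append(wm)
            blocks.append(b1 + b2)
    fitted = np.empty_like(y)
    for v, b in zip(vals, blocks):
        fitted[b] = v
    return fitted

calibrated = pav([0.1, 0.3, 0.2, 0.6, 0.4, 0.9])
```

Violating pairs are pooled into blocks carrying their weighted mean, so the output is monotone while staying as close as possible to the raw empirical probabilities.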

  13. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts.

    PubMed

    Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio J D; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2013-10-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. PMID:24108269

  14. Techniques to improve the accuracy of presampling MTF measurement in digital X-ray imaging based on constrained spline regression.

    PubMed

    Zhou, Zhongxing; Zhu, Qingzhen; Zhao, Huijuan; Zhang, Lixin; Ma, Wenjuan; Gao, Feng

    2014-04-01

    To develop an effective curve-fitting algorithm for measuring the modulation transfer function (MTF) of digital radiographic imaging systems, this study proposed a C-spline regression technique for edge spread function (ESF) estimation, based upon the monotonicity and convex/concave shape restrictions of the ESF, and compared it with representative prior methods. Two types of oversampling techniques and four subsequent curve-fitting algorithms, including the C-spline regression technique, were considered for ESF estimation. A simulated edge image with a known MTF was used for accuracy determination of the algorithms. Experimental edge images from two digital radiography systems were used for statistical evaluation of each curve-fitting algorithm's MTF measurement uncertainties. The simulation results show that the C-spline regression algorithm achieved the smallest MTF measurement error (an average error of 0.12% ± 0.11% and 0.18% ± 0.17% corresponding to the two types of oversampling techniques, respectively, up to the cutoff frequency) among all curve-fitting algorithms. In the case of experimental edge images, the C-spline regression algorithm achieved the best MTF measurement uncertainty among the four curve-fitting algorithms for both the Pixarray-100 digital specimen radiography system and the Hologic full-field digital mammography system. Comparisons among MTF estimates using the four curve-fitting algorithms revealed that the proposed C-spline regression technique outperformed the other algorithms in MTF measurement accuracy and uncertainty. PMID:24658257
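For context, the pipeline that the fitted ESF feeds into is standard: differentiate the ESF to obtain the line spread function (LSF), Fourier transform, and normalize at zero frequency. The sketch below uses an analytic error-function edge (sigma = 0.3, units assumed) in place of measured, fitted edge data; it does not reproduce the C-spline regression step itself:

```python
import numpy as np
from scipy.special import erf

# ESF -> LSF -> MTF, with a synthetic error-function edge standing in for
# fitted edge data (sigma and sample spacing are assumed values).
dx = 0.05                                   # sample spacing (e.g. mm)
x = np.arange(-5.0, 5.0, dx)
sigma = 0.3
esf = 0.5 * (1.0 + erf(x / (np.sqrt(2.0) * sigma)))

lsf = np.gradient(esf, dx)                  # line spread function
mtf = np.abs(np.fft.rfft(lsf)) * dx         # approximate continuous transform
mtf /= mtf[0]                               # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(x.size, d=dx)       # spatial frequency (cycles/mm)
```

For a Gaussian LSF the analytic MTF is exp(-2 pi^2 sigma^2 f^2), so the discrete result can be checked directly against a closed form.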

  15. The Benard problem: A comparison of finite difference and spectral collocation eigen value solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than the finite difference error for a grid resolution of N = 15 (the number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher order finite difference schemes.
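The accuracy gap reported above is easy to reproduce in miniature. The sketch below (a standard Chebyshev differentiation matrix, not the paper's Benard eigenproblem) differentiates u(x) = exp(x) on [-1, 1] spectrally and with second-order differences on the same number of points:

```python
import numpy as np

# Chebyshev differentiation matrix on N+1 Gauss-Lobatto points
# (the classical construction, as in Trefethen's "Spectral Methods in MATLAB").
def cheb(N):
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # negative row sums on diagonal
    return D, x

D, x = cheb(15)
spec_err = float(np.max(np.abs(D @ np.exp(x) - np.exp(x))))

# Second-order central differences on the same number of uniform points.
xu = np.linspace(-1.0, 1.0, 16)
fd = np.gradient(np.exp(xu), xu)
fd_err = float(np.max(np.abs(fd - np.exp(xu))))
```

With only 16 points the spectral derivative is accurate to roughly machine-level, while the finite-difference error is dominated by the grid spacing, illustrating the orders-of-magnitude gap the abstract reports.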

  16. Construction of spline functions in spreadsheets to smooth experimental data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A previous manuscript detailed how spreadsheet software can be programmed to smooth experimental data via cubic splines. This addendum corrects a few errors in the previous manuscript and provides additional necessary programming steps. ...

  17. Detail view of redwood spline joinery of woodframe section against ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail view of redwood spline joinery of wood-frame section against adobe addition (measuring tape denotes plumb line from center of top board) - First Theatre in California, Southwest corner of Pacific & Scott Streets, Monterey, Monterey County, CA

  18. Analysis of myocardial motion using generalized spline models and tagged magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Chen, Fang; Rose, Stephen E.; Wilson, Stephen J.; Veidt, Martin; Bennett, Cameron J.; Doddrell, David M.

    2000-06-01

    Heart wall motion abnormalities are very sensitive indicators of common heart diseases, such as myocardial infarction and ischemia. Regional strain analysis is especially important in diagnosing local abnormalities and mechanical changes in the myocardium. In this work, we present a complete method for the analysis of cardiac motion and the evaluation of regional strain in the left ventricular wall. The method is based on generalized spline models and tagged magnetic resonance images (MRI) of the left ventricle. It combines dynamic tracking of tag deformation, simulation of cardiac movement, and accurate computation of the regional strain distribution. More specifically, the analysis of cardiac motion is performed in three stages. Firstly, material points within the myocardium are tracked over time using a semi-automated snake-based tag tracking algorithm developed for this purpose. This procedure is repeated in three orthogonal axes so as to generate a set of one-dimensional sample measurements of the displacement field. The 3D-displacement field is then reconstructed from this sample set by using a generalized vector spline model. The spline reconstruction of the displacement field is explicitly expressed as a linear combination of a spline kernel function associated with each sample point and a polynomial term. Finally, the strain tensor (linear or nonlinear) with three direct components and three shear components is calculated by applying a differential operator directly to the displacement function. The proposed method is computationally effective and easy to perform on tagged MR images. The preliminary study has shown potential advantages of using this method for the analysis of myocardial motion and the quantification of regional strain.
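The "spline kernel plus polynomial" form of the reconstruction can be sketched with a generic thin-plate-spline interpolator (SciPy's RBFInterpolator with a degree-1 polynomial term). This illustrates the representation only, not the authors' cardiac pipeline; the sampled linear displacement field is a made-up test case:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered displacement samples -> smooth displacement field, expressed as
# a thin-plate-spline kernel expansion plus a linear polynomial term.
rng = np.random.default_rng(7)
pts = rng.uniform(-1.0, 1.0, (80, 2))                         # sample locations
disp = np.column_stack([0.1 * pts[:, 0], -0.05 * pts[:, 1]])  # synthetic field

field = RBFInterpolator(pts, disp, kernel='thin_plate_spline', degree=1)
grid = np.array([[0.5, 0.5], [-0.2, 0.3]])
recovered = field(grid)        # displacement evaluated at new points
```

Because the synthetic field is linear, it lies entirely in the polynomial part of the model and is reproduced exactly; strain components could then be obtained by differentiating the fitted field, as the abstract describes.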

  19. High Accuracy Spline Explicit Group (SEG) Approximation for Two Dimensional Elliptic Boundary Value Problems

    PubMed Central

    Goh, Joan; Hj. M. Ali, Norhashidah

    2015-01-01

    Over the last few decades, cubic splines have been widely used to approximate differential equations due to their ability to produce highly accurate solutions. In this paper, the numerical solution of a two-dimensional elliptic partial differential equation is treated by a specific cubic spline approximation in the x-direction and finite difference in the y-direction. A four point explicit group (EG) iterative scheme with an acceleration tool is then applied to the obtained system. The formulation and implementation of the method for solving physical problems are presented in detail. The computational complexity is also discussed, and comparative results are tabulated to illustrate the efficiency of the proposed method. PMID:26182211

  20. Positivity preserving using GC1 rational quartic spline

    NASA Astrophysics Data System (ADS)

    Abdul Karim, Samsul Ariffin; Pang, Kong Voon; Hashim, Ishak

    2013-04-01

    In this paper, we study the shape preserving interpolation for positive data using a rational quartic spline which has a quartic numerator and linear denominator. The rational quartic splines have GC1 continuity at join knots. Simple data dependent constraints are derived on the shape parameters in the description of the rational interpolant. Numerical comparison between the proposed scheme and the existing scheme is discussed. The results indicate that the proposed scheme works well for all tested data sets.

  1. Continuous Groundwater Monitoring Collocated at USGS Streamgages

    NASA Astrophysics Data System (ADS)

    Constantz, J. E.; Eddy-Miller, C.; Caldwell, R.; Wheeler, J.; Barlow, J.

    2012-12-01

    The USGS Office of Groundwater funded a 2-year pilot study collocating groundwater wells for monitoring water level and temperature at several existing continuous streamgages in Montana and Wyoming, while the U.S. Army Corps of Engineers funded enhancements to streamgages in Mississippi. To increase spatial relevance within a given watershed, study sites were selected where near-stream groundwater was in connection with an appreciable aquifer, and where the logistics and cost of well installations were considered representative. After each well was installed and surveyed, groundwater level and temperature were either radio-transmitted or hardwired to the existing data acquisition system located in the streamgaging shelter. Since USGS field personnel regularly visit streamgages during routine streamflow measurements and streamgage maintenance, the close proximity of the observation wells meant minimal extra time was needed to verify the electronically transmitted measurements. After the field protocol was tuned, stream and nearby groundwater information were concurrently acquired at streamgages and transmitted to satellite from seven pilot-study sites extending over nearly 2,000 miles (3,200 km) of the central US from October 2009 until October 2011, for evaluating the scientific and engineering add-on value of the enhanced streamgage design. Examination of the four-parameter transmission from the seven pilot-study groundwater gaging stations reveals an internally consistent, dynamic data suite of continuous groundwater elevation and temperature in tandem with ongoing stream stage and temperature data. Qualitatively, the graphical information provides an appreciation of seasonal trends in stream exchanges with shallow groundwater, as well as of thermal issues of concern for topics ranging from ice hazards to the suitability of fish refugia; quantitatively, this information provides a means for estimating flux exchanges through the streambed via heat-based inverse-type groundwater modeling. In June, USGS Fact Sheet 2012-3054 was released online, summarizing the results of the pilot project.

  2. Recent advances in (soil moisture) triple collocation analysis

    NASA Astrophysics Data System (ADS)

    Gruber, A.; Su, C.-H.; Zwieback, S.; Crow, W.; Dorigo, W.; Wagner, W.

    2016-03-01

    To date, triple collocation (TC) analysis is one of the most important methods for the global-scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method. Different notations that are used to formulate the TC problem are shown to be mathematically identical. While many studies have investigated issues related to possible violations of the underlying assumptions, only a few TC modifications have been proposed to mitigate the impact of these violations. Moreover, assumptions that are often understood as limitations unique to TC analysis are shown to be common to other conventional performance metrics as well. Noteworthy advances in TC analysis have been made in the way error estimates are presented, by moving from the investigation of absolute error variance estimates to the investigation of signal-to-noise ratio (SNR) metrics. Here we review existing error presentations and propose the combined investigation of the SNR (expressed in logarithmic units), the unscaled error variances, and the soil moisture sensitivities of the data sets as an optimal strategy for the evaluation of remotely sensed soil moisture data sets.
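The covariance formulation of TC can be made concrete with a synthetic sketch: three products measuring one truth with independent errors and different gains, whose error variances are recovered from pairwise covariances. This is the textbook TC estimator, not any specific paper's code:

```python
import numpy as np

# Three noisy, independently scaled measurements of one truth; recover each
# product's error variance from the 3x3 covariance matrix.
rng = np.random.default_rng(42)
n = 200_000
truth = rng.standard_normal(n)
x = truth + rng.normal(0.0, 0.3, n)          # reference product
y = 1.5 * truth + rng.normal(0.0, 0.6, n)    # different gain
z = 0.8 * truth + rng.normal(0.0, 0.2, n)

C = np.cov(np.vstack([x, y, z]))
err_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]   # true value 0.3**2 = 0.09
err_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]   # true value 0.6**2 = 0.36
err_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]   # true value 0.2**2 = 0.04
```

The cross-covariance ratio cancels the unknown gains, which is why TC needs no product to be designated as "truth"; the signal variance and error variances separate automatically.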

  3. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using the L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines, and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.

  4. Collocation and Pattern Recognition Effects on System Failure Remediation

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Press, Hayes N.

    2007-01-01

    Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning the system displays to find a problem and correct it using the provided checklist. Results indicate that large parameter movement aided detection, and that pattern recognition was then needed for diagnosis; the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization, owing to the pattern recognition that develops with training and use.

  5. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    SciTech Connect

    Gu, Renliang E-mail: ald@iastate.edu; Dogandžić, Aleksandar E-mail: ald@iastate.edu

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  6. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2015-03-01

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
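An order-one B-spline basis is the family of piecewise-linear "hat" functions, so expanding a function in it amounts to linear interpolation on the knots. Below is a generic sketch of such an expansion; the knot grid and target function are arbitrary stand-ins, not the paper's mass-attenuation spectrum:

```python
import numpy as np

# i-th piecewise-linear B-spline (hat function) on the given knot grid.
def hat(x, knots, i):
    left, mid, right = knots[i - 1], knots[i], knots[i + 1]
    up = (x - left) / (mid - left)
    down = (right - x) / (right - mid)
    return np.clip(np.minimum(up, down), 0.0, None)

knots = np.linspace(0.0, 1.0, 11)
h = knots[1] - knots[0]
padded = np.r_[knots[0] - h, knots, knots[-1] + h]   # pad so boundary hats exist

x = np.linspace(0.0, 1.0, 501)
f = lambda s: 1.0 + 0.5 * s                          # piecewise-linear target
coefs = f(knots)                                     # coefficients = nodal values
approx = sum(coefs[j] * hat(x, padded, j + 1) for j in range(len(knots)))
```

The hats form a partition of unity, and any function that is linear between the knots is reproduced exactly; that locality and nonnegativity is what makes the order-one basis convenient for a nonnegativity-constrained spectrum.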

  7. B-spline expansion of scattering equations for ionization of atomic hydrogen by antiproton impact

    NASA Astrophysics Data System (ADS)

    Azuma, J.; Toshima, N.; Hino, K.; Igarashi, A.

    2001-12-01

    We study ionization processes of atomic hydrogen by antiproton impact in the energy range of 0.1-500 keV by the close-coupling method based on the B-spline expansion. Superposition of piecewise B-spline functions enables us to express the continuum wave functions more flexibly than the traditional pseudostate representation, in which overall functions such as the Sturmian are used for the expansion. The present expansion also remedies the defect of the traditional one-center expansion, namely that the ionization cross section is underestimated at low energies owing to the finite range of the pseudostates. Our ionization cross sections agree excellently with recent two-center calculations at all energies. The electron probability densities are also presented in both the coordinate and the momentum spaces.

  8. B-spline goal-oriented error estimators for geometrically nonlinear rods

    NASA Astrophysics Data System (ADS)

    Dedè, L.; Santos, H. A. F. A.

    2012-01-01

    We consider goal-oriented a posteriori error estimators for the evaluation of the errors on quantities of interest associated with the solution of geometrically nonlinear curved elastic rods. For the numerical solution of these nonlinear one-dimensional problems, we adopt a B-spline based Galerkin method, a particular case of the more general isogeometric analysis. We propose error estimators using higher order "enhanced" solutions, which are based on the concept of enrichment of the original B-spline basis by means of the "pure" k-refinement procedure typical of isogeometric analysis. We provide several numerical examples for linear and nonlinear output functionals, corresponding to the rotation, displacements and strain energy of the rod, and we compare the effectiveness of the proposed error estimators.

  9. Beyond triple collocation: Applications to soil moisture monitoring

    NASA Astrophysics Data System (ADS)

    Su, Chun-Hsu; Ryu, Dongryeol; Crow, Wade T.; Western, Andrew W.

    2014-06-01

    Triple collocation (TC) is routinely used to resolve approximated linear relationships between different measurements (or representations) of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, validation, bias correction, and error characterization to allow comparisons of diverse data records from various direct and indirect measurement techniques, including in situ, remote sensing, and model-based approaches. However, successful applications of TC require sufficiently large numbers of coincident data points from three independent time series and, within the analysis period, homogeneity of their linear relationships and error structures. These conditions are difficult to realize in practice due to infrequent spatiotemporal sampling of satellite and ground-based sensors. TC can, however, be generalized within the framework of instrumental variable (IV) regression theory to address some of the conceptual constraints of TC. We review the theory of IV and consider one possible strategy to circumvent the three-data constraint by use of lagged variables (LV) as instruments. This particular implementation of IV is suitable for circumstances where multiple data records are limited and the geophysical variable of interest is sampled at time intervals shorter than its temporal correlation length. As a demonstration of utility, the LV method is applied to microwave satellite soil moisture data sets to recover their errors over Australia and to estimate temporal properties of their relationships with in situ and model data. These results are compared against standard two-data linear estimators and the TC estimator as benchmarks.
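One possible LV construction can be sketched on synthetic data: with only two products of an autocorrelated truth, the one-step-lagged copy of the first product serves as the instrument, and the TC-style covariance formula recovers that product's error variance. This is a toy illustration under stated assumptions (AR(1) truth, white measurement errors), not the paper's implementation:

```python
import numpy as np

# Two collocated products of an AR(1) truth; the lagged copy of x is the
# third "instrument" series in a TC-style estimator.
rng = np.random.default_rng(5)
n = 300_000
rho = 0.9
truth = np.empty(n)
truth[0] = rng.standard_normal()
for t in range(1, n):                               # AR(1), unit variance
    truth[t] = rho * truth[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

x = truth + rng.normal(0.0, 0.3, n)                 # white errors -> lag works
y = truth + rng.normal(0.0, 0.5, n)

x0, y0, xlag = x[1:], y[1:], x[:-1]                 # instrument = lagged x
Cxy = np.cov(x0, y0)[0, 1]
Cxz = np.cov(x0, xlag)[0, 1]
Cyz = np.cov(y0, xlag)[0, 1]
err_x = float(np.var(x0) - Cxy * Cxz / Cyz)         # true value 0.3**2 = 0.09
```

The lagged error is independent of the current one, so the autocorrelation of the truth cancels in the covariance ratio; this is exactly why the method requires sampling faster than the variable's correlation length.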

  10. Noise correction on LANDSAT images using a spline-like algorithm

    NASA Technical Reports Server (NTRS)

    Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.

    1985-01-01

    Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, that of using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost), the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, is fast, uses only four points in each interval, and eliminates the need to solve a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by the data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
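    The four-point, no-linear-system idea can be sketched as follows (an illustrative reconstruction, not the authors' exact weighted-spline algorithm): a Catmull-Rom-style cubic through the two valid rows on each side of a lost scan line, treating the four surrounding rows as equally spaced, which is exact for linearly varying scenes.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation through four neighbouring samples; C1-continuous
    and local, so no linear equation system has to be solved."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

def fill_missing_rows(img, bad_rows):
    """Replace striped scan lines using the two valid rows above and below."""
    out = img.astype(float).copy()
    for r in bad_rows:
        out[r] = catmull_rom(out[r - 2], out[r - 1], out[r + 1], out[r + 2], 0.5)
    return out

# demo: a ramp image with scan line 4 zeroed out is restored exactly
img = np.arange(10.0)[:, None] * np.ones((1, 8))
img[4] = 0.0
restored = fill_missing_rows(img, [4])
```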

  11. Advantage of collocating research facilities The administrator's point of view

    NASA Astrophysics Data System (ADS)

    Spilker, H.-M.; Blomeyer, C.

    1995-02-01

    Research facilities are collocated in order to create a maximum of synergy. This also requires close cooperation between the administrations concerned, leading to advantages, in particular with regard to infrastructure and cost effectiveness. Faced with the specificities of the research facilities involved, administrators feel challenged to find appropriate solutions. The successive establishment of research institutes on the Polygone Scientifique in Grenoble is described. The forms and content of administrative collaboration between the Institut Max von Laue-Paul Langevin and the European Synchrotron Radiation Facility, where collocation has led to intensive cooperation, are analysed.

  12. Polyharmonic smoothing splines and the multidimensional Wiener filtering of fractal-like signals.

    PubMed

    Tirosh, Shai; Van De Ville, Dimitri; Unser, Michael

    2006-09-01

    Motivated by the fractal-like behavior of natural images, we develop a smoothing technique that uses a regularization functional which is a fractional iterate of the Laplacian. This type of functional was initially introduced by Duchon for the approximation of nonuniformly sampled, multidimensional data. He proved that the general solution is a smoothing spline that is represented by a linear combination of radial basis functions (RBFs). Unfortunately, this is tedious to implement for images because of the poor conditioning of RBFs and their lack of decay. Here, we present a much more efficient method for the special case of a uniform grid. The key idea is to express Duchon's solution in a fractional polyharmonic B-spline basis that spans the same space as the RBFs. This allows us to derive an algorithm where the smoothing is performed by filtering in the Fourier domain. Next, we prove that the above smoothing spline can be optimally tuned to provide the MMSE estimation of a fractional Brownian field corrupted by white noise. This is a strong result that not only yields the best linear filter (Wiener solution), but also the optimal interpolation space, which is not bandlimited. It also suggests a way of using the noisy data to identify the optimal parameters (order of the spline and smoothing strength), which yields a fully automatic smoothing procedure. We evaluate the performance of our algorithm by comparing it against an oracle Wiener filter, which requires the knowledge of the true noiseless power spectrum of the signal. We find that our approach performs almost as well as the oracle solution over a wide range of conditions. PMID:16948307
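    The Fourier-domain filtering step can be sketched as follows (a simplified illustration only: it assumes a periodic uniform grid and an ad hoc filter of the form 1/(1 + λ|ω|^(2γ)); the paper's polyharmonic B-spline machinery and automatic parameter tuning are not reproduced here).

```python
import numpy as np

def fractional_laplacian_smooth(img, lam=1.0, gamma=1.0):
    """Smoothing with a fractional-Laplacian regulariser, applied as the
    Fourier-domain filter 1 / (1 + lam * |w|^(2*gamma)).  gamma = 1 gives
    ordinary Laplacian (membrane) smoothing; fractional gamma matches
    fractal-like 1/f-type image spectra."""
    ny, nx = img.shape
    wy = 2 * np.pi * np.fft.fftfreq(ny)
    wx = 2 * np.pi * np.fft.fftfreq(nx)
    w2 = wy[:, None] ** 2 + wx[None, :] ** 2       # |w|^2 on the grid
    H = 1.0 / (1.0 + lam * w2 ** gamma)            # DC gain is exactly 1
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# demo: smoothing reduces the variance of additive white noise
rng = np.random.default_rng(0)
noisy = np.ones((64, 64)) + rng.standard_normal((64, 64))
smooth = fractional_laplacian_smooth(noisy, lam=5.0, gamma=1.0)
```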

  13. Algebraic grid generation using tensor product B-splines. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Saunders, B. V.

    1985-01-01

    Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This thesis develops an algorithm which produces such a grid when given a description of the boundary. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
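    The transfinite bilinear interpolant used for the initial interior coefficients has a simple closed form; a minimal sketch (with an illustrative quarter-annulus boundary map, not the thesis code) is:

```python
import numpy as np

def transfinite_bilinear(b, u, v):
    """Transfinite bilinear interpolant of a boundary map b(u, v), defined
    on the four edges of the unit square.  It reproduces the boundary
    exactly and blends it linearly into the interior."""
    return ((1 - u) * b(0.0, v) + u * b(1.0, v)
            + (1 - v) * b(u, 0.0) + v * b(u, 1.0)
            - ((1 - u) * (1 - v) * b(0.0, 0.0) + u * (1 - v) * b(1.0, 0.0)
               + (1 - u) * v * b(0.0, 1.0) + u * v * b(1.0, 1.0)))

def boundary(u, v):
    """Illustrative boundary map: square edges onto a quarter annulus."""
    r, th = 1.0 + v, 0.5 * np.pi * u
    return np.array([r * np.cos(th), r * np.sin(th)])

interior_point = transfinite_bilinear(boundary, 0.5, 0.5)
```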

  14. A mixed basis density functional approach for one-dimensional systems with B-splines

    NASA Astrophysics Data System (ADS)

    Ren, Chung-Yuan; Chang, Yia-Chung; Hsue, Chen-Shiung

    2016-05-01

    A mixed basis approach based on density functional theory is extended to one-dimensional (1D) systems. The basis functions here are taken to be the localized B-splines for the two finite non-periodic dimensions and the plane waves for the third periodic direction. This approach significantly reduces the number of basis functions and is therefore computationally efficient for the diagonalization of the Kohn-Sham Hamiltonian. For 1D systems, B-spline polynomials are particularly useful and efficient in the two-dimensional spatial integrations involved in the calculations because of their absolute localization. Moreover, B-splines are not associated with atomic positions when the geometry is optimized, making geometry optimization easy to implement. With such a basis set we can directly calculate the total energy of the isolated system instead of using the conventional supercell model with artificial vacuum regions among the replicas along the two non-periodic directions. The spurious Coulomb interaction between the charged defect and its repeated images in the supercell approach for charged systems can also be avoided. A rigorous formalism for the long-range Coulomb potential of both neutral and charged 1D systems under the mixed basis scheme is derived. To test the present method, we apply it to study the infinite carbon-dimer chain, graphene nanoribbon, carbon nanotube, and positively-charged carbon-dimer chain. The resulting electronic structures are presented and discussed in detail.
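    The "absolute localization" exploited here is a defining property of B-splines. A generic textbook sketch (not the paper's basis code) via the Cox-de Boor recursion shows both the compact support and the partition of unity on a uniform knot grid:

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value of the i-th B-spline of order k
    (degree k-1) on knot vector t, evaluated at x."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k - 1] > t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# cubic (order-4) B-splines on uniform knots 0..11: only four are nonzero
# at any interior point, and they sum to one (partition of unity)
knots = np.arange(12.0)
x = 5.3
total = sum(bspline_basis(i, 4, knots, x) for i in range(len(knots) - 4))
```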

  15. Converting an unstructured quadrilateral mesh to a standard T-spline surface

    NASA Astrophysics Data System (ADS)

    Wang, Wenyan; Zhang, Yongjie; Scott, Michael A.; Hughes, Thomas J. R.

    2011-10-01

    This paper presents a novel method for converting any unstructured quadrilateral mesh to a standard T-spline surface, which is C2-continuous except for the local region around each extraordinary node. There are two stages in the algorithm: the topology stage and the geometry stage. In the topology stage, we take the input quadrilateral mesh as the initial T-mesh, design templates for each quadrilateral element type, and then standardize the T-mesh by inserting nodes. One of two sufficient conditions is derived to guarantee that the generated T-mesh is gap-free around extraordinary nodes. To obtain a standard T-mesh, a second sufficient condition is provided to decide what T-mesh configuration yields a standard T-spline. These two sufficient conditions serve as a theoretical basis for our template development and T-mesh standardization. In the geometry stage, an efficient surface fitting technique is developed to improve the geometric accuracy. In addition, the surface continuity around extraordinary nodes can be improved by adjusting surrounding control nodes. The algorithm can also preserve sharp features in the input mesh, which are common in CAD (Computer Aided Design) models. Finally, a Bézier extraction technique is used to facilitate T-spline based isogeometric analysis. Several examples are tested to show the robustness of the algorithm.

  16. L2 Learner Production and Processing of Collocation: A Multi-Study Perspective

    ERIC Educational Resources Information Center

    Siyanova, Anna; Schmitt, Norbert

    2008-01-01

    This article presents a series of studies focusing on L2 production and processing of adjective-noun collocations (e.g., "social services"). In Study 1, 810 adjective-noun collocations were extracted from 31 essays written by Russian learners of English. About half of these collocations appeared frequently in the British National Corpus (BNC);

  18. An Exploratory Study of Collocational Use by ESL Students--A Task Based Approach

    ERIC Educational Resources Information Center

    Fan, May

    2009-01-01

    Collocation is an aspect of language generally considered arbitrary by nature and problematic to L2 learners who need collocational competence for effective communication. This study attempts, from the perspective of L2 learners, to have a deeper understanding of collocational use and some of the problems involved, by adopting a task based…

  19. Collocational Strategies of Arab Learners of English: A Study in Lexical Semantics.

    ERIC Educational Resources Information Center

    Muhammad, Raji Zughoul; Abdul-Fattah, Hussein S.

    Arab learners of English encounter a serious problem with collocational sequences. The present study purports to determine the extent to which university English language majors can use English collocations properly. A two-form translation test of 16 Arabic collocations was administered to both graduate and undergraduate students of English. The…

  20. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106 review... this Nationwide Collocation Programmatic Agreement in accordance with 36 CFR Section 800.14(b)(2)(iii... collocation being reviewed under the consultation process set forth under Subpart B of 36 CFR Part 800,...

  1. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106 review... this Nationwide Collocation Programmatic Agreement in accordance with 36 CFR Section 800.14(b)(2)(iii... collocation being reviewed under the consultation process set forth under Subpart B of 36 CFR Part 800,...

  2. Beyond Single Words: The Most Frequent Collocations in Spoken English

    ERIC Educational Resources Information Center

    Shin, Dongkwang; Nation, Paul

    2008-01-01

    This study presents a list of the highest frequency collocations of spoken English based on carefully applied criteria. In the literature, more than forty terms have been used for designating multi-word units, which are generally not well defined. To avoid this confusion, six criteria are strictly applied. The ten million word BNC spoken section…

  3. The Effects of Vocabulary Learning on Collocation and Meaning

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2009-01-01

    This study investigates the effects of receptive and productive vocabulary tasks on learning collocation and meaning. Japanese English as a foreign language students learned target words in three glossed sentences and in a cloze task. To determine the effects of the treatments, four tests were used to measure receptive and productive knowledge of…

  4. Beyond triple collocation: Applications to satellite soil moisture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...

  5. Collocation with nonlinear programming for two-sided flight path optimization

    NASA Astrophysics Data System (ADS)

    Horie, Kazuhiro

    This research develops a new numerical method for the problem of two-sided flight path optimization, that is, a method capable of finding trajectories satisfying the necessary condition of an open-loop representation of a saddle-point trajectory. The method of direct collocation with nonlinear programming is extended to find the solution of a zero-sum two-person differential game by incorporating the analytical optimality condition for one player into the system equations. The new method is named semi-direct collocation with nonlinear programming (semi-DCNLP). We apply the new method to a variety of problems of increasing complexity (the dolichobrachistochrone, a problem of ballistic interception, the homicidal chauffeur problem, and minimum-time spacecraft interception of an optimally evasive target) and thus verify that the method is capable of identifying saddle-point trajectories. While the method is quite robust, ambitious problems require a reasonable initial guess of the discretized solution from which the optimizer may converge. A method for generating a good initial guess, requiring no a priori information about the solution, is developed using genetic algorithms. The semi-DCNLP, in combination with the genetic-algorithm-based preprocessor, is then used to solve a very complicated pursuit-evasion problem: optimal air combat for realistic fighter aircraft models in three dimensions. Characteristics of the optimal air combat maneuvers for both aircraft are identified for many different initial conditions.

  6. How to fly an aircraft with control theory and splines

    NASA Technical Reports Server (NTRS)

    Karlsson, Anders

    1994-01-01

    When trying to fly an aircraft as smoothly as possible, it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory, in a system that tries to model an aircraft. Computer calculations in Matlab show that it is not possible to obtain sufficiently smooth control signals in this way. This is due to the fact that the splines try to approximate not only the test function but also its derivatives. Perfect tracking is achieved, but at the price of very peaky control signals and accelerations.

  7. Reconstruction of egg shape using B-spline

    NASA Astrophysics Data System (ADS)

    Roslan, Nurshazneem; Yahya, Zainor Ridzuan

    2015-05-01

    In this paper, the reconstruction of an egg's outline using piecewise parametric cubic B-splines is proposed. A reverse engineering process has been used to represent the generic shape of an egg in order to acquire its geometric information in the form of a two-dimensional set of points. For the curve reconstruction, the purpose is to optimize the control points of all curves such that the distance of the data points to the curve is minimized. The B-spline curve functions were then used to fit the reconstructed profile to the actual one.
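    The control-point optimization can be illustrated with a linear least-squares fit (a simplified single-segment cubic Bézier sketch with fixed parameter values; the paper fits piecewise cubic B-splines and the egg data are replaced by a synthetic curve here).

```python
import numpy as np

def bernstein_matrix(ts):
    """Cubic Bernstein design matrix: row i holds B0..B3 evaluated at ts[i]."""
    t = np.asarray(ts)[:, None]
    return np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t), t ** 3])

def fit_cubic_bezier(points, ts):
    """Least-squares control points minimising the distance of the data
    points to the curve at the given parameter values."""
    B = bernstein_matrix(ts)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl

# demo: recover known control points from noise-free samples of the curve
true_ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.0]])
ts = np.linspace(0.0, 1.0, 50)
samples = bernstein_matrix(ts) @ true_ctrl
ctrl = fit_cubic_bezier(samples, ts)
```

    With noisy data the same normal equations give the best-fitting curve in the least-squares sense rather than an exact recovery.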

  8. Near minimally normed spline quasi-interpolants on uniform partitions

    NASA Astrophysics Data System (ADS)

    Barrera, D.; Ibanez, M. J.; Sablonniere, P.; Sbibih, D.

    2005-09-01

    Spline quasi-interpolants (QIs) are local approximating operators for functions or discrete data. We consider the construction of discrete and integral spline QIs on uniform partitions of the real line having small infinity norms. We call them near minimally normed QIs: they are exact on polynomial spaces and minimize a simple upper bound of their infinity norms. We give precise results for cubic and quintic QIs. Also the QI error is considered, as well as the advantage that these QIs present when approximating functions with isolated discontinuities.

  9. The radial basis function finite collocation approach for capturing sharp fronts in time dependent advection problems

    NASA Astrophysics Data System (ADS)

    Stevens, D.; Power, H.

    2015-10-01

    We propose a node-based local meshless method for advective transport problems that is capable of operating on centrally defined stencils and is suitable for shock-capturing purposes. High spatial convergence rates can be achieved, in excess of eighth order in some cases. Strongly-varying smooth profiles may be captured at infinite Péclet number without instability, and for discontinuous profiles the solution exhibits neutrally stable oscillations that can be damped by introducing a small artificial diffusion parameter, allowing a good approximation to the shock-front to be maintained for long travel times without introducing spurious oscillations. The proposed method is based on local collocation with radial basis functions (RBFs) in a "finite collocation" configuration. In this approach the governing PDE and boundary equations are enforced directly within the local RBF collocation systems, rather than being reconstructed from fixed interpolating functions as is typical of finite difference, finite volume or finite element methods. In this way the interpolating basis functions naturally incorporate information from the governing PDE, including the strength and direction of the convective velocity field. By using these PDE-enhanced interpolating functions an "implicit upwinding" effect is achieved, whereby the flow of information naturally respects the specifics of the local convective field. This implicit upwinding effect allows high-convergence solutions to be obtained on centred stencils for advection problems. The method is formulated using a high-convergence implicit timestepping algorithm based on Richardson extrapolation. The spatial and temporal convergence of the proposed approach is demonstrated using smooth functions with large gradients. The capture of discontinuities is then investigated, showing how the addition of a dynamic stabilisation parameter can damp the neutrally stable oscillations with limited smearing of the shock front.
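    The core idea of enforcing the PDE directly in the RBF collocation matrix can be shown with a much-reduced example (a sketch only: a steady 1D Poisson problem with global multiquadric collocation, Kansa-style, rather than the paper's local stencils, implicit upwinding and time stepping; the shape-parameter choice is a heuristic assumption).

```python
import numpy as np

def kansa_poisson_1d(f, n=30, c=None):
    """Global multiquadric RBF collocation for u'' = f on [0, 1] with
    u(0) = u(1) = 0.  Interior rows of the collocation matrix apply the
    PDE operator to the basis; boundary rows apply the identity."""
    x = np.linspace(0.0, 1.0, n)
    c = c if c is not None else 4.0 / (n - 1)      # shape parameter (heuristic)
    s = x[:, None] - x[None, :]
    phi = np.sqrt(s**2 + c**2)                     # multiquadric values
    d2phi = c**2 / (s**2 + c**2) ** 1.5            # their second derivatives
    A = d2phi.copy()
    rhs = f(x)
    A[0], A[-1] = phi[0], phi[-1]                  # Dirichlet boundary rows
    rhs[0] = rhs[-1] = 0.0
    coef = np.linalg.solve(A, rhs)
    return x, phi @ coef                           # evaluate u at the nodes

# manufactured solution u = sin(pi x), so f = -pi^2 sin(pi x)
x, u = kansa_poisson_1d(lambda x: -np.pi**2 * np.sin(np.pi * x))
```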

  10. Matrix decomposition algorithms for the finite element Galerkin method with piecewise Hermite cubics

    NASA Astrophysics Data System (ADS)

    Bialecki, Bernard; Fairweather, Graeme; Knudson, David; Lipman, D.; Nguyen, Que; Sun, Weiwei; Weinberg, Gadalia

    2009-09-01

    Matrix decomposition algorithms (MDAs) employing fast Fourier transforms are developed for the solution of the systems of linear algebraic equations arising when the finite element Galerkin method with piecewise Hermite bicubics is used to solve Poisson's equation on the unit square. Like their orthogonal spline collocation counterparts, these MDAs, which require O(N² log N) operations on an N×N uniform partition, are based on knowledge of the solution of a generalized eigenvalue problem associated with the corresponding discretization of a two-point boundary value problem. The eigenvalues and eigenfunctions are determined for various choices of boundary conditions, and numerical results are presented to demonstrate the efficacy of the MDAs.

  11. Higher-order numerical methods derived from three-point polynomial interpolation

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and Hermitian finite-difference discretization. The equations generally apply for both uniform and variable meshes. Hybrid schemes resulting from different polynomial approximations for first and second derivatives lead to the nonuniform-mesh extension of the so-called compact or Padé difference techniques. A variety of fourth-order methods are described, and the concept is extended to sixth order. Solutions with these procedures are presented for the similar and non-similar boundary layer equations with and without mass transfer, the Burgers equation, and the incompressible viscous flow in a driven cavity. Finally, the interpolation procedure is used to derive higher-order temporal integration schemes, and results are shown for the diffusion equation.
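    A representative member of this family is the classical fourth-order compact (Padé) scheme for the second derivative on a uniform mesh, sketched below (boundary closure simplified by supplying exact end values; the scheme happens to be exact for polynomials up to degree five, which the demo exploits).

```python
import numpy as np

def compact_second_derivative(f, h, d2_left, d2_right):
    """Fourth-order compact (Pade) scheme on a uniform mesh:
        f''_{i-1} + 10 f''_i + f''_{i+1} = (12/h^2)(f_{i-1} - 2 f_i + f_{i+1}),
    assembled and solved as a tridiagonal system."""
    n = len(f)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                # exact values at the two ends
    rhs[0], rhs[-1] = d2_left, d2_right
    for i in range(1, n - 1):
        A[i, i - 1] = 1.0
        A[i, i] = 10.0
        A[i, i + 1] = 1.0
        rhs[i] = 12.0 / h**2 * (f[i - 1] - 2 * f[i] + f[i + 1])
    return np.linalg.solve(A, rhs)

# demo: f = x^4 has f'' = 12 x^2, reproduced to machine precision
x = np.linspace(0.0, 1.0, 21)
f = x ** 4
d2 = compact_second_derivative(f, x[1] - x[0], 0.0, 12.0)
```

    A production code would use a dedicated tridiagonal (Thomas) solver instead of the dense solve shown here.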

  12. BOX SPLINE BASED 3D TOMOGRAPHIC RECONSTRUCTION OF DIFFUSION PROPAGATORS FROM MRI DATA

    PubMed Central

    Ye, Wenxing; Portnoy, Sharon; Entezari, Alireza; Vemuri, Baba C.; Blackband, Stephen J.

    2011-01-01

    This paper introduces a tomographic approach for reconstruction of diffusion propagators, P(r), in a box spline framework. Box splines are chosen as basis functions for high-order approximation of P(r) from the diffusion signal. Box splines are a generalization of B-splines to the multivariate setting that is particularly useful in the context of tomographic reconstruction. The X-ray or Radon transform of a (tensor-product B-spline or a non-separable) box spline is a box spline – the space of box splines is closed under the Radon transform. We present synthetic and real multi-shell diffusion-weighted MR data experiments that demonstrate the increased accuracy of P(r) reconstruction as the order of the basis functions is increased. PMID:23459604

  13. Temi firthiani di linguistica applicata: "Restricted Languages" e "Collocation" (Firthian Themes in Applied Linguistics: "Restricted Languages" and "Collocation")

    ERIC Educational Resources Information Center

    Leonardi, Magda

    1977-01-01

    Discusses the importance of two Firthian themes for language teaching. The first theme, "Restricted Languages," concerns the "microlanguages" of every language (e.g., literary language, scientific, etc.). The second theme, "Collocation," shows that equivalent words in two languages rarely have the same position in both languages. (Text is in…

  14. L1 Influence on the Acquisition of L2 Collocations: Japanese ESL Users and EFL Learners Acquiring English Collocations

    ERIC Educational Resources Information Center

    Yamashita, Junko; Jiang, Nan

    2010-01-01

    This study investigated first language (L1) influence on the acquisition of second language (L2) collocations using a framework based on Kroll and Stewart (1994) and Jiang (2000), by comparing the performance on a phrase-acceptability judgment task among native speakers of English, Japanese English as a second language (ESL) users, and Japanese…

  15. Modelling Of Displacement Washing Of Pulp Bed Using Orthogonal Collocation On Finite Elements

    NASA Astrophysics Data System (ADS)

    Arora, Shelly; Potůček, František; Dhaliwal, S. S.; Kukreja, V. K.

    2009-07-01

    The mechanism of displacement washing of a packed bed of porous, compressible, cylindrical particles (e.g., fibers) is presented with the help of an axial dispersion model involving the Peclet number (Pe) and Biot number (Bi). Bulk fluid concentration and intra-pore solute concentration are related by a Langmuir adsorption isotherm. The model equations have been solved using orthogonal collocation on finite elements with Lagrangian interpolating polynomials as base functions. Displacement washing has been simulated using a laboratory washing cell, and experiments have been performed on pulp beds formed from unbeaten, unbleached kraft fibers. Model-predicted values have been compared with experimental values to check the applicability of the method.

  16. Radial Splines Would Prevent Rotation Of Bearing Race

    NASA Technical Reports Server (NTRS)

    Kaplan, Ronald M.; Chokshi, Jaisukhlal V.

    1993-01-01

    According to a proposal, interlocking fine-pitch ribs and grooves would be formed on the otherwise flat mating end faces of the housing and the outer race of a rolling-element bearing to be mounted in the housing. The splines would bear large torque loads while imposing minimal distortion on the raceway.

  17. Quantifying cervical-spine curvature using Bézier splines.

    PubMed

    Klinich, Kathleen D; Ebert, Sheila M; Reed, Matthew P

    2012-11-01

    Knowledge of the distributions of cervical-spine curvature is needed for computational studies of cervical-spine injury in motor-vehicle crashes. Many methods of specifying spinal curvature have been proposed, but they often involve qualitative assessment or a large number of parameters. The objective of this study was to develop a quantitative method of characterizing cervical-spine curvature using a small number of parameters. In the early 1970s, 180 sagittal X-rays of subjects seated in an automotive posture with their necks in neutral, flexed, and extended postures were collected. Subjects were selected to represent a range of statures and ages for each gender. The X-rays were reanalyzed using advanced technology and statistical methods. Coordinates of the posterior margins of the vertebral bodies and dens were digitized. Bézier splines were fit through the coordinates of these points. The interior control points that define the spline curvature were parameterized as a vector angle and length. By defining the length as a function of the angle, cervical-spine curvature was defined with just two parameters: the superior and inferior Bézier angles. A classification scheme was derived to sort each curvature by magnitude and type of curvature (lordosis versus S-shaped versus kyphosis; inferior or superior location). Cervical-spine curvature in an automotive seated posture varies with gender and age but not stature. Average values of the superior and inferior Bézier angles for cervical spines in flexion, neutral, and extension automotive postures are presented for each gender and age group. Use of Bézier splines fit through the posterior margins offers a quantitative method of characterizing cervical-spine curvature using two parameters: the superior and inferior Bézier angles. PMID:23387791
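    The two-angle idea can be sketched as follows (a hypothetical construction for illustration: the paper ties each control point's length to its angle, whereas here the lengths are free fractions of the chord, and all endpoint coordinates are invented).

```python
import numpy as np

def bezier_from_angles(p0, p3, ang_sup, ang_inf, len_sup, len_inf):
    """Place the two interior control points of a cubic Bezier at given
    angles (radians, measured from the p0-p3 chord) and fractional
    lengths along it.  Hypothetical sketch of the two-parameter idea."""
    chord = p3 - p0
    def rot(v, a):
        c, s = np.cos(a), np.sin(a)
        return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])
    p1 = p0 + len_sup * rot(chord, ang_sup)
    p2 = p3 - len_inf * rot(chord, -ang_inf)
    return np.array([p0, p1, p2, p3])

def bezier_eval(ctrl, t):
    """de Casteljau evaluation of a cubic Bezier at parameter t."""
    pts = ctrl.astype(float).copy()
    for _ in range(3):
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# demo: a lordotic-looking curve from superior/inferior angles of 15/20 deg
ctrl = bezier_from_angles(np.array([0.0, 0.0]), np.array([0.0, 10.0]),
                          np.radians(15), np.radians(20), 0.3, 0.3)
```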

  18. Adaptive Multilevel Second-Generation Wavelet Collocation Elliptic Solver: A Cure for High Viscosity Contrasts

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. N.; Vasilyev, O. V.; Yuen, D. A.

    2003-12-01

    An adaptive multilevel wavelet collocation method for solving multi-dimensional elliptic problems with localized structures is developed. The method is based on the general class of multi-dimensional second generation wavelets and is an extension of the dynamically adaptive second generation wavelet collocation method for evolution problems. Wavelet decomposition is used for grid adaptation and interpolation, while an O(N) hierarchical finite difference scheme, which takes advantage of the wavelet multilevel decomposition, is used for derivative calculations. The multilevel structure of the wavelet approximation provides a natural way to obtain the solution on a near optimal grid. In order to accelerate the convergence of the iterative solver, an iterative procedure analogous to the multigrid algorithm is developed. For problems with slowly varying viscosity, simple diagonal preconditioning works. For problems with large laterally varying viscosity contrasts, either a direct solver on shared-memory machines or a multilevel iterative solver with an incomplete LU preconditioner may be used. The method is demonstrated for the solution of a number of two-dimensional elliptic test problems with both constant and spatially varying viscosity with multiscale character.

  19. A coupled ensemble filtering and probabilistic collocation approach for uncertainty quantification of hydrological models

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, W. W.; Li, Y. P.; Huang, G. H.; Huang, K.

    2015-11-01

    In this study, a coupled ensemble filtering and probabilistic collocation (EFPC) approach is proposed for uncertainty quantification of hydrologic models. This approach combines the capabilities of the ensemble Kalman filter (EnKF) and the probabilistic collocation method (PCM) to provide a better treatment of uncertainties in hydrologic models. The EnKF method is employed to approximate the posterior probabilities of model parameters and improve the forecasting accuracy based on the observed measurements; the PCM approach is used to construct a model response surface in terms of the posterior probabilities of model parameters to reveal uncertainty propagation from model parameters to model outputs. The proposed method is applied to the Xiangxi River, located in the Three Gorges Reservoir area of China. The results indicate that the proposed EFPC approach can effectively quantify the uncertainty of hydrologic models. Even for a simple conceptual hydrological model, the EFPC approach is about 10 times faster than the traditional Monte Carlo method, without an obvious decrease in prediction accuracy. Finally, the results can explicitly reveal the contributions of model parameters to the total variance of model predictions during the simulation period.
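    The EnKF half of the coupling can be sketched with a stochastic analysis step (a generic textbook update, not the paper's hydrologic configuration; the scalar state, ensemble size, and observation variance below are arbitrary choices for the demo).

```python
import numpy as np

def enkf_update(ens, y_obs, H, obs_var, rng):
    """Stochastic EnKF analysis step.  ens is (n_members, n_state); each
    member is nudged toward a perturbed copy of the observation."""
    X = ens - ens.mean(axis=0)                       # ensemble anomalies
    P = X.T @ X / (len(ens) - 1)                     # sample covariance
    S = H @ P @ H.T + obs_var * np.eye(len(y_obs))   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    perturbed = y_obs + np.sqrt(obs_var) * rng.standard_normal((len(ens), len(y_obs)))
    return ens + (perturbed - ens @ H.T) @ K.T

# demo: prior N(0, 1), observation 2.0 with unit variance
# -> analytic posterior is N(1.0, 0.5)
rng = np.random.default_rng(1)
ens = rng.normal(0.0, 1.0, size=(2000, 1))
post = enkf_update(ens, np.array([2.0]), np.eye(1), 1.0, rng)
```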

  20. Radiation energy budget studies using collocated AVHRR and ERBE observations

    SciTech Connect

    Ackerman, S.A.; Inoue, Toshiro

    1994-03-01

    Changes in the energy balance at the top of the atmosphere are specified as a function of atmospheric and surface properties using observations from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner. By collocating the observations from the two instruments, flown on NOAA-9, the authors take advantage of the remote-sensing capabilities of each instrument. The AVHRR spectral channels were selected based on regions that are strongly transparent to clear sky conditions and are therefore useful for characterizing both surface and cloud-top conditions. The ERBE instruments make broadband observations that are important for climate studies. The approach of collocating these observations in time and space is used to study the radiative energy budget of three geographic regions: oceanic, savanna, and desert. 25 refs., 8 figs.

  1. Radiation energy budget studies using collocated AVHRR and ERBE observations

    NASA Technical Reports Server (NTRS)

    Ackerman, Steven A.; Inoue, Toshiro

    1994-01-01

    Changes in the energy balance at the top of the atmosphere are specified as a function of atmospheric and surface properties using observations from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner. By collocating the observations from the two instruments, flown on NOAA-9, the authors take advantage of the remote-sensing capabilities of each instrument. The AVHRR spectral channels were selected based on regions that are strongly transparent to clear sky conditions and are therefore useful for characterizing both surface and cloud-top conditions. The ERBE instruments make broadband observations that are important for climate studies. The approach of collocating these observations in time and space is used to study the radiative energy budget of three geographic regions: oceanic, savanna, and desert.

  2. Evaluation of the spline reconstruction technique for PET

    SciTech Connect

    Kastis, George A. Kyriakopoulou, Dimitra; Gaitanis, Anastasios; Fernández, Yolanda; Hutton, Brian F.; Fokas, Athanasios S.

    2014-04-15

    Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an ¹⁸F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an ¹⁸F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real point-source studies. In all simulated phantoms, the SRT exhibits higher contrast and lower bias than FBP at all noise levels, while increasing the COV in the reconstructed images. Finally, in real studies, whereas the contrast of the cold chambers is similar for both algorithms, the SRT reconstructed images of the NEMA phantom exhibit slightly higher COV values than those of FBP. In the Derenzo phantom, SRT resolves the 2-mm separated holes slightly better than FBP. The small-animal and human reconstructions via SRT exhibit slightly higher resolution and contrast than the FBP reconstructions. Conclusions: The SRT provides images of higher resolution, higher contrast, and lower bias than FBP, while slightly increasing the noise in the reconstructed images. Furthermore, it eliminates streak artifacts outside the object boundary. Unlike other analytic algorithms, the reconstruction time of SRT is comparable with that of FBP. The source code for SRT will become available in a future release of STIR.

  3. A Corpus-Based Study of the Linguistic Features and Processes Which Influence the Way Collocations Are Formed: Some Implications for the Learning of Collocations

    ERIC Educational Resources Information Center

    Walker, Crayton Phillip

    2011-01-01

    In this article I examine the collocational behaviour of groups of semantically related verbs (e.g., "head, run, manage") and nouns (e.g., "issue, factor, aspect") from the domain of business English. The results of this corpus-based study show that much of the collocational behaviour exhibited by these lexical items can be explained by examining…

  5. Low speed wind tunnel investigation of span load alteration, forward-located spoilers, and splines as trailing-vortex-hazard alleviation devices on a transport aircraft model

    NASA Technical Reports Server (NTRS)

    Croom, D. R.; Dunham, R. E., Jr.

    1975-01-01

    The effectiveness of a forward-located spoiler, a spline, and span load alteration due to a flap configuration change as trailing-vortex-hazard alleviation methods was investigated. For the transport aircraft model in the normal approach configuration, the results indicate that either a forward-located spoiler or a spline is effective in reducing the trailing-vortex hazard. The results also indicate that large changes in span loading, due to retraction of the outboard flap, may be an effective method of reducing the trailing-vortex hazard.

  6. BSR: B-spline atomic R-matrix codes

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg

    2006-02-01

    BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summary. Title of program: BSR Catalogue identifier: ADWY Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWY Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC Operating systems under which the new version has been tested: UNIX, Windows XP Programming language used: FORTRAN 95 Memory required to execute with typical data: Typically 256-512 Mwords. Since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem No. of bits in a word: 8 No. of processors used: 1 Has the code been vectorized or parallelized?: no No. of lines in distributed program, including test data, etc.: 69 943 No. of bytes in distributed program, including test data, etc.: 746 450 Peripherals used: scratch disk store; permanent disk store Distribution format: tar.gz Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput. Phys. Comm. 92 (1995) 290].

  7. Composite multi-modal vibration control for a stiffened plate using non-collocated acceleration sensor and piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Li, Shengquan; Li, Juan; Mo, Yueping; Zhao, Rong

    2014-01-01

    A novel active method for multi-mode vibration control of an all-clamped stiffened plate (ACSP) is proposed in this paper, using the extended-state-observer (ESO) approach based on non-collocated acceleration sensors and piezoelectric actuators. Considering the capacity of the ESO to estimate system state variables, output superposition and control coupling of other modes, external excitation, and model uncertainties simultaneously, a composite control method, i.e., the ESO-based vibration control scheme, is employed to ensure rejection of the lumped disturbances and uncertainties in the closed-loop system. The phenomenon of phase hysteresis and time delay, caused by non-collocated sensor/actuator pairs, degrades the performance of the control system, even inducing instability. To solve this problem, a simple proportional-differential (PD) controller and acceleration feed-forward with an output predictor design produce the control law for each vibration mode. The modal frequencies, phase hysteresis loops and phase lag values due to non-collocated placement of the acceleration sensor and piezoelectric patch actuator are experimentally obtained, and the phase lag is compensated by using the Smith predictor technique. In order to improve the vibration control performance, the chaos optimization method based on logistic mapping is employed to auto-tune the parameters of the feedback channel. The experimental control system for the ACSP is tested using the dSPACE real-time simulation platform. Experimental results demonstrate that the proposed composite active control algorithm is an effective approach for suppressing multi-modal vibrations.

  8. Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.

    PubMed

    Kainu, Annette; Timonen, Kirsi

    2016-07-01

    Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have therefore been introduced recently. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, in the derived predicted mean, and in individually calculated z-scores using both values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but the maximum difference remained within -0.018 to +0.022 litres of FEV1, representing ±0.4% of the predicted mean. Conclusions: The polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values. PMID:27071737
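
The study's core step, replacing a spline lookup table with a fitted low-degree polynomial, can be sketched in a few lines of NumPy. The `age` and `mspline` values below are hypothetical stand-ins for the published Finnish tables, which are not reproduced here:

```python
import numpy as np

# Hypothetical lookup-table values for the spline term (stand-in curve,
# NOT the published Mspline table).
age = np.linspace(18, 80, 63)
mspline = 0.02 * np.sin(age / 15.0) - 0.001 * (age - 40)

# Approximate the table with a degree-5 polynomial, then compare the
# polynomial estimates against the original table values.
coeffs = np.polyfit(age, mspline, deg=5)
approx = np.polyval(coeffs, age)
max_abs_err = np.max(np.abs(approx - mspline))
```

For a smooth spline term like this, the polynomial tracks the table to well under a thousandth of a litre, which is the kind of agreement the study reports.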

  9. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
    Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least-squares fit to block match generated estimates. Thus, the framework allows for a wide range of image-similarity block-match metric and physical modeling combinations. PMID:24694135
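
The underlying idea, fitting a smooth B-spline field to noisy block-match estimates, can be illustrated in one dimension with SciPy. This sketch uses a plain least-squares (l2) fit in place of the paper's l1 basis-pursuit formulation, and all data here are synthetic:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))            # voxel positions with matches
true_disp = np.sin(x / 2.0)                     # underlying displacement field
d = true_disp + 0.1 * rng.normal(size=x.size)   # noisy "block match" output

# Least-squares cubic B-spline fit on a clamped knot vector.
k = 3
t_interior = np.linspace(1, 9, 8)
t = np.r_[[0.0] * (k + 1), t_interior, [10.0] * (k + 1)]
spl = make_lsq_spline(x, d, t, k=k)

# The smooth fit recovers the field far better than the raw matches
# (whose RMS error equals the noise level, 0.1).
rms = np.sqrt(np.mean((spl(x) - true_disp) ** 2))
```

The smoothness of the B-spline parameterization is what filters out spatially inaccurate individual matches, which is the property the constrained registration problem exploits.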

  10. Defining window-boundaries for genomic analyses using smoothing spline techniques

    SciTech Connect

    Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia

    2015-04-17

    High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.
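
The two-step recipe described above, fit a cubic smoothing spline and take the sign changes of its second derivative as window boundaries, can be sketched with SciPy. This is an illustrative approximation of the method on toy data, not the authors' implementation:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_window_boundaries(x, y, smooth=None):
    """Fit a cubic smoothing spline and return its inflection points,
    i.e. locations where the fitted second derivative changes sign."""
    spl = UnivariateSpline(x, y, k=3, s=smooth)
    xs = np.linspace(x.min(), x.max(), 10 * len(x))
    d2 = spl.derivative(2)(xs)
    sign_change = np.where(np.diff(np.sign(d2)) != 0)[0]
    return xs[sign_change]

# Toy "test statistic along the genome": two peaks plus noise.
x = np.linspace(0, 10, 200)
y = (np.exp(-(x - 3) ** 2) + np.exp(-(x - 7) ** 2)
     + 0.05 * np.random.default_rng(0).normal(size=x.size))
bounds = spline_window_boundaries(x, y, smooth=1.0)
```

Each peak contributes at least two inflection points, so the boundaries fall where the signal's curvature flips, defining windows of data-driven, variable width, as in the paper.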

  11. Defining window-boundaries for genomic analyses using smoothing spline techniques

    DOE PAGESBeta

    Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia

    2015-04-17

    High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.

  12. B-spline algorithm for magnetized multielectron atomic structures

    NASA Astrophysics Data System (ADS)

    Zhao, L. B.; Stancil, P. C.

    2008-03-01

    A B-spline algorithm has been developed to evaluate the electronic structure of multielectron atoms in a magnetic field. A generalized electron configuration concept, which is crucial to perform the current investigation, was introduced to solve Hartree-Fock equations. The wave functions for electron orbitals of the magnetized multielectron atom are expanded in terms of a B-spline basis in the radial direction and spherical harmonics in the angular direction. The developed algorithm has been applied to calculations of He in a magnetic field. Energy levels of magnetized He in the ground state are presented as a function of magnetic field strength with a range from zero up to 2.35×10⁷ T and compared with available theoretical data.
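
The radial-expansion idea, representing a wave function in a B-spline basis, can be illustrated with a toy example in SciPy: expanding a hydrogen-like radial orbital in cubic B-splines. This sketches only the basis representation, not the Hartree-Fock machinery of the paper:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Expand u(r) = r * exp(-r), the hydrogen 1s radial function u = r*R,
# in a cubic B-spline basis on [0, 20] (illustrative toy problem).
r = np.linspace(0.0, 20.0, 80)          # radial grid
u = r * np.exp(-r)

spl = make_interp_spline(r, u, k=3)     # coefficients in the B-spline basis

# Check how faithfully the finite B-spline expansion represents u(r).
r_fine = np.linspace(0.0, 20.0, 2000)
err = np.max(np.abs(spl(r_fine) - r_fine * np.exp(-r_fine)))
```

With 80 radial points the cubic B-spline expansion reproduces the orbital to roughly four decimal places, which is why modest B-spline bases suffice for accurate atomic-structure calculations.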

  13. Monotonicity preserving splines using rational Ball cubic interpolation

    NASA Astrophysics Data System (ADS)

    Zakaria, Wan Zafira Ezza Wan; Jamal, Ena; Ali, Jamaludin Md.

    2015-10-01

    In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity [1]. The required curves have to be smooth, shape-preserving interpolants. In this paper a rational cubic spline in Ball representation is developed to generate an interpolant that preserves monotonicity. To control the shape of the interpolant, three shape parameters are introduced; these parameters are subject to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic interpolant gives very pleasing results.
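
To see why shape preservation matters, compare an ordinary cubic spline with a standard monotonicity-preserving scheme. The example uses SciPy's PCHIP (a monotone cubic Hermite interpolant, not the rational Ball-form spline of this paper) on monotone data with an abrupt step:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Monotone data with an abrupt rise between x = 2 and x = 3.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])

xs = np.linspace(0, 4, 401)
pchip = PchipInterpolator(x, y)(xs)     # monotonicity-preserving
spline = CubicSpline(x, y)(xs)          # ordinary cubic spline

overshoot_spline = spline.max() - y.max()  # > 0: spline exceeds the data range
overshoot_pchip = pchip.max() - y.max()    # ~0: monotone scheme stays in range
```

The plain cubic spline overshoots above the largest data value after the steep rise, while the monotone interpolant stays within the data range; the rational Ball-form spline of the paper achieves the same shape preservation with adjustable shape parameters.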

  14. On the role of exponential splines in image interpolation.

    PubMed

    Kirshner, Hagai; Porat, Moshe

    2009-10-01

    A Sobolev reproducing-kernel Hilbert space approach to image interpolation is introduced. The underlying kernels are exponential functions and are related to stochastic autoregressive image modeling. The corresponding image interpolants can be implemented effectively using compactly-supported exponential B-splines. A tight l2 upper bound on the interpolation error is then derived, suggesting that the proposed exponential functions are optimal in this regard. Experimental results indicate that the proposed interpolation approach with properly-tuned, signal-dependent weights outperforms currently available polynomial B-spline models of comparable order. Furthermore, a unified approach to image interpolation by ideal and nonideal sampling procedures is derived, suggesting that the proposed exponential kernels may have a significant role in image modeling as well. Our conclusion is that the proposed Sobolev-based approach could be instrumental and a preferred alternative in many interpolation tasks. PMID:19520639

  15. Application of integrodifferential splines to solving an interpolation problem

    NASA Astrophysics Data System (ADS)

    Burova, I. G.; Rodnikova, O. V.

    2014-12-01

    This paper deals with cases when the values of derivatives of a function are given at grid nodes or the values of integrals of a function over grid intervals are known. Polynomial and trigonometric integrodifferential splines for computing the value of a function from given values of its nodal derivatives and/or from its integrals over grid intervals are constructed. Error estimates are obtained, and numerical results are presented.

  16. Control theory and splines, applied to signature storage

    NASA Technical Reports Server (NTRS)

    Enqvist, Per

    1994-01-01

    In this report the problem we are going to study is the interpolation of a set of points in the plane with the use of control theory. We will discover how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problems. Actually we will see that the important parameters will be the two eigenvalues of the control matrix.

  17. Stiffness calculation and application of spline-ball bearing

    NASA Astrophysics Data System (ADS)

    Gu, Bo-Zhong; Zhou, Yu-Ming; Yang, De-Hua

    2006-12-01

    The spline-ball bearing is widely adopted in large precision instruments because of its distinctive performance. To support detailed investigation of a full instrument system, practical stiffness formulae for such bearings are derived from elastic contact mechanics and successfully applied to calculate the stiffness of the bearing used in an astronomical telescope. Appropriate treatment of the stiffness of such bearings in finite element analysis is also discussed and illustrated.

  18. Modified simple formulation on a collocated grid with an assessment of the simplified QUICK scheme

    SciTech Connect

    Rahman, M.M.; Miettinen, A.; Siikonen, T.

    1996-10-01

    The simplified QUICK scheme (transverse curvature terms are neglected) is extended to a nonuniform, rectangular, collocated grid system for the solution of two-dimensional fluid flow problems using a vertex-based finite-volume approximation. The influence of the non-pressure gradient source term is added to the Rhie-Chow interpolation method, and a local-mode Fourier analysis of the modified scheme demonstrates that characteristically, it is strongly elliptic and has a high-frequency damping capability, which effectively eliminates the grid-scale pressure oscillations. Within this framework, the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) iteration procedure is constructed. A comparison between the present method and the control-volume-based finite-element method (CVFEM) with vorticity-stream function formulation for free convection in a cavity indicates that the proposed scheme can be applied successfully to fluid flow and heat transfer problems.

  19. Explicit B-spline regularization in diffeomorphic image registration.

    PubMed

    Tustison, Nicholas J; Avants, Brian B

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline "flavored" diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  20. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  1. Spline Driven: High Accuracy Projectors for Tomographic Reconstruction From Few Projections.

    PubMed

    Momey, Fabien; Denis, Loïc; Burnier, Catherine; Thiébaut, Éric; Becker, Jean-Marie; Desbat, Laurent

    2015-12-01

    Tomographic iterative reconstruction methods need a very thorough modeling of data. This point becomes critical when the number of available projections is limited. At the core of this issue is the projector design, i.e., the numerical model relating the representation of the object of interest to the projections on the detector. Voxel-driven and ray-driven projection models are widely used for their short execution time in spite of their coarse approximations. The distance-driven model has improved accuracy but makes strong approximations to project voxel basis functions. Cubic voxel basis functions are anisotropic; accurately modeling their projection is therefore computationally expensive. Both smoother and more isotropic basis functions better represent the continuous functions and provide simpler projectors. These considerations have led to the development of spherically symmetric volume elements, called blobs. Their isotropy notwithstanding, blobs are often considered too computationally expensive in practice. In this paper, we consider using separable B-splines as basis functions to represent the object, and we propose to approximate the projection of these basis functions by a 2D separable model. When the degree of the B-splines increases, their isotropy improves and projections can be computed regardless of their orientation. The degree and the sampling of the B-splines can be chosen according to a tradeoff between approximation quality and computational complexity. We quantitatively measure the accuracy of our model and compare it with other projectors, such as the distance-driven model and the model proposed by Long et al. From the numerical experiments, we demonstrate that our projector, with its improved accuracy, better preserves the quality of the reconstruction as the number of projections decreases. Our projector with cubic B-splines requires about twice as many operations as a model based on voxel basis functions. 
Higher accuracy projectors can be used to improve the resolution of the existing systems, or to reduce the number of projections required to reach a given resolution, potentially reducing the dose absorbed by the patient. PMID:26259217

  2. Minimal multi-element stochastic collocation for uncertainty quantification of discontinuous functions

    NASA Astrophysics Data System (ADS)

    Jakeman, John D.; Narayan, Akil; Xiu, Dongbin

    2013-06-01

    We propose a multi-element stochastic collocation method that can be applied in high-dimensional parameter space for functions with discontinuities lying along manifolds of general geometries. The key feature of the method is that the parameter space is decomposed into multiple elements defined by the discontinuities, and thus only the minimal number of elements is utilized. On each of the resulting elements the function is smooth and can be approximated using high-order methods with fast convergence properties. The decomposition strategy is in direct contrast to the traditional multi-element approaches, which define the sub-domains by repeated splitting of the axes in the parameter space. Such methods are more prone to the curse of dimensionality because of the fast growth of the number of elements caused by the axis-based splitting. The present method is a two-step approach. First, a discontinuity detector is used to partition parameter space into disjoint elements in each of which the function is smooth. The detector uses an efficient combination of the high-order polynomial annihilation technique along with adaptive sparse grids, and this allows resolution of general discontinuities with a smaller number of points when the discontinuity manifold is low-dimensional. After partitioning, an adaptive technique based on the least orthogonal interpolant is used to construct a generalized Polynomial Chaos surrogate on each element. The adaptive technique reuses all information from the partitioning and is variance-suppressing. We present numerous numerical examples that illustrate the accuracy, efficiency, and generality of the method. When compared against standard locally-adaptive sparse grid methods, the present method uses far fewer collocation samples and is more accurate.

  3. Numerical aspects of a spline-based multiresolution recovery of the harmonic mass density out of gravity functionals

    NASA Astrophysics Data System (ADS)

    Michel, Volker; Wolf, Kerstin

    2008-04-01

    We show the numerical applicability of a multiresolution method based on harmonic splines on the 3-D ball which allows the regularized recovery of the harmonic part of the Earth's mass density distribution out of different types of gravity data, for example, different radial derivatives of the potential, at various positions which need not be located on a common sphere. This approximated harmonic density can be combined with its orthogonal anharmonic complement, for example determined from the splitting functions of free oscillations, into an approximation of the whole mass density function. The applicability of the presented tool is demonstrated by several test calculations based on simulated gravity values derived from EGM96. The method yields a multiresolution in the sense that the localization of the constructed spline basis functions can be increased, which, in combination with more data, yields a higher resolution of the resulting spline. Moreover, we show that a locally improved data situation allows a highly resolved recovery in that particular area, in combination with a coarse approximation elsewhere, which is an essential advantage of this method compared, for example, to polynomial approximation.

  4. Lqg/ga Design of Active Noise Controllers for a Collocated Acoustic Duct System

    NASA Astrophysics Data System (ADS)

    LIN, JONG-YIH; SHEU, HORNG-YIH; CHAO, SHIH-CHENG

    1999-12-01

    Active noise control of an acoustic duct system is studied by a real state-space model in this paper. The linear quadratic Gaussian (LQG) method is chosen to design an active noise controller in order to reject noise in a collocated duct system subject to a disturbance source at one end. Robustness property of the designed controller with respect to the uncertainty of a complex-valued acoustic impedance at the other end is validated through computer simulations. A nominal real-valued acoustic impedance is therefore used to design reduced order controllers. The design parameters of the LQG method are automatically adjusted by using a simple genetic algorithm (SGA) to achieve a better global control effect. This adjustment is guided by a fitness function of SGA specified by a control objective. Results from computer simulation demonstrate the global effectiveness of the active noise controllers. Results of experiments also support the feasibility of the proposed design method.

  5. Flight tests of vortex-attenuating splines

    NASA Technical Reports Server (NTRS)

    Patterson, J. C., Jr.

    1974-01-01

    Visual data on the formation and motion of the lift-induced wingtip vortex were obtained by a stationary airflow visualization method. The visual data indicated that the vortex cannot be eliminated by merely reshaping the wingtip; a configuration change will likely have only a small effect on the far-field flow.

  6. Stochastic Sparse-Grid Collocation Algorithm for Steady-State Analysis of Nonlinear System with Process Variations

    NASA Astrophysics Data System (ADS)

    Tao, Jun; Zeng, Xuan; Cai, Wei; Su, Yangfeng; Zhou, Dian

    In this paper, a Stochastic Collocation Algorithm combined with the Sparse Grid technique (SSCA) is proposed for the periodic steady-state analysis of nonlinear systems with process variations. Compared to existing approaches, SSCA has several considerable merits. Firstly, unlike moment-matching parameterized model order reduction (PMOR), which treats the circuit response on process variables and the frequency parameter equally by Taylor approximation, SSCA employs Homogeneous Chaos to capture the impact of process variations with an exponential convergence rate and adopts Fourier series or wavelet bases to model the steady-state behavior in the time domain. Secondly, in contrast to the Stochastic Galerkin Algorithm (SGA), which is efficient for stochastic linear system analysis, the complexity of SSCA is much smaller than that of SGA in the nonlinear case. Thirdly, unlike the Efficient Collocation Method, a heuristic approach which may suffer from the rank-deficiency problem and the Runge phenomenon, the Sparse Grid technique is developed to select the collocation points needed in SSCA, reducing the complexity while guaranteeing the approximation accuracy. Furthermore, although SSCA is proposed for stochastic nonlinear steady-state analysis, it can be applied to any other kind of nonlinear system simulation with process variations, such as transient analysis.
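
    The collocation idea at the heart of such methods can be illustrated in one dimension. The sketch below is illustrative only (the `response` function is an invented stand-in, not the SSCA algorithm): it propagates a Gaussian process variation through a nonlinear response by evaluating it at Gauss-Hermite collocation points, and checks the result against brute-force Monte Carlo.

```python
import numpy as np

# Probabilists' Gauss-Hermite rule: integrates against exp(-p^2/2).
nodes, weights = np.polynomial.hermite_e.hermegauss(5)

def response(p):
    # Invented stand-in for a nonlinear circuit response of one
    # Gaussian process variable.
    return np.tanh(1.0 + 0.3 * p)

# Stochastic collocation: 5 deterministic evaluations, weighted sum.
mean_sc = np.sum(weights * response(nodes)) / np.sqrt(2.0 * np.pi)

# Monte Carlo reference: 10^6 random evaluations.
rng = np.random.default_rng(0)
mean_mc = response(rng.standard_normal(1_000_000)).mean()
```

Five collocation points reproduce the Monte Carlo mean to several decimal places; this efficiency over sampling is the argument for collocation, and sparse grids extend the same idea to many variables.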

  7. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  8. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance.
CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increasing curvature smoothness by eliminating curvature oscillations and bumps (see figure).
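
    The deviation-from-smoothness measure described above can be sketched as follows (an illustrative reconstruction, not the CFACS code; the function and variable names are invented):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def third_derivative_jump_measure(x, y):
    """Sum of squared jumps in the third derivative of a cubic-spline
    interpolant of (x, y) -- an illustrative roughness measure in the
    spirit of the one described above, not the CFACS implementation."""
    cs = CubicSpline(x, y)
    # Each cubic piece has constant third derivative 6*c0, where c0 is
    # the piece's leading coefficient; jumps occur at interior knots.
    d3 = 6.0 * cs.c[0]
    return float(np.sum(np.diff(d3) ** 2))

x = np.linspace(0.0, 1.0, 11)
smooth = third_derivative_jump_measure(x, x ** 2)           # parabola: no jumps
rough = third_derivative_jump_measure(x, np.sin(12.0 * x))  # oscillatory data
```

A parabola lies exactly in the cubic-spline space, so its measure is essentially zero, while oscillatory data produces large third-derivative jumps; a smoother would trade small geometry changes for driving this measure down.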

  9. Railroad inspection based on ACFM employing a non-uniform B-spline approach

    NASA Astrophysics Data System (ADS)

    Chacón Muñoz, J. M.; García Márquez, F. P.; Papaelias, M.

    2013-11-01

    The stresses sustained by rails have increased in recent years due to the use of higher train speeds and heavier axle loads. For this reason, surface and near-surface defects generated by rolling contact fatigue (RCF) have become particularly significant, as they can cause unexpected structural failure of the rail, resulting in severe derailments. The accident that took place in Hatfield, UK (2000) is an example of a derailment caused by the structural failure of a rail section due to RCF. Early detection of RCF rail defects is therefore of paramount importance to the rail industry. The performance of existing ultrasonic and magnetic flux leakage techniques in detecting rail surface-breaking defects, such as head checks and gauge corner cracking, is inadequate during high-speed inspection, while eddy current sensors suffer from lift-off effects. The results obtained through rail inspection experiments under simulated conditions using Alternating Current Field Measurement (ACFM) probes suggest that this technique can be applied for the accurate and reliable detection of surface-breaking defects at high inspection speeds. This paper presents the B-spline approach used for accurately filtering the noise from the raw ACFM signal obtained during high-speed tests to improve the reliability of the measurements. A non-uniform B-spline approximation is employed to calculate the exact positions and dimensions of the defects. This method generates a smooth approximation of the ACFM data points associated with a rail surface-breaking defect.
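
    This kind of B-spline noise filtering can be sketched with a smoothing spline, whose knots the fitting routine places non-uniformly as needed (illustrative only; the synthetic "defect" signal below merely stands in for an ACFM trace):

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
clean = np.exp(-((x - 0.5) ** 2) / 0.005)          # a defect-like bump
noisy = clean + 0.05 * rng.standard_normal(x.size)

# splrep chooses knots non-uniformly to satisfy the smoothing condition
# sum((y - spline(x))**2) <= s; here s matches the expected noise energy.
tck = splrep(x, noisy, s=0.5)
smoothed = splev(x, tck)

rms_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rms_smooth = np.sqrt(np.mean((smoothed - clean) ** 2))
```

The smoothed curve tracks the bump's position and width while suppressing the noise floor, which is the property exploited for locating and sizing defects.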

  10. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    SciTech Connect

    Alwan, Aravind; Aluru, N.R.

    2013-12-15

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.
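
    KMM itself is not available in standard libraries, but the KDE baseline the paper compares against is. The sketch below (with an invented toy response function, not the paper's microactuator models) estimates a PDF from a limited sample and uses it to approximate a response statistic:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
sample = rng.normal(loc=2.0, scale=0.5, size=50)   # limited input data

kde = gaussian_kde(sample)                         # ML-style density estimate

def g(x):
    # Invented stand-in for a stochastic-collocation response surface.
    return x ** 2

# Approximate E[g(X)] by quadrature against the estimated density.
xs = np.linspace(-1.0, 5.0, 2001)
pdf = kde(xs)
dx = xs[1] - xs[0]
norm = np.sum(pdf) * dx                            # should be close to 1
mean_g = np.sum(g(xs) * pdf) * dx

true_mean_g = 2.0 ** 2 + 0.5 ** 2                  # E[X^2] = mu^2 + sigma^2
```

With only 50 samples the KDE mean estimate carries visible error from bandwidth inflation and sampling noise, which is exactly the limited-data regime where the paper argues KMM does better.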

  11. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    NASA Astrophysics Data System (ADS)

    Alwan, Aravind; Aluru, N. R.

    2013-12-01

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.

  12. Multi-element probabilistic collocation for sensitivity analysis in cellular signalling networks.

    PubMed

    Foo, J; Sindi, S; Karniadakis, G E

    2009-07-01

    The multi-element probabilistic collocation method (ME-PCM) is formulated as a tool for sensitivity analysis of differential equation models, as applied to cellular signalling networks. This method utilises a simple, efficient sampling algorithm to quantify local sensitivities throughout the parameter space. The application of the ME-PCM to a previously published ordinary differential equation model of the apoptosis signalling network is presented. The authors verify agreement with the previously identified regions of sensitivity and then analyse this region in greater detail with the ME-PCM. The authors demonstrate the generality of the ME-PCM by studying the sensitivity of the system using a variety of biologically relevant markers, such as variation in one (or many) chemical species as a function of time, and the total exposure of a single chemical species. PMID:19640163

  13. Application of a polynomial spline in higher-order accurate viscous-flow computations

    NASA Technical Reports Server (NTRS)

    Turner, M. G.; Keith, J. S.; Ghia, K. N.; Ghia, U.

    1982-01-01

    A quartic spline, S(4,2), is proposed which overcomes some of the difficulties associated with the use of splines S(5,3) and S(3,1) and provides fourth-order accurate results with relatively few grid points. The accuracy of spline S(4,2) is comparable to or better than that of the fourth-order box scheme and the compact differencing scheme. The use of spline S(4,2) is suggested as a possible way of obtaining fourth-order accurate solutions to Navier-Stokes equations.

  14. A numerical solution of the linear Boltzmann equation using cubic B-splines.

    PubMed

    Khurana, Saheba; Thachuk, Mark

    2012-03-01

    A numerical method using cubic B-splines is presented for solving the linear Boltzmann equation. The collision kernel for the system is chosen as the Wigner-Wilkins kernel. A total of three different representations for the distribution function are presented. Eigenvalues and eigenfunctions of the collision matrix are obtained for various mass ratios and compared with known values. Distribution functions, along with first and second moments, are evaluated for different mass and temperature ratios. Overall it is shown that the method is accurate and well behaved. In particular, moments can be predicted with very few points if the representation is chosen well. This method produces sparse matrices, can be easily generalized to higher dimensions, and can be cast into efficient parallel algorithms. PMID:22401425
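
    A property that makes cubic B-spline expansions like this attractive is that the basis functions on a clamped knot vector form a partition of unity and have local support, which keeps the resulting matrices well scaled and sparse. A minimal numerical check (generic, unrelated to the Wigner-Wilkins kernel itself):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                        # cubic
interior = np.linspace(0.0, 1.0, 11)
# Clamped (open) knot vector: repeat each end knot k extra times.
t = np.concatenate(([0.0] * k, interior, [1.0] * k))
n_basis = len(t) - k - 1                     # 13 cubic basis functions

x = np.linspace(0.0, 1.0, 101)
B = np.empty((n_basis, x.size))
for i in range(n_basis):
    coeffs = np.zeros(n_basis)
    coeffs[i] = 1.0                          # isolate the i-th basis function
    B[i] = BSpline(t, coeffs, k)(x)

unity = B.sum(axis=0)                        # partition of unity on [0, 1]
```

Each basis function is nonzero on at most k+1 knot spans, so at any evaluation point only four of the thirteen functions contribute; this is the source of the sparse matrices the abstract mentions.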

  15. Tensor Splines for Interpolation and Approximation of DT-MRI With Applications to Segmentation of Isolated Rat Hippocampi

    PubMed Central

    Barmpoutis, Angelos; Shepherd, Timothy M.; Forder, John R.

    2009-01-01

    In this paper, we present novel algorithms for statistically robust interpolation and approximation of diffusion tensors—which are symmetric positive definite (SPD) matrices—and use them in developing a significant extension to an existing probabilistic algorithm for scalar field segmentation, in order to segment diffusion tensor magnetic resonance imaging (DT-MRI) datasets. Using the Riemannian metric on the space of SPD matrices, we present a novel and robust higher order (cubic) continuous tensor product of B-splines algorithm to approximate the SPD diffusion tensor fields. The resulting approximations are appropriately dubbed tensor splines. Next, we segment the diffusion tensor field by jointly estimating the label (assigned to each voxel) field, which is modeled by a Gauss Markov measure field (GMMF) and the parameters of each smooth tensor spline model representing the labeled regions. Results of interpolation, approximation, and segmentation are presented for synthetic data and real diffusion tensor fields from an isolated rat hippocampus, along with validation. We also present comparisons of our algorithms with existing methods and show significantly improved results in the presence of noise as well as outliers. PMID:18041268
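
    The paper's central constraint, that interpolated diffusion tensors must remain SPD, can be illustrated with the simpler log-Euclidean construction (a cousin of the affine-invariant Riemannian approach used in the paper; the tensors below are synthetic):

```python
import numpy as np

def logm_spd(S):
    # Matrix logarithm of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def expm_sym(S):
    # Matrix exponential of a symmetric matrix.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

D0 = np.diag([3.0, 1.0, 1.0])      # synthetic tensor, principal axis along x
D1 = np.diag([1.0, 3.0, 1.0])      # synthetic tensor, principal axis along y

t = 0.5                            # halfway along the interpolation path
D_half = expm_sym((1.0 - t) * logm_spd(D0) + t * logm_spd(D1))
eigvals = np.linalg.eigvalsh(D_half)
```

Interpolating linearly in matrix-log space guarantees every intermediate matrix is SPD, unlike naive entrywise interpolation, which can swell determinants or (under extrapolation) lose positive-definiteness.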

  16. A Logarithmic Complexity Floating Frame of Reference Formulation with Interpolating Splines for Articulated Multi-Flexible-Body Dynamics

    PubMed Central

    Ahn, W.; Anderson, K.S.; De, S.

    2013-01-01

    An interpolating spline-based approach is presented for modeling multi-flexible-body systems in the divide-and-conquer (DCA) scheme. This algorithm uses the floating frame of reference formulation and piecewise spline functions to construct and solve the non-linear equations of motion of the multi-flexible-body system undergoing large rotations and translations. The new approach is compared with the flexible DCA (FDCA) that uses the assumed modes method [1]. The FDCA, in many cases, must resort to sub-structuring to accurately model the deformation of the system. We demonstrate, through numerical examples, that the interpolating spline-based approach is comparable in accuracy and superior in efficiency to the FDCA. The present approach is appropriate for modeling flexible mechanisms with thin 1D bodies undergoing large rotations and translations, including those with irregular shapes. As such, the present approach extends the current capability of the DCA to model deformable systems. The algorithm retains the theoretical logarithmic complexity inherent in the DCA when implemented in parallel. PMID:24124265

  17. Image Quality Improvement in Adaptive Optics Scanning Laser Ophthalmoscopy Assisted Capillary Visualization Using B-spline-based Elastic Image Registration

    PubMed Central

    Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa

    2013-01-01

    Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
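
    The contrast-to-noise ratio used for the objective comparison is commonly computed as the difference of region-of-interest means divided by the background standard deviation; a minimal sketch with synthetic pixel intensities (the paper's exact ROI definitions may differ):

```python
import numpy as np

rng = np.random.default_rng(3)
background = rng.normal(50.0, 5.0, 10_000)   # synthetic background pixels
vessel = rng.normal(80.0, 5.0, 1_000)        # synthetic vessel pixels

# CNR: ROI mean difference over background standard deviation.
cnr = (vessel.mean() - background.mean()) / background.std()
```

Registration improves CNR by sharpening the vessel signal relative to the background noise; here the synthetic contrast of 30 over a noise level of 5 gives a CNR near 6.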

  18. Collocation, Semantic Prosody, and Near Synonymy: A Cross-Linguistic Perspective

    ERIC Educational Resources Information Center

    Xiao, Richard; McEnery, Tony

    2006-01-01

    This paper explores the collocational behaviour and semantic prosody of near synonyms from a cross-linguistic perspective. The importance of these concepts to language learning is well recognized. Yet while collocation and semantic prosody have recently attracted much interest from researchers studying the English language, there has been little…

  19. English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information

    ERIC Educational Resources Information Center

    Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji

    2012-01-01

    We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…

  20. Collocational Links in the L2 Mental Lexicon and the Influence of L1 Intralexical Knowledge

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2011-01-01

    This article assesses the influence of L1 intralexical knowledge on the formation of L2 intralexical collocations. Two tests, a primed lexical decision task (LDT) and a test of receptive collocational knowledge, were administered to a group of non-native speakers (NNSs) (L1 Swedish), with native speakers (NSs) of English serving as controls on the…

  1. Corpora and Collocations in Chinese-English Dictionaries for Chinese Users

    ERIC Educational Resources Information Center

    Xia, Lixin

    2015-01-01

    The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…

  2. Investigating the Viability of a Collocation List for Students of English for Academic Purposes

    ERIC Educational Resources Information Center

    Durrant, Philip

    2009-01-01

    A number of researchers are currently attempting to create listings of important collocations for students of EAP. However, so far these attempts have (1) failed to include positionally-variable collocations, and (2) not taken sufficient account of variation across disciplines. The present paper describes the creation of one listing of…

  3. Collocational Links in the L2 Mental Lexicon and the Influence of L1 Intralexical Knowledge

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2011-01-01

    This article assesses the influence of L1 intralexical knowledge on the formation of L2 intralexical collocations. Two tests, a primed lexical decision task (LDT) and a test of receptive collocational knowledge, were administered to a group of non-native speakers (NNSs) (L1 Swedish), with native speakers (NSs) of English serving as controls on the…

  4. Study on the Causes and Countermeasures of the Lexical Collocation Mistakes in College English

    ERIC Educational Resources Information Center

    Yan, Hansheng

    2010-01-01

    Lexical collocation in English is an important topic in linguistic theory, and also a research topic increasingly emphasized in English teaching practice in China. Learners' collocational ability decides whether they can use real English masterfully in effective communication. In many years' English teaching practice,…

  5. Symmetrical and Asymmetrical Scaffolding of L2 Collocations in the Context of Concordancing

    ERIC Educational Resources Information Center

    Rezaee, Abbas Ali; Marefat, Hamideh; Saeedakhtar, Afsaneh

    2015-01-01

    Collocational competence is recognized to be integral to native-like L2 performance, and concordancing can be of assistance in gaining this competence. This study reports on an investigation into the effect of symmetrical and asymmetrical scaffolding on the collocational competence of Iranian intermediate learners of English in the context of…

  6. On the Effect of Gender and Years of Instruction on Iranian EFL Learners' Collocational Competence

    ERIC Educational Resources Information Center

    Ganji, Mansoor

    2012-01-01

    This study investigates Iranian EFL learners' knowledge of lexical collocations at three academic levels: freshmen, sophomores, and juniors. The participants were forty-three English majors doing their B.A. in English Translation studies at Chabahar Maritime University. They took a 50-item fill-in-the-blank test of lexical collocations. The…

  7. Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations

    ERIC Educational Resources Information Center

    Liu, Dilin

    2010-01-01

    Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,…

  8. Towards a Learner Need-Oriented Second Language Collocation Writing Assistant

    ERIC Educational Resources Information Center

    Ramos, Margarita Alonso; Carlini, Roberto; Codina-Filbà, Joan; Orol, Ana; Vincze, Orsolya; Wanner, Leo

    2015-01-01

    The importance of collocations, i.e. idiosyncratic binary word co-occurrences, in the context of second language learning has been repeatedly emphasized by scholars working in the field. Some went even so far as to argue that "vocabulary learning is collocation learning" (Hausmann, 1984, p. 395). Empirical studies confirm this…

  9. English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information

    ERIC Educational Resources Information Center

    Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji

    2012-01-01

    We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…

  10. Cross-Linguistic Influence: Its Impact on L2 English Collocation Production

    ERIC Educational Resources Information Center

    Phoocharoensil, Supakorn

    2013-01-01

    This research study investigated the influence of learners' mother tongue on their acquisition of English collocations. Having drawn the linguistic data from two groups of Thai EFL learners differing in English proficiency level, the researcher found that the native language (L1) plays a significant role in the participants' collocation learning…

  11. Numerical solution of the time-independent Dirac equation for diatomic molecules: B splines without spurious states

    NASA Astrophysics Data System (ADS)

    Fillion-Gourdeau, François; Lorin, Emmanuel; Bandrauk, André D.

    2012-02-01

    Two numerical methods are used to evaluate the relativistic spectrum of the two-center Coulomb problem (for the H2^+ and Th2^(179+) diatomic molecules) in the fixed-nuclei approximation by solving the single-particle time-independent Dirac equation. The first one is based on a min-max principle and uses a two-spinor formulation as a starting point. The second one is the Rayleigh-Ritz variational method combined with kinematically balanced basis functions. Both methods use a B-spline basis function expansion. We show that accurate results can be obtained with both methods and that no spurious states appear in the discretization process.

  12. Use of tensor product splines in magnet optimization

    SciTech Connect

    Davey, K.R.

    1999-05-01

    Variational metrics and other direct search techniques have proved useful in magnetic optimization. One technique used in magnetic optimization is to first fit the desired optimization parameter to the data. If this fit is smoothly differentiable, a number of powerful techniques become available for the optimization. The author shows the usefulness of tensor product splines in accomplishing this end. Proper choice of augmented knot placement not only makes the fit very accurate, but also allows for differentiation. Thus the gradients required for direct optimization in bivariate and trivariate applications are robustly generated.
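
    The fit-then-differentiate idea can be sketched with a bicubic tensor-product spline over gridded data (illustrative; the trigonometric field below merely stands in for sampled magnet data):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0.0, 1.0, 25)
y = np.linspace(0.0, 1.0, 25)
X, Y = np.meshgrid(x, y, indexing="ij")
F = np.sin(np.pi * X) * np.cos(np.pi * Y)    # stand-in for measured field data

# Bicubic tensor-product spline fit of the gridded data.
spl = RectBivariateSpline(x, y, F, kx=3, ky=3)

# Gradients come straight from differentiating the smooth fit.
dFdx = spl(0.3, 0.4, dx=1)[0, 0]
exact = np.pi * np.cos(np.pi * 0.3) * np.cos(np.pi * 0.4)
```

Because the spline is smoothly differentiable everywhere, a gradient-based optimizer can query derivatives at arbitrary points rather than relying on noisy finite differences of the raw data.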

  13. Geometric construction of quintic parametric B-splines

    NASA Astrophysics Data System (ADS)

    Cravero, Isabella; Manni, Carla; Sampoli, M. Lucia

    2008-11-01

    The aim of this paper is to present a new class of B-spline-like functions with tension properties. The main feature of these basis functions consists in possessing C3 or even C4 continuity and, at the same time, being endowed by shape parameters that can be easily handled. Therefore they constitute a useful tool for the construction of curves satisfying some prescribed shape constraints. The construction is based on a geometric approach which uses parametric curves with piecewise quintic components.

  14. A comparison of boundary and global collocation solutions for K(I) and CMOD calibration functions

    SciTech Connect

    Sanford, R.J.; Kirk, M.T. (U.S. Navy, David W. Taylor Naval Ship Research and Development Center, Annapolis, MD)

    1991-03-01

    Global and boundary collocation solutions for K(I), CMOD, and the full-field stress patterns of a single-edge notched tension specimen were compared to determine the accuracy of each technique and the utility of each for determining solutions for the short and the deep crack case. It was demonstrated that inclusion of internal stress conditions in the collocation, i.e., performing a global rather than a boundary collocation solution, expands the range of crack lengths over which accurate results can be obtained. In particular, the global collocation approach provided accurate results for crack lengths between 10 percent and 80 percent of the specimen width for a typical specimen geometry. Comparable accuracy for boundary collocation was found only for crack lengths between 20 percent and 60 percent of the specimen width. 27 refs.

  15. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
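
    The regularization step that converts an ill-posed inverse problem into a well-posed one can be illustrated in miniature with a Tikhonov-penalized least-squares fit (a generic sketch, not the reservoir algorithm; the synthetic forward model is invented):

```python
import numpy as np

rng = np.random.default_rng(7)
j = np.arange(1, 41)
# Synthetic forward model with rapidly decaying singular values: ill-posed.
A = rng.standard_normal((50, 40)) @ np.diag(1.0 / j ** 3)
x_true = np.ones(40)
b = A @ x_true + 1e-3 * rng.standard_normal(50)   # noisy observations

def solve(lam):
    # Tikhonov-regularized normal equations:
    # minimize ||A x - b||^2 + lam * ||x||^2.
    return np.linalg.solve(A.T @ A + lam * np.eye(40), A.T @ b)

x_ill = solve(0.0)      # unregularized: observation noise wildly amplified
x_reg = solve(1e-8)     # small penalty restores stability

err_ill = np.linalg.norm(x_ill - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

Even a tiny penalty suppresses the noise amplification along the near-null directions of the forward model, at the cost of a small bias; choosing the penalty weight is the regularization-parameter problem the abstract addresses.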

  16. Full-turn symplectic map from a generator in a Fourier-spline basis

    SciTech Connect

    Berg, J.S.; Warnock, R.L.; Ruth, R.D.; Forest, E.

    1993-04-01

    Given an arbitrary symplectic tracking code, one can construct a full-turn symplectic map that approximates the result of the code to high accuracy. The map is defined implicitly by a mixed-variable generating function. The implicit definition is no great drawback in practice, thanks to an efficient use of Newton`s method to solve for the explicit map at each iteration. The generator is represented by a Fourier series in angle variables, with coefficients given as B-spline functions of action variables. It is constructed by using results of single-turn tracking from many initial conditions. The method has been appliedto a realistic model of the SSC in three degrees of freedom. Orbits can be mapped symplectically for 10{sup 7} turns on an IBM RS6000 model 320 workstation, in a run of about one day.

  17. Estimating error cross-correlations in soil moisture data sets using extended collocation analysis

    NASA Astrophysics Data System (ADS)

    Gruber, A.; Su, C.-H.; Crow, W. T.; Zwieback, S.; Dorigo, W. A.; Wagner, W.

    2016-02-01

    Global soil moisture records are essential for studying the role of hydrologic processes within the larger Earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multisource soil moisture retrievals into a unified data set. However, this requires an appropriate parameterization of the error structures of the underlying data sets. While triple collocation (TC) analysis has been widely recognized as a powerful tool for estimating random error variances of coarse-resolution soil moisture data sets, the estimation of error cross covariances remains an unresolved challenge. Here we propose a method—referred to as extended collocation (EC) analysis—for estimating error cross-correlations by generalizing the TC method to an arbitrary number of data sets and relaxing the assumption, made therein, of zero error cross-correlation for certain data set combinations. A synthetic experiment shows that EC analysis is able to reliably recover true error cross-correlation levels. Applied to real soil moisture retrievals from Advanced Microwave Scanning Radiometer-EOS (AMSR-E) C-band and X-band observations together with advanced scatterometer (ASCAT) retrievals, modeled data from Global Land Data Assimilation System (GLDAS)-Noah and in situ measurements drawn from the International Soil Moisture Network, EC yields reasonable and strong nonzero error cross-correlations between the two AMSR-E products. Against expectation, nonzero error cross-correlations are also found between ASCAT and AMSR-E. We conclude that the proposed EC method represents an important step toward a fully parameterized error covariance matrix for coarse-resolution soil moisture data sets, which is vital for any rigorous data assimilation framework or data merging scheme.
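
    Classical TC, the starting point that EC generalizes, estimates each product's error variance from the pairwise covariances of three collocated series with mutually independent errors. A minimal synthetic sketch (the product labels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
truth = rng.standard_normal(n)               # unknown true soil moisture signal

x = truth + 0.3 * rng.standard_normal(n)     # e.g. satellite product A
y = truth + 0.5 * rng.standard_normal(n)     # e.g. satellite product B
z = truth + 0.2 * rng.standard_normal(n)     # e.g. model product

C = np.cov(np.vstack([x, y, z]))
# TC estimate: err_var(x) = Var(x) - Cov(x,y)*Cov(x,z)/Cov(y,z), etc.
ex2 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
ey2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
ez2 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
```

The recovered values approximate the true error variances (0.09, 0.25, 0.04) without access to the truth series; when errors are cross-correlated the zero-cross-correlation assumption breaks down, which is the gap EC analysis addresses.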

  18. The determination of gravity anomalies from geoid heights using the inverse Stokes' formula, Fourier transforms, and least squares collocation

    NASA Technical Reports Server (NTRS)

    Rummel, R.; Sjoeberg, L.; Rapp, R. H.

    1978-01-01

    A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function that approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied to geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.

  19. Application of the operator spline technique to nonlinear estimation and control of moving elastic systems

    NASA Technical Reports Server (NTRS)

    Karray, Fakhreddine; Dwyer, Thomas A. W., III

    1990-01-01

    A bilinear model of the vibrational dynamics of a deformable maneuvering body is described. Estimates of the deformation state are generated through a low dimensional operator spline interpolator of bilinear systems combined with a feedback linearized based observer. Upper bounds on error estimates are also generated through the operator spline, and potential application to shaping control purposes is highlighted.

  20. Robustness properties of LQG optimized compensators for collocated rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    In this paper we study the stability robustness of the closed-loop system with collocated rate sensors using LQG (mean square rate) optimized compensators. Our main result is that the transmission zeros of the compensator are precisely the structure modes when the actuator/sensor locations are 'pinned' and/or 'clamped', i.e., motion in the direction sensed is not allowed. We have stability even under parameter mismatch, except in the unlikely situation where such a mode frequency of the assumed system coincides with an undamped mode frequency of the real system and the corresponding mode shape is an eigenvector of the compensator transfer function matrix at that frequency. For a truncated modal model - such as that of the NASA LaRC Phase Zero Evolutionary model - the transmission zeros of the corresponding compensator transfer function can be interpreted as the structure modes when motion in the directions sensed is prohibited.

  1. Generation of global VTEC maps from low latency GNSS observations based on B-spline modelling and Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte

    2015-04-01

    In May 2014 DGFI-TUM (the former DGFI) and the German Space Situational Awareness Centre (GSSAC) started to develop an OPerational Tool for Ionospheric Mapping And Prediction (OPTIMAP); in November 2014 the Institute of Astrophysics at the University of Göttingen (IAG) joined the group as the third partner. This project aims at the computation and prediction of maps of the vertical total electron content (VTEC) and the electron density distribution of the ionosphere on a global scale from various space-geodetic observation techniques, such as GNSS and satellite altimetry, as well as Sun observations. In this contribution we present first results, i.e., a near-real-time processing framework for generating VTEC maps by assimilating GNSS (GPS, GLONASS) based ionospheric data into a two-dimensional global B-spline approach. To be more specific, the spatial variations of VTEC are modelled by trigonometric B-spline functions in longitude and by endpoint-interpolating polynomial B-spline functions in latitude. Since B-spline functions are compactly supported and highly localizing, our approach can handle large data gaps appropriately and thus provides a better approximation of data with heterogeneous density and quality than the commonly used spherical harmonics. The presented method models temporal variations of VTEC inside a Kalman filter. The unknown parameters of the filter state vector are composed of the B-spline coefficients as well as the satellite and receiver DCBs. To approximate the temporal variation of these state vector components within the filter, a dynamical model has to be set up. The current implementation of the filter allows selection among a random walk process, a Gauss-Markov process, and a dynamic process driven by an empirical ionosphere model, e.g. the International Reference Ionosphere (IRI).
    To run the model, ionospheric input data are acquired from terrestrial GNSS networks through online archive systems (such as IGS) with approximately one hour of latency. Before feeding the filter with new hourly data, the raw GNSS observations are downloaded and pre-processed via geometry-free linear combinations to provide signal delay information including the ionospheric effects and the differential code biases. Next steps will incorporate further space-geodetic techniques and introduce Sun observations into the procedure. The ultimate goal is to develop a time-dependent model of the electron density based on different geodetic and solar observations.
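
    The random-walk option of such a filter can be sketched in a few lines. All dimensions, noise levels and the observation matrix below are illustrative stand-ins (a real system carries many more B-spline coefficients plus satellite and receiver DCBs), not OPTIMAP's actual configuration.

```python
import numpy as np

# Minimal random-walk Kalman filter sketch for a VTEC-style state vector.
# Dimensions and noise levels are purely illustrative.
n_state, n_obs = 8, 20
rng = np.random.default_rng(1)

x = np.zeros(n_state)                  # state estimate (spline coefficients)
P = np.eye(n_state)                    # state covariance
Q = 0.01 * np.eye(n_state)             # random-walk process noise
R = 0.1 * np.eye(n_obs)                # observation noise
H = rng.normal(size=(n_obs, n_state))  # basis functions evaluated at pierce points

def kf_step(x, P, z):
    # Prediction under a random-walk model: state unchanged, covariance grows.
    x_pred, P_pred = x, P + Q
    # Update with the new (hourly) batch of ionospheric observations z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(n_state) - K @ H) @ P_pred
    return x_new, P_new

# Observations generated from a "true" coefficient vector of ones.
z = H @ np.ones(n_state) + rng.normal(0, 0.3, n_obs)
x, P = kf_step(x, P, z)
```

    A Gauss-Markov dynamic model would simply multiply the predicted state by a decay factor in the prediction step.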

  2. A GENERALIZED STOCHASTIC COLLOCATION APPROACH TO CONSTRAINED OPTIMIZATION FOR RANDOM DATA IDENTIFICATION PROBLEMS

    SciTech Connect

    Webster, Clayton G; Gunzburger, Max D

    2013-01-01

    We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives are discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/controls can be either deterministic or stochastic. Given an objective, we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule, and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high dimensional spaces, particularly in the case when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions) and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation FEM approach. This allows for decoupled, moderately high dimensional, parameterized computations of the stochastic optimality system, where at each collocation point, deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates, for the fully discrete problems, will be described and used to compare the efficiency of the method with several other techniques.
Numerical examples illustrate the theoretical results and demonstrate the distinctions between the various stochastic identification objectives.
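
    The essence of stochastic collocation is that the deterministic solver is evaluated only at quadrature nodes in the random parameter, and statistics are recovered from a weighted sum. A one-dimensional sketch with a Gauss-Hermite rule and a toy response function (standing in for an expensive PDE solve) illustrates this decoupling:

```python
import numpy as np

# One-dimensional stochastic collocation: each node evaluation is an
# independent, fully deterministic "solve", here a toy response function.
def model(y):
    return np.exp(0.5 * y)          # stand-in for a PDE quantity of interest

# Probabilists' Gauss-Hermite rule, normalized for Y ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
weights = weights / np.sqrt(2.0 * np.pi)

vals = model(nodes)                  # the only model evaluations needed
mean = np.sum(weights * vals)
var = np.sum(weights * vals ** 2) - mean ** 2

# Exact moments of exp(Y/2): mean = exp(1/8), variance = exp(1/2) - exp(1/4)
print(mean, var)
```

    Sparse grids generalize this tensor-product idea to moderately many random dimensions while using far fewer nodes than a full tensor grid.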

  3. Variable selection in Bayesian smoothing spline ANOVA models: Application to deterministic computer codes

    PubMed Central

    Reich, Brian J.; Storlie, Curtis B.; Bondell, Howard D.

    2009-01-01

    With many predictors, choosing an appropriate subset of the covariates is a crucial, and difficult, step in nonparametric regression. We propose a Bayesian nonparametric regression model for curve fitting and variable selection. We use the smoothing spline ANOVA framework to decompose the regression function into interpretable main effect and interaction functions. Stochastic search variable selection via MCMC sampling is used to search for models that fit the data well. Also, we show that variable selection is highly sensitive to hyperparameter choice and develop a technique to select hyperparameters that control the long-run false positive rate. The method is used to build an emulator for a complex computer model for two-phase fluid flow. PMID:19789732

  4. Using spline models to estimate the varying health risks from air pollution across Scotland.

    PubMed

    Lee, Duncan

    2012-11-30

    The health risks associated with long-term exposure to air pollution are often estimated from small-area data by regressing the number of disease cases in each small area against air pollution concentrations and other covariates. The majority of studies in this field estimate only a single health risk for the entire region, whereas in fact the risks in each small area may vary because of differences in the exposure level and the extent to which the population is vulnerable to disease. This paper proposes a natural cubic spline model for estimating these varying health risks, which allows the risks to depend (potentially) non-linearly on additional risk factors. The methods are implemented within a Bayesian setting, with inference based on Markov chain Monte Carlo simulation. The approach is illustrated by a study based in Scotland, which investigates the relationship between PM10 concentrations and respiratory-related hospital admissions. PMID:22736479
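
    As a frequentist sketch of the spline dose-response component (the paper itself works in a Bayesian MCMC setting), one can fit a natural cubic spline, linear beyond the boundary knots, by least squares. The knot placement and data below are synthetic, not from the Scottish study.

```python
import numpy as np

# Least-squares fit of a natural cubic spline in the truncated-power
# basis (linear beyond the boundary knots). A frequentist sketch of
# the spline dose-response idea; knots and data are invented.
def natural_spline_basis(x, knots):
    K = len(knots)
    def d(k):  # helper from the usual natural-spline construction
        return (np.maximum(x - knots[k], 0) ** 3
                - np.maximum(x - knots[-1], 0) ** 3) / (knots[-1] - knots[k])
    cols = [np.ones_like(x), x]
    cols += [d(k) - d(K - 2) for k in range(K - 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
pm10 = rng.uniform(5, 40, 300)                        # exposure levels
log_rr = 0.01 * pm10 + 0.0002 * (pm10 - 20) ** 2      # "true" nonlinear risk
y = log_rr + rng.normal(0, 0.05, 300)                 # noisy log relative risk

B = natural_spline_basis(pm10, np.array([5.0, 15.0, 25.0, 40.0]))
beta, *_ = np.linalg.lstsq(B, y, rcond=None)
fitted = B @ beta
```

    With K knots the basis has K columns, so the fit stays low-dimensional while still capturing smooth nonlinearity in the exposure-risk curve.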

  5. A numerical investigation of the GRLW equation using lumped Galerkin approach with cubic B-spline.

    PubMed

    Zeybek, Halil; Karakoç, S Battal Gazi

    2016-01-01

    In this work, we construct the lumped Galerkin approach based on cubic B-splines to obtain the numerical solution of the generalized regularized long wave equation. Applying the von Neumann approximation, it is shown that the linearized algorithm is unconditionally stable. The presented method is implemented to three test problems including single solitary wave, interaction of two solitary waves and development of an undular bore. To prove the performance of the numerical scheme, the error norms [Formula: see text] and [Formula: see text] and the conservative quantities [Formula: see text], [Formula: see text] and [Formula: see text] are computed and the computational data are compared with the earlier works. In addition, the motion of solitary waves is described at different time levels. PMID:27026895

  6. The construction of operational matrix of fractional derivatives using B-spline functions

    NASA Astrophysics Data System (ADS)

    Lakestani, Mehrdad; Dehghan, Mehdi; Irandoust-pakchin, Safar

    2012-03-01

    Fractional calculus has been used to model physical and engineering processes that are best described by fractional differential equations, so a reliable and efficient technique for solving such equations is needed. Here we construct the operational matrix of the fractional derivative of order α in the Caputo sense using linear B-spline functions. The main characteristic of this approach is that it reduces such problems to systems of algebraic equations, which can then be solved directly. The method is applied to solve two types of fractional differential equations, linear and nonlinear. Illustrative examples are included to demonstrate the validity and applicability of the new technique presented in the current paper.

  7. Using a radial ultrasound probe's virtual origin to compute midsagittal smoothing splines in polar coordinates.

    PubMed

    Heyne, Matthias; Derrick, Donald

    2015-12-01

    Tongue surface measurements from midsagittal ultrasound scans are effectively arcs with deviations representing tongue shape, but smoothing-spline analysis of variances (SSANOVAs) assume variance around a horizontal line. Therefore, calculating SSANOVA average curves of tongue traces in Cartesian Coordinates [Davidson, J. Acoust. Soc. Am. 120(1), 407-415 (2006)] creates errors that are compounded at tongue tip and root where average tongue shape deviates most from a horizontal line. This paper introduces a method for transforming data into polar coordinates similar to the technique by Mielke [J. Acoust. Soc. Am. 137(5), 2858-2869 (2015)], but using the virtual origin of a radial ultrasound transducer as the polar origin-allowing data conversion in a manner that is robust against between-subject and between-session variability. PMID:26723359
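
    The coordinate transformation itself is straightforward once the virtual origin is known; the origin placement below is a made-up value for illustration, not one from the paper.

```python
import numpy as np

# Convert midsagittal tongue-contour points from Cartesian to polar
# coordinates about the probe's virtual origin, so smoothing splines
# can be fitted to radius as a function of angle. The origin location
# (in arbitrary image units) is illustrative only.
def to_polar(x, y, origin=(0.0, -70.0)):
    dx, dy = x - origin[0], y - origin[1]
    theta = np.arctan2(dy, dx)       # angle about the virtual origin
    r = np.hypot(dx, dy)             # radial distance from the origin
    return theta, r

def to_cartesian(theta, r, origin=(0.0, -70.0)):
    return origin[0] + r * np.cos(theta), origin[1] + r * np.sin(theta)

# Round trip on a synthetic arc-like tongue trace
x = np.linspace(-30, 30, 50)
y = 10 * np.cos(x / 30)
theta, r = to_polar(x, y)
x2, y2 = to_cartesian(theta, r)
assert np.allclose(x, x2) and np.allclose(y, y2)
```

    Fitting the spline to (theta, r) rather than (x, y) makes the variance assumption of the SSANOVA approximately radial, which is the point of the method.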

  8. Local Convexity-Preserving C2 Rational Cubic Spline for Convex Data

    PubMed Central

    Abd Majid, Ahmad; Ali, Jamaludin Md.

    2014-01-01

    We present a smooth and visually pleasant display of convex 2D data, a contribution towards improving existing methods that can be used to obtain more accurate results. We develop a local convexity-preserving interpolant for convex data using a C2 rational cubic spline, which involves three families of shape parameters in its representation. Data-dependent sufficient constraints are imposed on a single shape parameter to conserve the inherited shape feature of the data. The remaining two shape parameters are used to modify the convex curve to obtain a visually pleasing curve according to industrial demand. The scheme is tested through several numerical examples, showing that it is local, computationally economical, and visually pleasing. PMID:24757421

  9. TWO-LEVEL TIME MARCHING SCHEME USING SPLINES FOR SOLVING THE ADVECTION EQUATION. (R826371C004)

    EPA Science Inventory

    A new numerical algorithm using quintic splines is developed and analyzed: quintic spline Taylor-series expansion (QSTSE). QSTSE is an Eulerian flux-based scheme that uses quintic splines to compute space derivatives and Taylor series expansion to march in time. The new scheme...
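
    A heavily simplified sketch of the idea, spline-based space derivatives combined with a time expansion, can be assembled from SciPy's spline routines. This uses only a first-order time step rather than the scheme's full Taylor-series expansion, and the grid, pulse and step counts are illustrative, so it should not be read as QSTSE itself.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Quintic-spline space derivative with a first-order time step for the
# linear advection equation u_t + c u_x = 0. A simplified sketch of the
# spline-derivative idea, not the flux-based QSTSE scheme itself.
c, dx, dt = 1.0, 0.01, 0.002                 # CFL number c*dt/dx = 0.2
x = np.arange(0.0, 1.0, dx)
u = np.exp(-200 * (x - 0.3) ** 2)            # initial Gaussian pulse

for _ in range(100):                          # advance to t = 0.2
    tck = splrep(x, u, k=5, s=0)             # quintic interpolating spline
    ux = splev(x, tck, der=1)                # spline-based space derivative
    u = u - c * dt * ux                      # first-order time step

# The pulse should now be centered near x = 0.3 + c*t = 0.5
print(x[np.argmax(u)])
```

    The full scheme replaces the first-order step with a Taylor series in time, which is what gives it its higher temporal accuracy.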

  10. Collocated comparisons of continuous and filter-based PM2.5 measurements at Fort McMurray, Alberta, Canada

    PubMed Central

    Hsu, Yu-Mei; Wang, Xiaoliang; Chow, Judith C.; Watson, John G.; Percy, Kevin E.

    2016-01-01

    Collocated comparisons for three PM2.5 monitors were conducted from June 2011 to May 2013 at an air monitoring station in the residential area of Fort McMurray, Alberta, Canada, a city located in the Athabasca Oil Sands Region. Extremely cold winters (down to approximately −40°C) coupled with low PM2.5 concentrations present a challenge for continuous measurements. Both the tapered element oscillating microbalance (TEOM), operated at 40°C (i.e., TEOM40), and the Synchronized Hybrid Ambient Real-time Particulate (SHARP, a Federal Equivalent Method [FEM]) monitor were compared with a Partisol PM2.5 U.S. Federal Reference Method (FRM) sampler. While hourly TEOM40 PM2.5 values were consistently ~20–50% lower than those of SHARP, no statistically significant differences were found between the 24-hr averages for FRM and SHARP. Orthogonal regression (OR) equations derived from FRM and TEOM40 were used to adjust the TEOM40 (i.e., TEOMadj) and improve its agreement with FRM, particularly for the cold season. Hourly TEOMadj values for the 12 years from 1999 to 2011 were derived from the OR equations between SHARP and TEOM40 obtained from the 2-year (2011–2013) collocated measurements. The trend analysis combining both TEOMadj and SHARP measurements showed a statistically significant decrease in PM2.5 concentrations with a seasonal slope of −0.15 μg m−3 yr−1 from 1999 to 2014. Implications: Consistency in PM2.5 measurements is needed for trend analysis. Collocated comparison among the three PM2.5 monitors demonstrated the difference between FRM and TEOM, as well as between SHARP and TEOM. The orthogonal regression equations can be applied to correct historical TEOM data to examine long-term trends within the network. PMID:26727574
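
    Orthogonal regression treats both instruments as noisy, fitting the principal axis of the scatter instead of minimizing vertical residuals only. A sketch with synthetic collocated data follows; the underread factor and noise levels are invented, not the station's values.

```python
import numpy as np

# Orthogonal (total least squares) regression of FRM on TEOM via the
# principal axis of the centered data -- the kind of adjustment
# equation used to correct a TEOM record. All data are synthetic.
def orthogonal_regression(x, y):
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    a, b = vt[0]                     # direction of the principal axis
    slope = b / a
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

rng = np.random.default_rng(3)
true_pm = rng.gamma(4, 2, 500)                  # "true" PM2.5 (ug/m3)
teom = 0.7 * true_pm + rng.normal(0, 0.5, 500)  # TEOM underreads ~30%
frm = true_pm + rng.normal(0, 0.5, 500)         # reference sampler

slope, intercept = orthogonal_regression(teom, frm)
teom_adj = slope * teom + intercept             # adjusted record (TEOMadj)
```

    With equal error variances in both variables, the recovered slope approaches the inverse of the underread factor (here about 1/0.7), which ordinary least squares would underestimate.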

  11. COLLINARUS: collection of image-derived non-linear attributes for registration using splines

    NASA Astrophysics Data System (ADS)

    Chappelow, Jonathan; Bloch, B. Nicolas; Rofsky, Neil; Genega, Elizabeth; Lenkinski, Robert; DeWolf, William; Viswanath, Satish; Madabhushi, Anant

    2009-02-01

    We present a new method for fully automatic non-rigid registration of multimodal imagery, including structural and functional data, that utilizes multiple textural feature images to drive an automated spline-based non-linear image registration procedure. Multimodal image registration is significantly more complicated than registration of images from the same modality or protocol, on account of the difficulty in quantifying similarity between different structural and functional information, and also due to possible physical deformations resulting from the data acquisition process. The COFEMI technique for feature ensemble selection and combination has been previously demonstrated to improve rigid registration performance over intensity-based MI for images of dissimilar modalities with visible intensity artifacts. Hence, we present here the natural extension of feature ensembles for driving automated non-rigid image registration in our new technique termed Collection of Image-derived Non-linear Attributes for Registration Using Splines (COLLINARUS). Qualitative and quantitative evaluation of the COLLINARUS scheme is performed on several sets of real multimodal prostate images and synthetic multiprotocol brain images. Multimodal (histology and MRI) prostate image registration is performed for 6 clinical data sets comprising a total of 21 groups of in vivo structural (T2-w) MRI, functional dynamic contrast enhanced (DCE) MRI, and ex vivo WMH images with cancer present. Our method determines a non-linear transformation to align WMH with the high resolution in vivo T2-w MRI, followed by mapping of the histopathologic cancer extent onto the T2-w MRI. The cancer extent is then mapped from T2-w MRI onto DCE-MRI using the combined non-rigid and affine transformations determined by the registration.
    Evaluation of prostate registration is performed by comparison with the 3 time point (3TP) representation of functional DCE data, which provides an independent estimate of cancer extent. The set of synthetic multiprotocol images, acquired from the BrainWeb Simulated Brain Database, comprises 11 pairs of T1-w and proton density (PD) MRI of the brain. Following the application of a known warping to misalign the images, non-rigid registration was then performed to recover the original, correct alignment of each image pair. Quantitative evaluation of brain registration was performed by direct comparison of (1) the recovered deformation field to the applied field and (2) the original undeformed and recovered PD MRI. For each of the data sets, COLLINARUS is compared with the MI-driven counterpart of the B-spline technique. In each of the quantitative experiments, registration accuracy was found to be significantly (p < 0.05) higher for COLLINARUS than for MI-driven B-spline registration. Over 11 slices, the mean absolute error in the deformation field recovered by COLLINARUS was found to be 0.8830 mm.

  12. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGESBeta

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  13. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    SciTech Connect

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  14. Using Spline Functions for the Shape Description of the Surface of Shell Structures

    NASA Astrophysics Data System (ADS)

    Lenda, Grzegorz

    2014-12-01

    Assessing the shape of shell-structure covers is important for both the safety and the functionality of the construction. The most numerous group among such constructions comprises objects shaped as quadrics (cooling towers, gas and liquid tanks, radio-telescope dishes, etc.). The observational material for these objects (point sets), collected during periodic measurements, is usually converted into a continuous form by approximation with a quadric surface. The resulting models are then applied in assessing the deformation of the surface over a given period of time. Such a procedure has, however, some significant limitations. Approximation with quadrics allows the basic dimensions and location of the construction to be determined, but it yields idealized objects that provide no information on local surface deformations. These can only be identified by comparing the model with the observed point set; if the periodic measurements are carried out at independent, separate points, it is impossible to determine the existing deformations directly. The second problem results from the one-equation character of the ideal approximation model: real deformations of the object change its basic parameters, inter alia the semi-axis lengths of the main quadrics. The third problem appears when the construction is not a quadric and no information on the equation describing its shape is available; adopting the wrong kind of approximation function produces a model with large deviations from the observed points. All of the above inconveniences can be avoided by applying splines to describe the surface shape of shell structures. The use of functions of this type, however, comes up against other kinds of limitations.
    This study addresses these issues, presenting several methods that increase the accuracy and reduce the time of spline-based modelling.

  15. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background of spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS-3 altimeter data is examined by determining the shortest recoverable wavelength (corresponding to the cut-off frequency), using data from some 18 profiles. The total power (variance) in the sea surface topography is computed with respect to the reference ellipsoid as well as with respect to the GEM-9 surface. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained; this algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS-3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.

  16. B-spline explicit active surfaces: an efficient framework for real-time 3-D region-based segmentation.

    PubMed

    Barbosa, Daniel; Dietenbeck, Thomas; Schaerer, Joel; D'hooge, Jan; Friboulet, Denis; Bernard, Olivier

    2012-01-01

    A new formulation of active contours based on explicit functions has been recently suggested. This novel framework allows real-time 3-D segmentation since it reduces the dimensionality of the segmentation problem. In this paper, we propose a B-spline formulation of this approach, which further improves the computational efficiency of the algorithm. We also show that this framework allows evolving the active contour using local region-based terms, thereby overcoming the limitations of the original method while preserving computational speed. The feasibility of real-time 3-D segmentation is demonstrated using simulated and medical data such as liver computer tomography and cardiac ultrasound images. PMID:22186712

  17. Variance-based global sensitivity analysis for multiple scenarios and models with implementation using sparse grid collocation

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Ye, Ming

    2015-09-01

    Sensitivity analysis is a vital tool in hydrological modeling for identifying influential parameters for inverse modeling and uncertainty analysis, and variance-based global sensitivity analysis has gained popularity. However, the conventional global sensitivity indices are defined with consideration of only parametric uncertainty. Based on a hierarchical structure of parameter, model, and scenario uncertainties and on recently developed techniques of model- and scenario-averaging, this study derives new global sensitivity indices for multiple models and multiple scenarios. To reduce the computational cost of variance-based global sensitivity analysis, a sparse grid collocation method is used to evaluate the mean and variance terms involved. In a simple synthetic case of groundwater flow and reactive transport, it is demonstrated that the global sensitivity indices vary substantially between the four models and three scenarios. Not considering the model and scenario uncertainties might result in biased identification of important model parameters. This problem is resolved by using the new indices defined for multiple models and/or multiple scenarios, and this is particularly true when the sensitivity indices and model/scenario probabilities vary substantially. The sparse grid collocation method dramatically reduces the computational cost in comparison with the popular quasi-random sampling method. The new framework of global sensitivity analysis is mathematically general and can be applied to a wide range of hydrologic and environmental problems.
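
    The variance-based first-order indices at the core of such an analysis can be estimated with the standard pick-freeze (Saltelli-type) estimator. The sketch below uses plain Monte Carlo on a toy model rather than the paper's sparse grid collocation or its multi-model averaging.

```python
import numpy as np

# First-order Sobol indices by the pick-freeze (Saltelli-type) Monte
# Carlo estimator. The model is a toy stand-in for a groundwater solver.
def sobol_first_order(model, d, n, rng):
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = model(A), model(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace only input i with the B sample
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

def model(X):
    return X[:, 0] + 2.0 * X[:, 1]   # analytic indices: S = (0.2, 0.8)

rng = np.random.default_rng(4)
S = sobol_first_order(model, d=2, n=200_000, rng=rng)
print(S)  # close to [0.2, 0.8]
```

    Sparse grid collocation replaces the Monte Carlo sums here with quadrature over the parameter space, which is where the paper's computational savings come from.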

  18. Cubic spline function interpolation in atmosphere models for the software development laboratory: Formulation and data

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, J. C.

    1976-01-01

    A tabulation of selected altitude-correlated values of pressure, density, speed of sound, and coefficient of viscosity for each of six models of the atmosphere is presented in block data format. Interpolation for the desired atmospheric parameters is performed by using cubic spline functions. The recursive relations necessary to compute the cubic spline function coefficients are derived and implemented in subroutine form. Three companion subprograms, which form the preprocessor and processor, are also presented. These subprograms, together with the data element, compose the spline fit atmosphere package. Detailed FLOWGM flow charts and FORTRAN listings of the atmosphere package are presented in the appendix.
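
    A modern equivalent of the spline-fit atmosphere package can be assembled from SciPy. The table values below are rough US Standard Atmosphere numbers for illustration only, and log-pressure is interpolated because it is nearly linear in altitude, so the cubic spline behaves well between table points.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Cubic-spline interpolation of an altitude-correlated pressure table,
# in the spirit of the subroutine package described above. The numbers
# are approximate US Standard Atmosphere values, for illustration.
alt_km = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
pressure_pa = np.array([101325.0, 54019.0, 26436.0, 12044.0,
                        5474.0, 2511.0, 1171.0])

spline = CubicSpline(alt_km, np.log(pressure_pa))

def pressure_at(h_km):
    """Interpolated pressure (Pa) at the given altitude in km."""
    return float(np.exp(spline(h_km)))

print(round(pressure_at(11.0)))   # between the 10 and 15 km table entries
```

    The same pattern extends to density, speed of sound, and viscosity columns, one spline per tabulated quantity.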

  19. Experimental procedure for the evaluation of tooth stiffness in spline coupling including angular misalignment

    NASA Astrophysics Data System (ADS)

    Curà, Francesca; Mura, Andrea

    2013-11-01

    Tooth stiffness is a very important parameter in studying both the static and dynamic behaviour of spline couplings and gears. Many works concerning tooth stiffness calculation are available in the literature, but experimental results are very rare, especially for spline couplings. In this work, experimental values of spline coupling tooth stiffness have been obtained by means of a special hexapod measuring device. Experimental results have been compared with the corresponding theoretical and numerical ones. The effect of angular misalignment between hub and shaft has also been investigated in the experimental plan.

  20. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.

    2016-03-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  2. Collocated Dataglyphs for large-message storage and retrieval

    NASA Astrophysics Data System (ADS)

    Motwani, Rakhi C.; Breidenbach, Jeff A.; Black, John R.

    2004-06-01

    In contrast to the security and integrity of electronic files, printed documents are vulnerable to damage and forgery due to their physical nature. Researchers at the Palo Alto Research Center use DataGlyph technology to add digital characteristics to printed documents, providing tamper-proof authentication and damage resistance. Such a DataGlyph document is known as a GlyphSeal. The limited DataGlyph carrying capacity per printed page restricted the application of this technology to graphically simple, single-page documents. In this paper the authors design a protocol, motivated by techniques from the networking domain and back-up strategies, which extends the GlyphSeal technology to larger, graphically complex, multi-page documents. This protocol provides fragmentation, sequencing and data loss recovery. The Collocated DataGlyph Protocol renders large glyph messages onto multiple printed pages and recovers the glyph data from rescanned versions of the multi-page documents, even when pages are missing, reordered or damaged. The novelty of this protocol is the application of ideas from RAID to the domain of DataGlyphs. The current revision of this protocol can generate at most 255 pages when page recovery is desired, and it does not provide enough data density to store highly detailed images in a reasonable amount of page space.
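    The RAID-like page-recovery idea can be sketched in a few lines (a hypothetical simplification, not the actual protocol): one parity page XORs all data fragments, so any single missing or damaged page can be reconstructed from the survivors:

```python
# Minimal RAID-style parity recovery across "pages" of equal length.
def make_pages(fragments):
    # Append one parity page that is the XOR of all data fragments.
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return fragments + [bytes(parity)]

def recover(pages, missing_index):
    # Reconstruct the one missing page by XOR-ing all surviving pages.
    recovered = bytearray(len(next(p for p in pages if p is not None)))
    for idx, page in enumerate(pages):
        if idx == missing_index:
            continue
        for i, b in enumerate(page):
            recovered[i] ^= b
    return bytes(recovered)

fragments = [b"glyph da", b"ta split", b" 3 ways!"]
pages = make_pages(fragments)
lost = 1
damaged = [p if i != lost else None for i, p in enumerate(pages)]
print(recover(damaged, lost))  # -> b'ta split'
```

    Single-parity XOR tolerates one lost page per group; a production scheme would add sequencing headers and stronger erasure codes for multiple losses.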

  3. Assessment of adequate quality and collocation of reference measurements with space-borne hyperspectral infrared instruments to validate retrievals of temperature and water vapour

    NASA Astrophysics Data System (ADS)

    Calbet, X.

    2016-01-01

    A method is presented to assess whether a given reference ground-based point observation, typically a radiosonde measurement, is adequately collocated and sufficiently representative of space-borne hyperspectral infrared instrument measurements. Once this assessment is made, the ground-based data can be used to validate and potentially calibrate, with a high degree of accuracy, the hyperspectral retrievals of temperature and water vapour.

  4. Estimates of Mode-S EHS aircraft derived wind observation errors using triple collocation

    NASA Astrophysics Data System (ADS)

    de Haan, S.

    2015-12-01

    Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from two-meter temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple collocation method (Stoffelen, 1998) to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated observation errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained using Mode-S EHS (de Haan, 2011). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
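    The triple collocation estimator itself is compact. A sketch with synthetic data (assumed Gaussian, unbiased, uncorrelated errors and unit calibration; the numbers are illustrative, not the study's data): for three collocated systems the cross terms cancel in expectation, leaving each system's own error variance:

```python
import numpy as np

# Three collocated "measurement systems" observing the same unknown truth.
rng = np.random.default_rng(0)
n = 200_000
truth = rng.normal(10.0, 5.0, n)
x1 = truth + rng.normal(0.0, 1.4, n)   # e.g. Mode-S EHS wind component
x2 = truth + rng.normal(0.0, 2.0, n)   # e.g. radar radial wind
x3 = truth + rng.normal(0.0, 1.0, n)   # e.g. NWP model equivalent

def tc_error_std(a, b, c):
    # Error std of `a`: E[(a-b)(a-c)] = var(e_a) when errors are uncorrelated,
    # because the common truth cancels in both differences.
    return np.sqrt(np.mean((a - b) * (a - c)))

sig1 = tc_error_std(x1, x2, x3)
sig2 = tc_error_std(x2, x1, x3)
sig3 = tc_error_std(x3, x1, x2)
print(f"estimated error stds: {sig1:.2f}, {sig2:.2f}, {sig3:.2f}")
```

    With enough collocations the recovered standard deviations approach the true 1.4, 2.0 and 1.0 used to generate the data.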

  5. Spectral collocation for multiparameter eigenvalue problems arising from separable boundary value problems

    NASA Astrophysics Data System (ADS)

    Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.

    2015-10-01

    In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
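    As a small illustration of spectral collocation (a standard one-parameter Chebyshev example, not the paper's multiparameter solver), the sketch below discretizes -u'' = λu on [0,1] with Dirichlet boundary conditions; the smallest eigenvalue should approach π²:

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on N+1 points (Trefethen, Spectral
    # Methods in MATLAB), defined on the reference interval [-1, 1].
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # diagonal makes row sums zero
    return D, x

N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]               # impose u = 0 at both endpoints
# Mapping [-1, 1] -> [0, 1] scales the second derivative by 4.
lam = np.sort(np.linalg.eigvals(-4.0 * D2).real)
print(lam[:3])                          # expect ~ (k*pi)^2 for k = 1, 2, 3
```

    The smallest eigenvalues converge spectrally fast; the same collocation idea, applied simultaneously in several coordinates, yields the algebraic MEPs the paper solves.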

  6. Multivariate multilevel spline models for parallel growth processes: application to weight and mean arterial pressure in pregnancy.

    PubMed

    Macdonald-Wallis, Corrie; Lawlor, Debbie A; Palmer, Tom; Tilling, Kate

    2012-11-20

    Growth models are commonly used in life course epidemiology to describe growth trajectories and their determinants or to relate particular patterns of change to later health outcomes. However, methods to analyse relationships between two or more change processes occurring in parallel, in particular to assess evidence for causal influences of change in one variable on subsequent changes in another, are less developed. We discuss linear spline multilevel models with a multivariate response and show how these can be used to relate rates of change in a particular time period in one variable to later rates of change in another variable by using the variances and covariances of individual-level random effects for each of the splines. We describe how regression coefficients can be calculated for these associations and how these can be adjusted for other parameters such as random effects relating to baseline values or rates of change in earlier time periods, and compare different methods for calculating the standard errors of these regression coefficients. We also show that these models can equivalently be fitted in the structural equation modelling framework and apply each method to weight and mean arterial pressure changes during pregnancy, obtaining similar results for multilevel and structural equation models. This method improves on the multivariate linear growth models that have previously been used to model parallel processes, because it enables nonlinear patterns of change to be modelled and the temporal sequence of multivariate changes to be determined, with adjustment for change in earlier time periods. PMID:22733701

  7. Reverse engineering of complex biological body parts by squared distance enabled non-uniform rational B-spline technique and layered manufacturing.

    PubMed

    Pandithevan, Ponnusamy

    2015-02-01

    In tissue engineering, the successful modeling of a scaffold for the replacement of damaged body parts depends mainly on the external geometry and internal architecture, in order to avoid adverse effects such as pain and an inability to transfer load to the surrounding bone. Due to their flexibility in controlling the parameters, layered manufacturing processes are widely used for the fabrication of bone tissue engineering scaffolds from a given computer-aided design model. This article presents a squared distance minimization approach for weight optimization of non-uniform rational B-spline curves and surfaces, modifying the geometry so that it exactly fits the defect region automatically and thus allowing a scaffold specific to subject and site to be fabricated. The study showed that although the errors associated with the B-spline curve and surface were minimized more by the squared distance method than by the point distance method and the tangent distance method, the errors could be reduced further for the rational B-spline curve and surface, as the optimal weights could change the shape as desired for the defect site. In order to measure the efficacy of the present approach, the results were compared with the point distance method and the tangent distance method in optimizing the non-rational and rational B-spline curve and surface fits for the defect site. The optimized geometry was then used to construct the scaffold in a fused deposition modeling system as an example. The results revealed that the squared distance-based weight optimization of the rational curve and surface produces defect-specific geometry that fits the defect region better than the other methods used. PMID:25767151

  8. Prediction of protein structural class using novel evolutionary collocation-based sequence representation.

    PubMed

    Chen, Ke; Kurgan, Lukasz A; Ruan, Jishou

    2008-07-30

    Knowledge of structural classes is useful in understanding folding patterns in proteins. Although existing structural class prediction methods have applied virtually all state-of-the-art classifiers, many of them use a relatively simple protein sequence representation that often includes amino acid (AA) composition. To this end, we propose a novel sequence representation that incorporates evolutionary information encoded using PSI-BLAST profile-based collocation of AA pairs. We used six benchmark datasets and five representative classifiers to quantify and compare the quality of structural class prediction with the proposed representation. The best classifier, a support vector machine, achieved 61-96% accuracy on the six datasets. These predictions were comprehensively compared with a wide range of recently proposed methods for prediction of structural classes. Our comprehensive comparison shows the superiority of the proposed representation, which results in error rate reductions that range between 14% and 26% when compared with predictions of the best-performing, previously published classifiers on the considered datasets. The study also shows that, for the benchmark datasets that include sequences characterized by low identity (i.e., 25%, 30%, and 40%), the prediction accuracies are 20-35% lower than for the other three datasets that include sequences with a higher degree of similarity. In conclusion, the proposed representation is shown to substantially improve the accuracy of structural class prediction. A web server that implements the presented prediction method is freely available at http://biomine.ece.ualberta.ca/Structural_Class/SCEC.html. PMID:18293306
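    A profile-free toy version of collocated AA-pair features can be sketched as follows (the actual method encodes PSI-BLAST profiles; here we simply count residue pairs separated by small gaps in the raw sequence):

```python
from collections import Counter

def collocated_pairs(seq, max_gap=2):
    # Count AA pairs (a, b, gap) that co-occur with 0..max_gap residues
    # between them -- a simple analogue of collocation-based features.
    counts = Counter()
    for gap in range(max_gap + 1):
        for i in range(len(seq) - gap - 1):
            counts[(seq[i], seq[i + gap + 1], gap)] += 1
    return counts

feats = collocated_pairs("MKVLAAGLLK", max_gap=1)
print(feats[("L", "L", 0)], feats[("A", "G", 1)])
```

    Normalizing such counts by sequence length yields a fixed-size feature vector suitable for a standard classifier such as a support vector machine.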

  9. Optimal aeroassisted orbital transfer with plane change using collocation and nonlinear programming

    NASA Technical Reports Server (NTRS)

    Shi, Yun. Y.; Nelson, R. L.; Young, D. H.

    1990-01-01

    The fuel optimal control problem arising in the non-planar orbital transfer employing aeroassisted technology is addressed. The mission involves the transfer from high energy orbit (HEO) to low energy orbit (LEO) with orbital plane change. The basic strategy here is to employ a combination of propulsive maneuvers in space and aerodynamic maneuvers in the atmosphere. The basic sequence of events for the aeroassisted HEO to LEO transfer consists of three phases. In the first phase, the orbital transfer begins with a deorbit impulse at HEO which injects the vehicle into an elliptic transfer orbit with perigee inside the atmosphere. In the second phase, the vehicle is optimally controlled by lift and bank angle modulations to perform the desired orbital plane change and to satisfy heating constraints. Because of the energy loss during the turn, an impulse is required to initiate the third phase to boost the vehicle back to the desired LEO orbital altitude. The third impulse is then used to circularize the orbit at LEO. The problem is solved by a direct optimization technique which uses piecewise polynomial representation for the state and control variables and collocation to satisfy the differential equations. This technique converts the optimal control problem into a nonlinear programming problem which is solved numerically. Solutions were obtained for cases with and without heat constraints and for cases of different orbital inclination changes. The method appears to be more powerful and robust than other optimization methods. In addition, the method can handle complex dynamical constraints.
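    The transcription step described above, piecewise polynomial states with collocation conditions feeding a nonlinear programming solver, can be sketched on a toy problem (a double integrator rather than the aeroassisted transfer) using trapezoidal collocation:

```python
import numpy as np
from scipy.optimize import minimize

# Direct collocation sketch: double integrator x'' = u, steer (0,0) -> (1,0)
# in T = 1 while minimizing the integral of u^2. Trapezoidal collocation turns
# the ODE into algebraic "defect" constraints and the OCP into an NLP.
N = 20                                   # number of intervals
h = 1.0 / N

def unpack(z):
    x = z[: N + 1]                       # position at nodes
    v = z[N + 1 : 2 * (N + 1)]           # velocity at nodes
    u = z[2 * (N + 1) :]                 # control at nodes
    return x, v, u

def defects(z):
    # Trapezoidal integration of the dynamics over each interval.
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    return np.concatenate([dx, dv])

def boundary(z):
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

def cost(z):
    _, _, u = unpack(z)
    return 0.5 * h * np.sum(u[1:] ** 2 + u[:-1] ** 2)   # trapezoid rule

z0 = np.zeros(3 * (N + 1))
res = minimize(cost, z0, method="SLSQP", options={"maxiter": 500},
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
x, v, u = unpack(res.x)
# Analytic continuous optimum: u(t) = 6 - 12t, cost = 12.
print(f"cost = {res.fun:.3f}, u(0) = {u[0]:.2f}")
```

    The real application replaces the toy dynamics with atmospheric flight equations, adds heating-rate inequality constraints, and lets the NLP solver choose lift and bank angle at the nodes.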

  10. Quiet Clean Short-haul Experimental Engine (QCSEE). Ball spline pitch change mechanism design report

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Detailed design parameters are presented for a variable-pitch change mechanism. The mechanism is a mechanical system containing a ball screw/spline driving two counteracting master bevel gears that mesh with pinion gears attached to each of the 18 fan blades.

  11. Trajectory Tracking Control of Mobile Robot by Time Based Spline Approach

    NASA Astrophysics Data System (ADS)

    Miyata, Junichi; Murakami, Toshiyuki; Ohnishi, Kouhei

    The mobile robot must move without unacceptably rapid motion. To address this issue, in this paper a preview controller with a time based spline approach is proposed. When using the time based spline approach, it is also important to plan an adequate trajectory. Here an approach to trajectory planning in which the trajectory is determined by a virtual manipulator is proposed. Numerical and experimental results are shown to confirm the proposed algorithm.
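    A minimal time-based spline sketch (illustrative, not the paper's controller): timestamped waypoints interpolated with a clamped cubic spline, so the robot starts and ends at rest and velocity is continuous everywhere:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Waypoints with timestamps; "clamped" forces zero velocity at both ends,
# which avoids abrupt motion at start and stop.
t_way = np.array([0.0, 2.0, 4.0, 6.0])
xy_way = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.5], [3.0, 0.0]])
traj = CubicSpline(t_way, xy_way, bc_type="clamped")

# Sample position and velocity along the time-parameterized trajectory.
t = np.linspace(0.0, 6.0, 301)
pos = traj(t)
vel = traj(t, 1)
print(pos[-1], vel[0], vel[-1])   # final position, zero start/end velocity
```

    A preview controller would track `traj(t)` ahead of the current time; retiming the knots (the `t_way` values) trades speed against smoothness without changing the path shape.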

  12. A smoothing spline that approximates Laplace transform functions only known on measurements on the real axis

    NASA Astrophysics Data System (ADS)

    D'Amore, L.; Campagna, R.; Galletti, A.; Marcellino, L.; Murli, A.

    2012-02-01

    The scientific and application-oriented interest in the Laplace transform and its inversion is attested by more than 1000 publications in the last century. Most of the inversion algorithms available in the literature assume that the Laplace transform function is available everywhere. Unfortunately, such an assumption is rarely fulfilled in applications of the Laplace transform. Very often, one only has a finite set of data and wants to recover an estimate of the inverse Laplace function from it. We propose a model for fitting the data. More precisely, given a finite set of measurements on the real axis, arising from an unknown Laplace transform function, we construct a dth degree generalized polynomial smoothing spline, where d = 2m - 1, such that internally to the data interval it is a dth degree polynomial complete smoothing spline minimizing a regularization functional, and outside the data interval it mimics the Laplace transform asymptotic behavior, i.e. it is a rational or an exponential function (the end behavior model), and at the boundaries of the data set it joins, with regularity up to order m - 1, with the end behavior model. We analyze in detail the generalized polynomial smoothing spline of degree d = 3. This choice was motivated by the (ill-)conditioning of the numerical computation, which strongly depends on the degree of the complete spline. We prove existence and uniqueness of this spline. We derive the approximation error and give a priori and computable bounds for it on the whole real axis. In this way, the generalized polynomial smoothing spline may be used in any real inversion algorithm to compute an approximation of the inverse Laplace function. Experimental results concerning Laplace transform approximation, numerical inversion of the generalized polynomial smoothing spline, and comparisons with the exponential smoothing spline conclude the work.

  13. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-11-01

    Raman spectroscopy is a powerful and non-invasive technique for molecular fingerprint detection that has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background, so we present a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal is suppressed by fitting a curve called a baseline using a cyclic approximation method. Instead of the traditional polynomial fitting, we use the B-spline as the fitting algorithm due to its advantages of low order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also present an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm can obtain the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method.
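    A simplified variant of cyclic baseline fitting can be sketched as follows (with a fixed smoothing parameter rather than the paper's adaptive knots): fit a smoothing B-spline, clip the spectrum to the fit so peaks stop pulling the baseline upward, and iterate:

```python
import numpy as np
from scipy.interpolate import splrep, splev

def spline_baseline(x, y, n_iter=10, smooth=None):
    # Cyclic approximation: fit a smoothing B-spline, clip the working signal
    # to the fit (peaks are removed from the data), and refit.
    s = smooth if smooth is not None else len(x) * np.var(y) * 0.1
    work = y.copy()
    for _ in range(n_iter):
        tck = splrep(x, work, s=s)
        base = splev(x, tck)
        work = np.minimum(work, base)
    return base

# Synthetic spectrum: slowly varying fluorescence plus one Raman band.
x = np.linspace(0.0, 100.0, 500)
baseline = 5.0 + 0.05 * x + 0.001 * x**2
peaks = 8.0 * np.exp(-0.5 * ((x - 40.0) / 1.5) ** 2)
y = baseline + peaks

est = spline_baseline(x, y)
corrected = y - est
print(f"max baseline error: {np.max(np.abs(est - baseline)):.2f}")
```

    Subtracting the converged baseline leaves the Raman band on a near-zero background; the paper's adaptive knot placement would concentrate spline flexibility where the background varies fastest.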

  14. Nonparametric inference in hidden Markov models using P-splines.

    PubMed

    Langrock, Roland; Kneib, Thomas; Sohn, Alexander; DeRuiter, Stacy L

    2015-06-01

    Hidden Markov models (HMMs) are flexible time series models in which the distribution of the observations depends on unobserved, serially correlated states. The state-dependent distributions in HMMs are usually taken from some class of parametrically specified distributions. The choice of this class can be difficult, and an unfortunate choice can have serious consequences, for example on state estimates and, more generally, on the resulting model complexity and interpretation. We demonstrate these practical issues in a real data application concerned with vertical speeds of a diving beaked whale, where parametric approaches can easily lead to overly complex state processes, impeding meaningful biological inference. In contrast, for the dive data, HMMs with nonparametrically estimated state-dependent distributions are much more parsimonious in terms of the number of states and easier to interpret, while fitting the data equally well. Our nonparametric estimation approach is based on representing the densities of the state-dependent distributions as linear combinations of a large number of standardized B-spline basis functions, imposing a penalty term on non-smoothness in order to maintain a good balance between goodness-of-fit and smoothness. PMID:25586063

  15. Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.

    PubMed

    Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei

    2014-02-01

    Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-commerce websites. In those applications where the response-time requirement is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced by the quantization process, in which the visual words are considered individually, ignoring the contextual relations among words. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by 1,000 times, and that, under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by a color-moment feature, which reduces the time cost by 9202% while maintaining performance comparable to the state-of-the-art methods. PMID:26270906

  16. Hierarchical T-splines: Analysis-suitability, Bézier extraction, and application as an adaptive basis for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Evans, E. J.; Scott, M. A.; Li, X.; Thomas, D. C.

    2015-02-01

    In this paper hierarchical analysis-suitable T-splines (HASTS) are developed. The resulting spaces are a superset of both analysis-suitable T-splines and hierarchical B-splines. The additional flexibility provided by the hierarchy of T-spline spaces results in simple, highly localized refinement algorithms which can be utilized in a design or analysis context. A detailed theoretical formulation is presented, including a proof of local linear independence for analysis-suitable T-splines, a requisite theoretical ingredient for HASTS. Bézier extraction is extended to HASTS, simplifying the implementation of HASTS in existing finite element codes. The behavior of a simple HASTS refinement algorithm is compared to the local refinement algorithm for analysis-suitable T-splines, demonstrating the superior efficiency and locality of the HASTS algorithm. Finally, HASTS are utilized as a basis for adaptive isogeometric analysis.

  17. Uncertainty Quantification via Random Domain Decomposition and Probabilistic Collocation on Sparse Grids

    SciTech Connect

    Lin, Guang; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2010-09-01

    Due to lack of knowledge or insufficient data, many physical systems are subject to uncertainty, and such uncertainty occurs on a multiplicity of scales. In this study, we conduct an uncertainty analysis of diffusion in random composites with two dominant scales of uncertainty: large-scale uncertainty in the spatial arrangement of materials and small-scale uncertainty in the parameters within each material. We propose a general two-scale framework that combines random domain decomposition (RDD) and the probabilistic collocation method (PCM) on sparse grids to quantify the large and small scales of uncertainty, respectively. Using sparse grid points instead of standard grids based on full tensor products for both the large and small scales of uncertainty can greatly reduce the overall computational cost, especially for random processes with small correlation length (a large number of random dimensions). For a one-dimensional random contact point problem and a random inclusion problem, analytical solutions and Monte Carlo simulations, respectively, have been used to verify the accuracy of the combined RDD-PCM approach. Additionally, we applied our combined RDD-PCM approach to two- and three-dimensional examples to demonstrate that it provides efficient, robust and nonintrusive approximations for the statistics of diffusion in random composites.
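    The collocation ingredient can be illustrated in one dimension (a sketch, not the RDD-PCM implementation): Gauss-Hermite collocation points estimate the mean of a model output under a Gaussian random parameter with very few model evaluations, nonintrusively:

```python
import numpy as np

# Stand-in for an expensive model evaluated at a random parameter Y ~ N(0, 1);
# in the paper this would be a diffusion solve within one random subdomain.
def model(y):
    return np.exp(0.5 * y)

# Probabilists' Gauss-Hermite rule: integrates against exp(-y^2/2), so the
# weights sum to sqrt(2*pi) and dividing by it yields an expectation.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
mean_pcm = np.sum(weights * model(nodes)) / np.sqrt(2.0 * np.pi)

exact = np.exp(0.125)    # E[exp(Y/2)] = exp(1/8) for Y ~ N(0, 1)
print(f"PCM mean: {mean_pcm:.8f}, exact: {exact:.8f}")
```

    Eight deterministic model runs already match the analytic mean to many digits for this smooth output; sparse grids extend the same idea to many random dimensions without the full tensor-product cost.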

  18. Multi-processing least squares collocation: Applications to gravity field analysis

    NASA Astrophysics Data System (ADS)

    Kaas, E.; Sørensen, B.; Tscherning, C. C.; Veicherts, M.

    2013-09-01

    Least Squares Collocation (LSC) is used for modeling the gravity field, including prediction and error estimation of various quantities. The method requires solving for as many unknowns as there are data and parameters. Cholesky reduction must be used in a nonstandard form due to the lack of positive-definiteness of the equation system. Furthermore, the error estimation produces a rectangular or triangular matrix which must be Cholesky reduced in the same nonstandard manner. LSC offers the possibility of adding new sets of data without reprocessing previously reduced parts of the equation system. Due to these factors, standard multi-processing Cholesky reduction programs cannot easily be applied. We have therefore implemented Fortran Open Multi-Processing (OpenMP) in the nonstandard Cholesky reduction. In the computation of matrix elements (covariances), as well as in the evaluation of the spherical harmonic series used in the remove/restore setting, we also take advantage of multi-processing. We describe the implementation using quadratic blocks, which aids in reducing the data transport overhead. Timing results for different block sizes and numbers of equations are presented. OpenMP scales favorably, so that e.g. the prediction and error estimation of grids from GOCE TRF vertical gradient data and ground gravity data can be done in less than two hours for a 25° by 25° area with data selected close to 0.125° nodes.
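    The core LSC prediction and error estimation can be sketched with dense linear algebra (using an illustrative Gaussian covariance model on synthetic 1-D data, not the blocked Fortran implementation described here):

```python
import numpy as np

# Assumed covariance model between two sets of points (illustrative choice).
def cov(a, b, variance=1.0, corr_len=0.3):
    d = np.abs(a[:, None] - b[None, :])
    return variance * np.exp(-((d / corr_len) ** 2))

# Noisy observations of an unknown signal.
rng = np.random.default_rng(1)
x_obs = np.linspace(0.0, 2.0, 25)
noise_var = 0.01
y_obs = np.sin(2.0 * x_obs) + rng.normal(0.0, np.sqrt(noise_var), x_obs.size)

# LSC: predicted signal s_hat = C_pt (C_tt + D)^-1 y, with error covariance
# C_pp - C_pt (C_tt + D)^-1 C_tp, where D is the observation noise covariance.
x_new = np.array([0.5, 1.0, 1.5])
C_tt = cov(x_obs, x_obs) + noise_var * np.eye(x_obs.size)
C_pt = cov(x_new, x_obs)
s_hat = C_pt @ np.linalg.solve(C_tt, y_obs)
err_cov = cov(x_new, x_new) - C_pt @ np.linalg.solve(C_tt, C_pt.T)
print(s_hat, np.sqrt(np.diag(err_cov)))
```

    The expensive step is exactly the factorization of C_tt, which grows with the number of observations; that is the Cholesky reduction the abstract parallelizes with OpenMP.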

  19. A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation

    NASA Astrophysics Data System (ADS)

    Rieser, Daniel; Mayer-Guerr, Torsten

    2014-05-01

    The depth of the Moho discontinuity is commonly derived from seismic observations, gravity measurements, or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for the estimation of the Moho depth on a regional scale. Due to its mission configuration and measurement setup, GOCE is able to contribute valuable information, in particular in the medium wavelengths of the gravity field spectrum, which is also of special interest for the crust-mantle boundary. In contrast to other studies, we use the full information of the gradient tensor in all three dimensions. The problem is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation, the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are derived directly by LSC on a regional scale, which in turn are converted into Moho depths. First investigations proved the ability of this method to resolve the gravity inversion problem with only a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution the processing strategy will be introduced and the most recent developments and results using the currently available GOCE data will be presented.

  20. Computational Framework for a Fully-Coupled, Collocated-Arrangement Flow Solver Applicable at all Speeds

    NASA Astrophysics Data System (ADS)

    Xiao, Cheng-Nian; Denner, Fabian; van Wachem, Berend

    2015-11-01

    A pressure-based Navier-Stokes solver which is applicable to fluid flow problems over a wide range of speeds is presented. The novel solver is based on a collocated variable arrangement and uses a modified Rhie-Chow interpolation method to ensure implicit pressure-velocity coupling. A Mach number biased modification to the continuity equation, together with the coupling of flow and thermodynamic variables via an energy equation and an equation of state, enables the simulation of compressible flows in the transonic and supersonic Mach number regimes. The flow equation systems are all solved simultaneously, thus guaranteeing strong coupling between pressure and velocity at each iteration step. Shock-capturing is accomplished via nonlinear spatial discretisation schemes which adaptively apply an appropriate blending of first-order upwind and second-order central schemes depending on the local smoothness of the flow field. A selection of standard test problems will be presented to demonstrate the solver's capability of handling incompressible as well as compressible flow fields of vastly different speed regimes on structured as well as unstructured meshes. The authors are grateful for the financial support of Shell.

  1. Vibration suppression in cutting tools using collocated piezoelectric sensors/actuators with an adaptive control algorithm

    SciTech Connect

    Radecki, Peter P; Farinholt, Kevin M; Park, Gyuhae; Bement, Matthew T

    2008-01-01

    The machining process is very important in many engineering applications. In high precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties can vary in real-time, resulting in unexpected or undesirable effects on the surface finish of the machining product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses from machining processes. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control versus baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern day machining.

  2. Automatic and accurate reconstruction of distal humerus contours through B-Spline fitting based on control polygon deformation.

    PubMed

    Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A

    2014-12-01

    The strong advent of computer-assisted technologies in modern orthopedic surgery calls for the expansion of computationally efficient techniques built on the broad base of readily available computer-aided engineering tools. However, one of the common challenges during the current developmental phase remains the lack of reliable frameworks that allow a fast and precise conversion of the anatomical information acquired through computed tomography to a format acceptable to computer-aided engineering software. To address this, this study proposes an integrated and automatic framework capable of extracting and then postprocessing the original imaging data into a common planar and closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computed tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm from alternate representations of the bone geometry obtained through different contact-based data acquisition or data processing methods. PMID:25515225
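    The least-squares B-spline fitting step can be sketched with SciPy (a simplified stand-in for the control-polygon-deformation technique, applied to a synthetic noisy contour rather than CT data):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Synthetic planar contour slice: an ellipse with measurement noise,
# parameterized by t, standing in for an extracted CT bone contour.
t = np.linspace(0.0, 2.0 * np.pi, 200)
contour = np.column_stack([np.cos(t), 0.6 * np.sin(t)])
contour += np.random.default_rng(2).normal(0.0, 0.01, contour.shape)

# Least-squares cubic B-spline fit: 10 interior knots, clamped knot vector.
k = 3
knots = np.concatenate([[0.0] * (k + 1),
                        np.linspace(0.0, 2.0 * np.pi, 12)[1:-1],
                        [2.0 * np.pi] * (k + 1)])
spl = make_lsq_spline(t, contour, knots, k=k)   # fits x(t) and y(t) jointly

fitted = spl(t)
rms = np.sqrt(np.mean(np.sum((fitted - contour) ** 2, axis=1)))
print(f"RMS fit deviation: {rms:.4f}")
```

    Here the solver places the spline's control points by linear least squares; the paper's two-step technique instead deforms the control polygon iteratively, which keeps the fit robust on noisy, unevenly sampled anatomical data.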

  3. Miniaturized Multi-Band Antenna via Element Collocation

    SciTech Connect

    Martin, R. P.

    2012-06-01

    The resonant frequency of a microstrip patch antenna may be reduced through the addition of slots in the radiating element. Expanding upon this concept in favor of a significant reduction in the tuned width of the radiator, nearly 60% of the antenna metallization is removed, as seen in the top view of the antenna’s radiating element (shown in red, below, left). To facilitate an increase in the gain of the antenna, the radiator is suspended over the ground plane (green) by an air substrate at a height of 0.250″ while being mechanically supported by 0.030″-thick Rogers RO4003 laminate in the same profile as the element. Although the entire surface of the antenna (red) provides 2.45 GHz operation with insignificant negative effects on performance after material removal, the smaller square microstrip in the middle must be isolated from the additional aperture in order to afford higher-frequency operation. A low-insertion-loss path centered at 2.45 GHz may simultaneously provide considerable attenuation at additional frequencies through the implementation of a series-parallel resonant reactive path. However, an inductive reactance alone will not permit lower-frequency energy to propagate across the intended discontinuity. To mitigate this, a capacitance is introduced in series with the inductor, generating a resonance at 2.45 GHz with minimum forward transmission loss. Four of these reactive pairs are placed between the coplanar elements as shown. Therefore, the aperture of the lower-frequency outer segment includes the smaller radiator, while the higher-frequency section is isolated from the additional material. To avoid cross-polarization losses due to the orientation of a transmitter or receiver relative to the antenna, circular polarization is realized by a quadrature coupler for each collocated antenna, as seen in the bottom view of the antenna (right). To generate electromagnetic radiation rotating concentrically about the direction of propagation, ideally one-half of the power must be delivered to the output of each branch with identical amplitude and a 90-degree phase shift. For this reason, each arm of the coupler is spaced λ/4 apart.

  4. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    PubMed

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. PMID:21507749
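The table-based phase-to-sinusoid conversion can be illustrated with a toy model. This sketch simply interpolates exact sine samples with a periodic cubic spline, whereas the paper derives optimal coefficients in the spectral domain; the 64-entry table size is an illustrative assumption:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy phase-to-sinusoid amplitude converter (PSAC): a periodic cubic
# spline through a coarse sine table, evaluated at arbitrary phase.
SEGMENTS = 64
knots = np.linspace(0.0, 2*np.pi, SEGMENTS + 1)
table = np.sin(knots)
table[-1] = table[0]   # enforce exact periodicity for the fit

psac = CubicSpline(knots, table, bc_type='periodic')

# Worst-case amplitude error over a dense phase sweep; cubic spline
# error scales as h^4, so even a small table is very accurate.
phase = np.linspace(0.0, 2*np.pi, 100001)
max_abs_err = float(np.max(np.abs(psac(phase) - np.sin(phase))))
print(max_abs_err)
```

Even this naive 64-segment table yields sub-1e-5 amplitude error, hinting at why piecewise-cubic PSACs reach very high SFDR with modest memory.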

  5. Full-Relativistic B-Spline R-Matrix Calculations for Electron Collisions with Gold Atoms.

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Bartschat, Klaus; Froese Fischer, Charlotte

    2008-05-01

    We have extended the B-spline R-matrix (close-coupling) method [1] to fully account for relativistic effects in a Dirac-Coulomb formulation. The newly developed computer code has been applied to electron-impact excitation of the (5d^10 6s)^2S_1/2 -> (5d^10 6p)^2P_1/2,3/2 and (5d^10 6s)^2S_1/2 -> (5d^9 6s^2)^2D_5/2,3/2 transitions in Au. Our numerical implementation of the close-coupling method enables us to construct term-dependent, non-orthogonal sets of one-electron orbitals for the bound and continuum electrons. This is a critical aspect in the present problem, especially for the 5d and the 6s orbitals. Furthermore, strong core-polarization effects can be accounted for ab initio rather than through a semi-empirical and local model potential. Our results will be compared with recent experimental data [2] and predictions from other theoretical approaches [3]. [1] O. Zatsarinny, Comp. Phys. Commun. 174, 273 (2006). [2] M. Maslov, P.J.O. Teubner, and M.J. Brunger, private communication (2008). [3] D.V. Fursa, I. Bray, and R.P. McEachran, private communication (2008).

  6. Virtual volume resection using multi-resolution triangular representation of B-spline surfaces.

    PubMed

    Ruskó, László; Mátéka, Ilona; Kriston, András

    2013-08-01

    Computer-assisted analysis of organs has an important role in clinical diagnosis and therapy planning. Along with visualization, the manipulation of 3-dimensional (3D) objects is a key feature of medical image processing tools. The goal of this work was to develop an efficient and easy-to-use tool that allows the physician to partition a segmented organ into its segments or lobes. The proposed tool allows the user to define a cutting surface by drawing traces on 2D sections of a 3D object, cut the object into two pieces with a smooth surface that fits the input traces, and iterate the process until the object is partitioned at the desired level. The tool is based on an algorithm that interpolates the user-defined traces with a B-spline surface and computes a binary cutting volume that represents the different sides of the surface. The computation of the cutting volume is based on the multi-resolution triangulation of the B-spline surface. The proposed algorithm was integrated into an open-source medical image processing framework. Using the tool, the user can select the object to be partitioned (e.g. a segmented liver), define the cutting surface based on the corresponding medical image (visualizing the internal structure of the liver), cut the selected object, and iterate the process. In the case of liver segment separation, the cuts can be performed according to a predefined sequence, which makes it possible to label the temporary as well as the final partitions (lobes, segments) automatically. The presented tool was evaluated for anatomical segment separation of the liver involving 14 cases and for virtual liver tumor resection involving one case. The segment separation was repeated three times by one physician for all cases, and the average and standard deviation of segment volumes were computed. According to the test experience, the presented algorithm proved efficient and user-friendly enough to perform free-form cuts for liver segment separation and virtual liver tumor resection. The volume quantification of segments showed good correlation with the prior art and with vessel-based liver segment separation, which demonstrates the clinical usability of the presented method. PMID:23726362

  7. An evaluation of prefiltered B-spline reconstruction for quasi-interpolation on the Body-Centered Cubic lattice.

    PubMed

    Csébfalvi, Balázs

    2010-01-01

    In this paper, we demonstrate that quasi-interpolation of orders two and four can be efficiently implemented on the Body-Centered Cubic (BCC) lattice by using tensor-product B-splines combined with appropriate discrete prefilters. Unlike the nonseparable box-spline reconstruction previously proposed for the BCC lattice, the prefiltered B-spline reconstruction can utilize the fast trilinear texture-fetching capability of the recent graphics cards. Therefore, it can be applied for rendering BCC-sampled volumetric data interactively. Furthermore, we show that a separable B-spline filter can suppress the postaliasing effect much more isotropically than a nonseparable box-spline filter of the same approximation power. Although prefilters that make the B-splines interpolating on the BCC lattice do not exist, we demonstrate that quasi-interpolating prefiltered linear and cubic B-spline reconstructions can still provide similar or higher image quality than the interpolating linear box-spline and prefiltered quintic box-spline reconstructions, respectively. PMID:20224143

  8. Temporal gravity field modeling based on least square collocation with short-arc approach

    NASA Astrophysics Data System (ADS)

    ran, jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, with two main objectives. First, we seek the optimal method to estimate the accelerometer parameters; second, we recover the monthly gravity model based on the least-squares collocation method. This method has received less attention than the least-squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters provide valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of the estimated monthly gravity field. The gravity errors outside the continents are significantly reduced with the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed by the GeoForschungsZentrum (GFZ), the Center for Space Research (CSR), and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.
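The least-squares collocation prediction step has a compact closed form, s_hat = C_st (C_tt + D)^-1 l, where C_tt is the signal covariance among observation points, C_st the signal covariance between prediction and observation points, and D the noise covariance. A minimal 1-D sketch with an assumed Gaussian covariance model (not the GRACE processing chain) looks like this:

```python
import numpy as np

# Illustrative least-squares collocation (LSC) on a synthetic 1-D
# signal: predict the signal at new points from noisy observations.
rng = np.random.default_rng(0)

def cov(a, b, sigma=1.0, L=1.0):
    """Assumed Gaussian covariance model C(r) = sigma^2 exp(-(r/L)^2)."""
    return sigma**2 * np.exp(-((a[:, None] - b[None, :]) / L)**2)

x_obs = np.linspace(0.0, 10.0, 51)
noise = 0.05
l = np.sin(x_obs) + noise * rng.standard_normal(x_obs.size)  # observations

x_new = np.linspace(0.2, 9.8, 97)
C_tt = cov(x_obs, x_obs)
C_st = cov(x_new, x_obs)
# s_hat = C_st (C_tt + D)^-1 l, with D = noise^2 * I
s_hat = C_st @ np.linalg.solve(C_tt + noise**2 * np.eye(x_obs.size), l)

rmse = float(np.sqrt(np.mean((s_hat - np.sin(x_new))**2)))
print(rmse)
```

The prediction error falls well below the observation noise, illustrating how the covariance information improves the estimated field; the cost of forming and solving with C_tt is also what makes LSC expensive at GRACE scale.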

  9. On the anomaly of velocity-pressure decoupling in collocated mesh solutions

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook; Vanoverbeke, Thomas

    1991-01-01

    The use of various pressure correction algorithms originally developed for fully staggered meshes can yield a velocity-pressure decoupled solution for collocated meshes. The mechanism that causes velocity-pressure decoupling is identified. It is shown that the use of a partial differential equation for the incremental pressure eliminates such a mechanism and yields a velocity-pressure coupled solution. Example flows considered are a three dimensional lid-driven cavity flow and a laminar flow through a 90 deg bend square duct. Numerical results obtained using the collocated mesh are in good agreement with the measured data and other numerical results.

  10. Non-parametric regional VTEC modeling with Multivariate Adaptive Regression B-Splines

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioğlu, Mahmut Onur

    2011-11-01

    In this work Multivariate Adaptive Regression B-Splines (BMARS) is applied to regional spatio-temporal mapping of the Vertical Total Electron Content (VTEC) using ground-based Global Positioning System (GPS) observations. BMARS is a non-parametric regression technique that uses compactly supported tensor-product B-splines as basis functions, which are obtained automatically from the observations. The algorithm uses a scale-by-scale model-building strategy that searches for B-splines at each scale fitting the data adequately, and it provides smoother approximations than the original Multivariate Adaptive Regression Splines (MARS). It is capable of processing high-dimensional problems with large amounts of data and can easily be parallelized. The test data are collected from 32 ground-based GPS stations located in North America. The results are compared numerically and visually with a regional VTEC model generated via the original MARS using piecewise-linear basis functions and with another regional VTEC model based on B-splines.

  11. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    SciTech Connect

    Guo, Y.; Keller, J.; Errichello, R.; Halse, C.

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  12. An Automatic Collocation Writing Assistant for Taiwanese EFL Learners: A Case of Corpus-Based NLP Technology

    ERIC Educational Resources Information Center

    Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin

    2008-01-01

    Previous work in the literature reveals that EFL learners were deficient in collocations that are a hallmark of near native fluency in learner's writing. Among different types of collocations, the verb-noun (V-N) one was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…

  13. The Challenge of English Language Collocation Learning in an ES/FL Environment: PRC Students in Singapore

    ERIC Educational Resources Information Center

    Ying, Yang

    2015-01-01

    This study aimed to seek an in-depth understanding about English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…

  14. A Corpus-Driven Design of a Test for Assessing the ESL Collocational Competence of University Students

    ERIC Educational Resources Information Center

    Jaen, Maria Moreno

    2007-01-01

    This paper reports an assessment of the collocational competence of students of English Linguistics at the University of Granada. This was carried out to meet a two-fold purpose. On the one hand, we aimed to establish a solid corpus-driven approach based upon a systematic and reliable framework for the evaluation of collocational competence in…

  16. Formulaic Language and Collocations in German Essays: From Corpus-Driven Data to Corpus-Based Materials

    ERIC Educational Resources Information Center

    Krummes, Cedric; Ensslin, Astrid

    2015-01-01

    Whereas there exists a plethora of research on collocations and formulaic language in English, this article contributes towards a somewhat less developed area: the understanding and teaching of formulaic language in German as a foreign language. It analyses formulaic sequences and collocations in German writing (corpus-driven) and provides modern…

  17. The Relationship between Experiential Learning Styles and the Immediate and Delayed Retention of English Collocations among EFL Learners

    ERIC Educational Resources Information Center

    Mohammadzadeh, Afsaneh

    2012-01-01

    This study was carried out to find out if there was any significant difference in learning English collocations by learning with different dominant experiential learning styles. Seventy-five participants took part in the study in which they were taught a series of English collocations. The entry knowledge of the participants with regard to…

  18. Regional VTEC Modeling over Turkey Using MARS (Multivariate Adaptive Regression Splines)

    NASA Astrophysics Data System (ADS)

    Onur Karslioglu, Mahmut; Durmaz, Murat; Nohutcu, Metin

    2010-05-01

    It is generally known that a spherical harmonic representation of the ionosphere is not suitable for local and regional applications. Additionally, irregular data and data gaps also cause numerical difficulties in modeling the ionosphere. We propose an efficient algorithm based on Multivariate Adaptive Regression Splines (MARS) for a new non-parametric model providing regional spatio-temporal mapping of the ionospheric electron density. MARS can process very large data sets of observations and is an adaptive, flexible method applicable to both linear and non-linear problems. The basis functions are derived directly from the observations and have a space-partitioning property, which results in an adaptive model. This property helps avoid the numerical problems and computational inefficiency caused by the number of coefficients, which would otherwise have to be increased to detect local variations of the ionosphere. The model complexity can be controlled by the user by limiting the maximum number of coefficients and the order of products of the basis functions. In this study the MARS algorithm is applied to real data sets over Turkey for regional ionosphere modelling.
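The adaptive, data-derived basis functions at the heart of MARS can be sketched with a toy 1-D forward pass: greedily add mirrored hinge pairs max(0, x-t) and max(0, t-x) that best reduce the residual sum of squares. The knot grid, loop counts, and target function below are illustrative assumptions, not the VTEC setup:

```python
import numpy as np

# Toy MARS-style forward selection in 1-D with mirrored hinge bases.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.abs(x - 0.4) + 0.3*np.maximum(0.0, x - 0.7)  # piecewise-linear target

def fit_rss(B, y):
    """Least-squares fit; return residual sum of squares and prediction."""
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    pred = B @ coef
    return float(np.sum((y - pred)**2)), pred

basis = [np.ones_like(x)]
for _ in range(4):                       # add up to 4 hinge pairs
    best = None
    for t in np.linspace(0.05, 0.95, 19):   # candidate knots from the data range
        trial = basis + [np.maximum(0.0, x - t), np.maximum(0.0, t - x)]
        rss, _ = fit_rss(np.column_stack(trial), y)
        if best is None or rss < best[0]:
            best = (rss, t)
    t = best[1]
    basis += [np.maximum(0.0, x - t), np.maximum(0.0, t - x)]

rss, pred = fit_rss(np.column_stack(basis), y)
r2 = 1.0 - rss / float(np.sum((y - y.mean())**2))
print(r2)
```

Because the knots are chosen adaptively where the data demand them, a handful of basis functions suffices; this space-partitioning behavior is what keeps the coefficient count low in the regional ionosphere models.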

  19. B-spline R-matrix with pseudostates calculations for electron collisions with atomic nitrogen

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Zatsarinny, Oleg; Bartschat, Klaus

    2014-10-01

    The B-spline R-matrix (BSR) with pseudostates method is employed to treat electron collisions with nitrogen atoms. Predictions for elastic scattering, excitation, and ionization are presented for incident energies between threshold and about 100 eV. The largest scattering model included 690 coupled states, most of which were pseudostates that simulate the effect of the high-lying Rydberg spectrum and, most importantly, the ionization continuum on the results for transitions between the discrete physical states of interest. Similar to our recent work on e-C collisions, this effect is particularly strong at ``intermediate'' incident energies of a few times the ionization threshold. Predictions from a number of collision models will be compared with each other and the very limited information currently available in the literature. Estimates for ionization cross sections will also be provided. This work was supported by the China Scholarship Council (Y.W.) and the United States National Science Foundation under Grants PHY-1068140, PHY-1212450, and the XSEDE allocation PHY-090031 (O.Z. and K.B.).

  20. B-spline R-matrix with pseudostates calculations for electron collisions with atomic nitrogen

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Zatsarinny, Oleg; Bartschat, Klaus

    2014-05-01

    The B-spline R-matrix (BSR) with pseudostates method is employed to treat electron collisions with nitrogen atoms. Predictions for elastic scattering, excitation, and ionization are presented for incident energies between threshold and about 100 eV. The largest scattering model included 690 coupled states, most of which were pseudostates that simulate the effect of the high-lying Rydberg spectrum and, most importantly, the ionization continuum on the results for transitions between the discrete physical states of interest. Similar to our recent work on e-C collisions, this effect is particularly strong at ``intermediate'' incident energies of a few times the ionization threshold. Predictions from a number of collision models will be compared with each other and the very limited information currently available in the literature. Estimates for ionization cross sections will also be provided. This work was supported by the China Scholarship Council (Y.W.) and the United States National Science Foundation under grants PHY-1068140, PHY-1212450, and the XSEDE allocation PHY-090031.

  1. Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting

    NASA Astrophysics Data System (ADS)

    Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2012-02-01

    We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
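The iterative fitting idea, fit a smoothing spline to all candidates, drop points far from the current curve, refit, can be sketched on synthetic data. The "fissure", the injected noise objects, and the 0.1 rejection threshold are illustrative assumptions, and SciPy's generic smoothing spline stands in for the paper's fitting routine:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Iterative B-spline fitting with outlier rejection: noise objects far
# from the fissure are discarded between fitting passes.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 120)
true = 0.5 + 0.2*np.sin(3.0*x)                 # the "fissure" to recover
y = true + 0.01*rng.standard_normal(x.size)
y[::10] += rng.uniform(0.3, 0.6, 12)           # spurious non-fissure objects

keep = np.ones(x.size, dtype=bool)
for s in (3.0, 0.02, 0.02):                    # loose first pass, then tight
    tck = splrep(x[keep], y[keep], s=s)        # smoothing cubic B-spline
    resid = np.abs(y - splev(x, tck))
    keep = resid < 0.1                         # reject far-away candidates

rmse = float(np.sqrt(np.mean((splev(x, tck) - true)**2)))
print(int(keep.sum()), rmse)
```

The first loose fit rides above the noise objects just enough to flag them; the refits on the surviving points recover the underlying curve, mirroring how the algorithm avoids classifying every candidate object explicitly.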

  2. Visualization of multidimensional data with collocated paired coordinates and general line coordinates

    NASA Astrophysics Data System (ADS)

    Kovalerchuk, Boris

    2013-12-01

    Often multidimensional data are visualized by splitting n-D data into a set of low-dimensional data sets. While useful, this destroys the integrity of the n-D data and leads to a shallow understanding of complex n-D data. To mitigate this challenge, the difficult perceptual task of assembling low-dimensional visualized pieces into whole n-D vectors must be solved. Another way is lossy dimension reduction that maps n-D vectors to 2-D vectors (e.g., Principal Component Analysis). Such 2-D vectors carry only part of the information from the n-D vectors, with no way to restore the n-D vectors exactly. An alternative way toward deeper understanding of n-D data is a visual representation in 2-D that fully preserves the n-D data; Parallel and Radial coordinates are such methods. Developing new methods that preserve dimensions is a long-standing and challenging task, which we address by proposing Paired Coordinates, a new type of n-D data visual representation, and by generalizing Parallel and Radial coordinates as General Line Coordinates. The important novelty of the Paired Coordinates concept is that it uses a single 2-D plot to represent n-D data as an oriented graph based on the idea of collocating pairs of attributes. The advantage of General Line Coordinates and Paired Coordinates is in providing a common framework that includes Parallel and Radial coordinates and generates a large number of new visual representations of multidimensional data without lossy dimension reduction.
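The core mapping is simple enough to state as code. This is a hedged sketch of the paired-coordinates idea: an n-D point becomes the vertices of a directed polyline (x1,x2) -> (x3,x4) -> ... drawn in a single 2-D plot; the function name and padding rule are our own illustrative choices:

```python
# Map an n-D point to the 2-D vertices of its oriented graph by
# collocating consecutive pairs of attributes.
def paired_coordinates(point):
    """Return the 2-D vertices of the directed polyline for one point."""
    if len(point) % 2:
        point = list(point) + [0.0]   # pad odd dimensionality (our choice)
    return [(point[i], point[i + 1]) for i in range(0, len(point), 2)]

# A 4-D point maps to a single arrow between two 2-D vertices.
print(paired_coordinates([0.2, 0.4, 0.1, 0.6]))  # -> [(0.2, 0.4), (0.1, 0.6)]
```

Since n attributes yield only n/2 plotted vertices, the whole vector is preserved in one plot, in contrast to lossy 2-D projections.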

  3. Penalized splines for smooth representation of high-dimensional Monte Carlo datasets

    NASA Astrophysics Data System (ADS)

    Whitehorn, Nathan; van Santen, Jakob; Lafebre, Sven

    2013-09-01

    Detector response to a high-energy physics process is often estimated by Monte Carlo simulation. For purposes of data analysis, the results of this simulation are typically stored in large multi-dimensional histograms, which can quickly become both too large to easily store and manipulate and numerically problematic due to unfilled bins or interpolation artifacts. We describe here an application of the penalized spline technique (Marx and Eilers, 1996) [1] to efficiently compute B-spline representations of such tables and discuss aspects of the resulting B-spline fits that simplify many common tasks in handling tabulated Monte Carlo data in high-energy physics analysis, in particular their use in maximum-likelihood fitting.
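The penalized-spline technique cited above combines a rich B-spline basis with a difference penalty on the coefficients. A minimal 1-D sketch, with an illustrative knot count, smoothing weight, and test signal (not the detector-response tables of the paper), looks like this:

```python
import numpy as np
from scipy.interpolate import BSpline

# P-spline smoothing: cubic B-spline basis plus a second-order
# difference penalty on the coefficients, solved in closed form.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 300)
y = np.sin(2*np.pi*x) + 0.1*rng.standard_normal(x.size)   # noisy "table"

k = 3                                                      # cubic splines
t = np.concatenate([[0.0]*k, np.linspace(0.0, 1.0, 20), [1.0]*k])
n_basis = len(t) - k - 1
# Build the design matrix column by column via unit coefficient vectors.
B = np.column_stack(
    [BSpline(t, np.eye(n_basis)[i], k)(x) for i in range(n_basis)]
)

D = np.diff(np.eye(n_basis), n=2, axis=0)   # second-order differences
lam = 1.0                                   # penalty weight (illustrative)
beta = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)

rmse = float(np.sqrt(np.mean((B @ beta - np.sin(2*np.pi*x))**2)))
print(rmse)
```

The penalty keeps neighboring coefficients close, so empty or noisy regions inherit smooth behavior from their neighbors, which is exactly the property that tames unfilled histogram bins.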

  4. Reducing memory demands of splined orbitals in diffusion Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron; Reboredo, Fernando

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. For the example case of bulk MnO we have currently achieved a memory savings of 50% while only increasing the overall computational cost of the simulation by 15%. This work is supported by the Materials Sciences & Engineering Division of the Office of Basic Energy Sciences, U.S. Department of Energy (DOE).

  5. Towards a More General Type of Univariate Constrained Interpolation with Fractal Splines

    NASA Astrophysics Data System (ADS)

    Chand, A. K. B.; Viswanathan, P.; Reddy, K. M.

    2015-09-01

    Recently, in [Electron. Trans. Numer. Anal. 41 (2014) 420-442] the authors introduced a new class of rational cubic fractal interpolation functions with linear denominators via fractal perturbation of traditional nonrecursive rational cubic splines and investigated their basic shape-preserving properties. The main goal of the current paper is to embark on univariate constrained fractal interpolation that is more general than what has been considered so far. To this end, we propose some strategies for selecting the parameters of the rational fractal spline so that the interpolating curves lie strictly above or below a prescribed linear or quadratic spline function. The approximation property of the proposed rational cubic fractal spline is broached by using the Peano kernel theorem as an interlude. The paper also provides an illustration of the background theory, supported by examples.

  6. Material approximation of data smoothing and spline curves inspired by slime mould.

    PubMed

    Jones, Jeff; Adamatzky, Andrew

    2014-09-01

    The giant single-celled slime mould Physarum polycephalum is known to approximate a number of network problems via growth and adaptation of its protoplasmic transport network and can serve as an inspiration towards unconventional, material-based computation. In Physarum, predictable morphological adaptation is prevented by its adhesion to the underlying substrate. We investigate what possible computations could be achieved if these limitations were removed and the organism was free to completely adapt its morphology in response to changing stimuli. Using a particle model of Physarum displaying emergent morphological adaptation behaviour, we demonstrate how a minimal approach to collective material computation may be used to transform and summarise properties of spatially represented datasets. We find that the virtual material relaxes more strongly to high-frequency changes in data, which can be used for the smoothing (or filtering) of data by approximating moving average and low-pass filters in 1D datasets. The relaxation and minimisation properties of the model enable the spatial computation of B-spline curves (approximating splines) in 2D datasets. Both clamped and unclamped spline curves of open and closed shapes can be represented, and the degree of spline curvature corresponds to the relaxation time of the material. The material computation of spline curves also includes novel quasi-mechanical properties, including unwinding of the shape between control points and a preferential adhesion to longer, straighter paths. Interpolating splines could not directly be approximated due to the formation and evolution of Steiner points at narrow vertices, but were approximated after rectilinear pre-processing of the source data. This pre-processing was further simplified by transforming the original data to contain the material inside the polyline. These exemplary results expand the repertoire of spatially represented unconventional computing devices by demonstrating a simple, collective and distributed approach to data and curve smoothing. PMID:24979075

  7. Pattern recognition and lithological interpretation of collocated seismic and magnetotelluric models using self-organizing maps

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Muñoz, G.; Moeck, I.

    2012-05-01

    Joint interpretation of models from seismic tomography and inversion of magnetotelluric (MT) data is an efficient approach to determine the lithology of the subsurface. Statistical methods are well established but were developed for only two types of models so far (seismic P velocity and electrical resistivity). We apply self-organizing maps (SOMs), which have no limitations in the number of parameters considered in the joint interpretation. Our SOM method includes (1) generation of data vectors from the seismic and MT images, (2) unsupervised learning, (3) definition of classes by algorithmic segmentation of the SOM using image processing techniques and (4) application of learned knowledge to classify all data vectors and assign a lithological interpretation for each data vector. We apply the workflow to collocated P velocity, vertical P-velocity gradient and resistivity models derived along a 40 km profile around the geothermal site Groß Schönebeck in the Northeast German Basin. The resulting lithological model consists of eight classes covering Cenozoic, Mesozoic and Palaeozoic sediments down to 5 km depth. There is a remarkable agreement between the litho-type distribution from the SOM analysis and regional marker horizons interpolated from sparse 2-D industrial reflection seismic data. The most interesting features include (1) characteristic properties of the Jurassic (low P-velocity gradients, low resistivity values) interpreted as the signature of shales, and (2) a pattern within the Upper Permian Zechstein layer with low resistivity and increased P-velocity values within the salt depressions and increased resistivity and decreased P velocities in the salt pillows. The latter is explained in our interpretation by flow of less dense salt matrix components to form the pillows while denser and more brittle evaporites such as anhydrite remain in place during the salt mobilization.
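Steps (1) and (2) of the workflow, building data vectors from collocated models and unsupervised SOM learning, can be sketched on synthetic data. The two-parameter "voxels" below stand in for collocated velocity/resistivity values, and the map size, learning schedule, and clusters are illustrative assumptions:

```python
import numpy as np

# Minimal self-organizing map (SOM) trained on two synthetic
# "litho-type" clusters of 2-D data vectors.
rng = np.random.default_rng(3)
data = np.vstack([rng.normal([0, 0], 0.1, (100, 2)),    # litho-type A
                  rng.normal([1, 1], 0.1, (100, 2))])   # litho-type B

grid = np.array([(i, j) for i in range(4) for j in range(4)], float)
weights = rng.uniform(-0.5, 1.5, (16, 2))               # random initial map

def quant_error(w):
    """Mean distance from each data vector to its best-matching unit."""
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

err0 = quant_error(weights)
for epoch in range(30):
    sigma = 2.0 * 0.9**epoch          # shrinking neighborhood radius
    lr = 0.5 * 0.95**epoch            # decaying learning rate
    for v in data:
        bmu = int(np.argmin(np.sum((weights - v)**2, axis=1)))
        h = np.exp(-np.sum((grid - grid[bmu])**2, axis=1) / (2*sigma**2))
        weights += lr * h[:, None] * (v - weights)
err1 = quant_error(weights)
print(err0, err1)
```

After training, nearby map units respond to similar data vectors, so segmenting the map (step 3) groups units into classes that can then label every voxel (step 4).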

  8. Multivariate multilevel spline models for parallel growth processes: application to weight and mean arterial pressure in pregnancy

    PubMed Central

    Macdonald-Wallis, Corrie; Lawlor, Debbie A; Palmer, Tom; Tilling, Kate

    2012-01-01

    Growth models are commonly used in life course epidemiology to describe growth trajectories and their determinants or to relate particular patterns of change to later health outcomes. However, methods to analyse relationships between two or more change processes occurring in parallel, in particular to assess evidence for causal influences of change in one variable on subsequent changes in another, are less developed. We discuss linear spline multilevel models with a multivariate response and show how these can be used to relate rates of change in a particular time period in one variable to later rates of change in another variable by using the variances and covariances of individual-level random effects for each of the splines. We describe how regression coefficients can be calculated for these associations and how these can be adjusted for other parameters such as random effect variables relating to baseline values or rates of change in earlier time periods, and compare different methods for calculating the standard errors of these regression coefficients. We also show that these models can equivalently be fitted in the structural equation modelling framework and apply each method to weight and mean arterial pressure changes during pregnancy, obtaining similar results for multilevel and structural equation models. This method improves on the multivariate linear growth models that have previously been used to model parallel processes, because it enables nonlinear patterns of change to be modelled and the temporal sequence of multivariate changes to be determined, with adjustment for change in earlier time periods. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22733701
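    The core building block of the linear spline models described above can be sketched with a deliberately simplified, single-level example: a piecewise-linear basis with one interior knot, fitted by ordinary least squares in numpy. This is an illustration of the spline parameterization only, not the multilevel or structural-equation machinery of the paper; the knot location, simulated gestational-age range, and coefficient values are invented for the demo.

    ```python
    import numpy as np

    def linear_spline_basis(t, knots):
        """Design matrix for a piecewise-linear spline: intercept, baseline slope,
        and one change-of-slope column (t - k)_+ per interior knot."""
        cols = [np.ones_like(t), t]
        for k in knots:
            cols.append(np.clip(t - k, 0.0, None))  # truncated line (t - k)_+
        return np.column_stack(cols)

    # Simulated weight (kg) over gestational age (weeks), slope change at 20 weeks.
    rng = np.random.default_rng(0)
    t = rng.uniform(8.0, 40.0, size=400)
    y = 60.0 + 0.20 * t + 0.35 * np.clip(t - 20.0, 0.0, None) \
        + rng.normal(0.0, 0.5, size=400)

    X = linear_spline_basis(t, knots=[20.0])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # beta[1] is the pre-knot rate of change; beta[1] + beta[2] is the post-knot rate.
    ```

    In the multilevel version each woman gets her own random intercept and random spline slopes, and it is the covariances among those individual-level random effects (across the two outcomes, weight and mean arterial pressure) that carry the parallel-process associations the paper analyses.
    
    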

  9. Collocational Differences between L1 and L2: Implications for EFL Learners and Teachers

    ERIC Educational Resources Information Center

    Sadeghi, Karim

    2009-01-01

    Collocations are one of the areas that produce problems for learners of English as a foreign language. Iranian learners of English are by no means an exception. Teaching experience at schools, private language centers, and universities in Iran suggests that a significant part of EFL learners' problems with producing the language, especially at…

  10. Investigation of Native Speaker and Second Language Learner Intuition of Collocation Frequency

    ERIC Educational Resources Information Center

    Siyanova-Chanturia, Anna; Spina, Stefania

    2015-01-01

    Research into frequency intuition has focused primarily on native (L1) and, to a lesser degree, nonnative (L2) speaker intuitions about single word frequency. What remains a largely unexplored area is L1 and L2 intuitions about collocation (i.e., phrasal) frequency. To bridge this gap, the present study aimed to answer the following question: How…

  11. Your Participation Is "Greatly/Highly" Appreciated: Amplifier Collocations in L2 English

    ERIC Educational Resources Information Center

    Edmonds, Amanda; Gudmestad, Aarnes

    2014-01-01

    The current study sets out to investigate collocational knowledge for a set of 13 English amplifiers among native and nonnative speakers of English, by providing a partial replication of one of the projects reported on in Granger (1998). The project combines both phraseological and distributional approaches to research into formulaic language to…

  12. Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuators and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.

  14. A Comparison of Collocation-Based Similarity Measures in Query Expansion.

    ERIC Educational Resources Information Center

    Kim, Myoung-Cheol; Choi, Key-Sun

    1999-01-01

    Presents a comparison of collocation-based similarity measures for the proper selection of additional search terms in query expansion. Highlights include evaluating retrieval effectiveness in a vector space model, thesauri, document ranking, and experiments on two Korean test collections. (Author/LRW)

  16. The Effect of Corpus-Based Activities on Verb-Noun Collocations in EFL Classes

    ERIC Educational Resources Information Center

    Ucar, Serpil; Yükselir, Ceyhun

    2015-01-01

    The current study sought to reveal the impact of corpus-based activities on verb-noun collocation learning in EFL classes. The study was carried out on two groups, experimental and control, each consisting of 15 students. The students were preparatory class students at the School of Foreign Languages, Osmaniye Korkut Ata University.…

  17. Verb-Noun Collocations in Second Language Writing: A Corpus Analysis of Learners' English

    ERIC Educational Resources Information Center

    Laufer, Batia; Waldman, Tina

    2011-01-01

    The present study investigates the use of English verb-noun collocations in the writing of native speakers of Hebrew at three proficiency levels. For this purpose, we compiled a learner corpus that consists of about 300,000 words of argumentative and descriptive essays. For comparison purposes, we selected LOCNESS, a corpus of young adult native…

  19. Strategies in Translating Collocations in Religious Texts from Arabic into English

    ERIC Educational Resources Information Center

    Dweik, Bader S.; Shakra, Mariam M. Abu

    2010-01-01

    The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…

  20. The Role of Language for Thinking and Task Selection in EFL Learners' Oral Collocational Production

    ERIC Educational Resources Information Center

    Wang, Hung-Chun; Shih, Su-Chin

    2011-01-01

    This study investigated how English as a foreign language (EFL) learners' types of language for thinking and types of oral elicitation tasks influence their lexical collocational errors in speech. Data were collected from 42 English majors in Taiwan using two instruments: (1) 3 oral elicitation tasks and (2) an inner speech questionnaire. The…