Science.gov

Sample records for spline collocation method

  1. B-spline Collocation with Domain Decomposition Method

    NASA Astrophysics Data System (ADS)

    Hidayat, M. I. P.; Ariwahjoedi, B.; Parman, S.

    2013-04-01

A global B-spline collocation method has been previously developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which simply reduces to a Bezier approximation of any degree p with C0 continuity, requires B-spline bases of high order to achieve high accuracy. This need for high-order bases becomes more prominent in domains of large dimension, and the increased number of collocation points can also lead to ill-conditioning. In this study, overlapping domain decomposition using the multiplicative Schwarz algorithm is combined with the global method. Our objective is twofold: to improve accuracy with the combination technique, and to investigate how the combination technique affects the B-spline basis orders required to reach a given accuracy. The combined method produced higher accuracy with a B-spline basis of much lower order than that needed by the original method, and thus also improved the approximation stability of the B-spline collocation method.
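To make the collocation idea concrete in one dimension, the following sketch (not the authors' implementation, which targets arbitrary complex domains) solves -u'' = f with homogeneous Dirichlet conditions by collocating a uniform cubic B-spline expansion at the grid nodes, using the standard nodal values B = (1/6, 4/6, 1/6) and B'' = (1, -2, 1)/h² of the uniform cubic B-spline:

```python
import numpy as np

def cubic_bspline_collocation(f, a=0.0, b=1.0, n=32):
    """Solve -u'' = f on [a,b] with u(a)=u(b)=0 by cubic B-spline
    collocation at the grid nodes (a minimal 1D sketch)."""
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)
    m = n + 3                       # B-spline coefficients c_0..c_{n+2}
    A = np.zeros((m, m))
    rhs = np.zeros(m)
    # boundary rows: u(a) = 0 and u(b) = 0
    A[0, 0:3] = [1/6, 4/6, 1/6]
    A[-1, -3:] = [1/6, 4/6, 1/6]
    # collocation rows: -u''(x_i) = f(x_i) at every node
    for i in range(n + 1):
        A[i + 1, i:i+3] = [-1/h**2, 2/h**2, -1/h**2]
        rhs[i + 1] = f(x[i])
    c = np.linalg.solve(A, rhs)
    # nodal values: u(x_i) = (c_i + 4 c_{i+1} + c_{i+2}) / 6
    u = (c[:-2] + 4*c[1:-1] + c[2:]) / 6
    return x, u
```

For f = π² sin(πx) the exact solution is sin(πx), and the nodal error decays at the expected second-order rate as n grows.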

  2. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem for integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  3. Parameter estimation technique for boundary value problems by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1988-01-01

    A parameter-estimation technique for boundary-integral equations of the second kind is developed. The output least-squares identification technique using the spline collocation method is considered. The convergence analysis for the numerical method is discussed. The results are applied to boundary parameter estimations for two-dimensional Laplace and Helmholtz equations.

  4. Exponential B-spline collocation method for numerical solution of the generalized regularized long wave equation

    NASA Astrophysics Data System (ADS)

    Reza, Mohammadi

    2015-05-01

This paper presents a numerical algorithm for the time-dependent generalized regularized long wave equation with boundary conditions. We semi-discretize the continuous problem using the Crank-Nicolson finite difference method in the temporal direction and the exponential B-spline collocation method in the spatial direction. The method is shown to be unconditionally stable. It is shown that the method is convergent with an order of . Our scheme leads to a tridiagonal nonlinear system, and the new method has lower computational cost than the Sinc-collocation method. Finally, numerical examples demonstrate the stability and accuracy of the method.

  5. The application of cubic B-spline collocation method in impact force identification

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Chen, Xuefeng; Xue, Xiaofeng; Luo, Xinjie; Liu, Ruonan

    2015-12-01

The accurate real-time characterization of an impact event is vital during the lifetime of a mechanical product. However, the identified impact force may diverge seriously from the real one due to unknown noise contaminating the measured data, as well as the ill-conditioned system matrix. In this paper, a regularized cubic B-spline collocation (CBSC) method is developed for identifying the impact force time history, which overcomes the deficiencies of the ill-posed problem. By controlling the mesh size at the collocation points, the cubic B-spline function matches the profile of a typical impact event. The unknown impact force is approximated by a set of translated cubic B-spline functions, and the original governing equation of force identification is thereby reduced to finding the coefficient of the basis function at each collocation point. Moreover, a modified regularization parameter selection criterion, derived from the generalized cross validation (GCV) criterion for the truncated singular value decomposition (TSVD), is introduced for the CBSC method to determine the optimal number of cubic B-spline functions. In a numerical simulation of a two degrees-of-freedom (DOF) system, the regularized CBSC method is validated under different noise levels and frequency bands of exciting forces. Finally, an impact experiment is performed on a clamped-free shell structure to confirm the performance of the regularized CBSC method. Experimental results demonstrate that the peak relative errors of impact forces based on the regularized CBSC method are below 8%, while those based on the TSVD method are approximately 30%.
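The TSVD baseline that the record's modified criterion builds on can be sketched as follows. The GCV function shown is the textbook form G(k) = ||A x_k - b||² / (m - k)², not the paper's modified criterion, and the test matrix is purely illustrative:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coeff

def gcv_pick_k(A, b):
    """Pick the truncation level minimising the textbook GCV function
    G(k) = ||A x_k - b||^2 / (m - k)^2 for TSVD."""
    m = A.shape[0]
    best_k, best_g = 1, np.inf
    for k in range(1, min(A.shape)):   # truncated levels only
        r = A @ tsvd_solve(A, b, k) - b
        g = (r @ r) / (m - k) ** 2
        if g < best_g:
            best_k, best_g = k, g
    return best_k
```

For ill-conditioned deconvolution-type matrices, the GCV minimum typically sits where discarding further singular values starts losing signal rather than suppressing noise.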

  6. Numerical solution of fractional differential equations using cubic B-spline wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Li, Xinxiu

    2012-10-01

Physical processes with memory and hereditary properties are best described by fractional differential equations, owing to the memory effect of fractional derivatives. Reliable and efficient techniques for the solution of fractional differential equations are therefore needed. Our aim is to generalize the wavelet collocation method to fractional differential equations using cubic B-spline wavelets. Analytical expressions for the Caputo fractional derivatives of cubic B-spline functions are presented. The main characteristic of the approach is that it converts such problems into a system of algebraic equations, which is suitable for computer programming. It not only simplifies the problem but also speeds up the computation. Numerical results demonstrate the validity and applicability of the method for solving fractional differential equations.
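Because cubic B-splines are piecewise cubic polynomials, closed-form Caputo derivatives reduce, piece by piece, to the monomial rule D^α t^k = Γ(k+1)/Γ(k+1-α) · t^(k-α). A minimal sketch for a single polynomial piece expanded about t = 0, assuming 0 < α < 1 (the per-knot-interval bookkeeping needed for an actual spline is omitted):

```python
from math import gamma

def caputo_poly(coeffs, alpha, t):
    """Caputo derivative (0 < alpha < 1, base point 0) of the polynomial
    sum_k coeffs[k] * t**k, via
        D^alpha t^k = Gamma(k+1)/Gamma(k+1-alpha) * t**(k-alpha), k >= 1.
    The constant term has zero Caputo derivative."""
    out = 0.0
    for k, c in enumerate(coeffs):
        if k >= 1:
            out += c * gamma(k + 1) / gamma(k + 1 - alpha) * t ** (k - alpha)
    return out
```

For example, the Caputo half-derivative of t at t = 1 is 1/Γ(3/2) = 2/√π.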

  7. An ADI extrapolated Crank-Nicolson orthogonal spline collocation method for nonlinear reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Fernandes, Ryan I.; Fairweather, Graeme

    2012-08-01

    An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.

  8. Quartic B-spline collocation method applied to Korteweg de Vries equation

    NASA Astrophysics Data System (ADS)

    Zin, Shazalina Mat; Majid, Ahmad Abd; Ismail, Ahmad Izani Md

    2014-07-01

The Korteweg-de Vries (KdV) equation is known as a mathematical model of shallow water waves. Its general form is u_t + εuu_x + μu_xxx = 0, where u(x,t) describes the elongation of the wave at displacement x and time t. In this work, a one-soliton solution of the KdV equation has been obtained numerically using the quartic B-spline collocation method in the displacement x and a finite difference approach in time t. Two test problems have been identified and solved; approximate solutions and errors were obtained for different values of t. To assess the accuracy of the method, the L2-norm and L∞-norm were calculated, along with the mass, energy and momentum of the KdV equation. The results show that the present method approximates the solution very well, but as time increases the L2-norm and L∞-norm also increase.
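The mass, momentum and energy mentioned above are the classical conserved quantities of the KdV equation u_t + εuu_x + μu_xxx = 0, and tracking their drift is a standard sanity check on a scheme. A sketch of their discrete analogues (one common sign convention; not the authors' code):

```python
import numpy as np

def _trap(y, x):
    """Composite trapezoid rule (kept explicit for clarity)."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2)

def kdv_invariants(u, x, eps=1.0, mu=1.0):
    """Discrete analogues of the first three invariants of
    u_t + eps*u*u_x + mu*u_xxx = 0:
        I1 = int u dx          (mass)
        I2 = int u^2 dx        (momentum)
        I3 = int (eps*u^3/3 - mu*u_x^2) dx   (energy)."""
    ux = np.gradient(u, x)
    return (_trap(u, x),
            _trap(u**2, x),
            _trap(eps * u**3 / 3 - mu * ux**2, x))
```

Evaluating the invariants at each output time and comparing with their initial values exposes the slow error growth the record reports.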

  9. The basis spline method and associated techniques

    SciTech Connect

    Bottcher, C.; Strayer, M.R.

    1989-01-01

    We outline the Basis Spline and Collocation methods for the solution of Partial Differential Equations. Particular attention is paid to the theory of errors, and the handling of non-self-adjoint problems which are generated by the collocation method. We discuss applications to Poisson's equation, the Dirac equation, and the calculation of bound and continuum states of atomic and nuclear systems. 12 refs., 6 figs.

  10. Numerical solutions of the reaction diffusion system by using exponential cubic B-spline collocation algorithms

    NASA Astrophysics Data System (ADS)

    Ersoy, Ozlem; Dag, Idris

    2015-12-01

The solutions of the reaction-diffusion system are obtained by a collocation method based on exponential B-splines. The reaction-diffusion system thus turns into an iterative banded algebraic matrix equation, whose solution is carried out by way of the Thomas algorithm. The present methods are tested on both linear and nonlinear problems, and the results are compared with some earlier studies using the L∞ and relative error norms, respectively.
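The banded system produced by the cubic-spline stencil is tridiagonal, so each iteration can be solved in O(n) by the Thomas algorithm. A minimal sketch (illustrative, not the authors' code):

```python
def thomas(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system in O(n).
    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n)."""
    n = len(b)
    cp = [0.0] * (n - 1)
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):             # forward elimination
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):    # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The algorithm assumes the system needs no pivoting, which holds for the diagonally dominant matrices that spline collocation typically produces.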

  11. Progress on the Development of B-spline Collocation for the Solution of Differential Model Equations: A Novel Algorithm for Adaptive Knot Insertion

    SciTech Connect

    Johnson, Richard Wayne

    2003-05-01

The application of collocation methods using spline basis functions to solve differential model equations has been in use for a few decades. However, the application of spline collocation to the solution of the nonlinear, coupled partial differential equations (in primitive variables) that define the motion of fluids has only recently received much attention. The issues that affect the effectiveness and accuracy of B-spline collocation for solving differential equations include which points to use for collocation, what degree of B-spline to use and what level of continuity to maintain. Success using higher-degree B-spline curves having higher continuity at the knots, as opposed to more traditional approaches using orthogonal collocation, has recently been investigated, along with collocation at the Greville points for linear (1D) and rectangular (2D) geometries. The development of automatic knot insertion techniques to provide sufficient accuracy for B-spline collocation has been underway. The present article reviews recent progress in the application of B-spline collocation to fluid motion equations, as well as new work in developing a novel adaptive knot insertion algorithm for a 1D convection-diffusion model equation.
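For reference, the Greville points mentioned above come directly from the knot vector: the i-th Greville abscissa of a degree-p B-spline basis is the average of p consecutive knots. A small sketch (a hypothetical helper, not taken from the article):

```python
def greville_abscissae(knots, p):
    """Greville abscissae of a degree-p B-spline basis on `knots`:
    xi*_i = (knots[i+1] + ... + knots[i+p]) / p, one per basis function.
    These are a standard choice of collocation points."""
    n = len(knots) - p - 1          # number of B-spline basis functions
    return [sum(knots[i + 1:i + p + 1]) / p for i in range(n)]
```

For an open (clamped) knot vector the first and last abscissae coincide with the interval endpoints, so boundary conditions can be collocated there directly.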

  12. Spectral collocation methods

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.

    1989-01-01

    This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.

  13. Spectral collocation methods

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.

    1987-01-01

    This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.

  14. Collocation and Galerkin Time-Stepping Methods

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2011-01-01

We study the numerical solutions of ordinary differential equations by one-step methods where the solution at t_n is known and that at t_{n+1} is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_{n+1}), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
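The equivalence noted above can be exercised concretely: the 2-stage Radau IIA method (order 3) is exactly collocation at the right Radau points c = (1/3, 1). A minimal sketch, with the implicit stage equations solved by naive fixed-point iteration (sufficient for non-stiff demonstrations; a stiff solver would use Newton iteration):

```python
import numpy as np

def radau_iia_step(f, t, y, h):
    """One step of the 2-stage Radau IIA implicit Runge-Kutta method,
    i.e. collocation at the right Radau points c = (1/3, 1)."""
    A = np.array([[5/12, -1/12],
                  [3/4,   1/4]])
    c = np.array([1/3, 1.0])
    b = np.array([3/4, 1/4])
    # fixed-point iteration on the stage derivatives k_i = f(t + c_i h, ...)
    k = np.array([f(t, y), f(t, y)])
    for _ in range(50):
        k = np.array([f(t + c[i] * h, y + h * (A[i] @ k)) for i in range(2)])
    return y + h * (b @ k)
```

Note that b equals the last row of A, the defining property of a stiffly accurate collocation method such as Radau IIA.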

  15. A multilevel stochastic collocation method for SPDEs

    SciTech Connect

    Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton

    2015-03-10

    We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity are used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages compared to standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.

  16. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  17. A B-Spline-Based Colocation Method to Approximate the Solutions to the Equations of Fluid Dynamics

    SciTech Connect

    Johnson, Richard Wayne; Landon, Mark Dee

    1999-07-01

    The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.

  18. A B-Spline-Based Colocation Method to Approximate the Solutions to the Equations of Fluid Dynamics

    SciTech Connect

    M. D. Landon; R. W. Johnson

    1999-07-01

    The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve complex curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.

  19. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

We have designed a cubic spline wavelet decomposition for the Sobolev space H^2_0(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This transform maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for initial-boundary value problems of nonlinear PDEs. We then test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDEs.

  20. Isogeometric methods for computational electromagnetics: B-spline and T-spline discretizations

    NASA Astrophysics Data System (ADS)

Buffa, A.; Sangalli, G.; Vázquez, R.

    2014-01-01

In this paper we introduce methods for electromagnetic wave propagation based on splines and T-splines. We define spline spaces which form a De Rham complex and, following the isogeometric paradigm, we map them onto domains which are (piecewise) spline or NURBS geometries. We analyze their geometric and topological structure, as related to the connectivity of the underlying mesh, and we present the degrees of freedom together with their physical interpretation. The theory is then extended to the case of meshes with T-junctions, building on the recent theory of T-splines. The use of T-splines enhances our spline methods with local refinement capability, and numerical tests show the efficiency and accuracy of the techniques we propose.

  1. A Semi-Implicit, Fourier-Galerkin/B-Spline Collocation Approach for DNS of Compressible, Reacting, Wall-Bounded Flow

    NASA Astrophysics Data System (ADS)

    Oliver, Todd; Ulerich, Rhys; Topalian, Victor; Malaya, Nick; Moser, Robert

    2013-11-01

    A discretization of the Navier-Stokes equations appropriate for efficient DNS of compressible, reacting, wall-bounded flows is developed and applied. The spatial discretization uses a Fourier-Galerkin/B-spline collocation approach. Because of the algebraic complexity of the constitutive models involved, a flux-based approach is used where the viscous terms are evaluated using repeated application of the first derivative operator. In such an approach, a filter is required to achieve appropriate dissipation at high wavenumbers. We formulate a new filter source operator based on the viscous operator. Temporal discretization is achieved using the SMR91 hybrid implicit/explicit scheme. The linear implicit operator is chosen to eliminate wall-normal acoustics from the CFL constraint while also decoupling the species equations from the remaining flow equations, which minimizes the cost of the required linear algebra. Results will be shown for a mildly supersonic, multispecies boundary layer case inspired by the flow over the ablating surface of a space capsule entering Earth's atmosphere. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

  2. Numerical method using cubic B-spline for a strongly coupled reaction-diffusion system.

    PubMed

    Abbas, Muhammad; Majid, Ahmad Abd; Md Ismail, Ahmad Izani; Rashid, Abdur

    2014-01-01

In this paper, a numerical method for the solution of a strongly coupled reaction-diffusion system, with suitable initial and Neumann boundary conditions, using a cubic B-spline collocation scheme on a uniform grid is presented. The scheme is based on the usual finite difference scheme to discretize the time derivative, while the cubic B-spline is used as an interpolation function in the space dimension. The scheme is shown to be unconditionally stable using the von Neumann method. The accuracy of the proposed scheme is demonstrated by applying it to a test problem. The performance of the scheme is shown by computing the L∞ and L2 error norms at different time levels. The numerical results are found to be in good agreement with known exact solutions. PMID:24427270
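The von Neumann analysis mentioned above examines the growth factor of each Fourier mode of the scheme. As an illustrative sketch, here is the growth factor of the plain finite-difference Crank-Nicolson discretization of u_t = D u_xx (not the paper's B-spline scheme, whose factor carries additional spline mass-matrix terms but is bounded by 1 in the same way):

```python
import numpy as np

def cn_amplification(r, theta):
    """Von Neumann growth factor of Crank-Nicolson for u_t = D*u_xx,
    with r = D*dt/dx**2 and Fourier phase angle theta:
        g = (1 - 2r sin^2(theta/2)) / (1 + 2r sin^2(theta/2)).
    |g| <= 1 for every r > 0, i.e. unconditional stability."""
    s = np.sin(theta / 2) ** 2
    return (1 - 2 * r * s) / (1 + 2 * r * s)
```

Sweeping r over several orders of magnitude and theta over [0, π] confirms that no mode is ever amplified, which is what "unconditionally stable" means in this analysis.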

  3. Parallel Adaptive Wavelet Collocation Method for PDEs

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Nejadmalayeri, Alireza; Vezolainen, Alexei

    2011-11-01

A parallel adaptive wavelet collocation method for solving a large class of partial differential equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows wavelet transform and derivative calculations to be performed on each processor without additional data synchronization at each level of resolution. The data are stored using a tree-like structure, with tree roots starting at a sufficiently large level of resolution to shorten the tree traversal path and to minimize the size of trees for data migration. Both static and dynamic domain partitioning approaches are developed. For dynamic domain partitioning, trees are considered the minimum quantum of data to be migrated between processors, which allows fully automated and efficient handling of non-simply connected partitionings of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning tree data structure nodes to the appropriate processors, to ensure approximately the same number of nodes on each processor. The parallel efficiency of the approach is discussed based on parallel coherent vortex simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³, using as many as 1024 CPU cores. This work was supported by NSF under grant No. CBET-0756046.

  4. Uncertain eigenvalue analysis by the sparse grid stochastic collocation method

    NASA Astrophysics Data System (ADS)

    Lan, J. C.; Dong, X. J.; Peng, Z. K.; Zhang, W. M.; Meng, G.

    2015-08-01

In this paper, the eigenvalue problem with multiple uncertain parameters is analyzed by the sparse grid stochastic collocation method. This method provides an interpolation approach to approximate the functional dependence of eigenvalues and eigenvectors on the uncertain parameters: it repeatedly evaluates the deterministic solutions at a pre-selected nodal set to construct a high-dimensional interpolation formula for the result. Taking advantage of the smoothness of the solution in the uncertain space, the sparse grid collocation method can achieve high-order accuracy with a small nodal set. Compared with other sampling-based methods, this method converges quickly as the number of points increases. Numerical examples of different dimensions are presented to demonstrate the accuracy and efficiency of the sparse grid stochastic collocation method.
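The one-dimensional building block of this interpolation approach can be sketched as follows: evaluate the deterministic eigenvalue solver at a small pre-selected nodal set (Chebyshev nodes here) and build a Lagrange-interpolant surrogate that is then cheap to evaluate at any parameter value. The 2×2 parametrized matrix is a made-up example, not from the paper:

```python
import numpy as np

def collocation_surrogate(solver, nodes):
    """Build a Lagrange-interpolant surrogate of `solver` over 1D nodes
    (the one-dimensional ingredient of a sparse-grid collocation)."""
    vals = np.array([solver(p) for p in nodes])
    def surrogate(p):
        # naive Lagrange evaluation (fine for a handful of nodes)
        out = 0.0
        for i, xi in enumerate(nodes):
            L = np.prod([(p - xj) / (xi - xj)
                         for j, xj in enumerate(nodes) if j != i])
            out += vals[i] * L
        return out
    return surrogate
```

Because the eigenvalue depends smoothly (here analytically) on the parameter, a dozen Chebyshev nodes already reproduce it to high accuracy away from the nodes.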

  5. Comparison of Implicit Collocation Methods for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)

    2001-01-01

We combine a high-order compact finite difference scheme to approximate spatial derivatives with collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods: the first is based on an explicit computation of the coefficients of polynomials, and the second relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.
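The differential-quadrature variant mentioned above represents a derivative at each node as a weighted sum of all nodal values. A minimal sketch of a first-derivative DQ matrix built from Lagrange-polynomial derivatives (Quan-Chang-type weights; illustrative, not the paper's implementation):

```python
import numpy as np

def diff_matrix(x):
    """First-derivative differential-quadrature matrix on nodes x:
    off-diagonal weights are derivatives of the Lagrange basis,
    D[i,j] = prod_{k!=i,j}(x_i-x_k) / prod_{k!=j}(x_j-x_k),
    and each diagonal entry makes the row sum zero (constants -> 0)."""
    n = len(x)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                num = np.prod([x[i] - x[k] for k in range(n) if k not in (i, j)])
                den = np.prod([x[j] - x[k] for k in range(n) if k != j])
                D[i, j] = num / den
        D[i, i] = -sum(D[i, j] for j in range(n) if j != i)
    return D
```

By construction D differentiates exactly every polynomial of degree up to n-1 on the given nodes; applying the matrix twice gives a (less accurate) second-derivative operator.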

  6. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or more of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (the public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported, exactly reproduce algebraic polynomials, and enable a multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones.
Owing to our recent developments, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained by predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid at each time step is obtained from the solution at the last time step (or the initial conditions) and the advective Lagrangian step in the current time step, according to the velocity field and continuous streamlines. On the other hand, we apply the explicit stabilized routine SERK2 to the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. This new Eulerian-Lagrangian collocation scheme also resolves all the numerical problems mentioned above, thanks to its adaptive nature and ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the need for large numbers of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach yields not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  7. Adaptive wavelet collocation method on the shallow water model

    NASA Astrophysics Data System (ADS)

    Reckinger, Shanon M.; Vasilyev, Oleg V.; Fox-Kemper, Baylor

    2014-08-01

This paper presents an integrated approach for modeling several ocean test problems on adaptive grids using novel boundary techniques. The adaptive wavelet collocation method solves the governing equations on temporally and spatially varying meshes, which allows higher effective resolution to be obtained at lower computational cost. It is a general method for solving a large class of partial differential equations, but is applied here to the shallow water equations. In addition to developing wavelet-based computational models, this work also uses an extension of the Brinkman penalization method to represent irregular and non-uniform continental boundaries. This technique enforces no-slip boundary conditions through the addition of a term to the field equations. When coupled with the adaptive wavelet collocation method, the flow near the boundary can be well resolved. It is especially useful for simulations of boundary currents and tsunamis, where the flow near the boundary is important; those are the test cases presented here.

  8. Collocation and Least Residuals Method and Its Applications

    NASA Astrophysics Data System (ADS)

    Shapeev, Vasily

    2016-02-01

The collocation and least residuals (CLR) method combines the method of collocations (CM) with the method of least residuals. Unlike the CM, the CLR method finds an approximate solution of the problem from an overdetermined system of linear algebraic equations (SLAE), whose solution is sought under the requirement of minimizing a functional involving the residuals of all its equations. On the one hand, this added complication of the numerical algorithm expands the capabilities of the CM for solving boundary value problems with singularities; on the other hand, the CLR method inherits to a considerable extent some convenient features of the CM. In the present paper, the capabilities of the CLR method are illustrated on benchmark problems for the 2D and 3D Navier-Stokes equations, the modeling of laser welding of plates of similar and dissimilar metals, problems investigating the strength of loaded parts made of composite materials, and boundary-value problems for hyperbolic equations.

  9. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods here presented can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  10. Pseudospectral collocation methods for fourth order differential equations

    NASA Technical Reports Server (NTRS)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.

  11. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  12. Splines and control theory

    NASA Technical Reports Server (NTRS)

    Zhang, Zhimin; Tomlinson, John; Martin, Clyde

    1994-01-01

    In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including the polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate the properties of different spline approximations.

  13. A new geoid for Brunei Darussalam by the collocation method

    NASA Astrophysics Data System (ADS)

    Lyszkowicz, Adam; Birylo, Monika; Becek, Kazimierz

    2014-12-01

    Computation of a new gravimetric geoid for Brunei was carried out by the collocation method using terrestrial, airborne and altimetric gravity data together with the EGM08 geopotential model. The computations were carried out by the "remove-restore" technique. In order to gain better insight into the quality of the input data, the accuracy of the gravity data and of the geoid undulations from GPS/levelling data was estimated using the EGM08 geopotential model; this shows the poor quality of the GPS/levelling data. The result of the computation is a gravimetric geoid for the territory of Brunei, computed by the collocation method, with an estimated accuracy below 0.3 m.

  14. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

    Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
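
    The sampling idea can be sketched with SciPy's ordinary cubic spline standing in for the paper's B-spline and rational-spline fits (the distribution, sample size, and knot thinning below are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Fit a cubic spline to the empirical quantile function of a sample, then
# generate new samples by evaluating it at uniform random probabilities
# (inverse-CDF sampling). Sorted data versus plotting positions define the
# empirical quantile function; thinning to a few knots smooths noise.
rng = np.random.default_rng(0)
data = np.sort(rng.normal(loc=2.0, scale=0.5, size=2000))
p = (np.arange(1, len(data) + 1) - 0.5) / len(data)   # plotting positions

idx = np.linspace(0, len(data) - 1, 41).astype(int)   # thin to 41 knots
Q = CubicSpline(p[idx], data[idx])                    # spline quantile function

u = rng.uniform(p[0], p[-1], size=5000)               # uniform probabilities
samples = Q(u)                                        # new random samples
print(samples.mean(), samples.std())                  # near 2.0 and 0.5
```

    For heavy-tailed or multimodal data a monotonicity-preserving fit (e.g. `PchipInterpolator`) may be a safer stand-in than an unconstrained cubic spline.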

  15. Multi-element probabilistic collocation method in high dimensions

    NASA Astrophysics Data System (ADS)

    Foo, Jasmine; Karniadakis, George Em

    2010-03-01

    We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method MEPCM [1] and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order μ and the effective dimension ν, with ν ≤ N, and N the nominal dimension. Numerical tests for multi-dimensional integration and for stochastic elliptic problems suggest that ν ≥ μ for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.

  16. A multidomain spectral collocation method for the Stokes problem

    NASA Technical Reports Server (NTRS)

    Landriani, G. Sacchi; Vandeven, H.

    1989-01-01

    A multidomain spectral collocation scheme is proposed for the approximation of the two-dimensional Stokes problem. It is shown that the discrete velocity vector field is exactly divergence-free and we prove error estimates both for the velocity and the pressure.

  17. Numerical solution of first order initial value problem using quartic spline method

    NASA Astrophysics Data System (ADS)

    Ala'yed, Osama; Ying, Teh Yuan; Saaban, Azizan

    2015-12-01

    Any first order initial value problem can be integrated numerically by discretizing the interval of integration into a number of subintervals, with either equally or non-equally distributed grid points. As the integration advances, the numerical solutions at the grid points are calculated and become known. However, the numerical solution between the grid points remains unknown, which poses a difficulty for anyone who wishes to study the solution at points that do not fall on the grid. Therefore, some sort of interpolation technique is needed to deal with this difficulty. Spline interpolation remains a well-known approach for approximating the numerical solution of a first order initial value problem, not only at the grid points but also everywhere between them. In this short article, a new quartic spline method is derived to obtain the numerical solution of a first order initial value problem. The key idea of the derivation is to treat the third derivative of the quartic spline function as a linear polynomial, and to obtain the quartic spline function with undetermined coefficients after three integrations. The new quartic spline function is ready to be used once all unknown coefficients are found. We also describe an algorithm for applying the new quartic spline method to obtain the numerical solution of any first order initial value problem. Two test problems are used for numerical experimentation. Numerical results indicate that the new quartic spline method is reliable in solving first order initial value problems. We have compared the numerical results generated by the new quartic spline method with those obtained from an existing spline method; both methods are found to have comparable accuracy.
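
    The motivation, recovering the solution between grid points by splining a grid solution, can be illustrated with a cubic Hermite spline built over an RK4 solution (this sketch is not the paper's quartic construction; the test equation and step count are arbitrary):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Integrate y' = -2y, y(0) = 1 on a coarse grid with classical RK4, then
# build a spline through the grid values so the solution is available
# *between* grid points as well. Using f(t, y) for the slopes gives a
# C^1 cubic Hermite spline; the paper works one order higher, with a quartic.
f = lambda t, y: -2.0 * y
t = np.linspace(0.0, 2.0, 11)
h = t[1] - t[0]
y = np.empty_like(t)
y[0] = 1.0
for n in range(len(t) - 1):          # classical fourth-order Runge-Kutta
    k1 = f(t[n], y[n])
    k2 = f(t[n] + h / 2, y[n] + h * k1 / 2)
    k3 = f(t[n] + h / 2, y[n] + h * k2 / 2)
    k4 = f(t[n] + h, y[n] + h * k3)
    y[n + 1] = y[n] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

spline = CubicHermiteSpline(t, y, f(t, y))        # slopes from the ODE itself
t_off = 0.37                                      # an off-grid point
print(abs(float(spline(t_off)) - np.exp(-2 * t_off)))   # small error
```
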

  18. Lunar soft landing rapid trajectory optimization using direct collocation method and nonlinear programming

    NASA Astrophysics Data System (ADS)

    Tu, Lianghui; Yuan, Jianping; Luo, Jianjun; Ning, Xin; Zhou, Ruiwu

    2007-11-01

    The direct collocation method has been widely used for trajectory optimization. In this paper, the application of a direct optimization method (direct collocation combined with nonlinear programming (NLP)) to lunar probe soft-landing trajectory optimization is introduced. First, the trajectory optimization control problem for lunar probe soft landing is established and the equations of motion are simplified based on some reasonable hypotheses. The performance index is chosen to minimize fuel consumption. The control variables are the thrust attack angle and the engine thrust. The terminal state constraints are on velocity and altitude. Then, the optimal control problem is transformed into a nonlinear programming problem using the direct collocation method. The state and control variables at all nodes and collocation nodes are selected as optimization parameters. The parameter optimization problem is solved using the SNOPT software package. The simulation results demonstrate that the direct collocation method is not sensitive to the lunar soft landing initial conditions, and that fairly good optimal solutions are obtained in near-real time. Therefore, the direct collocation method is a viable approach to the lunar probe soft landing trajectory optimization problem.
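
    A minimal transcription of this kind can be sketched on a double integrator with SciPy's SLSQP standing in for SNOPT (the dynamics, grid size, and boundary values are illustrative, not the lunar-lander model):

```python
import numpy as np
from scipy.optimize import minimize

# Trapezoidal direct collocation: steer a double integrator x'' = u from
# rest at x = 0 to rest at x = 1 in unit time, minimizing the control
# energy (the integral of u^2). The analytic optimum is u(t) = 6 - 12 t
# with cost 12, which the discretization should approach.
N = 21
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]

def unpack(z):
    return z[:N], z[N:2 * N], z[2 * N:]          # states x, v and control u

def cost(z):                                     # trapezoid rule for u^2
    u = unpack(z)[2]
    return h * (u[0] ** 2 / 2 + np.sum(u[1:-1] ** 2) + u[-1] ** 2 / 2)

def defects(z):                                  # collocation constraints
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h * (v[1:] + v[:-1]) / 2
    dv = v[1:] - v[:-1] - h * (u[1:] + u[:-1]) / 2
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]        # boundary conditions
    return np.concatenate([dx, dv, bc])

sol = minimize(cost, np.zeros(3 * N), method='SLSQP',
               constraints={'type': 'eq', 'fun': defects})
u = unpack(sol.x)[2]
print(sol.fun, u[0])         # cost near 12, initial control near 6
```
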

  19. Exponential time differencing methods with Chebyshev collocation for polymers confined by interacting surfaces

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Xin; Zhang, Hong-Dong

    2014-06-01

    We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in temporal domain and spectral accuracy in spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique in a conventional approach. The performance of ETDRK4 is examined in comparison with the operator splitting methods with either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces.

  20. Exponential time differencing methods with Chebyshev collocation for polymers confined by interacting surfaces.

    PubMed

    Liu, Yi-Xin; Zhang, Hong-Dong

    2014-06-14

    We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in temporal domain and spectral accuracy in spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique in a conventional approach. The performance of ETDRK4 is examined in comparison with the operator splitting methods with either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces. PMID:24929368

  1. Exponential time differencing methods with Chebyshev collocation for polymers confined by interacting surfaces

    SciTech Connect

    Liu, Yi-Xin Zhang, Hong-Dong

    2014-06-14

    We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in temporal domain and spectral accuracy in spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique in a conventional approach. The performance of ETDRK4 is examined in comparison with the operator splitting methods with either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces.

  2. A Chebyshev spectral collocation method using a staggered grid for the stability of cylindrical flows

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.

    1991-01-01

    A staggered spectral collocation method for the stability of cylindrical flows is developed. In this method the pressure is evaluated at different nodal points than the three velocity components. These modified nodal points do not include the two boundary nodes; therefore the need for the two artificial pressure boundary conditions employed by Khorrami et al. is eliminated. It is shown that the method produces very accurate results and has a better convergence rate than the spectral tau formulation. However, through extensive convergence tests it was found that elimination of the artificial pressure boundary conditions does not result in any significant change in the convergence behavior of spectral collocation methods.

  3. THE LOSS OF ACCURACY OF STOCHASTIC COLLOCATION METHOD IN SOLVING NONLINEAR DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Tran, Hoang A; Trenchea, Catalin S

    2013-01-01

    In this paper we show how the stochastic collocation method (SCM) can fail to converge for nonlinear differential equations with random coefficients. First, we consider the Navier-Stokes equation with uncertain viscosity and derive error estimates for the stochastic collocation discretization. Our analysis gives some indicators of how the nonlinearity negatively affects the accuracy of the method. The stochastic collocation method is then applied to the noisy Lorenz system. Simulation results demonstrate that the solution of a nonlinear equation can be highly irregular with respect to the random data, and in such cases the stochastic collocation method cannot capture the correct solution.
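
    The paper's central caution can be reproduced on a one-dimensional toy problem (Gauss-Legendre collocation for a uniform random input; the test functions below are illustrative, not the paper's Navier-Stokes or Lorenz computations):

```python
import numpy as np

# Stochastic collocation approximates E[g(Y)] for a random input
# Y ~ U(-1, 1) by Gauss-Legendre quadrature at the collocation nodes.
# It converges fast when g is smooth in Y, but stalls when g depends
# discontinuously on Y, mirroring the irregularity discussed above.
def sc_mean(g, n):
    y, w = np.polynomial.legendre.leggauss(n)   # collocation nodes/weights
    return 0.5 * np.dot(w, g(y))                # density of U(-1, 1) is 1/2

smooth = lambda y: np.exp(y)                    # smooth in the random input
rough = lambda y: (y > 0.3).astype(float)       # discontinuous in it

exact_smooth = (np.e - 1.0 / np.e) / 2.0        # E[exp(Y)]
exact_rough = 0.35                              # P(Y > 0.3)

for n in (5, 10, 20):
    print(n, abs(sc_mean(smooth, n) - exact_smooth),
             abs(sc_mean(rough, n) - exact_rough))
# the smooth-case error drops to machine precision; the rough case stalls
```
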

  4. A spectral collocation method for compressible, non-similar boundary layers

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Streett, Craig L.

    1991-01-01

    An efficient and highly accurate algorithm based on a spectral collocation method is developed for numerical solution of the compressible, two-dimensional and axisymmetric boundary layer equations. The numerical method incorporates a fifth-order, fully implicit marching scheme in the streamwise (timelike) dimension and a spectral collocation method based on Chebyshev polynomial expansions in the wall-normal (spacelike) dimension. The spectral collocation algorithm is used to derive the nonsimilar mean velocity and temperature profiles in the boundary layer of a 'fuselage' (cylinder) in a high-speed (Mach 5) flow parallel to its axis. The stability of the flow is shown to be sensitive to the gradual streamwise evolution of the mean flow and it is concluded that the effects of transverse curvature on stability should not be ignored routinely.

  5. Predictor-corrector with cubic spline method for spectrum estimation in Compton scatter correction of SPECT.

    PubMed

    Chen, E Q; Lam, C F

    1994-05-01

    In single photon emission computed tomography (SPECT), Compton scattered photons degrade image contrast and cause erroneous regional activity quantification. A predictor-corrector and cubic spline (PCCS) method for the compensation of Compton scatter in SPECT is proposed. Using spectral information recorded at four energy windows, the PCCS method estimates scatter counts at each window and constructs the scatter spectrum with cubic spline interpolation. We have shown in simulated noise-free situations that this method provides accurate estimation of scatter fractions. A scatter correction employing PCCS method can be implemented on many existing SPECT systems without hardware modification and complicated calibration. PMID:7924268
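
    The interpolation step by itself is easy to sketch (the window energies and counts below are made-up numbers, not the paper's acquisition protocol):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Given scatter-count estimates at four energy windows, a cubic spline
# reconstructs a smooth scatter spectrum, which can then be integrated
# over any energy range covered by the fit.
window_centers = np.array([100.0, 115.0, 130.0, 145.0])   # keV (hypothetical)
scatter_counts = np.array([520.0, 430.0, 260.0, 90.0])    # counts (hypothetical)

spectrum = CubicSpline(window_centers, scatter_counts)

# Scatter estimate over a 120-140 keV sub-range via the spline's
# built-in antiderivative.
total_scatter = spectrum.integrate(120.0, 140.0)
print(total_scatter)
```
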

  6. Understanding a reference-free impedance method using collocated piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Kim, Eun Jin; Kim, Min Koo; Sohn, Hoon; Park, Hyun Woo

    2010-03-01

    A new concept of a reference-free impedance method, which does not require direct comparison with a baseline impedance signal, is proposed for damage detection in a plate-like structure. A single pair of piezoelectric (PZT) wafers collocated on both surfaces of a plate is utilized for extracting electro-mechanical signatures (EMS) associated with mode conversion due to damage. A numerical simulation is conducted to investigate the EMS of collocated PZT wafers in the frequency domain in the presence of damage through spectral element analysis. Then, the EMS due to mode conversion induced by damage are extracted using a signal decomposition technique based on the polarization characteristics of the collocated PZT wafers. The effects of the size and location of damage on the decomposed EMS are investigated as well. Finally, the applicability of the decomposed EMS to reference-free damage diagnosis is discussed.

  7. NOKIN1D: one-dimensional neutron kinetics based on a nodal collocation method

    NASA Astrophysics Data System (ADS)

    Verdú, G.; Ginestar, D.; Miró, R.; Jambrina, A.; Barrachina, T.; Soler, Amparo; Concejal, Alberto

    2014-06-01

    The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). The advantage compared with a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method developed for the neutron diffusion equation and applied to the kinetics resolution of TRAC-BF1 thermal-hydraulics is an adaptation of traditional collocation methods for the discretization of partial differential equations, based on expanding the solution as a linear combination of analytical functions. We have chosen a nodal collocation method based on an expansion of the neutron fluxes in Legendre polynomials within each cell. The qualification is carried out by analyzing the turbine trip transient from the NEA benchmark at Peach Bottom NPP using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method.

  8. Collocation method for the solution of the neutron transport equation with both symmetric and asymmetric scattering

    SciTech Connect

    Morel, J.E.

    1981-01-01

    A collocation method is developed for the solution of the one-dimensional neutron transport equation in slab geometry with both symmetric and polarly asymmetric scattering. For the symmetric scattering case, it is found that the collocation method offers a combination of some of the best characteristics of the finite-element and discrete-ordinates methods. For the asymmetric scattering case, it is found that the computational cost of cross-section data processing under the collocation approach can be significantly less than that associated with the discrete-ordinates approach. A general diffusion equation treating both symmetric and asymmetric scattering is developed and used in a synthetic acceleration algorithm to accelerate the iterative convergence of collocation solutions. It is shown that a certain type of asymmetric scattering can radically alter the asymptotic behavior of the transport solution and is mathematically equivalent within the diffusion approximation to particle transport under the influence of an electric field. The method is easily extended to other geometries and higher dimensions. Applications exist in the areas of neutron transport with highly anisotropic scattering (such as that associated with hydrogenous media), charged-particle transport, and particle transport in controlled-fusion plasmas. 23 figures, 6 tables.

  9. Improved collocation methods with application to six-degree-of-freedom trajectory optimization

    NASA Astrophysics Data System (ADS)

    Desai, Prasun N.

    2005-11-01

    An improved collocation method is developed for a class of problems that is intractable, or nearly so, by conventional collocation. These are problems in which there are two distinct timescales of the system states, that is, where a subset of the states have high-frequency variations while the remaining states vary comparatively slowly. In conventional collocation, the timescale for the discretization would be set by the need to capture the high-frequency dynamics. The problem then becomes very large and the solution of the corresponding nonlinear programming problem becomes geometrically more time consuming and difficult. A new two-timescale discretization method is developed for the solution of such problems using collocation. This improved collocation method allows the use of a larger time discretization for the low-frequency dynamics of the motion, and a second finer time discretization scheme for the higher-frequency dynamics of the motion. The accuracy of the new method is demonstrated first on an example problem, an optimal lunar ascent. The method is then applied to the type of challenging problem for which it is designed, the optimization of the approach to landing trajectory for a winged vehicle returning from space, the HL-20 lifting body vehicle. The converged solution shows a realistic landing profile and fully captures the higher-frequency rotational dynamics. A source code using the sparse optimizer SNOPT is developed for the use of this method which generates constraint equations, gradients, and the system Jacobian for problems of arbitrary size. This code constitutes a much-improved tool for aerospace vehicle design but has application to all two-timescale optimization problems.

  10. A configurable B-spline parameterization method for structural optimization of wing boxes

    NASA Astrophysics Data System (ADS)

    Yu, Alan Tao

    2009-12-01

    This dissertation presents a synthesis of methods for structural optimization of aircraft wing boxes. The optimization problem considered herein is the minimization of structural weight with respect to component sizes, subject to stress constraints. Different aspects of structural optimization methods representing the current state-of-the-art are discussed, including sequential quadratic programming, sensitivity analysis, parameterization of design variables, constraint handling, and multiple load treatment. Shortcomings of the current techniques are identified and a B-spline parameterization representing the structural sizes is proposed to address them. A new configurable B-spline parameterization method for structural optimization of wing boxes is developed that makes it possible to flexibly explore design spaces. An automatic scheme using different levels of B-spline parameterization configurations is also proposed, along with a constraint aggregation method in order to reduce the computational effort. Numerical results are compared to evaluate the effectiveness of the B-spline approach and the constraint aggregation method. To evaluate the new formulations and explore design spaces, the wing box of an airliner is optimized for the minimum weight subject to stress constraints under multiple load conditions. The new approaches are shown to significantly reduce the computational time required to perform structural optimization and to yield designs that are more realistic than existing methods.

  11. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

    We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
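
    The Crank-Nicolson baseline mentioned above is easy to write down in one space dimension (a textbook sketch, not the paper's high-order compact collocation scheme; grid sizes are arbitrary):

```python
import numpy as np

# Crank-Nicolson for the 1D heat equation u_t = u_xx with Dirichlet
# boundaries. The initial condition sin(pi x) decays exactly as
# exp(-pi^2 t) sin(pi x), which gives a clean accuracy check.
M, N, T = 50, 200, 0.1
x = np.linspace(0.0, 1.0, M + 1)
h, dt = x[1] - x[0], T / N
r = dt / h ** 2

# Tridiagonal second-difference operator on the interior points.
L = (np.diag(-2.0 * np.ones(M - 1)) + np.diag(np.ones(M - 2), 1)
     + np.diag(np.ones(M - 2), -1))
A = np.eye(M - 1) - 0.5 * r * L      # implicit half-step
B = np.eye(M - 1) + 0.5 * r * L      # explicit half-step

u = np.sin(np.pi * x[1:-1])
for _ in range(N):
    u = np.linalg.solve(A, B @ u)

exact = np.exp(-np.pi ** 2 * T) * np.sin(np.pi * x[1:-1])
print(np.max(np.abs(u - exact)))     # second-order accurate: small error
```
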

  12. Direct collocation meshless method for vector radiative transfer in scattering media

    NASA Astrophysics Data System (ADS)

    Ben, Xun; Yi, Hong-Liang; Yin, Xun-Bo; Tan, He-Ping

    2015-09-01

    A direct collocation meshless method based on a moving least-squares approximation is presented to solve polarized radiative transfer in scattering media. Contrasted with methods such as the finite volume and finite element methods that rely on mesh structures (e.g. elements, faces and sides), meshless methods utilize an approximation space based only on the scattered nodes, and no predefined nodal connectivity is required. Several classical cases are examined to verify the numerical performance of the method, including polarized radiative transfer in atmospheric aerosols and clouds with phase functions that are highly elongated in the forward direction. Numerical results show that the collocation meshless method is accurate, flexible and effective in solving one-dimensional polarized radiative transfer in scattering media. Finally, a two-dimensional case of polarized radiative transfer is investigated and analyzed.

  13. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.

  14. Optimal estimation of parameters of dynamical systems by neural network collocation method

    NASA Astrophysics Data System (ADS)

    Liaqat, Ali; Fukuhara, Makoto; Takeda, Tatsuoki

    2003-02-01

    In this paper we propose a new method to estimate parameters of a dynamical system from observation data on the basis of a neural network collocation method. We construct an objective function consisting of the squared residuals of the dynamical model equations at collocation points and the squared deviations of the observations from their corresponding computed values. The neural network is then trained by optimizing this objective function. The proposed method is demonstrated by performing several numerical experiments on the optimal estimation of parameters for two different nonlinear systems. First, we consider the weakly and highly nonlinear cases of the Lorenz model and apply the method to estimate the optimum values of the parameters for the two cases under various conditions. Then we apply it to estimate the parameters of a one-dimensional oscillator with nonlinear damping and restoring terms representing nonlinear ship roll motion under various conditions. Satisfactory results have been obtained for both problems.
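
    The structure of such an objective function can be sketched with a plain polynomial standing in for the neural network (the ODE, noise level, and polynomial degree below are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import least_squares

# Estimate the decay rate a in y' = -a y from noisy observations by
# jointly fitting a trial solution (a degree-5 polynomial in t) and the
# parameter: the residual vector stacks model-equation residuals at
# collocation points with observation misfits, as in the paper's objective.
rng = np.random.default_rng(1)
a_true = 1.5
t_obs = np.linspace(0.0, 2.0, 9)
y_obs = np.exp(-a_true * t_obs) + 0.01 * rng.normal(size=t_obs.size)

t_col = np.linspace(0.0, 2.0, 21)                 # collocation points

def residuals(params):
    a, c = params[0], params[1:]                  # parameter + trial coeffs
    eq = np.polyval(np.polyder(c), t_col) + a * np.polyval(c, t_col)
    misfit = np.polyval(c, t_obs) - y_obs
    return np.concatenate([eq, misfit])

x0 = np.r_[1.0, np.polyfit(t_obs, y_obs, 5)]      # warm-start the trial fit
sol = least_squares(residuals, x0)
print(sol.x[0])                                   # estimated rate, near 1.5
```
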

  15. The assessing method of complete tooth form error based on the spline function

    NASA Astrophysics Data System (ADS)

    Huang, Fugui; Cui, Changcai; Zhang, Rencheng

    2006-11-01

    Having analyzed the shortcomings of the current method for measuring the tooth form error of involute cylindrical gears and the sources of that error, a measurement theory and an implementation method for the complete tooth form error of involute cylindrical gears are proposed. A mathematical model for fitting the actual tooth curve with a cubic spline function is derived and the determination of the boundary conditions is given. The feasibility of the measurement and evaluation method for the complete tooth form error is verified by experiment.

  16. A robust moving mesh method for spectral collocation solutions of time-dependent partial differential equations

    NASA Astrophysics Data System (ADS)

    Subich, Christopher J.

    2015-08-01

    This work extends the machinery of the moving mesh partial differential equation (MMPDE) method to the spectral collocation discretization of time-dependent partial differential equations. Unlike previous approaches which bootstrap the moving grid from a lower-order, finite-difference discretization, this work uses a consistent spectral collocation discretization for both the grid movement problem and the underlying, physical partial differential equation. Additionally, this work develops an error monitor function based on filtering in the spectral domain, which concentrates grid points in areas of locally poor resolution without relying on an assumption of locally steep gradients. This makes the MMPDE method more robust in the presence of rarefaction waves which feature rapid change in higher-order derivatives.

  17. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

    Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solutions of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of the collocation points is constructed. The approximate derivative is then found by a matrix times vector multiply. The effects of several factors on the performance of these methods including the effect of different collocation points are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative are also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
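
    The "derivative matrix" construction described here is classical; for Chebyshev-Gauss-Lobatto points it reduces to a few lines (following Trefethen's well-known formula; the test function and resolution are arbitrary choices):

```python
import numpy as np

# Chebyshev differentiation matrix D on the n+1 Gauss-Lobatto points:
# differentiation of the interpolant becomes a matrix-vector multiply,
# exactly the "derivative matrix" idea described in the abstract.
def cheb(n):
    x = np.cos(np.pi * np.arange(n + 1) / n)      # collocation points
    c = np.ones(n + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))                   # diagonal from row sums
    return D, x

D, x = cheb(16)
f = np.exp(x) * np.sin(5 * x)
df_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print(np.max(np.abs(D @ f - df_exact)))           # spectrally small error
```
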

  18. Spurious Modes in Spectral Collocation Methods with Two Non-Periodic Directions

    NASA Technical Reports Server (NTRS)

    Balachandar, S.; Madabhushi, Ravi K.

    1992-01-01

    Collocation implementation of the Kleiser-Schumann method in geometries with two non-periodic directions is shown to suffer from three spurious modes - line, column and checkerboard - which contaminate the computed pressure field. Corner spurious modes are also present, but they do not affect the evaluation of pressure-related quantities. A simple modification of the influence-matrix inversion efficiently filters out these spurious modes.

  19. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

    A numerical method based on B-spline polynomials was developed to study incompressible flows in cylindrical geometries. A B-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because B-splines have compact support, it is also possible to remove B-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic (the so-called time-evolving flow) greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near-wall region are physical; regions of high radial velocity were detected and appear to be related to high-speed streaks in the boundary layer. Budgets of the Reynolds stress transport equations showed close similarity with those of channel flow. However, contrary to channel flow, the log layer of pipe flow is not homogeneous at the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used; plotting iso-surfaces of the discriminant of the invariants proved to be a good way of identifying vortical eddies in the flow field.
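    As a minimal illustration of the B-spline machinery underlying such methods, the Cox-de Boor recursion evaluates the basis functions; the partition-of-unity check below is a standard property of B-splines, and the function name is ours:

```python
import numpy as np

def bspline_basis(i, p, t, x):
    """Cox-de Boor recursion: value at x of the i-th B-spline of degree p on knots t."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0 if t[i + p] == t[i] else \
        (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, t, x)
    right = 0.0 if t[i + p + 1] == t[i + 1] else \
        (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * bspline_basis(i + 1, p - 1, t, x)
    return left + right

# cubic B-splines on a uniform knot vector form a partition of unity
t = np.arange(12, dtype=float)        # knots 0..11
x = 5.3                               # inside the fully supported region [3, 8]
s = sum(bspline_basis(i, 3, t, x) for i in range(len(t) - 4))
assert abs(s - 1.0) < 1e-12
```

The compact support mentioned in the abstract is visible here: only four cubic basis functions are nonzero at any x, which is what makes it cheap to remove individual B-splines near the pipe axis.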

  20. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra.

    PubMed

    Avila, Gustavo; Carrington, Tucker

    2015-12-01

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates. PMID:26646870

  1. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    NASA Astrophysics Data System (ADS)

    Avila, Gustavo; Carrington, Tucker

    2015-12-01

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  2. A force identification method using cubic B-spline scaling functions

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Luo, Xinjie; Chen, Xuefeng

    2015-02-01

    In force identification, the identified solution may differ seriously from the true force because of noise in the measured data and the ill-posedness of the inverse problem. In this paper, an efficient basis function expansion method based on wavelet multi-resolution analysis, using cubic B-spline scaling functions as basis functions, is proposed for identifying a force history with high accuracy and overcoming this ill-posedness. The unknown force is approximated by a set of translated cubic B-spline scaling functions at a certain level, and the original governing equation of force identification is thereby reformulated as a problem of finding the coefficients of the scaling functions, which is well posed. The proposed method has inherent numerical regularization for the inverse problem through the choice of the level of the scaling functions, since the number of basis functions employed to approximate the identified force depends on that level. A procedure for selecting the optimal level of the cubic B-spline scaling functions based on the condition number of the coefficient matrix is proposed. The validity and applicability of the proposed method are illustrated by two typical examples of Volterra-Fredholm integral equations, both of which are ill-posed problems. Force identification experiments with impact and harmonic forces are conducted on a cantilever beam to compare the accuracy and efficiency of the proposed method with those of the truncated singular value decomposition (TSVD) technique.

  3. A spectral collocation method for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Zang, T. A.; Hussaini, M. Y.

    1984-01-01

    A Fourier-Chebyshev spectral method for the incompressible Navier-Stokes equations is described. It is applicable to a variety of problems including some with fluid properties which vary strongly both in the normal direction and in time. In this fully spectral algorithm, a preconditioned iterative technique is used for solving the implicit equations arising from semi-implicit treatment of pressure, mean advection and vertical diffusion terms. The algorithm is tested by applying it to hydrodynamic stability problems in channel flow and in external boundary layers with both constant and variable viscosity.

  4. A spectral collocation method for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Zang, T. A.; Hussaini, M. Y.

    1985-01-01

    A Fourier-Chebyshev spectral method for the incompressible Navier-Stokes equations is described. It is applicable to a variety of problems including some with fluid properties which vary strongly both in the normal direction and in time. In this fully spectral algorithm, a preconditioned iterative technique is used for solving the implicit equations arising from semi-implicit treatment of pressure, mean advection and vertical diffusion terms. The algorithm is tested by applying it to hydrodynamic stability problems in channel flow and in external boundary layers with both constant and variable viscosity.

  5. A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation

    NASA Technical Reports Server (NTRS)

    Jones, Brandon A.; Anderson, Rodney L.

    2012-01-01

    Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.

  6. A Fourier collocation time domain method for numerically solving Maxwell's equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1991-01-01

    A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
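    The Fourier-transform spatial derivative at the heart of such a scheme can be sketched in a few lines; the semi-implicit time integration and the field collocation are not shown, and the grid size and test function here are illustrative:

```python
import numpy as np

n = 64
L = 2 * np.pi
x = L * np.arange(n) / n                       # periodic grid
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # angular wavenumbers

f = np.sin(3 * x)
# spectral derivative: transform, multiply by i*k, transform back
df = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
assert np.max(np.abs(df - 3 * np.cos(3 * x))) < 1e-10
```

For band-limited fields the derivative is exact to rounding error, which is why the Fourier approach pairs naturally with collocating E and H on the same grid points rather than on interleaved ones.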

  7. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  8. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    SciTech Connect

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-11-20

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L² error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
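    The element-wise collocation idea can be sketched for a single scalar random parameter; this toy version (function name and tolerances are ours) estimates a mean by Gauss-Legendre collocation on one element and on two elements of the parameter space:

```python
import numpy as np

def pcm_mean(f, a, b, q):
    """Mean of f(xi), xi ~ Uniform(a, b), by Gauss-Legendre collocation with q nodes."""
    nodes, weights = np.polynomial.legendre.leggauss(q)
    xi = 0.5 * (b - a) * nodes + 0.5 * (a + b)     # map nodes from [-1, 1] to [a, b]
    return 0.5 * np.sum(weights * f(xi))           # weights sum to 2, so mean = sum / 2

f = np.exp
exact = np.sinh(1.0)                               # E[exp(xi)] for xi ~ U(-1, 1)

# one element over [-1, 1] vs. two equal elements, each carrying probability 1/2
one_elem = pcm_mean(f, -1.0, 1.0, 5)
two_elem = 0.5 * pcm_mean(f, -1.0, 0.0, 5) + 0.5 * pcm_mean(f, 0.0, 1.0, 5)
assert abs(one_elem - exact) < 1e-8
assert abs(two_elem - exact) < 1e-10
```

Splitting the parameter space shrinks the error without increasing the degree of exactness per element, which is the mesh-refinement behavior the abstract analyzes.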

  9. Some Optimal Runge-Kutta Collocation Methods for Stiff Problems and DAEs

    NASA Astrophysics Data System (ADS)

    González-Pinto, S.; Hernández-Abreu, D.; Montijano, J. I.

    2008-09-01

    A new family of implicit Runge-Kutta methods was introduced at ICCAM 2008 (Gent) by the present authors. This family of methods is intended for the numerical solution of stiff problems and DAEs. The s-stage method (for s ≥ 3) has the following features: it is a collocation method depending on a real free parameter λ, has classical convergence order 2s-3, and is strongly A-stable for λ ranging in some nonempty open interval I_s = (-λ_s, 0). In addition, for λ ∈ I_s, all the collocation nodes fall in the interval [0,1]. Moreover, these methods involve a computational cost similar to that of the corresponding counterpart in the Runge-Kutta Radau IIA family (the method having the same classical order) when solving for their stage values. However, our methods have the additional advantage of possessing a higher stage order than the respective Radau IIA counterparts. This circumstance is important when integrating stiff problems, in which case most numerical methods suffer an order reduction. In this talk we discuss how to optimize the free parameter depending on the special features of the kind of stiff problems and DAEs to be solved. This point is highly important in order to make our methods competitive with those of the Radau IIA family.

  10. A time domain collocation method for studying the aeroelasticity of a two dimensional airfoil with a structural nonlinearity

    NASA Astrophysics Data System (ADS)

    Dai, Honghua; Yue, Xiaokui; Yuan, Jianping; Atluri, Satya N.

    2014-08-01

    A time domain collocation method for the study of the motion of a two-dimensional aeroelastic airfoil with a cubic structural nonlinearity is presented. This method first transforms the governing ordinary differential equations into a system of nonlinear algebraic equations (NAEs), which are then solved by a Jacobian-inverse-free NAE solver. Using the aeroelastic airfoil as a prototypical system, the time domain collocation method is shown here to be mathematically equivalent to the well-known high dimensional harmonic balance method. Based on the fact that the high dimensional harmonic balance method is essentially a collocation method in disguise, we clearly explain its aliasing phenomenon. The conventional harmonic balance method is also applied. Previous studies show that the harmonic balance method does not produce aliasing in the framework of solving the Duffing equation; however, we demonstrate that a mathematical type of aliasing occurs in the harmonic balance method for the present self-excited nonlinear dynamical system. In addition, a parameter marching procedure is used to eliminate the effects of aliasing pertaining to the time domain collocation method, and the accuracy of the time domain collocation method is compared with that of the harmonic balance method.

  11. Legendre spectral-collocation method for solving some types of fractional optimal control problems

    PubMed Central

    Sweilam, Nasser H.; Al-Ajami, Tamer M.

    2014-01-01

    In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh–Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937

  12. Numerical Algorithm Based on Haar-Sinc Collocation Method for Solving the Hyperbolic PDEs

    PubMed Central

    Javadi, H. H. S.; Navidi, H. R.

    2014-01-01

    The present study investigates the Haar-Sinc collocation method for the solution of the hyperbolic partial telegraph equations. The advantages of this technique are that not only is the convergence rate of Sinc approximation exponential but the computational speed also is high due to the use of the Haar operational matrices. This technique is used to convert the problem to the solution of linear algebraic equations via expanding the required approximation based on the elements of Sinc functions in space and Haar functions in time with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, we presented four examples through which our claim was confirmed. PMID:25485295

  13. Numerical solution of differential-difference equations in large intervals using a Taylor collocation method

    NASA Astrophysics Data System (ADS)

    Tirani, M. Dadkhah; Sohrabi, F.; Almasieh, H.; Kajani, M. Tavassoli

    2015-10-01

    In this paper, a collocation method based on Taylor polynomials is developed for solving systems of linear differential-difference equations with variable coefficients defined on large intervals. By using Taylor polynomials and their properties to obtain operational matrices, the solution of the differential-difference equation system with given conditions is reduced to the solution of a system of linear algebraic equations. We first divide the large interval into M equal subintervals; Taylor polynomial solutions are then obtained on each subinterval separately. Some numerical examples are given and the results are compared with analytical solutions and other techniques in the literature to demonstrate the validity and applicability of the proposed method.

  14. Legendre spectral-collocation method for solving some types of fractional optimal control problems.

    PubMed

    Sweilam, Nasser H; Al-Ajami, Tamer M

    2015-05-01

    In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh-Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937

  15. Numerical algorithm based on Haar-Sinc collocation method for solving the hyperbolic PDEs.

    PubMed

    Pirkhedri, A; Javadi, H H S; Navidi, H R

    2014-01-01

    The present study investigates the Haar-Sinc collocation method for the solution of the hyperbolic partial telegraph equations. The advantages of this technique are that not only is the convergence rate of Sinc approximation exponential but the computational speed also is high due to the use of the Haar operational matrices. This technique is used to convert the problem to the solution of linear algebraic equations via expanding the required approximation based on the elements of Sinc functions in space and Haar functions in time with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, we presented four examples through which our claim was confirmed. PMID:25485295

  16. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for the spatial derivatives) and collocation techniques (for the time component) to numerically solve the two-dimensional heat equation. We employ, respectively, second-order and fourth-order schemes for the spatial derivatives, and the discretization gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
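    A minimal sketch of the spatial part of such a scheme uses the second-order 5-point Laplacian; here explicit Euler time stepping stands in for the paper's collocation-in-time treatment, and the grid size and mode are illustrative:

```python
import numpy as np

# Heat equation u_t = u_xx + u_yy on the unit square with zero boundary values.
n = 21
h = 1.0 / (n - 1)
dt = 0.2 * h * h                      # within the explicit stability limit
x = np.linspace(0.0, 1.0, n)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # (1,1) eigenmode initial data

for _ in range(200):
    lap = np.zeros_like(u)
    # second-order 5-point Laplacian at interior points
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                       - 4.0 * u[1:-1, 1:-1]) / (h * h)
    u = u + dt * lap

# the (1,1) mode of the heat equation decays like exp(-2*pi^2*t)
t = 200 * dt
exact = np.exp(-2.0 * np.pi ** 2 * t)
assert abs(u[n // 2, n // 2] - exact) < 5e-3
```

The center value tracks the analytical decay rate to a few parts in ten thousand, consistent with a second-order spatial scheme on this grid.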

  17. Sinc-Chebyshev collocation method for a class of fractional diffusion-wave equations.

    PubMed

    Mao, Zhi; Xiao, Aiguo; Yu, Zuguo; Shi, Long

    2014-01-01

    This paper is devoted to investigating the numerical solution for a class of fractional diffusion-wave equations with a variable coefficient where the fractional derivatives are described in the Caputo sense. The approach is based on the collocation technique where the shifted Chebyshev polynomials in time and the sinc functions in space are utilized, respectively. The problem is reduced to the solution of a system of linear algebraic equations. Through the numerical example, the procedure is tested and the efficiency of the proposed method is confirmed. PMID:24977177

  18. Sinc-Chebyshev Collocation Method for a Class of Fractional Diffusion-Wave Equations

    PubMed Central

    Mao, Zhi; Xiao, Aiguo; Yu, Zuguo; Shi, Long

    2014-01-01

    This paper is devoted to investigating the numerical solution for a class of fractional diffusion-wave equations with a variable coefficient where the fractional derivatives are described in the Caputo sense. The approach is based on the collocation technique where the shifted Chebyshev polynomials in time and the sinc functions in space are utilized, respectively. The problem is reduced to the solution of a system of linear algebraic equations. Through the numerical example, the procedure is tested and the efficiency of the proposed method is confirmed. PMID:24977177

  19. Simulating the focusing of light onto 1D nanostructures with a B-spline modal method

    NASA Astrophysics Data System (ADS)

    Bouchon, P.; Chevalier, P.; Héron, S.; Pardo, F.; Pelouard, J.-L.; Haïdar, R.

    2015-03-01

    Focusing light onto nanostructures with spherical lenses is a first step toward enhancing the field and is widely used in applications, in particular for strengthening non-linear effects such as second harmonic generation. Nonetheless, the electromagnetic response of such nanostructures, which have subwavelength patterns, to a focused beam cannot be described by the simple ray-tracing formalism. Here, we present a method to compute the response to a focused beam, based on the B-spline modal method. The simulation of a focused Gaussian beam is obtained through a truncated decomposition into plane waves computed on a single period, which limits the computational burden.

  20. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing

    NASA Technical Reports Server (NTRS)

    Mier Muth, A. M.; Willsky, A. S.

    1978-01-01

    In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.

  1. A meshfree local RBF collocation method for anti-plane transverse elastic wave propagation analysis in 2D phononic crystals

    NASA Astrophysics Data System (ADS)

    Zheng, Hui; Zhang, Chuanzeng; Wang, Yuesheng; Sladek, Jan; Sladek, Vladimir

    2016-01-01

    In this paper, a meshfree or meshless local radial basis function (RBF) collocation method is proposed to calculate the band structures of two-dimensional (2D) anti-plane transverse elastic waves in phononic crystals. Three new techniques are developed for calculating the normal derivative of the field quantity required in the treatment of the boundary conditions, which significantly improve the stability of the local RBF collocation method. The general form of the local RBF collocation method for a unit cell with periodic boundary conditions is proposed, where the continuity conditions on the interface between the matrix and the scatterer are taken into account. The band structures or dispersion relations can be obtained by solving the eigenvalue problem and sweeping the boundary of the irreducible first Brillouin zone. The proposed local RBF collocation method is verified against corresponding results obtained with the finite element method (FEM). For different acoustic impedance ratios, various scatterer shapes, scatterer arrangements (lattice forms) and material properties, numerical examples are presented and discussed to show the performance and efficiency of the developed local RBF collocation method, compared to the FEM, for computing the band structures of 2D phononic crystals.

  2. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0, 1) to obtain that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points, with parameters related to the order of the equations, can ease the implementation. We highlight that the use of Bateman's fractional integral formulas, together with fast transforms between Jacobi polynomials with different parameters, is essential for our algorithm development.

  3. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    PubMed Central

    Motsa, S. S.; Magagula, V. M.; Sibanda, P.

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252

  4. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    PubMed

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252

  5. Particle tracking and multispectral collocation method for particle-to-particle binding assays.

    PubMed

    Rogacs, Anita; Santiago, Juan G

    2014-01-01

    We present a simple-to-implement method for analyzing images of randomly distributed particles transported through a fluidic channel. We term this method particle imaging, tracking and collocation (PITC). Our method uses off-the-shelf optics including a CCD camera, an epifluorescence microscope, and a dual-view color separator to image freely suspended particles in a wide variety of microchannels (with optical access for image collection). The particles can be transported via electrophoresis and/or pressure-driven flow to increase the throughput of analysis. We here describe the implementation of the algorithm and demonstrate and validate three of its capabilities: (1) identification of particle coordinates, (2) tracking of particle motion, and (3) monitoring of particle interaction via collocation analysis. We use Monte Carlo simulations for validation and optimization of the input parameters. We also present an experimental demonstration of the analysis on challenging image data, including a flow of two interacting Brownian particle populations. In the latter example, we use PITC to detect the presence of target DNA by monitoring the hybridization-induced binding between the two populations of beads, each functionalized with DNA probes complementary to the target molecule. PMID:24266609

  6. An Adaptive Wavelet Collocation Method for Fluid--Structure Interaction - Part II

    NASA Astrophysics Data System (ADS)

    Kevlahan, Nicholas K. R.; Vasilyev, Oleg V.

    2002-11-01

    This is the second of two talks which describe a new way of performing direct numerical simulation of fluid--structure interaction at large Reynolds numbers. Adaptive second generation wavelet collocation tackles the problem of efficiently resolving a large Reynolds number flow in complicated geometries (where grid resolution should depend on both time and location), while Brinkman penalization efficiently implements moving solid boundaries of arbitrary complexity. Since the method is based on the primitive variables formulation of the Navier--Stokes equations, we need to solve a Poisson equation for the pressure at each time step. The wavelet basis provides a natural adaptive multilevel framework for a fast Poisson solver, and we have developed such a solver as part of this work. In Part I we describe the details of hybrid wavelet collocation - Brinkman penalization method for solving the Navier--Stokes equations. Details on wavelet-based Poisson solver will be given as well. In part II we present the results of the application of the method to the two-dimensional fluid-elastic instability of a cylinder in cross-flow at Reynolds numbers from 100 to 9500. The cylinder is either fixed, or its mechanical response is modelled as a forced, damped simple harmonic oscillator.

  7. An Adaptive Wavelet Collocation Method for Fluid--Structure Interaction - Part I

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Kevlahan, Nicholas K. R.

    2002-11-01

This is the first of two talks which describe a new way of performing direct numerical simulation of fluid--structure interaction at large Reynolds numbers. Adaptive second generation wavelet collocation tackles the problem of efficiently resolving a large Reynolds number flow in complicated geometries (where grid resolution should depend on both time and location), while Brinkman penalization efficiently implements moving solid boundaries of arbitrary complexity. Since the method is based on the primitive variables formulation of the Navier--Stokes equations, we need to solve a Poisson equation for the pressure at each time step. The wavelet basis provides a natural adaptive multilevel framework for a fast Poisson solver, and we have developed such a solver as part of this work. In Part I we describe the details of the hybrid wavelet collocation-Brinkman penalization method for solving the Navier--Stokes equations. Details of the wavelet-based Poisson solver will be given as well. In Part II we present the results of the application of the method to the two-dimensional fluid-elastic instability of a cylinder in cross-flow at Reynolds numbers from 100 to 9500. The cylinder is either fixed, or its mechanical response is modelled as a forced, damped simple harmonic oscillator.
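The penalization idea these talks build on can be illustrated in one dimension, away from any wavelet machinery (a minimal sketch assuming a 1D heat equation, an arbitrary obstacle location and penalization parameter η; none of these choices come from the talks):

```python
import numpy as np

# Sketch of Brinkman (volume) penalization on a plain uniform 1D grid: the
# solid obstacle is imposed by the damping term -(chi/eta)*u rather than by a
# boundary-fitted mesh. The penalty step is treated implicitly for stability.
n, nu, eta = 201, 0.01, 1e-4
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
chi = ((x > 0.4) & (x < 0.6)).astype(float)   # mask: 1 inside the solid
u = np.sin(np.pi * x)                         # initial temperature field

dt = 0.2 * dx**2 / nu                         # explicit diffusion limit
for _ in range(500):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = (u + dt * nu * lap) / (1.0 + dt * chi / eta)   # implicit penalty step

# Deep inside the obstacle the penalization drives the field to ~zero
print(bool(abs(u[n // 2]) < 1e-3))  # -> True
```

Smaller η enforces the solid boundary more strictly at the cost of a thinner penalization boundary layer to resolve, which is where the adaptive wavelet grid of the talks earns its keep.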

  8. Free vibration analysis of stiffened plates with arbitrary planform by the general spline finite strip method

    NASA Astrophysics Data System (ADS)

Sheikh, A. H.; Mukhopadhyay, M.

    1993-03-01

The spline finite strip method, which has long been applied to the vibration analysis of bare plates, is extended in this paper to stiffened plates having arbitrary shapes. Both concentrically and eccentrically stiffened plates have been analyzed. The main elegance of the formulation lies in the treatment of the stiffeners: the stiffeners can be placed anywhere within the plate strip and need not be placed on the nodal lines. Stiffened plates having various shapes, boundary conditions, and various dispositions of stiffeners have been analyzed by the proposed approach. Comparison with published results indicates excellent agreement.

  9. An adaptive wavelet collocation method for the solution of partial differential equations on the sphere

    NASA Astrophysics Data System (ADS)

    Mehra, Mani; Kevlahan, Nicholas K.-R.

    2008-05-01

    A dynamic adaptive numerical method for solving partial differential equations on the sphere is developed. The method is based on second generation spherical wavelets on almost uniform nested spherical triangular grids, and is an extension of the adaptive wavelet collocation method to curved manifolds. Wavelet decomposition is used for grid adaption and interpolation. An O(N) hierarchical finite difference scheme based on the wavelet multilevel decomposition is used to approximate Laplace-Beltrami, Jacobian and flux-divergence operators. The accuracy and efficiency of the method is demonstrated using linear and nonlinear examples relevant to geophysical flows. Although the present paper considers only the sphere, the strength of this new method is that it can be extended easily to other curved manifolds by considering appropriate coarse approximations to the desired manifold (here we used the icosahedral approximation to the sphere at the coarsest level).

  10. A Chebyshev Collocation Method for Moving Boundaries, Heat Transfer, and Convection During Directional Solidification

    NASA Technical Reports Server (NTRS)

    Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.

    1994-01-01

Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semi-conductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method, and a Picard-type iterative scheme.
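The basic building block of such a solver, collocation with a Chebyshev differentiation matrix, can be sketched on a model two-point boundary value problem (following the standard construction popularized by Trefethen; the test problem u'' = -π² sin(πx) with homogeneous Dirichlet data is an assumption for illustration, not the solidification model):

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and points x (standard construction)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal from row-sum identity
    return D, x

n = 24
D, x = cheb(n)
D2 = D @ D                                # second-derivative operator
f = -np.pi**2 * np.sin(np.pi * x)         # exact solution is u = sin(pi x)
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], f[1:-1])  # impose u(+-1) = 0

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(bool(err < 1e-7))  # spectral accuracy at modest n -> True
```

Dirichlet conditions are imposed simply by deleting the boundary rows and columns; with a few dozen points the error is already many orders below a comparable finite-difference result.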

  11. A novel monocular visual navigation method for cotton-picking robot based on horizontal spline segmentation

    NASA Astrophysics Data System (ADS)

    Xu, ShengYong; Wu, JuanJuan; Zhu, Li; Li, WeiHao; Wang, YiTian; Wang, Na

    2015-12-01

Visual navigation is a fundamental technique for an intelligent cotton-picking robot. The cotton field contains many plant components and ground cover, which makes furrow recognition and trajectory extraction difficult. In this paper, a new field navigation path extraction method is presented. Firstly, the color image in RGB color space is pre-processed by the OTSU threshold algorithm and noise filtering. Secondly, the binary image is divided into numerous horizontal spline areas. In each area, the connected regions near the image's vertical center line are calculated by the two-pass algorithm, and the center points of these connected regions serve as candidate points for the navigation path. Thirdly, a series of navigation points is determined iteratively on the principle of the nearest distance between candidate points in neighboring splines. Finally, the navigation path equation is fitted to the navigation points using the least squares method. Experiments show that this method is accurate and effective, and suitable for visual navigation in the complex environment of a cotton field at different growth phases.
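The final fitting step can be sketched with hypothetical pixel data (the coordinates below are invented, and a straight path x = a·y + b is assumed; the paper's spline segmentation and candidate selection are not reproduced):

```python
import numpy as np

# Least-squares fit of a straight navigation line x(y) = a*y + b through the
# per-spline center points (hypothetical pixel coordinates).
ys = np.array([10.0, 30.0, 50.0, 70.0, 90.0])     # spline-row centers (px)
xs = np.array([102.0, 99.0, 101.0, 98.0, 100.0])  # furrow center candidates (px)
a, b = np.polyfit(ys, xs, deg=1)                  # slope and intercept
print(round(float(a), 3), round(float(b), 3))     # -> -0.025 101.25
```

A nearly zero slope here would tell the robot the furrow runs straight ahead, slightly offset from image column 101.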

  12. Second-Generation Wavelet Collocation Method for the Solution of Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Bowman, Christopher

    2000-12-01

An adaptive numerical method for solving partial differential equations is developed. The method is based on a new class of second-generation wavelets. Wavelet decomposition is used for grid adaptation and interpolation, while a new O(N) hierarchical finite difference scheme, which takes advantage of wavelet multilevel decomposition, is used for derivative calculations. The treatment of nonlinear terms and general boundary conditions is a straightforward task due to the collocation nature of the algorithm. In this paper we demonstrate the algorithm for one particular choice of second-generation wavelets, namely lifted interpolating wavelets on an interval with uniform (regular) sampling. The main advantage of using second-generation wavelets is that wavelets can be custom designed for complex domains and irregular sampling. Thus, the strength of the new method is that it can be easily extended to the whole class of second-generation wavelets, leaving the freedom and flexibility to choose the wavelet basis depending on the application.
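The grid-adaptation mechanism behind such methods can be sketched in one dimension with the simplest interpolating (lifting) prediction: an odd-indexed point stays in the adaptive grid only if its value deviates from linear interpolation of its even neighbours by more than a threshold (the grid size, test function, and threshold are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# One level of the simplest lifted interpolating wavelet: predict each
# odd-indexed sample from its even neighbours; the prediction error is the
# detail coefficient. Large details flag regions needing fine resolution.
x = np.linspace(0.0, 1.0, 129)
u = np.tanh(50.0 * (x - 0.5))                 # front-like solution

even, odd = u[::2], u[1::2]
detail = odd - 0.5 * (even[:-1] + even[1:])   # prediction error at odd points
keep = np.abs(detail) > 1e-3                  # threshold -> retained points

print(int(keep.sum()), "of", detail.size)     # only points near the front survive
```

Applying this recursively over dyadic levels yields the multilevel adaptive grid on which the collocation scheme computes derivatives.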

  13. An iterative finite-element collocation method for parabolic problems using domain decomposition

    SciTech Connect

    Curran, M.C.

    1992-11-01

Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two-dimensional, time-dependent advection-diffusion problems.

  14. An iterative finite-element collocation method for parabolic problems using domain decomposition

    SciTech Connect

    Curran, M.C.

    1992-01-01

Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two-dimensional, time-dependent advection-diffusion problems.

  15. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
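The interpolate-and-subtract idea can be sketched with SciPy on a synthetic spectrum (the continuum, peak, and node positions are invented for illustration; the paper's smooth fitting and discriminant formulations for node selection are not reproduced):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Cubic-spline background estimation on a synthetic EDXRF-like spectrum:
# interpolate through off-peak nodes, then subtract the spline background.
E = np.linspace(0.0, 10.0, 501)                       # energy axis (keV)
background = 50.0 * np.exp(-0.2 * E)                  # smooth continuum
peak = 200.0 * np.exp(-0.5 * ((E - 6.4) / 0.08)**2)   # Fe K-alpha-like line
spectrum = background + peak

nodes = np.array([0.0, 1.5, 3.0, 4.5, 5.5, 7.5, 9.0, 10.0])  # off-peak nodes
bg = CubicSpline(nodes, np.interp(nodes, E, spectrum))(E)    # spline background
net = spectrum - bg                                          # net peak spectrum

print(bool(np.max(np.abs(bg - background)) < 0.5))  # continuum recovered -> True
```

The quality of the subtraction hinges entirely on the node placement, which is exactly why the paper devotes its attention to choosing interpolation nodes reliably.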

  16. Radial collocation methods for the onset of convection in rotating spheres

    NASA Astrophysics Data System (ADS)

    Sánchez, J.; Garcia, F.; Net, M.

    2016-03-01

The viability of using collocation methods in radius and spherical harmonics in the angular variables to calculate convective flows in full spherical geometry is examined. As a test problem, the stability of the conductive state of a self-gravitating fluid sphere subject to rotation and internal heating is considered. A study of the behavior of different radial meshes previously used by several authors in polar coordinates, with or without the origin included, is first performed. The presence of spurious modes due to the treatment of the singularity at the origin, to the spherical harmonics truncation, and to the initialization of the eigenvalue solver is shown, and ways to eliminate them are presented. Finally, to show the usefulness of the method, the neutral stability curves at very high Taylor and moderate and small Prandtl numbers are calculated and shown.

  17. A new method for solving fixed geodetic boundary-value problem based on harmonic splines

    NASA Astrophysics Data System (ADS)

    Safari, Abdolreza; Sharifi, Mohammad Ali

    2010-05-01

The determination of the Earth's gravity field has various applications in geodesy and geophysics. The gravity field is determined via the solution of a boundary value problem (BVP). High-quality gravimetric data at the Earth's surface are available and provide the necessary boundary data for this BVP. Because of precise GNSS-based positioning at the gravimetric stations, the boundary is assumed to be a fixed surface. In this paper, a new method to solve the fixed BVP based on harmonic splines is discussed. The algorithmic steps can be described as follows: (i) remove the effect of a high degree/order ellipsoidal harmonic expansion and of the centrifugal field at the observation point on the Earth's surface; (ii) remove the effect of residual terrain at the observation point; (iii) find an approximation to the gravity disturbances on the Earth's surface using harmonic splines; (iv) restore the effect of the reference field and residual terrain at the surface of the Earth. The new methodology is successfully tested by computing the surface potential over western Iran.

  18. Study on spline wavelet finite-element method in multi-scale analysis for foundation

    NASA Astrophysics Data System (ADS)

    Xu, Qiang; Chen, Jian-Yun; Li, Jing; Xu, Gang; Yue, Hong-Yuan

    2013-10-01

A new finite element method (FEM) based on B-spline wavelets on the interval (BSWI) is proposed. By analyzing the scaling functions of BSWI in one dimension, the basic formula for the 2D BSWI FEM is deduced, and 2D elements with 7 and 10 nodes are constructed from it. Using these elements, multiscale numerical models of a foundation subjected to a harmonic periodic load and of a foundation excited by external and internal dynamic loads are studied. The results show that the proposed finite elements have higher precision than traditional 4-node elements and describe the propagation of stress waves well whether the foundation model is excited by an external or an internal dynamic load. The proposed elements can also be used to connect elements of different scales, and they retain high precision in multi-scale structural analysis.

  19. A spline finite strip method for analysing local and distortional buckling of thin-walled beams under arbitrary loading

    NASA Astrophysics Data System (ADS)

Van Erp, G. M.; Menken, C. M.

    1989-05-01

As part of a research project concerning the interactive buckling of thin-walled beams, a spline finite strip program was developed to determine the buckling loads and modes of these structural elements under arbitrary loading. In this paper, the theory is briefly outlined and a typical example is presented. The influence of the number of strips in a cross section and the number of subdivisions along the length, on the local and distortional buckling loads of folded plate structures loaded in bending and/or shear was studied. The buckling load and mode of a simply supported I-beam loaded by a concentrated force is determined with the spline finite strip method. The example indicates that the spline finite strip method is an efficient tool for analyzing the local and distortional buckling of flat plate assemblies loaded in bending and/or shear. The simplicity of the conventional finite strip method is preserved, while the problems of dealing with non-periodic buckling modes, shear and non-simple support are eliminated. Due to the high order of approximation, which is achieved with only four degrees of freedom per section of knot, the spline finite strip method displays considerable computing economy compared with the standard finite element methods.

  20. Elastic wave propagation in bars of arbitrary cross section: a generalized Fourier expansion collocation method.

    PubMed

    Lesage, Jonathan C; Bond, Jill V; Sinclair, Anthony N

    2014-09-01

    The problem of elastic wave propagation in an infinite bar of arbitrary cross section is studied via a generalized version of the Fourier expansion collocation method. In the current formulation, the exact three dimensional solution to Navier's equation in cylindrical coordinates is used to obtain the boundary traction vector as a periodic, piecewise continuous/differentiable function of the angular coordinate. Traction free conditions are then met by setting the Fourier coefficients of the boundary traction vector to zero without approximating the bounding surface by multi-sided polygons as in the method presented by Nagaya. The method is derived for a general cross section with no axial planes of symmetry. Using the general formulation it is shown that the symmetric and asymmetric modes decouple for cross sections having one axial plane of symmetry. An efficient algorithm for computing dispersion curves based on the current method is presented and used to obtain the fundamental longitudinal and flexural wave speeds for a bar of elliptical cross section. The results are compared to those obtained by previous researchers using exact and approximate treatments. PMID:25190374

  1. A boundary collocation meshfree method for the treatment of Poisson problems with complex morphologies

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Mai, Weijie; Liang, Bowen; Buchheit, Rudolph G.

    2015-01-01

    A new meshfree method based on a discrete transformation of Green's basis functions is introduced to simulate Poisson problems with complex morphologies. The proposed Green's Discrete Transformation Method (GDTM) uses source points that are located along a virtual boundary outside the problem domain to construct the basis functions needed to approximate the field. The optimal number of Green's functions source points and their relative distances with respect to the problem boundaries are evaluated to obtain the best approximation of the partition of unity condition. A discrete transformation technique together with the boundary point collocation method is employed to evaluate the unknown coefficients of the solution series via satisfying the problem boundary conditions. A comprehensive convergence study is presented to investigate the accuracy and convergence rate of the GDTM. We will also demonstrate the application of this meshfree method for simulating the conductive heat transfer in a heterogeneous materials system and the dissolved aluminum ions concentration in the electrolyte solution formed near a passive corrosion pit.

  2. Chebyshev collocation spectral lattice Boltzmann method for simulation of low-speed flows

    NASA Astrophysics Data System (ADS)

    Hejranfar, Kazem; Hajihassanpour, Mahya

    2015-01-01

In this study, the Chebyshev collocation spectral lattice Boltzmann method (CCSLBM) is developed and assessed for the computation of low-speed flows. Both steady and unsteady flows are considered here. The discrete Boltzmann equation with the Bhatnagar-Gross-Krook approximation based on the pressure distribution function is considered and the space discretization is performed by the Chebyshev collocation spectral method to achieve a highly accurate flow solver. To provide accurate unsteady solutions, the time integration of the temporal term in the lattice Boltzmann equation is made by the fourth-order Runge-Kutta scheme. To achieve numerical stability and accuracy, physical boundary conditions based on the spectral solution of the governing equations implemented on the boundaries are used. An iterative procedure is applied to provide consistent initial conditions for the distribution function and the pressure field for the simulation of unsteady flows. The main advantage of using the CCSLBM over other high-order accurate lattice Boltzmann method (LBM)-based flow solvers is the decay of the error at exponential rather than at polynomial rates. Note also that the CCSLBM applied does not need any numerical dissipation or filtering for the solution to be stable, leading to highly accurate solutions. Three two-dimensional (2D) test cases are simulated herein that are a regularized cavity, the Taylor vortex problem, and doubly periodic shear layers. The results obtained for these test cases are thoroughly compared with the analytical and available numerical results and show excellent agreement. The computational efficiency of the proposed solution methodology based on the CCSLBM is also examined by comparison with those of the standard streaming-collision (classical) LBM and two finite-difference LBM solvers. The study indicates that the CCSLBM provides more accurate and efficient solutions than these LBM solvers in terms of CPU and memory usage and an exponential convergence is achieved rather than polynomial rates. The solution methodology proposed, the CCSLBM, is also extended to three dimensions and a 3D regularized cavity is simulated; the corresponding results are presented and validated. Indications are that the CCSLBM developed and applied herein is robust, efficient, and accurate for computing 2D and 3D low-speed flows. Note also that high-accuracy solutions obtained by applying the CCSLBM can be used as benchmark solutions for the assessment of other LBM-based flow solvers.

  3. Chebyshev collocation spectral lattice Boltzmann method for simulation of low-speed flows.

    PubMed

    Hejranfar, Kazem; Hajihassanpour, Mahya

    2015-01-01

In this study, the Chebyshev collocation spectral lattice Boltzmann method (CCSLBM) is developed and assessed for the computation of low-speed flows. Both steady and unsteady flows are considered here. The discrete Boltzmann equation with the Bhatnagar-Gross-Krook approximation based on the pressure distribution function is considered and the space discretization is performed by the Chebyshev collocation spectral method to achieve a highly accurate flow solver. To provide accurate unsteady solutions, the time integration of the temporal term in the lattice Boltzmann equation is made by the fourth-order Runge-Kutta scheme. To achieve numerical stability and accuracy, physical boundary conditions based on the spectral solution of the governing equations implemented on the boundaries are used. An iterative procedure is applied to provide consistent initial conditions for the distribution function and the pressure field for the simulation of unsteady flows. The main advantage of using the CCSLBM over other high-order accurate lattice Boltzmann method (LBM)-based flow solvers is the decay of the error at exponential rather than at polynomial rates. Note also that the CCSLBM applied does not need any numerical dissipation or filtering for the solution to be stable, leading to highly accurate solutions. Three two-dimensional (2D) test cases are simulated herein that are a regularized cavity, the Taylor vortex problem, and doubly periodic shear layers. The results obtained for these test cases are thoroughly compared with the analytical and available numerical results and show excellent agreement. The computational efficiency of the proposed solution methodology based on the CCSLBM is also examined by comparison with those of the standard streaming-collision (classical) LBM and two finite-difference LBM solvers. The study indicates that the CCSLBM provides more accurate and efficient solutions than these LBM solvers in terms of CPU and memory usage and an exponential convergence is achieved rather than polynomial rates. The solution methodology proposed, the CCSLBM, is also extended to three dimensions and a 3D regularized cavity is simulated; the corresponding results are presented and validated. Indications are that the CCSLBM developed and applied herein is robust, efficient, and accurate for computing 2D and 3D low-speed flows. Note also that high-accuracy solutions obtained by applying the CCSLBM can be used as benchmark solutions for the assessment of other LBM-based flow solvers. PMID:25679733

  4. High-order numerical solutions using cubic splines

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to Burgers' equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.
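The flavor of fourth-order accuracy on a uniform mesh can be demonstrated with a closely related compact (Numerov-type) relation for u'' = f, used here as a stand-in rather than the authors' exact spline reformulation:

```python
import numpy as np

def solve(n):
    """Solve u'' = f, u(0) = u(1) = 0 with the compact fourth-order relation
    (u_{i-1} - 2 u_i + u_{i+1})/h^2 = (f_{i-1} + 10 f_i + f_{i+1})/12."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = -np.pi**2 * np.sin(np.pi * x)      # exact solution is u = sin(pi x)
    m = n - 1                              # number of interior unknowns
    A = (np.diag(-2.0 * np.ones(m)) +
         np.diag(np.ones(m - 1), 1) +
         np.diag(np.ones(m - 1), -1)) / h**2
    rhs = (f[:-2] + 10.0 * f[1:-1] + f[2:]) / 12.0
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    return np.max(np.abs(u - np.sin(np.pi * x)))

e1, e2 = solve(16), solve(32)
print(round(float(np.log2(e1 / e2)), 1))   # observed order of accuracy -> 4.0
```

Halving the mesh size reduces the error by a factor of about sixteen, the fourth-order behavior the abstract claims for the uniform-mesh spline procedure.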

  5. The adaptive wavelet collocation method and its application in front simulation

    NASA Astrophysics Data System (ADS)

    Huang, Wenyu; Wu, Rongsheng; Fang, Juan

    2010-05-01

    The adaptive wavelet collocation method (AWCM) is a variable grid technology for solving partial differential equations (PDEs) with high singularities. Based on interpolating wavelets, the AWCM adapts the grid so that a higher resolution is automatically attributed to domain regions with high singularities. Accuracy problems with the AWCM have been reported in the literature, and in this paper problems of efficiency with the AWCM are discussed in detail through a simple one-dimensional (1D) nonlinear advection equation whose analytic solution is easily obtained. A simple and efficient implementation of the AWCM is investigated. Through studying the maximum errors at the moment of frontogenesis of the 1D nonlinear advection equation with different initial values and a comparison with the finite difference method (FDM) on a uniform grid, the AWCM shows good potential for modeling the front efficiently. The AWCM is also applied to a two-dimensional (2D) unbalanced frontogenesis model in its first attempt at numerical simulation of a meteorological front. Some important characteristics about the model are revealed by the new scheme.

  6. Thermoacoustic wave propagation modeling using a dynamically adaptive wavelet collocation method

    SciTech Connect

    Vasilyev, O.V.; Paolucci, S.

    1996-12-31

When a localized region of a solid wall surrounding a compressible medium is subjected to a sudden temperature change, the medium in the immediate neighborhood of that region expands. This expansion generates pressure waves. These thermally-generated waves are referred to as thermoacoustic (TAC) waves. The main interest in thermoacoustic waves is motivated by their ability to enhance heat transfer by inducing convective motion away from the heated area. Thermoacoustic wave propagation in a two-dimensional rectangular cavity is studied numerically. The thermoacoustic waves are generated by raising the temperature locally at the walls. The waves, which decay at large time due to thermal and viscous diffusion, propagate and reflect from the walls creating complicated two-dimensional patterns. The accuracy of numerical simulation is ensured by using a highly accurate, dynamically adaptive, multilevel wavelet collocation method, which allows local refinements to adapt to local changes in solution scales. Subsequently, high resolution computations are performed only in regions of large gradients. The computational cost of the method is independent of the dimensionality of the problem and is O(N), where N is the total number of collocation points.

  7. Membrane covered duct lining for high-frequency noise attenuation: prediction using a Chebyshev collocation method.

    PubMed

    Huang, Lixi

    2008-11-01

    A spectral method of Chebyshev collocation with domain decomposition is introduced for linear interaction between sound and structure in a duct lined with flexible walls backed by cavities with or without a porous material. The spectral convergence is validated by a one-dimensional problem with a closed-form analytical solution, and is then extended to the two-dimensional configuration and compared favorably against a previous method based on the Fourier-Galerkin procedure and a finite element modeling. The nonlocal, exact Dirichlet-to-Neumann boundary condition is embedded in the domain decomposition scheme without imposing extra computational burden. The scheme is applied to the problem of high-frequency sound absorption by duct lining, which is normally ineffective when the wavelength is comparable with or shorter than the duct height. When a tensioned membrane covers the lining, however, it scatters the incident plane wave into higher-order modes, which then penetrate the duct lining more easily and get dissipated. For the frequency range of f=0.3-3 studied here, f=0.5 being the first cut-on frequency of the central duct, the membrane cover is found to offer an additional 0.9 dB attenuation per unit axial distance equal to half of the duct height. PMID:19045780

  8. Modeling guided elastic waves in generally anisotropic media using a spectral collocation method.

    PubMed

    Quintanilla, F Hernando; Lowe, M J S; Craster, R V

    2015-03-01

    Guided waves are now well established for some applications in the non-destructive evaluation of structures and offer potential for deployment in a vast array of other cases. For their development, it is important to have reliable and accurate information about the modes that propagate for particular waveguide structures. Essential information that informs choices of mode transducer, operating frequencies, and interpretation of signals, among other issues, is provided by the dispersion curves of different modes within various combinations of geometries and materials. In this paper a spectral collocation method is successfully used to handle the more complicated and realistic waveguide problems that are required in non-destructive evaluation; many pitfalls and limitations found in root-finding routines based on the partial wave method are overcome by using this approach. The general cases presented cover anisotropic homogeneous perfectly elastic materials in flat and cylindrical geometry. Non-destructive evaluation applications include complex waveguide structures, such as single or multi-layered fiber composites, lined, bonded and buried structures. For this reason, arbitrarily multi-layered systems with both solid and fluid layers are also addressed as well as the implementation of interface models of imperfect boundary conditions between layers. PMID:25786933

  9. Estimation of river pollution source using the space-time radial basis collocation method

    NASA Astrophysics Data System (ADS)

    Li, Zi; Mao, Xian-Zhong; Li, Tak Sing; Zhang, Shiyan

    2016-02-01

    River contaminant source identification problems can be formulated as an inverse model to estimate the missing source release history from the observed contaminant plume. In this study, the identification of pollution sources in rivers, where strong advection is dominant, is solved by the global space-time radial basis collocation method (RBCM). To search for the optimal shape parameter and scaling factor which strongly determine the accuracy of the RBCM method, a new cost function based on the residual errors of not only the observed data but also the specified governing equation, the initial and boundary conditions, was constructed for the k-fold cross-validation technique. The performance of three global radial basis functions, Hardy's multiquadric, inverse multiquadric and Gaussian, were also compared in the test cases. The numerical results illustrate that the new cost function is a good indicator to search for near-optimal solutions. Application to a real polluted river shows that the source release history is reasonably recovered, demonstrating that the RBCM with the k-fold cross-validation is a powerful tool for source identification problems in advection-dominated rivers.
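A minimal Kansa-style radial basis collocation solve shows the main ingredients, namely the basis matrix, its differentiated counterpart, and boundary collocation rows (the multiquadric basis, fixed shape parameter, and test problem below are assumptions; the paper's space-time formulation and k-fold cross-validation search are not reproduced):

```python
import numpy as np

# Radial basis collocation sketch with the multiquadric phi(r) = sqrt(r^2 + c^2).
# Test problem: u'' = -sin(x), u(0) = u(pi) = 0, exact solution u = sin(x).
x = np.linspace(0.0, np.pi, 25)               # collocation points = centres
c = 0.5                                        # fixed shape parameter (assumed)
r2 = (x[:, None] - x[None, :])**2
phi = np.sqrt(r2 + c**2)                       # basis values at collocation points
phi_xx = c**2 / (r2 + c**2)**1.5               # exact d^2 phi / dx^2

A = phi_xx.copy()
A[0, :], A[-1, :] = phi[0, :], phi[-1, :]      # boundary collocation rows
b = -np.sin(x)
b[0] = b[-1] = 0.0
lam = np.linalg.solve(A, b)                    # expansion coefficients

err = np.max(np.abs(phi @ lam - np.sin(x)))
print(bool(err < 1e-2))  # -> True
```

The accuracy is quite sensitive to the shape parameter c, which is precisely why the paper builds a cross-validated cost function to choose it.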

  10. The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data.

    SciTech Connect

    Webster, Clayton; Tempone, Raul; Nobile, Fabio

    2007-12-01

    This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.
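
    The point of the Smolyak construction is that the deterministic sample set grows far more slowly with the random dimension than a full tensor grid. A small counting sketch makes this concrete, using nested Clenshaw-Curtis nodes; the function names and the chosen dimension/level are illustrative, not taken from the paper.

```python
import numpy as np
from itertools import product

def cc_nodes(level):
    """Nested Clenshaw-Curtis nodes on [-1, 1]."""
    if level == 0:
        return np.array([0.0])
    n = 2**level + 1
    return np.cos(np.pi * np.arange(n) / (n - 1))

def sparse_grid(dim, max_level):
    """Distinct points of an isotropic Smolyak sparse grid: the union of
    tensor grids over multi-indices l with |l|_1 <= max_level."""
    pts = set()
    for levels in product(range(max_level + 1), repeat=dim):
        if sum(levels) <= max_level:
            for p in product(*(cc_nodes(l) for l in levels)):
                # Round so that nested nodes coincide exactly in the set.
                pts.add(tuple(round(float(v), 12) for v in p))
    return pts

dim, level = 4, 3
n_sparse = len(sparse_grid(dim, level))
n_tensor = (2**level + 1)**dim           # full tensor grid with the same 1-D rule
```

    Each grid point corresponds to one uncoupled deterministic solve, so the point count is the dominant cost.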

  11. A self-consistent estimate for linear viscoelastic polycrystals with internal variables inferred from the collocation method

    NASA Astrophysics Data System (ADS)

    Vu, Q. H.; Brenner, R.; Castelnau, O.; Moulinec, H.; Suquet, P.

    2012-03-01

    The correspondence principle is customarily used with the Laplace-Carson transform technique to tackle the homogenization of linear viscoelastic heterogeneous media. The main drawback of this method lies in the fact that the whole stress and strain histories have to be considered to compute the mechanical response of the material during a given macroscopic loading. Following a remark of Mandel (1966 Mécanique des Milieux Continus (Paris, France: Gauthier-Villars)), Ricaud and Masson (2009 Int. J. Solids Struct. 46 1599-1606) have shown the equivalence between the collocation method used to invert Laplace-Carson transforms and an internal variables formulation. In this paper, this new method is developed for the case of polycrystalline materials with general anisotropic properties for local and macroscopic behavior. Applications are provided for the case of constitutive relations accounting for glide of dislocations on particular slip systems. It is shown that the method yields accurate results that perfectly match the standard collocation method and reference full-field results obtained with an FFT numerical scheme. The formulation is then extended to the case of time- and strain-dependent viscous properties, leading to the incremental collocation method (ICM) that can be solved efficiently by a step-by-step procedure. Specifically, the introduction of isotropic and kinematic hardening at the slip system scale is considered.

  12. Shape Optimization for Drag Reduction in Linked Bodies using Evolution Strategies and the Hybrid Wavelet Collocation - Brinkman Penalization Method

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Gazzola, Mattia; Koumoutsakos, Petros

    2010-11-01

    In this talk we discuss preliminary results for the use of a hybrid wavelet collocation - Brinkman penalization approach for shape optimization for drag reduction in flows past linked bodies. This optimization relies on the Adaptive Wavelet Collocation Method along with the Brinkman penalization technique and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The adaptive wavelet collocation method tackles the problem of efficiently resolving a fluid flow on a dynamically adaptive computational grid, while a level set approach is used to describe the body shape and the Brinkman volume penalization allows for an easy variation of flow geometry without requiring body-fitted meshes. We perform 2D simulations of linked bodies in order to investigate whether flat geometries are optimal for drag reduction. To accelerate the costly cost-function evaluations we exploit the inherent parallelism of evolution strategies (ES) and extend the CMA-ES implementation to a multi-host framework. This framework allows for an easy distribution of the cost function evaluations across several parallel architectures and is not limited to one computing facility. The resulting optimal shapes are geometrically consistent with the shapes that have been obtained in the pioneering wind tunnel experiments for drag reduction using Evolution Strategies by Ingo Rechenberg.
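
    A rough sense of how an evolution strategy drives such a search can be had from a minimal (1+1)-ES with a 1/5th-success-style step-size rule. This is only a didactic stand-in: the work above uses the far more capable CMA-ES, and the quadratic bowl below merely substitutes for an expensive drag evaluation.

```python
import numpy as np

def one_plus_one_es(cost, x0, sigma=0.3, iters=200, seed=1):
    """Minimal (1+1) evolution strategy: mutate, keep the better point,
    and adapt the step size multiplicatively on success/failure."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.size)
        fy = cost(y)
        if fy <= fx:          # accept the improvement and widen the search
            x, fx = y, fy
            sigma *= 1.1
        else:                 # reject and narrow the search
            sigma *= 0.9
    return x, fx

# Toy "shape cost": a quadratic bowl standing in for one drag evaluation.
cost = lambda z: float(np.sum((np.asarray(z) - 0.7)**2))
x_best, f_best = one_plus_one_es(cost, np.zeros(3))
```

    CMA-ES replaces the isotropic mutation above with a fully adapted covariance matrix, and its population of independent cost evaluations is what the multi-host framework parallelizes.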

  13. Ray-tracing method for creeping waves on arbitrarily shaped nonuniform rational B-splines surfaces.

    PubMed

    Chen, Xi; He, Si-Yuan; Yu, Ding-Feng; Yin, Hong-Cheng; Hu, Wei-Dong; Zhu, Guo-Qiang

    2013-04-01

    An accurate creeping ray-tracing algorithm is presented in this paper to determine the tracks of creeping waves (or creeping rays) on arbitrarily shaped free-form parametric surfaces [nonuniform rational B-splines (NURBS) surfaces]. The main challenge in calculating the surface diffracted fields on NURBS surfaces is due to the difficulty in determining the geodesic paths along which the creeping rays propagate. On one single parametric surface patch, the geodesic paths need to be computed by solving the geodesic equations numerically. Furthermore, realistic objects are generally modeled as the union of several connected NURBS patches. Due to the discontinuity of the parameter between the patches, it is more complicated to compute geodesic paths on several connected patches than on one single patch. Thus, a creeping ray-tracing algorithm is presented in this paper to compute the geodesic paths of creeping rays on the complex objects that are modeled as the combination of several NURBS surface patches. In the algorithm, the creeping ray tracing on each surface patch is performed by solving the geodesic equations with a Runge-Kutta method. When the creeping ray propagates from one patch to another, a transition method is developed to handle the transition of the creeping ray tracing across the border between the patches. This creeping ray-tracing algorithm can meet practical requirements because it can be applied to the objects with complex shapes. The algorithm can also extend the applicability of NURBS for electromagnetic and optical applications. The validity and usefulness of the algorithm can be verified from the numerical results. PMID:23595326
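
    The core numerical step, integrating the geodesic equations with a Runge-Kutta method, can be illustrated on the simplest parametric surface: the unit sphere, where geodesics are great circles with a known closed form. This sketch is not the NURBS algorithm of the paper (no patch transitions), just the ODE integration it relies on.

```python
import numpy as np

def geodesic_rhs(state):
    """Geodesic equations on the unit sphere parametrized by (theta, phi):
    theta'' = sin(theta)cos(theta) (phi')^2,  phi'' = -2 cot(theta) theta' phi'."""
    th, ph, dth, dph = state
    return np.array([dth, dph,
                     np.sin(th) * np.cos(th) * dph**2,
                     -2.0 / np.tan(th) * dth * dph])

def rk4(f, state, t_end, n_steps):
    """Classical 4th-order Runge-Kutta integrator."""
    h = t_end / n_steps
    for _ in range(n_steps):
        k1 = f(state)
        k2 = f(state + 0.5 * h * k1)
        k3 = f(state + 0.5 * h * k2)
        k4 = f(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return state

# Launch along the equator at unit speed: the geodesic is a great circle,
# so theta should remain pi/2 while phi advances by the arc length.
start = np.array([np.pi / 2, 0.0, 0.0, 1.0])
end = rk4(geodesic_rhs, start, t_end=1.0, n_steps=200)
```

    On a NURBS patch the right-hand side is built from the Christoffel symbols of the patch parametrization instead of the closed-form sphere metric.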

  14. Algebraic grid adaptation method using non-uniform rational B-spline surface modeling

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, B. K.

    1992-01-01

    An algebraic adaptive grid system based on the equidistribution law and utilizing Non-Uniform Rational B-Spline (NURBS) surface modeling for redistribution is presented. A weight function utilizing a properly weighted Boolean sum of various flow field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.

  15. Univariate splines

    NASA Astrophysics Data System (ADS)

    Kopotun, Kirill A.

    2007-06-01

    Several results on equivalence of moduli of smoothness of univariate splines are obtained. For example, it is shown that, for any $1\le k\le r+1$, $0\le m\le r-1$, and $1\le p\le\infty$, the equivalence $n^{-\nu}\,\omega_{k-\nu}(s^{(\nu)}, n^{-1})_p \sim \omega_{k}(s, n^{-1})_p$, $1\le \nu\le \min\{k, m+1\}$, is satisfied, where $s\in C^m[-1,1]$ is a piecewise polynomial of degree $\le r$ on a quasi-uniform (i.e., the ratio of lengths of the largest and the smallest intervals is bounded by a constant) partition of an interval. Similar results for Chebyshev partitions and weighted Ditzian-Totik moduli of smoothness are also obtained. These results yield simple new constructions and allow considerable simplification of various known proofs in the area of constrained approximation by polynomials and splines.

  16. A direct multi-step Legendre-Gauss collocation method for high-order Volterra integro-differential equation

    NASA Astrophysics Data System (ADS)

    Kajani, M. Tavassoli; Gholampoor, I.

    2015-10-01

    The purpose of this study is to present a new direct method for computing the approximate solution, and its approximate derivatives up to order k, of kth-order Volterra integro-differential equations with a regular kernel. The method is based on splitting the original problem into a sequence of subintervals, and a Legendre-Gauss-Lobatto collocation method is proposed for solving the Volterra integro-differential equation on each subinterval. Numerical examples show that the approximate solutions have a good degree of accuracy.
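
    The mechanics of Legendre-Gauss-Lobatto collocation can be sketched on a plain initial-value problem (u' = -u rather than a full integro-differential equation): build the LGL nodes and the Lagrange differentiation matrix, then enforce the equation at the collocation points. All function names are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes(N):
    """Legendre-Gauss-Lobatto nodes: the endpoints plus the roots of P_N'."""
    c = np.zeros(N + 1)
    c[N] = 1.0
    return np.concatenate(([-1.0], leg.legroots(leg.legder(c)), [1.0]))

def diff_matrix(x):
    """Lagrange differentiation matrix via barycentric weights."""
    n = len(x)
    w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(n)])
    D = w[None, :] / (w[:, None] * (x[:, None] - x[None, :] + np.eye(n)))
    D[np.eye(n, dtype=bool)] = 0.0
    D -= np.diag(D.sum(axis=1))          # rows of D must annihilate constants
    return D

# Collocate u' = -u, u(-1) = 1 on [-1, 1]; exact solution exp(-(x + 1)).
x = lgl_nodes(12)
D = diff_matrix(x)
A = D + np.eye(len(x))
A[0] = 0.0
A[0, 0] = 1.0                            # impose the initial condition
b = np.zeros(len(x))
b[0] = 1.0
u = np.linalg.solve(A, b)
err = float(np.max(np.abs(u - np.exp(-(x + 1)))))
```

    For the Volterra case, the integral term is additionally discretized with the Gauss weights on the same nodes, subinterval by subinterval.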

  17. A meshless scheme for partial differential equations based on multiquadric trigonometric B-spline quasi-interpolation

    NASA Astrophysics Data System (ADS)

    Gao, Wen-Wu; Wang, Zhi-Gang

    2014-11-01

    Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. This scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of the previous schemes based on quasi-interpolation (requiring some additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries in the spatial domain). Moreover, the scheme also overcomes the difficulty of the meshless collocation methods (i.e., yielding a notoriously ill-conditioned linear system of equations for large numbers of collocation points). The numerical examples that are presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions.

  18. Splines for diffeomorphisms.

    PubMed

    Singh, Nikhil; Vialard, François-Xavier; Niethammer, Marc

    2015-10-01

    This paper develops a method for higher order parametric regression on diffeomorphisms for image regression. We present a principled way to define curves with nonzero acceleration and nonzero jerk. This work extends methods based on geodesics which have been developed during the last decade for computational anatomy in the large deformation diffeomorphic image analysis framework. In contrast to previously proposed methods to capture image changes over time, such as geodesic regression, the proposed method can capture more complex spatio-temporal deformations. We take a variational approach that is governed by an underlying energy formulation, which respects the nonflat geometry of diffeomorphisms. Such an approach of minimal energy curve estimation also provides a physical analogy to particle motion under a varying force field. This gives rise to the notion of the quadratic, the cubic and the piecewise cubic splines on the manifold of diffeomorphisms. The variational formulation of splines also allows for the use of temporal control points to control spline behavior. This necessitates the development of a shooting formulation for splines. The initial conditions of our proposed shooting polynomial paths in diffeomorphisms are analogous to the Euclidean polynomial coefficients. We experimentally demonstrate the effectiveness of using the parametric curves both for synthesizing polynomial paths and for regression of imaging data. The performance of the method is compared to geodesic regression. PMID:25980676

  19. Galerkin method for unsplit 3-D Dirac equation using atomically/kinetically balanced B-spline basis

    NASA Astrophysics Data System (ADS)

    Fillion-Gourdeau, F.; Lorin, E.; Bandrauk, A. D.

    2016-02-01

    A Galerkin method is developed to solve the time-dependent Dirac equation in prolate spheroidal coordinates for an electron-molecular two-center system. The initial state is evaluated from a variational principle using a kinetic/atomic balanced basis, which allows for an efficient and accurate determination of the Dirac spectrum and eigenfunctions. B-spline basis functions are used to obtain high accuracy. This numerical method is used to compute the energy spectrum of the two-center problem and then the evolution of eigenstate wavefunctions in an external electromagnetic field.

  20. Adaptive subdivision method with crack prevention for rendering Beta-spline objects. Technical report, 7 August 1984-6 August 1987

    SciTech Connect

    Barsky, B.A.; DeRose, T.D.; Dippe, M.D.

    1987-08-06

    Adaptive subdivision is a method of creating polygonal approximations to spline surfaces. An adaptive subdivision algorithm takes as input a spline surface and a tolerance epsilon, and outputs a piecewise planar approximation to the surface that is guaranteed to differ from the actual surface by a distance no greater than epsilon. These algorithms proceed by recursively splitting the surface into smaller subsurfaces, ultimately approximating subsurfaces with planar polyhedra. These algorithms are therefore characterized by the mathematics behind the splitting of a surface, the test that is used to determine when to stop the recursion, and the method by which a subsurface is approximated by polyhedra. Algorithms of this type are currently known for spline techniques such as Bezier and B-splines. This paper describes the Beta-spline curve and surface technique and derives the equations governing the splitting of Beta-spline curves and surfaces. It presents a very general adaptive subdivision algorithm that can be used with a variety of surface techniques. It incorporates splitting criteria based on flatness and prevents cracks from occurring between approximating polyhedra. The tolerance controlling the splitting process may itself be adaptive, so that as an object moves farther away, the tolerance is automatically increased.
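
    The split/flatness-test/approximate loop described above can be sketched for a Bezier curve (the surface case is analogous): the de Casteljau construction does the splitting, and the convex-hull property turns a control-point flatness bound into the guaranteed tolerance. This is a generic illustration, not the Beta-spline algorithm of the report.

```python
import numpy as np

def flatness(ctrl):
    """Max distance from the interior control points to the chord; by the
    convex-hull property this bounds the curve's distance to the chord."""
    a, b = ctrl[0], ctrl[-1]
    ab = b - a
    L2 = float(ab @ ab)
    if L2 == 0.0:
        return max(float(np.linalg.norm(p - a)) for p in ctrl)
    return max(float(np.linalg.norm((p - a) - ((p - a) @ ab / L2) * ab))
               for p in ctrl[1:-1])

def split(ctrl):
    """de Casteljau subdivision of a Bezier curve at t = 1/2."""
    left, right, pts = [ctrl[0]], [ctrl[-1]], ctrl
    while len(pts) > 1:
        pts = 0.5 * (pts[:-1] + pts[1:])
        left.append(pts[0])
        right.append(pts[-1])
    return np.array(left), np.array(right[::-1])

def adaptive_polyline(ctrl, eps):
    """Recursively subdivide until each piece is flat to within eps."""
    if flatness(ctrl) <= eps:
        return [ctrl[0], ctrl[-1]]
    l, r = split(ctrl)
    return adaptive_polyline(l, eps)[:-1] + adaptive_polyline(r, eps)

# Tighter tolerances yield more polyline segments, as expected.
ctrl = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 2.0], [2.0, 2.0]])
coarse = adaptive_polyline(ctrl, 0.1)
fine = adaptive_polyline(ctrl, 0.001)
```

    Crack prevention in the surface case amounts to making adjacent patches agree on their shared boundary polylines.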

  1. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images

    PubMed Central

    Quirós, Elia; Felicísimo, Ángel M.; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test. PMID:22291550
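
    The AUC comparison used above can be computed directly from classifier scores with the rank-sum (Mann-Whitney) statistic. A minimal sketch, assuming untied scores (ties are not handled) and binary labels:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum statistic:
    AUC = (sum of positive-class ranks - n_pos(n_pos+1)/2) / (n_pos * n_neg)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    ranks = np.argsort(np.argsort(scores)) + 1.0   # 1-based ranks
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Higher score should indicate the positive class.
example = auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```

    AUC is threshold-free, which is why the paper prefers it for comparing MARS against the parallelepiped and ML classifiers.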

  2. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Ángel M.; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test. PMID:22291550

  3. Interchangeable spline reference guide

    SciTech Connect

    Dolin, R.M.

    1994-05-01

    The WX-Division Integrated Software Tools (WIST) Team evolved from two previous committees. The first was the W78 Solid Modeling Pilot Project's Spline Subcommittee, which later evolved into the WX-Division Spline Committee. The mission of the WIST team is to investigate current CAE engineering processes relating to complex geometry and to develop methods for improving those processes. Specifically, the WIST team is developing technology that allows the Division to use multiple spline representations. We are also updating the contour system (CONSYS) data base to take full advantage of the Division's expanding electronic engineering process. Both of these efforts involve developing interfaces to commercial CAE systems and writing new software. The WIST team comprises members from WX-11, -12, and -13. This "cross-functional" approach to software development is somewhat new in the Division, so an effort is being made to formalize our processes and assure quality at each phase of development. Chapter one represents a theory manual and is one phase of the formal process. The theory manual is followed by a software requirements document, a specification document, and software verification and validation documents. The purpose of this guide is to present the theory underlying the interchangeable spline technology and its application. Verification and validation test results are also presented for proof of principle.

  4. Split spline screw

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (inventor)

    1993-01-01

    A split spline screw type payload fastener assembly, including three identical male and female type split spline sections, is discussed. The male spline sections are formed on the head of a male type spline driver. Each of the split male type spline sections has an outwardly projecting load bearing segment including a convex upper surface which is adapted to engage a complementary concave surface of a female spline receptor in the form of a hollow bolt head. Additionally, the male spline section also includes a horizontal spline releasing segment and a spline tightening segment below each load bearing segment. The spline tightening segment consists of a vertical web of constant thickness. The web has at least one flat vertical wall surface which is designed to contact a generally flat vertically extending wall surface tab of the bolt head. Mutual interlocking and unlocking of the male and female splines results upon clockwise and counterclockwise turning of the driver element.

  5. Three-Dimensional Simulations Using an Adaptive Wavelet Collocation Method--Part I: Theory and Application to Fluid-Bluff Body Interaction

    NASA Astrophysics Data System (ADS)

    Kevlahan, Nicholas K.-R.; Goldstein, Daniel E.; Vasilyev, Oleg V.

    2003-11-01

    This is the first of two talks which describe a new method for direct numerical simulation of high Reynolds number 3D flows with complex geometry. Adaptive second generation wavelet collocation tackles the problem of efficiently resolving a large Reynolds number flow in complicated geometries (where grid resolution should depend on both time and location), while Brinkman penalization efficiently implements moving solid boundaries of arbitrary complexity. Since the method is based on the primitive variables formulation of the Navier--Stokes equations, a Poisson equation for the pressure is solved at each time step using a wavelet based multilevel solver developed as a part of this work. In this talk the 3D hybrid wavelet collocation -- Brinkman penalization method for solving the Navier--Stokes equations is described. Details on the wavelet-based Poisson solver will also be given. The flexibility of the adaptive wavelet collocation method is illustrated by applying it to three-dimensional flow past a moving sphere.

  6. Application of collocation spectral domain decomposition method to solve radiative heat transfer in 2D partitioned domains

    NASA Astrophysics Data System (ADS)

    Chen, Shang-Shang; Li, Ben-Wen

    2014-12-01

    A collocation spectral domain decomposition method (CSDDM) based on the influence matrix technique is developed to solve radiative transfer problems within a participating medium of 2D partitioned domains. In this numerical approach, the spatial domains of interest are decomposed into rectangular sub-domains. The radiative transfer equation (RTE) in each sub-domain is angularly discretized by the discrete ordinates method (DOM) with the SRAPN quadrature scheme and then is solved by the CSDDM directly. Three test geometries, a square enclosure and two enclosures with one baffle and one centered obstruction respectively, are used to validate the accuracy of the developed method, and their numerical results are compared to the data obtained by other researchers. These comparisons indicate that the CSDDM has a good accuracy for all solutions. Therefore this method can be considered as a useful approach for the solution of radiative heat transfer problems in 2D partitioned domains.

  7. Number systems, α-splines and refinement

    NASA Astrophysics Data System (ADS)

    Zube, Severinas

    2004-12-01

    This paper is concerned with smooth refinable functions on the plane relative to a complex scaling factor. Characteristic functions of certain self-affine tiles related to a given scaling factor are the simplest examples of such refinable functions. We study the smooth refinable functions obtained as convolution powers of such characteristic functions. Dahlke, Dahmen, and Latour obtained explicit estimates for the smoothness of the resulting convolution products. In the case α = 1+i, we prove better results. We introduce α-splines in two variables, which are linear combinations of shifted basic functions. We derive basic properties of α-splines and proceed with a detailed presentation of refinement methods. We illustrate the application of α-splines to subdivision with several examples. It turns out that α-splines produce well-known subdivision algorithms which are based on box splines: Doo-Sabin, Catmull-Clark, Loop, Midedge and some √2-subdivision schemes with good continuity. The main geometric ingredient in the definition of α-splines is the fundamental domain (a fractal set or a self-affine tile). The properties of the fractal obtained in number theory are important and necessary in order to determine two basic properties of α-splines: partition of unity and the refinement equation.

  8. A new version of the map of recent vertical crustal movements in the Carpatho-Balkan region based on the collocation method

    NASA Astrophysics Data System (ADS)

    Nikonov, A. A.; Skryl, V. A.; Lisovets, A. G.

    1987-12-01

    The paper is concerned with the problem of adequately mapping the resulting discrete values of movement velocity as a field. The authors used a statistical method (collocation) to represent measurement results for the Carpatho-Balkan region as a field of recent crustal movement velocity. The collocation method makes it possible to avoid the strong influence that the available geomorphological and geological knowledge of the area, and the individual notions of the map-makers, have on the drawing of isolines of movement velocity. The new version of the map of recent vertical movements in the Carpatho-Balkan region shows some notable differences from the 1979 version.

  9. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations into a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
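
    The collapse to uncoupled deterministic evaluations is easiest to see in one stochastic dimension: each quadrature node is one deterministic "solve", and statistics are weighted sums over the nodes. A sketch with a toy functional of known mean, assuming a standard normal input (the function name is illustrative):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def collocation_mean(f, n_pts):
    """E[f(Y)] for Y ~ N(0, 1) by Gauss-Hermite (probabilists') collocation.
    hermegauss integrates against exp(-x^2/2), so normalize by sqrt(2*pi)."""
    x, w = hermegauss(n_pts)
    return float((w * f(x)).sum() / np.sqrt(2 * np.pi))

# Toy "solution functional" with a known mean: E[exp(Y)] = exp(1/2).
approx = collocation_mean(np.exp, 8)
exact = float(np.exp(0.5))
```

    In several stochastic dimensions the nodes become a tensor or sparse grid, but each node remains an independent deterministic simulation.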

  10. Monotone and convex quadratic spline interpolation

    NASA Technical Reports Server (NTRS)

    Lam, Maria H.

    1990-01-01

    A method for producing interpolants that preserve the monotonicity and convexity of discrete data is described. It utilizes the quadratic spline proposed by Schumaker (1983) which was subsequently characterized by De Vore and Yan (1986). The selection of first order derivatives at the given data points is essential to this spline. An observation made by De Vore and Yan is generalized, and an improved method to select these derivatives is proposed. The resulting spline is completely local, efficient, and simple to implement.

  11. A Comparison of Some Model Order Reduction Methods for Fast Simulation of Soft Tissue Response using the Point Collocation-based Method of Finite Spheres (PCMFS)

    PubMed Central

    BaniHani, Suleiman; De, Suvranu

    2009-01-01

    In this paper we develop the Point Collocation-based Method of Finite Spheres (PCMFS) to simulate the viscoelastic response of soft biological tissues and evaluate the effectiveness of model order reduction methods such as modal truncation, Hankel optimal model and truncated balanced realization techniques for PCMFS. The PCMFS was developed in [1] as a physics-based technique for real time simulation of surgical procedures. It is a meshfree numerical method in which discretization is performed using a set of nodal points with approximation functions compactly supported on spherical subdomains centered at the nodes. The point collocation method is used as the weighted residual technique where the governing differential equations are directly applied at the nodal points. Since computational speed has a significant role in simulation of surgical procedures, model order reduction methods have been compared for relative gains in efficiency and computational accuracy. Of these methods, truncated balanced realization results in the highest accuracy while modal truncation results in the highest efficiency. PMID:20300494

  12. A pseudo-spectral collocation method applied to the problem of convective diffusive transport in fluids subject to unsteady residual accelerations

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan; Ouazzani, Jalil

    1989-01-01

    The problem of determining the sensitivity of Bridgman-Stockbarger directional solidification experiments to residual accelerations of the type associated with spacecraft in low earth orbit is analyzed numerically using a pseudo-spectral collocation method. The approach employs a novel iterative scheme combining the method of artificial compressibility and a generalized ADI method. The results emphasize the importance of the consideration of residual accelerations and careful selection of the operating conditions in order to take full advantage of the low gravity conditions.

  13. Using spline surfaces in optical design software

    NASA Astrophysics Data System (ADS)

    Gregory, G. Groot; Freniere, Edward R.; Gardner, Leo R.

    2002-09-01

    Splines are commonly used to describe smooth freeform surfaces in Computer Aided Design (CAD) and computer graphic rendering programs. Various spline surface implementations are also available in optical design programs, including lens design software. These surface forms may be used to describe general aspheric surfaces, thermally perturbed surfaces, and surfaces interpolated from data sets. Splines are often used to fit a surface to a set of data points either on the surface or acting as control points. Spline functions are piecewise cubic polynomials defined over several discrete intervals. Continuity conditions are assigned at the intersections where the function crosses intervals, defining a smooth transition. Bi-cubic splines provide C2 continuity, meaning that the first and second derivatives are equal at the crossover point. C2 continuity is a useful outcome of this interpolation for optical surface representation. This analysis will provide a review of the various types of spline interpolation methods used and consider additional forms that may be useful. A summary of the data inputs necessary for two- and three-dimensional splines will be included. An assessment will be made of the fitting accuracy of the various types of splines to optical surfaces, and a survey of applications of spline surfaces in optical systems analysis will be presented.

  14. A fractional factorial probabilistic collocation method for uncertainty propagation of hydrologic model parameters in a reduced dimensional space

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.

    2015-10-01

    In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
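
    A polynomial chaos expansion of the kind truncated above can be sketched in one dimension: fit Hermite coefficients by least squares on random samples, then read the mean and variance off the coefficients via orthogonality. The toy response and all names are illustrative; the paper's factorial-ANOVA screening decides which multivariate terms to keep.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_pce(xi, y, degree):
    """Least-squares PCE coefficients in the probabilists' Hermite basis
    He_k, which is orthogonal under the standard normal weight."""
    V = hermevander(xi, degree)          # V[i, k] = He_k(xi_i)
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    return coef

rng = np.random.default_rng(2)
xi = rng.standard_normal(500)
y = (1.0 + xi)**2                        # toy model response with known statistics
c = fit_pce(xi, y, degree=3)

# Orthogonality gives the statistics directly from the coefficients:
# mean = c_0 and variance = sum_{k>=1} c_k^2 * k!.
mean = float(c[0])
var = float(sum(c[k]**2 * math.factorial(k) for k in range(1, len(c))))
```

    For (1 + xi)^2 the exact values are mean 2 and variance 6, which the fitted surrogate reproduces; dropping statistically insignificant coefficients is the "reduced PCE" idea.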

  15. Deriving Box-Spline Subdivision Schemes

    NASA Astrophysics Data System (ADS)

    Dodgson, N. A.; Augsdörfer, U. H.; Cashman, T. J.; Sabin, M. A.

    We describe and demonstrate an arrow notation for deriving box-spline subdivision schemes. We compare it with the z-transform, matrix, and mask convolution methods of deriving the same. We show how the arrow method provides a useful graphical alternative to the three numerical methods. We demonstrate the properties that can be derived easily using the arrow method: mask, stencils, continuity in regular regions, safe extrusion directions. We derive all of the symmetric quadrilateral binary box-spline subdivision schemes with up to eight arrows and all of the symmetric triangular binary box-spline subdivision schemes with up to six arrows. We explain how the arrow notation can be extended to handle ternary schemes. We introduce two new binary dual quadrilateral box-spline schemes and one new √2 box-spline scheme. With appropriate extensions to handle extraordinary cases, these could each form the basis for a new subdivision scheme.

  16. A nonclassical Radau collocation method for solving the Lane-Emden equations of the polytropic index 4.75 ≤ α < 5

    NASA Astrophysics Data System (ADS)

    Tirani, M. D.; Maleki, M.; Kajani, M. T.

    2014-11-01

    A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and are utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.

  17. Simulation of non-linear free surface motions in a cylindrical domain using a Chebyshev-Fourier spectral collocation method

    NASA Astrophysics Data System (ADS)

    Chern, M. J.; Borthwick, A. G. L.; Eatock Taylor, R.

    2001-06-01

    When a liquid is perturbed, its free surface may experience highly non-linear motions in response. This paper presents a numerical model of the three-dimensional hydrodynamics of an inviscid liquid with a free surface. The mathematical model is based on potential theory in cylindrical co-ordinates with a σ-transformation applied between the bed and free surface in the vertical direction. Chebyshev spectral elements discretize space in the vertical and radial directions; Fourier spectral elements are used in the angular direction. Higher derivatives are approximated using a collocation (or pseudo-spectral) matrix method. The numerical scheme is validated for non-linear transient sloshing waves in a cylindrical tank containing a circular surface-piercing cylinder at its centre. Excellent agreement is obtained with Ma and Wu's [Second order transient waves around a vertical cylinder in a tank. Journal of Hydrodynamics 1995; Ser. B4: 72-81] second-order potential theory. Further evidence for the capability of the scheme to predict complicated three-dimensional, and highly non-linear, free surface motions is given by the evolution of an impulse wave in a cylindrical tank and in an open domain.

  18. Systematic assessment of the uncertainty in integrated surface water-groundwater modeling based on the probabilistic collocation method

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Zheng, Yi; Tian, Yong; Wu, Xin; Yao, Yingying; Han, Feng; Liu, Jie; Zheng, Chunmiao

    2014-07-01

    Systematic uncertainty analysis (UA) has rarely been conducted for integrated modeling of surface water-groundwater (SW-GW) systems, which is subject to significant uncertainty, especially at a large basin scale. The main objective of this study was to explore an innovative framework in which a systematic UA can be effectively and efficiently performed for integrated SW-GW models of large river basins and to illuminate how process understanding, model calibration, data collection, and management can benefit from such a systematic UA. The framework is based on the computationally efficient Probabilistic Collocation Method (PCM) linked with a complex simulation model. The applicability and advantages of the framework were evaluated and validated through an integrated SW-GW model for the Zhangye Basin in the middle Heihe River Basin, northwest China. The framework for systematic UA allows for a holistic assessment of the modeling uncertainty, yielding valuable insights into the hydrological processes, model structure, data deficit, and potential effectiveness of management. The study shows that, under the complex SW-GW interactions, the modeling uncertainty has great spatial and temporal variabilities and is highly output-dependent. Overall, this study confirms that a systematic UA should play a critical role in integrated SW-GW modeling of large river basins, and the PCM-based approach is a promising option to fulfill this role.
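
    The probabilistic collocation idea underlying this framework can be sketched in one dimension: run the model only at quadrature nodes of the input distribution and recover output moments from weighted sums. The toy model below is hypothetical; it stands in for the expensive SW-GW simulator:

```python
import numpy as np

# Hypothetical model response with one uncertain N(0,1) parameter; stands in
# for an expensive simulation run.
model = lambda x: np.exp(0.3 * x)

# Probabilistic collocation: evaluate the model only at the Gauss-Hermite
# quadrature nodes of the standard normal (probabilists' convention).
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
weights = weights / np.sqrt(2.0 * np.pi)     # normalize to a probability measure
vals = model(nodes)
mean_pcm = np.sum(weights * vals)
var_pcm = np.sum(weights * vals ** 2) - mean_pcm ** 2

# Analytic reference: E[exp(0.3 X)] = exp(0.3**2 / 2) for X ~ N(0,1).
print(mean_pcm, np.exp(0.045))   # agree closely with only 5 model runs
```

    A brute-force Monte Carlo estimate of the same mean would need thousands of model runs for comparable accuracy, which is the efficiency argument the abstract makes.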

  19. Spline approximation of quantile functions

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1983-01-01

    The study reported here explored the development and utility of a spline representation of the sample quantile function of a continuous probability distribution in providing a functional description of a random sample and a method of generating random variables. With a spline representation, the random samples are generated by transforming a sample of uniform random variables to the interval of interest. This is useful, for example, in simulation studies in which a random sample represents the only known information about the distribution. The spline formulation considered here consists of a linear combination of cubic basis splines (B-splines) fit in a least squares sense to the sample quantile function using equally spaced knots. The following discussion is presented in five parts. The first section highlights major results realized from the study. The second section further details the results obtained. The methodology used is described in the third section, followed by a brief discussion of previous research on quantile functions. Finally, the results of the study are evaluated.
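
    A minimal sketch of generating random variables through a fitted quantile function, with linear interpolation of the sample quantile function standing in for the least-squares cubic B-spline fit described above:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=5000)   # the only "known" information

# Sample quantile function: sorted order statistics vs. plotting positions.
q = np.sort(sample)
p = (np.arange(1, q.size + 1) - 0.5) / q.size

# Piecewise-linear stand-in for the cubic B-spline fit of (p, q).
quantile_fn = lambda u: np.interp(u, p, q)

# Inverse-transform sampling: push uniforms through the quantile function.
new_sample = quantile_fn(rng.uniform(size=20000))
print(sample.mean(), new_sample.mean())   # both near the true mean 2.0
```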

  20. A Full-Relativistic B-Spline R-Matrix Method for Electron and Photon Collisions with Atoms and Ions

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Bartschat, Klaus

    2008-05-01

    We have extended our B-spline R-matrix (close-coupling) method [1] to fully account for relativistic effects in a Dirac-Coulomb formulation. Our numerical implementation of the close-coupling method enables us to construct term-dependent, non-orthogonal sets of one-electron orbitals for the bound and continuum electrons. This is a critical aspect for complex targets, where individually optimized one-electron orbitals can significantly reduce the size of the multi-configuration expansions needed for an accurate target description. Furthermore, core-valence correlation effects are treated fully ab initio, rather than through semi-empirical, and usually local, model potentials. The method will be described in detail and illustrated by comparing our theoretical predictions for e-Cs collisions with benchmark experiments for angle-integrated and angle-differential cross sections [2], various spin-dependent scattering asymmetries [3], and Stokes parameters measured in superelastic collisions with laser-excited atoms [4]. [1] O. Zatsarinny, Comp. Phys. Commun. 174, 273 (2006). [2] W. Gehenn and E. Reichert, J. Phys. B 10, 3105 (1977). [3] G. Baum et al., Phys. Rev. A 66, 022705 (2002) and 70, 012707 (2004). [4] D.S. Slaughter et al., Phys. Rev. A 75, 062717 (2007).

  1. A Fully Relativistic B-Spline R-Matrix Method for Electron and Photon Collisions with Atoms and Ions

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Bartschat, Klaus

    2008-10-01

    We have extended our B-spline R-matrix (close-coupling) method [1] to fully account for relativistic effects in a Dirac-Coulomb formulation. Our numerical implementation of the close-coupling method enables us to construct term-dependent, non-orthogonal sets of one-electron orbitals for the bound and continuum electrons. This is a critical aspect for complex targets, where individually optimized one-electron orbitals can significantly reduce the size of the multi-configuration expansions needed for an accurate target description. Core-valence correlation effects are treated fully ab initio, rather than through semi-empirical model potentials. The method is described in detail and will be illustrated by comparing our theoretical predictions for e-Cs collisions [2] with benchmark experiments for angle-integrated and angle-differential cross sections [3], various spin-dependent scattering asymmetries [4], and Stokes parameters measured in superelastic collisions with laser-excited atoms [5]. [1] O. Zatsarinny, Comp. Phys. Commun. 174, 273 (2006). [2] O. Zatsarinny and K. Bartschat, Phys. Rev. A 77, 062701 (2008). [3] W. Gehenn and E. Reichert, J. Phys. B 10, 3105 (1977). [4] G. Baum et al., Phys. Rev. A 66, 022705 (2002) and 70, 012707 (2004). [5] D.S. Slaughter et al., Phys. Rev. A 75, 062717 (2007).

  2. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. PMID:22376200
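
    The trapezoidal estimating-equation step can be sketched on a noise-free scalar example x'(t) = -a x(t) (hypothetical; the penalized-spline smoothing of noisy data is omitted here):

```python
import numpy as np

# Noise-free trajectory of x'(t) = -a * x(t); in practice x would first be
# estimated from noisy data by penalized-spline smoothing (omitted here).
a_true, h = 1.5, 0.01
t = np.arange(0.0, 2.0 + h, h)
x = np.exp(-a_true * t)

# Trapezoidal discretization: x_{n+1} - x_n = (h/2) * (-a x_n - a x_{n+1}).
# This is linear in a, so a single least squares regression recovers it.
dx = np.diff(x)
z = -(h / 2.0) * (x[:-1] + x[1:])
a_hat = np.dot(z, dx) / np.dot(z, z)
print(a_hat)   # ~1.5, up to the O(h^2) discretization error
```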

  3. Error estimation in high dimensional space for stochastic collocation methods on arbitrary sparse samples

    NASA Astrophysics Data System (ADS)

    Archibald, Rick

    2013-10-01

    We have developed a fast method that gives high-order error estimates of piecewise smooth functions in high dimensions at low computational cost. The method uses polynomial annihilation to estimate the smoothness of local regions from arbitrary samples in stochastic simulations. We compare the error estimation of this method with Gaussian process error estimation techniques.

  4. Application of Collocation Spectral Method for Irregular Convective-Radiative Fins with Temperature-Dependent Internal Heat Generation and Thermal Properties

    NASA Astrophysics Data System (ADS)

    Sun, Ya-Song; Ma, Jing; Li, Ben-Wen

    2015-10-01

    A collocation spectral method (CSM) is developed to solve the fin heat transfer in triangular, trapezoidal, exponential, concave parabolic, and convex geometries. In the thermal process of fin heat transfer, the fin dissipates heat to the environment by convection and radiation; internal heat generation, thermal conductivity, heat transfer coefficient, and surface emissivity are functions of temperature; ambient fluid temperature and radiative sink temperature are considered to be nonzero. The temperature in the fin is approximated by Chebyshev polynomials at spectral collocation points. Thus, the differential form of the energy equation is transformed into the matrix form of an algebraic equation. In order to test the efficiency and accuracy of the developed method, five types of convective-radiative fins are examined. Results obtained by the CSM are assessed by comparison with available results in the references. These comparisons indicate that the CSM can be recommended as a good option to simulate and predict the thermal performance of convective-radiative fins.
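
    The core machinery of such a collocation spectral method, a Chebyshev differentiation matrix, can be sketched on a simpler linear two-point problem (not the nonlinear fin equation itself):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto points (Trefethen's recipe)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # negative-sum trick for the diagonal
    return D, x

# Solve u'' = -pi^2 sin(pi x), u(+-1) = 0, whose exact solution is u = sin(pi x).
# (A linear stand-in for the nonlinear fin energy equation treated above.)
N = 20
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]                 # drop boundary rows/cols (Dirichlet BCs)
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2, -np.pi ** 2 * np.sin(np.pi * x[1:N]))
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)   # spectral accuracy: far below 1e-6 already at N = 20
```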

  6. Spline algorithms for continuum functions

    SciTech Connect

    Froese Fischer, C.; Idrees, M.

    1989-05-01

    Spline algorithms are described for solving the radial equation for continuum states. The Galerkin method leads to a generalized eigenvalue problem for which the eigenvalue is known, so that inverse iteration can be used to determine the eigenvector. Three cases are considered: the equation whose solution is sin κr, the Coulomb problem, and the hydrogen scattering problem. Plots are presented for the first two cases that show the dependence of the error in the phase shift on spline parameters and execution time. The results for the scattering problem are compared with earlier values.

  7. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Lu, Dan; Ye, Ming; Gunzburger, Max; Webster, Clayton

    2013-10-01

    Bayesian analysis has become vital to uncertainty quantification in groundwater modeling, but its application has been hindered by the computational cost associated with the numerous model executions required to explore the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, a new approach is developed to improve the computational efficiency of Bayesian inference by constructing a surrogate of the PPDF, using an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, this paper utilizes a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of required model executions. In addition, using the hierarchical surplus as an error indicator allows locally adaptive refinement of sparse grids in the parameter space, which further improves computational efficiency. To efficiently build the surrogate system for the PPDF with multiple significant modes, optimization techniques are used to identify the modes, for which high-probability regions are defined and components of the aSG-hSC approximation are constructed. After the surrogate is determined, the PPDF can be evaluated by sampling the surrogate system directly without model execution, resulting in improved efficiency of the surrogate-based MCMC compared with conventional MCMC. The developed method is evaluated using two synthetic groundwater reactive transport models. The first example involves coupled linear reactions and demonstrates the accuracy of our high-order hierarchical basis approach in approximating high-dimensional posterior distributions.
The second example is highly nonlinear because of the reactions of uranium surface complexation, and demonstrates how the iterative aSG-hSC method is able to capture multimodal and non-Gaussian features of PPDF caused by model nonlinearity. Both experiments show that aSG-hSC is an effective and efficient tool for Bayesian inference.

  8. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered by the computational cost associated with the numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use the hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional MCMC simulations. The computational efficiency is expected to be even more beneficial for more computationally expensive groundwater problems.
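
    The role of the hierarchical surplus as a local error indicator (used in both entries above) can be sketched in one dimension with the piecewise-linear hierarchical basis: surpluses vanish wherever the function is locally linear, so refinement concentrates near non-smooth features. The kink function below is a made-up example:

```python
import numpy as np

f = lambda x: np.abs(x - 0.3)          # made-up function with a kink at x = 0.3

def surpluses(level):
    """Hierarchical surpluses of the piecewise-linear basis on dyadic level l."""
    h = 2.0 ** -level
    x = np.arange(1, 2 ** level, 2) * h        # the nodes new to this level
    w = f(x) - 0.5 * (f(x - h) + f(x + h))     # value minus coarser interpolant
    return x, w

x, w = surpluses(6)
active = x[np.abs(w) > 1e-14]          # nodes an adaptive scheme would refine
print(active)   # only the node whose support [x-h, x+h] straddles the kink
```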

  9. A dynamically adaptive multilevel wavelet collocation method for solving partial differential equations in a finite domain

    SciTech Connect

    Vasilyev, O.V.; Paolucci, S.

    1996-05-01

    A dynamically adaptive multilevel structure of the algorithm provides a simple way to adapt computational refinements to local demands of the solution. High resolution computations are performed only in regions where sharp transitions occur. The scheme handles general boundary conditions. The method is applied to the solution of the one-dimensional Burgers equation with small viscosity, a moving shock problem, and a nonlinear thermoacoustic wave problem. The results indicate that the method is very accurate and efficient. 16 refs., 9 figs., 2 tab.

  10. Conforming Chebyshev spectral collocation methods for the solution of laminar flow in a constricted channel

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1990-01-01

    The numerical simulation of steady planar two-dimensional, laminar flow of an incompressible fluid through an abruptly contracting channel using spectral domain decomposition methods is described. The key features of the method are the decomposition of the flow region into a number of rectangular subregions and spectral approximations which are pointwise C(1) continuous across subregion interfaces. Spectral approximations to the solution are obtained for Reynolds numbers in the range 0 to 500. The size of the salient corner vortex decreases as the Reynolds number increases from 0 to around 45. As the Reynolds number is increased further the vortex grows slowly. A vortex is detected downstream of the contraction at a Reynolds number of around 175 that continues to grow as the Reynolds number is increased further.

  11. Incidental Learning of Collocation

    ERIC Educational Resources Information Center

    Webb, Stuart; Newton, Jonathan; Chang, Anna

    2013-01-01

    This study investigated the effects of repetition on the learning of collocation. Taiwanese university students learning English as a foreign language simultaneously read and listened to one of four versions of a modified graded reader that included different numbers of encounters (1, 5, 10, and 15 encounters) with a set of 18 target collocations.

  12. High Order Continuous Approximation for the Top Order Methods

    NASA Astrophysics Data System (ADS)

    Mazzia, Francesca; Sestini, Alessandra; Trigiante, Donato

    2007-09-01

    The Top Order Methods are a class of linear multistep schemes to be used as Boundary Value Methods and with the feature of having maximal order (2k if k is the number of steps). This often implies that accurate numerical approximations of general BVPs can be produced just using the 3-step TOM. In this work, we consider two different possibilities for defining a continuous approximation of the numerical solution, the standard C1 cubic spline collocating the differential equation at the knots and a C2k-1 spline of degree 2k. The computation of the B-spline coefficients of this higher degree spline requires the solution of N+2k banded linear systems of size 4k×4k. The resulting B-spline function is convergent of order 2k to the exact solution of the continuous BVPs.
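
    A sketch of the standard C1 cubic-spline continuous extension, with the knot slopes taken from the differential equation so that the spline collocates it at the knots (a Runge-Kutta sweep stands in for a Top Order Method here):

```python
import numpy as np

# y' = f(t, y) = -y, exact solution exp(-t).
f = lambda t, y: -y

# Knot values from a fourth-order Runge-Kutta sweep (a stand-in for a Top
# Order Method; only the knot data matter for the continuous extension).
h, T = 0.1, 2.0
t = np.arange(0.0, T + h, h)
y = np.empty_like(t); y[0] = 1.0
for n in range(t.size - 1):
    k1 = f(t[n], y[n]);                k2 = f(t[n] + h/2, y[n] + h/2*k1)
    k3 = f(t[n] + h/2, y[n] + h/2*k2); k4 = f(t[n] + h, y[n] + h*k3)
    y[n+1] = y[n] + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def hermite(ts):
    """C1 cubic Hermite spline whose slopes f(t_n, y_n) collocate the ODE at the knots."""
    n = np.clip(np.searchsorted(t, ts, side="right") - 1, 0, t.size - 2)
    s = (ts - t[n]) / h
    h00 = (1 + 2*s) * (1 - s)**2; h10 = s * (1 - s)**2
    h01 = s**2 * (3 - 2*s);       h11 = s**2 * (s - 1)
    return h00*y[n] + h*h10*f(t[n], y[n]) + h01*y[n+1] + h*h11*f(t[n+1], y[n+1])

ts = np.linspace(0.0, T, 501)
err = np.max(np.abs(hermite(ts) - np.exp(-ts)))
print(err)   # O(h^4) between the knots, far better than linear interpolation
```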

  13. Computer program for fitting low-order polynomial splines by method of least squares

    NASA Technical Reports Server (NTRS)

    Smith, P. J.

    1972-01-01

    FITLOS is a computer program which implements a new curve-fitting technique. The main program reads input data, calls the appropriate subroutines for curve fitting, calculates a statistical analysis, and writes output data. The method was devised as a result of the need to suppress noise in the calibration of multiplier phototube capacitors.

  14. Shape identification technique for a two-dimensional elliptic system by boundary integral equation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1989-01-01

    The geometrical structure of the boundary shape for a two-dimensional boundary value problem is identified. The output least square identification method is considered for estimating partially unknown boundary shapes. A numerical parameter estimation technique using the spline collocation method is proposed.

  15. Detection of defects on apple using B-spline lighting correction method

    NASA Astrophysics Data System (ADS)

    Li, Jiangbo; Huang, Wenqian; Guo, Zhiming

    To effectively extract defective areas in fruits, the uneven intensity distribution that was produced by the lighting system or by part of the vision system in the image must be corrected. A methodology was used to convert non-uniform intensity distribution on spherical objects into a uniform intensity distribution. A basically plane image with the defective area having a lower gray level than this plane was obtained by using proposed algorithms. Then, the defective areas can be easily extracted by a global threshold value. The experimental results with a 94.0% classification rate based on 100 apple images showed that the proposed algorithm was simple and effective. This proposed method can be applied to other spherical fruits.
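
    The flat-field idea can be sketched in one dimension, with a low-order polynomial standing in for the B-spline lighting surface and entirely synthetic intensity values:

```python
import numpy as np

# Synthetic 1D intensity profile: smooth, uneven illumination over a curved
# surface, with a darker defect patch (all values are made up for this sketch).
x = np.linspace(0.0, 1.0, 200)
illumination = 0.6 + 0.4 * np.cos(2.5 * (x - 0.5))
profile = illumination.copy()
defect = slice(80, 92)                 # "bruise" pixels
profile[defect] *= 0.7

# Stand-in for the B-spline lighting surface: a low-order polynomial fit.
background = np.polyval(np.polyfit(x, profile, 3), x)

# After subtraction the image is essentially flat, so one global threshold works.
residual = profile - background
flagged = np.flatnonzero(residual < -0.08)
print(flagged)   # indices inside the defect patch 80..91
```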

  16. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    SciTech Connect

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

    A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising, e.g., from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-Splines are used to approximate the object. If a "proper" collection of B-Splines is chosen such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, in which the intuition is that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy search algorithm to find and delete redundant knots based on the estimation of a weight associated with each basis vector. The overall algorithm iterates by inserting and deleting knots and ends up with far fewer knots than pixels to represent the object, while keeping the estimation error within a certain tolerance. Thus, an efficient reconstruction can be obtained which significantly reduces the complexity of the problem. In this thesis, the adaptive B-Spline method is applied to a cross-well tomography problem. The problem comes from the application of finding underground pollution plumes. The cross-well tomography method is applied by placing arrays of electromagnetic transmitters and receivers along the boundaries of the region of interest. By utilizing the inverse scattering method, a linear inverse model is set up, and furthermore the adaptive B-Spline method described above is applied. The simulation results show that the B-Spline method reduces the dimensional complexity by 90%, compared with that of a pixel-based method, and decreases time complexity by 50% without significantly degrading the estimation.

  17. Interpolation using surface splines.

    NASA Technical Reports Server (NTRS)

    Harder, R. L.; Desmarais, R. N.

    1972-01-01

    A surface spline is a mathematical tool for interpolating a function of two variables. It is based upon the small deflection equation of an infinite plate. The surface spline depends upon the solution of a system of linear equations, and thus, will ordinarily require the use of a digital computer. The closed form solution involves no functions more complicated than logarithms, and is easily coded. Several modifications which can be incorporated are discussed.
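
    A minimal sketch of such a surface spline: the kernel is proportional to r^2 ln r, the linear system includes affine terms, and indeed nothing more complicated than logarithms appears (the test function and point set below are made up):

```python
import numpy as np

def tps_kernel(d2):
    """Surface-spline kernel: d2 * ln(d2) = 2 r^2 ln r, with value 0 at r = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d2 > 0.0, d2 * np.log(d2), 0.0)

def surface_spline(xy, z):
    """Interpolant w(x,y) = a0 + a1 x + a2 y + sum_i F_i * kernel(|p - p_i|^2)."""
    n = xy.shape[0]
    K = tps_kernel(np.sum((xy[:, None, :] - xy[None, :, :]) ** 2, axis=-1))
    P = np.hstack([np.ones((n, 1)), xy])                 # affine part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])      # system of linear equations
    coef = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    F, a = coef[:n], coef[n:]
    def w(pts):
        Kp = tps_kernel(np.sum((pts[:, None, :] - xy[None, :, :]) ** 2, axis=-1))
        return Kp @ F + a[0] + a[1] * pts[:, 0] + a[2] * pts[:, 1]
    return w

rng = np.random.default_rng(1)
xy = rng.uniform(size=(30, 2))           # made-up scattered data sites
z = np.sin(3.0 * xy[:, 0]) * xy[:, 1]
w = surface_spline(xy, z)
node_err = np.max(np.abs(w(xy) - z))
print(node_err)   # interpolation reproduces the data at the sites
```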

  18. On least squares collocation

    NASA Technical Reports Server (NTRS)

    Argentiero, P. D.

    1978-01-01

    It is shown that the least squares collocation approach to estimating geodetic parameters is identical to conventional minimum variance estimation. Hence, the least squares collocation estimator can be derived either by minimizing the usual least squares quadratic loss function or by computing a conditional expectation by means of the regression equation. When a deterministic functional relationship between the data and the parameters to be estimated is available, one can implement a least squares solution using the functional relation to obtain an equation of condition. It is proved that the solution so obtained is identical to what is obtained through least squares collocation. The implications of this equivalence for the estimation of mean gravity anomalies are discussed.
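
    The claimed equivalence with minimum variance estimation can be checked numerically: the collocation estimate s_hat = Cst (Ctt + R)^-1 y should attain the theoretical minimum error variance (the covariances below are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up jointly Gaussian setup: signal s at one prediction point, data t at
# three observation points, plus white observation noise with covariance R.
A = rng.standard_normal((4, 8))
C = A @ A.T / 8.0                      # covariance of (s, t1, t2, t3)
R = 0.1 * np.eye(3)

Css, Cst, Ctt = C[0, 0], C[0, 1:], C[1:, 1:]
draws = np.linalg.cholesky(C) @ rng.standard_normal((4, 200000))
s_true, t = draws[0], draws[1:]
y = t + np.sqrt(0.1) * rng.standard_normal(t.shape)

# Least squares collocation estimate and its theoretical error variance.
G = Cst @ np.linalg.inv(Ctt + R)
s_hat = G @ y
theo_var = Css - G @ Cst
emp_var = np.var(s_true - s_hat)
print(emp_var, theo_var)   # empirical error variance matches the formula
```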

  19. Non polynomial B-splines

    NASA Astrophysics Data System (ADS)

    Lakså, Arne

    2015-11-01

    B-splines are the de facto industrial standard for surface modelling in Computer Aided Design. Spline modelling is comparable to bending flexible rods of wood or metal: a flexible rod minimizes its energy when bending, and a third-degree polynomial spline curve minimizes the second derivatives. The B-spline form is a convenient way of representing polynomial splines; it connects polynomial splines to corner-cutting techniques, which induces many nice and useful properties. However, the B-spline representation can be expanded to something we can call general B-splines, i.e. both polynomial and non-polynomial splines. We will show how this expansion can be done, the properties it induces, and examples of non-polynomial B-splines.

  20. Multivariate Spline Algorithms for CAGD

    NASA Technical Reports Server (NTRS)

    Boehm, W.

    1985-01-01

    Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underpin the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box but of lower dimension; and (2) any simplex or box can be easily subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline will be expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points if an additional knot is inserted.
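
    The Mansfield-like recursion mentioned above is, in the univariate case, the Cox-de Boor recursion. A sketch that builds order-k B-splines up from the order-1 piecewise constants and checks the partition of unity:

```python
import numpy as np

def bspline_basis(knots, k, x):
    """All order-k (degree k-1) B-splines at x via the Cox-de Boor recursion."""
    knots = np.asarray(knots, dtype=float)
    x = np.asarray(x, dtype=float)
    # Order 1: piecewise-constant indicator of each knot span.
    B = [((knots[i] <= x) & (x < knots[i + 1])).astype(float)
         for i in range(knots.size - 1)]
    for order in range(2, k + 1):
        Bn = []
        for i in range(knots.size - order):
            term = np.zeros_like(x)
            if knots[i + order - 1] > knots[i]:
                term = term + (x - knots[i]) / (knots[i + order - 1] - knots[i]) * B[i]
            if knots[i + order] > knots[i + 1]:
                term = term + (knots[i + order] - x) / (knots[i + order] - knots[i + 1]) * B[i + 1]
            Bn.append(term)
        B = Bn
    return np.array(B)

knots = np.arange(8.0)                 # uniform knots -> order-4 (cubic) B-splines
x = np.linspace(3.0, 4.0, 10, endpoint=False)   # span where all four cubics overlap
basis = bspline_basis(knots, 4, x)
total = basis.sum(axis=0)
print(total)   # partition of unity on the central span: all ones
```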

  1. An efficient, high-order probabilistic collocation method on sparse grids for three-dimensional flow and solute transport in randomly heterogeneous porous media

    SciTech Connect

    Lin, Guang; Tartakovsky, Alexandre M.

    2009-05-01

    In this study, a probabilistic collocation method (PCM) on sparse grids was used to solve stochastic equations describing flow and transport in three-dimensional saturated, randomly heterogeneous porous media. Karhunen-Loève (KL) decomposition was used to represent the three-dimensional log hydraulic conductivity Y = ln Ks. The hydraulic head h and average pore-velocity v were obtained by solving the three-dimensional continuity equation coupled with Darcy's law with a random hydraulic conductivity field. The concentration was computed by solving a three-dimensional stochastic advection-dispersion equation with the stochastic average pore-velocity v computed from Darcy's law. PCM is an extension of generalized polynomial chaos (gPC) that couples gPC with probabilistic collocation. By using sparse grid points, PCM can handle a random process with a large number of random dimensions at relatively low computational cost compared to full tensor products. Monte Carlo (MC) simulations have also been conducted to verify the accuracy of the PCM. By comparing the MC and PCM results for the mean and standard deviation of concentration, it is evident that the PCM approach is computationally more efficient than Monte Carlo simulations. Unlike the conventional moment-equation approach, there is no limitation on the amplitude of random perturbation in PCM. Furthermore, PCM on sparse grids can efficiently simulate solute transport in randomly heterogeneous porous media with large variances.
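
    The Karhunen-Loève step can be sketched in one dimension by eigendecomposing a discretized exponential covariance (a 1D stand-in for the 3D log-conductivity field; all parameter values below are made up):

```python
import numpy as np

# Discretized exponential covariance C(x1, x2) = sigma^2 * exp(-|x1 - x2| / L)
# on [0, 1]: a 1D stand-in for the 3D log-conductivity field in the paper.
sigma2, corr_len, n = 1.0, 0.3, 200
x = np.linspace(0.0, 1.0, n)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Karhunen-Loeve modes are the eigenpairs of the covariance matrix.
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]     # sort descending

captured = lam[:10].sum() / lam.sum()
print(captured)   # a handful of KL modes carry most of the variance (> 0.9)
```

    Truncating the expansion at those leading modes is what makes the random dimension small enough for sparse-grid collocation.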

  2. Uncertainty Quantification in Dynamic Simulations of Large-scale Power System Models using the High-Order Probabilistic Collocation Method on Sparse Grids

    SciTech Connect

    Lin, Guang; Elizondo, Marcelo A.; Lu, Shuai; Wan, Xiaoliang

    2014-01-01

    This paper proposes a probabilistic collocation method (PCM) to quantify uncertainties in dynamic simulations of power systems. The approach was tested on a single-machine-infinite-bus system and on the over-15,000-bus Western Electricity Coordinating Council (WECC) system. Compared to the classic Monte Carlo (MC) method, the proposed PCM applies the Smolyak algorithm to reduce the number of simulations that must be performed, so the computational cost is greatly reduced. The algorithm and procedures are described in the paper, and comparisons with the MC method are made on both the single-machine and the WECC systems. The simulation results show that with PCM only a small number of sparse grid points need to be sampled, even for systems with a relatively large number of uncertain parameters. PCM is therefore computationally more efficient than the MC method.

  3. Splines: a perfect fit for medical imaging

    NASA Astrophysics Data System (ADS)

    Unser, Michael A.

    2002-05-01

    Splines, which were invented by Schoenberg more than fifty years ago, constitute an elegant framework for dealing with interpolation and discretization problems. They are widely used in computer-aided design and computer graphics, but have been neglected in medical imaging applications, mostly as a consequence of what one may call the bad press phenomenon. Thanks to some recent research efforts in signal processing and wavelet-related techniques, the virtues of splines have been revived in our community. There is now compelling evidence (several independent studies) that splines offer the best cost-performance tradeoff among available interpolation methods. In this presentation, we will argue that the spline representation is ideally suited for all processing tasks that require a continuous model of signals or images. We will show that most forms of spline fitting (interpolation, least squares approximation, smoothing splines) can be performed most efficiently using recursive digital filters. We will also have a look at their multiresolution properties which make them prime candidates for constructing wavelet bases and computing image pyramids. Typical application areas where these techniques can be useful are: image reconstruction from projection data, sampling grid conversion, geometric correction, visualization, rigid or elastic image registration, and feature extraction including edge detection and active contour models.
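The recursive-filter idea mentioned in this record — computing spline coefficients with a causal/anti-causal IIR filter pair — can be sketched for cubic B-spline interpolation. The pole value sqrt(3) − 2 is the standard one for the cubic case; the boundary initialization below is a simplified truncated sum rather than an exact mirror treatment:

```python
import math

def cubic_bspline_coeffs(s):
    """Direct cubic B-spline transform of samples s via a causal plus
    anti-causal recursive filter pair with pole z1 = sqrt(3) - 2.
    Boundary handling is simplified (truncated-sum initialization)."""
    z1 = math.sqrt(3.0) - 2.0
    n = len(s)
    g = [6.0 * v for v in s]                       # overall filter gain
    cp = [0.0] * n                                 # causal pass
    cp[0] = sum(g[k] * z1 ** k for k in range(n))  # truncated init
    for k in range(1, n):
        cp[k] = g[k] + z1 * cp[k - 1]
    cm = [0.0] * n                                 # anti-causal pass
    cm[-1] = (z1 / (z1 * z1 - 1.0)) * (cp[-1] + z1 * cp[-2])
    for k in range(n - 2, -1, -1):
        cm[k] = z1 * (cm[k + 1] - cp[k])
    return cm

# Interior samples are reproduced exactly by the cubic B-spline:
# s[i] == (c[i-1] + 4*c[i] + c[i+1]) / 6, up to decayed boundary error.
s = [math.sin(k / 3.0) for k in range(21)]
c = cubic_bspline_coeffs(s)
recon = (c[9] + 4.0 * c[10] + c[11]) / 6.0
```

The two first-order recursions cost O(n), versus O(n) with a much larger constant (or O(n³) naively) for solving the banded interpolation system directly, which is the efficiency argument the abstract makes.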

  4. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    A solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. In general, drying-rate plots need more smoothing than moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is to approximate the data by a CS regression possessing first and second derivatives; analytical differentiation of the spline regression then gives the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, permitting the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, and the fits were assessed using the coefficient of determination (R2) and the root mean square error (RMSE). The models were fitted to the raw data to test for possible exponential drying behavior. The results showed that the Two-Term model best describes the drying behavior. In addition, the drying rate smoothed using the CS proved to be an effective estimator for the moisture-time curves as well as for the missing moisture content data of seaweed Kappaphycus Striatum Variety Durian in the solar dryer under the conditions tested.
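The core numerical step described here — fit a cubic spline to the moisture-time data and differentiate it analytically to obtain the instantaneous drying rate — can be sketched with a natural cubic spline. The natural end conditions and the helper name `spline_rate` are illustrative choices, not the authors' exact regression:

```python
def spline_rate(x, y):
    """Natural cubic spline through (x, y); returns a function giving
    the spline's analytic first derivative (the instantaneous rate)."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    # Tridiagonal system for interior second derivatives M[1..n-2]
    a = [h[i - 1] for i in range(1, n - 1)]
    b = [2.0 * (h[i - 1] + h[i]) for i in range(1, n - 1)]
    c = [h[i] for i in range(1, n - 1)]
    d = [6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
         for i in range(1, n - 1)]
    for i in range(1, len(b)):                     # Thomas algorithm
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * len(b)
    for i in range(len(b) - 1, -1, -1):
        m[i] = (d[i] - (c[i] * m[i + 1] if i + 1 < len(b) else 0.0)) / b[i]
    M = [0.0] + m + [0.0]                          # natural end conditions

    def rate(t):
        i = n - 2
        for j in range(n - 1):
            if t <= x[j + 1]:
                i = j
                break
        hi = h[i]
        # Analytic derivative of the cubic piece on [x[i], x[i+1]]
        return (-M[i] * (x[i + 1] - t) ** 2 / (2 * hi)
                + M[i + 1] * (t - x[i]) ** 2 / (2 * hi)
                + (y[i + 1] - y[i]) / hi - (M[i + 1] - M[i]) * hi / 6.0)
    return rate

# Sanity check: linear data gives a constant rate equal to the slope.
rate = spline_rate([0.0, 1.0, 2.0, 3.0, 4.0], [1.0, 3.0, 5.0, 7.0, 9.0])
```

In the paper's setting y would be moisture content versus time t, and `rate(t)` the smoothed drying rate at any instant, not just at the sampled times.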

  5. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    SciTech Connect

    M Ali, M. K. E-mail: eutoco@gmail.com; Ruslan, M. H. E-mail: eutoco@gmail.com; Muthuvalu, M. S. E-mail: jumat@ums.edu.my; Wong, J. E-mail: jumat@ums.edu.my; Sulaiman, J. E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md. E-mail: hafidzruslan@eng.ukm.my

    2014-06-19

    A solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m{sup 2} and a mass flow rate of about 0.5 kg/s. In general, drying-rate plots need more smoothing than moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is to approximate the data by a CS regression possessing first and second derivatives; analytical differentiation of the spline regression then gives the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, permitting the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, and the fits were assessed using the coefficient of determination (R{sup 2}) and the root mean square error (RMSE). The models were fitted to the raw data to test for possible exponential drying behavior. The results showed that the Two-Term model best describes the drying behavior. In addition, the drying rate smoothed using the CS proved to be an effective estimator for the moisture-time curves as well as for the missing moisture content data of seaweed Kappaphycus Striatum Variety Durian in the solar dryer under the conditions tested.

  6. SURVIVAL ESTIMATION USING SPLINES

    EPA Science Inventory

    A nonparametric maximum likelihood procedure is given for estimating the survivor function from right-censored data. It approximates the hazard rate by a simple function such as a spline, with different approximations yielding different estimators. A special case is that proposed by...

  7. Improvements in spectral collocation discretization through a multiple domain technique

    NASA Technical Reports Server (NTRS)

    Macaraeg, M. G.; Streett, C. L.

    1986-01-01

    Spectral collocation methods require that a complicated physical domain map onto a simple computational domain for discretization. This and other restrictions are presently overcome by splitting the domain into regions, each of which preserves the advantages of spectral collocation, while allowing the ratio of mesh spacings between regions to be several orders of magnitude higher than is allowable in a single domain.
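The single-domain building block of such schemes is the spectral collocation differentiation matrix. A sketch for Chebyshev-Gauss-Lobatto points follows (the standard construction as popularized by Trefethen; in a multi-domain method one such matrix would be built per region and the regions coupled at their interfaces):

```python
import math

def cheb(n):
    """Chebyshev-Gauss-Lobatto points x and the spectral
    differentiation matrix D on [-1, 1], so that D @ u approximates
    u' at the points x (exact for polynomials of degree <= n)."""
    if n == 0:
        return [[0.0]], [1.0]
    x = [math.cos(math.pi * j / n) for j in range(n + 1)]
    c = [2.0 if j in (0, n) else 1.0 for j in range(n + 1)]
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1.0) ** (i + j) / (x[i] - x[j])
    for i in range(n + 1):
        # Negative-sum trick for the diagonal: rows sum to zero, so
        # constants are differentiated exactly.
        D[i][i] = -sum(D[i][j] for j in range(n + 1) if j != i)
    return D, x

# Differentiate u(x) = x^3: the result should equal 3x^2 at the nodes.
D, x = cheb(8)
u = [xi ** 3 for xi in x]
du = [sum(D[i][j] * u[j] for j in range(9)) for i in range(9)]
```

The clustering of the points near ±1 is what forces the severe mesh-spacing restriction in a single domain; splitting the domain, as the abstract describes, relaxes it.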

  8. Shape error concealment using hermite splines.

    PubMed

    Schuster, Guido M; Li, Xiaohuan; Katsaggelos, Aggelos K

    2004-06-01

    The introduction of video objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instance in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper, we propose a post-processing shape error-concealment technique that uses geometric boundary information of the received alpha-plane. Second-order Hermite splines are used to model the received boundary in the neighboring blocks, while third order Hermite splines are used to model the missing boundary. The velocities of these splines are matched at the boundary point closest to the missing block. There exists the possibility of multiple concealing splines per group of lost boundary parts. Therefore, we draw every concealment spline combination that does not self-intersect and keep all possible results until the end. At the end, we select the concealment solution that results in one closed boundary. Experimental results demonstrating the performance of the proposed method and comparisons with prior proposed methods are presented. PMID:15648871
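The basic operation assumed throughout the record above — evaluating a cubic Hermite curve from endpoint positions and velocities (tangents) — can be sketched as follows (2-D points; the function name is an illustrative choice, not the authors' code):

```python
def hermite(p0, v0, p1, v1, t):
    """Cubic Hermite curve on t in [0, 1] with endpoint positions
    p0, p1 and endpoint velocities v0, v1; points are 2-D tuples.
    Matching velocities at shared endpoints, as in the concealment
    scheme, yields a C1 join between curve segments."""
    h00 = 2 * t ** 3 - 3 * t ** 2 + 1     # weight of p0
    h10 = t ** 3 - 2 * t ** 2 + t         # weight of v0
    h01 = -2 * t ** 3 + 3 * t ** 2        # weight of p1
    h11 = t ** 3 - t ** 2                 # weight of v1
    return tuple(h00 * a + h10 * b + h01 * c + h11 * d
                 for a, b, c, d in zip(p0, v0, p1, v1))

# Interpolates the endpoints exactly, with prescribed tangents there.
start = hermite((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), 0.0)
end = hermite((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), 1.0)
```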

  9. Accuracy of a Mitral Valve Segmentation Method Using J-Splines for Real-Time 3D Echocardiography Data

    PubMed Central

    Siefert, Andrew W.; Icenogle, David A.; Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Rossignac, Jarek; Lerakis, Stamatios; Yoganathan, Ajit P.

    2013-01-01

    Patient-specific models of the heart’s mitral valve (MV) exhibit potential for surgical planning. While advances in 3D echocardiography (3DE) have provided adequate resolution to extract MV leaflet geometry, no study has quantitatively assessed the accuracy of the modeled leaflets against a ground-truth standard for temporal frames beyond systolic closure or for differing valvular dysfunctions. The accuracy of a 3DE-based segmentation methodology based on J-splines was assessed for porcine MVs with known 4D leaflet coordinates within a pulsatile simulator during closure, peak closure, and opening for control, prolapsed, and billowing MV models. For all time points, the mean distance errors between the segmented models and the ground-truth data were 0.40±0.32 mm, 0.52±0.51 mm, and 0.74±0.69 mm for the control, flail, and billowing models, respectively. For all models and temporal frames, 95% of the distance errors were below 1.64 mm. When applied to a patient data set, the segmentation was able to confirm a regurgitant orifice and post-operative improvements in coaptation. This study provides an experimental platform for assessing the accuracy of an MV segmentation methodology at phases beyond systolic closure and for differing MV dysfunctions. The results demonstrate the accuracy of the MV segmentation methodology for the development of future surgical planning tools. PMID:23460042

  10. DeMonS--a new deconvolution method for estimating drug absorbed at different time intervals and/or drug disposition model parameters using a monotonic cubic spline.

    PubMed

    Yu, Z; Hwang, S S; Gupta, S K

    1997-08-01

    DeMonS-a new numerical deconvolution method for estimating the amount of drug absorbed at different time intervals and/or drug disposition model parameters-is presented here. In DeMonS, the amount of drug absorbed at different time intervals and/or drug disposition model parameters are the unknown parameters to be calculated. The Fritsch-Butland non-decreasing cubic spline was constructed from the cumulative amount of drug absorbed-time data directly derived from the calculated amount of drug absorbed at different time intervals. The drug absorption rate, which is the derivative of this non-decreasing cubic spline, is therefore represented by a piecewise non-negative quadratic function. The drug concentrations were obtained by convoluting the drug absorption rate quadratic function with the drug disposition model function. The nonlinear optimization method with simple parameter bounds was used to estimate the optimal set of unknown parameters by minimizing the sum of squares of residuals between the observed and predicted drug concentrations. DeMonS has been applied to (i) the griseofulvin data for estimating drug absorbed at different time intervals when the drug disposition model parameters were determined separately from intravenous data, (ii) veralipride double-peak phenomenon data to estimate simultaneously the percentage of cumulative veralipride absorbed and the veralipride disposition model parameters without reference intravenous data, (iii) a comparative bioequivalence study of gastrointestinal therapeutic system (GITS) pseudoephedrine HCI (PeHCI) controlled-release oral dosage forms when the drug disposition model parameters were not available, and (iv) estimation of both drug disposition model parameters and the absorption rate of drug from Testoderm (testosterone transdermal system) in the presence of endogenous testosterone production. DeMonS was implemented using MATLAB and NAG MATLAB Toolbox, and is available for Windows 3.1. PMID:9267681
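The Fritsch-Butland construction named in the abstract chooses knot tangents so that the cubic Hermite interpolant of monotone data is itself monotone — exactly what a cumulative amount-absorbed curve requires. A sketch (the evaluation helper is an illustrative addition, not part of DeMonS):

```python
def fritsch_butland_slopes(x, y):
    """Knot tangents for a monotonicity-preserving cubic Hermite
    interpolant (the Fritsch-Butland choice): a weighted harmonic
    mean of adjacent secant slopes, zero at local extrema."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    d = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i - 1] * d[i] > 0.0:
            w1 = 2.0 * h[i] + h[i - 1]
            w2 = h[i] + 2.0 * h[i - 1]
            m[i] = (w1 + w2) / (w1 / d[i - 1] + w2 / d[i])
        # else: secants change sign (local extremum), tangent stays 0
    return m

def hermite_piece(x, y, m, t):
    """Evaluate the cubic Hermite interpolant with tangents m at t."""
    i = len(x) - 2
    for j in range(len(x) - 1):
        if t <= x[j + 1]:
            i = j
            break
    h = x[i + 1] - x[i]
    s = (t - x[i]) / h
    return (y[i] * (2 * s ** 3 - 3 * s ** 2 + 1)
            + y[i + 1] * (-2 * s ** 3 + 3 * s ** 2)
            + h * m[i] * (s ** 3 - 2 * s ** 2 + s)
            + h * m[i + 1] * (s ** 3 - s ** 2))

# Non-decreasing "cumulative amount absorbed" data stays non-decreasing.
x = [0.0, 1.0, 2.0, 4.0, 8.0]
y = [0.0, 0.1, 0.5, 0.9, 1.0]
m = fritsch_butland_slopes(x, y)
vals = [hermite_piece(x, y, m, 8.0 * k / 400) for k in range(401)]
```

The piecewise derivative of such a spline is non-negative and quadratic on each interval, which is how a non-negative absorption rate is obtained in the abstract.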

  11. L1 Control Theoretic Smoothing Splines

    NASA Astrophysics Data System (ADS)

    Nagahara, Masaaki; Martin, Clyde F.

    2014-11-01

    In this paper, we propose control theoretic smoothing splines with L1 optimality for reducing the number of parameters that describe the fitted curve as well as for removing outlier data. A control theoretic spline is a smoothing spline that is generated as the output of a given linear dynamical system. Conventional design requires exactly as many basis functions as there are data points, and the result is not robust against outliers. To solve these problems, we propose to use L1 optimality; that is, we use the L1 norm for the regularization term and/or the empirical risk term. The optimization is described as a convex optimization problem, which can be solved efficiently via numerical optimization software. A numerical example shows the effectiveness of the proposed method.

  12. Hybrid Adaptive Wavelet Collocation -- Brinkman Penalization -- Ffowcs Williams and Hawkings Method for Compressible Flow Simulation and Far-Field Acoustics Prediction

    NASA Astrophysics Data System (ADS)

    Liu, Qianlong

    2005-11-01

    One of the most practically important problems of computational aero-acoustics is the efficient and accurate calculation of flows around solid obstacles of arbitrary surfaces. To simulate flows in complex domains, we combine two mathematical approaches, the Adaptive Wavelet Collocation Method, which tackles the problem of efficiently resolving localized flow structures in complicated geometries, and the Brinkman Penalization Method, which addresses the problems of efficiently implementing arbitrary complex solid boundaries. Through them, we can resolve and automatically track all the important flow structures on the computational grid that automatically adapts to the solution. To obtain accurate long-time flow simulation and accurately predict far-field acoustics using a relatively small computational domain, appropriate artificial boundary conditions are critical to minimize the contamination by the otherwise reflected spurious waves. Once the near-field accurate simulation is available, Ffowcs Williams and Hawkings (FWH) equations are used to predict the far-field acoustics. The method is applied to a number of acoustics benchmark problems and the results are compared with both the exact and the direct numerical simulation solutions.

  13. Spline screw payload fastening system

    NASA Astrophysics Data System (ADS)

    Vranish, John M.

    1992-09-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member.
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.

  14. Spline screw payload fastening system

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (inventor)

    1993-01-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member.
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.

  15. Spline screw payload fastening system

    NASA Astrophysics Data System (ADS)

    Vranish, John M.

    1993-09-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member.
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.

  16. Theory, computation, and application of exponential splines

    NASA Technical Reports Server (NTRS)

    Mccartin, B. J.

    1981-01-01

    A generalization of the classical cubic spline, known in the literature as the exponential spline, is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline, with a particular member of this family uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined; this development roughly parallels the existing theory for cubic splines, the primary extension lying in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed, with a variety of numerical devices employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that produce a shape-preserving approximant is developed. A sequence of selected curve-fitting examples is presented which clearly demonstrates the advantages of exponential splines over cubic splines.

  17. Teaching Collocations for Productive Vocabulary Development.

    ERIC Educational Resources Information Center

    Wei, Yong

    One important but undervalued aspect of productive vocabulary is collocation--the ways in which words are combined with one another. To move from receptive to productive vocabulary, students need to learn a wide variety of ways that words collocate with each other. This paper describes the major types of collocations, typical collocational errors

  18. Learning Collocations: Do the Number of Collocates, Position of the Node Word, and Synonymy Affect Learning?

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2011-01-01

    This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates

  19. Mercury vapor air-surface exchange measured by collocated micrometeorological and enclosure methods - Part II: Bias and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2015-02-01

    Dynamic flux chambers (DFCs) and micrometeorological (MM) methods are extensively deployed for gauging air-surface Hg0 gas exchange. However, a systematic evaluation of the precision of contemporary Hg0 flux quantification methods is not available. In this study, the uncertainty in Hg0 flux measured by the relaxed eddy accumulation (REA) method, the aerodynamic gradient method (AGM), the modified Bowen-ratio (MBR) method, as well as by DFCs of traditional (TDFC) and novel (NDFC) designs, is assessed using a robust data set from two field intercomparison campaigns. The absolute precision in Hg0 concentration difference (ΔC) measurements is estimated at 0.064 ng m-3 for the gradient-based MBR and AGM systems. For the REA system, the parameter depends on the Hg0 concentration (C), at 0.069 + 0.022C. 57 and 62% of the individual vertical gradient measurements were found to be significantly different from zero during the campaigns, while for the REA technique the percentage of significant observations was lower. For the chambers, non-significant fluxes are confined to a few nighttime periods with varying ambient Hg0 concentration. The relative bias for DFC-derived fluxes is estimated to be ~10%, and ~85% of the flux biases are within 2 ng m-2 h-1 in absolute terms. The DFC flux bias follows a diurnal cycle, which is largely dictated by temperature controls on the enclosed volume. Due to contrasting prevailing micrometeorological conditions, the relative uncertainty (median) in turbulent exchange parameters differs by nearly a factor of two between the campaigns, while that in ΔC measurements is fairly stable. The estimated flux uncertainties for the triad of MM techniques are 16-27, 12-23 and 19-31% (interquartile range) for the AGM, MBR and REA methods, respectively. This study indicates that flux-gradient based techniques (MBR and AGM) are preferable to REA in quantifying Hg0 flux over ecosystems with low vegetation height.
A limitation of all the Hg0 flux measurement systems investigated is their inability to obtain synchronous samples for the calculation of ΔC. This reduces the precision of flux quantification, particularly for the MM systems under non-stationary ambient Hg0 concentrations. For future applications, it is recommended that ΔC be derived from simultaneously collected samples.

  20. Fitting multidimensional splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  1. Analysis of crustal structure of Venus utilizing residual Line-of-Sight (LOS) gravity acceleration and surface topography data. A trial of global modeling of Venus gravity field using harmonic spline method

    NASA Technical Reports Server (NTRS)

    Fang, Ming; Bowin, Carl

    1992-01-01

    To construct Venus' gravity disturbance field (or gravity anomaly) from the spacecraft-observer line-of-sight (LOS) acceleration perturbation data, both a global and a local approach can be used. The global approach (e.g., spherical harmonic coefficients) and the local approach (e.g., the integral operator method) based on geodetic techniques are generally not the same, so they must be used separately for mapping long-wavelength and short-wavelength features. Harmonic spline, as an interpolation and extrapolation technique, is intrinsically flexible for both global and local mapping of a potential field. Theoretically, it preserves the information of the potential field up to the bound imposed by the sampling theorem, regardless of whether the mapping is global or local, and it avoids truncation errors. Improvements to the harmonic spline methodology for global mapping are reported. New basis functions, a singular value decomposition (SVD) based modification to Parker & Shure's numerical procedure, and preliminary results are presented.

  2. Surface deformation over flexible joints using spline blending techniques

    NASA Astrophysics Data System (ADS)

    Haavardsholm, Birgitte; Bratlie, Jostein; Dalmo, Rune

    2014-12-01

    Skinning over a skeleton joint is the process of skin deformation based on the joint transformation. Popular geometric skinning techniques include implicit linear blending and dual quaternions. Generalized expo-rational B-splines (GERBS) are a blending-type spline construction in which local functions at each knot are blended by Ck-smooth basis functions. A smooth skinning surface can be constructed over a transformable skeleton joint by combining various types of local surface constructions and applying local Hermite interpolation. Compared to traditional spline methods, increased flexibility and local control with respect to surface deformation can be achieved using the GERBS blending construction. We present a method using a blending-type spline surface for skinning over a flexible joint, where the local geometry is individually adapted to achieve natural skin deformation based on skeleton transformations.

  3. C1 Hermite shape preserving polynomial splines in R3

    NASA Astrophysics Data System (ADS)

    Gabrielides, Nikolaos C.

    2012-06-01

    The C2 variable-degree splines [1-3] have been proven to be an efficient tool for solving the curve shape-preserving interpolation problem in two and three dimensions. Based on this representation, the current paper proposes a Hermite interpolation scheme to construct C1 shape-preserving splines of variable degree. A slight modification of the method then leads to a C1 shape-preserving Hermite cubic spline. Both methods can easily be developed within a CAD system, since they compute the B-spline control polygon directly (without iterations). They have been implemented and tested within the DNV Software CAD/CAE system GeniE.

  4. Self-Aligning, Spline-Locking Fastener

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1992-01-01

    The self-aligning, spline-locking fastener is a two-part mechanism operated by a robot using one tool and simple movements. A spline nut on a spring-loaded screw passes through a mating spline fitting. The operator turns the screw until the vertical driving surfaces on the spline nut rest against the corresponding surfaces of the spline fitting. The nut rides upward, drawing the pieces together. The fastener is used to join two parts of a structure, to couple vehicles, or to mount a payload in a vehicle.

  5. Evaluation of Least-Squares Collocation and the Reduced Point Mass method using the International Association of Geodesy, Joint Study Group 0.3 test data.

    NASA Astrophysics Data System (ADS)

    Tscherning, Carl Christian; Herceg, Matija

    2014-05-01

    The methods of Least-Squares Collocation (LSC) and the Reduced Point Mass method (RPM) both use radial basis functions for the representation of the anomalous gravity potential (T). LSC uses as many basis functions as the number of observations, while the RPM method uses as many as deemed necessary. Both methods have been evaluated, and for some tests compared, in two areas (Central Europe and the South-East Pacific). For both areas, test data had been generated using EGM2008. As observational data, (a) ground gravity disturbances, (b) airborne gravity disturbances, (c) GOCE-like second-order radial derivatives, and (d) GRACE along-track potential differences were available. The use of these data for the computation of values of (e) T in a grid was the target of the evaluation and comparison. Because T can in principle only be computed using global data, the remove-restore procedure was used, with EGM2008 subtracted (and later added back to T) up to degree 240 for datasets (a) and (b), and up to degree 36 for datasets (c) and (d). The estimated coefficient error was accounted for when using LSC and in the calculation of error estimates. The main result is that T was estimated with an error (computed minus control data (e), from which EGM2008 to degree 240 or 36 had been subtracted) as found in the table (LSC used):

      Area: Europe (mgal)     (e)-240    (a)     (b)   (e)-36    (c)     (d)
      Mean                       -0.0    0.0    -0.1     -0.1   -0.3    -1.8
      Standard deviation          4.1    0.8     2.7     32.6    6.0    19.2
      Max. difference            19.9   10.4    16.9     69.9   31.3    47.0
      Min. difference           -16.2   -3.7   -15.5    -92.1  -27.8   -65.5

      Area: Pacific (mgal)    (e)-240    (a)     (b)   (e)-36    (c)     (d)
      Mean                       -0.1   -0.1    -0.1      4.6   -0.2     0.2
      Standard deviation          4.8    0.2     1.9     49.1    6.7    18.6
      Max. difference            22.2    1.8    13.4    115.5   26.9    26.5
      Min. difference           -28.7   -3.1   -15.7   -106.4  -33.6    22.1

    The use of RPM with datasets (a), (b), (c) gave comparable results. The use of (d) with the RPM method is being implemented.
Tests were also done computing dataset (a) from the other datasets. The results here may serve as a bench-mark for other radial basis-function implementations for computing approximations to T. Improvements are certainly possible, e.g. by taking the topography and bathymetry into account.

  6. Learning Collocations: Do the Number of Collocates, Position of the Node Word, and Synonymy Affect Learning?

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2011-01-01

    This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates…

  7. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... requirement would compromise network safety. (d) When an incumbent LEC provides physical collocation, virtual... 47 Telecommunication 3 2014-10-01 2014-10-01 false Standards for physical collocation and virtual... § 51.323 Standards for physical collocation and virtual collocation. (a) An incumbent LEC shall...

  8. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... requirement would compromise network safety. (d) When an incumbent LEC provides physical collocation, virtual... 47 Telecommunication 3 2012-10-01 2012-10-01 false Standards for physical collocation and virtual... § 51.323 Standards for physical collocation and virtual collocation. (a) An incumbent LEC shall...

  9. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... requirement would compromise network safety. (d) When an incumbent LEC provides physical collocation, virtual... 47 Telecommunication 3 2013-10-01 2013-10-01 false Standards for physical collocation and virtual... § 51.323 Standards for physical collocation and virtual collocation. (a) An incumbent LEC shall...

  10. Collocations: A Neglected Variable in EFL.

    ERIC Educational Resources Information Center

    Farghal, Mohammed; Obiedat, Hussein

    1995-01-01

    Addresses the issue of collocations as an important and neglected variable in English-as-a-Foreign-Language classes. Two questionnaires, in English and Arabic, involving common collocations relating to food, color, and weather were administered to English majors and English language teachers. Results show both groups deficient in collocations. (36

  11. Multi-quadric collocation model of horizontal crustal movement

    NASA Astrophysics Data System (ADS)

    Chen, G.; Zeng, A. M.; Ming, F.; Jing, Y. F.

    2015-11-01

    To establish the horizontal crustal movement velocity field of the Chinese mainland, a Hardy multi-quadric fitting model and collocation are usually used, but the kernel function, nodes, and smoothing factor are difficult to determine in the Hardy function interpolation, and in the collocation model the covariance function of the stochastic signal must be carefully constructed. In this paper, a new combined estimation method for establishing the velocity field, based on collocation and multi-quadric equation interpolation, is presented. The crustal movement estimation simultaneously takes into consideration an Euler vector as the crustal movement trend and the local distortions as the stochastic signals, and a kernel function of the multi-quadric fitting model substitutes for the covariance function of collocation. The velocities of a set of 1070 reference stations were obtained from the Crustal Movement Observation Network of China (CMONOC), and the corresponding velocity field established using the new combined estimation method. A total of 85 reference stations were used as check points, and the precision in the north and east directions was 1.25 and 0.80 mm yr-1, respectively. The result obtained by the new method corresponds with the collocation method and multi-quadric interpolation without requiring the covariance equation for the signals.
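    A minimal sketch of Hardy multi-quadric interpolation, the kernel at the heart of this record (the combined Euler-vector/collocation estimator itself is not reproduced here; the station data and shape parameter c are made up for the demo):

    ```python
    import numpy as np

    def multiquadric_interpolate(xy, values, xy_new, c=0.5):
        """Hardy multiquadric interpolation:
        f(p) = sum_j a_j * sqrt(|p - p_j|**2 + c**2),
        with coefficients a_j chosen so f matches the data at the nodes."""
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.sqrt(d**2 + c**2)
        coeff = np.linalg.solve(A, values)
        d_new = np.linalg.norm(xy_new[:, None, :] - xy[None, :, :], axis=-1)
        return np.sqrt(d_new**2 + c**2) @ coeff

    # Toy "velocity" values at scattered stations.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 1, size=(20, 2))
    vel = np.sin(pts[:, 0]) + np.cos(pts[:, 1])

    # The multiquadric interpolant reproduces the observations at the nodes.
    assert np.allclose(multiquadric_interpolate(pts, vel, pts), vel)
    ```

    As the abstract notes, the choice of kernel shape parameter (c here) is the delicate part of the Hardy approach; the value above is arbitrary.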

  12. Mr. Stockdale's Dictionary of Collocations.

    ERIC Educational Resources Information Center

    Stockdale, Joseph Gagen, III

    This dictionary of collocations was compiled by an English-as-a-Second-Language (ESL) teacher in Saudi Arabia who teaches adult, native speakers of Arabic. The dictionary is practical in teaching English because it helps to focus on everyday events and situations. The dictionary works as follows: the teacher looks up a word, such as "talk"; next

  13. Asymmetric spline surfaces - Characteristics and applications. [in high quality optical systems design

    NASA Technical Reports Server (NTRS)

    Stacy, J. E.

    1984-01-01

    Asymmetric spline surfaces appear useful for the design of high-quality general optical systems (systems without symmetries). A spline influence function defined as the actual surface resulting from a simple perturbation in the spline definition array shows that a subarea is independent of others four or more points away. Optimization methods presented in this paper are used to vary a reflective spline surface near the focal plane of a decentered Schmidt-Cassegrain to reduce rms spot radii by a factor of 3 across the field.

  14. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    SciTech Connect

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  15. Analysis of chromatograph systems using orthogonal collocation

    NASA Technical Reports Server (NTRS)

    Woodrow, P. T.

    1974-01-01

    Research is generating fundamental engineering design techniques and concepts for the chromatographic separator of a chemical analysis system for an unmanned, Martian roving vehicle. A chromatograph model is developed which incorporates previously neglected transport mechanisms. The numerical technique of orthogonal collocation is studied. To establish the utility of the method, three models of increasing complexity are considered, the latter two being limiting cases of the derived model: (1) a simple, diffusion-convection model; (2) a rate of adsorption limited, inter-intraparticle model; and (3) an inter-intraparticle model with negligible mass transfer resistance.
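    A minimal sketch of the orthogonal collocation technique studied in this record, applied to a simple two-point boundary value problem rather than the chromatograph models (the test equation and degree are chosen for the demo): the solution is expanded in polynomials and the ODE is enforced at Chebyshev-type collocation points.

    ```python
    import numpy as np

    # Solve u''(x) = 6x, u(0) = 0, u(1) = 1 (exact solution u = x**3)
    # by polynomial collocation at Chebyshev-type points.
    N = 5                                     # polynomial degree
    k = np.arange(1, N)                       # N-1 interior collocation points
    xc = 0.5 * (1 - np.cos(np.pi * k / N))    # mapped to (0, 1)

    # u(x) = sum_j a_j x**j; rows enforce the ODE at xc, plus two boundary rows.
    j = np.arange(N + 1)
    A_ode = np.where(j >= 2, j * (j - 1) * xc[:, None] ** np.clip(j - 2, 0, None), 0.0)
    A_bc0 = (0.0 ** j)[None, :]               # u(0) = a_0
    A_bc1 = np.ones((1, N + 1))               # u(1) = sum_j a_j
    A = np.vstack([A_ode, A_bc0, A_bc1])
    b = np.concatenate([6 * xc, [0.0], [1.0]])

    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    xs = np.linspace(0, 1, 11)
    u = sum(a[m] * xs**m for m in range(N + 1))
    assert np.allclose(u, xs**3, atol=1e-8)   # collocation recovers x**3
    ```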

  16. Cubic spline functions for curve fitting

    NASA Technical Reports Server (NTRS)

    Young, J. D.

    1972-01-01

    FORTRAN cubic spline routine mathematically fits curve through given ordered set of points so that fitted curve nearly approximates curve generated by passing infinite thin spline through set of points. Generalized formulation includes trigonometric, hyperbolic, and damped cubic spline fits of third order.
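    The FORTRAN routine described here predates modern libraries; an equivalent natural cubic spline fit (zero end curvature, mimicking a free-ended physical spline) can be sketched with SciPy, using made-up sample points:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Ordered data points, as in the classic thin-spline curve fit.
    x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
    y = np.array([1.0, 2.0, 0.5, 1.5, 3.0])

    # Natural cubic spline: zero second derivative at both ends,
    # mimicking a free-ended physical spline.
    cs = CubicSpline(x, y, bc_type='natural')

    assert np.allclose(cs(x), y)  # the spline interpolates the data
    assert abs(cs(x[0], 2)) < 1e-9 and abs(cs(x[-1], 2)) < 1e-9  # free ends
    ```

    The report's trigonometric, hyperbolic, and damped variants replace the cubic pieces with other third-order basis functions; the sketch above covers only the plain cubic case.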

  17. Polynomial Interpolation Methods for Viscous Flow Calculations

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and by Hermitian (Taylor series) finite-difference discretization. The similarities and special features of these different developments are discussed. The governing systems apply for both uniform and variable meshes. Hybrid schemes resulting from two different polynomial approximations for the first and second derivatives lead to a nonuniform mesh extension of the so-called compact or Padé difference technique (Hermite 4). A variety of fourth-order methods are described and the Hermitian approach is extended to sixth-order (Hermite 6). The appropriate spline boundary conditions are derived for all procedures. For central finite differences, this leads to a two-point, second-order accurate generalization of the commonly used three-point end-difference formula. Solutions with several spline and Hermite procedures are presented for the boundary layer equations, with and without mass transfer, and for the incompressible viscous flow in a driven cavity. Divergence and nondivergence equations are considered for the cavity. Among the fourth-order techniques, it is shown that spline 4 has the smallest truncation error. The spline 4 procedure generally requires one-quarter the number of mesh points in a given coordinate direction as a central finite-difference calculation of equal accuracy. The Hermite 6 procedure leads to remarkably accurate boundary layer solutions.
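    A minimal sketch of the compact (Padé, "Hermite 4") idea on a uniform mesh, not the paper's variable-mesh formulation: the second derivative s is defined implicitly through a tridiagonal relation, giving fourth-order accuracy from a three-point stencil. The boundary closure below simply pins the end values for the demo.

    ```python
    import numpy as np

    # Fourth-order compact (Pade) approximation of f'' on a uniform mesh:
    #   (s[i-1] + 10*s[i] + s[i+1]) / 12 = (f[i-1] - 2*f[i] + f[i+1]) / h**2
    # which is a tridiagonal system for s ~ f''.
    n = 41
    x = np.linspace(0.0, np.pi, n)
    h = x[1] - x[0]
    f = np.sin(x)
    exact = -np.sin(x)

    A = np.zeros((n, n))
    b = np.zeros(n)
    # Boundary rows: pin s to the exact value (a simple closure for the demo).
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = exact[0], exact[-1]
    for i in range(1, n - 1):
        A[i, i - 1 : i + 2] = [1 / 12, 10 / 12, 1 / 12]
        b[i] = (f[i - 1] - 2 * f[i] + f[i + 1]) / h**2

    s = np.linalg.solve(A, b)
    assert np.max(np.abs(s - exact)) < 1e-5   # ~O(h**4) accuracy
    ```

    A second-order central difference on the same mesh would be roughly three orders of magnitude less accurate here, which is the motivation for compact schemes in the paper.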

  18. An Areal Isotropic Spline Filter for Surface Metrology

    PubMed Central

    Zhang, Hao; Tong, Mingsi; Chu, Wei

    2015-01-01

    This paper deals with the application of the spline filter as an areal filter for surface metrology. A profile (2D) filter is often applied in orthogonal directions to yield an areal filter for a three-dimensional (3D) measurement. Unlike the Gaussian filter, the spline filter presents an anisotropic characteristic when used as an areal filter. This disadvantage hampers the wide application of spline filters for evaluation and analysis of areal surface topography. An approximation method is proposed in this paper to overcome the problem. In this method, a profile high-order spline filter serial is constructed to approximate the filtering characteristic of the Gaussian filter. Then an areal filter with isotropic characteristic is composed by implementing the profile spline filter in the orthogonal directions. It is demonstrated that the constructed areal filter has two important features for surface metrology: an isotropic amplitude characteristic and no end effects. Some examples of applying this method on simulated and practical surfaces are analyzed.

  19. Connecting the Dots Parametrically: An Alternative to Cubic Splines.

    ERIC Educational Resources Information Center

    Hildebrand, Wilbur J.

    1990-01-01

    Discusses a method of cubic splines to determine a curve through a series of points and a second method for obtaining parametric equations for a smooth curve that passes through a sequence of points. Procedures for determining the curves and results of each of the methods are compared. (YP)
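    The parametric alternative described in this record can be sketched with SciPy's FITPACK wrappers; the points below are made up and deliberately not single-valued in x, which is exactly the case where a parametric spline is needed:

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    # Points to connect; a single-valued y(x) spline would fail on this
    # data (x turns back on itself), but a parametric spline handles it.
    x = np.array([0.0, 1.0, 1.5, 1.0, 0.0])
    y = np.array([0.0, 0.5, 2.0, 3.5, 4.0])

    # s=0 forces interpolation; u holds the parameter value of each point.
    tck, u = splprep([x, y], s=0)
    xi, yi = splev(u, tck)

    # The parametric curve passes through every input point.
    assert np.allclose(xi, x) and np.allclose(yi, y)
    ```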

  20. Recent advances in (soil moisture) triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To date, triple collocation (TC) analysis is one of the most important methods for the global scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method....

  1. Wavelets of Hermite cubic splines on the interval

    NASA Astrophysics Data System (ADS)

    Černá, Dana; Finěk, Václav; Plačková, Gerta

    2012-11-01

    In 2000, W. Dahmen et al. proposed a construction of biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. They started with Hermite cubic splines as the primal scaling bases on R. Then, they constructed dual scaling bases on R consisting of continuous functions with small supports and with polynomial exactness of order 2. Consequently, they derived primal and dual boundary scaling functions retaining the polynomial exactness. This ensures vanishing moments of the corresponding wavelets. Finally, they applied the method of stable completions to construct the corresponding primal and dual multi-wavelets on the interval. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines were proposed. In this contribution, we briefly review these constructions, use these wavelets to numerically solve differential equations, and compare their performance with a hierarchical basis in finite element methods.

  2. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the

  3. Spline regression hashing for fast image search.

    PubMed

    Liu, Yang; Wu, Fei; Yang, Yi; Zhuang, Yueting; Hauptmann, Alexander G

    2012-10-01

    Techniques for fast image retrieval over large databases have attracted considerable attention due to the rapid growth of web images. One promising way to accelerate image search is to use hashing technologies, which represent images by compact binary codewords. In this way, the similarity between images can be efficiently measured in terms of the Hamming distance between their corresponding binary codes. Although plenty of methods on generating hash codes have been proposed in recent years, there are still two key points that needed to be improved: 1) how to precisely preserve the similarity structure of the original data and 2) how to obtain the hash codes of the previously unseen data. In this paper, we propose our spline regression hashing method, in which both the local and global data similarity structures are exploited. To better capture the local manifold structure, we introduce splines developed in Sobolev space to find the local data mapping function. Furthermore, our framework simultaneously learns the hash codes of the training data and the hash function for the unseen data, which solves the out-of-sample problem. Extensive experiments conducted on real image datasets consisting of over one million images show that our proposed method outperforms the state-of-the-art techniques. PMID:22801510

  4. Collocation analysis for UMLS knowledge-based word sense disambiguation

    PubMed Central

    2011-01-01

    Background The effectiveness of knowledge-based word sense disambiguation (WSD) approaches depends in part on the information available in the reference knowledge resource. Off the shelf, these resources are not optimized for WSD and might lack terms to model the context properly. In addition, they might include noisy terms which contribute to false positives in the disambiguation results. Methods We analyzed some collocation types which could improve the performance of knowledge-based disambiguation methods. Collocations are obtained by extracting candidate collocations from MEDLINE and then assigning them to one of the senses of an ambiguous word. We performed this assignment either using semantic group profiles or a knowledge-based disambiguation method. In addition to collocations, we used second-order features from a previously implemented approach. Specifically, we measured the effect of these collocations in two knowledge-based WSD methods. The first method, AEC, uses the knowledge from the UMLS to collect examples from MEDLINE which are used to train a Naïve Bayes approach. The second method, MRD, builds a profile for each candidate sense based on the UMLS and compares the profile to the context of the ambiguous word. We have used two WSD test sets which contain disambiguation cases which are mapped to UMLS concepts. The first one, the NLM WSD set, was developed manually by several domain experts and contains words with high frequency occurrence in MEDLINE. The second one, the MSH WSD set, was developed automatically using the MeSH indexing in MEDLINE. It contains a larger set of words and covers a larger number of UMLS semantic types. Results The results indicate an improvement after the use of collocations, although the approaches have different performance depending on the data set. In the NLM WSD set, the improvement is larger for the MRD disambiguation method using second-order features.
Assignment of collocations to a candidate sense based on UMLS semantic group profiles is more effective in the AEC method. In the MSH WSD set, the increment in performance is modest for all the methods. Collocations combined with the MRD disambiguation method have the best performance. The MRD disambiguation method and second-order features provide an insignificant change in performance. The AEC disambiguation method gives a modest improvement in performance. Assignment of collocations to a candidate sense based on knowledge-based methods has better performance. Conclusions Collocations improve the performance of knowledge-based disambiguation methods, although results vary depending on the test set and method used. Generally, the AEC method is sensitive to query drift. Using AEC, just a few selected terms provide a large improvement in disambiguation performance. The MRD method handles noisy terms better but requires a larger set of terms to improve performance. PMID:21658291

  5. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... purchase space in increments small enough to collocate a single rack, or bay, of equipment. (2) Cageless... carrier can purchase space in increments small enough to collocate a single rack, or bay, of equipment....

  6. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... purchase space in increments small enough to collocate a single rack, or bay, of equipment. (2) Cageless... carrier can purchase space in increments small enough to collocate a single rack, or bay, of equipment....

  7. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.

  8. Spline screw multiple rotations mechanism

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A system for coupling two bodies together and for transmitting torque from one body to another with mechanical timing and sequencing is reported. The mechanical timing and sequencing is handled so that the following criteria are met: (1) the bodies are handled in a safe manner and nothing floats loose in space, (2) electrical connectors are engaged as long as possible so that the internal processes can be monitored throughout by sensors, and (3) electrical and mechanical power and signals are coupled. The first body has a splined driver for providing the input torque. The second body has a threaded drive member capable of rotation and limited translation. The embedded drive member will mate with and fasten to the splined driver. The second body has an embedded bevel gear member capable of rotation and limited translation. This bevel gear member is coaxial with the threaded drive member. A compression spring provides a preload on the rotating threaded member, and a thrust bearing is used for limiting the translation of the bevel gear member so that when the bevel gear member reaches the upward limit of its translation the two bodies are fully coupled and the bevel gear member then rotates due to the input torque transmitted from the splined driver through the threaded drive member to the bevel gear member. An output bevel gear with an attached output drive shaft is embedded in the second body and meshes with the threaded rotating bevel gear member to transmit the input torque to the output drive shaft.

  9. Evaluation of assumptions in soil moisture triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...

  10. A Localized Tau Method PDE Solver

    NASA Technical Reports Server (NTRS)

    Cottam, Russell

    2002-01-01

    In this paper we present a new form of the collocation method that allows one to find very accurate solutions to time marching problems without the unwelcome appearance of Gibbs phenomenon oscillations. The basic method is applicable to any partial differential equation whose solution is a continuous, albeit possibly rapidly varying function. Discontinuous functions are dealt with by replacing the function in a small neighborhood of the discontinuity with a spline that smoothly connects the function segments on either side of the discontinuity. This will be demonstrated when the solution to the inviscid Burgers equation is discussed.

  11. Reforming Triple Collocation: Beyond Three Estimates and Separation of Structural/Non-structural Errors

    NASA Astrophysics Data System (ADS)

    Pan, M.; Zhan, W.; Fisher, C. K.; Crow, W. T.; Wood, E. F.

    2014-12-01

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the multiple collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is slightly different from the original inner product solution but easier to extend to multiple collocation cases. The Pythagorean solution is fully equivalent to the original inner product solution for the triple collocation case. The multiple collocation problem turns out to be over-constrained, and a least-squares solution is presented. Because the most critical assumption, uncorrelated errors, is almost certain to fail in multiple collocation problems, we propose to divide the source estimates into structural categories and treat the structural and non-structural errors separately. Such error separation allows the source estimates to have their structural errors fully correlated within the same structural category, which is much more realistic than the original assumption. A new error assessment procedure is developed which performs the collocation twice, once for each type of error, and then sums up the two types of errors. The new procedure is also fully backward compatible with the original triple collocation. Error assessment experiments are carried out for surface soil moisture data from multiple remote sensing models, land surface models, and in situ measurements.
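    A minimal sketch of classical triple collocation (the three-estimate baseline this record extends), using the standard covariance-notation estimator under the usual assumptions of unbiased estimates and mutually independent errors; the synthetic data and noise levels are made up for the demo:

    ```python
    import numpy as np

    def triple_collocation(x, y, z):
        """Covariance-based TC: error variances of three collocated,
        unbiased estimates with mutually independent errors."""
        C = np.cov(np.vstack([x, y, z]))
        ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
        ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
        ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
        return ex, ey, ez

    # Synthetic truth observed by three independent noisy "products".
    rng = np.random.default_rng(42)
    truth = rng.normal(size=200000)
    x = truth + rng.normal(scale=0.1, size=truth.size)
    y = truth + rng.normal(scale=0.2, size=truth.size)
    z = truth + rng.normal(scale=0.3, size=truth.size)

    ex, ey, ez = triple_collocation(x, y, z)
    # Estimates recover the true error variances 0.01, 0.04, 0.09.
    assert abs(ex - 0.01) < 0.01
    assert abs(ey - 0.04) < 0.02
    assert abs(ez - 0.09) < 0.03
    ```

    The paper's extension replaces this closed-form three-product solution with Pythagorean constraints and a least-squares fit over an arbitrary number of products, which is not reproduced here.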

  12. Investigating ESL Learners' Lexical Collocations: The Acquisition of Verb + Noun Collocations by Japanese Learners of English

    ERIC Educational Resources Information Center

    Miyakoshi, Tomoko

    2009-01-01

    Although it is widely acknowledged that collocations play an important part in second language learning, especially at intermediate-advanced levels, learners' difficulties with collocations have not been investigated in much detail so far. The present study examines ESL learners' use of verb-noun collocations, such as "take notes," "place an

  13. A simplified package for calculating with splines

    NASA Technical Reports Server (NTRS)

    Smith, P. W.

    1974-01-01

    This package is designed to solve some of the elementary problems of spline interpolation and least-squares fitting. The subroutines fall into three basic categories. The first category involves computations with a given spline and/or knot sequence; the second category involves routines which calculate the coefficients of splines which perform a certain task (such as interpolation or least squares fit); and the last category is a banded equation solver for specific linear equations.

  14. Converting an unstructured quadrilateral/hexahedral mesh to a rational T-spline

    NASA Astrophysics Data System (ADS)

    Wang, Wenyan; Zhang, Yongjie; Xu, Guoliang; Hughes, Thomas J. R.

    2012-07-01

    This paper presents a novel method for converting any unstructured quadrilateral or hexahedral mesh to a generalized T-spline surface or solid T-spline, based on the rational T-spline basis functions. Our conversion algorithm consists of two stages: the topology stage and the geometry stage. In the topology stage, the input quadrilateral or hexahedral mesh is taken as the initial T-mesh. To construct a gap-free T-spline, templates are designed for each type of node and applied to elements in the input mesh. In the geometry stage, an efficient surface fitting technique is developed to improve the surface accuracy with sharp feature preservation. The constructed T-spline surface and solid T-spline interpolate every boundary node in the input mesh, with C2-continuity everywhere except the local region around irregular nodes. Finally, a Bézier extraction technique is developed and linear independence of the constructed T-splines is studied to facilitate T-spline based isogeometric analysis.

  15. Covariance modeling in geodetic applications of collocation

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko

    2014-05-01

    The collocation method is widely applied in geodesy for estimating and interpolating gravity-related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for obtaining reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions is addressed with an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized) where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit to the corresponding empirical values. Further constraints are applied so as to maintain coherence with the model degree variances and prevent solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check for the impact that improved covariance modeling has on the collocation estimate.
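    The positivity constraint on degree variances that motivates this record can be illustrated with a non-negative least-squares fit rather than the paper's simplex formulation (NNLS is a stand-in, not the authors' method; the "empirical" covariance values below are synthetic and noiseless for the demo):

    ```python
    import numpy as np
    from numpy.polynomial import legendre
    from scipy.optimize import nnls

    # Fit an empirical covariance function with a nonnegative combination of
    # Legendre polynomials in cos(psi): C(psi) = sum_n c_n P_n(cos psi),
    # c_n >= 0, mirroring the positivity constraint on degree variances.
    psi = np.linspace(0.0, np.pi, 50)          # spherical distance samples
    t = np.cos(psi)
    true_c = np.array([0.0, 0.5, 1.0, 0.25])   # synthetic degree variances
    emp = legendre.legval(t, true_c)           # synthetic empirical values

    nmax = 8
    A = np.column_stack(
        [legendre.legval(t, np.eye(nmax + 1)[n]) for n in range(nmax + 1)]
    )
    c, resid = nnls(A, emp)

    assert np.all(c >= 0)   # positivity of degree variances is enforced
    assert resid < 1e-8     # the model reproduces the empirical covariance
    ```

    Real empirical covariances are noisy and tied to full model degree-variance sets, which is where the paper's iterative simplex constraints come in.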

  16. Radiation heat transfer model using Monte Carlo ray tracing method on hierarchical ortho-Cartesian meshes and non-uniform rational basis spline surfaces for description of boundaries

    NASA Astrophysics Data System (ADS)

    Kuczyński, Paweł; Białecki, Ryszard

    2014-06-01

    The paper deals with a solution of radiation heat transfer problems in enclosures filled with nonparticipating medium using ray tracing on hierarchical ortho-Cartesian meshes. The idea behind the approach is that radiative heat transfer problems can be solved on much coarser grids than their counterparts from computational fluid dynamics (CFD). The resulting code is designed as an add-on to OpenFOAM, an open-source CFD program. Ortho-Cartesian mesh involving boundary elements is created based upon CFD mesh. Parametric non-uniform rational basis spline (NURBS) surfaces are used to define boundaries of the enclosure, allowing for dealing with domains of complex shapes. Algorithm for determining random, uniformly distributed locations of rays leaving NURBS surfaces is described. The paper presents results of test cases assuming gray diffusive walls. In the current version of the model the radiation is not absorbed within gases. However, the ultimate aim of the work is to upgrade the functionality of the model, to problems in absorbing, emitting and scattering medium projecting iteratively the results of radiative analysis on CFD mesh and CFD solution on radiative mesh.

  17. Collocation and Technicality in EAP Engineering

    ERIC Educational Resources Information Center

    Ward, Jeremy

    2007-01-01

    This article explores how collocation relates to lexical technicality, and how the relationship can be exploited for teaching EAP to second-year engineering students. First, corpus data are presented to show that complex noun phrase formation is a ubiquitous feature of engineering text, and that these phrases (or collocations) are highly

  18. Should We Teach EFL Students Collocations?

    ERIC Educational Resources Information Center

    Bahns, Jens; Eldaw, Moira

    1993-01-01

    German advanced English-as-a-foreign-language (EFL) students' productive knowledge of English collocations consisting of a verb and a noun was investigated in a translation task and a cloze task. Results suggest that EFL students should concentrate on collocations that cannot readily be paraphrased. The tasks are appended. (32 references)

  19. Supporting Collocation Learning with a Digital Library

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Franken, Margaret; Witten, Ian H.

    2010-01-01

    Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on

  20. Implicit B-spline surface reconstruction.

    PubMed

    Rouhani, Mohammad; Sappa, Angel D; Boyer, Edmond

    2015-01-01

    This paper presents a fast and flexible curve and surface reconstruction technique based on implicit B-splines. This representation does not require any parameterization and is locally supported. This fact is exploited here to propose a reconstruction technique based on solving a sparse system of equations. The method is further accelerated by reducing the dimension to the active control lattice. Moreover, surface smoothness and user interaction are supported for controlling the surface. Finally, a novel weighting technique is introduced in order to blend small patches and smooth them in the overlapping regions. The whole framework is fast and efficient and can handle large clouds of points with very low computational cost. The experimental results show the flexibility and accuracy of the proposed algorithm in describing objects with complex topologies. Comparisons with other fitting methods highlight the superiority of the proposed approach in the presence of noise and missing data. PMID:25373084

  1. Spline Curves, Wire Frames and Bvalue

    NASA Technical Reports Server (NTRS)

    Smith, L.; Munchmeyer, F.

    1985-01-01

    The methods that were developed for wire-frame design are described. The principal tools for controlling a curve during interactive design are mathematical ducks; the simplest of these devices is an analog of the lead weight a draftsman uses to control a mechanical spline. Ducks for controlling differential and integral properties of curves were also created. Other methods presented include: constructing the end of a Bezier polygon to gain quick and reasonably confident control of the end tangent vector, end curvature and end torsion; keeping the magnitude of unwanted curvature oscillations within tolerance; constructing the railroad curves that appear in many engineering design problems; and controlling the frame to minimize errors at mesh points and to optimize the shapes of the curve elements.

  2. Optimized knot placement for B-splines in deformable image registration

    PubMed Central

    Jacobson, Travis J.; Murphy, Martin J.

    2011-01-01

    Purpose: To develop an automatic knot placement algorithm to enable the use of Non-Uniform Rational B-Splines (NURBS) in deformable image registration. Methods: The authors developed a two-step approach to fit a known displacement vector field (DVF). An initial fit was made with uniform knot spacing. The error generated by this fit was then assigned as an attractive force pulling on the knots, acting against a resistive spring force in an iterative equilibration scheme. To demonstrate the accuracy gain of knot optimization over uniform knot placement, we compared the sum of the squared errors and the frequency of large errors. Results: Fits were made to a one-dimensional DVF using 120 free knots. Given the same number of free knots, the optimized, nonuniform B-spline fit produced a smaller error than the uniform B-spline fit. The accuracy was improved by a mean factor of 4.02. The optimized B-spline was found to greatly reduce the number of errors more than 1 standard deviation from the mean error of the uniform fit. The uniform B-spline had 15 such errors, while the optimized B-spline had only two. The algorithm was extended to fit a two-dimensional DVF using control point grid sizes ranging from 8×8 to 15×15. Compared with uniform fits, the optimized B-spline fits were again found to reduce the sum of squared errors (mean ratio = 2.61) and number of large errors (mean ratio = 4.50). Conclusions: Nonuniform B-splines offer an attractive alternative to uniform B-splines in modeling the DVF. They carry forward the mathematical compactness of B-splines while simultaneously introducing new degrees of freedom. The increased adaptability of knot placement gained from the generalization to NURBS offers increased local control as well as the ability to explicitly represent topological discontinuities. PMID:21928630
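    The gain from letting the fitting error pull knots away from a uniform layout can be sketched with SciPy's `LSQUnivariateSpline`. Rather than the authors' force-and-spring equilibration, this hedged toy version redistributes the interior knots so that each knot interval carries an equal share of the cumulative absolute error (a standard error-equidistribution heuristic); the test signal, knot count, and sweep count are assumptions of the sketch.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_with_knots(x, y, knots):
    """Least-squares cubic B-spline fit with the given interior knots."""
    spl = LSQUnivariateSpline(x, y, knots, k=3)
    return spl, np.abs(y - spl(x))

def equidistribute_knots(x, err, n_knots):
    """Place interior knots so each interval carries equal cumulative |error|."""
    w = err + err.mean() * 1e-3          # small floor keeps cum strictly increasing
    cum = np.cumsum(w)
    cum = cum / cum[-1]
    targets = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
    return np.interp(targets, cum, x)

x = np.linspace(0.0, 1.0, 400)
y = np.tanh(20.0 * (x - 0.6))            # sharp transition: hard for uniform knots
n_int = 8
uniform = np.linspace(0.0, 1.0, n_int + 2)[1:-1]
spl_u, err_u = fit_with_knots(x, y, uniform)
err = err_u
for _ in range(5):                        # a few redistribution sweeps
    knots = equidistribute_knots(x, err, n_int)
    spl_a, err = fit_with_knots(x, y, knots)
```

After the sweeps the knots cluster around the sharp transition, and the sum of squared errors drops well below that of the uniform-knot fit, mirroring the paper's one-dimensional result.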

  3. Entropy Stable Spectral Collocation Schemes for the Navier-Stokes Equations: Discontinuous Interfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.

    2013-01-01

    Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.

  4. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    SciTech Connect

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.; Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.

    2008-07-14

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors.
The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.

  5. Multicategorical Spline Model for Item Response Theory.

    ERIC Educational Resources Information Center

    Abrahamowicz, Michal; Ramsay, James O.

    1992-01-01

    A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)

  6. Modelling Childhood Growth Using Fractional Polynomials and Linear Splines

    PubMed Central

    Tilling, Kate; Macdonald-Wallis, Corrie; Lawlor, Debbie A.; Hughes, Rachael A.; Howe, Laura D.

    2014-01-01

    Background: There is increasing emphasis in medical research on modelling growth across the life course and identifying factors associated with growth. Here, we demonstrate multilevel models for childhood growth either as a smooth function (using fractional polynomials) or a set of connected linear phases (using linear splines). Methods: We related parental social class to height from birth to 10 years of age in 5,588 girls from the Avon Longitudinal Study of Parents and Children (ALSPAC). Multilevel fractional polynomial modelling identified the best-fitting model as being of degree 2 with powers of the square root of age, and the square root of age multiplied by the log of age. The multilevel linear spline model identified knot points at 3, 12 and 36 months of age. Results: Both the fractional polynomial and linear spline models show an initially fast rate of growth, which slowed over time. Both models also showed that there was a disparity in length between manual and non-manual social class infants at birth, which decreased in magnitude until approximately 1 year of age and then increased. Conclusions: Multilevel fractional polynomials give a more realistic smooth function, and linear spline models are easily interpretable. Each can be used to summarise individual growth trajectories and their relationships with individual-level exposures. PMID:25413651
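    The linear-spline part of such a model has a simple design matrix: an intercept, age, and one change-of-slope term per knot (3, 12 and 36 months in the paper). The sketch below is a single-level version on synthetic data, not the multilevel ALSPAC model; the coefficients are invented purely for illustration.

```python
import numpy as np

def linear_spline_basis(age, knots=(3.0, 12.0, 36.0)):
    """Design matrix for a linear spline in age (months): intercept,
    age, and a change-of-slope hinge term max(0, age - k) at each knot."""
    cols = [np.ones_like(age), age]
    cols += [np.maximum(0.0, age - k) for k in knots]
    return np.column_stack(cols)

age = np.linspace(0.0, 120.0, 500)
# synthetic "growth" in cm: fast early gain, slowing at each knot
beta = np.array([50.0, 2.5, -1.2, -0.8, -0.3])
height = linear_spline_basis(age) @ beta
# ordinary least squares recovers the phase-specific slopes
bhat, *_ = np.linalg.lstsq(linear_spline_basis(age), height, rcond=None)
```

The slope in each phase is the running sum of the coefficients (2.5, then 1.3, 0.5 and 0.2 cm/month here), which is what makes linear-spline growth models so easy to interpret.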

  7. Spline-based sparse tomographic reconstruction with Besov priors

    NASA Astrophysics Data System (ADS)

    Sakhaee, Elham; Entezari, Alireza

    2015-03-01

    Tomographic reconstruction from limited X-ray data is an ill-posed inverse problem. A common Bayesian approach is to search for the maximum a posteriori (MAP) estimate of the unknowns that integrates the prior knowledge, about the nature of biomedical images, into the reconstruction process. Recent results on the Bayesian inversion have shown the advantages of Besov priors for the convergence of the estimates as the discretization of the image is refined. We present a spline framework for sparse tomographic reconstruction that leverages higher-order basis functions for image discretization while incorporating Besov space priors to obtain the MAP estimate. Our method leverages tensor-product B-splines and box splines, as higher order basis functions for image discretization, that are shown to improve accuracy compared to the standard, first-order, pixel-basis. Our experiments show that the synergy produced from higher order B-splines for image discretization together with the discretization-invariant Besov priors leads to significant improvements in tomographic reconstruction. The advantages of the proposed Bayesian inversion framework are examined for image reconstruction from limited number of projections in a few-view setting.

  8. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model is an adaptive nonparametric regression method that suits problems with high dimensions and several variables; the semi-parametric technique is based on smoothing splines, also a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices with both approaches. After investigating the models, we selected four accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock prices with the MARS model. After fitting the semi-parametric splines technique, only four accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
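    The core of MARS is a greedy forward pass that adds reflected pairs of hinge functions max(0, x - t) and max(0, t - x) at whichever knot gives the biggest drop in squared error. The miniature one-variable version below is only a sketch of that idea (the quantile grid of candidate knots is an assumption); full MARS also searches across variables and prunes terms backwards using generalized cross-validation.

```python
import numpy as np

def mars_forward(x, y, n_pairs=2):
    """Greedy forward pass: repeatedly add the reflected hinge pair
    (max(0, x - t), max(0, t - x)) whose knot t most reduces the SSE."""
    B = np.ones((x.size, 1))                       # start with the intercept
    for _ in range(n_pairs):
        best = None
        for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
            Bt = np.column_stack([B, np.maximum(0.0, x - t),
                                     np.maximum(0.0, t - x)])
            coef, *_ = np.linalg.lstsq(Bt, y, rcond=None)
            sse = ((y - Bt @ coef) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, Bt)
        B = best[1]
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ coef

x = np.linspace(-2.0, 2.0, 300)
y = 1.0 + 2.0 * np.maximum(0.0, x - 0.5)           # true hinge at t = 0.5
yhat = mars_forward(x, y)
```

Even with the true knot absent from the candidate grid, two hinge pairs bracketing it approximate the kink closely, which is the adaptivity the abstract refers to.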

  9. A Fast Adaptive Wavelet Collocation Algorithm for Multidimensional PDEs

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Paolucci, Samuel

    1997-11-01

    A fast multilevel wavelet collocation method for the solution of partial differential equations in multiple dimensions is developed. The computational cost of the algorithm is independent of the dimensionality of the problem and is O(N), where N is the total number of collocation points. The method can handle general boundary conditions. The multilevel structure of the algorithm provides a simple way to adapt computational refinements to local demands of the solution. High resolution computations are performed only in regions where singularities or sharp transitions occur. Numerical results demonstrate the ability of the method to resolve localized structures such as shocks, which change their location and steepness in space and time. The present results indicate that the method has clear advantages in comparison with well established numerical algorithms.

  10. An adaptive three-dimensional RHT-splines formulation in linear elasto-statics and elasto-dynamics

    NASA Astrophysics Data System (ADS)

    Nguyen-Thanh, N.; Muthu, J.; Zhuang, X.; Rabczuk, T.

    2014-02-01

    An adaptive three-dimensional isogeometric formulation based on rational splines over hierarchical T-meshes (RHT-splines) for problems in elasto-statics and elasto-dynamics is presented. RHT-splines avoid some shortcomings of NURBS-based formulations; in particular they allow for adaptive h-refinement with ease. In order to drive the adaptive refinement, we present a recovery-based error estimator for RHT-splines. The method is applied to several problems in elasto-statics and elasto-dynamics including three-dimensional modeling of thin structures. The results are compared to analytical solutions and results of NURBS-based isogeometric formulations.

  11. Validation of significant wave height product from Envisat ASAR using triple collocation

    NASA Astrophysics Data System (ADS)

    Wang, H.; Shi, C. Y.; Zhu, J. H.; Huang, X. Q.; Chen, C. T.

    2014-03-01

    Nowadays, spaceborne Synthetic Aperture Radar (SAR) has become a powerful tool for providing significant wave height. Traditionally, validation of SAR-derived ocean wave height has been carried out against buoy measurements or model outputs, which only yields an inter-comparison, not an 'absolute' validation. In this study, the triple collocation error model has been introduced into the validation of Envisat ASAR level 2 data. Significant wave height data from ASAR were validated against in situ buoy data and wave model hindcast results from WaveWatch III, covering a period of six years. The impact of the collocation distance on the error of the ASAR wave height is discussed. From the triple collocation validation analysis, it is found that the error of the Envisat ASAR significant wave height product is linear in the collocation distance and decreases with decreasing collocation distance. Using a linear regression fit, the absolute error of the Envisat ASAR wave height at zero collocation distance was obtained: 0.49 m in the deep, open ocean.
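    For reference, the classical covariance form of triple collocation estimates each dataset's error variance from the three mutual covariances, assuming the errors are mutually independent and the datasets are already cross-calibrated to a common scale. The sketch below checks that formula on synthetic data; the signal and noise levels are invented for the demonstration and are not taken from the study.

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Covariance-based triple collocation: error variances of three
    collocated measurements of one signal with independent errors."""
    C = np.cov(np.vstack([x, y, z]))
    ex2 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez2 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex2, ey2, ez2

rng = np.random.default_rng(2)
truth = rng.normal(0.0, 1.0, 200_000)              # common geophysical signal
x = truth + rng.normal(0.0, 0.3, truth.size)        # e.g. SAR-like product
y = truth + rng.normal(0.0, 0.5, truth.size)        # e.g. buoy-like product
z = truth + rng.normal(0.0, 0.4, truth.size)        # e.g. model hindcast
ex2, ey2, ez2 = triple_collocation_errors(x, y, z)
```

No dataset is treated as truth, yet all three error variances (0.09, 0.25, 0.16 here) are recovered simultaneously, which is the advantage over pairwise inter-comparison that the abstract emphasizes.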

  12. Combined Spline and B-spline for an improved automatic skin lesion segmentation in dermoscopic images using optimal color channel.

    PubMed

    Abbas, A A; Guo, X; Tan, W H; Jalab, H A

    2014-08-01

    In a computerized image analysis environment, the irregularity of a lesion border has been used to differentiate between malignant melanoma and other pigmented skin lesions. The accuracy of automated lesion border detection is a significant step towards accurate classification at a later stage. In this paper, we propose the use of a combined Spline and B-spline in order to enhance the quality of dermoscopic images before segmentation. Morphological operations and a median filter were first used to remove noise from the original image during pre-processing. We then adjusted the image RGB values to the optimal color channel (the green channel). The combined Spline and B-spline method was subsequently adopted to enhance the image before segmentation. The lesion segmentation was completed based on a threshold value empirically obtained using the optimal color channel. Finally, morphological operations were utilized to merge the smaller regions with the main lesion region. Improvement in the average segmentation accuracy was observed in experimental results on 70 dermoscopic images. The average segmentation accuracy achieved was 97.21% (with average sensitivity and specificity of 94% and 98.05%, respectively). PMID:24957396

  13. Multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS) is a nonparametric method that estimates complex nonlinear relationships by a seri...

  14. Sparse grid collocation schemes for stochastic natural convection problems

    SciTech Connect

    Ganapathysubramanian, Baskar; Zabaras, Nicholas . E-mail: zabaras@cornell.edu

    2007-07-01

    In recent years, there has been an interest in analyzing and quantifying the effects of random inputs in the solution of partial differential equations that describe thermal and fluid flow problems. Spectral stochastic methods and Monte Carlo based sampling methods are two approaches that have been used to analyze these problems. As the complexity of the problem or the number of random variables involved in describing the input uncertainties increases, these approaches become highly impractical from implementation and convergence points of view. This is especially true in the context of realistic thermal flow problems, where uncertainties in the topology of the boundary domain, boundary flux conditions and heterogeneous physical properties usually require high-dimensional random descriptors. The sparse grid collocation method based on the Smolyak algorithm offers a viable alternative for solving high-dimensional stochastic partial differential equations. An extension of the collocation approach to include adaptive refinement in important stochastic dimensions is utilized to further reduce the numerical effort necessary for simulation. We showcase the collocation-based approach to efficiently solve natural convection problems involving large stochastic dimensions. Equilibrium jumps occurring due to surface roughness and heterogeneous porosity are captured. Comparisons of the present method with the generalized polynomial chaos expansion and Monte Carlo methods are made.

  15. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
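    Backward elimination of knots can be sketched directly: drop, one knot at a time, whichever removal increases the residual sum of squares least, and stop when no knot can be removed cheaply. This hedged illustration uses a truncated-power cubic basis rather than the paper's FORTRAN B-spline programs, and the stopping rule and test curve are assumptions of the sketch.

```python
import numpy as np

def fit_sse(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, ((y - X @ coef) ** 2).sum()

def backward_knot_elimination(x, y, knots):
    """Repeatedly drop the cubic-spline knot whose removal increases the
    SSE least, while the increase stays within a small slack."""
    knots = list(knots)

    def design(ks):
        cols = [x ** p for p in range(4)]            # cubic polynomial part
        cols += [np.maximum(0.0, x - t) ** 3 for t in ks]
        return np.column_stack(cols)

    _, sse = fit_sse(design(knots), y)
    while knots:
        trials = [fit_sse(design(knots[:i] + knots[i + 1:]), y)[1]
                  for i in range(len(knots))]
        i = int(np.argmin(trials))
        if trials[i] > sse + 1e-10:                  # removal hurts: stop
            break
        sse = trials[i]
        knots.pop(i)
    return knots

x = np.linspace(0.0, 1.0, 300)
y = 20.0 * np.maximum(0.0, x - 0.5) ** 3 + x         # one genuine knot at 0.5
kept = backward_knot_elimination(x, y, knots=np.linspace(0.1, 0.9, 9))
```

Starting from nine candidate knots, the elimination discards the eight superfluous ones and retains only the knot at 0.5, which is the behavior the statistical variable selection approach formalizes with significance tests.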

  16. Septic spline solutions of sixth-order boundary value problems

    NASA Astrophysics Data System (ADS)

    Siddiqi, Shahid S.; Akram, Ghazala

    2008-05-01

    A septic spline is used for the numerical solution of the sixth-order linear, special case boundary value problem. End conditions for the definition of the septic spline are derived, consistent with the sixth-order boundary value problem. The algorithm developed approximates the solution and its higher-order derivatives. The method has also been proved to be second-order convergent. Three examples are considered as numerical illustrations of the method developed. The method is also compared with that developed in [M. El-Gamel, J.R. Cannon, J. Latour, A.I. Zayed, Sinc-Galerkin method for solving linear sixth order boundary-value problems, Mathematics of Computation 73 (247) (2003) 1325-1343] and is observed to perform better.

  17. Calculating the 2D motion of lumbar vertebrae using splines.

    PubMed

    McCane, Brendan; King, Tamara I; Abbott, J Haxby

    2006-01-01

    In this study we investigate the use of splines and the ICP method [Besl, P., McKay, N., 1992. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 239-256.] for calculating the transformation parameters for a rigid body undergoing planar motion parallel to the image plane. We demonstrate the efficacy of the method by estimating the finite centre of rotation and angle of rotation from lateral flexion/extension radiographs of the lumbar spine. In an in vitro error study, the method displayed an average error of rotation of 0.44 +/- 0.45 degrees, and an average error in FCR calculation of 7.6 +/- 8.5 mm. The method was shown to be superior to that of Crisco et al. [Two-dimensional rigid-body kinematics using image contour registration. Journal of Biomechanics 28(1), 119-124.] and Brinckmann et al. [Quantification of overload injuries of the thoracolumbar spine in persons exposed to heavy physical exertions or vibration at the workplace: Part I - the shape of vertebrae and intervertebral discs - study of a young, healthy population and a middle-aged control group. Clinical Biomechanics Supplement 1, S5-S83.] for the tests performed here. In general, we believe the use of splines to represent planar shapes to be superior to using digitised curves or landmarks for several reasons. First, with appropriate software, splines require less effort to define and are a compact representation, with most vertebra outlines using less than 30 control points. Second, splines are inherently sub-pixel representations of curves, even if the control points are limited to pixel resolutions. Third, there is a well-defined method (the ICP algorithm) for registering shapes represented as splines. 
Finally, like digitised curves, splines are able to represent a large class of shapes with little effort, but reduce potential segmentation errors from two dimensions (parallel and perpendicular to the image gradient) to just one (parallel to the image gradient). We have developed an application for performing all the necessary computations which can be downloaded from http://www.claritysmart.com. PMID:16325826
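    The registration step relied on above, ICP with a rigid 2D transform, fits in a few lines: alternately match each point to its nearest neighbour and re-solve the rotation and translation in closed form (the Kabsch/Procrustes solution). The contour below is a synthetic stand-in for a sampled vertebra outline; the sampling density, offset, and iteration count are assumptions of the sketch.

```python
import numpy as np

def best_rigid_2d(P, Q):
    """Closed-form least-squares rotation + translation mapping P onto Q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cq - R @ cp

def icp_2d(P, Q, iters=60):
    """Iterative closest point: match to nearest neighbours in Q (brute
    force), re-solve the rigid transform, and accumulate the composition."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    P_cur = P.copy()
    for _ in range(iters):
        d2 = ((P_cur[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        match = Q[d2.argmin(1)]
        R, t = best_rigid_2d(P_cur, match)
        P_cur = P_cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# closed contour standing in for a densely sampled spline outline
s = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
Q = np.column_stack([1.5 * np.cos(s) + 0.3 * np.cos(3 * s),
                     np.sin(s) + 0.2 * np.sin(2 * s)])
theta = np.deg2rad(7.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = Q @ R_true.T + np.array([0.15, -0.1])            # "flexed" copy of the outline
R_tot, t_tot = icp_2d(P, Q)
recovered = np.arctan2(R_tot[1, 0], R_tot[0, 0])     # should be about -7 degrees
```

Registering the displaced copy back onto the original recovers the 7 degree rotation without any landmark correspondence, which is what makes ICP on spline-sampled contours attractive for vertebral kinematics.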

  18. Non-uniform exponential tension splines

    NASA Astrophysics Data System (ADS)

    Bosner, Tina; Rogina, Mladen

    2007-11-01

    We describe explicitly each stage of a numerically stable algorithm for calculating with exponential tension B-splines with a non-uniform choice of tension parameters. These splines lie piecewise in the kernel of D^2(D^2 - p^2), where D stands for the ordinary derivative; they are defined on arbitrary meshes, with a different choice of the tension parameter p on each interval. The algorithm provides values of the associated B-splines and their generalized and ordinary derivatives by performing positive linear combinations of positive quantities, described as lower-order exponential tension splines. We show that nothing more than the knot insertion algorithm and good approximation of a few elementary functions is needed to achieve machine accuracy. The underlying theory is that of splines based on Chebyshev canonical systems which are not smooth enough to be ECC-systems. First, by the de Boor algorithm we construct an exponential tension spline of class C^1, and then we use quasi-Oslo type algorithms to evaluate classical non-uniform C^2 tension exponential splines.

  19. Vertex collocation profiles: theory, computation, and results.

    PubMed

    Lichtenwalter, Ryan N; Chawla, Nitesh V

    2014-01-01

    We describe the vertex collocation profile (VCP) concept. VCPs provide rich information about the surrounding local structure of embedded vertex pairs. VCP analysis offers a new tool for researchers and domain experts to understand the underlying growth mechanisms in their networks and to analyze link formation mechanisms in the appropriate sociological, biological, physical, or other context. The same resolution that gives the VCP method its analytical power also enables it to perform well when used to accomplish link prediction. We first develop the theory, mathematics, and algorithms underlying VCPs. We provide timing results to demonstrate that the algorithms scale well even for large networks. Then we demonstrate VCP methods performing link prediction competitively with unsupervised and supervised methods across different network families. Unlike many analytical tools, VCPs inherently generalize to multirelational data, which provides them with unique power in complex modeling tasks. To demonstrate this, we apply the VCP method to longitudinal networks by encoding temporally resolved information into different relations. In this way, the transitions between VCP elements represent temporal evolutionary patterns in the longitudinal network data. Results show that VCPs can use this additional data, typically challenging to employ, to improve predictive model accuracies. We conclude with our perspectives on the VCP method and its future in network science, particularly link prediction. PMID:25392767

  20. The Effect of Input Enhancement of Collocations in Reading on Collocation Learning and Retention of EFL Learners

    ERIC Educational Resources Information Center

    Goudarzi, Zahra; Moini, M. Raouf

    2012-01-01

    Collocation is one of the most problematic areas in second language learning, and it seems that anyone who wants to improve his or her communication in another language should improve his or her collocation competence. This study attempts to determine the effect of applying three different kinds of input enhancement on collocation learning and retention of…

  1. An empirical understanding of triple collocation evaluation measure

    NASA Astrophysics Data System (ADS)

    Scipal, Klaus; Doubkova, Marcela; Hegyova, Alena; Dorigo, Wouter; Wagner, Wolfgang

    2013-04-01

    The triple collocation method is an advanced evaluation method that has been used in the soil moisture field for only about half a decade. The method requires three datasets with independent error structures that represent an identical phenomenon. The main advantages of the method are that it a) does not require a reference dataset that has to be considered to represent the truth, b) limits the effect of random and systematic errors of the other two datasets, and c) simultaneously assesses the error of all three datasets. The objective of this presentation is to assess the triple collocation error (Tc) of the ASAR Global Mode Surface Soil Moisture (GM SSM) 1 km dataset and highlight problems of the method related to its ability to cancel the effect of errors in ancillary datasets. In particular, the goals are a) to investigate trends in Tc related to the change in spatial resolution from 5 to 25 km, b) to investigate trends in Tc related to the choice of a hydrological model, and c) to study the relationship between Tc and other absolute evaluation methods (namely RMSE and error propagation, EP). The triple collocation method is implemented using ASAR GM, AMSR-E, and a model (either AWRA-L, GLDAS-NOAH, or ERA-Interim). First, the significance of the relationship between the three soil moisture datasets was tested, which is a prerequisite for the triple collocation method. Second, the trends in Tc related to the choice of the third reference dataset and to scale were assessed. For this purpose the triple collocation was repeated, replacing AWRA-L with two different globally available model reanalysis datasets operating at different spatial resolutions (ERA-Interim and GLDAS-NOAH). Finally, the retrieved results were compared to the results of the RMSE and EP evaluation measures. Our results demonstrate that the Tc method does not eliminate the random and time-variant systematic errors of the second and third datasets used in the Tc. Possible reasons include a) that the TC method cannot fully function with datasets acting at very different spatial resolutions, or b) that the errors were not fully independent, as initially assumed.

  2. Rational-spline approximation with automatic tension adjustment

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Kerr, P. A.

    1984-01-01

    An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.
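    The two limits described, cubic at zero tension and linear at very large tension, can be demonstrated with one common rational-cubic segment form (the NASA algorithm's automatic tension adjustment is not reproduced here; the segment form and coefficients below are assumptions of the sketch).

```python
import numpy as np

def rational_segment(t, a, b, c, d, p):
    """One rational-spline segment on [0, 1] with tension parameter p:
    p = 0 reduces to a cubic; large p flattens the interior toward the
    straight line a*(1 - t) + b*t."""
    return (a * (1.0 - t) + b * t
            + c * t ** 3 / (1.0 + p * (1.0 - t))
            + d * (1.0 - t) ** 3 / (1.0 + p * t))

t = np.linspace(0.0, 1.0, 101)
zero_tension = rational_segment(t, 0.0, 1.0, 0.5, -0.5, p=0.0)   # a cubic
high_tension = rational_segment(t, 0.0, 1.0, 0.5, -0.5, p=1e6)   # nearly linear
```

At p = 0 the segment equals the cubic t + 0.5 t^3 - 0.5 (1 - t)^3, while at p = 10^6 the interior hugs the line s(t) = t; the cubic terms still contribute at the very endpoints, so the linear limit is an interior statement.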

  3. A Two-Timescale Discretization Scheme for Collocation

    NASA Technical Reports Server (NTRS)

    Desai, Prasun; Conway, Bruce A.

    2004-01-01

    The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization for state variables having higher-frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement, with differences of less than 0.5 percent. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

  4. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V., II

    1985-01-01

    The Complex Variable Boundary Element Method, or CVBEM, provides a highly accurate means of developing numerical solutions to steady-state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.

  5. Data approximation using a blending type spline construction

    SciTech Connect

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

    Generalized expo-rational B-splines (GERBS) is a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.

  6. Convex Interpolating Splines of Arbitrary Degree

    NASA Technical Reports Server (NTRS)

    Neuman, E.

    1985-01-01

    Shape-preserving approximations are constructed by interpolating the data with polynomial splines of arbitrary degree. A regularity condition is formulated on the data which ensures the existence of such a shape-preserving spline, an algorithm is presented for its construction, and a bound is given on the uniform norm of the error which results when the algorithm is used to produce an approximation to a given f ∈ C[a,b].

  7. Frequency of Input and L2 Collocational Processing: A Comparison of Congruent and Incongruent Collocations

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2013-01-01

    This study investigated the influence of frequency effects on the processing of congruent (i.e., having an equivalent first language [L1] construction) collocations and incongruent (i.e., not having an equivalent L1 construction) collocations in a second language (L2). An acceptability judgment task was administered to native and advanced

  8. The Impact of Corpus-Based Collocation Instruction on Iranian EFL Learners' Collocation Learning

    ERIC Educational Resources Information Center

    Ashouri, Shabnam; Arjmandi, Masoume; Rahimi, Ramin

    2014-01-01

    Over the past decades, studies of EFL/ESL vocabulary acquisition have identified the significance of collocations in language learning. Due to the fact that collocations have been regarded as one of the major concerns of both EFL teachers and learners for many years, the present study attempts to shed light on the impact of corpus-based

  9. Analysis of harmonic spline gravity models for Venus and Mars

    NASA Technical Reports Server (NTRS)

    Bowin, Carl

    1986-01-01

    Methodology utilizing harmonic splines for determining the true gravity field from Line-Of-Sight (LOS) acceleration data from planetary spacecraft missions was tested. As is well known, the LOS data incorporate errors in the zero reference level that appear to be inherent in the processing procedure used to obtain the LOS vectors. The proposed method offers a solution to this problem. The harmonic spline program was converted from the VAX 11/780 to the Ridge 32C computer. A problem with the matrix inversion routine was solved, improving inversion of the data matrices used in the Optimum Estimate program for global Earth studies. The problem of obtaining a successful matrix inversion for a single rev supplemented by data for the two adjacent revs still remains.

  10. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation-diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and, if that is impossible, it subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy, even in cases with large tolerances.
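
    The "longest run of points within a tolerance" step generalizes beyond Bezier cubics. The sketch below is a loose illustration of that greedy search rather than the Chou-Piegl algorithm: it uses a least-squares straight line as a stand-in for the cubic segment fit, and the function names and the shrink-from-the-right search strategy are illustrative assumptions.

```python
def fit_line(seg):
    """Least-squares straight line a*x + b through the points in seg
    (a stand-in here for the Bezier cubic segment fit of the paper)."""
    n = len(seg)
    sx = sum(p[0] for p in seg)
    sy = sum(p[1] for p in seg)
    sxx = sum(p[0] * p[0] for p in seg)
    sxy = sum(p[0] * p[1] for p in seg)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def max_residual(f, seg):
    """Worst-case vertical deviation of the fitted line from the points."""
    a, b = f
    return max(abs(a * x + b - y) for x, y in seg)

def fit_longest_runs(points, tol, fit, error):
    """Greedy segmentation: repeatedly fit the longest prefix of the
    remaining points whose fit error stays within tol; adjacent
    segments share their join point."""
    segments = []
    start, n = 0, len(points)
    while start < n - 1:
        end = n  # try the whole remainder first, then shrink
        while True:
            f = fit(points[start:end])
            if error(f, points[start:end]) <= tol or end - start == 2:
                break
            end -= 1
        segments.append((start, end - 1, f))
        start = end - 1
    return segments
```

    On data lying exactly on two straight pieces, for example, the search recovers exactly two segments joined at the shared breakpoint.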

  11. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.

  12. An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations

    SciTech Connect

    Ma Xiang; Zabaras, Nicholas

    2009-05-01

    In recent years, there has been a growing interest in analyzing and quantifying the effects of random inputs in the solution of ordinary/partial differential equations. To this end, the spectral stochastic finite element method (SSFEM) is the most popular method due to its fast convergence rate. Recently, the stochastic sparse grid collocation method has emerged as an attractive alternative to SSFEM. It approximates the solution in the stochastic space using Lagrange polynomial interpolation. The collocation method requires only repetitive calls to an existing deterministic solver, similar to the Monte Carlo method. However, both the SSFEM and current sparse grid collocation methods utilize global polynomials in the stochastic space. Thus when there are steep gradients or finite discontinuities in the stochastic space, these methods converge very slowly or even fail to converge. In this paper, we develop an adaptive sparse grid collocation strategy using piecewise multi-linear hierarchical basis functions. Hierarchical surplus is used as an error indicator to automatically detect the discontinuity region in the stochastic space and adaptively refine the collocation points in this region. Numerical examples, especially for problems related to long-term integration and stochastic discontinuity, are presented. Comparisons with Monte Carlo and multi-element based random domain decomposition methods are also given to show the efficiency and accuracy of the proposed method.
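
    The hierarchical-surplus error indicator is easy to see in one dimension: the surplus at a newly inserted dyadic node is the difference between the function value there and the current interpolant, and it decays rapidly wherever the function is smooth. The following is a minimal 1D sketch of that idea only; the adaptive refinement and multi-dimensional sparse-grid machinery of the paper are not reproduced.

```python
def hat(x, center, h):
    """Piecewise-linear hierarchical basis function with support width h."""
    return max(0.0, 1.0 - abs(x - center) / h)

def hierarchical_interpolant(f, max_level):
    """Hierarchical piecewise-linear interpolant of f on [0, 1].

    Returns (terms, interp): terms is a list of (center, h, surplus)
    and interp evaluates the resulting interpolant.
    """
    terms = []

    def interp(x):
        return sum(w * hat(x, c, h) for c, h, w in terms)

    # level 0: the two boundary nodes, support width 1
    for c in (0.0, 1.0):
        terms.append((c, 1.0, f(c) - interp(c)))
    # higher levels: dyadic midpoints; the hierarchical surplus is the
    # interpolation defect of the coarser levels at the new node
    for level in range(1, max_level + 1):
        h = 0.5 ** level
        for k in range(1, 2 ** level, 2):
            c = k * h
            terms.append((c, h, f(c) - interp(c)))
    return terms, interp
```

    For the smooth test function f(x) = x^2, every level-l surplus equals -h^2 with h = 2^(-l), so an adaptive scheme thresholding |surplus| would stop refining smooth regions early; near a discontinuity the surpluses would not decay, flagging the region for further refinement.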

  13. ANALYSIS ON CENSORED QUANTILE RESIDUAL LIFE MODEL VIA SPLINE SMOOTHING

    PubMed Central

    Ma, Yanyuan; Wei, Ying

    2013-01-01

    We propose a general class of quantile residual life models, where a specific quantile of the residual life time, conditional on an individual having survived up to time t, is a function of certain covariates with their coefficients varying over time. The varying coefficients are assumed to be smooth unspecified functions of t. We propose to estimate the coefficient functions using spline approximation. Incorporating the spline representation directly into a set of unbiased estimating equations, we obtain a one-step estimation procedure, and we show that this leads to a uniformly consistent estimator. To obtain further computational simplification, we propose a two-step estimation approach in which we estimate the coefficients on a series of time points first, and follow this with spline smoothing. We compare the two methods in terms of their asymptotic efficiency and computational complexity. We further develop inference tools to test the significance of the covariate effect on residual life. The finite sample performance of the estimation and testing procedures is further illustrated through numerical experiments. We also apply the methods to a data set from a neurological study. PMID:24478565

  14. A Method of Estimating FRM PM10 Sampler Performance Characteristics Using Particle Size Analysis and Collocated TSP and PM10 Samplers

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In the US, regional air quality compliance with national ambient air quality standards (NAAQS) for PM10 is based on concentration measurements taken by federal reference method (FRM) PM10 samplers. The EPA specifies the performance characteristics of the FRM PM10 sampler by defining ranges for the p...

  15. Results of laser ranging collocations during 1983

    NASA Technical Reports Server (NTRS)

    Kolenkiewicz, R.

    1984-01-01

    The objective of laser ranging collocations is to compare the ability of two satellite laser ranging systems, located in the vicinity of one another, to measure the distance to an artificial Earth satellite in orbit over the sites. The similar measurement of this distance is essential before a new or modified laser system is deployed to worldwide locations in order to gather the data necessary to meet the scientific goals of the Crustal Dynamics Project. In order to be certain the laser systems are operating properly, they are periodically compared with each other. These comparisons or collocations are performed by locating the lasers side by side when they track the same satellite during the same time or pass. The data is then compared to make sure the lasers are giving essentially the same range results. Results of the three collocations performed during 1983 are given.

  16. White matter fiber tracking directed by interpolating splines and a methodological framework for evaluation.

    PubMed

    Losnegård, Are; Lundervold, Arvid; Hodneland, Erlend

    2013-01-01

    Image-based tractography of white matter (WM) fiber bundles in the brain using diffusion weighted MRI (DW-MRI) has become a useful tool in basic and clinical neuroscience. However, proper tracking is challenging due to the anatomical complexity of fiber pathways, the coarse resolution of clinically applicable whole-brain in vivo imaging techniques, and the difficulties associated with verification. In this study we introduce a new tractography algorithm using splines (denoted Spline). Spline reconstructs smooth fiber trajectories iteratively, in contrast to most other tractography algorithms that create piecewise linear fiber tract segments, followed by spline fitting. Using DW-MRI recordings from eight healthy elderly people participating in a longitudinal study of cognitive aging, we compare our Spline algorithm to two state-of-the-art tracking methods from the TrackVis software suite. The comparison is done quantitatively using diffusion metrics (fractional anisotropy, FA), with both (1) tract averaging, (2) longitudinal linear mixed-effects model fitting, and (3) detailed along-tract analysis. Further validation is done on recordings from a diffusion hardware phantom, mimicking a coronal brain slice, with a known ground truth. Results from the longitudinal aging study showed high sensitivity of Spline tracking to individual aging patterns of mean FA when combined with linear mixed-effects modeling, moderately strong differences in the along-tract analysis of specific tracts, whereas the tract-averaged comparison using simple linear OLS regression revealed less differences between Spline and the two other tractography algorithms. In the brain phantom experiments with a ground truth, we demonstrated improved tracking ability of Spline compared to the two reference tractography algorithms being tested. PMID:23898264

  18. Smoothing spline analysis of variance (ANOVA) for tongue curve comparison

    NASA Astrophysics Data System (ADS)

    Davidson, Lisa

    2005-09-01

    Ultrasound imaging of the tongue is an increasingly common technique in speech production research. One persistent issue regarding ultrasound data is how to quantify them. Researchers often want to determine whether the tongue shape for an articulation under two different conditions (e.g., consonants in phrase-initial versus phrase-medial position) is the same or different. To address this issue, a method for comparing tongue curves using a smoothing spline ANOVA has been developed (SSANOVA) [Gu, 2002, Smoothing spline ANOVA models]. The SSANOVA is a technique for comparing curve shapes (splines) for two sets of data to determine whether there are significant differences between the curve types. Data sets contain 8-10 repetitions of each of the relevant tongue curves being compared. If the interaction term of the SSANOVA model is statistically significant, then the groups have different shapes. Since the interaction may be significant even if only a small section of the curves is different (i.e., the tongue root is the same, but the tip of one group is raised), Bayesian confidence intervals are used to determine which sections of the curves are statistically different. SSANOVAs are illustrated with some data comparing obstruents produced in word-final and word-medial coda position.

  19. Gauging the Effects of Exercises on Verb-Noun Collocations

    ERIC Educational Resources Information Center

    Boers, Frank; Demecheleer, Murielle; Coxhead, Averil; Webb, Stuart

    2014-01-01

    Many contemporary textbooks for English as a foreign language (EFL) and books for vocabulary study contain exercises with a focus on collocations, with verb-noun collocations (e.g. "make a mistake") being particularly popular as targets for collocation learning. Common exercise formats used in textbooks and other pedagogic materials…

  20. Knowledge of English Collocations: An Analysis of Taiwanese EFL Learners.

    ERIC Educational Resources Information Center

    Huang, Li-Szu

    This study investigated Taiwanese English as a Foreign Language (EFL) students' knowledge of English collocations and the collocational errors they made. The subjects were 60 students from a college in Taiwan. The research instrument was a self-designed Simple Completion Test that measured students' knowledge of four types of lexical collocations:

  2. Is "Absorb Knowledge" an Improper Collocation?

    ERIC Educational Resources Information Center

    Su, Yujie

    2010-01-01

    Collocation is practically very tough for Chinese English learners. The main reason lies in the fact that English and Chinese belong to two distinct language systems. And the deeper reason is that learners tend to develop different metaphorical concepts in accordance with distinct ways of thinking in Chinese. The paper, taking "absorb

  3. Achieving high data reduction with integral cubic B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.

    1993-01-01

    During geometry processing, tangent directions at the data points are frequently readily available from the computation process that generates the points. It is desirable to utilize this information to improve the accuracy of curve fitting and to improve data reduction. This paper presents a curve fitting method which utilizes both position and tangent direction data. This method produces G(exp 1) non-rational B-spline curves. From the examples, the method demonstrates very good data reduction rates while maintaining high accuracy in both position and tangent direction.

  4. C^1 spline wavelets on triangulations

    NASA Astrophysics Data System (ADS)

    Jia, Rong-Qing; Liu, Song-Tao

    2008-03-01

    In this paper we investigate spline wavelets on general triangulations. In particular, we are interested in C^1 wavelets generated from piecewise quadratic polynomials. By using the Powell-Sabin elements, we set up a nested family of spaces of C^1 quadratic splines, which are suitable for multiresolution analysis of Besov spaces. Consequently, we construct C^1 wavelet bases on general triangulations and give explicit expressions for the wavelets on the three-direction mesh. A general theory is developed so as to verify the global stability of these wavelets in Besov spaces. The wavelet bases constructed in this paper will be useful for numerical solutions of partial differential equations.

  5. Spline-Screw Multiple-Rotation Mechanism

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Mechanism functions like combined robotic gripper and nut runner. Spline-screw multiple-rotation mechanism related to spline-screw payload-fastening system described in (GSC-13454). Incorporated as subsystem in alternative version of system. Mechanism functions like combination of robotic gripper and nut runner; provides both secure grip and rotary actuation of other parts of system. Used in system in which no need to make or break electrical connections to payload during robotic installation or removal of payload. More complicated version needed to make and break electrical connections. Mechanism mounted in payload.

  6. A fully spectral collocation approximation for multi-dimensional fractional Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.; Abdelkawy, M. A.

    2015-08-01

    A shifted Legendre collocation method in two consecutive steps is developed and analyzed to numerically solve one- and two-dimensional time fractional Schrödinger equations (TFSEs) subject to initial-boundary and non-local conditions. The first step depends mainly on the shifted Legendre Gauss-Lobatto collocation (SL-GL-C) method for spatial discretization; an expansion in a series of shifted Legendre polynomials for the approximate solution and its spatial derivatives occurring in the TFSE is investigated. In addition, the Legendre-Gauss-Lobatto quadrature rule is established to treat the nonlocal conservation conditions. Thereby, the expansion coefficients are then determined by reducing the TFSE with its nonlocal conditions to a system of fractional differential equations (SFDEs) for these coefficients. The second step is to propose a shifted Legendre Gauss-Radau collocation (SL-GR-C) scheme, for temporal discretization, to reduce such a system into a system of algebraic equations, which is far easier to solve. The proposed collocation scheme, both in temporal and spatial discretizations, is successfully extended to solve the two-dimensional TFSE. Numerical results are carried out to confirm the spectral accuracy and efficiency of the proposed algorithms. By selecting relatively few Legendre Gauss-Lobatto and Gauss-Radau collocation nodes, we are able to get very accurate approximations, demonstrating the utility and high accuracy of the new approach over other numerical methods.
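
    The abstract does not spell out how the Gauss-Lobatto nodes themselves are obtained. One standard recipe, sketched here in plain Python as an assumption rather than a reproduction of the authors' code, takes the endpoints plus the roots of P'_N (the derivative of the degree-N Legendre polynomial), found by Newton iteration from Chebyshev initial guesses.

```python
import math

def legendre_and_deriv(N, x):
    """Return (P_N(x), P'_N(x)) via the three-term recurrence and the
    identity P'_N = N (x P_N - P_{N-1}) / (x^2 - 1), valid for |x| != 1."""
    p0, p1 = 1.0, x
    for k in range(2, N + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    dp = N * (x * p1 - p0) / (x * x - 1.0)
    return p1, dp

def lgl_nodes(N):
    """Legendre-Gauss-Lobatto nodes on [-1, 1]: the endpoints plus the
    roots of P'_N, found by Newton iteration."""
    nodes = [-1.0]
    for j in range(N - 1, 0, -1):
        x = math.cos(math.pi * j / N)  # Chebyshev initial guess
        for _ in range(50):
            p, dp = legendre_and_deriv(N, x)
            # P''_N from the Legendre ODE: (1 - x^2) P'' = 2 x P' - N(N+1) P
            d2p = (2 * x * dp - N * (N + 1) * p) / (1 - x * x)
            step = dp / d2p
            x -= step
            if abs(step) < 1e-14:
                break
        nodes.append(x)
    nodes.append(1.0)
    return nodes
```

    For N = 4, this returns the endpoints together with 0 and ±sqrt(3/7), the classical five-point Lobatto rule abscissae.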

  7. A space-time collocation scheme for modified anomalous subdiffusion and nonlinear superdiffusion equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.

    2016-01-01

    This paper reports a new spectral collocation technique for solving time-space modified anomalous subdiffusion equation with a nonlinear source term subject to Dirichlet and Neumann boundary conditions. This model equation governs the evolution for the probability density function that describes anomalously diffusing particles. Anomalous diffusion is ubiquitous in physical and biological systems where trapping and binding of particles can occur. A space-time Jacobi collocation scheme is investigated for solving such problem. The main advantage of the proposed scheme is that, the shifted Jacobi Gauss-Lobatto collocation and shifted Jacobi Gauss-Radau collocation approximations are employed for spatial and temporal discretizations, respectively. Thereby, the problem is successfully reduced to a system of algebraic equations. The numerical results obtained by this algorithm have been compared with various numerical methods in order to demonstrate the high accuracy and efficiency of the proposed method. Indeed, for relatively limited number of Gauss-Lobatto and Gauss-Radau collocation nodes imposed, the absolute error in our numerical solutions is sufficiently small. The results have been compared with other techniques in order to demonstrate the high accuracy and efficiency of the proposed method.

  8. Spline-based joint gravity and normal mode inversion

    NASA Astrophysics Data System (ADS)

    Michel, V.; Berkel, P.

    2009-12-01

    The determination of the mass density function of the Earth from gravity anomalies suffers from a serious non-uniqueness problem. In particular, structures below the Earth's crust cannot be determined from gravity data. However, normal mode data show a certain sensitivity with respect to structures in the Earth's mantle, although the data quality is essentially worse in comparison to gravity data. Therefore, an appropriate balancing of both data types is required for a combined inversion such that gravity data do not dominate the result too much. Moreover, the problem is ill-posed and a null space still remains, but is reduced. A new spline-based method, a further development of approaches that were already successfully applied to, e.g., pure traveltime inversion or pure gravity inversion, is presented for the described problem. The method is explained in a comprehensive way and some numerical results are presented and critically evaluated. [1] P. Berkel: Multiscale Methods for the Combined Inversion of Normal Mode and Gravity Variations, PhD thesis, submitted 2009. [2] P. Berkel, V. Michel: On mathematical aspects of a combined inversion of gravity and normal mode variations by a spline method, preprint, Schriften zur Funktionalanalysis und Geomathematik, 41 (2008). [3] V. Michel, A.S. Fokas: A unified approach to various techniques for the non-uniqueness of the inverse gravimetric problem and wavelet-based methods, Inverse Problems, 24 (2008), 045019 (25pp). [4] V. Michel, K. Wolf: Numerical aspects of a spline-based multiresolution recovery of the harmonic mass density out of gravity functionals, Geophysical Journal International, 173 (2008), 1-16.

  9. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in its governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.

  10. Spline smoothing of histograms by linear programming

    NASA Technical Reports Server (NTRS)

    Bennett, J. O.

    1972-01-01

    An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean space approximations to the graph of the histogram using central B-splines as basis elements are obtained by linear programming. The approximating function has area one and is nonnegative.
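
    The two properties of the B-spline basis that the construction relies on, nonnegativity and unit area, are quick to check numerically with the Cox-de Boor recursion for the cardinal B-spline (the central B-spline is the same function up to a shift of its support). The linear-programming step itself is not reproduced here.

```python
def cardinal_bspline(n, x):
    """Uniform (cardinal) B-spline of degree n with support [0, n + 1],
    evaluated by the Cox-de Boor recursion. The central B-spline is this
    function shifted to be symmetric about 0."""
    if n == 0:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return ((x / n) * cardinal_bspline(n - 1, x)
            + ((n + 1 - x) / n) * cardinal_bspline(n - 1, x - 1.0))
```

    The basis function is nonnegative everywhere and integrates to one; for example, the cubic (n = 3) case peaks at 2/3 in the middle of its support, so any nonnegative combination of such basis elements with unit total weight again has area one.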

  11. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.

  12. Trigonometric quadratic B-spline subdomain Galerkin algorithm for the Burgers' equation

    NASA Astrophysics Data System (ADS)

    Ay, Buket; Dag, Idris; Gorgulu, Melis Zorsahin

    2015-12-01

    A variant of the subdomain Galerkin method has been set up to find numerical solutions of the Burgers' equation. The approximate function consists of a combination of trigonometric B-splines. Integration of the Burgers' equation has been achieved by aid of the subdomain Galerkin method based on the trigonometric B-splines as approximate functions. The resulting first-order ordinary differential system has been converted into an iterative algebraic equation by use of the Crank-Nicolson method at two successive time levels. The suggested algorithm is tested on some well-known problems for the Burgers' equation.

  13. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff error. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
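
    One widely used remedy for the inaccurate matrix entries, in the spirit of the accurate-construction methods discussed in the abstract though not necessarily the paper's exact scheme, is the "negative sum trick": compute the off-diagonal entries from the textbook formula and choose each diagonal entry so that the row annihilates constants (since the derivative of a constant is zero, each exact row sums to zero).

```python
import math

def cheb_diff_matrix(N):
    """Chebyshev collocation differentiation matrix on the Gauss-Lobatto
    points x_j = cos(pi*j/N), j = 0..N. Off-diagonal entries follow the
    textbook formula; diagonal entries use the "negative sum trick" so
    every row maps the constant function to zero."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D
```

    Applied to a polynomial of degree at most N, the resulting matrix-vector product reproduces the exact derivative up to roundoff, e.g. D applied to the samples of x^2 returns 2x at every node.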

  14. Applications of the spline filter for areal filtration

    NASA Astrophysics Data System (ADS)

    Tong, Mingsi; Zhang, Hao; Ott, Daniel; Chu, Wei; Song, John

    2015-12-01

    This paper proposes a general use isotropic areal spline filter. This new areal spline filter can achieve isotropy by approximating the transmission characteristic of the Gaussian filter. It can also eliminate the effect of void areas using a weighting factor, and resolve end-effect issues by applying new boundary conditions, which replace the first order finite difference in the traditional spline formulation. These improvements make the spline filter widely applicable to 3D surfaces and extend the applications of the spline filter in areal filtration.

  15. Spherical DCB-spline surfaces with hierarchical and adaptive knot insertion.

    PubMed

    Cao, Juan; Li, Xin; Chen, Zhonggui; Qin, Hong

    2012-08-01

    This paper develops a novel surface fitting scheme for automatically reconstructing a genus-0 object into a continuous parametric spline surface. A key contribution for making such a fitting method both practical and accurate is our spherical generalization of the Delaunay configuration B-spline (DCB-spline), a new non-tensor-product spline. In this framework, we efficiently compute Delaunay configurations on sphere by the union of two planar Delaunay configurations. Also, we develop a hierarchical and adaptive method that progressively improves the fitting quality by new knot-insertion strategies guided by surface geometry and fitting error. Within our framework, a genus-0 model can be converted to a single spherical spline representation whose root mean square error is tightly bounded within a user-specified tolerance. The reconstructed continuous representation has many attractive properties such as global smoothness and no auxiliary knots. We conduct several experiments to demonstrate the efficacy of our new approach for reverse engineering and shape modeling. PMID:21931175

  16. Robust regression of scattered data with adaptive spline-wavelets.

    PubMed

    Castaño, Daniel; Kunoth, Angela

    2006-06-01

    A coarse-to-fine data fitting algorithm for irregularly spaced data based on boundary-adapted adaptive tensor-product semi-orthogonal spline-wavelets was proposed in Castaño and Kunoth, 2003. This method was extended in Castaño and Kunoth, 2005 to include regularization in terms of Sobolev and Besov norms. In this paper, we develop within this least-squares approach some robust statistical estimators to handle outliers in the data. Our wavelet scheme yields a numerically fast and reliable way to detect outliers. PMID:16764286

  17. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
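
    The paper's F-error estimator is built from the difference of two spline fits on different meshes; a loose illustration of that idea (not the paper's exact estimator) using SciPy's least-squares splines, with all names and mesh sizes chosen here:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
signal = np.sin(2 * np.pi * t)
data = signal + 0.05 * rng.normal(size=t.size)

def ls_spline(t, y, n_intervals):
    # least-squares cubic spline with equally spaced interior knots
    knots = np.linspace(0.0, 1.0, n_intervals + 1)[1:-1]
    return LSQUnivariateSpline(t, y, knots, k=3)

coarse = ls_spline(t, data, 8)(t)
fine = ls_spline(t, data, 16)(t)
# difference of the two fits: a rough proxy for the signal-dependent error
f_error_proxy = coarse - fine
```

    The difference is dominated by the coarse mesh's truncation error, which is the quantity the paper's estimator targets.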

  18. The Effect of Taper Angle and Spline Geometry on the Initial Stability of Tapered, Splined Modular Titanium Stems.

    PubMed

    Pierson, Jeffery L; Small, Scott R; Rodriguez, Jose A; Kang, Michael N; Glassman, Andrew H

    2015-07-01

    Design parameters affecting initial mechanical stability of tapered, splined modular titanium stems (TSMTSs) are not well understood. Furthermore, there is considerable variability in contemporary designs. We asked if spline geometry and stem taper angle could be optimized in TSMTS to improve mechanical stability to resist axial subsidence and increase torsional stability. Initial stability was quantified with stems of varied taper angle and spline geometry implanted in a foam model replicating 2 cm diaphyseal engagement. Increased taper angle and a broad spline geometry exhibited significantly greater axial stability (+21%-269%) than other design combinations. Neither taper angle nor spline geometry significantly altered initial torsional stability. PMID:25754255

  19. Adaptive Predistortion Using Cubic Spline Nonlinearity Based Hammerstein Modeling

    NASA Astrophysics Data System (ADS)

    Wu, Xiaofang; Shi, Jianghong

    In this paper, a new Hammerstein predistorter model for power amplifier (PA) linearization is proposed. The key feature of the model is that cubic splines, instead of conventional high-order polynomials, are utilized as the static nonlinearities, because splines can represent hard nonlinearities accurately while circumventing the numerical instability problem. Furthermore, according to the amplifier's AM/AM and AM/PM characteristics, real-valued cubic spline functions are utilized to compensate the nonlinear distortion of the amplifier, and the following finite impulse response (FIR) filters are utilized to eliminate the memory effects of the amplifier. In addition, the identification algorithm of the Hammerstein predistorter is discussed. The predistorter is implemented on the indirect learning architecture, and the separable nonlinear least squares (SNLS) Levenberg-Marquardt algorithm is adopted because the separation method reduces the dimension of the nonlinear search space and thus greatly simplifies the identification procedure. However, the convergence performance of the iterative SNLS algorithm is sensitive to the initial estimate, so an effective normalization strategy is presented to solve this problem. Simulation experiments were carried out on a single-carrier WCDMA signal. Results show that, compared to conventional polynomial predistorters, the proposed Hammerstein predistorter has higher linearization performance when the PA is near saturation and comparable linearization performance when the PA is mildly nonlinear. Furthermore, the proposed predistorter is numerically more stable in all input back-off cases. The results also demonstrate the validity of the convergence scheme.
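
    The memory part (FIR filters) and the SNLS identification are beyond a short sketch, but the static spline idea can be illustrated: fit a cubic spline to the inverse of an AM/AM curve and cascade it with the amplifier so the composite is nearly linear. SciPy is assumed; the Rapp-model `pa_amam` and all parameter values are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def pa_amam(r, rsat=1.0, p=2.0):
    """Hypothetical memoryless PA AM/AM characteristic (Rapp model)."""
    return r / (1.0 + (r / rsat) ** (2 * p)) ** (1.0 / (2 * p))

# Static predistorter: a cubic spline through (output, input) pairs,
# i.e. the inverse of the AM/AM curve on its monotone range.
r_in = np.linspace(0.0, 0.9, 50)
predistort = CubicSpline(pa_amam(r_in), r_in)

# Cascade predistorter + PA: the composite should be close to identity
r = np.linspace(0.05, 0.6, 20)
linearized = pa_amam(predistort(r))
```

    The spline inverse avoids the ill-conditioning that a high-order polynomial fit of a hard (saturating) nonlinearity would exhibit.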

  20. Spline for blade grids design

    NASA Astrophysics Data System (ADS)

    Korshunov, Andrei; Shershnev, Vladimir; Korshunova, Ksenia

    2015-08-01

    Methods of designing blade grids of power machines, such as equal-thickness shapes built on a middle-line arc, or methods based on target stress distributions, were developed long ago; they are well described and still in use. Science and technology have moved far since then, and the laboriousness of experimental research involving unique equipment requires the development of new robust and flexible design methods to determine the optimal geometry of the flow passage. This investigation provides a simple and universal method of designing blades which, compared to the currently used methods, requires significantly less input data while still providing accurate results. The described method is purely analytical for both the concave and convex sides of the blade, and therefore allows describing the curve behavior along the flow path at any point. Compared with the blade grid designs currently used in industry, the geometric parameters of designs constructed with this method show a maximum deviation below 0.4%.

  1. Optimized Construction of Biorthogonal Spline-Wavelets

    NASA Astrophysics Data System (ADS)

    Černá, Dana; Finěk, Václav

    2008-09-01

    The paper is concerned with the construction of wavelet bases on the interval derived from B-splines. The resulting bases generate multiresolution analyses on the unit interval with the desired number of vanishing wavelet moments for primal and dual wavelets. Inner wavelets are translated and dilated versions of well-known wavelets designed by Cohen, Daubechies, Feauveau [5], while the construction of boundary wavelets is along the lines of [6]. The disadvantage of the popular bases from [6] is their poor conditioning, which causes problems in practical applications. Some modifications leading to better-conditioned bases were proposed in [1, 7, 8, 9, 10]. In this contribution, we further improve the condition of spline-wavelet bases on the interval. Quantitative properties of these bases are presented.

  2. Spline-Screw Payload-Fastening System

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Payload handed off securely between robot and vehicle or structure. Spline-screw payload-fastening system includes mating female and male connector mechanisms. Clockwise (or counter-clockwise) rotation of splined male driver on robotic end effector causes connection between robot and payload to tighten (or loosen) and simultaneously causes connection between payload and structure to loosen (or tighten). Includes mechanisms like those described in "Tool-Changing Mechanism for Robot" (GSC-13435) and "Self-Aligning Mechanical and Electrical Coupling" (GSC-13430). Designed for use in outer space, also useful on Earth in applications needed for secure handling and secure mounting of equipment modules during storage, transport, and/or operation. Particularly useful in machine or robotic applications.

  3. Modeling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines

    NASA Astrophysics Data System (ADS)

    Kammann, P.

    2005-12-01

    Our intention is the modeling of seismic wave propagation from displacement measurements by seismographs at the Earth's surface. The elastic behaviour of the Earth is usually described by the Cauchy-Navier equation. A system of fundamental solutions for the Fourier-transformed Cauchy-Navier equation is given by the Hansen vectors L, M and N. We apply an inverse Fourier transform to obtain an orthonormal function system depending on time and space. By means of this system we construct certain splines, which are then used for interpolating the given data. Compared to polynomial interpolation, splines have the advantage that they minimize some curvature measure and are, therefore, smoother. First, we test this method on a synthetic wave function. Afterwards, we apply it to realistic earthquake data. (P. Kammann, Modelling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines, Diploma Thesis, Geomathematics Group, Department of Mathematics, University of Kaiserslautern, 2005)

  4. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data to derive the best-fitting coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a smoothing method for generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are applied to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, in which the accuracy of models and forecasts is the main concern of the industry. The original idea of P-splines is extended here to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. This extension constructs the best fitted surface and provides sensible predictions of the underlying mortality rate in the Malaysian case.
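
    A one-dimensional version of the Eilers-Marx construction (B-spline basis plus a difference penalty on the coefficients) can be sketched as follows; the paper's two-dimensional age-year extension is not attempted. SciPy ≥ 1.8 is assumed for `BSpline.design_matrix`; all names and parameter values here are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0, diff_order=2):
    """1-D P-spline sketch (Eilers & Marx style): B-spline least squares
    with a difference roughness penalty on the coefficients."""
    xmin, xmax = x.min(), x.max()
    inner = np.linspace(xmin, xmax, n_knots)
    h = inner[1] - inner[0]
    # equally spaced knots with the usual end-padding
    knots = np.concatenate([inner[0] - h * np.arange(degree, 0, -1),
                            inner,
                            inner[-1] + h * np.arange(1, degree + 1)])
    n_coef = len(knots) - degree - 1
    B = BSpline.design_matrix(x, knots, degree).toarray()
    D = np.diff(np.eye(n_coef), n=diff_order, axis=0)  # difference penalty
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return BSpline(knots, coef, degree)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 300)
y = np.exp(-x) * np.sin(4 * np.pi * x) + 0.1 * rng.normal(size=x.size)
smooth = pspline_fit(x, y, lam=0.1)
```

    The penalty weight `lam` plays the role of the smoothing parameter; the two-dimensional case of the paper replaces B with a tensor-product basis and penalizes differences along both age and year.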

  5. High-quality rendering of quartic spline surfaces on the GPU.

    PubMed

    Reis, Gerd; Zeilfelder, Frank; Hering-Bertram, Martin; Farin, Gerald; Hagen, Hans

    2008-01-01

    We present a novel GPU-based algorithm for high-quality rendering of bivariate spline surfaces. An essential difference to the known methods for rendering graph surfaces is that we use quartic smooth splines on triangulations rather than triangular meshes. Our rendering approach is direct in the sense that we do not use an intermediate tessellation but rather compute ray-surface intersections (by solving quartic equations numerically) as well as surface normals (by using Bernstein-Bézier techniques) for Phong illumination on the GPU. Inaccurate shading and artifacts appearing for triangular tessellated surfaces are completely avoided. Level of detail is automatic since all computations are done on a per-fragment basis. We compare three different (quasi-) interpolating schemes for uniformly sampled gridded data, which differ in the smoothness and the approximation properties of the splines. The results show that our hardware-based renderer leads to visualizations (including texturing, multiple light sources, environment mapping, etc.) of highest quality. PMID:18599922

  6. B-spline design of digital FIR filter using evolutionary computation techniques

    NASA Astrophysics Data System (ADS)

    Swain, Manorama; Panda, Rutuparna

    2011-10-01

    In the forthcoming era, digital filters are becoming a true replacement for analog filter designs. In this paper we examine a design method for FIR filters using global search optimization techniques known as evolutionary computation, via genetic algorithms (GA) and bacterial foraging, with the filter design treated as an optimization problem. An effort is made to design maximally flat filters using a generalized B-spline window. The key to our success is the fact that the bandwidth of the filter response can be modified by changing tuning parameters incorporated within the B-spline function. A direct approach has been deployed to design B-spline-window-based FIR digital filters. Four parameters (order, width, length and tuning parameter) have been optimized using GA and EBFS. It is observed that the desired response can be obtained with lower-order FIR filters with optimal width and tuning parameters.
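
    The GA/EBFS optimization loop is not reproduced, but the B-spline window itself is easy to illustrate: an order-n B-spline window is the n-fold self-convolution of a rectangular window, and applying it to a truncated sinc gives a lowpass FIR design. This sketch and its parameter choices are assumptions, not the paper's setup.

```python
import numpy as np

def bspline_window(n_taps, order):
    """B-spline window: an 'order'-fold self-convolution of a rectangular
    window (order=1 rectangular, order=2 triangular, higher orders
    approach a Gaussian), resampled to n_taps points, unit peak."""
    base = np.ones(max(n_taps // order, 1))
    w = base
    for _ in range(order - 1):
        w = np.convolve(w, base)
    w = np.interp(np.linspace(0, len(w) - 1, n_taps),
                  np.arange(len(w)), w)
    return w / w.max()

def windowed_sinc_lowpass(n_taps, cutoff, order=3):
    # cutoff: normalized frequency in cycles/sample (0..0.5)
    n = np.arange(n_taps) - (n_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * bspline_window(n_taps, order)
    return h / h.sum()  # normalize to unit DC gain

h = windowed_sinc_lowpass(63, cutoff=0.1)
```

    Raising the order widens the main lobe (broader transition band) while lowering the sidelobes (deeper stopband), which is the trade-off the tuning parameters control.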

  7. Marginal longitudinal semiparametric regression via penalized splines.

    PubMed

    Kadiri, M Al; Carroll, R J; Wand, M P

    2010-08-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models. PMID:21037941

  8. N-dimensional B-spline surface estimated by lofting for locally improving IRI

    NASA Astrophysics Data System (ADS)

    Koch, K.; Schmidt, M.

    2011-03-01

    N-dimensional surfaces are defined by tensor products of B-spline basis functions. To estimate the unknown control points of these B-spline surfaces, the lofting method, also called the skinning method, based on cross-sectional curve fits is applied. It is shown by an analytical proof, and confirmed numerically by the example of a four-dimensional surface, that the results of the lofting method agree with those of the simultaneous estimation of the unknown control points. The numerical complexity of estimating v^n control points by the lofting method is O(v^(n+1)), while it is O(v^(3n)) for the simultaneous estimation. It is also shown that a B-spline surface estimated by a simultaneous estimation can be extended to higher dimensions by the lofting method, thus saving computer time. An application of this method is the local improvement of the International Reference Ionosphere (IRI), e.g. by the slant total electron content (STEC) obtained from dual-frequency observations of the Global Navigation Satellite System (GNSS). Three-dimensional B-spline surfaces at different time epochs have to be determined by simultaneous estimation of the control points for this improvement. A four-dimensional representation in space and time of the electron density of the ionosphere is desirable; it can be obtained by the lofting method, which takes less computer time than determining the four-dimensional surface solely by a simultaneous estimation.
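
    For gridded data the equivalence the abstract proves can be checked directly in a small two-dimensional case: fitting cross-sectional curves first and then fitting their control points (lofting) gives the same control net as the simultaneous least-squares fit. SciPy ≥ 1.8 is assumed for `BSpline.design_matrix`; the grid sizes and test surface are arbitrary choices.

```python
import numpy as np
from scipy.interpolate import BSpline

def open_uniform_knots(a, b, n_basis, k=3):
    """Clamped knot vector yielding n_basis degree-k B-splines on [a, b]."""
    inner = np.linspace(a, b, n_basis - k + 1)
    return np.concatenate([[a] * k, inner, [b] * k])

# gridded data Z on a (u, v) grid
u = np.linspace(0, 1, 40)
v = np.linspace(0, 1, 50)
U, V = np.meshgrid(u, v, indexing="ij")
Z = np.sin(2 * np.pi * U) * np.cos(np.pi * V)

Bu = BSpline.design_matrix(u, open_uniform_knots(0, 1, 10), 3).toarray()
Bv = BSpline.design_matrix(v, open_uniform_knots(0, 1, 12), 3).toarray()

# Lofting: fit curves along u for each v-section, then fit the
# resulting control points along v.
C1 = np.linalg.lstsq(Bu, Z, rcond=None)[0]            # 10 x 50
C_loft = np.linalg.lstsq(Bv, C1.T, rcond=None)[0].T   # 10 x 12

# Simultaneous least squares min ||Z - Bu C Bv^T||_F has the closed
# form C = Bu^+ Z (Bv^+)^T when both design matrices have full rank.
C_sim = np.linalg.pinv(Bu) @ Z @ np.linalg.pinv(Bv).T
```

    The two control nets coincide, which is the content of the abstract's analytical proof for the gridded full-rank case.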

  9. Incorporating Corpus Technology to Facilitate Learning of English Collocations in a Thai University EFL Writing Course

    ERIC Educational Resources Information Center

    Chatpunnarangsee, Kwanjira

    2013-01-01

    The purpose of this study is to explore ways of incorporating web-based concordancers for the purpose of teaching English collocations. A mixed-methods design utilizing a case study strategy was employed to uncover four specific dimensions of corpus use by twenty-four students in two classroom sections of a writing course at a university in

  10. On the spline-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a spline-based wavelet basis is constructed. Given an n-th order spline basis, it is proved that the differentiation matrix is accurate to order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.

  11. Development and flight tests of vortex-attenuating splines

    NASA Technical Reports Server (NTRS)

    Hastings, E. C., Jr.; Patterson, J. C., Jr.; Shanks, R. E.; Champine, R. A.; Copeland, W. L.; Young, D. C.

    1975-01-01

    The ground tests and full-scale flight tests conducted during development of the vortex-attenuating spline are described. The flight tests were conducted using a vortex generating aircraft with and without splines; a second aircraft was used to probe the vortices generated in both cases. The results showed that splines significantly reduced the vortex effects, but resulted in some noise and climb performance penalties on the generating aircraft.

  12. High-speed implementation of nonuniform rational B-splines

    NASA Astrophysics Data System (ADS)

    Silbermann, Martine J.

    1990-08-01

    Nonuniform rational B-splines (NURBs) are the basis functions used to represent both free-form curves and surfaces and precise quadric primitives such as conics. NURBs are defined as ratios of linear combinations of nonuniform B-spline functions. In this paper, we present a new high-speed algorithm for the computation of nonuniform rational spline curves. We demonstrate this technique on a simple example and provide a computational complexity analysis.
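
    The definition "ratios of linear combinations of nonuniform B-spline functions" can be made concrete with the textbook quadratic NURBS representation of a quarter circle, evaluated as a quotient of two B-splines; this is a generic illustration, not the paper's high-speed algorithm.

```python
import numpy as np
from scipy.interpolate import BSpline

# Quadratic NURBS quarter circle: control points, weights, clamped knots
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2) / 2, 1.0])
t = np.array([0, 0, 0, 1, 1, 1], dtype=float)

def nurbs_eval(s):
    num = BSpline(t, w[:, None] * P, 2)(s)  # sum_i w_i P_i N_i(s)
    den = BSpline(t, w, 2)(s)               # sum_i w_i N_i(s)
    return num / den[..., None]

s = np.linspace(0.0, 1.0, 25)
pts = nurbs_eval(s)
radii = np.linalg.norm(pts, axis=1)
```

    The rational form reproduces the circular arc exactly, which no polynomial B-spline of any degree can do; this is why NURBS handle "precise quadric primitives" alongside free-form shapes.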

  13. Classification by means of B-spline potential functions with applications to remote sensing

    NASA Technical Reports Server (NTRS)

    Bennett, J. O.; Defigueiredo, R. J. P.; Thompson, J. R.

    1974-01-01

    A method is presented for using B-splines as potential functions in the estimation of likelihood functions (probability density functions conditioned on pattern classes), or the resulting discriminant functions. The consistency of this technique is discussed. Experimental results of using the likelihood functions in the classification of remotely sensed data are given.

  14. Construction of Wavelets Basis for the Fibonacci Chain via the Spline Functions of Various Orders

    NASA Astrophysics Data System (ADS)

    Andrle, Miroslav

    2002-06-01

    We present wavelets of class C^n(R) living on a sequence of aperiodic discretizations of R known as the Fibonacci chain, constructed via spline functions. The construction method is based on an algorithm published by G. Bernau. The corresponding multiresolution analysis is defined and numerical examples of linear scaling functions and wavelets are shown.

  15. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post-space-shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely long run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation, intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
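
    As a minimal illustration of collocation for boundary-value problems (far simpler than the trajectory-optimization setting above), SciPy's `solve_bvp`, which implements a fourth-order collocation scheme, can solve a two-point problem whose exact solution is known:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Solve y'' = -y with y(0) = 0, y(pi/2) = 1; exact solution is sin(t).
def rhs(t, y):
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

t = np.linspace(0.0, np.pi / 2, 11)
y0 = np.zeros((2, t.size))          # trivial initial guess
sol = solve_bvp(rhs, bc, t, y0)
```

    Trajectory-optimization collocation tools discretize the dynamics the same way, but add controls and an objective, turning the defect equations into constraints of a nonlinear program.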

  16. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    SciTech Connect

    Ruberti, M.; Averbukh, V.; Decleva, P.

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that cross-section features that pose a challenge for Gaussian basis calculations, such as Cooper minima and high-energy tails, are reproduced by the B-spline ADC in very good agreement with experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  17. Image data compression with three-directional splines

    NASA Astrophysics Data System (ADS)

    Charina, M.; Conti, Costanza; Jetter, Kurt

    2000-12-01

    In this paper we consider a new method for image data compression. It is based on three-directional spline functions of low degree, viz. piecewise constant functions and piecewise cubic C1 functions. In the first case, a Haar-wavelet-type decomposition can be derived and combined with standard thresholding techniques. In the second case, due to the fact that the spline basis is given by convolution products, the wavelet decomposition and thresholding can be computed on one factor of the convolution product only. Performance of the proposed method is discussed in Section 3, where the reconstructed pictures are compared with those produced by the analogous decomposition methods provided by the MATLAB wavelet toolbox.

  18. The Effect of Grouping and Presenting Collocations on Retention

    ERIC Educational Resources Information Center

    Akpinar, Kadriye Dilek; Bardaki, Mehmet

    2015-01-01

    The aim of this study is two-fold. Firstly, it attempts to determine the role of presenting collocations by organizing them based on (i) the keyword, (ii) topic related and (iii) grammatical aspect on retention of collocations. Secondly, it investigates the relationship between participants' general English proficiency and the presentation types

  19. A Study of Strategy Use in Producing Lexical Collocations.

    ERIC Educational Resources Information Center

    Liu, Candi Chen-Pin

    This study examined strategy use in producing lexical collocations among freshman English majors at the Chinese Culture University. Divided into two groups by English writing proficiency, students completed three tasks: a collocation test, an optimal revision task, and a task-based structured questionnaire regarding their actions and mental

  20. Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks

    ERIC Educational Resources Information Center

    Menon, Sujatha; Mukundan, Jayakaran

    2012-01-01

    This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based

  1. The Repetition of Collocations in EFL Textbooks: A Corpus Study

    ERIC Educational Resources Information Center

    Wang, Jui-hsin Teresa; Good, Robert L.

    2007-01-01

    The importance of repetition in the acquisition of lexical items has been widely acknowledged in single-word vocabulary research but has been relatively neglected in collocation studies. Since collocations are considered one key to achieving language fluency, and because learners spend a great amount of time interacting with their textbooks, the…

  2. An Algebraic Spline Model of Molecular Surfaces for Energetic Computations

    PubMed Central

    Zhao, Wenqi; Bajaj, Chandrajit; Xu, Guoliang

    2009-01-01

    In this paper, we describe a new method to generate a smooth algebraic spline (AS) approximation of the molecular surface (MS) based on an initial coarse triangulation derived from the atomic coordinate information of the biomolecule, resident in the PDB (Protein Data Bank). Our method first constructs a triangular prism scaffold covering the PDB structure, and then generates a piecewise polynomial F on the Bernstein-Bezier (BB) basis within the scaffold. An ASMS model of the molecular surface is extracted as the zero contour of F, which is nearly C1 and has dual implicit and parametric representations. The dual representations allow us to easily perform point sampling on the ASMS model and apply it to the accurate estimation of the integrals involved in electrostatic solvation energy computations. Moreover, compared with the trivial piecewise linear surface model, fewer sampling points are needed for the ASMS, which effectively reduces the complexity of the energy estimation. PMID:21519111

  3. An algebraic spline model of molecular surfaces for energetic computations.

    PubMed

    Zhao, Wenqi; Xu, Guoliang; Bajaj, Chandrajit

    2011-01-01

    In this paper, we describe a new method to generate a smooth algebraic spline (AS) approximation of the molecular surface (MS) based on an initial coarse triangulation derived from the atomic coordinate information of the biomolecule, resident in the Protein Data Bank (PDB). Our method first constructs a triangular prism scaffold covering the PDB structure, and then generates a piecewise polynomial F on the Bernstein-Bezier (BB) basis within the scaffold. An ASMS model of the molecular surface is extracted as the zero contour of F, which is nearly C1 and has dual implicit and parametric representations. The dual representations allow us to easily perform point sampling on the ASMS model and apply it to the accurate estimation of the integrals involved in electrostatic solvation energy computations. Moreover, compared with the trivial piecewise linear surface model, fewer sampling points are needed for the ASMS, which effectively reduces the complexity of the energy estimation. PMID:21519111

  4. Trajectory control of an articulated robot with a parallel drive arm based on splines under tension

    NASA Astrophysics Data System (ADS)

    Yi, Seung-Jong

    Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires the trajectory planner. In addition, consideration of joint velocity, acceleration, and jerk trajectories is essential in trajectory planning for industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., polynomial functions of certain degree, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. This approach is also compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented in the microprocessor-based robot controller and motors to produce combined arc and straight-line motion. The simulation and experiment show interesting results, demonstrating smooth motion in both acceleration and jerk and significant improvements in positioning accuracy in trajectory planning.

  5. Wavelets based on splines: an application

    NASA Astrophysics Data System (ADS)

    Srinivasan, Pramila; Jamieson, Leah H.

    1996-10-01

    In this paper, we describe the theory and implementation of a variable rate speech coder using the cubic spline wavelet decomposition. In the discrete-time wavelet extrema representation, Cvetković et al. implement an iterative projection algorithm to reconstruct the wavelet decomposition from the extrema representation. Based on this model, prior to this work, we described a technique for speech coding using the extrema representation which suggests that the non-decimated extrema representation allows us to exploit the pitch redundancy in speech. A drawback of the above scheme is the audible perceptual distortion due to the iterative algorithm, which fails to converge on some speech frames. This paper attempts to alleviate the problem by showing that for a particular class of wavelets implementing the ladder of spaces consisting of the splines, the iterative algorithm can be replaced by an interpolation procedure. Conditions under which the interpolation reconstructs the transform exactly are identified. One of the advantages of the extrema representation is the 'denoising' effect. A least-squares technique to reconstruct the signal is constructed. The effectiveness of the scheme in reproducing significant details of the speech signal is illustrated using an example.

  6. A B-spline Hartree-Fock program

    NASA Astrophysics Data System (ADS)

    Froese Fischer, Charlotte

    2011-06-01

    A B-spline version of a Hartree-Fock program is described. The usual differential equations are replaced by systems of non-linear equations and generalized eigenvalue problems of the form (H-?B)P=0, where a designates the orbital. When orbital a is required to be orthogonal to a fixed orbital, this form assumes that a projection operator has been applied to eliminate the Lagrange multiplier. When two orthogonal orbitals are both varied, the energy must also be stationary with respect to orthogonal transformations. At such a stationary point, the matrix of Lagrange multipliers, ?=(P|H|P), is symmetric and the off-diagonal Lagrange multipliers may again be eliminated through projection operators. For multiply occupied shells, convergence problems are avoided by the use of a single-orbital Newton-Raphson method. A self-consistent field procedure based on these two possibilities exhibits excellent convergence. A Newton-Raphson method for updating all orbitals simultaneously has better numerical properties and a more rapid rate of convergence but requires more computer processing time. Both ground and excited states may be computed using a default universal grid. Output from a calculation for Al 3s3pP2 shows the improvement in accuracy that can be achieved by mapping results from low-order splines on a coarse grid to splines of higher order onto a refined grid. The program distribution contains output from additional test cases. Program summaryProgram title: SPHF version 1.00 Catalogue identifier: AEIJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13 925 No. 
    of bytes in distributed program, including test data, etc.: 714 254 Distribution format: tar.gz Programming language: Fortran 95 Computer: Any system with a Fortran 95 compiler. Tested on Intel Xeon CPU X5355, 2.66 GHz Operating system: Any system with a Fortran 95 compiler Classification: 2.1 External routines: LAPACK (http://www.netlib.org/lapack/) Nature of problem: Non-relativistic Hartree-Fock wavefunctions are determined for atoms in a bound state; these may be used to predict a variety of atomic properties. Solution method: The radial functions are expanded in a B-spline basis [1]. The variational principle applied to an energy functional that includes Lagrange multipliers for orthonormal constraints defines the Hartree-Fock matrix for each orbital. Orthogonal transformations symmetrize the matrix of Lagrange multipliers and projection operators eliminate the off-diagonal Lagrange multipliers to yield a generalized eigenvalue problem. For multiply occupied shells, a single-orbital Newton-Raphson (NR) method is used to speed convergence with very little extra computational effort. In a final step, all orbitals are updated simultaneously by a Newton-Raphson method to improve numerical accuracy. Restrictions: There is no restriction on calculations for the average energy of a configuration. As in the earlier HF96 program [2], only one or two open shells are allowed when results are required for a specific LS coupling. These include: (ns, where l=0,1,2,3 (nl, where l=0,1,2,3, (nd)(nf) Unusual features: Unlike HF96, the present program is a Fortran 90/95 program without the use of COMMON. It is assumed that LAPACK libraries are available. Running time: For Ac 7s7pP2 the execution time varied from 6.9 s to 9.1 s depending on the iteration method.
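The generalized symmetric eigenproblem at the core of this approach is easy to illustrate. The sketch below is not the SPHF program: the matrices are toy stand-ins for the B-spline Hamiltonian and overlap (Gram) matrices, and SciPy's dense `eigh` plays the role of the program's eigensolver.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

n = 8
# Toy symmetric "Hamiltonian" H and a banded, positive-definite "overlap" B,
# standing in for the B-spline Hamiltonian and Gram matrices.
A = rng.standard_normal((n, n))
H = (A + A.T) / 2.0
B = np.eye(n) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))  # diagonally dominant, SPD

# Solve the generalized symmetric eigenproblem H @ p = eps * B @ p.
eps, P = eigh(H, B)

# Each column of P satisfies (H - eps*B) @ p = 0 up to round-off.
residual = np.linalg.norm(H @ P[:, 0] - eps[0] * (B @ P[:, 0]))
print(residual)  # round-off level
```

Because B-splines have local support, the real overlap matrix is banded, which is what makes banded/generalized solvers efficient in this setting.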

  7. Color management with a hammer: the B-spline fitter

    NASA Astrophysics Data System (ADS)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
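The "fitter as a noise filter" idea can be sketched in one dimension. This is our own minimal illustration using SciPy's `make_lsq_spline`, not the authors' customized 3D color fitter; the knot placement and noise model are assumptions.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * x)
noisy = truth + rng.normal(0.0, 0.1, x.size)

# Few interior knots: the least-squares B-spline cannot follow the noise,
# so the fit acts as a smoothing filter with guaranteed C^2 continuity.
k = 3
interior = np.linspace(0.1, 0.9, 7)
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]
spline = make_lsq_spline(x, noisy, t, k=k)

rmse_noisy = np.sqrt(np.mean((noisy - truth) ** 2))
rmse_fit = np.sqrt(np.mean((spline(x) - truth) ** 2))
print(rmse_fit < rmse_noisy)  # the smooth fit is closer to the truth
```

The fitted spline is also cheap to differentiate, which is what makes consistent model inversion practical.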

  8. Analysis of the boundary conditions of the spline filter

    NASA Astrophysics Data System (ADS)

    Tong, Mingsi; Zhang, Hao; Ott, Daniel; Zhao, Xuezeng; Song, John

    2015-09-01

    The spline filter is a standard linear profile filter recommended by ISO/TS 16610-22 (2006). The main advantage of the spline filter is that no end-effects occur as a result of the filter. The ISO standard also provides the tension parameter β = 0.62524 to make the transmission characteristic of the spline filter approximately similar to that of the Gaussian filter. However, when the tension parameter β is not zero, end-effects appear. To resolve this problem, we analyze 14 different combinations of boundary conditions of the spline filter and propose a set of new boundary conditions in this paper. The new boundary conditions can provide satisfactory end portions of the output form without end-effects for the spline filter, while still maintaining the value β = 0.62524.

  9. A mixed basis density functional approach for low dimensional systems with B-splines

    NASA Astrophysics Data System (ADS)

    Ren, Chung-Yuan; Hsue, Chen-Shiung; Chang, Yia-Chung

    2015-03-01

    A mixed basis approach based on density functional theory is employed for low dimensional systems. The basis functions are taken to be plane waves for the periodic direction multiplied by B-spline polynomials in the non-periodic direction. B-splines have the following advantages: (1) the associated matrix elements are sparse, (2) B-splines possess a superior treatment of derivatives, and (3) B-splines are not associated with atomic positions when the geometry structure is optimized, making geometry optimization easy to implement. With this mixed basis set we can directly calculate the total energy of the system instead of using the conventional supercell model with a slab sandwiched between vacuum regions. A generalized Lanczos-Krylov iterative method is implemented for the diagonalization of the Hamiltonian matrix. To demonstrate the present approach, we apply it to study the C(001)-(2×1) surface with a norm-conserving pseudopotential, n-type δ-doped graphene, and a graphene nanoribbon with Vanderbilt's ultra-soft pseudopotentials. All the resulting electronic structures were found to be in good agreement with those obtained by the VASP code, but with a reduced number of basis functions.

  10. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method from the CSI-based modified JPEG and standard JPEG under a given target bit rate, utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
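The downsample-before-coding idea rests on the round trip being nearly lossless for smooth content. A minimal sketch, using SciPy's cubic-spline resampler as a stand-in (the paper wraps this around JPEG, which is omitted here):

```python
import numpy as np
from scipy.ndimage import zoom

# Smooth synthetic "image": low-frequency content survives the
# downsample/upsample round trip almost unchanged.
n = 64
yy, xx = np.mgrid[0:n, 0:n] / n
img = np.sin(2 * np.pi * xx) * np.cos(2 * np.pi * yy)

down = zoom(img, 0.5, order=3, mode='mirror')  # cubic-spline 2:1 downsampling
up = zoom(down, 2.0, order=3, mode='mirror')   # cubic-spline upsampling to 64x64

err = np.sqrt(np.mean((up - img) ** 2))
print(up.shape, err)  # error is small for smooth images, larger for detailed ones
```

The image-dependence noted in the abstract shows up here directly: the round-trip error grows with the amount of high-frequency detail, which is why the codec must decide adaptively.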

  11. Efficient Shape Priors for Spline-Based Snakes.

    PubMed

    Delgado-Gonzalo, Ricard; Schmitter, Daniel; Uhlmann, Virginie; Unser, Michael

    2015-11-01

    Parametric active contours are an attractive approach for image segmentation, thanks to their computational efficiency. They are driven by application-dependent energies that reflect the prior knowledge on the object to be segmented. We propose an energy involving shape priors acting in a regularization-like manner. Thereby, the shape of the snake is orthogonally projected onto the space that spans the affine transformations of a given shape prior. The formulation of the curves is continuous, which provides computational benefits when compared with landmark-based (discrete) methods. We show that this approach improves the robustness and quality of spline-based segmentation algorithms, while its computational overhead is negligible. An interactive and ready-to-use implementation of the proposed algorithm is available and was successfully tested on real data in order to segment Drosophila flies and yeast cells in microscopic images. PMID:26353353
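The projection step can be sketched with ordinary least squares: projecting the snake's points onto the span of affine transforms of a shape prior is a linear problem. The helper name `project_onto_affine_span` is ours, and this sketch ignores the continuous spline parametrization of the actual method.

```python
import numpy as np

def project_onto_affine_span(shape, prior):
    """Least-squares projection of `shape` (N x 2 points) onto the set of
    affine transforms of `prior` (N x 2 points): shape ~ prior @ A.T + t."""
    n = prior.shape[0]
    design = np.column_stack([prior, np.ones(n)])   # [x, y, 1] per point
    coeffs, *_ = np.linalg.lstsq(design, shape, rcond=None)
    return design @ coeffs

rng = np.random.default_rng(2)
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
prior = np.column_stack([np.cos(theta), np.sin(theta)])   # unit-circle prior

# A rotated/scaled/translated copy of the prior, plus noise.
A = np.array([[1.5, -0.4], [0.4, 0.9]])
target = prior @ A.T + np.array([2.0, -1.0]) + rng.normal(0, 0.05, prior.shape)

proj = project_onto_affine_span(target, prior)
print(np.mean((proj - target) ** 2))  # small: target is nearly an affine prior
```

Using the projection as a regularizer pulls the evolving contour toward affine copies of the prior without fixing its pose or scale.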

  12. Locally Refined Splines Representation for Geospatial Big Data

    NASA Astrophysics Data System (ADS)

    Dokken, T.; Skytt, V.; Barrowclough, O.

    2015-08-01

    When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently, a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation of topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction in the size of the data representation compared to the input point clouds is demonstrated. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for visualization is generated on the GPU according to the viewing parameters. LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.

  13. Nonlinear registration using B-spline feature approximation and image similarity

    NASA Astrophysics Data System (ADS)

    Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-07-01

    Warping methods are broadly classified into image-matching methods, based on similar pixel intensity distributions, and feature-matching methods, which use distinct anatomical features. Feature-based methods may fail to match local variations between two images, although they match features well globally. Similarity-based methods, in turn, can produce false matches corresponding to local minima of the underlying energy functions. To avoid the local minima problem, we propose a non-linear deformable registration method utilizing the global information of feature matching and the local information of image matching. To define the features, gray matter and white matter of brain tissue are segmented by the Fuzzy C-Means (FCM) algorithm. A B-spline approximation technique is used for feature matching. We use a multi-resolution B-spline approximation method which modifies the multilevel B-spline interpolation method. It locally changes the resolution of the control lattice in proportion to the distance between features of the two images. Mutual information is used as the similarity measure. The deformation fields are locally refined until the similarity is maximized. In tests with two 3D T1-weighted MRIs, this method maintained the accuracy of conventional image matching methods without the local minimum problem.
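The mutual-information similarity measure mentioned above can be estimated from a joint intensity histogram. A minimal sketch (the histogram estimator and bin count are our choices, not the paper's implementation):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information (in nats) between two
    images, a standard similarity measure for intensity-based registration."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

rng = np.random.default_rng(6)
img = rng.random((64, 64))
noise = rng.random((64, 64))

# An image shares far more information with a (nonlinear) intensity-mapped
# copy of itself than with unrelated noise.
print(mutual_information(img, img ** 2) > mutual_information(img, noise))  # True
```

Because mutual information only assumes a statistical (not linear) intensity relationship, it tolerates the nonlinear intensity differences common between imaging modalities.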

  14. Application of adaptive hierarchical sparse grid collocation to the uncertainty quantification of nuclear reactor simulators

    SciTech Connect

    Yankov, A.; Downar, T.

    2013-07-01

    Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective, they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity by using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant converges to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
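The hierarchical hat-function construction can be sketched in one dimension. This is a simplified illustration of the idea only: the actual method builds multi-dimensional Smolyak sparse grids and refines anisotropically, none of which is shown here.

```python
import numpy as np

def hat(x, center, width):
    # Piecewise-linear "hat" basis function supported on [center-width, center+width].
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def hierarchical_interpolant(f, tol=1e-3, max_level=12):
    """1D sketch of hierarchical collocation on nested equidistant
    (Newton-Cotes) points with hat basis functions: each level stores the
    'surplus' (f minus the interpolant so far) at the new midpoints, and
    refinement stops once every new surplus falls below `tol`.
    Assumes f(0) = f(1) = 0, as in the standard boundary-free construction."""
    nodes, widths, surpluses = [], [], []

    def interp(x):
        out = np.zeros_like(np.asarray(x, dtype=float))
        for c, w, s in zip(nodes, widths, surpluses):
            out += s * hat(x, c, w)
        return out

    for level in range(1, max_level + 1):
        w = 2.0 ** (-level)
        centers = np.arange(1, 2 ** level, 2) * w   # new midpoints at this level
        surplus = f(centers) - interp(centers)
        nodes.extend(centers)
        widths.extend([w] * centers.size)
        surpluses.extend(surplus)
        if np.max(np.abs(surplus)) < tol:
            break
    return interp

f = lambda x: np.sin(np.pi * x)
surrogate = hierarchical_interpolant(f, tol=1e-3)
xs = np.linspace(0.0, 1.0, 1001)
max_err = np.max(np.abs(surrogate(xs) - f(xs)))
print(max_err)  # on the order of tol
```

The surpluses double as a local error indicator, which is exactly what makes the adaptive stopping criterion cheap: refinement stops wherever the interpolant already explains the model output.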

  15. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    NASA Technical Reports Server (NTRS)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  16. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    SciTech Connect

    Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.

    2012-07-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  17. Mathematical Morphology and P-Splines Fitting: A Robust Combination for the Fully Automated Fluorescence Background Removal and Shot Noise Filtering in Raman Spectroscopy Applied to Pigments Analysis

    NASA Astrophysics Data System (ADS)

    González-Vidal, J. J.; Pérez-Pueyo, R.; Soneira, M. J.; Ruiz-Moreno, S.

    2014-06-01

    A new method has been developed for denoising a Raman spectrum using mathematical morphology combined with P-spline fitting, which requires no user input. It was applied to spectra measured on art works, successfully recovering the Raman information.

  18. Spline-locking screw fastening strategy

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1992-01-01

    A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotics or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced space manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.

  19. Spline-Locking Screw Fastening Strategy (SLSFS)

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1991-01-01

    A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotic or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.

  20. The Use of Verb Noun Collocations in Writing Stories among Iranian EFL Learners

    ERIC Educational Resources Information Center

    Bazzaz, Fatemeh Ebrahimi; Samad, Arshad Abd

    2011-01-01

    An important aspect of native speakers' communicative competence is collocational competence which involves knowing which words usually come together and which do not. This paper investigates the possible relationship between knowledge of collocations and the use of verb noun collocation in writing stories because collocational knowledge

  1. Developing and Evaluating a Web-Based Collocation Retrieval Tool for EFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Hao-Jan Howard

    2011-01-01

    The development of adequate collocational knowledge is important for foreign language learners; nonetheless, learners often have difficulties in producing proper collocations in the target language. Among the various ways of learning collocations, the DDL (data-driven learning) approach encourages independent learning of collocations and allows

  2. Numerical solution of Riccati equation using the cubic B-spline scaling functions and Chebyshev cardinal functions

    NASA Astrophysics Data System (ADS)

    Lakestani, Mehrdad; Dehghan, Mehdi

    2010-05-01

    Two numerical techniques are presented for solving the Riccati differential equation. These methods use the cubic B-spline scaling functions and Chebyshev cardinal functions. The methods consist of expanding the required approximate solution in terms of cubic B-spline scaling functions or Chebyshev cardinal functions. Using the operational matrix of the derivative, we reduce the problem to a set of algebraic equations. Some numerical examples are included to demonstrate the validity and applicability of the new techniques. The methods are easy to implement and produce very accurate results.
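The "operational matrix reduces the ODE to algebraic equations" step can be illustrated with Chebyshev collocation, where the differentiation matrix plays the role of the operational matrix of the derivative. This is a generic sketch on a standard Riccati test problem, not either of the paper's two schemes.

```python
import numpy as np
from scipy.optimize import fsolve

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    (Trefethen's construction); x runs from 1 down to -1."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Riccati test problem y' = 1 - y^2, y(0) = 0 on [0, 2]; exact y = tanh(t).
N = 16
D, x = cheb(N)
t = 1.0 - x    # map x in [-1, 1] to t in [0, 2]
Dt = -D        # chain rule: dt/dx = -1

def residual(y):
    r = Dt @ y - (1.0 - y ** 2)   # ODE residual at every collocation node
    r[0] = y[0]                   # node x[0] = 1 is t = 0: impose y(0) = 0
    return r

y = fsolve(residual, np.zeros(N + 1))
err = np.max(np.abs(y - np.tanh(t)))
print(err)  # spectrally small
```

The quadratic nonlinearity of the Riccati equation survives the discretization as a quadratic algebraic system, which a Newton-type solver handles easily.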

  3. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  4. Usability Study of Two Collocated Prototype System Displays

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2007-01-01

    Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the 2 new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.

  5. Numerical solution of fourth order boundary value problem using sixth degree spline functions

    NASA Astrophysics Data System (ADS)

    Kalyani, P.; Madhusudhan Rao, A. S.; Rao, P. S. Rama Chandra

    2015-12-01

    In this communication, we develop sixth degree spline functions by using Bickley's method for obtaining the numerical solution of linear fourth order differential equations of the form y⁽⁴⁾(x) + f(x)y(x) = r(x) with given boundary conditions, where f(x) and r(x) are given functions. Numerical illustrations are tabulated to demonstrate the practical usefulness of the method.

  6. A Simple and Fast Spline Filtering Algorithm for Surface Metrology

    PubMed Central

    Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei

    2015-01-01

    Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement.
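The transform route can be sketched with a discrete smoothing-spline (Whittaker-type) filter computed in a single DCT pass. This is our own sketch of the general idea, not the authors' algorithm: the DCT-II diagonalizes the second-difference penalty under reflective end conditions, so the filter becomes a per-frequency gain.

```python
import numpy as np
from scipy.fft import dct, idct

def spline_filter_dct(z, alpha):
    """One-pass smoothing filter: minimizes ||s - z||^2 + alpha * ||D2 s||^2
    (D2 = second differences with reflective ends). In the DCT basis D2 is
    diagonal, so the solution is a simple per-frequency attenuation."""
    n = z.size
    lam = 2.0 - 2.0 * np.cos(np.pi * np.arange(n) / n)   # eigenvalues of -D2
    gain = 1.0 / (1.0 + alpha * lam ** 2)
    return idct(gain * dct(z, norm='ortho'), norm='ortho')

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 400)
truth = np.sin(2 * np.pi * x)
noisy = truth + rng.normal(0, 0.2, x.size)

smooth = spline_filter_dct(noisy, alpha=1e4)
print(np.sqrt(np.mean((smooth - truth) ** 2)))  # well below the 0.2 noise level
```

Compared with the iterative Cholesky route, everything here reduces to two fast transforms and an elementwise multiply, which is the convenience the abstract points to.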

  7. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts.

    PubMed

    Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio J D; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2013-10-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. PMID:24108269
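The core of a linear spline growth model is a design matrix with one change-in-slope column per knot. The sketch below fits a single child's trajectory by ordinary least squares; the multilevel part of the paper (random effects on these coefficients across children) is deliberately omitted, and the data are simulated.

```python
import numpy as np

def linear_spline_design(age, knots):
    """Design matrix for a linear spline: intercept, age, and one
    change-in-slope term max(age - knot, 0) per knot."""
    cols = [np.ones_like(age), age]
    cols += [np.maximum(age - k, 0.0) for k in knots]
    return np.column_stack(cols)

# Toy weight trajectory with a slope change at 12 months.
rng = np.random.default_rng(4)
age = np.linspace(0, 36, 30)                                  # months
true = 3.5 + 0.6 * age - 0.45 * np.maximum(age - 12.0, 0.0)   # kg
weight = true + rng.normal(0, 0.2, age.size)

X = linear_spline_design(age, knots=[12.0])
beta, *_ = np.linalg.lstsq(X, weight, rcond=None)
slope_0_12 = beta[1]              # growth rate before the knot
slope_12_36 = beta[1] + beta[2]   # growth rate after the knot
print(slope_0_12, slope_12_36)    # near 0.6 and 0.15 kg/month
```

The appeal for cohort studies is that the coefficients are directly interpretable as growth rates per period, unlike the coefficients of fractional polynomials or smooth splines.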

  8. Subcell resolution in simplex stochastic collocation for spatial discontinuities

    SciTech Connect

    Witteveen, Jeroen A.S.; Iaccarino, Gianluca

    2013-10-15

    Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC–SR method is based on resolving the discontinuity location in the probability space explicitly as function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers’ equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC–SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.

  9. Single-grid spectral collocation for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte

    1988-01-01

    The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.

  10. Classifier calibration using splined empirical probabilities in clinical risk prediction.

    PubMed

    Gaudoin, René; Montana, Giovanni; Jones, Simon; Aylin, Paul; Bottle, Alex

    2015-06-01

    The aims of supervised machine learning (ML) applications fall into three broad categories: classification, ranking, and calibration/probability estimation. Many ML methods and evaluation techniques relate to the first two. Nevertheless, there are many applications where having an accurate probability estimate is of great importance. Deriving accurate probabilities from the output of a ML method is therefore an active area of research, resulting in several methods to turn a ranking into class probability estimates. In this manuscript we present a method, splined empirical probabilities, based on the receiver operating characteristic (ROC) to complement existing algorithms such as isotonic regression. Unlike most other methods it works with a cumulative quantity, the ROC curve, and as such can be tagged onto an ROC analysis with minor effort. On a diverse set of measures of the quality of probability estimates (Hosmer-Lemeshow, Kullback-Leibler divergence, differences in the cumulative distribution function) using simulated and real health care data, our approach compares favourably with the standard calibration method, the pool adjacent violators algorithm used to perform isotonic regression. PMID:24557734
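The calibration baseline the paper compares against, isotonic regression via the pool adjacent violators algorithm, fits in a few lines. This is a generic unit-weight sketch of PAV, not the paper's splined-empirical-probabilities method.

```python
import numpy as np

def pool_adjacent_violators(y):
    """Isotonic regression of y against its index via pool adjacent
    violators: returns the non-decreasing least-squares fit."""
    values = list(map(float, y))
    weights = [1.0] * len(values)
    i = 0
    while i < len(values) - 1:
        if values[i] > values[i + 1]:          # violator: pool the two blocks
            w = weights[i] + weights[i + 1]
            v = (weights[i] * values[i] + weights[i + 1] * values[i + 1]) / w
            values[i:i + 2] = [v]
            weights[i:i + 2] = [w]
            i = max(i - 1, 0)                  # pooling may expose a new violator
        else:
            i += 1
    return np.repeat(values, [int(w) for w in weights])

scores = np.array([0.1, 0.3, 0.2, 0.6, 0.4, 0.9])
calibrated = pool_adjacent_violators(scores)
print(calibrated)  # non-decreasing: [0.1, 0.25, 0.25, 0.5, 0.5, 0.9]
```

In calibration, the inputs are model scores sorted by rank and the outputs are pooled empirical class frequencies; the monotonicity constraint is what ties the method to ranking-based evaluation like the ROC.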

  11. Recent researches on multivariate spline and piecewise algebraic variety

    NASA Astrophysics Data System (ADS)

    Wang, Ren-Hong

    2008-11-01

    The multivariate splines as piecewise polynomials have become useful tools for dealing with Computational Geometry, Computer Graphics, Computer Aided Geometrical Design and Image Processing. It is well known that the classical algebraic variety in algebraic geometry is to study geometrical properties of the common intersection of surfaces represented by multivariate polynomials. Recently the surfaces are mainly represented by multivariate piecewise polynomials (i.e. multivariate splines), so the piecewise algebraic variety defined as the common intersection of surfaces represented by multivariate splines is a new topic in algebraic geometry. Moreover, the piecewise algebraic variety will be also important in computational geometry, computer graphics, computer aided geometrical design and image processing. The purpose of this paper is to introduce some recent researches on multivariate spline, piecewise algebraic variety (curve), and their applications.

  12. Construction and properties of non-polynomial spline-curves

    NASA Astrophysics Data System (ADS)

    Lakså, Arne

    2014-12-01

    Bézier curves can be expressed by using De Casteljau's corner cutting algorithm. This can also be formulated using factorization matrices. Each matrix reduces the coefficient vector by 1, and each line in each matrix represents a linear interpolation describing the corner cutting. This matrix formulation may also be used for B-splines. One just has to introduce a linear transformation from the local domains of the basis functions to the interval [0,1]. This leads to a further expansion where the linear transformation is "deformed" by a perturbation function. The result is a non-polynomial spline. We will see that the typical properties of B-splines are preserved or even improved in the non-polynomial case. Further, we describe the construction of the B-spline and provide some practical examples.
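The corner-cutting view of De Casteljau's algorithm described above can be sketched directly: each pass replaces the coefficient vector with pairwise linear interpolations, shortening it by one, which is exactly the action of the factorization matrices.

```python
import numpy as np

def de_casteljau(control, t):
    """Evaluate a Bezier curve by De Casteljau corner cutting: repeated
    linear interpolation reduces the control polygon one point at a time."""
    pts = np.asarray(control, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Cubic Bezier in the plane; cross-check against the explicit Bernstein form.
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 0.0]])
t = 0.3
bernstein = ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
             + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])
print(np.allclose(de_casteljau(P, t), bernstein))  # True
```

In the non-polynomial generalization, the parameter `t` fed to each interpolation step is warped by a perturbation function, which is what the abstract's "deformed" linear transformation refers to.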

  13. Construction of spline functions in spreadsheets to smooth experimental data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A previous manuscript detailed how spreadsheet software can be programmed to smooth experimental data via cubic splines. This addendum corrects a few errors in the previous manuscript and provides additional necessary programming steps. ...

  14. High Accuracy Spline Explicit Group (SEG) Approximation for Two Dimensional Elliptic Boundary Value Problems

    PubMed Central

    Goh, Joan; Hj. M. Ali, Norhashidah

    2015-01-01

    Over the last few decades, cubic splines have been widely used to approximate differential equations due to their ability to produce highly accurate solutions. In this paper, the numerical solution of a two-dimensional elliptic partial differential equation is treated by a specific cubic spline approximation in the x-direction and finite difference in the y-direction. A four point explicit group (EG) iterative scheme with an acceleration tool is then applied to the obtained system. The formulation and implementation of the method for solving physical problems are presented in detail. The complexity of computational is also discussed and the comparative results are tabulated to illustrate the efficiency of the proposed method. PMID:26182211

  16. Application of cubic splines to the spectral analysis of unequally spaced data

    NASA Astrophysics Data System (ADS)

    Akerlof, C.; Alcock, C.; Allsman, R.; Axelrod, T.; Bennett, D. P.; Cook, K. H.; Freeman, K.; Griest, K.; Marshall, S.; Park, H.-S.; Perlmutter, S.; Peterson, B.; Quinn, P.; Reimann, J.; Rodgers, A.; Stubbs, C. W.; Sutherland, W.

    1994-12-01

    In the absence of a priori information, nonparametric statistical techniques are often useful in exploring the structure of data. A least-squares fitting program, based on cubic B-splines, has been developed to analyze the periodicity of variable star light curves. This technique takes advantage of the limited domain within which a particular B-spline is nonzero to substantially reduce the number of calculations needed to generate the regression matrix. By using simple approximations adapted to modern computer workstations, the computational speed is competitive with most other common methods that have been described in the literature. Since the number of arithmetic operations increases as N^2, where N is the number of data points, this method cannot compete with the fast Fourier transform (FFT) modification of the Lomb-Scargle algorithm. However, for data sets with N less than 10^4, it should be quite useful. Examples are shown, taken from the Massive Compact Halo Object (MACHO) experiment.
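The period-search idea can be sketched with a least-squares B-spline fit to phase-folded data. This is a minimal stand-in using SciPy's `LSQUnivariateSpline` (which exploits the local support of B-splines internally), not the MACHO group's program; the synthetic data, seed, knot count, and trial periods are assumptions:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 50, 400))          # unequally spaced observation times
true_period = 2.5
y = np.sin(2 * np.pi * t / true_period) + 0.05 * rng.normal(size=t.size)

def folded_fit_residual(period, n_knots=8):
    """Fold the data at a trial period and fit a least-squares cubic
    B-spline; a good period yields a tight curve and a small residual."""
    phase = (t % period) / period
    order = np.argsort(phase)
    x, z = phase[order], y[order]
    knots = np.linspace(0.1, 0.9, n_knots)     # interior knots only
    spl = LSQUnivariateSpline(x, z, knots, k=3)
    return np.mean((z - spl(x)) ** 2)

good = folded_fit_residual(true_period)        # coherent fold, small residual
bad = folded_fit_residual(1.7)                 # wrong period scrambles the fold
```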

  17. The Benard problem: A comparison of finite difference and spectral collocation eigen value solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher order finite difference schemes.
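The gap between spectral and second-order finite-difference accuracy reported above is easy to reproduce on a model task. The sketch below differentiates exp(x) with the standard Chebyshev collocation differentiation matrix (Trefethen's construction) versus centered differences on the same number of points; it is a generic illustration, not the paper's eigenvalue solver:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points and differentiation matrix
    (Trefethen, 'Spectral Methods in MATLAB', Ch. 6)."""
    if N == 0:
        return np.array([[0.0]]), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # negative row sums on diagonal
    return D, x

N = 15
D, x = cheb(N)
f = np.exp(x)
spec_err = np.max(np.abs(D @ f - f))    # d/dx exp(x) = exp(x)

# 2nd-order centered differences on a uniform grid with the same N+1 points
xu = np.linspace(-1, 1, N + 1)
fu = np.exp(xu)
fd_err = np.max(np.abs(np.gradient(fu, xu, edge_order=2) - fu))
```

For smooth functions the spectral derivative error decays faster than any power of N, which is why the seven-orders-of-magnitude gap at N = 15 cited in the abstract is plausible.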

  18. Continuous Groundwater Monitoring Collocated at USGS Streamgages

    NASA Astrophysics Data System (ADS)

    Constantz, J. E.; Eddy-Miller, C.; Caldwell, R.; Wheeler, J.; Barlow, J.

    2012-12-01

    USGS Office of Groundwater funded a 2-year pilot study collocating groundwater wells for monitoring water level and temperature at several existing continuous streamgages in Montana and Wyoming, while the U.S. Army Corps of Engineers funded enhancements to streamgages in Mississippi. To increase spatial relevance within a given watershed, study sites were selected where near-stream groundwater was in connection with an appreciable aquifer, and where the logistics and cost of well installations were considered representative. After each well installation and surveying, groundwater level and temperature were easily either radio-transmitted or hardwired to the existing data-acquisition system located in the streamgaging shelter. Since USGS field personnel regularly visit streamgages during routine streamflow measurements and streamgage maintenance, the close proximity of the observation wells resulted in minimal extra time to verify electronically transmitted measurements. After the field protocol was tuned, stream and nearby groundwater information were concurrently acquired at streamgages and transmitted to satellite from seven pilot-study sites extending over nearly 2,000 miles (3,200 km) of the central US from October 2009 until October 2011, for evaluating the scientific and engineering add-on value of the enhanced streamgage design. Examination of the four-parameter transmissions from the seven pilot-study groundwater gaging stations reveals an internally consistent, dynamic data suite of continuous groundwater elevation and temperature in tandem with ongoing stream stage and temperature data. Qualitatively, the graphical information provides an appreciation of seasonal trends in stream exchanges with shallow groundwater, as well as thermal issues of concern for topics ranging from ice hazards to the suitability of fish refugia, while quantitatively this information provides a means for estimating flux exchanges through the streambed via heat-based inverse-type groundwater modeling. In June, USGS Fact Sheet 2012-3054 was released online, summarizing the results of the pilot project.

  19. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.

  20. Cubic Spline Wavelets Satisfying Homogeneous Boundary Conditions for Fourth-Order Problems

    NASA Astrophysics Data System (ADS)

    Černá, Dana; Finěk, Václav

    2009-09-01

    The paper is concerned with a construction of cubic spline wavelet bases on the interval which are adapted to homogeneous Dirichlet boundary conditions for fourth-order problems. The resulting bases generate multiresolution analyses on the unit interval with the desired number of vanishing wavelet moments. Inner wavelets are translated and dilated versions of well-known wavelets designed by Cohen, Daubechies, and Feauveau. The construction of boundary scaling functions and wavelets is a delicate task, because they may significantly worsen conditions of resulting bases as well as condition numbers of corresponding stiffness matrices. We present quantitative properties of the constructed bases and we show superiority of our construction in comparison to some other known spline wavelet bases in an adaptive wavelet method for the partial differential equation with the biharmonic operator.

  1. B-spline goal-oriented error estimators for geometrically nonlinear rods

    NASA Astrophysics Data System (ADS)

    Dedè, L.; Santos, H. A. F. A.

    2012-01-01

    We consider goal-oriented a posteriori error estimators for the evaluation of the errors on quantities of interest associated with the solution of geometrically nonlinear curved elastic rods. For the numerical solution of these nonlinear one-dimensional problems, we adopt a B-spline based Galerkin method, a particular case of the more general isogeometric analysis. We propose error estimators using higher order "enhanced" solutions, which are based on the concept of enrichment of the original B-spline basis by means of the "pure" k-refinement procedure typical of isogeometric analysis. We provide several numerical examples for linear and nonlinear output functionals, corresponding to the rotation, displacements and strain energy of the rod, and we compare the effectiveness of the proposed error estimators.

  2. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    SciTech Connect

    Gu, Renliang E-mail: ald@iastate.edu; Dogandžić, Aleksandar E-mail: ald@iastate.edu

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
  4. Collocation and Pattern Recognition Effects on System Failure Remediation

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Press, Hayes N.

    2007-01-01

    Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning system displays to find a problem and correct it using the provided checklist. Results indicate that large parameter movement aided detection, and that pattern recognition is then needed for diagnosis; the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization, because of the pattern recognition developed with training and use.

  5. On a construction of a hierarchy of best linear spline approximations using a finite element approach.

    PubMed

    Wiley, David F; Bertram, Martin; Hamann, Bernd

    2004-01-01

    We present a method for the hierarchical approximation of functions in one, two, or three variables based on the finite element method (Ritz approximation). Starting with a set of data sites with associated function values, we first determine a smooth (scattered-data) interpolant. Next, we construct an initial triangulation by triangulating the region bounded by the minimal subset of data sites defining the convex hull of all sites. We insert only original data sites, thus reducing storage requirements. For each triangulation, we solve a minimization problem: computing the best linear spline approximation of the interpolant of all data, based on a functional involving function values and first derivatives. The error of a best linear spline approximation is computed in a Sobolev-like norm, leading to element-specific error values. We use these interval/triangle/tetrahedron-specific values to identify the element to subdivide next. The subdivision of the element with the largest error value requires the recomputation of all spline coefficients due to the global nature of the problem. We improve efficiency by 1) subdividing multiple elements simultaneously and 2) using a sparse-matrix representation and system solver. PMID:15794137

  6. Noise correction on LANDSAT images using a spline-like algorithm

    NASA Technical Reports Server (NTRS)

    Vijaykumar, N. L. (principal investigator); Dias, L. A. V.

    1985-01-01

    Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, that of using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost), the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, is fast, uses only four points in each interval, and eliminates the necessity of solving a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by the data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
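The four-points-per-interval idea can be illustrated with a simple destriping sketch. This is a stand-in using cubic Lagrange weights for offsets (-2, -1, +1, +2), not the paper's exact weighted-spline algorithm; the synthetic image and the choice of corrupted row are assumptions:

```python
import numpy as np

def fill_missing_rows(img, bad_rows):
    """Replace each lost scan line with a 4-point cubic interpolation
    using the two valid rows above and the two below. The weights are
    the cubic Lagrange weights for offsets -2, -1, +1, +2 evaluated
    at 0, i.e. (-1/6, 2/3, 2/3, -1/6)."""
    w = np.array([-1.0, 4.0, 4.0, -1.0]) / 6.0
    out = img.astype(float).copy()
    for r in bad_rows:
        rows = [r - 2, r - 1, r + 1, r + 2]
        out[r] = sum(wi * out[ri] for wi, ri in zip(w, rows))
    return out

# synthetic image whose columns vary quadratically from row to row;
# the 4-point rule reproduces polynomials up to degree 3 exactly
rows = np.arange(10, dtype=float)
img = np.outer(rows ** 2, np.ones(6))
corrupted = img.copy()
corrupted[5] = 0.0                     # simulate one lost LANDSAT line
restored = fill_missing_rows(corrupted, [5])
```

Unlike a two-point vertical average, the four-point rule preserves curvature across the stripe, which matters when intensity varies nonlinearly through the missing line.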

  7. An L1 smoothing spline algorithm with cross validation

    NASA Astrophysics Data System (ADS)

    Bosworth, Ken W.; Lall, Upmanu

    1993-08-01

    We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces W_M(D). We assume one is given data of the form y_i = f(t_i) + ε_i, i = 1, ..., N, with {t_i} in D, where the ε_i are errors with E(ε_i) = 0 and f is assumed to be in W_M. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, s_λ, of the optimization problem: minimize (1/N) Σ_{i=1}^N |y_i − g(t_i)| + λ J_M(g), where J_M(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, s_λ, would be expected to give robust smoothed estimates of f in situations where the ε_i are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing s_λ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily, if a good initial point is available. The "data-driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ decreasing to 0, taken on the above SQPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.

  8. An accurate and robust finite volume scheme based on the spline interpolation for solving the Euler and Navier-Stokes equations on non-uniform curvilinear grids

    NASA Astrophysics Data System (ADS)

    Wang, Qiuju; Ren, Yu-Xin

    2015-03-01

    Spline schemes are proposed to simulate compressible flows on non-uniform structured grid in the framework of finite volume methods. The cubic spline schemes in the present paper can achieve fourth and third order accuracy on the uniform and non-uniform grids respectively. Due to the continuity of cubic spline polynomial function, the inviscid flux can be computed directly from the reconstructed spline polynomial without using the Riemann solvers or other flux splitting techniques. Isotropic and anisotropic artificial viscosity models are introduced to damp high frequency numerical disturbances and to enhance the numerical stability. The first derivatives that are used to calculate the viscous flux are directly obtained from the cubic spline polynomials and preserve second order accuracy on both uniform and non-uniform grids. A hybrid scheme, in which the spline scheme is blended with shock-capturing WENO scheme, is developed to deal with flow discontinuities. Benchmark test cases of inviscid/viscous flows are presented to demonstrate the accuracy, robustness and efficiency of the proposed schemes.

  9. Polyharmonic smoothing splines and the multidimensional Wiener filtering of fractal-like signals.

    PubMed

    Tirosh, Shai; Van De Ville, Dimitri; Unser, Michael

    2006-09-01

    Motivated by the fractal-like behavior of natural images, we develop a smoothing technique that uses a regularization functional which is a fractional iterate of the Laplacian. This type of functional was initially introduced by Duchon for the approximation of nonuniformily sampled, multidimensional data. He proved that the general solution is a smoothing spline that is represented by a linear combination of radial basis functions (RBFs). Unfortunately, this is tedious to implement for images because of the poor conditioning of RBFs and their lack of decay. Here, we present a much more efficient method for the special case of a uniform grid. The key idea is to express Duchon's solution in a fractional polyharmonic B-spline basis that spans the same space as the RBFs. This allows us to derive an algorithm where the smoothing is performed by filtering in the Fourier domain. Next, we prove that the above smoothing spline can be optimally tuned to provide the MMSE estimation of a fractional Brownian field corrupted by white noise. This is a strong result that not only yields the best linear filter (Wiener solution), but also the optimal interpolation space, which is not bandlimited. It also suggests a way of using the noisy data to identify the optimal parameters (order of the spline and smoothing strength), which yields a fully automatic smoothing procedure. We evaluate the performance of our algorithm by comparing it against an oracle Wiener filter, which requires the knowledge of the true noiseless power spectrum of the signal. We find that our approach performs almost as well as the oracle solution over a wide range of conditions. PMID:16948307
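The Fourier-domain smoothing step has a simple one-dimensional analogue. The sketch below is a toy under stated assumptions (1-D periodic signal, arbitrary λ and γ, synthetic data and seed), not the authors' fractional polyharmonic B-spline implementation: each frequency is attenuated by 1/(1 + λ|ω|^{2γ}), which is the frequency response of smoothing with a fractional-Laplacian penalty:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
x = np.linspace(0, 1, n, endpoint=False)
signal = np.sin(2 * np.pi * 3 * x) + 0.3 * np.sin(2 * np.pi * 7 * x)
noisy = signal + 0.5 * rng.normal(size=n)

# smoothing by filtering in the Fourier domain:
# gamma plays the role of the (fractional) order, lam the smoothing strength
gamma, lam = 1.0, 1e-3
omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
H = 1.0 / (1.0 + lam * np.abs(omega) ** (2 * gamma))
smoothed = np.fft.ifft(np.fft.fft(noisy) * H).real
```

Tuning λ (and γ) against the noise level is exactly the step the paper automates via its MMSE/Wiener argument; here the values are simply assumed.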

  10. Algebraic grid generation using tensor product B-splines. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Saunders, B. V.

    1985-01-01

    Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This thesis describes the development of an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping, and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid, is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation-diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation-diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.

  11. Generalized b-spline subdivision-surface wavelets and lossless compression

    SciTech Connect

    Bertram, M; Duchaineau, M A; Hamann, B; Joy, K I

    1999-11-24

    We present a new construction of wavelets on arbitrary two-manifold topology for geometry compression. The constructed wavelets generalize symmetric tensor product wavelets with associated B-spline scaling functions to irregular polygonal base mesh domains. The wavelets and scaling functions are tensor products almost everywhere, except in the neighborhoods of some extraordinary points (points of valence unequal four) in the base mesh that defines the topology. The compression of arbitrary polygonal meshes representing isosurfaces of scalar-valued trivariate functions is a primary application. The main contribution of this paper is the generalization of lifted symmetric tensor product B-spline wavelets to two-manifold geometries. Surfaces composed of B-spline patches can easily be converted to this scheme. We present a lossless compression method for geometries with or without associated functions like color, texture, or normals. The new wavelet transform is highly efficient and can represent surfaces at any level of resolution with high degrees of continuity, except at a finite number of extraordinary points in the base mesh. In the neighborhoods of these points detail can be added to the surface to approximate any degree of continuity.

  12. Beyond NURBS: enhancement of local refinement through T-splines

    NASA Astrophysics Data System (ADS)

    Bailey, Edward; Carayon, Sebastien

    2007-09-01

    The application of NURBS (Non-Uniform Rational B-Splines) in free-form optical design has existed for several years. There are cases, however, where NURBS geometry presents limitations. Tuning the control points of a spline patch may require a large number of faceted polygons to obtain fine structure. The knot pairs in either a uniform or non-uniform B-spline surface require a rectangular grid arrangement. Although knot insertion retains surface continuity, it adds superfluous complexity, as each knot requires the insertion of an entire row of control points. Adding rows of control points for each new knot to satisfy the NURBS topology needlessly complicates the optical control surface. Stitching together NURBS patches also produces a high probability of continuity error at rib patch junctions, which can produce unwanted ripples and holes. The computational cost of altering seed patch control points at NURBS spline patch junctions hampers the performance of a global non-imaging optimizer and reduces the possibility of exploring the more interesting areas of a problem topology within realistic time constraints. Advancements in CAGD (Computer-Aided Geometric Design) which overcome the common limitations of NURBS can significantly improve free-form possibilities. In most cases control points can be reduced by 60% or more to represent the same free-form geometry. T-splines and other recent CAGD advancements also accelerate local refinement by simplifying control point addition, allowing the designer to increase optical control surface detail where needed.

  13. Optimal finite-thrust spacecraft trajectories using collocation and nonlinear programming

    NASA Technical Reports Server (NTRS)

    Enright, Paul J.; Conway, Bruce A.

    1990-01-01

    A new method is described for the determination of optimal spacecraft trajectories in an inverse-square field using finite, fixed thrust. The method employs a recently developed optimization technique which uses a piecewise polynomial representation for the state and controls, and collocation, thus converting the optimal control problem into a nonlinear programming problem, which is solved numerically. This technique has been modified to provide efficient handling of those portions of the trajectory which can be determined analytically, i.e., the coast arcs. Among the problems that have been solved using this method are optimal rendezvous and transfer (including multirevolution cases) and optimal multiburn orbit insertion from hyperbolic approach.
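The collocation-to-NLP conversion described above can be sketched on a textbook problem. The following is a minimal trapezoidal direct-collocation example (a double-integrator rest-to-rest transfer solved with SciPy's SLSQP), assuming this problem, grid size, and solver as stand-ins; it is not the authors' piecewise-polynomial formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Rest-to-rest transfer of a double integrator, x1' = x2, x2' = u,
# minimizing the control effort integral of u^2 with x(0) = (0, 0) and
# x(1) = (1, 0); the analytic optimum is u(t) = 6 - 12 t.
N = 21                                   # grid nodes
h = 1.0 / (N - 1)

def unpack(z):
    return z[:N], z[N:2 * N], z[2 * N:]  # x1, x2, u

def cost(z):
    _, _, u = unpack(z)
    u2 = u ** 2                          # trapezoidal quadrature of u^2
    return h * (0.5 * u2[0] + u2[1:-1].sum() + 0.5 * u2[-1])

def defects(z):
    # collocation: state increments must match the trapezoidal rule
    # applied to the dynamics, plus the boundary conditions
    x1, x2, u = unpack(z)
    d1 = x1[1:] - x1[:-1] - 0.5 * h * (x2[1:] + x2[:-1])
    d2 = x2[1:] - x2[:-1] - 0.5 * h * (u[1:] + u[:-1])
    bc = [x1[0], x2[0], x1[-1] - 1.0, x2[-1]]
    return np.concatenate([d1, d2, bc])

res = minimize(cost, np.zeros(3 * N), method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 500})
x1, x2, u = unpack(res.x)
```

The defect constraints are what turn the optimal control problem into a finite-dimensional NLP; analytic coast arcs, as in the paper, would simply replace a run of these constraints with a closed-form propagation.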

  14. The solution of singular optimal control problems using direct collocation and nonlinear programming

    NASA Astrophysics Data System (ADS)

    Downey, James R.; Conway, Bruce A.

    1992-08-01

    This paper describes work on the determination of optimal rocket trajectories which may include singular arcs. In recent years direct collocation and nonlinear programming has proven to be a powerful method for solving optimal control problems. Difficulties in the application of this method can occur if the problem is singular. Techniques exist for solving singular problems indirectly using the associated adjoint formulation. Unfortunately, the adjoints are not a part of the direct formulation. It is shown how adjoint information can be obtained from the direct method to allow the solution of singular problems.

  15. Growth curve analysis for plasma profiles using smoothing splines

    SciTech Connect

    Imre, K.

    1993-05-01

    We are developing a profile analysis code for the statistical estimation of the parametric dependencies of the temperature and density profiles in tokamaks. Our code uses advanced statistical techniques to determine the optimal fit, i.e., the fit which minimizes the predictive error. For a dataset of forty TFTR Ohmic profiles, our preliminary results indicate that the profile shape depends almost exclusively on q_a', but that the shape dependencies are not Gaussian. We are now comparing various shape models on the TFTR data. In the first six months, we have completed the core modules of the code, including a B-spline package for variable knot locations, a data-based method to determine the optimal smoothing parameters, self-consistent estimation of the bias errors, and adaptive fitting near the plasma edge. Visualization graphics already include three-dimensional surface plots, discharge-by-discharge plots of the predicted curves with error bars together with the actual measurement values, and plots of the basis functions with errors.

  16. Error bounded conic spline approximation for NC code

    NASA Astrophysics Data System (ADS)

    Shen, Liyong

    2012-01-01

    Curve fitting is an important preliminary work for data compression and path interpolation in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm is mainly formed by three steps: divide the G01 code into subsets by discrete curvature detection; find a polygonal line segment approximation for each subset within a given error; and finally, fit each polygonal approximation with a conic Bézier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is straightforward and efficient, requiring no solution of a global equation system or optimization problem. It is completed by the selection of the curve's weight. To make the curve more suitable for NC, we present an interval for the weight selection, and the error is then computed.

  17. L2 Learner Production and Processing of Collocation: A Multi-Study Perspective

    ERIC Educational Resources Information Center

    Siyanova, Anna; Schmitt, Norbert

    2008-01-01

    This article presents a series of studies focusing on L2 production and processing of adjective-noun collocations (e.g., "social services"). In Study 1, 810 adjective-noun collocations were extracted from 31 essays written by Russian learners of English. About half of these collocations appeared frequently in the British National Corpus (BNC);

  18. English Learners' Knowledge of Prepositions: Collocational Knowledge or Knowledge Based on Meaning?

    ERIC Educational Resources Information Center

    Mueller, Charles M.

    2011-01-01

    Second language (L2) learners' successful performance in an L2 can be partly attributed to their knowledge of collocations. In some cases, this knowledge is accompanied by knowledge of the semantic and/or grammatical patterns that motivate the collocation. At other times, collocational knowledge may serve a compensatory role. To determine the

  19. The Use of Collocations by Advanced Learners of English and Some Implications for Teaching.

    ERIC Educational Resources Information Center

    Nesselhauf, Nadja

    2003-01-01

    Analyzes the use of verb-noun collocations by advanced German speaking students of English in free written production. Types of mistakes that learners make when producing collocations are identified. The influence of the degree of restriction of a combination and of learners' first language on the production of collocations is investigated. (VWL)

  20. Collocational Strategies of Arab Learners of English: A Study in Lexical Semantics.

    ERIC Educational Resources Information Center

    Muhammad, Raji Zughoul; Abdul-Fattah, Hussein S.

    Arab learners of English encounter a serious problem with collocational sequences. The present study purports to determine the extent to which university English language majors can use English collocations properly. A two-form translation test of 16 Arabic collocations was administered to both graduate and undergraduate students of English. The

  1. An Exploratory Study of Collocational Use by ESL Students--A Task Based Approach

    ERIC Educational Resources Information Center

    Fan, May

    2009-01-01

    Collocation is an aspect of language generally considered arbitrary by nature and problematic to L2 learners who need collocational competence for effective communication. This study attempts, from the perspective of L2 learners, to have a deeper understanding of collocational use and some of the problems involved, by adopting a task based

  2. Application of spectral collocation techniques to the stability of swirling flows

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Ash, Robert L.; Malik, Mujeeb R.

    1989-01-01

    The linearized stability equations for swirling flows are solved in cylindrical coordinates with a Chebyshev spectral collocation method for both temporal and spatial stability, with the eigenvalues obtained through the QZ routine. The resulting algorithm is robust and easily adaptable to a range of flow configurations encompassing internal and external flows, requiring only minor modifications to the boundary condition application. Accuracy and efficiency tests of the method are made for the cases of plane Poiseuille, rotating-pipe, and trailing-line-vortex flows.
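
    The core ingredient of such a spectral collocation solver is the Chebyshev differentiation matrix, whose powers discretize the stability operator before the eigenvalues are extracted. As an illustrative sketch (not the paper's swirling-flow equations), the fragment below builds the standard Gauss-Lobatto differentiation matrix and recovers the eigenvalues of the model problem u'' = λu with Dirichlet conditions:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # negative row-sum trick for the diagonal
    return D, x

# Model eigenvalue problem u'' = lambda * u on [-1, 1] with u(-1) = u(1) = 0;
# exact eigenvalues are lambda_k = -(k * pi / 2)^2
D, x = cheb(32)
D2 = (D @ D)[1:-1, 1:-1]               # interior rows/columns enforce the Dirichlet conditions
lam = np.sort(np.linalg.eigvals(D2).real)[::-1]
```

    The leading eigenvalues match the exact values to near machine precision, the same spectral accuracy the authors exploit for their stability calculations.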

  3. Reconstruction of egg shape using B-spline

    NASA Astrophysics Data System (ADS)

    Roslan, Nurshazneem; Yahya, Zainor Ridzuan

    2015-05-01

    In this paper, the reconstruction of an egg's outline using piecewise parametric cubic B-splines is proposed. A reverse engineering process has been used to represent the generic shape of an egg and to acquire its geometric information as a two-dimensional set of points. For the curve reconstruction, the aim is to optimize the control points of all curves such that the distance from the data points to the curve is minimized. The B-spline curve functions were then used for curve fitting between the actual and the reconstructed profiles.
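
    A minimal sketch of this kind of parametric curve reconstruction, using SciPy's spline fitting in place of the authors' control-point optimization (the ellipse below stands in for the digitized egg outline and is an illustrative assumption):

```python
import numpy as np
from scipy import interpolate

# Synthetic 2D outline points (an ellipse standing in for the digitized egg profile)
t = np.linspace(0.0, 2.0 * np.pi, 200)
x, y = 2.0 * np.cos(t), 3.0 * np.sin(t)

# Fit a periodic parametric cubic B-spline; s controls the least-squares smoothing trade-off
tck, u = interpolate.splprep([x, y], per=True, s=1e-4)

# Evaluate the reconstructed curve and measure the maximum fitting error at the data sites
xr, yr = interpolate.splev(u, tck)
err = np.max(np.hypot(xr - x, yr - y))
```

    The returned `tck` holds the knot vector, control points, and degree, so the fitted control polygon can be inspected or optimized further.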

  4. How to fly an aircraft with control theory and splines

    NASA Technical Reports Server (NTRS)

    Karlsson, Anders

    1994-01-01

    When trying to fly an aircraft as smoothly as possible, it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory in a system that models an aircraft. Computer calculations in Matlab show that it is impossible to obtain sufficiently smooth control signals in this way, because the splines try to approximate not only the test function but also its derivatives. Perfect tracking is achieved, but at the price of very peaky control signals and accelerations.

  5. Beyond triple collocation: Applications to satellite soil moisture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...
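
    The covariance-based form of triple collocation can be sketched in a few lines; the three synthetic products and their error levels below are illustrative assumptions, not values from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
truth = rng.normal(0.0, 1.0, n)          # unobserved geophysical variable
x = truth + rng.normal(0.0, 0.3, n)      # three products with mutually independent errors
y = truth + rng.normal(0.0, 0.5, n)
z = truth + rng.normal(0.0, 0.4, n)

def tc_error_variances(x, y, z):
    """Covariance-based triple collocation: error variances of three collocated
    measurement systems, assuming zero cross-correlation between their errors."""
    C = np.cov(np.vstack([x, y, z]))
    ex2 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez2 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex2, ey2, ez2

ex2, ey2, ez2 = tc_error_variances(x, y, z)
```

    With enough samples the estimates recover the true error variances (0.09, 0.25, 0.16) without ever observing the truth, which is what makes the technique useful for satellite products.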

  6. Beyond Single Words: The Most Frequent Collocations in Spoken English

    ERIC Educational Resources Information Center

    Shin, Dongkwang; Nation, Paul

    2008-01-01

    This study presents a list of the highest frequency collocations of spoken English based on carefully applied criteria. In the literature, more than forty terms have been used for designating multi-word units, which are generally not well defined. To avoid this confusion, six criteria are strictly applied. The ten million word BNC spoken section

  7. The Effects of Vocabulary Learning on Collocation and Meaning

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2009-01-01

    This study investigates the effects of receptive and productive vocabulary tasks on learning collocation and meaning. Japanese English as a foreign language students learned target words in three glossed sentences and in a cloze task. To determine the effects of the treatments, four tests were used to measure receptive and productive knowledge of

  8. Higher-order numerical methods derived from three-point polynomial interpolation

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and Hermitian finite-difference discretization. The equations generally apply for both uniform and variable meshes. Hybrid schemes resulting from different polynomial approximations for first and second derivatives lead to the nonuniform mesh extension of the so-called compact or Pade difference techniques. A variety of fourth-order methods are described and this concept is extended to sixth-order. Solutions with these procedures are presented for the similar and non-similar boundary layer equations with and without mass transfer, the Burgers equation, and the incompressible viscous flow in a driven cavity. Finally, the interpolation procedure is used to derive higher-order temporal integration schemes and results are shown for the diffusion equation.
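
    The fourth-order compact (Padé) scheme mentioned above couples neighbouring derivative values through a tridiagonal system. A minimal uniform-mesh sketch, with exact derivatives supplied at the two boundary points for simplicity (the test function and mesh are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_banded

def compact_first_derivative(f, h, d0, dN):
    """4th-order compact (Pade) scheme on a uniform mesh:
    f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h  at interior points,
    with exact derivative values d0, dN imposed at the two boundary points."""
    n = len(f)
    ab = np.zeros((3, n))            # banded storage: super-, main, sub-diagonal
    ab[0, 2:] = 1.0
    ab[1, :] = 4.0
    ab[2, :-2] = 1.0
    rhs = np.zeros(n)
    rhs[1:-1] = 3.0 * (f[2:] - f[:-2]) / h
    # Pin the boundary rows to the known derivative values
    ab[1, 0] = ab[1, -1] = 1.0
    ab[0, 1] = ab[2, -2] = 0.0
    rhs[0], rhs[-1] = d0, dN
    return solve_banded((1, 1), ab, rhs)

x = np.linspace(0.0, 1.0, 41)
h = x[1] - x[0]
d = compact_first_derivative(np.sin(x), h, np.cos(x[0]), np.cos(x[-1]))
err = np.max(np.abs(d - np.cos(x)))
```

    The tridiagonal structure is exactly what makes these higher-order schemes as cheap to solve as standard second-order differences.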

  9. B-spline active rays segmentation of microcalcifications in mammography

    SciTech Connect

    Arikidis, Nikolaos S.; Skiadopoulos, Spyros; Karahaliou, Anna; Likaki, Eleni; Panayiotakis, George; Costaridou, Lena

    2008-11-15

    Accurate segmentation of microcalcifications in mammography is crucial for the quantification of morphologic properties by features incorporated in computer-aided diagnosis schemes. A novel segmentation method is proposed implementing active rays (polar-transformed active contours) on a B-spline wavelet representation to identify microcalcification contour point estimates in a coarse-to-fine strategy at two levels of analysis. An iterative region growing method is used to delineate the final microcalcification contour curve, with pixel aggregation constrained by the microcalcification contour point estimates. A radial gradient-based method was also implemented for comparative purposes. The methods were tested on a dataset consisting of 149 mainly pleomorphic microcalcification clusters originating from 130 mammograms of the DDSM database. Segmentation accuracy of both methods was evaluated by three radiologists, based on a five-point rating scale. The radiologists' average accuracy ratings were 3.96±0.77, 3.97±0.80, and 3.83±0.89 for the proposed method, and 2.91±0.86, 2.10±0.94, and 2.56±0.76 for the radial gradient-based method, respectively, while the differences in accuracy ratings between the two segmentation methods were statistically significant (Wilcoxon signed-ranks test, p<0.05). The effect of the two segmentation methods in the classification of benign from malignant microcalcification clusters was also investigated. A least square minimum distance classifier was employed based on cluster features reflecting three morphological properties of individual microcalcifications (area, length, and relative contrast). Classification performance was evaluated by means of the area under the ROC curve (A_z). The area and length morphologic features demonstrated a statistically significant (Mann-Whitney U-test, p<0.05) higher patient-based classification performance when extracted from microcalcifications segmented by the proposed method (0.82±0.06 and 0.86±0.05, respectively), as compared to segmentation by the radial gradient-based method (0.71±0.08 and 0.75±0.08). The proposed method demonstrates improved segmentation accuracy, fulfilling human visual criteria, and enhances the ability of morphologic features to characterize microcalcification clusters.

  10. L1 Influence on the Acquisition of L2 Collocations: Japanese ESL Users and EFL Learners Acquiring English Collocations

    ERIC Educational Resources Information Center

    Yamashita, Junko; Jiang, Nan

    2010-01-01

    This study investigated first language (L1) influence on the acquisition of second language (L2) collocations using a framework based on Kroll and Stewart (1994) and Jiang (2000), by comparing the performance on a phrase-acceptability judgment task among native speakers of English, Japanese English as a second language (ESL) users, and Japanese

  12. Temi firthiani di linguistica applicata: "Restricted Languages" e "Collocation" (Firthian Themes in Applied Linguistics: "Restricted Languages" and "Collocation")

    ERIC Educational Resources Information Center

    Leonardi, Magda

    1977-01-01

    Discusses the importance of two Firthian themes for language teaching. The first theme, "Restricted Languages," concerns the "microlanguages" of every language (e.g., literary language, scientific, etc.). The second theme, "Collocation," shows that equivalent words in two languages rarely have the same position in both languages. (Text is in

  13. Radial Splines Would Prevent Rotation Of Bearing Race

    NASA Technical Reports Server (NTRS)

    Kaplan, Ronald M.; Chokshi, Jaisukhlal V.

    1993-01-01

    Interlocking fine-pitch ribs and grooves formed on otherwise flat mating end faces of housing and outer race of rolling-element bearing to be mounted in housing, according to proposal. Splines bear large torque loads and impose minimal distortion on raceway.

  14. Quantifying cervical-spine curvature using Bézier splines.

    PubMed

    Klinich, Kathleen D; Ebert, Sheila M; Reed, Matthew P

    2012-11-01

    Knowledge of the distributions of cervical-spine curvature is needed for computational studies of cervical-spine injury in motor-vehicle crashes. Many methods of specifying spinal curvature have been proposed, but they often involve qualitative assessment or a large number of parameters. The objective of this study was to develop a quantitative method of characterizing cervical-spine curvature using a small number of parameters. 180 sagittal X-rays of subjects seated in automotive posture with their necks in neutral, flexed, and extended postures were collected in the early 1970s. Subjects were selected to represent a range of statures and ages for each gender. X-rays were reanalyzed using advanced technology and statistical methods. Coordinates of the posterior margins of the vertebral bodies and dens were digitized. Bézier splines were fit through the coordinates of these points. The interior control points that define the spline curvature were parameterized as a vector angle and length. By defining the length as a function of the angle, cervical-spine curvature was defined with just two parameters: superior and inferior Bézier angles. A classification scheme was derived to sort each curvature by magnitude and type of curvature (lordosis versus S-shaped versus kyphosis; inferior or superior location). Cervical-spine curvature in an automotive seated posture varies with gender and age but not stature. Average values of superior and inferior Bézier angles for cervical spines in flexion, neutral, and extension automotive postures are presented for each gender and age group. Use of Bézier splines fit through posterior margins offers a quantitative method of characterizing cervical-spine curvature using two parameters: superior and inferior Bézier angles. PMID:23387791
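
    A cubic Bézier curve of this kind is cheap to evaluate; the sketch below uses hypothetical endpoint coordinates and a simplified angle-and-length construction of the interior control points, not the study's fitted parameters:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve in Bernstein form at parameter values t."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical spine endpoints (mm); interior control points are placed by rotating
# the chord direction by a Bezier angle and scaling by a fraction of the chord length,
# loosely mirroring the superior/inferior Bezier-angle parameterization described above
p0, p3 = np.array([0.0, 0.0]), np.array([0.0, 100.0])
chord = p3 - p0
L = np.linalg.norm(chord)

def control_point(origin, direction, angle_deg, frac=0.3):
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return origin + frac * L * (rot @ (direction / np.linalg.norm(direction)))

p1 = control_point(p0, chord, 15.0)     # inferior Bezier angle (hypothetical value)
p2 = control_point(p3, -chord, -15.0)   # superior Bezier angle (hypothetical value)
curve = cubic_bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, 50))
```

    Because the curve interpolates its endpoints exactly, the two angles alone control the lordotic, S-shaped, or kyphotic character of the fitted curvature.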

  15. Adaptive Multilevel Second-Generation Wavelet Collocation Elliptic Solver: A Cure for High Viscosity Contrasts

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. N.; Vasilyev, O. V.; Yuen, D. A.

    2003-12-01

    An adaptive multilevel wavelet collocation method for solving multi-dimensional elliptic problems with localized structures is developed. The method is based on the general class of multi-dimensional second-generation wavelets and is an extension of the dynamically adaptive second-generation wavelet collocation method for evolution problems. Wavelet decomposition is used for grid adaptation and interpolation, while an O(N) hierarchical finite-difference scheme, which takes advantage of the wavelet multilevel decomposition, is used for derivative calculations. The multilevel structure of the wavelet approximation provides a natural way to obtain the solution on a near-optimal grid. In order to accelerate the convergence of the iterative solver, an iterative procedure analogous to the multigrid algorithm is developed. For problems with slowly varying viscosity, simple diagonal preconditioning works. For problems with large laterally varying viscosity contrasts, either a direct solver on shared-memory machines or a multilevel iterative solver with an incomplete LU preconditioner may be used. The method is demonstrated for the solution of a number of two-dimensional elliptic test problems with both constant and spatially varying viscosity with multiscale character.

  16. A coupled ensemble filtering and probabilistic collocation approach for uncertainty quantification of hydrological models

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, W. W.; Li, Y. P.; Huang, G. H.; Huang, K.

    2015-11-01

    In this study, a coupled ensemble filtering and probabilistic collocation (EFPC) approach is proposed for uncertainty quantification of hydrologic models. This approach combines the capabilities of the ensemble Kalman filter (EnKF) and the probabilistic collocation method (PCM) to provide a better treatment of uncertainties in hydrologic models. The EnKF method is employed to approximate the posterior probabilities of model parameters and improve the forecasting accuracy based on the observed measurements; the PCM approach is used to construct a model response surface in terms of the posterior probabilities of model parameters, revealing uncertainty propagation from model parameters to model outputs. The proposed method is applied to the Xiangxi River, located in the Three Gorges Reservoir area of China. The results indicate that the proposed EFPC approach can effectively quantify the uncertainty of hydrologic models. Even for a simple conceptual hydrological model, the EFPC approach is about 10 times faster than the traditional Monte Carlo method, without an obvious decrease in prediction accuracy. Finally, the results explicitly reveal the contributions of model parameters to the total variance of model predictions during the simulation period.

  17. Evaluation of the spline reconstruction technique for PET

    SciTech Connect

    Kastis, George A. Kyriakopoulou, Dimitra; Gaitanis, Anastasios; Fernández, Yolanda; Hutton, Brian F.; Fokas, Athanasios S.

    2014-04-15

    Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an ¹⁸F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an ¹⁸F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real point-source studies. In all simulated phantoms, the SRT exhibits higher contrast and lower bias than FBP at all noise levels, at the cost of increased COV in the reconstructed images. Finally, in real studies, whereas the contrast of the cold chambers is similar for both algorithms, the SRT reconstructed images of the NEMA phantom exhibit slightly higher COV values than those of FBP. In the Derenzo phantom, SRT resolves the 2-mm separated holes slightly better than FBP. The small-animal and human reconstructions via SRT exhibit slightly higher resolution and contrast than the FBP reconstructions. Conclusions: The SRT provides images of higher resolution, higher contrast, and lower bias than FBP, while increasing the noise slightly in the reconstructed images. Furthermore, it eliminates streak artifacts outside the object boundary. Unlike other analytic algorithms, the reconstruction time of SRT is comparable with that of FBP. The source code for SRT will become available in a future release of STIR.

  18. Novel spline-based approach for robust strain estimation in elastography.

    PubMed

    Alam, S Kaisar

    2010-04-01

    Robust strain estimation is important in elastography. However, a high signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) are sometimes attained by sacrificing resolution. We propose a least-squares-based smoothing-spline strain estimator that can produce elastograms with high SNR and CNR without significant loss of resolution. The proposed method improves strain-estimation quality by deemphasizing displacements with lower correlation in computing strains. Results from finite-element simulation and phantom-experiment data demonstrate that the described strain estimator provides good SNR and CNR without degrading resolution. PMID:20687277
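
    The idea of a weighted smoothing-spline strain estimator can be sketched with SciPy's UnivariateSpline, using correlation coefficients as fitting weights so that less reliable displacement samples are deemphasized; the simulated displacement profile and noise level below are illustrative assumptions, not the paper's estimator:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
depth = np.linspace(0.0, 10.0, 100)                              # depth axis (mm)
true_strain = 0.02
disp = true_strain * depth + rng.normal(0.0, 0.01, depth.size)   # noisy displacement estimates

# Correlation coefficients (simulated here) weight the least-squares spline fit,
# deemphasizing displacement samples with lower correlation
rho = np.clip(1.0 - np.abs(rng.normal(0.0, 0.05, depth.size)), 0.0, 1.0)
spline = UnivariateSpline(depth, disp, w=rho, k=3, s=depth.size * 1e-4)
strain = spline.derivative()(depth)          # strain = d(displacement)/d(depth)
```

    Differentiating the smoothed spline rather than the raw displacements is what trades a small amount of resolution for a large gain in SNR and CNR.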

  19. B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms

    SciTech Connect

    Bueno, G.; Ruiz, M.; Sanchez, S

    2006-10-04

    Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce women's mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of B-spline filtering based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.

  20. Cubic Hermite Bezier spline based reconstruction of implanted aortic valve stents from CT images.

    PubMed

    Gessat, Michael; Altwegg, Lukas; Frauenfelder, Thomas; Plass, André; Falk, Volkmar

    2011-01-01

    Mechanical forces and strain induced by transcatheter aortic valve implantation are usually cited as the origin of the postoperative left ventricular arrhythmia associated with the technique. No quantitative data have been published so far to substantiate this common belief. As a first step towards a quantitative analysis of the biomechanical situation at the aortic root after transapical aortic valve implantation, we present a spline-based method for reconstructing the implanted stent from CT images and for locally measuring the deformation of the stent. PMID:22254890

  1. Radiation energy budget studies using collocated AVHRR and ERBE observations

    SciTech Connect

    Ackerman, S.A.; Inoue, Toshiro

    1994-03-01

    Changes in the energy balance at the top of the atmosphere are specified as a function of atmospheric and surface properties using observations from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner. By collocating the observations from the two instruments, flown on NOAA-9, the authors take advantage of the remote-sensing capabilities of each instrument. The AVHRR spectral channels were selected based on regions that are strongly transparent to clear sky conditions and are therefore useful for characterizing both surface and cloud-top conditions. The ERBE instruments make broadband observations that are important for climate studies. The approach of collocating these observations in time and space is used to study the radiative energy budget of three geographic regions: oceanic, savanna, and desert. 25 refs., 8 figs.

  2. Radiation energy budget studies using collocated AVHRR and ERBE observations

    NASA Technical Reports Server (NTRS)

    Ackerman, Steven A.; Inoue, Toshiro

    1994-01-01

    Changes in the energy balance at the top of the atmosphere are specified as a function of atmospheric and surface properties using observations from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner. By collocating the observations from the two instruments, flown on NOAA-9, the authors take advantage of the remote-sensing capabilities of each instrument. The AVHRR spectral channels were selected based on regions that are strongly transparent to clear sky conditions and are therefore useful for characterizing both surface and cloud-top conditions. The ERBE instruments make broadband observations that are important for climate studies. The approach of collocating these observations in time and space is used to study the radiative energy budget of three geographic regions: oceanic, savanna, and desert.

  3. Low speed wind tunnel investigation of span load alteration, forward-located spoilers, and splines as trailing-vortex-hazard alleviation devices on a transport aircraft model

    NASA Technical Reports Server (NTRS)

    Croom, D. R.; Dunham, R. E., Jr.

    1975-01-01

    The effectiveness of a forward-located spoiler, a spline, and span load alteration due to a flap configuration change as trailing-vortex-hazard alleviation methods was investigated. For the transport aircraft model in the normal approach configuration, the results indicate that either a forward-located spoiler or a spline is effective in reducing the trailing-vortex hazard. The results also indicate that large changes in span loading, due to retraction of the outboard flap, may be an effective method of reducing the trailing-vortex hazard.

  4. A Corpus-Based Study of the Linguistic Features and Processes Which Influence the Way Collocations Are Formed: Some Implications for the Learning of Collocations

    ERIC Educational Resources Information Center

    Walker, Crayton Phillip

    2011-01-01

    In this article I examine the collocational behaviour of groups of semantically related verbs (e.g., "head, run, manage") and nouns (e.g., "issue, factor, aspect") from the domain of business English. The results of this corpus-based study show that much of the collocational behaviour exhibited by these lexical items can be explained by examining

  5. BSR: B-spline atomic R-matrix codes

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg

    2006-02-01

    BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summaryTitle of program: BSR Catalogue identifier: ADWY Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWY Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC Operating systems under which the new version has been tested: UNIX, Windows XP Programming language used: FORTRAN 95 Memory required to execute with typical data: Typically 256-512 Mwords. Since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem No. of bits in a word: 8 No. of processors used: 1 Has the code been vectorized or parallelized?: no No. of lines in distributed program, including test data, etc.: 69 943 No. of bytes in distributed program, including test data, etc.: 746 450 Peripherals used: scratch disk store; permanent disk store Distribution format: tar.gz Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. 
Norrington, Comput. Phys. Comm. 92 (1995) 290].

  6. Explicit solutions for collocated structural control with guaranteed H₂ norm performance specifications

    NASA Astrophysics Data System (ADS)

    Meisami-Azad, Mona; Mohammadpour, Javad; Grigoriadis, Karolos M.

    2009-03-01

    This paper presents explicit solutions for velocity feedback control of structural systems with collocated sensors and actuators to satisfy closed-loop H₂ and L₂-L∞ norm performance specifications. First, we consider an open-loop collocated structural system and obtain upper bounds for the H₂ and L₂-L∞ system norms using a solution for the linear matrix inequality formulation of the norm analysis conditions. Next, we address the problem of static output velocity feedback controller design for such systems. By employing simple algebraic tools, we derive an explicit parametrization of the controller gains which guarantees a prescribed level of H₂ or L₂-L∞ norm performance of the closed-loop system. Finally, numerical examples are presented to validate the advantages of the proposed techniques. The effectiveness of the proposed bounds and output feedback control design methods becomes apparent especially in very large-scale structural systems, where control design methods based on the solution of Lyapunov or Riccati equations are time-consuming or intractable.

  7. Fast Simulation of X-ray Projections of Spline-based Surfaces using an Append Buffer

    PubMed Central

    Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca

    2012-01-01

    Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps from tessellation of the splines, projection onto the detector, and drawing are implemented in OpenCL. We introduced a special append buffer for increased performance in order to store the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate. Projections of size 640×480 can be generated within 254 ms. Reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task. Even in the absence of noise, they result in errors up to 9 HU on average, although projection images appear to be correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically. Source code is available at http://conrad.stanford.edu/ PMID:22975431

  8. Fast simulation of x-ray projections of spline-based surfaces using an append buffer

    NASA Astrophysics Data System (ADS)

    Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca

    2012-10-01

    Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, the simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps from tessellation of the splines, projection onto the detector and drawing are implemented in OpenCL. We introduced a special append buffer for increased performance in order to store the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate. Projections of size 640×480 can be generated within 254 ms. Reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task. Even in the absence of noise, they result in errors up to 9 HU on average, although projection images appear to be correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically.

  9. Composite multi-modal vibration control for a stiffened plate using non-collocated acceleration sensor and piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Li, Shengquan; Li, Juan; Mo, Yueping; Zhao, Rong

    2014-01-01

    A novel active method for multi-mode vibration control of an all-clamped stiffened plate (ACSP) is proposed in this paper, using the extended-state-observer (ESO) approach based on non-collocated acceleration sensors and piezoelectric actuators. Considering the estimated capacity of ESO for system state variables, output superposition and control coupling of other modes, external excitation, and model uncertainties simultaneously, a composite control method, i.e., the ESO based vibration control scheme, is employed to ensure the lumped disturbances and uncertainty rejection of the closed-loop system. The phenomenon of phase hysteresis and time delay, caused by non-collocated sensor/actuator pairs, degrades the performance of the control system, even inducing instability. To solve this problem, a simple proportional differential (PD) controller and acceleration feed-forward with an output predictor design produce the control law for each vibration mode. The modal frequencies, phase hysteresis loops and phase lag values due to non-collocated placement of the acceleration sensor and piezoelectric patch actuator are experimentally obtained, and the phase lag is compensated by using the Smith Predictor technology. In order to improve the vibration control performance, the chaos optimization method based on logistic mapping is employed to auto-tune the parameters of the feedback channel. The experimental control system for the ACSP is tested using the dSPACE real-time simulation platform. Experimental results demonstrate that the proposed composite active control algorithm is an effective approach for suppressing multi-modal vibrations.

  10. Defining window-boundaries for genomic analyses using smoothing spline techniques

    DOE PAGES Beta

    Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia

    2015-04-17

    High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.
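
    The two-step procedure described in the abstract (fit a cubic smoothing spline, then take the fitted curve's inflection points as window boundaries) can be sketched with SciPy. The function below is an illustrative reconstruction, not the authors' code; the smoothing level and synthetic data are assumptions:

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def window_boundaries(pos, stat, s=None):
        """Fit a cubic smoothing spline to per-marker statistics and return
        the positions where its second derivative changes sign, i.e. the
        inflection points that serve as boundaries of adjacent windows."""
        spline = UnivariateSpline(pos, stat, k=3, s=s)
        d2 = spline.derivative(2)(pos)
        change = np.diff(np.sign(d2)) != 0
        return pos[1:][change]

    rng = np.random.default_rng(0)
    pos = np.linspace(0.0, 10.0, 200)                     # marker positions
    stat = np.sin(pos) + 0.1 * rng.normal(size=pos.size)  # noisy test statistic
    bounds = window_boundaries(pos, stat, s=2.0)          # data-driven boundaries
    ```

    Because the boundaries fall wherever the fitted spline inflects, window sizes vary along the genome instead of being fixed in advance.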

  11. Defining window-boundaries for genomic analyses using smoothing spline techniques

    SciTech Connect

    Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia

    2015-04-17

    High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.

  12. Monotonicity preserving splines using rational Ball cubic interpolation

    NASA Astrophysics Data System (ADS)

    Zakaria, Wan Zafira Ezza Wan; Jamal, Ena; Ali, Jamaludin Md.

    2015-10-01

    In scientific applications and Computer-Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity [1]. The required curves have to be smooth, shape-preserving interpolants. In this paper a rational cubic spline in Ball representation is developed to generate an interpolant that preserves monotonicity. Three shape parameters are introduced to control the shape of the interpolant, and the shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions on the rational cubic interpolant are derived, and the proposed rational cubic interpolant gives visually pleasing results.
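
    The rational Ball-form interpolant itself is not available in standard libraries, but a related monotonicity-preserving cubic scheme (Fritsch-Carlson, exposed in SciPy as PCHIP) illustrates the same idea of constraining a piecewise cubic so the interpolant never overshoots monotone data:

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    # Monotone data with a sharp rise: an unconstrained cubic spline can
    # overshoot here, while a shape-preserving scheme keeps monotonicity.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.0, 0.1, 0.2, 2.0, 2.1])

    f = PchipInterpolator(x, y)            # monotone piecewise-cubic Hermite
    xs = np.linspace(0.0, 4.0, 401)
    ys = f(xs)
    ```

    Shape parameters in the rational spline play the same role as the derivative limiting in PCHIP: they buy monotonicity at the cost of some extra curvature near steep segments.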

  13. Control theory and splines, applied to signature storage

    NASA Technical Reports Server (NTRS)

    Enqvist, Per

    1994-01-01

    In this report the problem we are going to study is the interpolation of a set of points in the plane with the use of control theory. We will discover how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problems. Actually we will see that the important parameters will be the two eigenvalues of the control matrix.

  14. A rational trigonometric spline to visualize positive data

    NASA Astrophysics Data System (ADS)

    Bashir, Uzma; Ali, Jamaludin Md.

    2014-07-01

    In this paper, we construct a cubic trigonometric Bézier curve with two shape parameters on the basis of cubic trigonometric Bernstein-like blending functions. The proposed curve has all geometric properties of the ordinary cubic Bézier curve. Later, based on these trigonometric blending functions a C1 rational trigonometric spline with four shape parameters to preserve positivity of positive data is generated. Simple data dependent constraints are developed for these shape parameters to get a graphically smooth and visually pleasant curve.

  15. Explicit B-spline regularization in diffeomorphic image registration.

    PubMed

    Tustison, Nicholas J; Avants, Brian B

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline "flavored" diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  16. Modified simple formulation on a collocated grid with an assessment of the simplified QUICK scheme

    SciTech Connect

    Rahman, M.M.; Miettinen, A.; Siikonen, T.

    1996-10-01

    The simplified QUICK scheme (transverse curvature terms are neglected) is extended to a nonuniform, rectangular, collocated grid system for the solution of two-dimensional fluid flow problems using a vertex-based finite-volume approximation. The influence of the non-pressure gradient source term is added to the Rhie-Chow interpolation method, and a local-mode Fourier analysis of the modified scheme demonstrates that characteristically, it is strongly elliptic and has a high-frequency damping capability, which effectively eliminates the grid-scale pressure oscillations. Within this framework, the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) iteration procedure is constructed. A comparison between the present method and the control-volume-based finite-element method (CVFEM) with vorticity-stream function formulation for free convection in a cavity indicates that the proposed scheme can be applied successfully to fluid flow and heat transfer problems.

  17. Change detection of lung cancer using image registration and thin-plate spline warping

    NASA Astrophysics Data System (ADS)

    Almasslawi, Dawood M. S.; Kabir, Ehsanollah

    2011-06-01

    Lung cancer has the lowest survival rate comparing to other types of cancer and determination of the patient's cancer stage is the most vital issue regarding the cancer treatment process. In most cases accurate estimation of the cancer stage is not easy to achieve. The changes in the size of the primary tumor can be detected using image registration techniques. The registration method proposed in this paper uses Normalized Mutual Information metric and Thin-Plate Spline transformation function for the accurate determination of the correspondence between series of the lung cancer Computed Tomography images. The Normalized Mutual Information is used as a metric for the rigid registration of the images to better estimate the global motion of the tissues and the Thin Plate Spline is used to deform the image in a locally supported manner. The Control Points needed for the transformation are extracted semiautomatically. This new approach in change detection of the lung cancer is implemented using the Insight Toolkit. The results from implementing this method on the CT images of 8 patients provided a satisfactory quality for change detection of the lung cancer.
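
    The paper's implementation (built on the Insight Toolkit) is not reproduced here; a minimal sketch of the thin-plate spline part, using SciPy's RBFInterpolator with made-up control-point pairs, looks like this:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Matched control points: positions in the first CT series and their
    # corresponding positions in the follow-up series (values invented).
    src = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
    dst = src.copy()
    dst[4] += [0.10, 0.05]                  # only the centre point moves

    tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')
    query = np.array([[0.25, 0.25], [0.75, 0.75]])
    warped = tps(query)                     # smooth deformation of new points
    ```

    With zero smoothing the thin-plate spline passes exactly through the control points, while points between them are deformed smoothly, which is what makes it suitable for tracking local tumor changes.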

  18. Reuse of B-spline-based shape interrogation tools for triangular mesh models

    NASA Astrophysics Data System (ADS)

    Kobashi, Yuji; Suzuki, Junya; Joo, Han Kyul; Maekawa, Takashi

    2012-06-01

    In many engineering applications, a smooth surface is often approximated by a mesh of polygons. In a number of downstream applications, it is frequently necessary to estimate the differential invariant properties of the underlying smooth surfaces of the mesh. Such applications include first-order surface interrogation methods that entail the use of isophotes, reflection lines, and highlight lines, and second-order surface interrogation methods such as the computation of geodesics, geodesic offsets, lines of curvature, and detection of umbilics. However, we are not able to directly apply these tools that were developed for B-spline surfaces to tessellated surfaces. This article describes a unifying technique that enables us to use the shape interrogation tools developed for B-spline surface on objects represented by triangular meshes. First, the region of interest of a given triangular mesh is transformed into a graph function (z=h(x,y)) so that we can treat the triangular domain within the rectangular domain. Each triangular mesh is then converted into a cubic graph triangular Bézier patch so that the positions as well as the derivatives of the surface can be evaluated for any given point (x,y) in the domain. A number of illustrative examples are given that show the effectiveness of our algorithm.

  19. Numerical aspects of a spline-based multiresolution recovery of the harmonic mass density out of gravity functionals

    NASA Astrophysics Data System (ADS)

    Michel, Volker; Wolf, Kerstin

    2008-04-01

    We show the numerical applicability of a multiresolution method based on harmonic splines on the 3-D ball which allows the regularized recovery of the harmonic part of the Earth's mass density distribution out of different types of gravity data, for example, different radial derivatives of the potential, at various positions which need not be located on a common sphere. This approximated harmonic density can be combined with its orthogonal anharmonic complement, for example, determined out of the splitting function of free oscillations, to an approximation of the whole mass density function. The applicability of the presented tool is demonstrated by several test calculations based on simulated gravity values derived from EGM96. The method yields a multiresolution in the sense that the localization of the constructed spline basis functions can be increased which yields in combination with more data a higher resolution of the resulting spline. Moreover, we show that a locally improved data situation allows a highly resolved recovery in this particular area in combination with a coarse approximation elsewhere which is an essential advantage of this method, for example, compared to polynomial approximation.

  20. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Brown, Charles G., Jr.; Adcock, Aaron B.; Azevedo, Stephen G.; Liebman, Judith A.; Bond, Essex J.

    2011-03-01

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
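
    The Hutchinson-deHoog algorithm itself is an efficient band-matrix method; as a conceptual stand-in, a dense Whittaker smoother with a grid search over the penalty illustrates how minimizing the GCV score selects the smoothing level automatically (the data and penalty grid here are invented):

    ```python
    import numpy as np

    def whittaker_gcv(y, lambdas):
        """Pick the smoothing level by minimizing the Generalized Cross
        Validation score GCV(lam) = n * RSS / (n - tr(H))**2, where H is
        the smoother matrix of a second-difference (Whittaker) penalty."""
        n = len(y)
        D = np.diff(np.eye(n), 2, axis=0)       # second-difference operator
        best = None
        for lam in lambdas:
            H = np.linalg.inv(np.eye(n) + lam * D.T @ D)
            z = H @ y
            gcv = n * np.sum((y - z) ** 2) / (n - np.trace(H)) ** 2
            if best is None or gcv < best[0]:
                best = (gcv, lam, z)
        return best

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 100)
    signal = np.sin(2.0 * np.pi * x)
    y = signal + 0.2 * rng.normal(size=x.size)   # noisy channel samples
    gcv, lam, z = whittaker_gcv(y, [0.1, 1.0, 10.0, 100.0, 1000.0])
    ```

    The real NIF algorithm additionally handles redundant time samples, missing points, and per-channel noise weights, which this sketch omits.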

  1. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    SciTech Connect

    Brown, C; Adcock, A; Azevedo, S; Liebman, J; Bond, E

    2010-12-28

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.

  2. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  3. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance.
CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increasing curvature smoothness by eliminating curvature oscillations and bumps (see figure).
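
    The roughness measure mentioned above, the sum of squared jumps in the third derivative of a cubic-spline interpolation, can be computed directly with SciPy; the data below is a stand-in, not an actual airfoil:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.linspace(0.0, 1.0, 20)     # chordwise stations (stand-in data)
    y = np.cos(3.0 * x)               # stand-in for airfoil ordinates

    cs = CubicSpline(x, y)
    # On interval i the spline is c0*(t - x_i)**3 + ..., so its third
    # derivative is the piecewise constant 6*c0; the jumps across interior
    # knots quantify the deviation from smoothness that CFACS minimizes.
    third = 6.0 * cs.c[0]
    roughness = float(np.sum(np.diff(third) ** 2))
    ```

    CFACS embeds this quantity in a constrained minimization with a fitting-error tolerance, rather than evaluating it once as here.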

  4. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    SciTech Connect

    Alwan, Aravind; Aluru, N.R.

    2013-12-15

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.

  5. Communication: favorable dimensionality scaling of rectangular collocation with adaptable basis functions up to 7 dimensions.

    PubMed

    Manzhos, Sergei; Chan, Matthew; Carrington, Tucker

    2013-08-01

    We show that by using a rectangular collocation method with a small basis of parameterized functions, it is possible to compute a vibrational spectrum by solving the Schrödinger equation in 7D from a small number of ab initio calculations without a potential surface. The method is ideal for spectra of molecules adsorbed on a surface. In this paper, it is applied to calculate experimentally relevant energy levels of acetic acid adsorbed on the (101) surface of anatase TiO2. In this case, to obtain levels of experimental accuracy, increasing the number of dimensions from 4 to 7 increases the number of required potential points from about 1000 to about 10,000 and the number of basis functions from 126 to 792: the scaling is very attractive. PMID:23927236

  6. A numerical solution of the linear Boltzmann equation using cubic B-splines.

    PubMed

    Khurana, Saheba; Thachuk, Mark

    2012-03-01

    A numerical method using cubic B-splines is presented for solving the linear Boltzmann equation. The collision kernel for the system is chosen as the Wigner-Wilkins kernel. A total of three different representations for the distribution function are presented. Eigenvalues and eigenfunctions of the collision matrix are obtained for various mass ratios and compared with known values. Distribution functions, along with first and second moments, are evaluated for different mass and temperature ratios. Overall it is shown that the method is accurate and well behaved. In particular, moments can be predicted with very few points if the representation is chosen well. This method produces sparse matrices, can be easily generalized to higher dimensions, and can be cast into efficient parallel algorithms. PMID:22401425

  7. Image Quality Improvement in Adaptive Optics Scanning Laser Ophthalmoscopy Assisted Capillary Visualization Using B-spline-based Elastic Image Registration

    PubMed Central

    Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa

    2013-01-01

    Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
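
    The paper does not spell out its exact CNR formula; one common definition (mean-signal difference scaled by background noise) can be sketched as follows, with invented pixel values:

    ```python
    import numpy as np

    def cnr(vessel, background):
        """Contrast-to-noise ratio: difference of mean vessel and mean
        background intensity, scaled by the background standard deviation.
        (One of several CNR conventions; the paper's may differ.)"""
        return abs(vessel.mean() - background.mean()) / background.std(ddof=1)

    vessel = np.array([200.0, 205.0, 198.0, 202.0])      # vessel pixels
    background = np.array([100.0, 104.0, 96.0, 100.0])   # background pixels
    value = cnr(vessel, background)
    ```

    Better registration concentrates vessel signal and suppresses motion blur in the background, which is why CNR rises after elastic alignment.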

  8. A Logarithmic Complexity Floating Frame of Reference Formulation with Interpolating Splines for Articulated Multi-Flexible-Body Dynamics

    PubMed Central

    Ahn, W.; Anderson, K.S.; De, S.

    2013-01-01

    An interpolating spline-based approach is presented for modeling multi-flexible-body systems in the divide-and-conquer (DCA) scheme. This algorithm uses the floating frame of reference formulation and piecewise spline functions to construct and solve the non-linear equations of motion of the multi-flexible-body system undergoing large rotations and translations. The new approach is compared with the flexible DCA (FDCA) that uses the assumed modes method [1]. The FDCA, in many cases, must resort to sub-structuring to accurately model the deformation of the system. We demonstrate, through numerical examples, that the interpolating spline-based approach is comparable in accuracy and superior in efficiency to the FDCA. The present approach is appropriate for modeling flexible mechanisms with thin 1D bodies undergoing large rotations and translations, including those with irregular shapes. As such, the present approach extends the current capability of the DCA to model deformable systems. The algorithm retains the theoretical logarithmic complexity inherent in the DCA when implemented in parallel. PMID:24124265

  9. Numerical solution of the time-independent Dirac equation for diatomic molecules: B splines without spurious states

    NASA Astrophysics Data System (ADS)

    Fillion-Gourdeau, François; Lorin, Emmanuel; Bandrauk, André D.

    2012-02-01

    Two numerical methods are used to evaluate the relativistic spectrum of the two-center Coulomb problem (for the H₂⁺ and Th₂¹⁷⁹⁺ diatomic molecules) in the fixed nuclei approximation by solving the single-particle time-independent Dirac equation. The first one is based on a min-max principle and uses a two-spinor formulation as a starting point. The second one is the Rayleigh-Ritz variational method combined with kinematically balanced basis functions. Both methods use a B-spline basis function expansion. We show that accurate results can be obtained with both methods and that no spurious states appear in the discretization process.

  10. Use of tensor product splines in magnet optimization

    SciTech Connect

    Davey, K.R.

    1999-05-01

    Variational metrics and other direct search techniques have proved useful in magnetic optimization. One such technique is to first fit a smooth function to the data of the desired optimization parameter. If this fit is smoothly differentiable, a number of powerful techniques become available for the optimization. The author shows the usefulness of tensor product splines in accomplishing this end. Proper choice of augmented knot placement not only makes the fit very accurate, but also allows for differentiation. Thus the gradients required with direct optimization in bivariate and trivariate applications are robustly generated.
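
    The idea of fitting a tensor product spline to gridded data and then differentiating the smooth fit to obtain optimization gradients can be sketched with SciPy; the test surface here is synthetic, not magnet data:

    ```python
    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    x = np.linspace(0.0, 1.0, 30)
    y = np.linspace(0.0, 1.0, 30)
    Z = np.sin(np.pi * x)[:, None] * np.cos(np.pi * y)[None, :]  # gridded data

    spl = RectBivariateSpline(x, y, Z)     # bicubic tensor product spline
    # Smooth differentiability: partial derivatives come from the fit itself,
    # giving the gradients a direct-search optimizer needs.
    gx = spl(0.3, 0.4, dx=1)[0, 0]
    gy = spl(0.3, 0.4, dy=1)[0, 0]
    ```

    For the trivariate case the same pattern applies with a 3-D tensor product fit, at the cost of a denser knot grid.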

  11. Symmetrical and Asymmetrical Scaffolding of L2 Collocations in the Context of Concordancing

    ERIC Educational Resources Information Center

    Rezaee, Abbas Ali; Marefat, Hamideh; Saeedakhtar, Afsaneh

    2015-01-01

    Collocational competence is recognized to be integral to native-like L2 performance, and concordancing can be of assistance in gaining this competence. This study reports on an investigation into the effect of symmetrical and asymmetrical scaffolding on the collocational competence of Iranian intermediate learners of English in the context of

  12. Collocational Links in the L2 Mental Lexicon and the Influence of L1 Intralexical Knowledge

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2011-01-01

    This article assesses the influence of L1 intralexical knowledge on the formation of L2 intralexical collocations. Two tests, a primed lexical decision task (LDT) and a test of receptive collocational knowledge, were administered to a group of non-native speakers (NNSs) (L1 Swedish), with native speakers (NSs) of English serving as controls on the

  13. A formulation consideration for orthogonal collocation procedures. [for parabolic differential equations simulating fluid-mechanical processes

    NASA Technical Reports Server (NTRS)

    Lashmet, P. K.; Woodrow, P. T.

    1975-01-01

    Numerical instabilities often arise in the use of high-ordered collocation approximations for numerically solving parabolic partial differential equations. These problems may be reduced by formulations involving evaluation of collocation polynomials rather than combination of the polynomials into a power series. As an illustration, two formulations using shifted Legendre polynomials of order 26 and less are compared.
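
    The stability point can be illustrated with NumPy: a high-order shifted Legendre polynomial is evaluated through the three-term recurrence rather than first combined into a monomial power series, whose large alternating coefficients invite cancellation. The order-26 case from the abstract, with the monomial route shown only for contrast:

    ```python
    import numpy as np
    from numpy.polynomial import legendre, polynomial

    n = 26
    coef = np.zeros(n + 1)
    coef[n] = 1.0                          # select P_26

    # Shifted Legendre on [0, 1]: P*_n(x) = P_n(2x - 1), evaluated through
    # the numerically stable three-term recurrence inside legval.
    x = np.linspace(0.0, 1.0, 7)
    stable = legendre.legval(2.0 * x - 1.0, coef)

    # Expanding into a monomial power series first creates large alternating
    # coefficients; summing them costs accuracy as the order grows.
    mono = polynomial.polyval(2.0 * x - 1.0, legendre.leg2poly(coef))
    ```

    At order 26 in double precision the two routes still roughly agree, but the gap widens rapidly at higher orders, which is the instability the abstract's formulation avoids.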

  14. Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations

    ERIC Educational Resources Information Center

    Liu, Dilin

    2010-01-01

    Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,

  15. Study on the Causes and Countermeasures of the Lexical Collocation Mistakes in College English

    ERIC Educational Resources Information Center

    Yan, Hansheng

    2010-01-01

    Lexical collocation in English is an important topic in linguistic theory, and also a research topic receiving more and more emphasis in English teaching practice in China. Collocational ability determines whether learners can use real English masterfully in effective communication. In many years' English teaching practice,

  16. Effects of Web-Based Concordancing Instruction on EFL Students' Learning of Verb-Noun Collocations

    ERIC Educational Resources Information Center

    Chan, Tun-pei; Liou, Hsien-Chin

    2005-01-01

    This study investigates the influence of using five web-based practice units on English verb-noun collocations with the design of a web-based Chinese-English bilingual concordancer (keyword retrieval program) on collocation learning. Thirty-two college EFL students participated by taking a pre-test and two post-tests, and responding to a

  17. On the Effect of Gender and Years of Instruction on Iranian EFL Learners' Collocational Competence

    ERIC Educational Resources Information Center

    Ganji, Mansoor

    2012-01-01

    This study investigates the Iranian EFL learners' Knowledge of Lexical Collocation at three academic levels: freshmen, sophomores, and juniors. The participants were forty three English majors doing their B.A. in English Translation studies in Chabahar Maritime University. They took a 50-item fill-in-the-blank test of lexical collocations. The

  18. English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information

    ERIC Educational Resources Information Center

    Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji

    2012-01-01

    We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…

  19. Collocation, Semantic Prosody, and Near Synonymy: A Cross-Linguistic Perspective

    ERIC Educational Resources Information Center

    Xiao, Richard; McEnery, Tony

    2006-01-01

    This paper explores the collocational behaviour and semantic prosody of near synonyms from a cross-linguistic perspective. The importance of these concepts to language learning is well recognized. Yet while collocation and semantic prosody have recently attracted much interest from researchers studying the English language, there has been little

  1. Corpora and Collocations in Chinese-English Dictionaries for Chinese Users

    ERIC Educational Resources Information Center

    Xia, Lixin

    2015-01-01

    The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…

  2. Investigating the Viability of a Collocation List for Students of English for Academic Purposes

    ERIC Educational Resources Information Center

    Durrant, Philip

    2009-01-01

    A number of researchers are currently attempting to create listings of important collocations for students of EAP. However, so far these attempts have (1) failed to include positionally-variable collocations, and (2) not taken sufficient account of variation across disciplines. The present paper describes the creation of one listing of

  3. A comparison of boundary and global collocation solutions for K(I) and CMOD calibration functions

    SciTech Connect

    Sanford, R.J.; Kirk, M.T. (U.S. Navy, David W. Taylor Naval Ship Research and Development Center, Annapolis, MD)

    1991-03-01

    Global and boundary collocation solutions for K(I), CMOD, and the full-field stress patterns of a single-edge notched tension specimen were compared to determine the accuracy of each technique and the utility of each for determining solutions for the short and the deep crack case. It was demonstrated that inclusion of internal stress conditions in the collocation, i.e., performing a global rather than a boundary collocation solution, expands the range of crack lengths over which accurate results can be obtained. In particular, the global collocation approach provided accurate results for crack lengths between 10 percent and 80 percent of the specimen width for a typical specimen geometry. Comparable accuracy for boundary collocation was found only for crack lengths between 20 percent and 60 percent of the specimen width. 27 refs.

  4. Full-turn symplectic map from a generator in a Fourier-spline basis

    SciTech Connect

    Berg, J.S.; Warnock, R.L.; Ruth, R.D.; Forest, E.

    1993-04-01

    Given an arbitrary symplectic tracking code, one can construct a full-turn symplectic map that approximates the result of the code to high accuracy. The map is defined implicitly by a mixed-variable generating function. The implicit definition is no great drawback in practice, thanks to an efficient use of Newton's method to solve for the explicit map at each iteration. The generator is represented by a Fourier series in angle variables, with coefficients given as B-spline functions of action variables. It is constructed by using results of single-turn tracking from many initial conditions. The method has been applied to a realistic model of the SSC in three degrees of freedom. Orbits can be mapped symplectically for 10⁷ turns on an IBM RS6000 model 320 workstation, in a run of about one day.
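
    The implicit-map idea can be illustrated with a hedged sketch: a toy one-degree-of-freedom mixed-variable generator (invented for illustration, not the paper's Fourier-spline generator), with Newton's method solving for the new momentum, and a numerical check that the resulting map is symplectic (unit Jacobian determinant).

```python
# Toy F2-type generator: F2(q, P) = q*P + eps*q^2*P^2 (illustrative only).
# The map is implicit: p = dF2/dq(q, P) must be solved for P by Newton.
eps = 0.1

def dF2_dq(q, P):
    return P + 2 * eps * q * P**2          # equals the old momentum p

def dF2_dP(q, P):
    return q + 2 * eps * q**2 * P          # equals the new coordinate Q

def map_step(q, p, tol=1e-12):
    """Apply the implicitly defined map once: Newton-solve for P, then get Q."""
    P = p                                  # starting guess
    for _ in range(50):
        f = dF2_dq(q, P) - p
        fp = 1 + 4 * eps * q * P           # derivative of dF2/dq w.r.t. P
        P_new = P - f / fp
        if abs(P_new - P) < tol:
            P = P_new
            break
        P = P_new
    return dF2_dP(q, P), P                 # (Q, P)

Q, P = map_step(1.0, 0.5)

# Any map from an F2 generator is symplectic; in one degree of freedom
# that means a Jacobian determinant of exactly 1 (checked numerically).
h = 1e-6
Q1, P1 = map_step(1.0 + h, 0.5)
Q2, P2 = map_step(1.0 - h, 0.5)
Q3, P3 = map_step(1.0, 0.5 + h)
Q4, P4 = map_step(1.0, 0.5 - h)
det = ((Q1 - Q2) * (P3 - P4) - (Q3 - Q4) * (P1 - P2)) / (4 * h * h)
```

The Newton iteration typically converges in a handful of steps from the guess P = p, which is why the implicit definition is cheap in practice.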

  5. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional real reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
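
    The role of regularization in converting an ill-posed estimation problem into a well-posed one can be shown with a minimal Tikhonov sketch; a generic ill-conditioned linear system (a Hilbert matrix) stands in for the history-matching operator, and all values are illustrative, not the authors' algorithm.

```python
import numpy as np

# Ill-conditioned linear inverse problem: recover x from b = A x + noise.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# Naive solve: the small singular values of A amplify the noise enormously.
x_naive = np.linalg.solve(A, b)

# Tikhonov regularization: minimize ||A x - b||^2 + lam * ||x||^2,
# i.e. solve the well-posed normal equations (A^T A + lam I) x = A^T b.
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The regularized estimate stays near the true parameters while the naive solution is dominated by amplified noise, which is the practical content of "converting an ill-posed problem into a well-posed one".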

  6. Free vibration analysis of axisymmetric laminated composite circular and annular plates using Chebyshev collocation

    NASA Astrophysics Data System (ADS)

    Powmya, A.; Narasimhan, M. C.

    2015-06-01

    Solutions, based on the principle of collocating the equations of motion at Chebyshev zeroes, are presented for the free vibration analysis of laminated, polar orthotropic, circular and annular plates. The analysis is restricted to axisymmetric free vibration of the plates and employs first-order shear deformation theory for the displacement field, in terms of the midplane displacements u, ψ and w. The eigenvalue problem is defined by three equations of motion in the radial co-ordinate r, the radial variation of the displacements being represented in polynomial series, with appropriate boundary conditions. Numerical results are presented to show the validity and accuracy of the proposed method. Results of parametric studies for laminated polar orthotropic circular and annular plates with different boundary conditions, orthotropic ratios, lamination sequences, numbers of layers and shear deformation are also presented.
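
    Collocation at Chebyshev zeroes can be sketched on a much simpler model problem than the plate equations: a two-point boundary-value problem u'' = f with homogeneous boundary conditions, expanded in Chebyshev polynomials (all parameters illustrative).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_collocation_bvp(f, N=16):
    """Solve u'' = f(x) on [-1, 1] with u(-1) = u(1) = 0 by collocating the
    ODE at the zeros of T_{N-1} and appending two boundary-condition rows."""
    k = np.arange(1, N)
    x = np.cos((2 * k - 1) * np.pi / (2 * (N - 1)))   # Chebyshev zeroes
    A = np.zeros((N + 1, N + 1))
    rhs = np.zeros(N + 1)
    for j in range(N + 1):
        e = np.zeros(N + 1)
        e[j] = 1.0
        A[:N - 1, j] = C.chebval(x, C.chebder(e, 2))  # u'' of T_j at nodes
        A[N - 1, j] = C.chebval(-1.0, e)              # u(-1) = 0
        A[N, j] = C.chebval(1.0, e)                   # u(+1) = 0
    rhs[:N - 1] = f(x)
    return np.linalg.solve(A, rhs)                    # Chebyshev coefficients

# test problem with known solution u(x) = sin(pi x)
c = cheb_collocation_bvp(lambda x: -np.pi**2 * np.sin(np.pi * x))
xt = np.linspace(-1, 1, 101)
err = np.max(np.abs(C.chebval(xt, c) - np.sin(np.pi * xt)))
```

For smooth solutions the error decays spectrally with N, which is the attraction of Chebyshev collocation over low-order finite differences.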

  7. An Automatic Collocation Writing Assistant for Taiwanese EFL Learners: A Case of Corpus-Based NLP Technology

    ERIC Educational Resources Information Center

    Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin

    2008-01-01

    Previous work in the literature reveals that EFL learners were deficient in collocations that are a hallmark of near native fluency in learner's writing. Among different types of collocations, the verb-noun (V-N) one was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation

  8. The determination of gravity anomalies from geoid heights using the inverse Stokes' formula, Fourier transforms, and least squares collocation

    NASA Technical Reports Server (NTRS)

    Rummel, R.; Sjoeberg, L.; Rapp, R. H.

    1978-01-01

    A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.

  9. Generation of global VTEC maps from low latency GNSS observations based on B-spline modelling and Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte

    2015-04-01

    In May 2014 DGFI-TUM (the former DGFI) and the German Space Situational Awareness Centre (GSSAC) started to develop an OPerational Tool for Ionospheric Mapping And Prediction (OPTIMAP); in November 2014 the Institute of Astrophysics at the University of Göttingen (IAG) joined the group as the third partner. This project aims at the computation and prediction of maps of the vertical total electron content (VTEC) and the electron density distribution of the ionosphere on a global scale, from various space-geodetic observation techniques such as GNSS and satellite altimetry as well as from Sun observations. In this contribution we present first results, i.e. a near-real-time processing framework for generating VTEC maps by assimilating GNSS (GPS, GLONASS) based ionospheric data into a two-dimensional global B-spline approach. To be more specific, the spatial variations of VTEC are modelled by trigonometric B-spline functions in longitude and by endpoint-interpolating polynomial B-spline functions in latitude. Since B-spline functions are compactly supported and highly localizing, our approach can handle large data gaps appropriately and thus provides a better approximation of data with heterogeneous density and quality than the commonly used spherical harmonics. The presented method models temporal variations of VTEC inside a Kalman filter. The unknown parameters of the filter state vector are composed of the B-spline coefficients as well as the satellite and receiver DCBs. To approximate the temporal variation of these state vector components as part of the filter, a dynamical model has to be set up. The current implementation of the filter allows selecting between a random walk process, a Gauss-Markov process and a dynamic process driven by an empirical ionosphere model, e.g. the International Reference Ionosphere (IRI).
    For running the model, ionospheric input data are acquired from terrestrial GNSS networks through online archive systems (such as IGS) with approximately one hour latency. Before feeding the filter with new hourly data, the raw GNSS observations are downloaded and pre-processed via geometry-free linear combinations to provide signal delay information including the ionospheric effects and the differential code biases. Next steps will implement further space-geodetic techniques and will introduce Sun observations into the procedure. The final goal is to develop a time-dependent model of the electron density based on different geodetic and solar observations.
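
    The random-walk Kalman filter at the heart of such a scheme can be sketched in a much-simplified form: the state vector plays the role of the B-spline coefficients, and a small random design matrix stands in for the B-spline basis evaluations and DCB terms (all sizes and noise levels invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(42)
n_coef, n_obs, T = 4, 6, 200
q, r = 1e-4, 0.05                     # process / measurement noise variances

x_true = rng.standard_normal(n_coef)  # "true" coefficients, unknown to the filter
x_est = np.zeros(n_coef)
P = np.eye(n_coef)                    # initial state covariance
errs = []

for _ in range(T):
    # truth evolves as a random walk, matching the filter's dynamic model
    x_true = x_true + np.sqrt(q) * rng.standard_normal(n_coef)
    H = rng.standard_normal((n_obs, n_coef))   # stand-in design matrix
    y = H @ x_true + np.sqrt(r) * rng.standard_normal(n_obs)

    # predict: random walk means F = I and Q = q*I
    P = P + q * np.eye(n_coef)
    # update with the new batch of observations
    S = H @ P @ H.T + r * np.eye(n_obs)
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(n_coef) - K @ H) @ P
    errs.append(float(np.linalg.norm(x_est - x_true)))

final_err = float(np.mean(errs[-50:]))
```

Swapping the identity transition for a Gauss-Markov decay factor, or for a step of an empirical ionosphere model, changes only the predict stage; the update stage is untouched.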

  10. Robustness properties of LQG optimized compensators for collocated rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    In this paper we study the robustness with respect to stability of the closed-loop system with collocated rate sensor using LQG (mean square rate) optimized compensators. Our main result is that the transmission zeros of the compensator are precisely the structure modes when the actuator/sensor locations are 'pinned' and/or 'clamped': i.e., motion in the direction sensed is not allowed. We have stability even under parameter mismatch, except in the unlikely situation where such a mode frequency of the assumed system coincides with an undamped mode frequency of the real system and the corresponding mode shape is an eigenvector of the compensator transfer function matrix at that frequency. For a truncated modal model - such as that of the NASA LaRC Phase Zero Evolutionary model - the transmission zeros of the corresponding compensator transfer function can be interpreted as the structure modes when motion in the directions sensed is prohibited.

  11. The construction of operational matrix of fractional derivatives using B-spline functions

    NASA Astrophysics Data System (ADS)

    Lakestani, Mehrdad; Dehghan, Mehdi; Irandoust-pakchin, Safar

    2012-03-01

    Fractional calculus has been used to model physical and engineering processes that are found to be best described by fractional differential equations. For that reason we need a reliable and efficient technique for the solution of fractional differential equations. Here we construct the operational matrix of fractional derivative of order α in the Caputo sense using the linear B-spline functions. The main characteristic behind the approach using this technique is that it reduces such problems to those of solving a system of algebraic equations thus we can solve directly the problem. The method is applied to solve two types of fractional differential equations, linear and nonlinear. Illustrative examples are included to demonstrate the validity and applicability of the new technique presented in the current paper.
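
    A closely related construction that also rests on piecewise-linear approximation of the unknown function is the classical L1 scheme for the Caputo derivative; the hedged sketch below is that scheme on a test function with a known fractional derivative, not the paper's operational-matrix method.

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, alpha):
    """L1 approximation of the Caputo derivative of order 0 < alpha < 1 at
    the last grid point, assuming f is piecewise linear between grid points."""
    n = len(f_vals) - 1
    j = np.arange(n)
    b = (j + 1.0) ** (1 - alpha) - j ** (1 - alpha)   # L1 weights
    df = np.diff(f_vals)[::-1]                        # f_{n-j} - f_{n-j-1}
    return dt ** (-alpha) / gamma(2 - alpha) * np.sum(b * df)

alpha, N = 0.5, 200
t = np.linspace(0.0, 1.0, N + 1)
approx = caputo_l1(t ** 2, t[1] - t[0], alpha)
exact = 2.0 / gamma(3 - alpha)    # Caputo D^0.5 of t^2, evaluated at t = 1
```

Applying this operator row by row on a grid yields exactly the kind of lower-triangular "operational matrix" that reduces a linear fractional differential equation to an algebraic system.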

  12. Variable selection in Bayesian smoothing spline ANOVA models: Application to deterministic computer codes

    PubMed Central

    Reich, Brian J.; Storlie, Curtis B.; Bondell, Howard D.

    2009-01-01

    With many predictors, choosing an appropriate subset of the covariates is a crucial, and difficult, step in nonparametric regression. We propose a Bayesian nonparametric regression model for curve-fitting and variable selection. We use the smoothing spline ANOVA framework to decompose the regression function into interpretable main effect and interaction functions. Stochastic search variable selection via MCMC sampling is used to search for models that fit the data well. Also, we show that variable selection is highly sensitive to hyperparameter choice and develop a technique to select hyperparameters that control the long-run false positive rate. The method is used to build an emulator for a complex computer model for two-phase fluid flow. PMID:19789732

  13. Using a radial ultrasound probe's virtual origin to compute midsagittal smoothing splines in polar coordinates.

    PubMed

    Heyne, Matthias; Derrick, Donald

    2015-12-01

    Tongue surface measurements from midsagittal ultrasound scans are effectively arcs with deviations representing tongue shape, but smoothing-spline analysis of variances (SSANOVAs) assume variance around a horizontal line. Therefore, calculating SSANOVA average curves of tongue traces in Cartesian Coordinates [Davidson, J. Acoust. Soc. Am. 120(1), 407-415 (2006)] creates errors that are compounded at tongue tip and root where average tongue shape deviates most from a horizontal line. This paper introduces a method for transforming data into polar coordinates similar to the technique by Mielke [J. Acoust. Soc. Am. 137(5), 2858-2869 (2015)], but using the virtual origin of a radial ultrasound transducer as the polar origin-allowing data conversion in a manner that is robust against between-subject and between-session variability. PMID:26723359
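
    The coordinate transformation itself can be sketched as follows, on a synthetic arc-shaped "tongue trace"; the virtual-origin location and radius are invented for illustration. Once the data are in polar form about the virtual origin, smoothing-spline variance is measured radially, which is the point of the method.

```python
import numpy as np

def to_polar(x, y, origin):
    """Express (x, y) points as (theta, r) about a virtual origin,
    e.g. the virtual focal point of a radial ultrasound transducer."""
    dx, dy = x - origin[0], y - origin[1]
    return np.arctan2(dy, dx), np.hypot(dx, dy)

def to_cartesian(theta, r, origin):
    return origin[0] + r * np.cos(theta), origin[1] + r * np.sin(theta)

# synthetic arc-like trace, 80 mm from a virtual origin at (0, -70) mm
origin = (0.0, -70.0)
theta0 = np.linspace(np.radians(60), np.radians(120), 50)
x = origin[0] + 80.0 * np.cos(theta0)
y = origin[1] + 80.0 * np.sin(theta0)

theta, r = to_polar(x, y, origin)      # smoothing would be done on r(theta)
x2, y2 = to_cartesian(theta, r, origin)  # and converted back for display
```

Because r is roughly constant along the arc, averaging and smoothing in (theta, r) avoids the endpoint distortions that arise when variance is measured vertically in Cartesian coordinates.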

  14. Hierarchical Volume Representation with ³√2 Subdivision and Trivariate B-Spline Wavelets

    SciTech Connect

    Linsen, L; Gray, JT; Pascucci, V; Duchaineau, M; Hamann, B

    2002-01-11

    Multiresolution methods provide a means for representing data at multiple levels of detail. They are typically based on a hierarchical data organization scheme and update rules needed for data value computation. We use a data organization that is based on what we call ⁿ√2 subdivision. The main advantage of ⁿ√2 subdivision, compared to quadtree (n = 2) or octree (n = 3) organizations, is that the number of vertices is only doubled in each subdivision step instead of multiplied by a factor of four or eight, respectively. To update data values we use n-variate B-spline wavelets, which yields better approximations for each level of detail. We develop a lifting scheme for n = 2 and n = 3 based on the ⁿ√2-subdivision scheme. We obtain narrow masks that could also provide a basis for view-dependent visualization and adaptive refinement.

  15. MUlti-Dimensional Spline-Based Estimator (MUSE) for Motion Estimation: Algorithm Development and Initial Results

    PubMed Central

    Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.

    2008-01-01

    Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multidimensional displacements/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10⁻⁴ samples in range and 2.2 × 10⁻³ samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10⁻³ samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. 
While our validation of the algorithm was performed using ultrasound data, MUSE is broadly applicable across imaging applications. PMID:18807190
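
    A much-simplified one-dimensional sketch of the underlying idea (a cubic-spline representation of the reference signal plus a bounded scalar minimization, not the MUSE algorithm itself; the pulse shape, sampling rate and delay are invented for illustration):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

fs = 100.0                           # Hz
t = np.arange(0, 2.0, 1 / fs)
true_delay = 0.0137                  # seconds, deliberately off the sample grid

def pulse(t):
    return np.exp(-((t - 1.0) ** 2) / 0.01)

ref = pulse(t)
delayed = pulse(t - true_delay)      # "second" signal, shifted by a sub-sample amount

spline = CubicSpline(t, ref)         # continuous representation of the reference

def sse(d):
    # sum of squared errors between the delayed signal and the
    # spline-shifted reference, restricted to the pulse's support
    m = (t > 0.5) & (t < 1.5)
    return np.sum((delayed[m] - spline(t[m] - d)) ** 2)

res = minimize_scalar(sse, bounds=(0.0, 0.05), method="bounded")
est = res.x                          # sub-sample delay estimate
```

The recovered delay is far finer than the 10 ms sample spacing, which is the point of spline-based sub-sample estimation; MUSE generalizes this matching-function idea to multiple dimensions.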

  16. A GENERALIZED STOCHASTIC COLLOCATION APPROACH TO CONSTRAINED OPTIMIZATION FOR RANDOM DATA IDENTIFICATION PROBLEMS

    SciTech Connect

    Webster, Clayton G; Gunzburger, Max D

    2013-01-01

    We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives will be discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/control can both be deterministic or stochastic. Given an objective we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high dimensional spaces, particularly in the case when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions), and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation FEM approach. This allows for decoupled, moderately high dimensional, parameterized computations of the stochastic optimality system, where at each collocation point, deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates, for the fully discrete problems, will be described and used to compare the efficiency of the method with several other techniques. 
Numerical examples illustrate the theoretical results and demonstrate the distinctions between the various stochastic identification objectives.
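
    The decoupling that stochastic collocation buys can be shown with a one-random-variable sketch: each collocation point requires only an independent deterministic solve, and statistics are recovered as weighted sums. Here a one-unknown algebraic model stands in for the PDE solve, and all values are invented for illustration.

```python
import numpy as np

def deterministic_solve(k):
    """Stand-in deterministic solver: the 'solution' u of the one-unknown
    model problem k * u = 1 is just 1/k (a hypothetical toy problem)."""
    return 1.0 / k

# random input k(xi) = 2 + xi with xi ~ Uniform(-1, 1);
# collocate at Gauss-Legendre nodes: one decoupled solve per node
nodes, weights = np.polynomial.legendre.leggauss(12)

# Gauss-Legendre weights sum to 2, so divide by 2 for the U(-1,1) density
E_u = sum(w * deterministic_solve(2.0 + xi)
          for xi, w in zip(nodes, weights)) / 2.0

exact = np.log(3.0) / 2.0    # E[1/(2+xi)] computed analytically
```

Because the solves at the collocation nodes are completely independent, they parallelize trivially; sparse grids extend the same weighted-sum structure to moderately high stochastic dimension.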

  17. TWO-LEVEL TIME MARCHING SCHEME USING SPLINES FOR SOLVING THE ADVECTION EQUATION. (R826371C004)

    EPA Science Inventory

    A new numerical algorithm using quintic splines is developed and analyzed: quintic spline Taylor-series expansion (QSTSE). QSTSE is an Eulerian flux-based scheme that uses quintic splines to compute space derivatives and Taylor series expansion to march in time. The new scheme...
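
    The spline-derivative-plus-Taylor-march idea can be sketched on the 1-D advection equation u_t + a u_x = 0; the hedged example below uses a cubic rather than quintic spline for brevity and is not the QSTSE scheme itself (grid, CFL number and run time are illustrative).

```python
import numpy as np
from scipy.interpolate import CubicSpline

a, L, N = 1.0, 1.0, 128
x = np.linspace(0, L, N, endpoint=False)
dt = 0.25 * (L / N) / a                 # CFL number 0.25
u = np.sin(2 * np.pi * x)               # smooth initial profile

steps = int(round(0.5 / dt))            # advect to t = 0.5
for _ in range(steps):
    # periodic spline needs the closing point appended
    xs = np.append(x, L)
    us = np.append(u, u[0])
    sp = CubicSpline(xs, us, bc_type="periodic")
    ux = sp(x, 1)                       # spline-computed first derivative
    uxx = sp(x, 2)                      # spline-computed second derivative
    # second-order Taylor expansion in time: u_t = -a u_x, u_tt = a^2 u_xx
    u = u - a * dt * ux + 0.5 * (a * dt) ** 2 * uxx

exact = np.sin(2 * np.pi * (x - a * steps * dt))
err = np.max(np.abs(u - exact))
```

The space derivatives come from the spline and the time march from the truncated Taylor series, exactly the division of labour the abstract describes; a quintic spline would supply the higher derivatives needed for a higher-order expansion.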

  18. On estimating gravity anomalies - A comparison of least squares collocation with conventional least squares techniques

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Lowrey, B.

    1977-01-01

    The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
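
    The regression equations referred to can be written down directly: the signal estimate is ŝ = C_st (C_tt + D)⁻¹ t, with C_tt the data auto-covariance, C_st the signal-data cross-covariance, and D the noise covariance. A hedged 1-D sketch with an assumed Gaussian covariance model (not the paper's geodetic setup; the "anomaly" field and parameters are invented):

```python
import numpy as np

def gauss_cov(a, b, var=1.0, ell=0.5):
    """Isotropic Gaussian covariance C(d) = var * exp(-(d/ell)^2)."""
    d = np.abs(a[:, None] - b[None, :])
    return var * np.exp(-((d / ell) ** 2))

rng = np.random.default_rng(1)
x_obs = np.linspace(0, 2, 25)            # observation locations
x_new = np.linspace(0, 2, 7)             # prediction locations
noise_var = 1e-4

truth = lambda x: np.sin(2 * x)          # hypothetical smooth "anomaly" field
t = truth(x_obs) + np.sqrt(noise_var) * rng.standard_normal(x_obs.size)

C_tt = gauss_cov(x_obs, x_obs)           # data auto-covariance
C_st = gauss_cov(x_new, x_obs)           # signal-data cross-covariance
D = noise_var * np.eye(x_obs.size)       # observation noise covariance

# least squares collocation estimate of the signal at the new points
s_hat = C_st @ np.linalg.solve(C_tt + D, t)
err = np.max(np.abs(s_hat - truth(x_new)))
```

With D interpreted as properly weighted a priori information, this is exactly the regression-equation form the abstract identifies with the conventional weighted least-squares solution.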

  19. On estimating gravity anomalies: A comparison of least squares collocation with least squares techniques

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Lowrey, B.

    1976-01-01

    The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.

  20. On the interpretation of least squares collocation. [for geodetic data reduction

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.

    1976-01-01

    A demonstration is given of the strict mathematical equivalence between the least squares collocation and the classical minimum variance estimates. It is shown that the least squares collocation algorithms are a special case of the modified minimum variance estimates. The computational efficiency of several forms of the general minimum variance estimation algorithm is discussed. It is pointed out that for certain geodetic applications the least squares collocation algorithm may provide a more efficient formulation of the results from the point of view of the computations required.

  1. Collocated comparisons of continuous and filter-based PM2.5 measurements at Fort McMurray, Alberta, Canada

    PubMed Central

    Hsu, Yu-Mei; Wang, Xiaoliang; Chow, Judith C.; Watson, John G.; Percy, Kevin E.

    2016-01-01

    ABSTRACT Collocated comparisons for three PM2.5 monitors were conducted from June 2011 to May 2013 at an air monitoring station in the residential area of Fort McMurray, Alberta, Canada, a city located in the Athabasca Oil Sands Region. Extremely cold winters (down to approximately −40°C) coupled with low PM2.5 concentrations present a challenge for continuous measurements. Both the tapered element oscillating microbalance (TEOM), operated at 40°C (i.e., TEOM40), and the Synchronized Hybrid Ambient Real-time Particulate (SHARP, a Federal Equivalent Method [FEM]) monitor were compared with a Partisol PM2.5 U.S. Federal Reference Method (FRM) sampler. While hourly TEOM40 PM2.5 concentrations were consistently ~20–50% lower than those of SHARP, no statistically significant differences were found between the 24-hr averages for FRM and SHARP. Orthogonal regression (OR) equations derived from FRM and TEOM40 were used to adjust the TEOM40 (i.e., TEOMadj) and improve its agreement with FRM, particularly for the cold season. The 12-year-long hourly TEOMadj measurements from 1999 to 2011 were derived from the OR equations between SHARP and TEOM40 obtained from the 2-year (2011–2013) collocated measurements. The trend analysis combining both TEOMadj and SHARP measurements showed a statistically significant decrease in PM2.5 concentrations with a seasonal slope of −0.15 μg m−3 yr−1 from 1999 to 2014. Implications: Consistency in PM2.5 measurements is needed for trend analysis. Collocated comparison among the three PM2.5 monitors demonstrated the difference between FRM and TEOM, as well as between SHARP and TEOM. The orthogonal regression equations can be applied to correct historical TEOM data to examine long-term trends within the network. PMID:26727574
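
    Orthogonal regression, as used here to adjust one monitor toward another, treats both series as error-prone and fits the principal axis of the paired data. A hedged sketch on synthetic numbers (the slope, intercept and noise levels below are invented for illustration, not the paper's fitted values):

```python
import numpy as np

def orthogonal_regression(x, y):
    """Fit y = b0 + b1*x by minimizing perpendicular distances: the slope
    comes from the first principal component of the centered data."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    # principal axis = eigenvector of the 2x2 scatter matrix
    # belonging to the largest eigenvalue
    w, V = np.linalg.eigh(X.T @ X)
    v = V[:, -1]
    b1 = v[1] / v[0]
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

rng = np.random.default_rng(7)
true_b0, true_b1 = 1.5, 1.4            # hypothetical monitor-to-monitor relation
x = rng.uniform(2, 30, 500)            # e.g. hourly PM2.5 from one monitor, ug/m3
y = true_b0 + true_b1 * x + rng.normal(0, 0.5, 500)
x = x + rng.normal(0, 0.5, 500)        # both monitors carry measurement error

b0, b1 = orthogonal_regression(x, y)
adjusted = b0 + b1 * x                 # adjusted series, analogous to TEOMadj
```

Unlike ordinary least squares, which attributes all error to y and biases the slope toward zero, the orthogonal fit stays consistent when both collocated instruments are noisy.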

  2. Using Spline Functions for the Shape Description of the Surface of Shell Structures

    NASA Astrophysics Data System (ADS)

    Lenda, Grzegorz

    2014-12-01

    The assessment of the cover shape of shell structures is an important issue from the point of view of both safety and functionality of the construction. The most numerous group among this type of construction are objects having the shape of a quadric (cooling towers, tanks for gas and liquids, radio-telescope dishes, etc.). The material from observation of these objects (point sets), collected during periodic measurements, is usually converted into a continuous form in the process of approximation with a quadric surface. The created models are then applied in the assessment of the deformation of the surface over a given period of time. Such a procedure has, however, some significant limitations. Approximation with quadrics allows the determination of the basic dimensions and location of the construction, but it results in ideal objects that provide no information on local surface deformations. These can only be identified by comparing the model with the point set of observations. If the periodic measurements are carried out at independent, separate points, it is impossible to determine the existing deformations directly. The second problem results from the one-equation character of the ideal approximation model: real deformations of the object change its basic parameters, among others the lengths of the semi-axes of the principal quadric. The third problem appears when the construction is not a quadric and no information on the equation describing its shape is available. Adopting the wrong kind of approximation function produces a model with large deviations from the observed points. All the inconveniences mentioned above can be avoided by applying splines to the shape description of the surface of shell structures. The use of functions of this type, however, comes up against other kinds of limitations. 
    This study deals with the above subject, presenting several methods that increase the accuracy and decrease the time of modelling with splines.

  3. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    SciTech Connect

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  5. High Speed and High Accuracy Control of Industrial Articulated Robot Arms with Jerk Restraint by Spline Interpolated Taught Data

    NASA Astrophysics Data System (ADS)

    Goto, Satoru; Iwanaga, Takuya; Kyura, Nobuhiro; Nakamura, Masatoshi

    In industrial robot arms, high-speed and high-accuracy operation is required. In high-speed operation, however, high jerk (i.e., rapid change of acceleration) often arises. Jerk causes deterioration of control performance such as vibration of the tip of a robot arm. It is therefore important to reduce jerk during robot arm operation. In this research, spline interpolation is used to reduce jerk under torque and speed constraints. The effectiveness of the proposed method was confirmed by experimental and simulation results for an actual robot arm.
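
    A minimal sketch of spline-interpolated taught data (illustrative joint positions, not the authors' controller): a clamped cubic spline through the taught points yields continuous acceleration at the knots and finite, piecewise-constant jerk, in contrast to linear point-to-point interpolation, whose acceleration is impulsive.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical taught joint positions (rad) at 0.5 s intervals
t_taught = np.arange(0.0, 3.5, 0.5)
q_taught = np.array([0.0, 0.2, 0.8, 1.0, 0.9, 0.4, 0.1])

# clamped spline: start and end at rest (zero velocity)
sp = CubicSpline(t_taught, q_taught, bc_type="clamped")

tt = np.linspace(0, 3, 601)
vel = sp(tt, 1)        # velocity profile
acc = sp(tt, 2)        # acceleration profile

# acceleration is continuous across the interior knots (C2 interpolant) ...
knot_jump = max(abs(float(sp(k - 1e-9, 2)) - float(sp(k + 1e-9, 2)))
                for k in t_taught[1:-1])
# ... and jerk (third derivative) stays finite everywhere
jerk = sp(tt, 3)
```

In practice the taught points would additionally be checked against the torque (acceleration) and speed limits, rescaling the time intervals where either profile exceeds its constraint.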

  6. B-spline explicit active surfaces: an efficient framework for real-time 3-D region-based segmentation.

    PubMed

    Barbosa, Daniel; Dietenbeck, Thomas; Schaerer, Joel; D'hooge, Jan; Friboulet, Denis; Bernard, Olivier

    2012-01-01

    A new formulation of active contours based on explicit functions has been recently suggested. This novel framework allows real-time 3-D segmentation since it reduces the dimensionality of the segmentation problem. In this paper, we propose a B-spline formulation of this approach, which further improves the computational efficiency of the algorithm. We also show that this framework allows evolving the active contour using local region-based terms, thereby overcoming the limitations of the original method while preserving computational speed. The feasibility of real-time 3-D segmentation is demonstrated using simulated and medical data such as liver computer tomography and cardiac ultrasound images. PMID:22186712

  7. Experimental procedure for the evaluation of tooth stiffness in spline coupling including angular misalignment

    NASA Astrophysics Data System (ADS)

    Curà, Francesca; Mura, Andrea

    2013-11-01

    Tooth stiffness is a very important parameter in studying both the static and dynamic behaviour of spline couplings and gears. Many works concerning tooth stiffness calculation are available in the literature, but experimental results are very rare, especially for spline couplings. In this work, experimental values of spline coupling tooth stiffness have been obtained by means of a special hexapod measuring device. The experimental results have been compared with the corresponding theoretical and numerical ones. The effect of angular misalignment between hub and shaft has also been investigated in the experimental planning.

  8. Determination of airplane model structure from flight data using splines and stepwise regression

    NASA Technical Reports Server (NTRS)

    Klein, V.; Batterson, J. G.

    1983-01-01

    A procedure for the determination of airplane model structure from flight data is presented. The model is based on a polynomial spline representation of the aerodynamic coefficients, and the procedure is implemented by use of a stepwise regression. First, a form of the aerodynamic force and moment coefficients amenable to the utilization of splines is developed. Next, expressions for the splines in one and two variables are introduced. Then the steps in the determination of an aerodynamic model structure and the estimation of parameters are discussed briefly. The focus is on the application to flight data of the techniques developed.

  9. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background of spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS-3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. Data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS-3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.
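The computational gain from exploiting Toeplitz structure can be sketched with SciPy's Levinson-recursion solver, which runs in O(n²) instead of the O(n³) of a dense solve. The matrix below is an invented stand-in for the stationary-covariance matrices that arise in collocation, not data from the study.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Symmetric Toeplitz matrix defined by its first column (illustrative
# values; in collocation such matrices come from stationary covariances).
c = np.array([4.0, 2.0, 1.0, 0.5, 0.25])
b = np.array([1.0, 0.0, 2.0, 0.0, 1.0])

# Levinson-Durbin recursion: O(n^2) work and O(n) storage for the matrix,
# versus O(n^3) work and O(n^2) storage for a dense factorization.
x_fast = solve_toeplitz((c, c), b)

# Reference solution with a dense solver
x_ref = np.linalg.solve(toeplitz(c), b)
assert np.allclose(x_fast, x_ref)
```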

  10. The Chebyshev-Legendre method: Implementing Legendre methods on Chebyshev points

    NASA Technical Reports Server (NTRS)

    Don, Wai Sun; Gottlieb, David

    1993-01-01

    We present a new collocation method for the numerical solution of partial differential equations. This method uses the Chebyshev collocation points, but because of the way the boundary conditions are implemented, it has all the advantages of the Legendre methods. In particular, L2 estimates can be obtained easily for hyperbolic and parabolic problems.

  11. An auroral scintillation observation using precise, collocated GPS receivers

    NASA Astrophysics Data System (ADS)

    Garner, T. W.; Harris, R. B.; York, J. A.; Herbster, C. S.; Minter, C. F., III; Hampton, D. L.

    2011-02-01

    On 10 January 2009, an unusual ionospheric scintillation event was observed by a Global Positioning System (GPS) receiver station in Fairbanks, Alaska. The receiver station is part of the National Geospatial-Intelligence Agency's (NGA) Monitoring Station Network (MSN). Each MSN station runs two identical geodetic-grade, dual-frequency, full-code tracking GPS receivers that share a common antenna. At the Fairbanks station, a third separate receiver with a separate antenna is located nearby. During the 10 January event, ionospheric conditions caused two of the receivers to lose lock on a single satellite. The third receiver tracked through the scintillation. The region of scintillation was collocated with an auroral arc and a slant total electron content (TEC) increase of 5.71 TECu (1 TECu = 10^16 electrons/m^2). The response of the full-code tracking receivers to the scintillation is intriguing. One of these receivers lost lock, but the other receiver did not. This fact argues that a receiver's internal state dictates its reaction to scintillation. Additionally, the scintillation only affected the L2 signal. While this caused the L1 signal to be lost on the semicodeless receiver, the full-code tracking receiver only lost the L1 signal when it attempted to reacquire the satellite link.

  12. Estimates of Mode-S EHS aircraft derived wind observation errors using triple collocation

    NASA Astrophysics Data System (ADS)

    de Haan, S.

    2015-12-01

    Information on the accuracy of a meteorological observation is essential to assess its applicability. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from two-metre temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple collocation method (Stoffelen, 1998) to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated observation errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained using Mode-S EHS (de Haan, 2011). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
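The triple collocation estimator can be sketched numerically. This is a minimal synthetic example of the estimator under the stated independence assumptions, not the study's observation processing; the signal and error magnitudes below are invented (the 1.4 m/s value merely echoes the abstract's order of magnitude).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Common true signal plus three mutually independent errors:
truth = rng.normal(0.0, 3.0, n)
x = truth + rng.normal(0.0, 1.4, n)   # aircraft-like observation
y = truth + rng.normal(0.0, 1.0, n)   # radar-like observation
z = truth + rng.normal(0.0, 0.7, n)   # model equivalent

def triple_collocation_std(a, b, c):
    """Error standard deviation of series 'a' from three collocated series.

    Uses E[(a-b)(a-c)] = var(e_a), which holds when the three errors are
    mutually independent and independent of the true signal.
    """
    da_b = a - b
    da_c = a - c
    return np.sqrt(np.mean((da_b - da_b.mean()) * (da_c - da_c.mean())))

assert abs(triple_collocation_std(x, y, z) - 1.4) < 0.05
assert abs(triple_collocation_std(y, z, x) - 1.0) < 0.05
assert abs(triple_collocation_std(z, x, y) - 0.7) < 0.05
```

Note that differencing the series removes the (much larger) common signal, which is why the truth never needs to be known.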

  13. Assessment of adequate quality and collocation of reference measurements with space-borne hyperspectral infrared instruments to validate retrievals of temperature and water vapour

    NASA Astrophysics Data System (ADS)

    Calbet, X.

    2016-01-01

    A method is presented to assess whether a given reference ground-based point observation, typically a radiosonde measurement, is adequately collocated and sufficiently representative of space-borne hyperspectral infrared instrument measurements. Once this assessment is made, the ground-based data can be used to validate and potentially calibrate, with a high degree of accuracy, the hyperspectral retrievals of temperature and water vapour.

  14. Quiet Clean Short-haul Experimental Engine (QCSEE). Ball spline pitch change mechanism design report

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Detailed design parameters are presented for a variable-pitch change mechanism. The mechanism is a mechanical system containing a ball screw/spline driving two counteracting master bevel gears that mesh with pinion gears attached to each of 18 fan blades.

  15. Trajectory Tracking Control of Mobile Robot by Time Based Spline Approach

    NASA Astrophysics Data System (ADS)

    Miyata, Junichi; Murakami, Toshiyuki; Ohnishi, Kouhei

    The mobile robot must move without unacceptably rapid motion. To address this issue, a preview controller with a time-based spline approach is proposed in this paper. When using the time-based spline approach, it is also important to plan an adequate trajectory. Here, an approach to trajectory planning with a trajectory determination strategy based on a virtual manipulator is proposed. Numerical and experimental results are shown to confirm the proposed algorithm.

  16. Optimal aeroassisted orbital transfer with plane change using collocation and nonlinear programming

    NASA Technical Reports Server (NTRS)

    Shi, Yun Y.; Nelson, R. L.; Young, D. H.

    1990-01-01

    The fuel optimal control problem arising in the non-planar orbital transfer employing aeroassisted technology is addressed. The mission involves the transfer from high energy orbit (HEO) to low energy orbit (LEO) with orbital plane change. The basic strategy here is to employ a combination of propulsive maneuvers in space and aerodynamic maneuvers in the atmosphere. The basic sequence of events for the aeroassisted HEO to LEO transfer consists of three phases. In the first phase, the orbital transfer begins with a deorbit impulse at HEO which injects the vehicle into an elliptic transfer orbit with perigee inside the atmosphere. In the second phase, the vehicle is optimally controlled by lift and bank angle modulations to perform the desired orbital plane change and to satisfy heating constraints. Because of the energy loss during the turn, an impulse is required to initiate the third phase to boost the vehicle back to the desired LEO orbital altitude. The third impulse is then used to circularize the orbit at LEO. The problem is solved by a direct optimization technique which uses piecewise polynomial representation for the state and control variables and collocation to satisfy the differential equations. This technique converts the optimal control problem into a nonlinear programming problem which is solved numerically. Solutions were obtained for cases with and without heat constraints and for cases of different orbital inclination changes. The method appears to be more powerful and robust than other optimization methods. In addition, the method can handle complex dynamical constraints.
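The core of the direct collocation technique described above is replacing the differential equations by algebraic "defect" constraints on a piecewise-polynomial trajectory. A minimal sketch with trapezoidal collocation follows; the double-integrator dynamics are an invented stand-in for the far more complex atmospheric-flight dynamics of the paper, and in a full solver the defects would be handed to a nonlinear programming code as equality constraints.

```python
import numpy as np

# Double-integrator dynamics x' = v, v' = u (toy stand-in for the
# aeroassisted-transfer dynamics).
def dynamics(x, v, u):
    return v, u

def trapezoid_defects(t, x, v, u):
    """Collocation defects: zero exactly when the discrete trajectory
    satisfies the ODE in the trapezoidal-rule sense on every interval."""
    h = np.diff(t)
    fx, fv = dynamics(x, v, u)
    dx = x[1:] - x[:-1] - 0.5 * h * (fx[:-1] + fx[1:])
    dv = v[1:] - v[:-1] - 0.5 * h * (fv[:-1] + fv[1:])
    return np.concatenate([dx, dv])

# Exact trajectory for constant control u = 2: v = 2 t, x = t**2.
t = np.linspace(0.0, 1.0, 11)
u = np.full_like(t, 2.0)
v = 2.0 * t
x = t**2

# The trapezoidal rule is exact for integrands linear in t, so every
# defect vanishes here (up to round-off).
assert np.max(np.abs(trapezoid_defects(t, x, v, u))) < 1e-12
```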

  17. Prediction of protein structural class using novel evolutionary collocation-based sequence representation.

    PubMed

    Chen, Ke; Kurgan, Lukasz A; Ruan, Jishou

    2008-07-30

    Knowledge of structural classes is useful in understanding folding patterns in proteins. Although existing structural class prediction methods have applied virtually all state-of-the-art classifiers, many of them use a relatively simple protein sequence representation that often includes amino acid (AA) composition. To this end, we propose a novel sequence representation that incorporates evolutionary information encoded using PSI-BLAST profile-based collocation of AA pairs. We used six benchmark datasets and five representative classifiers to quantify and compare the quality of structural class prediction with the proposed representation. The best classifier, a support vector machine, achieved 61-96% accuracy on the six datasets. These predictions were comprehensively compared with a wide range of recently proposed methods for prediction of structural classes. Our comprehensive comparison shows superiority of the proposed representation, which results in error rate reductions that range between 14% and 26% when compared with predictions of the best-performing, previously published classifiers on the considered datasets. The study also shows that, for the benchmark datasets that include sequences characterized by low identity (i.e., 25%, 30%, and 40%), the prediction accuracies are 20-35% lower than for the other three datasets that include sequences with a higher degree of similarity. In conclusion, the proposed representation is shown to substantially improve the accuracy of structural class prediction. A web server that implements the presented prediction method is freely available at http://biomine.ece.ualberta.ca/Structural_Class/SCEC.html. PMID:18293306

  18. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-11-01

    The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background; we therefore present a baseline correction algorithm to suppress the fluorescent background. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called a baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use the B-spline as the fitting algorithm because its low order and smoothness effectively avoid both under-fitting and over-fitting. In addition, we present an automatic adaptive knot generation method to replace traditional uniform knots. The algorithm achieves the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method.
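The cyclic spline-fit-and-clip idea can be sketched on a synthetic spectrum. This is an illustrative simplification, not the paper's algorithm: the spectrum, peak positions, and fixed interior knots below are invented, whereas the paper generates its knots adaptively.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0.0, 10.0, 500)
true_baseline = 2.0 + 0.3 * x                      # slowly varying background
peaks = 4.0 * np.exp(-((x - 3.0) / 0.1) ** 2) \
      + 2.0 * np.exp(-((x - 7.0) / 0.1) ** 2)      # Raman-like bands
y = true_baseline + peaks

# Cyclic approximation: fit a low-flexibility cubic B-spline, then clip
# the spectrum down to the fit so the peaks influence each successive
# fit less and less, leaving only the baseline.
knots = [2.5, 5.0, 7.5]          # fixed interior knots for this sketch
y_work = y.copy()
for _ in range(30):
    spl = LSQUnivariateSpline(x, y_work, knots, k=3)
    y_work = np.minimum(y_work, spl(x))

baseline_est = spl(x)
corrected = y - baseline_est

assert np.mean(np.abs(baseline_est - true_baseline)) < 0.3
assert abs(x[np.argmax(corrected)] - 3.0) < 0.2    # strongest band survives
```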

  19. Smart point landmark distribution for thin-plate splines

    NASA Astrophysics Data System (ADS)

    Lewis, John P.; Hwang, Hea-Juen; Neumann, Ulrich; Enciso, Reyes

    2004-05-01

    Landmark placement is crucial in manual demarcation and registration of anatomical structures, registration of different image modalities (e.g., MRI/CT), labeling training data for lip and face principal component models, training for neural networks, and signal interpolation, to name a few applications. Although landmark placement at curvature and coordinate extrema (e.g., corners of the mouth, lowest point on the lower lip) is fairly unambiguous, the placement of point landmarks along a linear contour is subjective. Unfortunately, the user's choice of landmark placement determines the quality of the resulting registration. In this paper, we present an algorithm to remove these undesired degrees of freedom by repositioning landmarks along the contour. Ambiguous landmarks are moved so as to minimize a thin-plate spline energy while constraining the landmarks to the originally specified contour. The resulting landmark placement yields a smoother registration while still interpolating the contours and fixed landmarks. The results show that the ambiguity of manual landmark placement along contours does affect the smoothness of the interpolated registration, and that significantly smoother interpolations can be achieved using our approach. This procedure may also benefit other applications employing landmarks by eliminating unintended curvature (variation) from the landmark data.

  20. Nonparametric inference in hidden Markov models using P-splines.

    PubMed

    Langrock, Roland; Kneib, Thomas; Sohn, Alexander; DeRuiter, Stacy L

    2015-06-01

    Hidden Markov models (HMMs) are flexible time series models in which the distribution of the observations depends on unobserved serially correlated states. The state-dependent distributions in HMMs are usually taken from some class of parametrically specified distributions. The choice of this class can be difficult, and an unfortunate choice can have serious consequences, for example on state estimates and, more generally, on the resulting model complexity and interpretation. We demonstrate these practical issues in a real data application concerned with vertical speeds of a diving beaked whale, showing that parametric approaches can easily lead to overly complex state processes, impeding meaningful biological inference. In contrast, for the dive data, HMMs with nonparametrically estimated state-dependent distributions are much more parsimonious in terms of the number of states and easier to interpret, while fitting the data equally well. Our nonparametric estimation approach is based on representing the densities of the state-dependent distributions as linear combinations of a large number of standardized B-spline basis functions, imposing a penalty term on non-smoothness in order to maintain a good balance between goodness-of-fit and smoothness. PMID:25586063
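The P-spline building block, a rich B-spline basis tamed by a difference penalty on adjacent coefficients, can be sketched in a simpler regression setting (the paper applies it to HMM state-dependent densities; the data, knot layout, and penalty weight below are invented for illustration).

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, knots, k=3):
    """Evaluate each cubic B-spline basis function at x (columns of B)."""
    n_basis = len(knots) - k - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        B[:, j] = BSpline(knots, coef, k)(x)
    return B

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

k = 3
interior = np.linspace(0.0, 1.0, 20)
knots = np.r_[[0.0] * k, interior, [1.0] * k]   # clamped knot vector
B = bspline_design(x, knots, k)

# The P-spline idea: a generous basis plus a second-order difference
# penalty on neighbouring coefficients, balancing fit and smoothness.
n = B.shape[1]
D = np.diff(np.eye(n), n=2, axis=0)
lam = 1.0
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef

assert np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2)) < 0.1
```

Increasing `lam` drives the fit toward a straight line; `lam = 0` gives an ordinary, possibly wiggly, regression spline.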

  1. The effects of computed tomography image characteristics and knot spacing on the spatial accuracy of B-spline deformable image registration in the head and neck geometry

    PubMed Central

    2014-01-01

    Objectives To explore the effects of computed tomography (CT) image characteristics and B-spline knot spacing (BKS) on the spatial accuracy of a B-spline deformable image registration (DIR) in the head-and-neck geometry. Methods The effect of image feature content, image contrast, noise, and BKS on the spatial accuracy of a B-spline DIR was studied. Phantom images were created with varying feature content and varying contrast-to-noise ratio (CNR), and deformed using a known smooth B-spline deformation. Subsequently, the deformed images were repeatedly registered with the original images using different BKSs. The quality of the DIR was expressed as the mean residual displacement (MRD) between the known imposed deformation and the result of the B-spline DIR. Finally, for three patients, head-and-neck planning CT scans were deformed with a realistic deformation field derived from a rescan CT of the same patient, resulting in a simulated deformed image and an a-priori known deformation field. Hence, a B-spline DIR was performed between the simulated image and the planning CT at different BKSs. Similar to the phantom cases, the DIR accuracy was evaluated by means of MRD. Results In total, 162 phantom registrations were performed with varying CNR and BKSs. MRD-values < 1.0 mm were observed with a BKS between 10 and 20 mm for image contrast ≥ 250 HU and noise < 200 HU. Decreasing the image feature content resulted in increased MRD-values at all BKSs. Using BKS = 15 mm for the three clinical cases resulted in an average MRD < 1.0 mm. Conclusions For synthetically generated phantoms and three real CT cases the highest DIR accuracy was obtained for a BKS between 10 and 20 mm. The accuracy decreased with decreasing image feature content, decreasing image contrast, and higher noise levels. Our results indicate that DIR accuracy in clinical CT images (typical noise levels < 100 HU) will not be affected by the amount of image noise. PMID:25074293
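The MRD quality measure used above can be sketched in a few lines; this assumes the straightforward definition (mean Euclidean distance between the known and recovered deformation vector fields), with invented toy fields.

```python
import numpy as np

def mean_residual_displacement(dvf_known, dvf_recovered):
    """Mean Euclidean distance between two deformation vector fields of
    shape (..., 3); coordinates assumed to be in mm."""
    return np.mean(np.linalg.norm(dvf_known - dvf_recovered, axis=-1))

# Toy fields on a 10x10x10 voxel grid: the recovered field is off by a
# uniform (0.6, 0, 0.8) mm vector, i.e. exactly 1.0 mm everywhere.
dvf_known = np.zeros((10, 10, 10, 3))
dvf_recovered = dvf_known + np.array([0.6, 0.0, 0.8])

mrd = mean_residual_displacement(dvf_known, dvf_recovered)
assert np.isclose(mrd, 1.0)
```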

  2. A high-order conservative collocation scheme and its application to global shallow-water equations

    NASA Astrophysics Data System (ADS)

    Chen, C.; Li, X.; Shen, X.; Xiao, F.

    2015-02-01

    In this paper, an efficient and conservative collocation method is proposed and used to develop a global shallow-water model. Being a nodal type high-order scheme, the present method solves the pointwise values of dependent variables as the unknowns within each control volume. The solution points are arranged as Gauss-Legendre points to achieve high-order accuracy. The time evolution equations to update the unknowns are derived under the flux reconstruction (FR) framework (Huynh, 2007). Constraint conditions used to build the spatial reconstruction for the flux function include the pointwise values of flux function at the solution points, which are computed directly from the dependent variables, as well as the numerical fluxes at the boundaries of the computational element, which are obtained as Riemann solutions between the adjacent elements. Given the reconstructed flux function, the time tendencies of the unknowns can be obtained directly from the governing equations of differential form. The resulting schemes have super convergence and rigorous numerical conservativeness. A three-point scheme of fifth-order accuracy is presented and analyzed in this paper. The proposed scheme is adopted to develop the global shallow-water model on the cubed-sphere grid, where the local high-order reconstruction is very beneficial for the data communications between adjacent patches. We have used the standard benchmark tests to verify the numerical model, which reveals its great potential as a candidate formulation for developing high-performance general circulation models.

  3. A high-order conservative collocation scheme and its application to global shallow water equations

    NASA Astrophysics Data System (ADS)

    Chen, C.; Li, X.; Shen, X.; Xiao, F.

    2014-07-01

    An efficient and conservative collocation method is proposed and used to develop a global shallow water model in this paper. Being a nodal type high-order scheme, the present method solves the point-wise values of dependent variables as the unknowns within each control volume. The solution points are arranged as Gauss-Legendre points to achieve the high-order accuracy. The time evolution equations to update the unknowns are derived under the flux-reconstruction (FR) framework (Huynh, 2007). Constraint conditions used to build the spatial reconstruction for the flux function include the point-wise values of flux function at the solution points, which are computed directly from the dependent variables, as well as the numerical fluxes at the boundaries of the control volume which are obtained as the Riemann solutions between the adjacent cells. Given the reconstructed flux function, the time tendencies of the unknowns can be obtained directly from the governing equations of differential form. The resulting schemes have super convergence and rigorous numerical conservativeness. A three-point scheme of fifth-order accuracy is presented and analyzed in this paper. The proposed scheme is adopted to develop the global shallow-water model on the cubed-sphere grid where the local high-order reconstruction is very beneficial for the data communications between adjacent patches. We have used the standard benchmark tests to verify the numerical model, which reveals its great potential as a candidate formulation for developing high-performance general circulation models.
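Why Gauss-Legendre solution points pair naturally with a fifth-order three-point scheme can be sketched with a quadrature check: n-point Gauss-Legendre quadrature is exact for polynomials of degree 2n - 1 = 5. This is an illustration of the node choice only, not the flux-reconstruction model itself; the test polynomial is invented.

```python
import numpy as np

# Three Gauss-Legendre points on [-1, 1], the reference-element solution
# points used by the scheme described above.
nodes, weights = np.polynomial.legendre.leggauss(3)

# n-point Gauss-Legendre quadrature integrates polynomials of degree
# 2n - 1 = 5 exactly, consistent with the fifth-order accuracy reported.
poly = lambda s: 7 * s**5 + s**4 - 3 * s**2 + 2
quad = np.sum(weights * poly(nodes))
exact = 0.4 - 2.0 + 4.0   # integrals of s**4, -3 s**2, and 2 over [-1, 1]

assert abs(quad - exact) < 1e-12
```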

  4. Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.

    PubMed

    Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei

    2014-02-01

    Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-Commerce websites. In those applications where the response-time requirement is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced by the quantization process, in which the visual words are considered individually and the contextual relations among words are ignored. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by a factor of 1,000, and, in the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by a color-moment feature, which reduces the time cost by 92.02% while maintaining comparable performance to the state-of-the-art methods. PMID:26270906
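The binary quadratic programming step, choosing a word assignment that balances per-word scores against pairwise collocation consistency, can be sketched on a tiny invented instance solved by exhaustive search (real systems use relaxations or dedicated solvers; the unary and pairwise scores below are made up, whereas the paper derives them from quantization and co-occurrence statistics).

```python
import itertools
import numpy as np

# Toy instance: pick a subset of 4 candidate visual words maximizing
# unary scores plus pairwise collocation consistency.
unary = np.array([1.0, 0.2, 0.8, -0.5])
pairwise = np.array([
    [0.0, 0.6, -1.0, 0.0],
    [0.6, 0.0, 0.3, 0.0],
    [-1.0, 0.3, 0.0, 0.2],
    [0.0, 0.0, 0.2, 0.0],
])

def score(sel):
    """Quadratic objective over a binary selection vector."""
    s = np.array(sel, dtype=float)
    return unary @ s + 0.5 * s @ pairwise @ s

# Exhaustive search over all 2^4 binary assignments
best = max(itertools.product([0, 1], repeat=4), key=score)

# Words 0 and 1 collocate well; word 2 conflicts with word 0 but its own
# score plus its links still earn it a place; word 3 does not pay off.
assert best == (1, 1, 1, 0)
```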

  5. Parallel iterative solution of the Hermite Collocation equations on GPUs II

    NASA Astrophysics Data System (ADS)

    Vilanakis, N.; Mathioudakis, E.

    2014-03-01

    Hermite collocation is a high-order finite element method for boundary value problems modelling applications in several fields of science and engineering. Application of this integration-free numerical solver to linear BVPs results in a large, sparse, general system of algebraic equations, suggesting the use of an efficient iterative solver, especially for realistic simulations. In part I of this work an efficient parallel algorithm of the Schur complement method coupled with the Bi-Conjugate Gradient Stabilized (BiCGSTAB) iterative solver was designed for multicore computing architectures with a Graphics Processing Unit (GPU). In the present work the proposed algorithm is extended to high performance computing environments consisting of multiprocessor machines with multiple GPUs. Since this is a distributed GPU and shared CPU memory parallel architecture, a hybrid memory treatment is needed for the development of the parallel algorithm. The algorithm was realized on a multiprocessor HP SL390 machine with Tesla M2070 GPUs using the OpenMP and OpenACC standards. Execution time measurements reveal the efficiency of the parallel implementation.
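BiCGSTAB is chosen above because collocation matrices are sparse and generally nonsymmetric, ruling out plain conjugate gradients. A serial sketch with SciPy follows; the banded matrix is an invented convection-diffusion-like stand-in, not the paper's Hermite collocation system, and the GPU/Schur-complement machinery is of course omitted.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Sparse nonsymmetric system standing in for the collocation equations
# (illustrative 1-D three-band stencil, diagonally dominant).
n = 500
A = diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# BiCGSTAB handles general (non-SPD) systems, which is why it suits
# collocation matrices.
x, info = bicgstab(A, b)
assert info == 0                      # 0 means the iteration converged
residual = np.linalg.norm(A @ x - b)
assert residual < 1e-3
```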

  6. Hierarchical Representation of Time-Varying Volume Data with Fourth-Root-of-Two Subdivision and Quadrilinear B-Spline Wavelets

    SciTech Connect

    Linsen, L; Pascucci, V; Duchaineau, M A; Hamann, B; Joy, K I

    2002-11-19

    Multiresolution methods for representing data at multiple levels of detail are widely used for large-scale two- and three-dimensional data sets. We present a four-dimensional multiresolution approach for time-varying volume data. This approach supports a hierarchy with spatial and temporal scalability. The hierarchical data organization is based on ⁴√2 subdivision. The ⁿ√2-subdivision scheme only doubles the overall number of grid points in each subdivision step. This fact leads to fine granularity and high adaptivity, which is especially desirable in the spatial dimensions. For high-quality data approximation on each level of detail, we use quadrilinear B-spline wavelets. We present a linear B-spline wavelet lifting scheme based on ⁿ√2 subdivision to obtain narrow masks for the update rules. Narrow masks provide a basis for out-of-core data exploration techniques and view-dependent visualization of sequences of time steps.

  7. Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring

    NASA Astrophysics Data System (ADS)

    Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    In research on optical synthetic aperture imaging systems, phase congruency is the main problem and it is necessary to detect the sub-aperture phase. The edge of the sub-aperture system is more complex than in a traditional optical imaging system. Because of the steep slope of a large-aperture optical component, the interference fringes may be quite dense, and a steep phase gradient may cause a loss of phase information. An efficient edge detection method is therefore urgently needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is increasingly suppressed at larger scales. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained and a cubic B-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, the local modulus maxima in the gradient directions are computed. Because these still contain noise, the adaptive threshold method is used to select among the modulus maxima; points greater than the threshold value are boundary points. Finally, erosion and dilation are applied to the resulting image to obtain the continuous boundary of the image.

  8. Hierarchical T-splines: Analysis-suitability, Bézier extraction, and application as an adaptive basis for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Evans, E. J.; Scott, M. A.; Li, X.; Thomas, D. C.

    2015-02-01

    In this paper hierarchical analysis-suitable T-splines (HASTS) are developed. The resulting spaces are a superset of both analysis-suitable T-splines and hierarchical B-splines. The additional flexibility provided by the hierarchy of T-spline spaces results in simple, highly localized refinement algorithms which can be utilized in a design or analysis context. A detailed theoretical formulation is presented including a proof of local linear independence for analysis-suitable T-splines, a requisite theoretical ingredient for HASTS. Bézier extraction is extended to HASTS, simplifying the implementation of HASTS in existing finite element codes. The behavior of a simple HASTS refinement algorithm is compared to the local refinement algorithm for analysis-suitable T-splines, demonstrating the superior efficiency and locality of the HASTS algorithm. Finally, HASTS are utilized as a basis for adaptive isogeometric analysis.

  9. Multi-processing least squares collocation: Applications to gravity field analysis

    NASA Astrophysics Data System (ADS)

    Kaas, E.; Sørensen, B.; Tscherning, C. C.; Veicherts, M.

    2013-09-01

    Least Squares Collocation (LSC) is used for modeling the gravity field, including prediction and error estimation of various quantities. The method requires that as many unknowns as the number of data and parameters be solved for. Cholesky reduction must be used in a non-standard form due to the missing positive-definiteness of the equation system. Furthermore, the error estimation produces a rectangular or triangular matrix which must be Cholesky reduced in the non-standard manner. LSC allows new sets of data to be added without reprocessing previously reduced parts of the equation system. Due to these factors, standard multi-processing Cholesky reduction programs cannot easily be applied. We have therefore implemented Fortran Open Multi-Processing (OpenMP) in the non-standard Cholesky reduction. In the computation of matrix elements (covariances), as well as in the evaluation of the spherical harmonic series used in the remove/restore setting, we also take advantage of multi-processing. We describe the implementation using quadratic blocks, which aids in reducing the data transport overhead. Timing results for different block sizes and numbers of equations are presented. OpenMP scales favorably, so that e.g. the prediction and error estimation of grids from GOCE TRF vertical gradient data and ground gravity data can be done in less than two hours for a 25° by 25° area with data selected close to 0.125° nodes.
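The LSC prediction and error-estimation step being parallelized above can be sketched in serial form. This is a one-dimensional toy with an invented Gaussian covariance function; operational geodetic LSC uses empirically fitted covariance models, a remove/restore step, and the non-standard Cholesky reduction discussed in the abstract.

```python
import numpy as np

def cov(a, b, variance=1.0, length=1.0):
    """Illustrative Gaussian covariance function; geodetic LSC uses
    empirically fitted covariance models instead."""
    return variance * np.exp(-((a[:, None] - b[None, :]) / length) ** 2)

rng = np.random.default_rng(2)
t_obs = np.linspace(0.0, 5.0, 40)        # observation locations
t_new = np.array([1.3, 2.7, 4.1])        # prediction locations
noise_var = 1e-4
y = np.sin(t_obs) + rng.normal(0.0, np.sqrt(noise_var), t_obs.size)

# LSC: predict the signal at new points and estimate the error covariance.
C_tt = cov(t_obs, t_obs) + noise_var * np.eye(t_obs.size)
C_st = cov(t_new, t_obs)
pred = C_st @ np.linalg.solve(C_tt, y)
err_cov = cov(t_new, t_new) - C_st @ np.linalg.solve(C_tt, C_st.T)

assert np.max(np.abs(pred - np.sin(t_new))) < 0.05
assert np.all(np.diag(err_cov) > -1e-8)  # error variances are nonnegative
```

In production the dense solves are replaced by the blocked Cholesky reduction whose OpenMP parallelization the paper describes.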

  10. A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation

    NASA Astrophysics Data System (ADS)

    Rieser, Daniel; Mayer-Guerr, Torsten

    2014-05-01

    The depth of the Moho discontinuity is commonly derived by either seismic observations, gravity measurements, or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for the estimation of the Moho depth on regional scale. Due to its mission configuration and measurement setup, GOCE is able to contribute valuable information in particular in the medium wavelengths of the gravity field spectrum, which is also of special interest for the crust-mantle boundary. In contrast to other studies, we use the full information of the gradient tensor in all three dimensions. The problem is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation, the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are directly derived by LSC on regional scale, which in turn are converted into Moho depths. First investigations proved the ability of this method to resolve the gravity inversion problem already with a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution the processing strategy will be introduced and the most recent developments and results using the currently available GOCE data will be presented.

  11. Vibration suppression in cutting tools using collocated piezoelectric sensors/actuators with an adaptive control algorithm

    SciTech Connect

    Radecki, Peter P; Farinholt, Kevin M; Park, Gyuhae; Bement, Matthew T

    2008-01-01

    The machining process is very important in many engineering applications. In high precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties can vary in real-time, resulting in unexpected or undesirable effects on the surface finish of the machining product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses from machining processes. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control versus baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern day machining.
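Positive position feedback itself can be sketched on a single structural mode. The simulation below uses made-up illustrative parameters, not the adaptive algorithm or the identified tool-holder dynamics of the paper: a damped resonator filter is driven by the collocated position signal and its output is fed back as a force on the structure:

```python
def simulate(gain, t_end=60.0, dt=0.01):
    """RK4 integration of an undamped mode x'' + w0^2*x = gain*w0^2*eta
    coupled to a PPF filter eta'' + 2*zf*wf*eta' + wf^2*eta = wf^2*x.
    Returns the peak |x| seen over the last 10 time units."""
    w0, wf, zf = 1.0, 1.0, 0.5          # filter tuned to the structural mode
    def deriv(s):
        x, v, e, u = s
        return (v,
                -w0 * w0 * x + gain * w0 * w0 * e,
                u,
                -wf * wf * e - 2.0 * zf * wf * u + wf * wf * x)
    s = (1.0, 0.0, 0.0, 0.0)            # unit initial displacement, at rest
    peak_late, t = 0.0, 0.0
    while t < t_end:
        k1 = deriv(s)
        k2 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
        t += dt
        if t > t_end - 10.0:            # track late-time vibration amplitude
            peak_late = max(peak_late, abs(s[0]))
    return peak_late
```

With the feedback gain on, the otherwise undamped mode decays; with the gain at zero it rings indefinitely, which is the chatter-suppression effect the tool holder exploits.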

  12. Uncertainty Quantification via Random Domain Decomposition and Probabilistic Collocation on Sparse Grids

    SciTech Connect

    Lin, Guang; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2010-09-01

    Due to lack of knowledge or insufficient data, many physical systems are subject to uncertainty, and such uncertainty occurs on a multiplicity of scales. In this study, we conduct an uncertainty analysis of diffusion in random composites with two dominant scales of uncertainty: large-scale uncertainty in the spatial arrangement of materials and small-scale uncertainty in the parameters within each material. We develop a general two-scale framework that combines random domain decomposition (RDD) and the probabilistic collocation method (PCM) on sparse grids to quantify the large and small scales of uncertainty, respectively. Using sparse grid points instead of standard grids based on full tensor products for both scales of uncertainty can greatly reduce the overall computational cost, especially for random processes with small correlation lengths (a large number of random dimensions). For a one-dimensional random contact point problem and a random inclusion problem, an analytical solution and Monte Carlo simulations, respectively, are used to verify the accuracy of the combined RDD-PCM approach. Additionally, we apply the combined RDD-PCM approach to two- and three-dimensional examples to demonstrate that it provides efficient, robust and nonintrusive approximations for the statistics of diffusion in random composites.
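The collocation half of such a framework can be illustrated in one dimension: statistics of a model output are obtained by running the deterministic model only at quadrature nodes of the random input and weight-summing the results. This minimal sketch uses a 3-point Gauss-Legendre rule for a uniform random input; the paper's sparse grids and random domain decomposition are beyond this illustration:

```python
# 3-point Gauss-Legendre nodes and weights on [-1, 1]
GL3_NODES = [-(3.0 / 5.0) ** 0.5, 0.0, (3.0 / 5.0) ** 0.5]
GL3_WEIGHTS = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def pcm_mean(model):
    """Mean of model(Y) for Y uniform on [-1, 1]: three deterministic
    model runs at the collocation nodes, then a weighted sum."""
    return sum(w * model(x) for x, w in zip(GL3_NODES, GL3_WEIGHTS)) / 2.0

def pcm_variance(model):
    """Variance via a second collocation pass on the centred square."""
    m = pcm_mean(model)
    return pcm_mean(lambda y: (model(y) - m) ** 2)
```

Because the rule is exact for polynomials up to degree five, smooth model responses are captured with very few runs; this is the cost advantage that sparse grids extend to many random dimensions.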

  13. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    PubMed

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. PMID:21507749
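The table-lookup-plus-cubic structure of such a PSAC can be sketched generically. The following is an illustrative cubic Hermite segment table, not the paper's spectral-domain closed-form coefficients, and the 32-segment resolution is an arbitrary choice:

```python
import math

SEG = 32  # hypothetical number of cubic segments per sine period

def psac(phase):
    """Map a normalised phase in [0, 1) to sin(2*pi*phase) with one cubic
    per segment; in hardware each segment stores four coefficients.
    Here each cubic is the Hermite interpolant matching the sine's value
    and derivative at the segment end points."""
    t = phase * SEG
    i = int(t)                    # segment (table) index
    u = t - i                     # position within the segment, in [0, 1)
    x0, h = i / SEG, 1.0 / SEG
    f0, f1 = math.sin(2 * math.pi * x0), math.sin(2 * math.pi * (x0 + h))
    d0 = 2 * math.pi * math.cos(2 * math.pi * x0) * h
    d1 = 2 * math.pi * math.cos(2 * math.pi * (x0 + h)) * h
    u2, u3 = u * u, u * u * u     # cubic Hermite basis evaluation
    return (f0 * (2 * u3 - 3 * u2 + 1) + d0 * (u3 - 2 * u2 + u)
            + f1 * (-2 * u3 + 3 * u2) + d1 * (u3 - u2))
```

Even this generic 32-segment table keeps the worst-case error a few parts per million; the paper's optimised coefficients and fixed-point analysis push the spurious levels far lower.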

  14. Full-Relativistic B-Spline R-Matrix Calculations for Electron Collisions with Gold Atoms.

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Bartschat, Klaus; Froese Fischer, Charlotte

    2008-05-01

    We have extended the B-spline R-matrix (close-coupling) method [1] to fully account for relativistic effects in a Dirac-Coulomb formulation. The newly developed computer code has been applied to electron-impact excitation of the (5d^10 6s) ^2S_1/2 -> (5d^10 6p) ^2P_1/2,3/2 and (5d^10 6s) ^2S_1/2 -> (5d^9 6s^2) ^2D_5/2,3/2 transitions in Au. Our numerical implementation of the close-coupling method enables us to construct term-dependent, non-orthogonal sets of one-electron orbitals for the bound and continuum electrons. This is a critical aspect in the present problem, especially for the 5d and 6s orbitals. Furthermore, strong core-polarization effects can be accounted for ab initio rather than through a semi-empirical, local model potential. Our results will be compared with recent experimental data [2] and predictions from other theoretical approaches [3]. [1] O. Zatsarinny, Comp. Phys. Commun. 174, 273 (2006). [2] M. Maslov, P.J.O. Teubner, and M.J. Brunger, private communication (2008). [3] D.V. Fursa, I. Bray, and R.P. McEachran, private communication (2008).

  15. Miniaturized Multi-Band Antenna via Element Collocation

    SciTech Connect

    Martin, R. P.

    2012-06-01

    The resonant frequency of a microstrip patch antenna may be reduced through the addition of slots in the radiating element. Expanding upon this concept in favor of a significant reduction in the tuned width of the radiator, nearly 60% of the antenna metallization is removed, as seen in the top view of the antenna’s radiating element (shown in red, below, left). To facilitate an increase in the gain of the antenna, the radiator is suspended over the ground plane (green) by an air substrate at a height of 0.250″ while being mechanically supported by 0.030″ thick Rogers RO4003 laminate in the same profile as the element. Although the entire surface of the antenna (red) provides 2.45 GHz operation with insignificant negative effects on performance after material removal, the smaller square microstrip in the middle must be isolated from the additional aperture in order to afford higher frequency operation. A low insertion loss path centered at 2.45 GHz may simultaneously provide considerable attenuation at additional frequencies through the implementation of a series-parallel, resonant reactive path. However, an inductive reactance alone will not permit lower frequency energy to propagate across the intended discontinuity. To mitigate this, a capacitance is introduced in series with the inductor, generating a resonance at 2.45 GHz with minimum forward transmission loss. Four of these reactive pairs are placed between the coplanar elements as shown. Therefore, the aperture of the lower-frequency outer segment includes the smaller radiator while the higher frequency section is isolated from the additional material. In order to avoid cross-polarization losses due to the orientation of a transmitter or receiver in reference to the antenna, circular polarization is realized by a quadrature coupler for each collocated antenna as seen in the bottom view of the antenna (right). 
To generate electromagnetic radiation concentrically rotating about the direction of propagation, ideally one-half of the power must be delivered to the output of each branch with a phase shift of 90 degrees and identical amplitude. Due to this, each arm of the coupler is spaced λ/4 wavelength apart.

  16. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    SciTech Connect

    Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.

    2007-02-02

    PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k3-weighted EXAFS data.
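A minimal sketch of one of these processing steps, polynomial pre-edge background subtraction, is shown below. This is illustrative plain Python with a straight-line background; PySpline's own implementation and parameter choices are not reproduced here:

```python
def fit_line(xs, ys):
    """Least-squares straight line through (xs, ys); returns (slope, intercept)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def subtract_pre_edge(energy, mu, edge):
    """Fit the background on points below `edge` and subtract it from the
    whole scan, so the absorption edge jump starts from zero."""
    pre = [(e, m) for e, m in zip(energy, mu) if e < edge]
    slope, icept = fit_line([p[0] for p in pre], [p[1] for p in pre])
    return [m - (slope * e + icept) for e, m in zip(energy, mu)]
```

In an interactive tool like PySpline, the user adjusts the pre-edge fitting range and immediately sees the re-subtracted spectrum, which is the workflow the abstract describes.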

  17. An evaluation of prefiltered B-spline reconstruction for quasi-interpolation on the Body-Centered Cubic lattice.

    PubMed

    Csébfalvi, Balázs

    2010-01-01

    In this paper, we demonstrate that quasi-interpolation of orders two and four can be efficiently implemented on the Body-Centered Cubic (BCC) lattice by using tensor-product B-splines combined with appropriate discrete prefilters. Unlike the nonseparable box-spline reconstruction previously proposed for the BCC lattice, the prefiltered B-spline reconstruction can utilize the fast trilinear texture-fetching capability of the recent graphics cards. Therefore, it can be applied for rendering BCC-sampled volumetric data interactively. Furthermore, we show that a separable B-spline filter can suppress the postaliasing effect much more isotropically than a nonseparable box-spline filter of the same approximation power. Although prefilters that make the B-splines interpolating on the BCC lattice do not exist, we demonstrate that quasi-interpolating prefiltered linear and cubic B-spline reconstructions can still provide similar or higher image quality than the interpolating linear box-spline and prefiltered quintic box-spline reconstructions, respectively. PMID:20224143

  18. Non-parametric regional VTEC modeling with Multivariate Adaptive Regression B-Splines

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslıoğlu, Mahmut Onur

    2011-11-01

    In this work, Multivariate Adaptive Regression B-Splines (BMARS) is applied to regional spatio-temporal mapping of the Vertical Total Electron Content (VTEC) using ground-based Global Positioning System (GPS) observations. BMARS is a non-parametric regression technique that utilizes compactly supported tensor-product B-splines as basis functions, which are automatically obtained from the observations. The algorithm uses a scale-by-scale model building strategy that searches at each scale for B-splines fitting the data adequately, and it provides smoother approximations than the original Multivariate Adaptive Regression Splines (MARS). It is capable of processing high-dimensional problems with large amounts of data and can easily be parallelized. The real test data are collected from 32 ground-based GPS stations located in North America. The results are compared numerically and visually both with a regional VTEC model generated via the original MARS using piecewise-linear basis functions and with another regional VTEC model based on B-splines.

  19. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    SciTech Connect

    Guo, Y.; Keller, J.; Errichello, R.; Halse, C.

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  20. Temporal gravity field modeling based on least square collocation with short-arc approach

    NASA Astrophysics Data System (ADS)

    Ran, Jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, with two main objectives. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least squares collocation method. This method has received less attention than the least squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced with the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed from the Geoforschungszentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of the equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.
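The core LSC prediction and error estimation used throughout such studies can be illustrated on a toy two-observation case. This is a generic sketch with an arbitrary Gaussian covariance function, not the IGG-CAS processing chain:

```python
import math

def lsc_2pt(x1, y1, x2, y2, xq, noise_var, cov):
    """Least squares collocation with two observations t = (y1, y2):
    prediction  s = C_sq (C_tt + D)^-1 t  and its error variance
    sigma^2 = C_qq - c_q^T (C_tt + D)^-1 c_q.
    The 2x2 system is inverted in closed form for clarity."""
    c11 = cov(x1, x1) + noise_var
    c22 = cov(x2, x2) + noise_var
    c12 = cov(x1, x2)
    det = c11 * c22 - c12 * c12
    a1 = (c22 * y1 - c12 * y2) / det        # a = (C_tt + D)^-1 t
    a2 = (c11 * y2 - c12 * y1) / det
    cq1, cq2 = cov(xq, x1), cov(xq, x2)
    pred = cq1 * a1 + cq2 * a2
    b1 = (c22 * cq1 - c12 * cq2) / det      # b = (C_tt + D)^-1 c_q
    b2 = (c11 * cq2 - c12 * cq1) / det
    var = cov(xq, xq) - (cq1 * b1 + cq2 * b2)
    return pred, var

# Illustrative covariance choice only; gravity work uses physically derived models.
gauss = lambda a, b: math.exp(-(a - b) ** 2)
```

With zero noise the prediction interpolates the data exactly and the error variance vanishes at the data points, growing away from them; at GRACE scale the same linear algebra involves as many unknowns as observations, hence the computational burden noted in the abstract.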

  1. B-spline R-matrix with pseudostates calculations for electron collisions with atomic nitrogen

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Zatsarinny, Oleg; Bartschat, Klaus

    2014-10-01

    The B-spline R-matrix (BSR) with pseudostates method is employed to treat electron collisions with nitrogen atoms. Predictions for elastic scattering, excitation, and ionization are presented for incident energies between threshold and about 100 eV. The largest scattering model included 690 coupled states, most of which were pseudostates that simulate the effect of the high-lying Rydberg spectrum and, most importantly, the ionization continuum on the results for transitions between the discrete physical states of interest. Similar to our recent work on e-C collisions, this effect is particularly strong at ``intermediate'' incident energies of a few times the ionization threshold. Predictions from a number of collision models will be compared with each other and the very limited information currently available in the literature. Estimates for ionization cross sections will also be provided. This work was supported by the China Scholarship Council (Y.W.) and the United States National Science Foundation under Grants PHY-1068140, PHY-1212450, and the XSEDE allocation PHY-090031 (O.Z. and K.B.).

  2. B-spline R-matrix with pseudostates calculations for electron collisions with atomic nitrogen

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Zatsarinny, Oleg; Bartschat, Klaus

    2014-05-01

    The B-spline R-matrix (BSR) with pseudostates method is employed to treat electron collisions with nitrogen atoms. Predictions for elastic scattering, excitation, and ionization are presented for incident energies between threshold and about 100 eV. The largest scattering model included 690 coupled states, most of which were pseudostates that simulate the effect of the high-lying Rydberg spectrum and, most importantly, the ionization continuum on the results for transitions between the discrete physical states of interest. Similar to our recent work on e-C collisions, this effect is particularly strong at ``intermediate'' incident energies of a few times the ionization threshold. Predictions from a number of collision models will be compared with each other and the very limited information currently available in the literature. Estimates for ionization cross sections will also be provided. This work was supported by the China Scholarship Council (Y.W.) and the United States National Science Foundation under grants PHY-1068140, PHY-1212450, and the XSEDE allocation PHY-090031.

  3. Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting

    NASA Astrophysics Data System (ADS)

    Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2012-02-01

    We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.

  4. CFD-based derivative-free optimization using polyharmonic splines, Part 1

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya; Cavaglieri, Daniele; Bewley, Thomas

    2012-11-01

    Nonsmooth CFD-based optimization problems are difficult, due both to the nonconvexity of the cost function and to the extreme cost of each function evaluation. In this work, we develop a derivative-free GPS optimization scheme which makes maximum use of each function evaluation. We seek to improve on the efficiency of the existing methods that have been applied to this class of problems (genetic algorithms, SMF, orthoMADS, etc). At each optimization step, the algorithm proposed creates a Delaunay triangulation based on the existing evaluation points. In each simplex so created, the algorithm optimizes a cost function based on a polyharmonic spline interpolant. This interpolation strategy behaves appropriately even when the evaluation points are clustered in particular regions of interest in parameter space (in contrast with the Kriging interpolation strategy used in existing GPS/SMF algorithms). At each optimization step, an appropriately-modeled error function is combined with the interpolant, weighted with a tuning parameter governing the trade-off between local refinement and global exploration.
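A polyharmonic spline interpolant of the kind used here can be sketched as follows: a minimal plain-Python version with the cubic kernel phi(r) = r^3 and a linear polynomial tail (the side conditions that make the system uniquely solvable). The optimization scheme's error-function weighting is not included:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def _phi(p, q):
    """Cubic polyharmonic kernel phi(r) = r^3."""
    return (sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5) ** 3

def polyharmonic_fit(pts, vals):
    """Weights w and linear tail c of s(x) = sum_i w_i phi(|x - x_i|)
    + c0 + c1*x + c2*y, for 2-D points, via the augmented system
    [[A, P], [P^T, 0]] that enforces orthogonality of w to linears."""
    n = len(pts)
    P = [[1.0, p[0], p[1]] for p in pts]
    A = [[_phi(p, q) for q in pts] + P[i] for i, p in enumerate(pts)]
    for j in range(3):
        A.append([P[i][j] for i in range(n)] + [0.0, 0.0, 0.0])
    sol = solve(A, list(vals) + [0.0, 0.0, 0.0])
    return sol[:n], sol[n:]

def polyharmonic_eval(pts, w, c, x):
    s = c[0] + c[1] * x[0] + c[2] * x[1]
    for wi, p in zip(w, pts):
        s += wi * _phi(x, p)
    return s
```

Unlike Kriging with a fixed correlation model, this interpolant stays well behaved when the evaluation points cluster, which is the property the abstract highlights.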

  5. Growth curve analysis for plasma profiles using smoothing splines. Annual progress report, June 1992--June 1993

    SciTech Connect

    Imre, K.

    1993-05-01

    We are developing a profile analysis code for the statistical estimation of the parametric dependencies of the temperature and density profiles in tokamaks. Our code uses advanced statistical techniques to determine the optimal fit, i.e. the fit which minimizes the predictive error. For a dataset of forty TFTR Ohmic profiles, our preliminary results indicate that the profile shape depends almost exclusively on q_a', but that the shape dependencies are not Gaussian. We are now comparing various shape models on the TFTR data. In the first six months, we have completed the core modules of the code, including a B-spline package for variable knot locations, a data-based method to determine the optimal smoothing parameters, self-consistent estimation of the bias errors, and adaptive fitting near the plasma edge. Visualization graphics already include three-dimensional surface plots, discharge-by-discharge plots of the predicted curves with error bars together with the actual measurement values, and plots of the basis functions with errors.

  6. Evaluating techniques for multivariate classification of non-collocated spatial data.

    SciTech Connect

    McKenna, Sean Andrew

    2004-09-01

    Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set in the upper Dakota aquifer in Kansas (USA) and previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach for dealing with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. 
Resulting classification and uncertainty maps are compared between all non-collocated approaches as well as to the original collocated approach. The non-collocated approaches lead to significantly different group definitions compared to the collocated case. To some extent, these differences can be explained by the kriging variance of the estimated variables. Sequential discrimination of locations with a minimum multivariate kriging variance constraint produces slightly improved results relative to the collection point and the non-hierarchical classification of the estimated vectors.

  7. On the anomaly of velocity-pressure decoupling in collocated mesh solutions

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook; Vanoverbeke, Thomas

    1991-01-01

    The use of various pressure correction algorithms originally developed for fully staggered meshes can yield a velocity-pressure decoupled solution for collocated meshes. The mechanism that causes velocity-pressure decoupling is identified. It is shown that the use of a partial differential equation for the incremental pressure eliminates such a mechanism and yields a velocity-pressure coupled solution. Example flows considered are a three dimensional lid-driven cavity flow and a laminar flow through a 90 deg bend square duct. Numerical results obtained using the collocated mesh are in good agreement with the measured data and other numerical results.

  8. A Corpus-Driven Design of a Test for Assessing the ESL Collocational Competence of University Students

    ERIC Educational Resources Information Center

    Jaen, Maria Moreno

    2007-01-01

    This paper reports an assessment of the collocational competence of students of English Linguistics at the University of Granada. This was carried out to meet a two-fold purpose. On the one hand, we aimed to establish a solid corpus-driven approach based upon a systematic and reliable framework for the evaluation of collocational competence in…

  9. The Relationship between Experiential Learning Styles and the Immediate and Delayed Retention of English Collocations among EFL Learners

    ERIC Educational Resources Information Center

    Mohammadzadeh, Afsaneh

    2012-01-01

    This study was carried out to find out if there was any significant difference in learning English collocations by learning with different dominant experiential learning styles. Seventy-five participants took part in the study in which they were taught a series of English collocations. The entry knowledge of the participants with regard to…

  10. The Challenge of English Language Collocation Learning in an ES/FL Environment: PRC Students in Singapore

    ERIC Educational Resources Information Center

    Ying, Yang

    2015-01-01

    This study aimed to seek an in-depth understanding about English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…

  11. An Automatic Collocation Writing Assistant for Taiwanese EFL Learners: A Case of Corpus-Based NLP Technology

    ERIC Educational Resources Information Center

    Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin

    2008-01-01

    Previous work in the literature reveals that EFL learners were deficient in collocations that are a hallmark of near native fluency in learner's writing. Among different types of collocations, the verb-noun (V-N) one was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…

  12. Formulaic Language and Collocations in German Essays: From Corpus-Driven Data to Corpus-Based Materials

    ERIC Educational Resources Information Center

    Krummes, Cedric; Ensslin, Astrid

    2015-01-01

    Whereas there exists a plethora of research on collocations and formulaic language in English, this article contributes towards a somewhat less developed area: the understanding and teaching of formulaic language in German as a foreign language. It analyses formulaic sequences and collocations in German writing (corpus-driven) and provides modern…

  13. Visualization of multidimensional data with collocated paired coordinates and general line coordinates

    NASA Astrophysics Data System (ADS)

    Kovalerchuk, Boris

    2013-12-01

    Often multidimensional data are visualized by splitting n-D data into a set of low-dimensional data. While useful, this destroys the integrity of the n-D data and leads to a shallow understanding of complex n-D data. To mitigate this challenge, the difficult perceptual task of assembling low-dimensional visualized pieces into whole n-D vectors must be solved. Another way is lossy dimension reduction that maps n-D vectors to 2-D vectors (e.g., Principal Component Analysis). Such 2-D vectors carry only part of the information from the n-D vectors, with no way to restore the n-D vectors exactly. An alternative way to a deeper understanding of n-D data is a visual representation in 2-D that fully preserves the n-D data. Parallel and Radial coordinates are such methods. Developing new methods that preserve dimensions is a long-standing and challenging task, which we address by proposing Paired Coordinates, a new type of n-D data visual representation, and by generalizing Parallel and Radial coordinates into General Line Coordinates. The important novelty of the concept of Paired Coordinates is that it uses a single 2-D plot to represent n-D data as an oriented graph, based on the idea of collocating pairs of attributes. The advantage of General Line Coordinates and Paired Coordinates is in providing a common framework that includes Parallel and Radial coordinates and generates a large number of new visual representations of multidimensional data without lossy dimension reduction.
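The collocated-pairs idea can be sketched in a few lines. This is an illustrative mapping only; the plotting itself and the generalized General Line Coordinates are omitted:

```python
def to_paired_coordinates(v):
    """Collocated Paired Coordinates: an n-D point (n even) becomes the
    directed polyline p1 -> p2 -> ... through the 2-D points
    p_k = (v[2k], v[2k+1]), all drawn in a single 2-D plot."""
    if len(v) % 2:
        raise ValueError("pad the vector to an even dimension first")
    return [(v[i], v[i + 1]) for i in range(0, len(v), 2)]

def from_paired_coordinates(pairs):
    """Inverse mapping, demonstrating that the representation is lossless."""
    return [c for p in pairs for c in p]
```

An n-D point thus occupies n/2 graph vertices in one plot rather than n axes, and the round trip back to the original vector is exact, which is the dimension-preserving property the abstract contrasts with lossy projections.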

  14. Towards a More General Type of Univariate Constrained Interpolation with Fractal Splines

    NASA Astrophysics Data System (ADS)

    Chand, A. K. B.; Viswanathan, P.; Reddy, K. M.

    2015-09-01

    Recently, in [Electron. Trans. Numer. Anal. 41 (2014) 420-442] the authors introduced a new class of rational cubic fractal interpolation functions with linear denominators via fractal perturbation of traditional nonrecursive rational cubic splines and investigated their basic shape-preserving properties. The main goal of the current paper is to embark on univariate constrained fractal interpolation that is more general than what has been considered so far. To this end, we propose some strategies for selecting the parameters of the rational fractal spline so that the interpolating curves lie strictly above or below a prescribed linear or quadratic spline function. The approximation property of the proposed rational cubic fractal spline is established by using the Peano kernel theorem as an interlude. The paper also illustrates the background theory with examples.

  15. Penalized splines for smooth representation of high-dimensional Monte Carlo datasets

    NASA Astrophysics Data System (ADS)

    Whitehorn, Nathan; van Santen, Jakob; Lafebre, Sven

    2013-09-01

    Detector response to a high-energy physics process is often estimated by Monte Carlo simulation. For purposes of data analysis, the results of this simulation are typically stored in large multi-dimensional histograms, which can quickly become both too large to easily store and manipulate and numerically problematic due to unfilled bins or interpolation artifacts. We describe here an application of the penalized spline technique (Eilers and Marx, 1996) [1] to efficiently compute B-spline representations of such tables and discuss aspects of the resulting B-spline fits that simplify many common tasks in handling tabulated Monte Carlo data in high-energy physics analysis, in particular their use in maximum-likelihood fitting.
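The penalty mechanism behind P-splines can be sketched in its simplest degenerate form, the Whittaker smoother: one degree-0 "basis function" per histogram bin plus a second-difference penalty. This illustrates the penalty idea only, not the authors' B-spline table representation:

```python
def whittaker_smooth(y, lam):
    """Minimise sum_i (y_i - z_i)^2 + lam * sum (second differences of z)^2,
    i.e. solve (I + lam * D2^T D2) z = y, here by dense Gaussian elimination."""
    n = len(y)
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for r in range(n - 2):          # add lam * D2^T D2, stencil (1, -2, 1)
        d = [0.0] * n
        d[r], d[r + 1], d[r + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            for j in range(n):
                A[i][j] += lam * d[i] * d[j]
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):              # elimination with partial pivoting
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (M[r][n] - sum(M[r][k] * z[k] for k in range(r + 1, n))) / M[r][r]
    return z

def roughness(y):
    """Sum of squared second differences: the quantity being penalised."""
    return sum((y[i] - 2 * y[i + 1] + y[i + 2]) ** 2 for i in range(len(y) - 2))
```

Replacing the per-bin basis with overlapping B-splines gives the Eilers-Marx P-spline proper; the penalty then also regularises unfilled bins, which is what makes the technique attractive for sparse Monte Carlo tables.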

  16. Material approximation of data smoothing and spline curves inspired by slime mould.

    PubMed

    Jones, Jeff; Adamatzky, Andrew

    2014-09-01

    The giant single-celled slime mould Physarum polycephalum is known to approximate a number of network problems via growth and adaptation of its protoplasmic transport network and can serve as an inspiration towards unconventional, material-based computation. In Physarum, predictable morphological adaptation is prevented by its adhesion to the underlying substrate. We investigate what possible computations could be achieved if these limitations were removed and the organism was free to completely adapt its morphology in response to changing stimuli. Using a particle model of Physarum displaying emergent morphological adaptation behaviour, we demonstrate how a minimal approach to collective material computation may be used to transform and summarise properties of spatially represented datasets. We find that the virtual material relaxes more strongly to high-frequency changes in data, which can be used for the smoothing (or filtering) of data by approximating moving average and low-pass filters in 1D datasets. The relaxation and minimisation properties of the model enable the spatial computation of B-spline curves (approximating splines) in 2D datasets. Both clamped and unclamped spline curves of open and closed shapes can be represented, and the degree of spline curvature corresponds to the relaxation time of the material. The material computation of spline curves also includes novel quasi-mechanical properties, including unwinding of the shape between control points and a preferential adhesion to longer, straighter paths. Interpolating splines could not directly be approximated due to the formation and evolution of Steiner points at narrow vertices, but were approximated after rectilinear pre-processing of the source data. This pre-processing was further simplified by transforming the original data to contain the material inside the polyline. These exemplary results expand the repertoire of spatially represented unconventional computing devices by demonstrating a simple, collective and distributed approach to data and curve smoothing. PMID:24979075
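    For comparison, the moving-average filtering that the virtual material is reported to approximate can be stated conventionally; a minimal sketch of a centered moving-average smoother for 1D data (the window size is an illustrative parameter, not one taken from the paper):

```python
def moving_average(data, window):
    """Smooth a 1D dataset with a simple centered moving-average filter.

    Edge samples use a truncated window so the output has the same
    length as the input.
    """
    half = window // 2
    smoothed = []
    for i in range(len(data)):
        lo = max(0, i - half)
        hi = min(len(data), i + half + 1)
        neighbourhood = data[lo:hi]
        smoothed.append(sum(neighbourhood) / len(neighbourhood))
    return smoothed

# A noisy signal with a step: smoothing attenuates the
# high-frequency jitter while preserving the step.
noisy = [0, 1, 0, 1, 0, 10, 9, 10, 9, 10]
print(moving_average(noisy, 3))
```

    Larger windows behave more like a stronger low-pass filter, which is the conventional analogue of the material's longer relaxation times.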

  17. Generation of Knot Net for Calculation of Quadratic Triangular B-spline Surface of Human Head

    NASA Astrophysics Data System (ADS)

    Mihalík, Ján

    2011-09-01

    This paper deals with the calculation of the quadratic triangular B-spline surface of the human head for the purpose of modeling it in the MPEG-4 SNHC standard video codec. In connection with this, we propose an algorithm for generating the knot net and present the results of its application to the triangulation of the 3D polygonal model Candide. For this model, the generated knot net, and an established distribution of control points, we then show the calculated quadratic triangular B-spline surface of the human head, including its textured version using the texture of the selected avatar.
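    For orientation, a quadratic triangular patch of the kind underlying such a surface can be evaluated from six control points using the degree-2 Bernstein basis over barycentric coordinates; a minimal sketch (the control-point names and values are illustrative assumptions, not taken from the paper):

```python
def eval_quadratic_triangle(b200, b020, b002, b110, b011, b101, u, v, w):
    """Evaluate one quadratic triangular Bezier patch at barycentric (u, v, w).

    u + v + w must equal 1; the six control points are 3-tuples.
    Degree-2 Bernstein basis over a triangle:
      B200 = u^2, B020 = v^2, B002 = w^2,
      B110 = 2uv, B011 = 2vw, B101 = 2uw.
    """
    weights = (u * u, v * v, w * w, 2 * u * v, 2 * v * w, 2 * u * w)
    points = (b200, b020, b002, b110, b011, b101)
    return tuple(
        sum(wgt * p[k] for wgt, p in zip(weights, points)) for k in range(3)
    )

# The corner (u, v, w) = (1, 0, 0) reproduces the corner control point b200.
corner = eval_quadratic_triangle(
    (0, 0, 0), (1, 0, 0), (0, 1, 0),
    (0.5, 0, 0.3), (0.5, 0.5, 0.3), (0, 0.5, 0.3),
    1.0, 0.0, 0.0,
)
print(corner)  # (0.0, 0.0, 0.0)
```

    Evaluating many such patches over a triangulated knot net, with control points distributed on the head model, yields the composite surface.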

  18. Blade geometry description using B-splines and general surfaces of revolution

    NASA Astrophysics Data System (ADS)

    Miller, Perry Lennox, IV

    This thesis presents a comprehensive set of algorithms, techniques, and accompanying diagrams for the geometric description of turbomachinery blades. Many of the techniques presented are new and cannot be found in the literature. The geometry construction process is broken into several steps. Each of these steps has variations that constrain different design parameters, enabling design-specific constraints to be met. The process of mapping curves from two-dimensional space to three-dimensional space is described in detail. All the algorithms are designed to work with B-spline curves (and in some cases NURBS curves) and produce a B-spline surface for the blade.
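    The mapping of two-dimensional profile curves into three-dimensional space can be illustrated in its simplest form for an ordinary surface of revolution; the following sketch (an illustrative version, not the thesis's algorithm) revolves sampled (r, z) profile points about the z-axis:

```python
import math

def revolve_point(r, z, theta):
    """Map a 2D meridional profile point (r, z) onto a surface of revolution.

    The profile lies in a half-plane through the z-axis; rotating it by
    theta radians about the z-axis yields the 3D Cartesian point.
    """
    return (r * math.cos(theta), r * math.sin(theta), z)

def revolve_curve(profile, thetas):
    """Sweep a sampled (r, z) profile curve around the axis, one ring per angle."""
    return [[revolve_point(r, z, t) for (r, z) in profile] for t in thetas]

# A straight profile (a cylinder generator) revolved through a quarter turn.
rings = revolve_curve([(1.0, 0.0), (1.0, 2.0)], [0.0, math.pi / 2])
print(rings[1][0])  # the point (r, z) = (1, 0) rotated 90 degrees about z
```

    A blade-to-blade curve defined in a parametric (m, theta)-like plane would be mapped analogously, except that the radius and axial position themselves vary along a general meridional stream surface rather than staying on a simple cylinder.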

  19. Pattern recognition and lithological interpretation of collocated seismic and magnetotelluric models using self-organizing maps

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Muñoz, G.; Moeck, I.

    2012-05-01

    Joint interpretation of models from seismic tomography and inversion of magnetotelluric (MT) data is an efficient approach to determine the lithology of the subsurface. Statistical methods are well established but were developed for only two types of models so far (seismic P velocity and electrical resistivity). We apply self-organizing maps (SOMs), which have no limitations in the number of parameters considered in the joint interpretation. Our SOM method includes (1) generation of data vectors from the seismic and MT images, (2) unsupervised learning, (3) definition of classes by algorithmic segmentation of the SOM using image processing techniques and (4) application of learned knowledge to classify all data vectors and assign a lithological interpretation for each data vector. We apply the workflow to collocated P velocity, vertical P-velocity gradient and resistivity models derived along a 40 km profile around the geothermal site Groß Schönebeck in the Northeast German Basin. The resulting lithological model consists of eight classes covering Cenozoic, Mesozoic and Palaeozoic sediments down to 5 km depth. There is a remarkable agreement between the litho-type distribution from the SOM analysis and regional marker horizons interpolated from sparse 2-D industrial reflection seismic data. The most interesting features include (1) characteristic properties of the Jurassic (low P-velocity gradients, low resistivity values) interpreted as the signature of shales, and (2) a pattern within the Upper Permian Zechstein layer with low resistivity and increased P-velocity values within the salt depressions and increased resistivity and decreased P velocities in the salt pillows. The latter is explained in our interpretation by flow of less dense salt matrix components to form the pillows while denser and more brittle evaporites such as anhydrite remain in place during the salt mobilization.
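    The unsupervised-learning and classification steps of such a workflow can be sketched with a minimal SOM trainer; the grid size, decay schedule, and toy two-parameter data below are illustrative assumptions, not the settings used in the study:

```python
import numpy as np

def train_som(data, grid_w, grid_h, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map on row-wise data vectors.

    Each data vector (e.g. P velocity, velocity gradient and resistivity
    at one image node) is matched to its best unit; that unit and its
    grid neighbours move toward the vector, with learning rate and
    neighbourhood radius decaying over the epochs.
    """
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for vec in data:
            dist = np.linalg.norm(weights - vec, axis=2)
            by, bx = np.unravel_index(np.argmin(dist), dist.shape)
            grid_d2 = (ys - by) ** 2 + (xs - bx) ** 2
            h = np.exp(-grid_d2 / (2 * sigma ** 2))
            weights += lr * h[..., None] * (vec - weights)
    return weights

def classify(weights, data):
    """Assign each data vector the flat index of its best-matching unit."""
    flat = weights.reshape(-1, weights.shape[-1])
    return [int(np.argmin(np.linalg.norm(flat - v, axis=1))) for v in data]

# Two well-separated toy clusters end up on different map units.
data = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
som = train_som(data, grid_w=3, grid_h=3)
labels = classify(som, data)
print(labels)
```

    In the published workflow the trained map is additionally segmented into classes with image-processing techniques before the lithological labels are assigned; here the flat unit index stands in for that class label.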

  20. Your Participation Is "Greatly/Highly" Appreciated: Amplifier Collocations in L2 English

    ERIC Educational Resources Information Center

    Edmonds, Amanda; Gudmestad, Aarnes

    2014-01-01

    The current study sets out to investigate collocational knowledge for a set of 13 English amplifiers among native and nonnative speakers of English, by providing a partial replication of one of the projects reported on in Granger (1998). The project combines both phraseological and distributional approaches to research into formulaic language to…