Science.gov

Sample records for spline collocation method

  1. Schwarz and multilevel methods for quadratic spline collocation

    SciTech Connect

    Christara, C.C.; Smith, B.

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.
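
    A toy illustration of the overlapping-Schwarz idea on the 1D Poisson problem, with standard second-order finite differences standing in for quadratic spline collocation (the discretization, grid sizes, and overlap below are illustrative assumptions, not the authors' setup):

```python
import numpy as np

# -u'' = f on [0,1], u(0) = u(1) = 0, exact solution sin(pi x).
n = 100                                    # global grid x_0..x_n, h = 1/n
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
exact = np.sin(np.pi * x)

def subdomain_solve(f_int, left, right, h):
    """Direct FD solve of -u'' = f on a subinterval with Dirichlet data."""
    m = len(f_int)
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = f_int.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

u = np.zeros(n + 1)
l, r = 40, 60                              # overlapping subdomains [0, x_r] and [x_l, 1]
for _ in range(20):                        # multiplicative (alternating) Schwarz sweeps
    u[1:r] = subdomain_solve(f[1:r], 0.0, u[r], h)          # left solve, right trace fixed
    u[l + 1:n] = subdomain_solve(f[l + 1:n], u[l], 0.0, h)  # right solve, left trace fixed
print("max error:", np.abs(u - exact).max())   # limited only by the O(h^2) discretization
```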

  2. Tensorial Basis Spline Collocation Method for Poisson's Equation

    NASA Astrophysics Data System (ADS)

    Plagne, Laurent; Berthou, Jean-Yves

    2000-01-01

    This paper aims to describe the tensorial basis spline collocation method (TBSCM) applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h⁴) and O(h⁶) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: as an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256³ non-uniform 3D Cartesian mesh using 128 T3E-750 processors. This represents 215 Mflops per processor.

  3. Numerical solution of differential-algebraic equations using the spline collocation-variation method

    NASA Astrophysics Data System (ADS)

    Bulatov, M. V.; Rakhvalov, N. P.; Solovarova, L. S.

    2013-03-01

    Numerical methods for solving initial value problems for differential-algebraic equations are proposed. The approximate solution is represented as a continuous vector spline whose coefficients are found using the collocation conditions stated for a subgrid with the number of collocation points less than the degree of the spline and the minimality condition for the norm of this spline in the corresponding spaces. Numerical results for some model problems are presented.

  4. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  5. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    SciTech Connect

    Kim, Sang Dong

    1996-12-31

    In this talk we discuss finite element and finite difference preconditioning techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).
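
    A minimal sketch of spline collocation for a 1D analogue of this setup, using clamped cubic B-splines collocated at the Greville points (the paper itself uses Hermite cubic splines collocated at the Gauss points, which yields higher accuracy; everything below is an illustrative assumption):

```python
import numpy as np
from scipy.interpolate import BSpline

# Solve -u'' + u = f on [0,1], u(0) = u(1) = 0, by cubic B-spline collocation.
n, k = 32, 3                                    # subintervals, spline degree
t = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, n + 1), [1.0] * k))
nb = len(t) - k - 1                             # number of B-spline basis functions
xg = np.array([t[j + 1: j + k + 1].mean() for j in range(nb)])  # Greville points

f = lambda x: (1.0 + np.pi**2) * np.sin(np.pi * x)   # manufactured right-hand side
exact = lambda x: np.sin(np.pi * x)

A = np.zeros((nb, nb))
for j in range(nb):                             # column j: operator applied to basis j
    c = np.zeros(nb); c[j] = 1.0
    b = BSpline(t, c, k)
    A[:, j] = -b.derivative(2)(xg) + b(xg)      # collocate (-D^2 + I) at Greville points
    A[0, j], A[-1, j] = b(0.0), b(1.0)          # first/last rows: boundary conditions
rhs = f(xg); rhs[0] = rhs[-1] = 0.0

u = BSpline(t, np.linalg.solve(A, rhs), k)
xs = np.linspace(0.0, 1.0, 1001)
print("max error:", np.abs(u(xs) - exact(xs)).max())
```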

  6. Quadratic spline collocation and parareal deferred correction method for parabolic PDEs

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, Yan; Li, Rongjian

    2016-06-01

    In this paper, we consider a linear parabolic PDE, using optimal quadratic spline collocation (QSC) methods for the space discretization and the parareal technique for the time domain. Meanwhile, a deferred correction technique is used to improve the accuracy during the iterations. The error estimation is presented and the stability is analyzed. Numerical experiments, carried out on a parallel computer with 40 CPUs, are included to exhibit the effectiveness of the hybrid algorithm.
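
    A compact sketch of the parareal building block on a scalar test ODE (textbook parareal with backward-Euler propagators; the paper's QSC space discretization and deferred correction are not reproduced here):

```python
import numpy as np

lam, T, N = -2.0, 1.0, 10          # decay ODE y' = lam*y, time domain cut into N slices
dt = T / N

def G(y, dt):                      # coarse propagator: one backward Euler step
    return y / (1 - lam * dt)

def F(y, dt, m=20):                # fine propagator: m backward Euler substeps
    for _ in range(m):
        y = y / (1 - lam * dt / m)
    return y

Y = np.empty(N + 1); Y[0] = 1.0
for n in range(N):                 # initial coarse sweep
    Y[n + 1] = G(Y[n], dt)

for k in range(5):                 # parareal iterations
    Fvals = np.array([F(Y[n], dt) for n in range(N)])   # independent: parallel in principle
    Ynew = np.empty_like(Y); Ynew[0] = Y[0]
    for n in range(N):             # sequential coarse correction
        Ynew[n + 1] = G(Ynew[n], dt) + Fvals[n] - G(Y[n], dt)
    Y = Ynew
print(Y[-1], np.exp(lam * T))      # iterates converge toward the fine solution
```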

  7. An ADI extrapolated Crank-Nicolson orthogonal spline collocation method for nonlinear reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Fernandes, Ryan I.; Fairweather, Graeme

    2012-08-01

    An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.

  8. Cubic Spline Collocation Method for the Simulation of Turbulent Thermal Convection in Compressible Fluids

    SciTech Connect

    Castillo, Victor Manuel

    1999-01-01

    A collocation method using cubic splines is developed and applied to simulate steady and time-dependent, including turbulent, thermally convecting flows for two-dimensional compressible fluids. The state variables and the fluxes of the conserved quantities are approximated by cubic splines in both space directions. This method is shown to be numerically conservative and to have a local truncation error proportional to the fourth power of the grid spacing. A "dual-staggered" Cartesian grid, where energy and momentum are updated on one grid and mass density on the other, is used to discretize the flux form of the compressible Navier-Stokes equations. Each grid line is staggered so that the fluxes, in each direction, are calculated at the grid midpoints. This numerical method is validated by simulating thermally convecting flows, from steady to turbulent, reproducing known results. Once validated, the method is used to investigate many aspects of thermal convection with high numerical accuracy. Simulations demonstrate that multiple steady solutions can coexist at the same Rayleigh number for compressible convection. As a system is driven further from equilibrium, a drop in the time-averaged dimensionless heat flux (and the dimensionless internal entropy production rate) occurs at the transition from laminar-periodic to chaotic flow. This observation is consistent with experiments on real convecting fluids. Near this transition, both harmonic and chaotic solutions may exist for the same Rayleigh number. The chaotic flow loses phase-space information at a greater rate, while the periodic flow transports heat (produces entropy) more effectively. A linear sum of the dimensionless forms of these rates connects the two flow morphologies over the entire range for which they coexist. For simulations of systems with higher Rayleigh numbers, a scaling relation exists relating the dimensionless heat flux to the two-sevenths power of the Rayleigh number, suggesting the …

  9. Quartic B-spline collocation method applied to Korteweg de Vries equation

    NASA Astrophysics Data System (ADS)

    Zin, Shazalina Mat; Majid, Ahmad Abd; Ismail, Ahmad Izani Md

    2014-07-01

    The Korteweg-de Vries (KdV) equation is known as a mathematical model of shallow water waves. The general form of this equation is u_t + εuu_x + μu_xxx = 0, where u(x,t) describes the elongation of the wave at displacement x and time t. In this work, the one-soliton solution of the KdV equation has been obtained numerically using a quartic B-spline collocation method in the displacement x and a finite difference approach in the time t. Two test problems have been identified and solved. Approximate solutions and errors for these two test problems were obtained for different values of t. In order to assess the accuracy of the method, the L2-norm and L∞-norm have been calculated. The mass, energy and momentum of the KdV equation have also been calculated. The results obtained show that the present method can approximate the solution very well, but the L2-norm and L∞-norm also increase as time increases.
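
    For reference, the one-soliton solution of this form of the KdV equation and the three invariants usually labeled mass, momentum and energy can be written as follows (a hedged restatement of classical results; the paper's normalization may differ):

```latex
u(x,t) = \frac{3c}{\varepsilon}\,
         \operatorname{sech}^{2}\!\left(\frac{1}{2}\sqrt{\frac{c}{\mu}}\,(x - ct - x_{0})\right),
\qquad
I_{1} = \int u\,dx,\quad
I_{2} = \int u^{2}\,dx,\quad
I_{3} = \int \left(u^{3} - \frac{3\mu}{\varepsilon}\,u_{x}^{2}\right) dx .
```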

  10. The basis spline method and associated techniques

    SciTech Connect

    Bottcher, C.; Strayer, M.R.

    1989-01-01

    We outline the Basis Spline and Collocation methods for the solution of Partial Differential Equations. Particular attention is paid to the theory of errors, and the handling of non-self-adjoint problems which are generated by the collocation method. We discuss applications to Poisson's equation, the Dirac equation, and the calculation of bound and continuum states of atomic and nuclear systems. 12 refs., 6 figs.

  11. Spline methods for conservation equations

    SciTech Connect

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the Basis Spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs.

  12. Spectral collocation methods

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.

    1987-01-01

    This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.

  13. A Collocation Method for Numerical Solutions of Coupled Burgers' Equations

    NASA Astrophysics Data System (ADS)

    Mittal, R. C.; Tripathi, A.

    2014-09-01

    In this paper, we propose a collocation-based numerical scheme to obtain approximate solutions of the coupled Burgers' equations. The scheme employs collocation of modified cubic B-spline functions. We have used modified cubic B-spline functions for the unknown dependent variables u, v, and their derivatives with respect to the space variable x. Collocation forms of the partial differential equations result in systems of first-order ordinary differential equations (ODEs). In this scheme, we did not use any transformation or linearization method to handle nonlinearity. The obtained system of ODEs has been solved by a strong-stability-preserving Runge-Kutta method. The proposed scheme needs less storage space and execution time. The test problems considered in the literature have been discussed to demonstrate the strength and utility of the proposed scheme. The computed numerical solutions are in good agreement with the exact solutions and consistent with those available in earlier studies. The scheme is simple as well as easy to implement, and it provides approximate solutions not only at the grid points, but also at any point in the solution range.
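
    The structure of such a scheme can be sketched on the single Burgers' equation u_t = -u u_x + ν u_xx (scipy's CubicSpline stands in for the authors' modified cubic B-splines, and a standard Shu-Osher SSP-RK3 step is assumed; this is not the paper's code):

```python
import numpy as np
from scipy.interpolate import CubicSpline

nu = 0.05                                 # viscosity
x = np.linspace(0.0, 1.0, 161)
u = np.sin(np.pi * x)                     # u(0) = u(1) = 0 for all time

def rhs(u):
    s = CubicSpline(x, u)                 # spline fit at each stage (collocation-like)
    return -u * s(x, 1) + nu * s(x, 2)    # spatial derivatives from the spline

def ssprk3_step(u, dt):
    # Shu-Osher strong-stability-preserving third-order Runge-Kutta
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

dt, T = 2e-4, 0.5
for _ in range(int(T / dt)):
    u = ssprk3_step(u, dt)
    u[0] = u[-1] = 0.0                    # enforce Dirichlet boundaries
print("max |u| at t = 0.5:", np.abs(u).max())
```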

  14. Collocation and Galerkin Time-Stepping Methods

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2011-01-01

    We study the numerical solutions of ordinary differential equations by one-step methods where the solution at t_n is known and that at t_{n+1} is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_{n+1}), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
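
    As a concrete instance of that equivalence, here is the 2-stage Radau IIA collocation method (collocation points c = 1/3, 1) applied to the linear test problem y' = λy, written as an implicit Runge-Kutta step (a standard textbook scheme, not taken from the paper):

```python
import numpy as np

A = np.array([[5/12, -1/12],
              [3/4,   1/4 ]])         # Radau IIA Butcher matrix (order 3)
b = np.array([3/4, 1/4])              # weights = last row (stiffly accurate)

lam, dt, T = -1.0, 0.2, 2.0
y = 1.0
for _ in range(int(round(T / dt))):
    # For a linear problem the stage equations k = lam*(y + dt*A@k) are linear:
    k = np.linalg.solve(np.eye(2) - dt * lam * A, lam * y * np.ones(2))
    y = y + dt * (b @ k)
print(y, np.exp(lam * T))             # third-order accurate agreement
```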

  15. Splines and the Galerkin method for solving the integral equations of scattering theory

    NASA Astrophysics Data System (ADS)

    Brannigan, M.; Eyre, D.

    1983-06-01

    This paper investigates the Galerkin method with cubic B-spline approximants to solve singular integral equations that arise in scattering theory. We stress the relationship between the Galerkin and collocation methods. The error bound for cubic spline approximants has a convergence rate of O(h⁴), where h is the mesh spacing. We test the utility of the Galerkin method by solving both two- and three-body problems. We demonstrate, by solving the Amado-Lovelace equation for a system of three identical bosons, that our numerical treatment of the scattering problem is both efficient and accurate for small linear systems.

  16. A Semi-Implicit, Fourier-Galerkin/B-Spline Collocation Approach for DNS of Compressible, Reacting, Wall-Bounded Flow

    NASA Astrophysics Data System (ADS)

    Oliver, Todd; Ulerich, Rhys; Topalian, Victor; Malaya, Nick; Moser, Robert

    2013-11-01

    A discretization of the Navier-Stokes equations appropriate for efficient DNS of compressible, reacting, wall-bounded flows is developed and applied. The spatial discretization uses a Fourier-Galerkin/B-spline collocation approach. Because of the algebraic complexity of the constitutive models involved, a flux-based approach is used where the viscous terms are evaluated using repeated application of the first derivative operator. In such an approach, a filter is required to achieve appropriate dissipation at high wavenumbers. We formulate a new filter source operator based on the viscous operator. Temporal discretization is achieved using the SMR91 hybrid implicit/explicit scheme. The linear implicit operator is chosen to eliminate wall-normal acoustics from the CFL constraint while also decoupling the species equations from the remaining flow equations, which minimizes the cost of the required linear algebra. Results will be shown for a mildly supersonic, multispecies boundary layer case inspired by the flow over the ablating surface of a space capsule entering Earth's atmosphere. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

  17. Collocation methods for distillation design. 1: Model description and testing

    SciTech Connect

    Huss, R.S.; Westerberg, A.W.

    1996-05-01

    Fast and accurate distillation design requires a model that significantly reduces the problem size while accurately approximating a full-order distillation column model. This collocation model builds on the concepts of past collocation models for design of complex real-world separation systems. Two variable transformations make this method unique. Polynomials cannot accurately fit trajectories which flatten out. In columns, flat sections occur in the middle of large column sections or where concentrations go to 0 or 1. With an exponential transformation of the tray number, which maps zero to an infinite number of trays onto the range 0-1, four collocation trays can accurately simulate a large column section. With a hyperbolic tangent transformation of the mole fractions, the model can simulate columns which reach high purities. Furthermore, this model uses multiple collocation elements for a column section, which is more accurate than a single high-order collocation section.

  18. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  19. Collocation method for fractional quantum mechanics

    SciTech Connect

    Amore, Paolo; Hofmann, Christoph P.; Saenz, Ricardo A.; Fernandez, Francisco M.

    2010-12-15

    We show that it is possible to obtain numerical solutions to quantum mechanical problems involving a fractional Laplacian, using a collocation approach based on little sinc functions, which discretizes the Schroedinger equation on a uniform grid. The different boundary conditions are naturally implemented using sets of functions with the appropriate behavior. Good convergence properties are observed. A comparison with results based on a Wentzel-Kramers-Brillouin analysis is performed.

  20. Collocation Method for Numerical Solution of Coupled Nonlinear Schroedinger Equation

    SciTech Connect

    Ismail, M. S.

    2010-09-30

    The coupled nonlinear Schroedinger equation models several interesting physical phenomena and serves as a model equation for optical fibers with linear birefringence. In this paper we use a collocation method to solve this equation and test the resulting scheme for stability and accuracy. Numerical tests using a single soliton and the interaction of three solitons are presented.

  1. Comparison of Implicit Collocation Methods for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first one is based on an explicit computation of the coefficients of polynomials and the second one relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.
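
    One of the two building blocks, the fourth-order compact approximation of u_xx, can be sketched in isolation (the standard 1D Padé stencil is assumed; the paper's 2D solver and time collocation are not reproduced):

```python
import numpy as np
from scipy.linalg import solve_banded

# Compact 4th-order scheme for v ~ u'':
#   (1/12)(v_{j-1} + 10 v_j + v_{j+1}) = (u_{j-1} - 2 u_j + u_{j+1}) / h^2
n = 64
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
u = np.sin(np.pi * x)

d2 = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2     # standard second differences (interior)

# Tridiagonal system for the n-1 interior values of v; boundary values of u''
# are assumed known (here from the exact solution) and moved to the RHS.
ab = np.zeros((3, n - 1))
ab[0, 1:] = 1 / 12                             # superdiagonal
ab[1, :] = 10 / 12                             # diagonal
ab[2, :-1] = 1 / 12                            # subdiagonal
rhs = d2.copy()
rhs[0] -= (-np.pi**2 * np.sin(np.pi * x[0])) / 12
rhs[-1] -= (-np.pi**2 * np.sin(np.pi * x[-1])) / 12
v = solve_banded((1, 1), ab, rhs)
print("max error:", np.abs(v + np.pi**2 * np.sin(np.pi * x[1:-1])).max())  # ~ O(h^4)
```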

  2. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods here presented can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  3. Collocation methods for distillation design. 2: Applications for distillation

    SciTech Connect

    Huss, R.S.; Westerberg, A.W.

    1996-05-01

    The authors present applications of a collocation method for modeling distillation columns that they developed in a companion paper. They discuss implementation of the model, including the ASCEND (Advanced System for Computations in ENgineering Design) system, which enables one to create complex models with simple building blocks and interactively learn to solve them. They first investigate applying the model to compute minimum reflux for a given separation task, exactly solving nonsharp and approximately solving sharp split minimum reflux problems. They next illustrate the use of the collocation model to optimize the design of a single column capable of carrying out a prescribed set of separation tasks. The optimization picks the best column diameter and total number of trays. It also picks the feed tray for each of the prescribed separations.

  4. Pseudospectral collocation methods for fourth order differential equations

    NASA Technical Reports Server (NTRS)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C¹ continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C³ continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.

  5. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks, and to improve upon this bad scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  6. Splines and control theory

    NASA Technical Reports Server (NTRS)

    Zhang, Zhimin; Tomlinson, John; Martin, Clyde

    1994-01-01

    In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including the polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate properties of different spline approximations.

  7. Acoustic scattering by multiple elliptical cylinders using collocation multipole method

    NASA Astrophysics Data System (ADS)

    Lee, Wei-Ming

    2012-05-01

    This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical-cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.

  8. The chain collocation method: A spectrally accurate calculus of forms

    NASA Astrophysics Data System (ADS)

    Rufat, Dzhelil; Mason, Gemma; Mullen, Patrick; Desbrun, Mathieu

    2014-01-01

    Preserving in the discrete realm the underlying geometric, topological, and algebraic structures at stake in partial differential equations has proven to be a fruitful guiding principle for numerical methods in a variety of fields such as elasticity, electromagnetism, or fluid mechanics. However, structure-preserving methods have traditionally used spaces of piecewise polynomial basis functions for differential forms. Yet, in many problems where solutions are smoothly varying in space, a spectral numerical treatment is called for. In an effort to provide structure-preserving numerical tools with spectral accuracy on logically rectangular grids over periodic or bounded domains, we present a spectral extension of the discrete exterior calculus (DEC), with resulting computational tools extending well-known collocation-based spectral methods. Its efficient implementation using fast Fourier transforms is provided as well.

  9. An analytic reconstruction method for PET based on cubic splines

    NASA Astrophysics Data System (ADS)

    Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.

    2014-03-01

    PET imaging is an important nuclear medicine modality that measures in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D, reconstruction method called SRT, Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding which restricts reconstruction only within object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.

  10. A numerical method of finding potentiometric titration end-points by use of approximative spline functions.

    PubMed

    Ren, K

    1990-07-01

    A new numerical method of determining potentiometric titration end-points is presented. It consists in calculating the coefficients of approximative spline functions describing the experimental data (e.m.f., volume of titrant added). The end-point (the inflection point of the curve) is determined by calculating zero points of the second derivative of the approximative spline function. This spline function, unlike rational spline functions, is free from oscillations and its course is largely independent of random errors in e.m.f. measurements. The proposed method is useful for direct analysis of titration data and especially as a basis for construction of microcomputer-controlled automatic titrators. PMID:18964999
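
    The idea translates almost directly into a few lines with a modern library (a hedged sketch on synthetic data; scipy's UnivariateSpline stands in for the paper's approximative spline functions, and all names below are illustrative):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic titration curve: e.m.f. vs titrant volume, with measurement noise.
rng = np.random.default_rng(0)
v = np.linspace(0.0, 20.0, 81)
emf = 200.0 + 150.0 * np.tanh((v - 12.5) / 0.8) + rng.normal(0.0, 1.0, v.size)

# Smoothing (approximative) spline; k=5 so the 2nd derivative is a cubic spline,
# which scipy can root-find directly.
s = UnivariateSpline(v, emf, k=5, s=v.size * 1.0)
d2 = s.derivative(2)
candidates = d2.roots()                       # zeros of the second derivative
# The end-point is the inflection where the slope of the curve is largest.
end_point = max(candidates, key=lambda r: abs(s.derivative(1)(r)))
print("estimated end-point volume:", end_point)
```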

  11. Multi-element probabilistic collocation method in high dimensions

    SciTech Connect

    Foo, Jasmine; Karniadakis, George Em

    2010-03-01

    We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method (MEPCM), and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order μ and the effective dimension ν, with ν ≤ μ for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.

  12. Spline interpolation on unbounded domains

    NASA Astrophysics Data System (ADS)

    Skeel, Robert D.

    2016-06-01

    Spline interpolation is a splendid tool for multiscale approximation on unbounded domains. In particular, it is well suited for use by the multilevel summation method (MSM) for calculating a sum of pairwise interactions for a large set of particles in linear time. Outlined here is an algorithm for spline interpolation on unbounded domains that is efficient and elegant though not so simple. Further gains in efficiency are possible via quasi-interpolation, which compromises collocation but with minimal loss of accuracy. The MSM, which may also be of value for continuum models, embodies most of the best features of both hierarchical clustering methods (tree methods, fast multipole methods, hierarchical matrix methods) and FFT-based 2-level methods (particle-particle particle-mesh methods, particle-mesh Ewald methods).

  13. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  14. Uncertainty Quantification in State Estimation using the Probabilistic Collocation Method

    SciTech Connect

    Lin, Guang; Zhou, Ning; Ferryman, Thomas A.; Tuffner, Francis K.

    2011-03-23

    In this study, a new efficient uncertainty quantification technique, the probabilistic collocation method (PCM) on sparse grid points, is employed to enable the evaluation of uncertainty in state estimation. The PCM allows us to use just a small number of ensembles to quantify the uncertainty in estimating the state variables of power systems. By using sparse grid points, the PCM approach can handle a large number of uncertain parameters in power systems at relatively low computational cost compared with classic Monte Carlo (MC) simulations. The algorithm and procedure are outlined, and we demonstrate the capability of the approach and illustrate its application to uncertainty quantification in the state estimation of the IEEE 14-bus model as an example. MC simulations have also been conducted to verify the accuracy of the PCM approach. Comparing the results obtained from MC simulations with PCM results for the mean and standard deviation of uncertain parameters shows that the PCM approach is computationally more efficient than MC simulations.
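
    A toy version of the underlying probabilistic-collocation idea, using a one-dimensional Gauss-Hermite rule (the paper's sparse grids and power-system state estimator are not reproduced; the model function below is an arbitrary stand-in):

```python
import numpy as np

model = lambda p: np.exp(0.3 * p) + p**2               # stand-in nonlinear model

# Probabilists' Gauss-Hermite rule: nodes/weights for the weight exp(-x^2/2).
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
w = weights / np.sqrt(2.0 * np.pi)                     # normalize to the N(0,1) pdf
vals = model(nodes)                                    # only 5 model runs
mean = w @ vals
std = np.sqrt(w @ (vals - mean) ** 2)
print("PCM mean/std:", mean, std)

# Monte Carlo check needs vastly more model evaluations:
samples = model(np.random.default_rng(1).standard_normal(200_000))
print("MC  mean/std:", samples.mean(), samples.std())
```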

  15. Bicubic B-spline interpolation method for two-dimensional heat equation

    NASA Astrophysics Data System (ADS)

    Hamid, Nur Nadiah Abd.; Majid, Ahmad Abd.; Ismail, Ahmad Izani Md.

    2015-10-01

    The two-dimensional heat equation was solved using a bicubic B-spline interpolation method. An arbitrary surface equation was generated by a bicubic B-spline equation. This equation was incorporated into the heat equation after discretizing the time using a finite difference method. An under-determined system of linear equations was obtained and solved to obtain the approximate analytical solution of the problem. The method was tested on one example.
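
    The bicubic ingredient can be sketched with a library spline surface (scipy's RectBivariateSpline as a stand-in for the paper's bicubic B-spline representation; the test surface is an illustrative assumption):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0.0, 1.0, 41)
y = np.linspace(0.0, 1.0, 41)
X, Y = np.meshgrid(x, y, indexing="ij")
U = np.sin(np.pi * X) * np.sin(np.pi * Y)        # sample surface u(x, y)

s = RectBivariateSpline(x, y, U, kx=3, ky=3)     # bicubic spline surface
lap = s(x, y, dx=2) + s(x, y, dy=2)              # u_xx + u_yy, as needed by u_t = lap(u)
print("max error vs exact Laplacian:", np.abs(lap + 2 * np.pi**2 * U).max())
```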

  16. Exponential time differencing methods with Chebyshev collocation for polymers confined by interacting surfaces

    SciTech Connect

    Liu, Yi-Xin Zhang, Hong-Dong

    2014-06-14

    We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in temporal domain and spectral accuracy in spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique in a conventional approach. The performance of ETDRK4 is examined in comparison with the operator splitting methods with either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces.

  17. Exponential time differencing methods with Chebyshev collocation for polymers confined by interacting surfaces

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Xin; Zhang, Hong-Dong

    2014-06-01

    We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in temporal domain and spectral accuracy in spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique in a conventional approach. The performance of ETDRK4 is examined in comparison with the operator splitting methods with either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces.

  18. THE LOSS OF ACCURACY OF STOCHASTIC COLLOCATION METHOD IN SOLVING NONLINEAR DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Tran, Hoang A; Trenchea, Catalin S

    2013-01-01

    In this paper we show how the stochastic collocation method (SCM) can fail to converge for nonlinear differential equations with random coefficients. First, we consider the Navier-Stokes equation with uncertain viscosity and derive error estimates for the stochastic collocation discretization. Our analysis gives some indicators of how the nonlinearity negatively affects the accuracy of the method. The stochastic collocation method is then applied to the noisy Lorenz system. Simulation results demonstrate that the solution of a nonlinear equation can be highly irregular with respect to the random data, and in such cases the stochastic collocation method cannot capture the correct solution.

  19. Segment and spline synthesis optimization method for LED based freeform total-internal-reflection lens design

    NASA Astrophysics Data System (ADS)

    Chen, Enguo; Zhuang, Zhenfeng; Cai, Jin; Liu, Yan; Yu, Feihong

    2012-10-01

    This paper presents a segment and spline synthesis optimization method (SSS method) for freeform total-internal-reflection (TIR) lens design. Before the optimization starts, a series of discrete control points is used to describe the TIR lens profile. To realize the initial optimization, the segment method is applied to optimize a linear-segmented TIR lens. The final optimization is then achieved by the spline optimization method, after which the cubic-spline-modeled TIR lens, with the characteristics of low cost and easy fabrication, satisfies the target illumination requirements. The detailed design principle and optimization process of the SSS method are analyzed and compared in the paper. Complementing each other, the synthesis of the segment and spline optimization methods realizes the prescribed design and greatly improves design efficiency. As an example, a specially designed polymethyl methacrylate (PMMA) freeform TIR lens for LED general lighting demonstrates the effectiveness of this method. The uniformity of the lens increases significantly, from 67% to 88%, after the segment and spline optimizations, respectively. A high light output efficiency (LOE) of 99.3% is achieved within the target illumination area for the final lens system. It is believed that the SSS method can be applied to the design of other freeform illumination optics.

  20. Method and machine for splining clutch hubs with close tolerance spline bellmouth and oil seal surface roundness

    SciTech Connect

    Hill, G.R.

    1987-11-10

    A power transmission member is described comprising a radially-extending end wall and a cylindrical axially-extending sleeve connected to the end wall and terminating remote from the end wall in an open end. The sleeve has pressure-formed internal and external axially-extending splines formed therein by intermeshing of teeth of a mandrel on which the sleeve is mounted and teeth of a pair of racks slidable therepast. The splines terminate short of the open sleeve end in an unsplined cylindrical ring-shaped lip portion which reduces bellmouth of the splines to within about 0.010 inch along their length.

  1. Fast Spectral Collocation Method for Surface Integral Equations of Potential Problems in a Spheroid

    PubMed Central

    Xu, Zhenli; Cai, Wei

    2009-01-01

    This paper proposes a new technique to speed up the computation of the matrix of spectral collocation discretizations of surface single and double layer operators over a spheroid. The layer densities are approximated by a spectral expansion of spherical harmonics and the spectral collocation method is then used to solve surface integral equations of potential problems in a spheroid. With the proposed technique, the computation cost of collocation matrix entries is reduced from 𝒪(M²N⁴) to 𝒪(MN⁴), where N² is the number of spherical harmonics (i.e., size of the matrix) and M is the number of one-dimensional integration quadrature points. Numerical results demonstrate the spectral accuracy of the method. PMID:20414359

  2. A compressed primal-dual method for generating bivariate cubic L1 splines

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Fang, Shu-Cherng; Lavery, John E.

    2007-04-01

    In this paper, we develop a compressed version of the primal-dual interior point method for generating bivariate cubic L1 splines. Discretization of the underlying optimization model, which is a nonsmooth convex programming problem, leads to an overdetermined linear system that can be handled by interior point methods. Taking advantage of the special matrix structure of the cubic L1 spline problem, we design a compressed primal-dual interior point algorithm. Computational experiments indicate that this compressed primal-dual method is robust and is much faster than the ordinary (uncompressed) primal-dual interior point algorithm.

  3. NOKIN1D: one-dimensional neutron kinetics based on a nodal collocation method

    NASA Astrophysics Data System (ADS)

    Verdú, G.; Ginestar, D.; Miró, R.; Jambrina, A.; Barrachina, T.; Soler, Amparo; Concejal, Alberto

    2014-06-01

    The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). The advantage compared with a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method, developed for the neutron diffusion equation and applied to the kinetics resolution of TRAC-BF1 thermal-hydraulics, is an adaptation of the traditional collocation methods for the discretization of partial differential equations, based on expanding the solution as a linear combination of analytical functions. We have chosen a nodal collocation method based on an expansion of the neutron fluxes in Legendre polynomials in each cell. The qualification is carried out by analyzing the turbine trip transient from the NEA benchmark at the Peach Bottom NPP, using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method.

  4. A configurable B-spline parameterization method for structural optimization of wing boxes

    NASA Astrophysics Data System (ADS)

    Yu, Alan Tao

    2009-12-01

    This dissertation presents a synthesis of methods for structural optimization of aircraft wing boxes. The optimization problem considered herein is the minimization of structural weight with respect to component sizes, subject to stress constraints. Different aspects of structural optimization methods representing the current state-of-the-art are discussed, including sequential quadratic programming, sensitivity analysis, parameterization of design variables, constraint handling, and multiple load treatment. Shortcomings of the current techniques are identified and a B-spline parameterization representing the structural sizes is proposed to address them. A new configurable B-spline parameterization method for structural optimization of wing boxes is developed that makes it possible to flexibly explore design spaces. An automatic scheme using different levels of B-spline parameterization configurations is also proposed, along with a constraint aggregation method in order to reduce the computational effort. Numerical results are compared to evaluate the effectiveness of the B-spline approach and the constraint aggregation method. To evaluate the new formulations and explore design spaces, the wing box of an airliner is optimized for the minimum weight subject to stress constraints under multiple load conditions. The new approaches are shown to significantly reduce the computational time required to perform structural optimization and to yield designs that are more realistic than existing methods.

  5. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

    We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.

  6. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.

  7. A novel stochastic collocation method for uncertainty propagation in complex mechanical systems

    NASA Astrophysics Data System (ADS)

    Qi, WuChao; Tian, SuMei; Qiu, ZhiPing

    2015-02-01

    This paper presents a novel stochastic collocation method based on the equivalent weak form of a multivariate function integral to quantify and manage uncertainties in complex mechanical systems. The proposed method, which combines the advantages of the response surface method and the traditional stochastic collocation method, only sets integral points at the guide lines of the response surface. The statistics, in an engineering problem with many uncertain parameters, are then transformed into a linear combination of simple functions' statistics. Furthermore, the issue of determining a simple method to solve for the weight-factor sets is discussed in detail. The weight-factor sets of two commonly used probabilistic distribution types are given in table form. Studies of the computational accuracy and effort show that a good balance with present computer capacity is achieved. It should be noted that this is a non-gradient, non-intrusive algorithm with strong portability. For the sake of validating the procedure, three numerical examples, concerning a mathematical function with an analytical expression, the structural design of a straight wing, and the flutter analysis of a composite wing, are used to show the effectiveness of the guided stochastic collocation method.

  8. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

    Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solutions of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of the collocation points is constructed. The approximate derivative is then found by a matrix times vector multiply. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative are also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
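
    A minimal sketch of the central object described above, the derivative matrix, for the common special case of Chebyshev-Gauss-Lobatto points (Trefethen's standard construction; the paper treats general point sequences):

```python
import numpy as np

def cheb_diff_matrix(n):
    """First-derivative matrix on x_j = cos(pi*j/n), j = 0..n."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative-sum trick for diagonal
    return D, x

D, x = cheb_diff_matrix(24)
f = np.exp(x) * np.sin(5 * x)
df_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print("max error:", np.abs(D @ f - df_exact).max())   # spectral accuracy
```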

  9. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    SciTech Connect

    Avila, Gustavo; Carrington, Tucker

    2015-12-07

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  10. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    NASA Astrophysics Data System (ADS)

    Avila, Gustavo; Carrington, Tucker

    2015-12-01

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  11. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra.

    PubMed

    Avila, Gustavo; Carrington, Tucker

    2015-12-01

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates. PMID:26646870

  12. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

    A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.

  13. B-spline modal method: a polynomial approach compared to the Fourier modal method.

    PubMed

    Walz, Michael; Zebrowski, Thomas; Küchenmeister, Jens; Busch, Kurt

    2013-06-17

    A detailed analysis of the B-spline Modal Method (BMM) for one- and two-dimensional diffraction gratings and a comparison to the Fourier Modal Method (FMM) is presented. Owing to its intrinsic capability to accurately resolve discontinuities, BMM avoids the notorious problems of FMM that are associated with the Gibbs phenomenon. As a result, BMM facilitates significantly more efficient eigenmode computations. With regard to BMM-based transmission and reflection computations, it is demonstrated that a novel Galerkin approach (in conjunction with a scattering-matrix algorithm) allows for an improved field matching between different layers. This approach is superior to the traditional point-wise field matching. Moreover, only this novel Galerkin approach allows for a competitive extension of BMM to the case of two-dimensional diffraction gratings. These improvements will be very useful for high-accuracy grating computations in general and for the analysis of associated electromagnetic field profiles in particular.

  14. A Fourier collocation time domain method for numerically solving Maxwell's equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1991-01-01

    A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method, and time integration is performed using a second-order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel-plate waveguide and into a dielectric and conducting medium.
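
    A one-dimensional, vacuum-only sketch of the two ingredients named above (FFT-based spatial derivatives and E and B collocated on the same grid points); the paper's second-order semi-implicit integrator and material handling are replaced here by classical RK4 and normalized units with c = 1, so this is an illustration rather than the reported scheme.

        import numpy as np

        # Fourier collocation for 1D Maxwell in vacuum (c = 1): a TEM pulse
        # with E = B propagates to the right without changing shape.
        N, L = 256, 2 * np.pi
        x = np.linspace(0.0, L, N, endpoint=False)
        dx = L / N
        k = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # angular wavenumbers

        def ddx(u):
            """Spectral derivative of a periodic field via the FFT."""
            return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

        def rhs(q):                                   # q stacks (E, B)
            E, B = q
            return np.array([-ddx(B), -ddx(E)])

        q = np.array([np.exp(-100 * (x - np.pi) ** 2)] * 2)   # E = B pulse
        dt = 0.5 * dx
        for _ in range(200):                          # classical RK4 steps
            k1 = rhs(q)
            k2 = rhs(q + 0.5 * dt * k1)
            k3 = rhs(q + 0.5 * dt * k2)
            k4 = rhs(q + dt * k3)
            q += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        print("pulse centre is now at x =", x[np.argmax(q[0])])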

  15. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations, or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and, thus, optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  16. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.

  17. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    SciTech Connect

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-11-20

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L² error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
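
    For readers unfamiliar with probabilistic collocation, the sketch below shows its single-element, one-dimensional core under stated assumptions: a stand-in model is sampled at Gauss-Legendre points of a uniform random parameter, and moments follow by quadrature; ME-PCM would first partition the parameter range into elements and apply such a rule on each.

        import numpy as np

        # One-dimensional probabilistic collocation with a Gauss-Legendre
        # rule for a parameter Y ~ U(0, 1); model(y) stands in for a PDE solve.
        def model(y):
            return np.exp(y) * np.sin(2.0 * y)

        nodes, weights = np.polynomial.legendre.leggauss(8)
        y = 0.5 * (nodes + 1.0)          # map [-1, 1] to the U(0, 1) parameter
        w = 0.5 * weights                # quadrature weights on [0, 1]
        q = model(y)                     # collocated model evaluations
        mean = w @ q
        variance = w @ (q - mean) ** 2
        print("mean =", mean, " variance =", variance)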

  18. Solute transport via alternating-direction collocation using the modified method of characteristics

    NASA Astrophysics Data System (ADS)

    Allen, Myron B.; Khosravani, Azar

    We present a finite-element collocation method for modeling underground solute transport in two space dimensions when advection is dominant. The scheme uses a modified method of characteristics to approximate advective terms, thereby reducing the temporal truncation error and allowing accurate transport of solute by the velocity field. In conjunction with this approach, we employ an alternating-direction scheme, yielding a highly parallelizable algorithm that solves two-dimensional problems as sequences of simpler problems having one-dimensional matrix structure.

  19. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for the spatial derivatives) and collocation techniques (for the time component) to numerically solve the two-dimensional heat equation. We employ second-order and fourth-order schemes for the spatial derivatives, and the discretization gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
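
    A hedged sketch of the spatial part described above: the second-order five-point Laplacian on the unit square with homogeneous Dirichlet data; since reproducing the paper's time collocation would take more space, a Crank-Nicolson step stands in to show the nonsingular linear system that arises.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Second-order finite differences for u_t = u_xx + u_yy; the initial
        # eigenmode sin(pi x) sin(pi y) decays exactly like exp(-2 pi^2 t).
        n = 50
        h = 1.0 / (n + 1)
        T = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
                     [-1, 0, 1]) / h**2
        A = sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))

        xs = np.linspace(h, 1.0 - h, n)
        X, Y = np.meshgrid(xs, xs, indexing="ij")
        u = (np.sin(np.pi * X) * np.sin(np.pi * Y)).ravel()

        dt = 1.0e-3
        lu = spla.splu((sp.identity(n * n) - 0.5 * dt * A).tocsc())
        M = sp.identity(n * n) + 0.5 * dt * A
        for _ in range(100):                          # advance to t = 0.1
            u = lu.solve(M @ u)
        print("computed peak:", u.max(), " exact:", np.exp(-2 * np.pi**2 * 0.1))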

  20. Numerical Algorithm Based on Haar-Sinc Collocation Method for Solving the Hyperbolic PDEs

    PubMed Central

    Javadi, H. H. S.; Navidi, H. R.

    2014-01-01

    The present study investigates the Haar-Sinc collocation method for the solution of the hyperbolic telegraph partial differential equations. The advantages of this technique are that not only is the convergence rate of the Sinc approximation exponential, but the computational speed is also high due to the use of the Haar operational matrices. The technique converts the problem to the solution of linear algebraic equations by expanding the required approximation in terms of Sinc functions in space and Haar functions in time with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, we present four examples through which our claim is confirmed. PMID:25485295

  1. Numerical algorithm based on Haar-Sinc collocation method for solving the hyperbolic PDEs.

    PubMed

    Pirkhedri, A; Javadi, H H S; Navidi, H R

    2014-01-01

    The present study investigates the Haar-Sinc collocation method for the solution of the hyperbolic telegraph partial differential equations. The advantages of this technique are that not only is the convergence rate of the Sinc approximation exponential, but the computational speed is also high due to the use of the Haar operational matrices. The technique converts the problem to the solution of linear algebraic equations by expanding the required approximation in terms of Sinc functions in space and Haar functions in time with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, we present four examples through which our claim is confirmed. PMID:25485295

  2. A time domain collocation method for studying the aeroelasticity of a two dimensional airfoil with a structural nonlinearity

    NASA Astrophysics Data System (ADS)

    Dai, Honghua; Yue, Xiaokui; Yuan, Jianping; Atluri, Satya N.

    2014-08-01

    A time domain collocation method for the study of the motion of a two-dimensional aeroelastic airfoil with a cubic structural nonlinearity is presented. The method first transforms the governing ordinary differential equations into a system of nonlinear algebraic equations (NAEs), which are then solved by a Jacobian-inverse-free NAE solver. Using the aeroelastic airfoil as a prototypical system, the time domain collocation method is shown here to be mathematically equivalent to the well-known high dimensional harmonic balance method. Based on the fact that the high dimensional harmonic balance method is essentially a collocation method in disguise, we clearly explain the aliasing phenomenon of the high dimensional harmonic balance method. The conventional harmonic balance method is also applied for comparison. Previous studies show that the harmonic balance method does not produce aliasing in the framework of solving the Duffing equation; however, we demonstrate that a mathematical type of aliasing occurs in the harmonic balance method for the present self-excited nonlinear dynamical system. In addition, a parameter marching procedure is used to effectively eliminate the effects of aliasing pertaining to the time domain collocation method. Finally, the accuracy of the time domain collocation method is compared with that of the harmonic balance method.
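
    As a minimal concrete instance of time domain collocation (not the authors' code, and with made-up parameters), the sketch below collocates a truncated Fourier series for a Duffing-type oscillator u'' + u + eps*u^3 = F*cos(w*t) at equally spaced times over one period, producing the NAEs described above; scipy's fsolve stands in for the Jacobian-inverse-free solver.

        import numpy as np
        from scipy.optimize import fsolve

        eps, F, w, H = 0.2, 0.5, 1.2, 5          # cubic stiffness, forcing, harmonics
        M = 2 * H + 1                            # unknowns = collocation points
        t = np.linspace(0.0, 2 * np.pi / w, M, endpoint=False)

        def series(c):
            """Truncated Fourier series u(t) and its second derivative."""
            u = np.full_like(t, c[0])
            u2 = np.zeros_like(t)
            for n in range(1, H + 1):
                mode = c[2 * n - 1] * np.cos(n * w * t) + c[2 * n] * np.sin(n * w * t)
                u += mode
                u2 += -(n * w) ** 2 * mode
            return u, u2

        def residual(c):                          # ODE enforced at the nodes
            u, u2 = series(c)
            return u2 + u + eps * u**3 - F * np.cos(w * t)

        c0 = np.zeros(M)
        c0[1] = F / (1 - w**2)                    # linear response as a guess
        c = fsolve(residual, c0)
        print("first-harmonic amplitude:", np.hypot(c[1], c[2]))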

  3. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0, 1) to obtain that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and of fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.

  4. Collocations in Language Learning: Corpus-Based Automatic Compilation of Collocations and Bilingual Collocation Concordancer.

    ERIC Educational Resources Information Center

    Kita, Kenji; Ogata, Hiroaki

    1997-01-01

    Presents an efficient method for extracting collocations from corpora, which uses the cost criteria measure and a tree-based data structure. Proposes a bilingual collocation concordancer, a tool that provides language learners with collocation correspondences between a native and foreign language. (Eight references) (Author/CK)
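
    The cost-criteria measure itself is not reproduced here; as a stand-in, the sketch below ranks adjacent word pairs from a toy corpus by pointwise mutual information (PMI), a common association score for collocation extraction. All data are illustrative.

        import math
        from collections import Counter

        # Score bigrams by PMI = log2( P(w1, w2) / (P(w1) * P(w2)) ).
        corpus = ("strong tea strong coffee heavy rain heavy traffic "
                  "strong tea heavy rain strong argument").split()
        uni = Counter(corpus)
        bi = Counter(zip(corpus, corpus[1:]))
        N = len(corpus)

        def pmi(w1, w2):
            return math.log2(bi[(w1, w2)] * N / (uni[w1] * uni[w2]))

        for pair, count in bi.most_common(4):
            print(pair, count, round(pmi(*pair), 2))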

  5. Meshless collocation methods for the numerical solution of elliptic boundary value problems and the rotational shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Blakely, Christopher D.

    This dissertation has three main goals: (1) to explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) to demonstrate numerically why the meshless collocation method should become an attractive alternative to standard finite-element methods, owing to the simplicity of its implementation and its high-order convergence properties; and (3) to propose a meshless collocation method for large-scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high-performance methods for approximating the shallow-water equations, such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model is given, along with the introduction of parallel algorithmic routines for high-performance simulation of the model. We analyze the programming and computational aspects of the model using Fortran 90 and the Message Passing Interface (MPI) library, along with software and hardware specifications and performance tests. Details of many aspects of the implementation regarding performance, optimization, and stabilization are given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on some standardized test cases for the shallow-water equations on the sphere using the proposed method.

  6. A meshfree local RBF collocation method for anti-plane transverse elastic wave propagation analysis in 2D phononic crystals

    NASA Astrophysics Data System (ADS)

    Zheng, Hui; Zhang, Chuanzeng; Wang, Yuesheng; Sladek, Jan; Sladek, Vladimir

    2016-01-01

    In this paper, a meshfree or meshless local radial basis function (RBF) collocation method is proposed to calculate the band structures of two-dimensional (2D) anti-plane transverse elastic waves in phononic crystals. Three new techniques are developed for calculating the normal derivative of the field quantity required by the treatment of the boundary conditions, which significantly improve the stability of the local RBF collocation method. The general form of the local RBF collocation method for a unit cell with periodic boundary conditions is proposed, where the continuity conditions on the interface between the matrix and the scatterer are taken into account. The band structures or dispersion relations can be obtained by solving the eigenvalue problem and sweeping the boundary of the irreducible first Brillouin zone. The proposed local RBF collocation method is verified against corresponding results obtained with the finite element method (FEM). For different acoustic impedance ratios, various scatterer shapes, scatterer arrangements (lattice forms), and material properties, numerical examples are presented and discussed to show the performance and efficiency of the developed local RBF collocation method compared to the FEM for computing the band structures of 2D phononic crystals.
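
    To make the collocation mechanics concrete, here is a hedged toy version: global rather than local, and for a 1D Poisson problem instead of the band-structure eigenproblem, using multiquadric RBFs and a manufactured solution.

        import numpy as np

        # Global multiquadric RBF collocation for u'' = f on [0, 1] with
        # u(0) = u(1) = 0; the manufactured solution is u(x) = sin(pi x).
        N, c = 25, 0.5                             # nodes and shape parameter
        x = np.linspace(0.0, 1.0, N)
        r = x[:, None] - x[None, :]
        phi = np.sqrt(r**2 + c**2)                 # multiquadric basis
        phi_xx = c**2 / (r**2 + c**2) ** 1.5       # its second x-derivative

        A = phi_xx.copy()
        b = -np.pi**2 * np.sin(np.pi * x)          # f for u = sin(pi x)
        A[[0, -1]] = phi[[0, -1]]                  # boundary collocation rows
        b[[0, -1]] = 0.0

        u = phi @ np.linalg.solve(A, b)
        print("max error:", np.abs(u - np.sin(np.pi * x)).max())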

  7. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    PubMed Central

    Motsa, S. S.; Magagula, V. M.; Sibanda, P.

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearization, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables are presented to show the order of accuracy of the method; convergence graphs verify the convergence of the method; and error graphs show the excellent agreement between the results from this study and the known results from the literature. PMID:25254252
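
    The Chebyshev spectral collocation ingredient can be sketched in a few lines (after Trefethen's classic construction, not the authors' code): build the differentiation matrix on Gauss-Lobatto points and solve a linear test problem; the paper couples such matrices with quasilinearization to handle the nonlinear evolution equations.

        import numpy as np

        def cheb(N):
            """Chebyshev differentiation matrix on N+1 Gauss-Lobatto points."""
            x = np.cos(np.pi * np.arange(N + 1) / N)
            c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
            dX = x[:, None] - x[None, :]
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
            D -= np.diag(D.sum(axis=1))
            return D, x

        # Solve u'' = exp(4x) with u(-1) = u(1) = 0 by collocation.
        D, x = cheb(16)
        D2 = (D @ D)[1:-1, 1:-1]                   # strip boundary rows/columns
        u = np.zeros(x.size)
        u[1:-1] = np.linalg.solve(D2, np.exp(4 * x[1:-1]))
        exact = (np.exp(4 * x) - np.sinh(4) * x - np.cosh(4)) / 16
        print("max error:", np.abs(u - exact).max())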

  9. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    PubMed

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearization, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables are presented to show the order of accuracy of the method; convergence graphs verify the convergence of the method; and error graphs show the excellent agreement between the results from this study and the known results from the literature. PMID:25254252

  10. Interpolation of Superconducting Gravity Observations Using Least-Squares Collocation Method

    NASA Astrophysics Data System (ADS)

    Habel, Branislav; Janak, Juraj

    2014-05-01

    Pre-processing of the gravity data measured by a superconducting gravimeter involves the removal of spikes, offsets, and gaps. Their presence in the observations can limit the data analysis and degrade the quality of the obtained results. Short data gaps are filled with a theoretical signal in order to obtain continuous gravity records; this requires an accurate tidal model and possibly the atmospheric pressure at the observation site. The poster presents the design of an algorithm for the interpolation of gravity observations with a sampling rate of 1 min. The novel approach is based on least-squares collocation, which combines the adjustment of trend parameters, the filtering of noise, and prediction. It allows the interpolation of missing data up to a few hours without the need for any other information. Appropriate parameters for the covariance function are found using Bayes' theorem in a modified optimization process. The accuracy of the method is improved by the rejection of outliers before interpolation. For filling longer gaps, the collocation model is combined with the theoretical tidal signal for the rigid Earth. Finally, the proposed method was tested on superconducting gravity observations at several selected stations of the Global Geodynamics Project. Testing demonstrates its reliability and yields results comparable with the standard approach implemented in the ETERNA software package, without requiring an accurate tidal model.
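
    A bare-bones least-squares collocation gap-filler under assumed inputs: synthetic residuals with a Gaussian covariance model; the poster's Bayesian tuning of the covariance parameters and its outlier rejection are omitted.

        import numpy as np

        # LSC prediction: y_hat = C_sx (C_xx + D)^-1 y, with signal
        # covariance C and a diagonal noise covariance D.
        rng = np.random.default_rng(1)
        t_obs = np.concatenate([np.linspace(0, 4, 60), np.linspace(6, 10, 60)])
        y_obs = np.sin(t_obs) + 0.05 * rng.standard_normal(t_obs.size)
        t_gap = np.linspace(4.0, 6.0, 30)          # the gap to be filled

        def cov(a, b, var=1.0, ell=1.0):           # assumed Gaussian model
            return var * np.exp(-((a[:, None] - b[None, :]) / ell) ** 2)

        Cxx = cov(t_obs, t_obs) + 0.05**2 * np.eye(t_obs.size)
        Csx = cov(t_gap, t_obs)
        y_hat = Csx @ np.linalg.solve(Cxx, y_obs)
        print("max gap error:", np.abs(y_hat - np.sin(t_gap)).max())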

  11. MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D

    2013-01-01

    Stochastic collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but they may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents a general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages over standard, single-level approximation. The numerical results highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.

  12. A Haar wavelet collocation method for coupled nonlinear Schrödinger-KdV equations

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer; Esen, Alaattin; Bulut, Fatih

    2016-04-01

    In this paper, a Haar wavelet collocation method is proposed to obtain accurate numerical solutions of the coupled nonlinear Schrödinger-Korteweg-de Vries (KdV) equations. An explicit time-stepping scheme is used for the discretization of the time derivatives; the nonlinear terms appearing in the equations are linearized by a linearization technique, and the space derivatives are discretized by Haar wavelets. In order to test the accuracy and reliability of the proposed method, the L2 and L∞ error norms and the conserved quantities are used. The results obtained are also compared with previous ones from the finite element method, the Crank-Nicolson method, and radial basis function meshless methods. An error analysis of the Haar wavelets is also given.

  13. A Chebyshev Collocation Method for Moving Boundaries, Heat Transfer, and Convection During Directional Solidification

    NASA Technical Reports Server (NTRS)

    Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.

    1994-01-01

    Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting, and flame propagation. The directional solidification of semiconductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. The implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method, and a Picard-type iterative scheme.

  14. An iterative finite-element collocation method for parabolic problems using domain decomposition

    SciTech Connect

    Curran, M.C.

    1992-01-01

    Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.

  16. A novel monocular visual navigation method for cotton-picking robot based on horizontal spline segmentation

    NASA Astrophysics Data System (ADS)

    Xu, ShengYong; Wu, JuanJuan; Zhu, Li; Li, WeiHao; Wang, YiTian; Wang, Na

    2015-12-01

    Visual navigation is a fundamental technique for an intelligent cotton-picking robot. A cotton field contains many obstacles and much ground cover, which make furrow recognition and trajectory extraction difficult. In this paper, a new field navigation path extraction method is presented. First, the color image in RGB color space is pre-processed by the Otsu threshold algorithm and noise filtering. Second, the binary image is divided into numerous horizontal spline areas. In each area, the connected regions around the vertical center line of the image are calculated by the two-pass algorithm, and the center points of the connected regions become candidate points for the navigation path. Third, a series of navigation points is determined iteratively on the principle of the nearest distance between two candidate points in neighboring splines. Finally, the navigation path equation is fitted to the navigation points using the least-squares method. Experiments show that this method is accurate and effective and is suitable for visual navigation in the complex environment of a cotton field in its different phases.
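
    The last two steps (nearest-distance chaining of candidate points and least-squares fitting) can be sketched as follows with hypothetical candidate points; the image-processing stages (Otsu thresholding, two-pass labelling) are not reproduced.

        import numpy as np

        # Each horizontal spline area contributes a few candidate centre
        # points (x, y); chain them by nearest distance, then fit the path.
        rng = np.random.default_rng(2)
        areas = [np.column_stack([rng.uniform(0, 640, 3),
                                  np.full(3, 20 * i + 10)]) for i in range(10)]
        path = [areas[0][0]]
        for cands in areas[1:]:
            d = np.linalg.norm(cands - path[-1], axis=1)
            path.append(cands[d.argmin()])         # nearest candidate next
        P = np.array(path)
        slope, intercept = np.polyfit(P[:, 1], P[:, 0], 1)
        print(f"navigation line: x = {slope:.2f} * y + {intercept:.2f}")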

  17. Extended cubic B-spline method for solving a linear system of second-order boundary value problems.

    PubMed

    Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md

    2016-01-01

    A method based on the extended cubic B-spline is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters is carried out, and the truncation error is calculated. The method is tested on three examples. The examples suggest that this method produces comparable or more accurate results than the cubic B-spline and some other methods. PMID:27547688

  19. A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation

    SciTech Connect

    Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.

    2010-06-20

    Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued by uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. A stochastic collocation scheme is employed for the stochastic variables, whereas Kriging-based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to yield a significant improvement in efficiency over traditional Monte Carlo schemes. Problems with multiple probabilistic constraints are also discussed.

  20. Probabilistic collocation method for strongly nonlinear problems: 3. Transform by time

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao

    2016-03-01

    The probabilistic collocation method (PCM) has recently drawn wide attention for stochastic analysis. Its results may become inaccurate in the case of a strongly nonlinear relation between the random parameters and the model responses. To tackle this problem, we proposed a location-based transformed PCM (xTPCM) and a displacement-based transformed PCM (dTPCM) in previous parts of this series. Relying on the transform between response and space, these two methods, however, have certain limitations. In this study, we introduce a time-based transformed PCM (tTPCM) employing the transform between response and time. We conduct numerical experiments to investigate its performance in uncertainty quantification. The results show that the tTPCM greatly improves the accuracy of the traditional PCM in a cost-effective manner and is more general and convenient than the xTPCM/dTPCM.

  1. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented for subtracting the background from an energy-dispersive X-ray fluorescence (EDXRF) spectrum using cubic spline interpolation. To obtain the interpolation nodes accurately, a smoothing fit and a set of discriminant formulas were adopted. From these interpolation nodes, the background is estimated by the computed cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup, as well as on an existing sample spectrum. The results confirm that the method can properly subtract the background.
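
    An illustrative version of the subtraction step on a synthetic spectrum: the nodes are picked here by a crude local-minimum rule instead of the paper's smoothing fit and discriminant formulas, and the cubic spline through them is subtracted.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Synthetic EDXRF-like spectrum: smooth background plus two peaks.
        E = np.linspace(0.0, 20.0, 400)
        spec = (5 * np.exp(-0.1 * E)
                + 4 * np.exp(-((E - 8) / 0.3) ** 2)
                + 2 * np.exp(-((E - 14) / 0.4) ** 2))

        # Window minima approximate peak-free background samples.
        nodes = [i + spec[i:i + 50].argmin() for i in range(0, 400, 50)]
        background = CubicSpline(E[nodes], spec[nodes])(E)
        net = spec - background
        print("net height of the 8 keV peak:", net[(E > 7) & (E < 9)].max())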

  2. A new numerical method of finding potentiometric titration end-points by use of rational spline functions.

    PubMed

    Ren, K; Ren-Kurc, A

    1986-08-01

    A new numerical method of determining the position of the inflection point of a potentiometric titration curve is presented. It consists of describing the experimental data (emf, volume data-points) by means of a rational spline function. The co-ordinates of the titration end-point are determined by analysis of the first and second derivatives of the spline function formed. The method also allows analysis of distorted titration curves which cannot be interpreted by Gran's or other computational methods. PMID:18964159
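
    A sketch of the derivative analysis on synthetic data, with a smoothing spline standing in for the paper's rational spline: the end-point is located at the zero of the second derivative where the first derivative peaks.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(0)
        V = np.linspace(0.0, 20.0, 60)                  # titrant volume
        emf = 200 + 150 * np.tanh((V - 12.3) / 0.8)     # synthetic curve
        emf += rng.normal(0.0, 1.0, V.size)             # measurement noise

        s = UnivariateSpline(V, emf, k=5, s=V.size)     # quintic smoothing spline
        candidates = s.derivative(2).roots()            # zeros of emf''(V)
        slopes = s.derivative(1)(candidates)
        endpoint = candidates[np.argmax(slopes)]        # steepest inflection
        print("end-point volume:", round(float(endpoint), 2))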

  4. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline.

    PubMed

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases. PMID:27378903

  5. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline

    PubMed Central

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases. PMID:27378903

  6. Free vibration of generally-laminated, shear-deformable, composite rectangular plates using a spline Rayleigh-Ritz method

    NASA Astrophysics Data System (ADS)

    Dawe, D. J.; Wang, S.

    A Rayleigh-Ritz method is presented for predicting the natural frequencies of flat rectangular laminates which can have arbitrary lay-up. The effects of through-thickness shear deformation are included in the analysis. The displacement field utilizes B-spline functions in what has been referred to in earlier work as a B(k,k-1)-spline Rayleigh-Ritz method and the approach is versatile in the specification of boundary conditions. The results of a number of applications are presented in the form of studies showing the convergence of frequency values with increase in the number of spline sections used. The analysis procedure is seen to have good convergence characteristics when dealing with laminates of thin and thick geometry.

  7. Characteristics method with cubic-spline interpolation for open channel flow computation

    NASA Astrophysics Data System (ADS)

    Tsai, Tung-Lin; Chiang, Shih-Wei; Yang, Jinn-Chuang

    2004-10-01

    In the framework of the specified-time-interval scheme, the accuracy of the characteristics method depends strongly on the form of the interpolation. Linear interpolation is commonly coupled with the characteristics method (the LI method) in open channel flow computation. The LI method is easy to implement, but it leads to an inevitable smoothing of the solution. The characteristics method with Hermite cubic interpolation (the HP method, originally developed by Holly and Preissmann, 1977) was then proposed to largely reduce the error induced by the LI method. In this paper, cubic-spline interpolation on the space line or on the time line is integrated with the characteristics method (the CS method) for unsteady flow computation in open channels. Two hypothetical examples, including gradually and rapidly varied flows, are used to examine the applicability of the CS method as compared with the LI method, the HP method, and analytical solutions. The simulated results show that the CS method is comparable in accuracy to the HP method and more accurate than the LI method. Since it does not require the additional equations for spatial or temporal derivatives, the CS method is easier to implement and more efficient than the HP method.
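
    A minimal version of the CS method for pure advection, with assumed parameters throughout: each nodal value at the new time level is the old profile evaluated, via a cubic spline, at the foot of its characteristic.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Specified-time-interval characteristics for u_t + a u_x = 0.
        a, L, N = 1.0, 10.0, 201
        x = np.linspace(0.0, L, N)
        dt = 0.8 * (x[1] - x[0]) / a                    # Courant number 0.8
        u = np.exp(-((x - 2.0) / 0.4) ** 2)             # initial profile

        for _ in range(100):
            foot = np.clip(x - a * dt, 0.0, L)          # feet of characteristics
            u = CubicSpline(x, u)(foot)                 # spline interpolation

        exact = np.exp(-((x - 2.0 - a * 100 * dt) / 0.4) ** 2)
        print("max error:", np.abs(u - exact).max())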

  8. A Tooth Surface Finishing Method with a New Tool of a Vitrified Cubic Boron Nitride Wheel for Involute Internal Spline

    NASA Astrophysics Data System (ADS)

    Mizuno, Sadao; Morita, Tetuya; Ariura, Yasutsune

    For highly accurate and highly efficient tooth surface finishing, a vitrified cubic boron nitride (CBN) wheel or an electro-deposited CBN wheel exhibits superior performance. Honing by involute spline tooth meshing is effective due to the generating motion. However, a lack of wheel rigidity and an inadequate feed motion of the wheel tend to reduce the finishing performance, and as a result it is difficult to keep the tooth surface smooth. In this study, finishing with a vitrified CBN wheel is carried out using a new honing tool. The finishing performance is compared with that obtained using an electro-deposited wheel, and the finishing is carried out by braking the internal spline axis. The influence of different feed methods on the roughness of the finished tooth surface is investigated. Finishing using a vitrified CBN wheel while braking the internal spline axis shows superior performance.

  9. Central-force decomposition of spline-based modified embedded atom method potential

    NASA Astrophysics Data System (ADS)

    Winczewski, S.; Dziedzic, J.; Rybicki, J.

    2016-10-01

    Central-force decompositions are fundamental to the calculation of stress fields in atomic systems by means of Hardy stress. We derive expressions for a central-force decomposition of the spline-based modified embedded atom method (s-MEAM) potential. The expressions are subsequently simplified to a form that can be readily used in molecular-dynamics simulations, enabling the calculation of the spatial distribution of stress in systems treated with this novel class of empirical potentials. We briefly discuss the properties of the obtained decomposition and highlight further computational techniques that can be expected to benefit from the results of this work. To demonstrate the practicability of the derived expressions, we apply them to calculate stress fields due to an edge dislocation in bcc Mo, comparing their predictions to those of linear elasticity theory.

  10. A boundary collocation meshfree method for the treatment of Poisson problems with complex morphologies

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Mai, Weijie; Liang, Bowen; Buchheit, Rudolph G.

    2015-01-01

    A new meshfree method based on a discrete transformation of Green's basis functions is introduced to simulate Poisson problems with complex morphologies. The proposed Green's Discrete Transformation Method (GDTM) uses source points that are located along a virtual boundary outside the problem domain to construct the basis functions needed to approximate the field. The optimal number of Green's function source points and their relative distances with respect to the problem boundaries are evaluated to obtain the best approximation of the partition of unity condition. A discrete transformation technique together with the boundary point collocation method is employed to evaluate the unknown coefficients of the solution series by satisfying the problem boundary conditions. A comprehensive convergence study is presented to investigate the accuracy and convergence rate of the GDTM. We also demonstrate the application of this meshfree method to simulating conductive heat transfer in a heterogeneous materials system and the concentration of dissolved aluminum ions in the electrolyte solution formed near a passive corrosion pit.

  11. Elastic wave propagation in bars of arbitrary cross section: a generalized Fourier expansion collocation method.

    PubMed

    Lesage, Jonathan C; Bond, Jill V; Sinclair, Anthony N

    2014-09-01

    The problem of elastic wave propagation in an infinite bar of arbitrary cross section is studied via a generalized version of the Fourier expansion collocation method. In the current formulation, the exact three dimensional solution to Navier's equation in cylindrical coordinates is used to obtain the boundary traction vector as a periodic, piecewise continuous/differentiable function of the angular coordinate. Traction free conditions are then met by setting the Fourier coefficients of the boundary traction vector to zero without approximating the bounding surface by multi-sided polygons as in the method presented by Nagaya. The method is derived for a general cross section with no axial planes of symmetry. Using the general formulation it is shown that the symmetric and asymmetric modes decouple for cross sections having one axial plane of symmetry. An efficient algorithm for computing dispersion curves based on the current method is presented and used to obtain the fundamental longitudinal and flexural wave speeds for a bar of elliptical cross section. The results are compared to those obtained by previous researchers using exact and approximate treatments.

  12. Chebyshev collocation spectral lattice Boltzmann method for simulation of low-speed flows.

    PubMed

    Hejranfar, Kazem; Hajihassanpour, Mahya

    2015-01-01

    In this study, the Chebyshev collocation spectral lattice Boltzmann method (CCSLBM) is developed and assessed for the computation of low-speed flows. Both steady and unsteady flows are considered here. The discrete Boltzmann equation with the Bhatnagar-Gross-Krook approximation based on the pressure distribution function is considered, and the space discretization is performed by the Chebyshev collocation spectral method to achieve a highly accurate flow solver. To provide accurate unsteady solutions, the time integration of the temporal term in the lattice Boltzmann equation is performed by the fourth-order Runge-Kutta scheme. To achieve numerical stability and accuracy, physical boundary conditions based on the spectral solution of the governing equations implemented on the boundaries are used. An iterative procedure is applied to provide consistent initial conditions for the distribution function and the pressure field for the simulation of unsteady flows. The main advantage of using the CCSLBM over other high-order accurate lattice Boltzmann method (LBM)-based flow solvers is the decay of the error at exponential rather than at polynomial rates. Note also that the CCSLBM does not need any numerical dissipation or filtering for the solution to be stable, leading to highly accurate solutions. Three two-dimensional (2D) test cases are simulated herein: a regularized cavity, the Taylor vortex problem, and doubly periodic shear layers. The results obtained for these test cases are thoroughly compared with the analytical and available numerical results and show excellent agreement. The computational efficiency of the proposed solution methodology based on the CCSLBM is also examined by comparison with those of the standard streaming-collision (classical) LBM and two finite-difference LBM solvers. The study indicates that the CCSLBM provides more accurate and efficient solutions than these LBM solvers in terms of CPU time and memory usage, together with an exponential rate of convergence.

  13. Chebyshev collocation spectral lattice Boltzmann method for simulation of low-speed flows

    NASA Astrophysics Data System (ADS)

    Hejranfar, Kazem; Hajihassanpour, Mahya

    2015-01-01

    In this study, the Chebyshev collocation spectral lattice Boltzmann method (CCSLBM) is developed and assessed for the computation of low-speed flows. Both steady and unsteady flows are considered here. The discrete Boltzmann equation with the Bhatnagar-Gross-Krook approximation based on the pressure distribution function is considered, and the space discretization is performed by the Chebyshev collocation spectral method to achieve a highly accurate flow solver. To provide accurate unsteady solutions, the time integration of the temporal term in the lattice Boltzmann equation is performed by the fourth-order Runge-Kutta scheme. To achieve numerical stability and accuracy, physical boundary conditions based on the spectral solution of the governing equations implemented on the boundaries are used. An iterative procedure is applied to provide consistent initial conditions for the distribution function and the pressure field for the simulation of unsteady flows. The main advantage of using the CCSLBM over other high-order accurate lattice Boltzmann method (LBM)-based flow solvers is the decay of the error at exponential rather than at polynomial rates. Note also that the CCSLBM does not need any numerical dissipation or filtering for the solution to be stable, leading to highly accurate solutions. Three two-dimensional (2D) test cases are simulated herein: a regularized cavity, the Taylor vortex problem, and doubly periodic shear layers. The results obtained for these test cases are thoroughly compared with the analytical and available numerical results and show excellent agreement. The computational efficiency of the proposed solution methodology based on the CCSLBM is also examined by comparison with those of the standard streaming-collision (classical) LBM and two finite-difference LBM solvers. The study indicates that the CCSLBM provides more accurate and efficient solutions than these LBM solvers in terms of CPU time and memory usage, together with an exponential rate of convergence.

  15. High-order numerical solutions using cubic splines

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to the Burgers equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.
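
    Not the authors' formulation, but a quick way to see spline-type collocation for a boundary value problem: scipy's solve_bvp (a fourth-order collocation method) applied to a linear test problem with a known solution.

        import numpy as np
        from scipy.integrate import solve_bvp

        # u'' = -pi^2 sin(pi x), u(0) = u(1) = 0, exact solution sin(pi x).
        def ode(x, y):
            return np.vstack([y[1], -np.pi**2 * np.sin(np.pi * x)])

        def bc(ya, yb):
            return np.array([ya[0], yb[0]])

        x = np.linspace(0.0, 1.0, 11)                   # deliberately coarse mesh
        sol = solve_bvp(ode, bc, x, np.zeros((2, x.size)))
        xf = np.linspace(0.0, 1.0, 201)
        print("max error:", np.abs(sol.sol(xf)[0] - np.sin(np.pi * xf)).max())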

  16. Membrane covered duct lining for high-frequency noise attenuation: prediction using a Chebyshev collocation method.

    PubMed

    Huang, Lixi

    2008-11-01

    A spectral method of Chebyshev collocation with domain decomposition is introduced for linear interaction between sound and structure in a duct lined with flexible walls backed by cavities with or without a porous material. The spectral convergence is validated by a one-dimensional problem with a closed-form analytical solution, and is then extended to the two-dimensional configuration and compared favorably against a previous method based on the Fourier-Galerkin procedure and a finite element modeling. The nonlocal, exact Dirichlet-to-Neumann boundary condition is embedded in the domain decomposition scheme without imposing extra computational burden. The scheme is applied to the problem of high-frequency sound absorption by duct lining, which is normally ineffective when the wavelength is comparable with or shorter than the duct height. When a tensioned membrane covers the lining, however, it scatters the incident plane wave into higher-order modes, which then penetrate the duct lining more easily and get dissipated. For the frequency range of f=0.3-3 studied here, f=0.5 being the first cut-on frequency of the central duct, the membrane cover is found to offer an additional 0.9 dB attenuation per unit axial distance equal to half of the duct height.

  17. Estimation of river pollution source using the space-time radial basis collocation method

    NASA Astrophysics Data System (ADS)

    Li, Zi; Mao, Xian-Zhong; Li, Tak Sing; Zhang, Shiyan

    2016-02-01

    River contaminant source identification problems can be formulated as an inverse model to estimate the missing source release history from the observed contaminant plume. In this study, the identification of pollution sources in rivers, where strong advection is dominant, is solved by the global space-time radial basis collocation method (RBCM). To search for the optimal shape parameter and scaling factor, which strongly determine the accuracy of the RBCM, a new cost function based on the residual errors of not only the observed data but also the specified governing equation and the initial and boundary conditions was constructed for the k-fold cross-validation technique. The performance of three global radial basis functions (Hardy's multiquadric, the inverse multiquadric, and the Gaussian) was also compared in the test cases. The numerical results illustrate that the new cost function is a good indicator in the search for near-optimal solutions. Application to a real polluted river shows that the source release history is reasonably recovered, demonstrating that the RBCM with k-fold cross-validation is a powerful tool for source identification problems in advection-dominated rivers.
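
    A toy k-fold cross-validation search for a multiquadric shape parameter on a plain interpolation problem; the paper's cost function additionally penalizes residuals of the governing equation and the initial and boundary conditions, which this sketch omits.

        import numpy as np

        rng = np.random.default_rng(0)
        xs = np.sort(rng.uniform(0.0, 1.0, 40))
        ys = np.sin(6.0 * xs)

        def predict(xtr, ytr, xte, c):
            """Multiquadric RBF interpolation with shape parameter c."""
            K = lambda a, b: np.sqrt((a[:, None] - b[None, :]) ** 2 + c**2)
            lam = np.linalg.lstsq(K(xtr, xtr), ytr, rcond=None)[0]
            return K(xte, xtr) @ lam

        def cv_error(c, k=5):
            folds = np.array_split(np.arange(xs.size), k)
            errs = []
            for f in folds:
                train = np.setdiff1d(np.arange(xs.size), f)
                pred = predict(xs[train], ys[train], xs[f], c)
                errs.append(np.mean((pred - ys[f]) ** 2))
            return np.mean(errs)

        grid = np.linspace(0.05, 1.0, 20)
        best = grid[np.argmin([cv_error(c) for c in grid])]
        print("selected shape parameter:", round(float(best), 3))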

  18. The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data.

    SciTech Connect

    Webster, Clayton; Tempone, Raul; Nobile, Fabio

    2007-12-01

    This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.

  19. An Efficient Data-worth Analysis Framework via Probabilistic Collocation Method Based Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Xue, L.; Dai, C.; Zhang, D.; Guadagnini, A.

    2015-12-01

    It is critical to predict a contaminant plume in an aquifer under uncertainty, which can help assess environmental risk and design rational management strategies. An accurate prediction of a contaminant plume requires the collection of data to help characterize the system. Because financial resources are limited, one should estimate the expected value of the data from each candidate monitoring scheme before it is carried out. Data-worth analysis is believed to be an effective approach to identifying the value of data in such problems: it quantifies the uncertainty reduction under the assumption that the plausible data have been collected. However, it is difficult to apply data-worth analysis to a dynamic simulation of a contaminant transport model owing to the large number of inverse-modeling runs it requires. In this study, a novel, efficient data-worth analysis framework is proposed by developing the Probabilistic Collocation Method based Ensemble Kalman Filter (PCKF). The PCKF constructs a polynomial chaos expansion surrogate model to replace the original complex numerical model, so the inverse modeling can be performed on the proxy rather than on the original model. An illustrative example, considering the dynamic change of the contaminant concentration, is employed to demonstrate the proposed approach. The results reveal that schemes with different sampling frequencies, monitoring network locations, and prior data content have a significant impact on the uncertainty reduction in the estimation of the contaminant plume. The proposed approach is validated to provide reasonable estimates of the value of data from the various schemes.

  20. A self-consistent estimate for linear viscoelastic polycrystals with internal variables inferred from the collocation method

    NASA Astrophysics Data System (ADS)

    Vu, Q. H.; Brenner, R.; Castelnau, O.; Moulinec, H.; Suquet, P.

    2012-03-01

    The correspondence principle is customarily used with the Laplace-Carson transform technique to tackle the homogenization of linear viscoelastic heterogeneous media. The main drawback of this method lies in the fact that the whole stress and strain histories have to be considered to compute the mechanical response of the material during a given macroscopic loading. Following a remark of Mandel (1966 Mécanique des Milieux Continus (Paris, France: Gauthier-Villars)), Ricaud and Masson (2009 Int. J. Solids Struct. 46 1599-1606) have shown the equivalence between the collocation method used to invert Laplace-Carson transforms and an internal variables formulation. In this paper, this new method is developed for the case of polycrystalline materials with general anisotropic properties for local and macroscopic behavior. Applications are provided for the case of constitutive relations accounting for glide of dislocations on particular slip systems. It is shown that the method yields accurate results that perfectly match the standard collocation method and reference full-field results obtained with an FFT numerical scheme. The formulation is then extended to the case of time- and strain-dependent viscous properties, leading to the incremental collocation method (ICM) that can be solved efficiently by a step-by-step procedure. Specifically, the introduction of isotropic and kinematic hardening at the slip system scale is considered.
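
    The equivalence exploited here can be illustrated in miniature: collocating a Dirichlet (Prony) series at discrete times turns the transform inversion into a small linear solve, and each exponential term then behaves as one internal variable with first-order kinetics. The relaxation function and collocation times below are hypothetical.

      import numpy as np

      def g(t):                             # "exact" relaxation function to approximate
          return 0.3 + 0.7 * np.exp(-t / 2.0)

      taus = np.logspace(-2, 2, 8)          # fixed relaxation times (internal variables)
      t_c = np.logspace(-2, 2, 40)          # collocation times

      # g(t) ~ g_inf + sum_k a_k exp(-t / tau_k): solve for the coefficients
      A = np.hstack([np.ones((t_c.size, 1)),
                     np.exp(-t_c[:, None] / taus[None, :])])
      coef, *_ = np.linalg.lstsq(A, g(t_c), rcond=None)

      t_test = np.array([0.05, 0.5, 5.0, 50.0])
      approx = coef[0] + np.exp(-t_test[:, None] / taus[None, :]) @ coef[1:]
      print(np.max(np.abs(approx - g(t_test))))   # small collocation residual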

  1. Using the Stochastic Collocation Method for the Uncertainty Quantification of Drug Concentration Due to Depot Shape Variability

    PubMed Central

    Preston, J. Samuel; Tasdizen, Tolga; Terry, Christi M.; Cheung, Alfred K.

    2010-01-01

    Numerical simulations entail modeling assumptions that impact outcomes. Therefore, characterizing, in a probabilistic sense, the relationship between the variability of model selection and the variability of outcomes is important. Under certain assumptions, the stochastic collocation method offers a computationally feasible alternative to traditional Monte Carlo approaches for assessing the impact of model and parameter variability. We propose a framework that combines component shape parameterization with the stochastic collocation method to study the effect of drug depot shape variability on the outcome of drug diffusion simulations in a porcine model. We use realistic geometries segmented from MR images and employ level-set techniques to create two alternative univariate shape parameterizations. We demonstrate that once the underlying stochastic process is characterized, quantification of the introduced variability is quite straightforward and provides an important step in the validation and verification process. PMID:19272865
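
    The mechanics can be sketched for a single uncertain parameter: evaluate the model once per collocation node of a Gauss-Hermite rule and assemble statistics from the weighted outputs. The scalar "model" below is a hypothetical stand-in for a drug-diffusion simulation driven by one Gaussian shape parameter.

      import numpy as np
      from numpy.polynomial.hermite_e import hermegauss

      def model(xi):                 # stand-in for an expensive simulation
          return np.exp(0.3 * xi) / (1.0 + xi**2)

      nodes, weights = hermegauss(20)        # probabilists' Gauss-Hermite rule
      weights = weights / weights.sum()      # normalize against the N(0,1) density

      vals = model(nodes)                    # one deterministic solve per node
      mean = np.dot(weights, vals)
      var = np.dot(weights, (vals - mean)**2)
      print("mean:", mean, "variance:", var)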

  2. Examination of the Circle Spline Routine

    NASA Technical Reports Server (NTRS)

    Dolin, R. M.; Jaeger, D. L.

    1985-01-01

    The Circle Spline routine is currently being used for generating both two- and three-dimensional spline curves. It was developed for use in ESCHER, a mesh generating routine written to provide a computationally simple and efficient method for building meshes along curved surfaces. Circle Spline is a parametric linear blending spline. Because many computerized machining operations involve circular shapes, the Circle Spline is well suited for both the design and manufacturing processes and shows promise as an alternative to the spline methods currently supported by the Initial Graphics Exchange Specification (IGES).

  3. Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping

    2003-05-01

    In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality for better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. The resulting mean gray to white matter CBF ratio from the semi-quantitative analysis was 2.10 +/- 0.34, and the ratio evaluated from perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.

  4. Ray-tracing method for creeping waves on arbitrarily shaped nonuniform rational B-splines surfaces.

    PubMed

    Chen, Xi; He, Si-Yuan; Yu, Ding-Feng; Yin, Hong-Cheng; Hu, Wei-Dong; Zhu, Guo-Qiang

    2013-04-01

    An accurate creeping ray-tracing algorithm is presented in this paper to determine the tracks of creeping waves (or creeping rays) on arbitrarily shaped free-form parametric surfaces [nonuniform rational B-splines (NURBS) surfaces]. The main challenge in calculating the surface diffracted fields on NURBS surfaces is due to the difficulty in determining the geodesic paths along which the creeping rays propagate. On one single parametric surface patch, the geodesic paths need to be computed by solving the geodesic equations numerically. Furthermore, realistic objects are generally modeled as the union of several connected NURBS patches. Due to the discontinuity of the parameter between the patches, it is more complicated to compute geodesic paths on several connected patches than on one single patch. Thus, a creeping ray-tracing algorithm is presented in this paper to compute the geodesic paths of creeping rays on the complex objects that are modeled as the combination of several NURBS surface patches. In the algorithm, the creeping ray tracing on each surface patch is performed by solving the geodesic equations with a Runge-Kutta method. When the creeping ray propagates from one patch to another, a transition method is developed to handle the transition of the creeping ray tracing across the border between the patches. This creeping ray-tracing algorithm can meet practical requirements because it can be applied to the objects with complex shapes. The algorithm can also extend the applicability of NURBS for electromagnetic and optical applications. The validity and usefulness of the algorithm can be verified from the numerical results.
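
    The patch-level computation can be sketched by integrating the geodesic equations with a fixed-step fourth-order Runge-Kutta scheme; for brevity the sketch uses an analytic unit-sphere parameterization (theta, phi) instead of a NURBS patch, and the launch point and direction are arbitrary.

      import numpy as np

      def rhs(y):
          # y = [theta, phi, dtheta, dphi]; geodesic equations on the unit sphere
          th, ph, dth, dph = y
          return np.array([dth, dph,
                           np.sin(th) * np.cos(th) * dph**2,
                           -2.0 * (np.cos(th) / np.sin(th)) * dth * dph])

      def rk4(y, h, steps):
          path = [y]
          for _ in range(steps):
              k1 = rhs(y); k2 = rhs(y + 0.5 * h * k1)
              k3 = rhs(y + 0.5 * h * k2); k4 = rhs(y + h * k3)
              y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
              path.append(y)
          return np.array(path)

      # launch a creeping ray from the "equator" at an oblique angle
      y0 = np.array([np.pi / 2, 0.0, 0.3, 1.0])
      path = rk4(y0, h=0.01, steps=500)
      print(path[-1][:2])   # terminal (theta, phi) of the geodesic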

  5. Algebraic grid adaptation method using non-uniform rational B-spline surface modeling

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, B. K.

    1992-01-01

    An algebraic adaptive grid system based on the equidistribution law and utilizing Non-Uniform Rational B-Spline (NURBS) surface representation for redistribution is presented. A weight function, formed from a properly weighted Boolean sum of various flow field characteristics, is developed. Computational examples are presented to demonstrate the success of this technique.
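
    In one dimension the equidistribution law places grid points so that each cell carries an equal share of the integrated weight function; a minimal sketch, with a hypothetical weight concentrating points near a steep gradient, is:

      import numpy as np

      def equidistribute(x, w, n_new):
          # cumulative integral of the weight on the old grid (trapezoid rule)
          W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
          targets = np.linspace(0.0, W[-1], n_new)   # equal shares of total weight
          return np.interp(targets, W, x)            # invert the cumulative map

      x = np.linspace(0.0, 1.0, 201)
      w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5)**2)   # weight peaks at a "shock"
      x_new = equidistribute(x, w, 41)
      print(np.diff(x_new).min(), np.diff(x_new).max())  # clustered vs coarse cells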

  6. Interchangeable spline reference guide

    SciTech Connect

    Dolin, R.M.

    1994-05-01

    The WX-Division Integrated Software Tools (WIST) Team evolved from two previous committees. First was the W78 Solid Modeling Pilot Project's Spline Subcommittee, which later evolved into the WX-Division Spline Committee. The mission of the WIST team is to investigate current CAE engineering processes relating to complex geometry and to develop methods for improving those processes. Specifically, the WIST team is developing technology that allows the Division to use multiple spline representations. We are also updating the contour system (CONSYS) data base to take full advantage of the Division's expanding electronic engineering process. Both of these efforts involve developing interfaces to commercial CAE systems and writing new software. The WIST team is comprised of members from WX-11, -12 and -13. This "cross-functional" approach to software development is somewhat new in the Division, so an effort is being made to formalize our processes and assure quality at each phase of development. Chapter one represents a theory manual and is one phase of the formal process. The theory manual is followed by a software requirements document, a specification document, and software verification and validation documents. The purpose of this guide is to present the theory underlying the interchangeable spline technology and application. Verification and validation test results are also presented for proof of principle.

  7. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test. PMID:22291550
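
    The building block of MARS can be shown in a toy regression: hinge functions max(0, x - t) at fixed knots are combined by least squares. Real MARS chooses knots and interactions adaptively with forward/backward passes, and the study applies the model to classification, so this is only an illustration of the basis.

      import numpy as np

      rng = np.random.default_rng(2)
      x = np.sort(rng.uniform(0.0, 1.0, 200))
      y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

      knots = np.linspace(0.1, 0.9, 9)
      # MARS-style basis: intercept, x, and one hinge function per knot
      B = np.column_stack([np.ones_like(x), x] +
                          [np.maximum(0.0, x - t) for t in knots])
      coef, *_ = np.linalg.lstsq(B, y, rcond=None)
      resid = y - B @ coef
      print("RMS residual:", np.sqrt(np.mean(resid**2)))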

  8. Numerical solutions and error estimations for the space fractional diffusion equation with variable coefficients via Fibonacci collocation method.

    PubMed

    Bahşı, Ayşe Kurt; Yalçınbaş, Salih

    2016-01-01

    In this study, the Fibonacci collocation method based on Fibonacci polynomials is presented to solve the space fractional diffusion equation with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; applying it to the fractional derivative reduces the equation to a set of linear algebraic equations. Also, an error estimation algorithm based on the residual functions is presented for this method, and the approximate solutions are improved by using it. If the exact solution of the problem is not known, the absolute error function of the problem can be approximately computed by using the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions which are more efficient than direct numerical solutions. Numerical examples, figures, tables, and comparisons are presented to show the efficiency and usability of the proposed method. PMID:27610294

  9. A Consistent Projection Method for Multi-Fluid Flows with Continuous Surface Force on a Collocated Mesh

    NASA Astrophysics Data System (ADS)

    Ni, M. J.

    2010-03-01

    A comparison study of algorithms on a rectangular collocated mesh is conducted for the variable-density Navier-Stokes equations with continuous surface forces. The algorithms include the original projection method (AI-TI), with the surface force calculated only in the predictor steps; the so-called balanced-force projection method (AII-TIV), with the surface force and the pressure gradient calculated together; and a consistent projection method (AIII-TVII) developed in this paper. Detailed comparisons are also conducted among the techniques for calculating the pressure gradient and the surface force at a cell center. The consistent projection method updates the velocity at a cell center in a very different way from the balanced-force projection formula: a conservative interpolation is used to update the velocity at a cell center, which is further used to obtain the sum of the pressure gradient and the surface force.

  10. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle a physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and, as a numerical example, provides the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.

  11. Mercury vapor air-surface exchange measured by collocated micrometeorological and enclosure methods - Part I: Data comparability and method characteristics

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2015-01-01

    Reliable quantification of the air-biosphere exchange flux of elemental mercury vapor (Hg0) is crucial for understanding the global biogeochemical cycle of mercury. However, there has not been a standard analytical protocol for flux quantification, and little attention has been devoted to characterizing the temporal variability and comparability of fluxes measured by different methods. In this study, we deployed a collocated set of micrometeorological (MM) and dynamic flux chamber (DFC) measurement systems to quantify Hg0 flux over bare soil and low standing crop in an agricultural field. The techniques include relaxed eddy accumulation (REA), modified Bowen ratio (MBR), aerodynamic gradient (AGM) as well as dynamic flux chambers of traditional (TDFC) and novel (NDFC) designs. The five systems and their measured fluxes were cross-examined with respect to magnitude, temporal trend and correlation with environmental variables. Fluxes measured by the MM and DFC methods showed distinct temporal trends: the former exhibited a highly dynamic temporal variability while the latter had much more gradual temporal features. The diurnal characteristics reflected the difference in the fundamental processes driving the measurements. The correlations between NDFC and TDFC fluxes and between MBR and AGM fluxes were significant (R>0.8, p<0.05), but the correlation between DFC and MM fluxes was weak to moderate (R=0.1-0.5). Statistical analysis indicated that the medians of turbulent fluxes estimated by the three independent MM techniques were not significantly different. Cumulative flux measured by TDFC was considerably lower (42% of AGM and 31% of MBR fluxes) while those measured by NDFC, AGM and MBR were similar (<10% difference). This suggests that incorporating an atmospheric turbulence property such as friction velocity for correcting the DFC-measured flux effectively bridged the gap between the Hg0 fluxes measured by enclosure and MM techniques. Cumulative flux measured by REA was ~60% higher than the gradient-based fluxes.

  12. Schur-decomposition for 3D matrix equations and its application in solving radiative discrete ordinates equations discretized by Chebyshev collocation spectral method

    SciTech Connect

    Li, Ben-Wen; Tian, Shuai; Sun, Ya-Song; Hu, Zhang-Mao

    2010-02-20

    The Schur decomposition for three-dimensional matrix equations is developed and used to directly solve the radiative discrete ordinates equations discretized by the Chebyshev collocation spectral method. Three methods are presented: spectral methods based on 2D and 3D matrix equation solvers, respectively, and the standard discrete ordinates method. The numerical results show the good accuracy of the spectral methods based on direct solvers. CPU time costs at different resolutions are compared for the three methods, implemented separately in MATLAB and Fortran 95. The results show that the CPU time cost of the Chebyshev collocation spectral method with the 3D Schur decomposition solver is the lowest; only about one-thirtieth to one-fiftieth of the CPU time of the standard discrete ordinates method is needed.

  13. A pseudo-spectral collocation method applied to the problem of convective diffusive transport in fluids subject to unsteady residual accelerations

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan; Ouazzani, Jalil

    1989-01-01

    The problem of determining the sensitivity of Bridgman-Stockbarger directional solidification experiments to residual accelerations of the type associated with spacecraft in low earth orbit is analyzed numerically using a pseudo-spectral collocation method. The approach employs a novel iterative scheme combining the method of artificial compressibility and a generalized ADI method. The results emphasize the importance of considering residual accelerations and of carefully selecting the operating conditions in order to take full advantage of the low gravity conditions.

  14. A nonclassical Radau collocation method for solving the Lane-Emden equations of the polytropic index 4.75 ≤ α < 5

    NASA Astrophysics Data System (ADS)

    Tirani, M. D.; Maleki, M.; Kajani, M. T.

    2014-11-01

    A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and are utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.

  15. Small and large deformation analysis with the p- and B-spline versions of the Finite Cell Method

    NASA Astrophysics Data System (ADS)

    Schillinger, Dominik; Ruess, Martin; Zander, Nils; Bazilevs, Yuri; Düster, Alexander; Rank, Ernst

    2012-10-01

    The Finite Cell Method (FCM) is an embedded domain method, which combines the fictitious domain approach with high-order finite elements, adaptive integration, and weak imposition of unfitted Dirichlet boundary conditions. For smooth problems, FCM has been shown to achieve exponential rates of convergence in energy norm, while its structured cell grid guarantees simple mesh generation irrespective of the geometric complexity involved. The present contribution first unhinges the FCM concept from a special high-order basis. Several benchmarks of linear elasticity and a complex proximal femur bone with inhomogeneous material demonstrate that for small deformation analysis, FCM works equally well with basis functions of the p-version of the finite element method or high-order B-splines. Turning to large deformation analysis, it is then illustrated that a straightforward geometrically nonlinear FCM formulation leads to the loss of uniqueness of the deformation map in the fictitious domain. Therefore, a modified FCM formulation is introduced, based on repeated deformation resetting, which assumes for the fictitious domain the deformation-free reference configuration after each Newton iteration. Numerical experiments show that this intervention allows for stable nonlinear FCM analysis, preserving the full range of advantages of linear elastic FCM, in particular exponential rates of convergence. Finally, the weak imposition of unfitted Dirichlet boundary conditions via the penalty method, the robustness of FCM under severe mesh distortion, and the large deformation analysis of a complex voxel-based metal foam are addressed.

  16. Application of Collocation Spectral Method for Irregular Convective-Radiative Fins with Temperature-Dependent Internal Heat Generation and Thermal Properties

    NASA Astrophysics Data System (ADS)

    Sun, Ya-Song; Ma, Jing; Li, Ben-Wen

    2015-11-01

    A collocation spectral method (CSM) is developed to solve the fin heat transfer problem in triangular, trapezoidal, exponential, concave parabolic, and convex geometries. In the thermal process of fin heat transfer, the fin dissipates heat to the environment by convection and radiation; internal heat generation, thermal conductivity, heat transfer coefficient, and surface emissivity are functions of temperature; ambient fluid temperature and radiative sink temperature are considered to be nonzero. The temperature in the fin is approximated by Chebyshev polynomials at spectral collocation points; thus, the differential form of the energy equation is transformed into the matrix form of an algebraic equation. In order to test the efficiency and accuracy of the developed method, five types of convective-radiative fins are examined. Results obtained by the CSM are assessed by comparison with available results in the literature. These comparisons indicate that the CSM can be recommended as a good option to simulate and predict the thermal performance of convective-radiative fins.
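
    A minimal sketch of the collocation machinery for the simplest constant-property straight fin, theta'' = m^2 theta on [0, 1] with theta(0) = 1 and an insulated tip theta'(1) = 0, using a Chebyshev differentiation matrix; the temperature-dependent properties treated in the paper would make the resulting algebraic system nonlinear.

      import numpy as np

      def cheb(N):
          # Chebyshev-Gauss-Lobatto points and differentiation matrix (Trefethen)
          x = np.cos(np.pi * np.arange(N + 1) / N)
          c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0)**np.arange(N + 1)
          X = np.tile(x, (N + 1, 1)).T
          D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
          D -= np.diag(D.sum(axis=1))
          return D, x

      N, m = 24, 2.0
      D, x = cheb(N)
      xi = (x + 1.0) / 2.0          # map [-1, 1] -> [0, 1]; derivatives scale by 2
      D1 = 2.0 * D
      A = D1 @ D1 - m**2 * np.eye(N + 1)
      b = np.zeros(N + 1)
      A[-1] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0   # theta(0) = 1 at x = -1
      A[0] = D1[0]; b[0] = 0.0                    # theta'(1) = 0 at x = +1
      theta = np.linalg.solve(A, b)

      exact = np.cosh(m * (1.0 - xi)) / np.cosh(m)
      print("max error:", np.abs(theta - exact).max())   # spectral accuracy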

  17. Approximation and error estimation in high dimensional space for stochastic collocation methods on arbitrary sparse samples

    SciTech Connect

    Archibald, Richard K; Deiterding, Ralf; Hauck, Cory D; Jakeman, John D; Xiu, Dongbin

    2012-01-01

    We have developed a fast method that can capture piecewise smooth functions in high dimensions with high order and low computational cost. This method can be used for both approximation and error estimation of stochastic simulations where the computations can either be guided or come from a legacy database.

  18. Quantitative structure-activity relationship study on BTK inhibitors by modified multivariate adaptive regression spline and CoMSIA methods.

    PubMed

    Xu, A; Zhang, Y; Ran, T; Liu, H; Lu, S; Xu, J; Xiong, X; Jiang, Y; Lu, T; Chen, Y

    2015-01-01

    Bruton's tyrosine kinase (BTK) plays a crucial role in B-cell activation and development, and has emerged as a new molecular target for the treatment of autoimmune diseases and B-cell malignancies. In this study, two- and three-dimensional quantitative structure-activity relationship (2D and 3D-QSAR) analyses were performed on a series of pyridine and pyrimidine-based BTK inhibitors by means of genetic algorithm optimized multivariate adaptive regression spline (GA-MARS) and comparative molecular similarity index analysis (CoMSIA) methods. Here, we propose a modified MARS algorithm to develop 2D-QSAR models. The top ranked models showed satisfactory statistical results (2D-QSAR: Q² = 0.884, r² = 0.929, r²pred = 0.878; 3D-QSAR: q² = 0.616, r² = 0.987, r²pred = 0.905). Key descriptors selected by 2D-QSAR were in good agreement with the conclusions of 3D-QSAR, and the 3D-CoMSIA contour maps facilitated interpretation of the structure-activity relationship. A new molecular database was generated by molecular fragment replacement (MFR) and further evaluated with GA-MARS and CoMSIA prediction. Twenty-five pyridine and pyrimidine derivatives as novel potential BTK inhibitors were finally selected for further study. These results also demonstrated that our method can be a very efficient tool for the discovery of novel potent BTK inhibitors.

  19. Incidental Learning of Collocation

    ERIC Educational Resources Information Center

    Webb, Stuart; Newton, Jonathan; Chang, Anna

    2013-01-01

    This study investigated the effects of repetition on the learning of collocation. Taiwanese university students learning English as a foreign language simultaneously read and listened to one of four versions of a modified graded reader that included different numbers of encounters (1, 5, 10, and 15 encounters) with a set of 18 target collocations.…

  20. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered due to the computational cost associated with numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional

  1. A Parameter Estimation Method for Biological Systems modelled by ODE/DDE Models Using Spline Approximation and Differential Evolution Algorithm.

    PubMed

    Zhan, Choujun; Situ, Wuchao; Yeung, Lam Fat; Tsang, Peter Wai-Ming; Yang, Genke

    2014-01-01

    The inverse problem of identifying unknown parameters of dynamical biological systems of known structure, modelled by ordinary differential equations or delay differential equations, from experimental data is treated in this paper. A two-stage approach is adopted: first, by combining spline theory and Nonlinear Programming (NLP), the parameter estimation problem is formulated as an optimization problem with only algebraic constraints; then, a new differential evolution (DE) algorithm is proposed to find a feasible solution. The approach is designed to handle problems of realistic size with noisy observation data. Three cases are studied to evaluate the performance of the proposed algorithm: two are based on benchmark models with a priori determined structure and parameters; the other is a particular biological system with unknown model structure, for which only a set of observation data is available and a nominal model is adopted for the identification. All the test systems were successfully identified using a reasonable amount of experimental data within an acceptable computation time. Experimental evaluation reveals that the proposed method is capable of fast estimation of the unknown parameters with good precision.
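
    A compact sketch of the two-stage idea, under simplifying assumptions: a logistic ODE with one unknown rate, a smoothing spline standing in for the paper's spline formulation, and SciPy's stock differential evolution rather than the authors' modified DE algorithm.

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 10.0, 60)
      r_true = 0.8
      x_true = 1.0 / (1.0 + 9.0 * np.exp(-r_true * t))   # logistic solution, K = 1
      data = x_true + 0.02 * rng.standard_normal(t.size)

      # stage 1: smooth the data so the state and its derivative are algebraic
      spl = UnivariateSpline(t, data, k=4, s=t.size * 0.02**2)
      x_hat, dx_hat = spl(t), spl.derivative()(t)

      def residual(p):
          r = p[0]            # stage 2: match the ODE x' = r x (1 - x)
          return np.sum((dx_hat - r * x_hat * (1.0 - x_hat))**2)

      result = differential_evolution(residual, bounds=[(0.01, 5.0)], seed=3)
      print("estimated r:", result.x[0], "(true 0.8)")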

  2. Conforming Chebyshev spectral collocation methods for the solution of laminar flow in a constricted channel

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1990-01-01

    The numerical simulation of steady planar two-dimensional, laminar flow of an incompressible fluid through an abruptly contracting channel using spectral domain decomposition methods is described. The key features of the method are the decomposition of the flow region into a number of rectangular subregions and spectral approximations which are pointwise C(1) continuous across subregion interfaces. Spectral approximations to the solution are obtained for Reynolds numbers in the range 0 to 500. The size of the salient corner vortex decreases as the Reynolds number increases from 0 to around 45. As the Reynolds number is increased further the vortex grows slowly. A vortex is detected downstream of the contraction at a Reynolds number of around 175 that continues to grow as the Reynolds number is increased further.

  3. A Locally Modal B-Spline Based Full-Vector Finite-Element Method with PML for Nonlinear and Lossy Plasmonic Waveguide

    NASA Astrophysics Data System (ADS)

    Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan

    2016-09-01

    In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, the propagation of waves in various kinds of linear, nonlinear, lossless and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only computationally more accurate and efficient than the conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.

  4. Fujiwhara interaction of tropical cyclone scale vortices using a weighted residual collocation method

    NASA Astrophysics Data System (ADS)

    Walsh, Raymond P.; Alam, Jahrul M.

    2016-09-01

    The fundamental interaction between tropical cyclones was investigated through a series of water tank experiments by Fujiwhara [20, 21, 22]. However, a complete understanding of tropical cyclones remains an open research challenge despite numerous investigations through measurements with aircraft and satellites, as well as numerical simulations. This article presents a computational model for simulating the interaction between cyclones. The proposed numerical method is presented briefly, where the time integration is performed by projecting the discrete system onto a Krylov subspace. The method filters the large-scale fluid dynamics using a multiresolution approximation, and the unresolved dynamics is modeled with a Smagorinsky-type subgrid-scale parameterization scheme. Numerical experiments with Fujiwhara interactions are considered to verify modeling accuracy. An excellent agreement between the present simulation and a reference simulation at Re = 5000 is demonstrated. At Re = 37440, the kinetic energy of the cyclones is seen to consolidate into larger scales with a concurrent enstrophy cascade, suggesting a steady increase of energy-containing scales, a phenomenon that is typical in two-dimensional turbulence theory. The primary results of this article suggest a novel avenue for addressing some of the computational challenges of mesoscale atmospheric circulations.

  5. Gear Spline Coupling Program

    SciTech Connect

    Guo, Yi; Errichello, Robert

    2013-08-29

    An analytical model is developed to evaluate the design of a spline coupling. For a given torque and shaft misalignment, the model calculates the number of teeth in contact, tooth loads, stiffnesses, stresses, and safety factors. The analytic model provides essential spline coupling design and modeling information and could be easily integrated into gearbox design and simulation tools.

  6. Computer program for fitting low-order polynomial splines by method of least squares

    NASA Technical Reports Server (NTRS)

    Smith, P. J.

    1972-01-01

    FITLOS is a computer program which implements a new curve-fitting technique. The main program reads input data, calls appropriate subroutines for curve fitting, performs a statistical analysis, and writes output data. The method was devised as a result of the need to suppress noise in the calibration of multiplier phototube capacitors.

  7. Detection of defects on apple using B-spline lighting correction method

    NASA Astrophysics Data System (ADS)

    Li, Jiangbo; Huang, Wenqian; Guo, Zhiming

    To effectively extract defective areas on fruits, the uneven intensity distribution produced by the lighting system or by parts of the vision system must be corrected in the image. A methodology was used to convert the non-uniform intensity distribution on spherical objects into a uniform intensity distribution. A basically plane image, with the defective area having a lower gray level than this plane, was obtained using the proposed algorithms; the defective areas can then be easily extracted by a global threshold value. The experimental results, with a 94.0% classification rate on 100 apple images, showed that the proposed algorithm is simple and effective. The proposed method can be applied to other spherical fruits.
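
    A sketch of the correction idea under stated assumptions: a synthetic radial brightness pattern, a smoothing bivariate spline in place of the paper's B-spline scheme, and illustrative smoothing and threshold values.

      import numpy as np
      from scipy.interpolate import RectBivariateSpline

      # synthetic "apple" image: bright center, darker rim, plus one small defect
      n = 64
      u = np.linspace(-1.0, 1.0, n)
      X, Y = np.meshgrid(u, u, indexing="ij")
      img = 200.0 * np.sqrt(1.0 - 0.4 * (X**2 + Y**2))
      img[30:36, 30:36] -= 40.0            # defect: locally lower gray level

      # heavy smoothing so the spline tracks illumination but ignores the defect
      bg = RectBivariateSpline(u, u, img, kx=3, ky=3, s=7.0e4)(u, u)
      flat = img - bg                       # approximately "plane" image
      defect = flat < -15.0                 # illustrative global threshold
      print("defect pixels found:", int(defect.sum()))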

  8. An efficient, high-order probabilistic collocation method on sparse grids for three-dimensional flow and solute transport in randomly heterogeneous porous media

    SciTech Connect

    Lin, Guang; Tartakovsky, Alexandre M.

    2009-05-01

    In this study, a probabilistic collocation method (PCM) on sparse grids was used to solve stochastic equations describing flow and transport in three-dimensional saturated, randomly heterogeneous porous media. Karhunen-Loève (KL) decomposition was used to represent the three-dimensional log hydraulic conductivity Y = ln Ks. The hydraulic head h and average pore velocity v were obtained by solving the three-dimensional continuity equation coupled with Darcy's law with a random hydraulic conductivity field. The concentration was computed by solving a three-dimensional stochastic advection-dispersion equation with the stochastic average pore velocity v computed from Darcy's law. PCM is an extension of the generalized polynomial chaos (gPC) that couples gPC with probabilistic collocation. By using sparse grid points, PCM can handle a random process with a large number of random dimensions at relatively low computational cost compared to full tensor products. Monte Carlo (MC) simulations were also conducted to verify the accuracy of the PCM. Comparison of the MC and PCM results for the mean and standard deviation of concentration shows that the PCM approach is computationally more efficient than Monte Carlo simulation. Unlike the conventional moment-equation approach, there is no limitation on the amplitude of random perturbation in PCM. Furthermore, PCM on sparse grids can efficiently simulate solute transport in randomly heterogeneous porous media with large variances.
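
    A sketch of the Karhunen-Loève step in one dimension, assuming an exponential covariance: the eigenpairs of the covariance matrix on a grid are truncated, and realizations of Y = ln Ks are synthesized from a handful of independent standard-normal variables (grid quadrature weights are ignored for brevity).

      import numpy as np

      n, L, sigma2, corr_len = 200, 1.0, 1.0, 0.2
      x = np.linspace(0.0, L, n)
      C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

      vals, vecs = np.linalg.eigh(C)            # eigenpairs, ascending order
      vals, vecs = vals[::-1], vecs[:, ::-1]    # reorder to descending

      k = 20                                    # truncation: k random dimensions
      rng = np.random.default_rng(4)
      xi = rng.standard_normal(k)
      Y = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)   # one realization of ln Ks

      print("variance captured:", vals[:k].sum() / vals.sum())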

  9. Uncertainty Quantification in Dynamic Simulations of Large-scale Power System Models using the High-Order Probabilistic Collocation Method on Sparse Grids

    SciTech Connect

    Lin, Guang; Elizondo, Marcelo A.; Lu, Shuai; Wan, Xiaoliang

    2014-01-01

    This paper proposes a probabilistic collocation method (PCM) to quantify the uncertainties in dynamic simulations of power systems. The approach was tested on a single-machine-infinite-bus system and on the over 15,000-bus Western Electricity Coordinating Council (WECC) system. Compared to the classic Monte Carlo (MC) method, the proposed PCM applies the Smolyak algorithm to reduce the number of simulations that have to be performed, so the computational cost can be greatly reduced. The algorithm and procedures are described in the paper, and comparisons with the MC method are made on the single-machine system as well as the WECC system. The simulation results show that with PCM only a small number of sparse grid points need to be sampled, even when dealing with systems with a relatively large number of uncertain parameters. PCM is, therefore, computationally more efficient than the MC method.

  10. RATIONAL SPLINE SUBROUTINES

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1994-01-01

    Scientific data often contains random errors that make plotting and curve-fitting difficult. The Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to a flexible, smooth representation of experimental data. The user sets the conditions for each consecutive pair of knots (knots are user-defined divisions in the data set): apply no tension; apply fixed tension; or determine tension with a tension adjustment algorithm. The user also selects the number of knots, the knot abscissas, and the allowed maximum deviations from line segments. The selection of these quantities depends on the actual data and on the requirements of a particular application. This program differs from the usual spline under tension in that it allows the user to specify different tension values between each adjacent pair of knots rather than a constant tension over the entire data range. The subroutines use an automatic adjustment scheme that varies the tension parameter for each interval until the maximum deviation of the spline from the line joining the knots is less than or equal to a user-specified amount. This procedure frees the user from the drudgery of adjusting individual tension parameters while still giving control over the local behavior of the spline. The Rational Spline program was written completely in FORTRAN for implementation on a CYBER 850 operating under NOS. It has a central memory requirement of approximately 1500 words. The program was released in 1988.

  11. Lagrange interpolation and modified cubic B-spline differential quadrature methods for solving hyperbolic partial differential equations with Dirichlet and Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Jiwari, Ram

    2015-08-01

    In this article, the author proposes two differential quadrature methods to find approximate solutions of one- and two-dimensional hyperbolic partial differential equations with Dirichlet and Neumann boundary conditions. The methods are based on Lagrange interpolation and modified cubic B-splines, respectively. The proposed methods reduce the hyperbolic problem to a system of second-order ordinary differential equations in the time variable. The obtained system is then recast as a system of first-order ordinary differential equations, which is finally solved with the SSP-RK3 scheme. Well-known hyperbolic equations such as the telegraph, Klein-Gordon, sine-Gordon, dissipative nonlinear wave, and Van der Pol type nonlinear wave equations are solved to check the accuracy and efficiency of the proposed methods. The numerical results are reported in terms of L∞, RMS and L2 errors.
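
    A sketch of the Lagrange-interpolation variant's core ingredient: the first-derivative differential quadrature weight matrix, whose entries are the derivatives of the Lagrange cardinal polynomials at the grid points (the B-spline variant replaces this basis). The node choice below is illustrative.

      import numpy as np

      def dq_weights(x):
          # a_ij = L_j'(x_i) for Lagrange cardinal polynomials on nodes x
          n = x.size
          M = np.array([np.prod(x[i] - np.delete(x, i)) for i in range(n)])
          A = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  if i != j:
                      A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
              A[i, i] = -A[i].sum()       # rows of a derivative matrix sum to 0
          return A

      x = np.cos(np.pi * np.arange(11) / 10)  # Chebyshev-type nodes resist Runge effect
      A = dq_weights(x)
      print(np.abs(A @ np.exp(x) - np.exp(x)).max())   # derivative of e^x is e^x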

  12. A Study on the Phenomenon of Collocations: Methodology of Teaching English and German Collocations to Russian Students

    ERIC Educational Resources Information Center

    Varlamova, Elena V.; Naciscione, Anita; Tulusina, Elena A.

    2016-01-01

    Relevance of the issue stated in the article is determined by the fact that there is a lack of research devoted to the methods of teaching English and German collocations. The aim of our work is to determine methods of teaching English and German collocations to Russian university students studying foreign languages through experimental testing.…

  13. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    SciTech Connect

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-19

    The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the instantaneous rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, and the fits were evaluated using the coefficient of determination (R²) and root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. Besides that, the CS-smoothed drying rate proves to be a good estimator for moisture-time curves as well as for the missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  14. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the instantaneous rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, and the fits were evaluated using the coefficient of determination (R²) and root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. Besides that, the CS-smoothed drying rate proves to be a good estimator for moisture-time curves as well as for the missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  15. Air-surface exchange of Hg0 measured by collocated micrometeorological and enclosure methods - Part 1: Data comparability and method characteristics

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2014-09-01

    Reliable quantification of the air-biosphere exchange flux of elemental mercury vapor (Hg0) is crucial for understanding the global biogeochemical cycle of mercury. However, there has not been a standard analytical protocol for flux quantification, and little attention has been devoted to characterizing the temporal variability and comparability of fluxes measured by different methods. In this study, we deployed a collocated set of micrometeorological (MM) and enclosure measurement systems to quantify Hg0 flux over bare soil and low standing crop in an agricultural field. The techniques include relaxed eddy accumulation (REA), modified Bowen ratio (MBR), aerodynamic gradient (AGM) as well as dynamic flux chambers of traditional (TDFC) and novel (NDFC) designs. The five systems and their measured fluxes were cross-examined with respect to magnitude, temporal trend and sensitivity to environmental variables. Fluxes measured by the MM and DFC methods showed distinct temporal trends: the former exhibited a highly dynamic temporal variability while the latter had much more gradual temporal features. The diurnal characteristics reflected the difference in the fundamental processes driving the measurements. The correlations between NDFC and TDFC fluxes and between MBR and AGM fluxes were significant (R > 0.8, p < 0.05), but the correlation between DFC and MM instantaneous fluxes was weak to moderate (R = 0.1-0.5). Statistical analysis indicated that the medians of turbulent fluxes estimated by the three independent MM techniques were not significantly different. Cumulative flux measured by TDFC was considerably lower (42% of AGM and 31% of MBR fluxes) while those measured by NDFC, AGM and MBR were similar (< 10% difference). This indicates that the NDFC technique, which accounts for internal friction velocity, effectively bridged the gap in measured Hg0 flux compared to MM techniques. Cumulated flux measured by REA was ~60% higher than the gradient-based fluxes. Environmental

  16. Smoothing spline ANOVA decomposition of arbitrary splines: an application to eye movements in reading.

    PubMed

    Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias

    2015-01-01

    The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading.

  17. Accuracy of a Mitral Valve Segmentation Method Using J-Splines for Real-Time 3D Echocardiography Data

    PubMed Central

    Siefert, Andrew W.; Icenogle, David A.; Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Rossignac, Jarek; Lerakis, Stamatios; Yoganathan, Ajit P.

    2013-01-01

    Patient-specific models of the heart’s mitral valve (MV) exhibit potential for surgical planning. While advances in 3D echocardiography (3DE) have provided adequate resolution to extract MV leaflet geometry, no study has quantitatively assessed the accuracy of their modeled leaflets versus a ground-truth standard for temporal frames beyond systolic closure or for differing valvular dysfunctions. The accuracy of a 3DE-based segmentation methodology based on J-splines was assessed for porcine MVs with known 4D leaflet coordinates within a pulsatile simulator during closure, peak closure, and opening for a control, prolapsed, and billowing MV model. For all time points, the mean distance error between the segmented models and ground-truth data were 0.40±0.32 mm, 0.52±0.51 mm, and 0.74±0.69 mm for the control, flail, and billowing models. For all models and temporal frames, 95% of the distance errors were below 1.64 mm. When applied to a patient data set, segmentation was able to confirm a regurgitant orifice and post-operative improvements in coaptation. This study provides an experimental platform for assessing the accuracy of an MV segmentation methodology at phases beyond systolic closure and for differing MV dysfunctions. Results demonstrate the accuracy of a MV segmentation methodology for the development of future surgical planning tools. PMID:23460042

  18. Efficient Temperature-Dependent Green's Function Methods for Realistic Systems: Using Cubic Spline Interpolation to Approximate Matsubara Green's Functions.

    PubMed

    Kananenka, Alexei A; Welden, Alicia Rae; Lan, Tran Nguyen; Gull, Emanuel; Zgid, Dominika

    2016-05-10

    The popular, stable, robust, and computationally inexpensive cubic spline interpolation algorithm is adopted and used for finite temperature Green's function calculations of realistic systems. We demonstrate that with appropriate modifications the temperature dependence can be preserved while the Green's function grid size can be reduced by about 2 orders of magnitude by replacing the standard Matsubara frequency grid with a sparser grid and a set of interpolation coefficients. We benchmarked the accuracy of our algorithm as a function of a single parameter sensitive to the shape of the Green's function. Through numerous examples, we confirmed that our algorithm can be utilized in a systematically improvable, controlled, and black-box manner and highly accurate one- and two-body energies and one-particle density matrices can be obtained using only around 5% of the original grid points. Additionally, we established that to improve accuracy by an order of magnitude, the number of grid points needs to be doubled, whereas for the Matsubara frequency grid, an order of magnitude more grid points must be used. This suggests that realistic calculations with large basis sets that were previously out of reach because they required enormous grid sizes may now become feasible. PMID:27049642
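
    The mechanics of the grid-reduction step can be sketched in a few lines, assuming a hypothetical smooth Green's-function-like profile on a Matsubara-style frequency grid: cubic spline coefficients on a sparse subgrid reproduce the dense-grid values to high accuracy.

      import numpy as np
      from scipy.interpolate import CubicSpline

      beta, n_dense = 50.0, 4096
      w_dense = (2 * np.arange(n_dense) + 1) * np.pi / beta   # fermionic frequencies

      def gf(w, eps=1.0):             # Im G(iw) = -w / (w**2 + eps**2)
          return -w / (w**2 + eps**2)

      # sparse, roughly logarithmic subgrid of Matsubara indices
      sparse = np.unique(np.geomspace(1, n_dense - 1, 60).astype(int))
      spl = CubicSpline(w_dense[sparse], gf(w_dense[sparse]))

      dense = w_dense[sparse[0]:]     # evaluate only inside the sparse range
      err = np.abs(spl(dense) - gf(dense)).max()
      print(f"{sparse.size} of {n_dense} points, max interpolation error {err:.2e}")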

  19. SCMCRYS: predicting protein crystallization using an ensemble scoring card method with estimating propensity scores of P-collocated amino acid pairs.

    PubMed

    Charoenkwan, Phasit; Shoombuatong, Watshara; Lee, Hua-Chin; Chaijaruwanich, Jeerayut; Huang, Hui-Ling; Ho, Shinn-Ying

    2013-01-01

    Existing methods for predicting protein crystallization obtain high accuracy using various types of complemented features and complex ensemble classifiers, such as support vector machine (SVM) and Random Forest classifiers. It is desirable to develop a simple and easily interpretable prediction method with informative sequence features to provide insights into protein crystallization. This study proposes an ensemble method, SCMCRYS, to predict protein crystallization, in which each classifier is built by using a scoring card method (SCM) with estimated propensity scores of p-collocated amino acid (AA) pairs (p=0 for a dipeptide). The SCM classifier determines the crystallization of a sequence according to a weighted-sum score. The weights are the composition of the p-collocated AA pairs, and the propensity scores of these AA pairs are estimated using a statistical method with an optimization approach. SCMCRYS predicts crystallization using simple voting over a number of SCM classifiers. The experimental results show that the single SCM classifier utilizing dipeptide composition, with an accuracy of 73.90%, is comparable to the best previously-developed SVM-based classifier, SVM_POLY (74.6%), and our proposed SVM-based classifier utilizing the same dipeptide composition (77.55%). The SCMCRYS method, with an accuracy of 76.1%, is comparable to the state-of-the-art ensemble methods PPCpred (76.8%) and RFCRYS (80.0%), which used the SVM and Random Forest classifiers, respectively. This study also investigates mutagenesis analysis based on SCM and the result reveals the hypothesis that the mutagenesis of surface residues Ala and Cys has large and small probabilities of enhancing protein crystallizability considering the estimated scores of crystallizability and solubility, melting point, molecular weight and conformational entropy of amino acids in a generalized condition. The propensity scores of amino acids and dipeptides for estimating the protein crystallizability can aid
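
    A toy sketch of the scoring-card idea for p = 0 (dipeptides): the score of a sequence is the weighted sum of its dipeptide composition against a propensity table. The table below is randomly initialized and the threshold is arbitrary, purely for illustration; SCM estimates the propensities statistically from training data.

      import numpy as np
      from itertools import product

      AA = "ACDEFGHIKLMNPQRSTVWY"
      rng = np.random.default_rng(5)
      # hypothetical propensity scores; SCMCRYS optimizes these on training data
      propensity = {a + b: rng.uniform(0, 1) for a, b in product(AA, repeat=2)}

      def scm_score(seq):
          pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
          comp = {dp: pairs.count(dp) / len(pairs) for dp in set(pairs)}
          return sum(comp[dp] * propensity[dp] for dp in comp)  # weighted-sum score

      seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
      threshold = 0.5                   # illustrative decision threshold
      label = "crystallizable" if scm_score(seq) > threshold else "not crystallizable"
      print(scm_score(seq), "->", label)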

  20. Accelerating the performance of a novel meshless method based on collocation with radial basis functions by employing a graphical processing unit as a parallel coprocessor

    NASA Astrophysics Data System (ADS)

    Owusu-Banson, Derek

    In recent times, a variety of industries, applications and numerical methods, including the meshless method, have enjoyed a great deal of success by utilizing the graphical processing unit (GPU) as a parallel coprocessor. These benefits often include performance improvements over previous implementations. Furthermore, applications running on graphics processors enjoy superior performance per dollar and performance per watt compared with implementations built exclusively on traditional central processing technologies. The GPU was originally designed for graphics acceleration, but the modern GPU, known as the General Purpose Graphical Processing Unit (GPGPU), can be used for scientific and engineering calculations. The GPGPU consists of a massively parallel array of integer and floating point processors, typically with hundreds of processors per graphics card and dedicated high-speed memory. This work describes an application written by the author, titled GaussianRBF, to show the implementation and results of a novel meshless method that incorporates collocation with the Gaussian radial basis function by utilizing the GPU as a parallel coprocessor. Key phases of the proposed meshless method have been executed on the GPU using the NVIDIA CUDA software development kit. In particular, the matrix fill and solution phases have been carried out on the GPU, along with some post-processing. This approach resulted in a decreased processing time compared to a similar algorithm implemented on the CPU while maintaining the same accuracy.
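
    A sketch of the matrix-fill and solve phases for Gaussian RBF collocation of an interpolation problem, written with NumPy; the same vectorized expressions port to a GPU array library. This is an illustration, not the author's GaussianRBF code, and the shape parameter and point count are arbitrary.

      import numpy as np

      rng = np.random.default_rng(6)
      centers = rng.uniform(0, 1, (300, 2))        # collocation points = centers
      f = np.sin(2 * np.pi * centers[:, 0]) * np.cos(np.pi * centers[:, 1])

      eps = 8.0                                    # Gaussian shape parameter
      # matrix fill: A_ij = exp(-eps^2 * ||x_i - x_j||^2), fully vectorized
      d2 = ((centers[:, None, :] - centers[None, :, :])**2).sum(-1)
      A = np.exp(-eps**2 * d2)

      # solution phase; a tiny ridge guards against ill-conditioning
      w = np.linalg.solve(A + 1e-10 * np.eye(A.shape[0]), f)

      x_test = rng.uniform(0, 1, (5, 2))
      d2t = ((x_test[:, None, :] - centers[None, :, :])**2).sum(-1)
      approx = np.exp(-eps**2 * d2t) @ w
      exact = np.sin(2 * np.pi * x_test[:, 0]) * np.cos(np.pi * x_test[:, 1])
      print(np.abs(approx - exact).max())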

  1. General spline filters for discontinuous Galerkin solutions

    PubMed Central

    Peters, Jörg

    2015-01-01

    The discontinuous Galerkin (dG) method outputs a sequence of polynomial pieces. Post-processing the sequence by Smoothness-Increasing Accuracy-Conserving (SIAC) convolution not only increases the smoothness of the sequence but can also improve its accuracy and yield superconvergence. SIAC convolution is considered optimal if the SIAC kernels, in the form of a linear combination of B-splines of degree d, reproduce polynomials of degree 2d. This paper derives simple formulas for computing the optimal SIAC spline coefficients for the general case including non-uniform knots. PMID:26594090

  2. Spline screw payload fastening system

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the

  3. Spline screw payload fastening system

    NASA Astrophysics Data System (ADS)

    Vranish, John M.

    1992-09-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the

  4. Spline screw payload fastening system

    NASA Astrophysics Data System (ADS)

    Vranish, John M.

    1993-09-01

    A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the

  5. Clinical Trials: Spline Modeling is Wonderful for Nonlinear Effects.

    PubMed

    Cleophas, Ton J

    2016-01-01

    Traditionally, nonlinear relationships like the smooth shapes of airplanes, boats, and motor cars were constructed from scale models using stretched thin wooden strips, otherwise called splines. In the past decades, mechanical spline methods have been replaced with their mathematical counterparts. The objective of the study was to assess whether spline modeling can adequately capture the relationships between exposure and outcome variables in a clinical trial, and whether it can detect patterns in a trial that are relevant but go unobserved with simpler regression models. A clinical trial assessing the effect of quantity of care on quality of care was used as an example. Spline curves consisting of 4 or 5 cubic functions were applied. SPSS statistical software was used for analysis. The spline curves of our data outperformed the traditional curves because (1) unlike the traditional curves, they did not miss the top quality of care given in either subgroup, (2) unlike the traditional curves, they, rightly, did not produce sinusoidal patterns, and (3) unlike the traditional curves, they provided a virtually 100% match of the original values. We conclude that (1) spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial; (2) spline modeling can detect patterns in a trial that are relevant but may go unobserved with simpler regression models; (3) spline modeling has great potential in clinical research, given the many nonlinear effects in this field and its mathematical refinement to fit nearly any nonlinear effect accurately; and (4) spline modeling should improve predictions from clinical research for the benefit of health decisions and health care. We hope that this brief introduction to spline modeling will stimulate clinical investigators to start using this wonderful method.

  6. Mathematical research on spline functions

    NASA Technical Reports Server (NTRS)

    Horner, J. M.

    1973-01-01

    One approach to spline functions is to grossly approximate the integrand in J and solve the resulting problem exactly. If the integrand in J is approximated by the square of Y'', the resulting problem lends itself to exact solution: the familiar cubic spline. Another approach is to investigate various approximations to the integrand in J and attempt to solve the resulting problems. The results are described.
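
    A standard reconstruction of the functional in question, assuming J is the bending (curvature) energy of a mechanical spline, reads as follows; the linearized integrand is exactly the one the abstract mentions.

```latex
% Bending energy of a curve y(x) and its usual linearization
J[y] = \int \kappa^2 \, ds
     = \int \frac{(y'')^2}{\bigl(1 + (y')^2\bigr)^{5/2}} \, dx
     \;\approx\; \int (y'')^2 \, dx
% Euler-Lagrange equation of the linearized problem between data points:
% y'''' = 0, so y is piecewise cubic -- the familiar cubic spline.
```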

  7. Single authentication: exposing weighted splining artifacts

    NASA Astrophysics Data System (ADS)

    Ciptasari, Rimba W.

    2016-05-01

    A common form of manipulation is to splice parts of one image into another, either to remove or to blend objects. Inspired by this situation, we propose a single authentication technique for detecting traces of the weighted-average splining technique. In this paper, we assume that an image composite can be created by joining two images so that the edge between them is imperceptible. The weighted-average technique operates on overlapped images, making it possible to compute the gray-level value of points within a transition zone. This approach works on the assumption that although the splining process leaves the transition zone smooth, it may nevertheless alter the underlying statistics of an image. In other words, it introduces specific correlations into the image. The proposed idea for identifying these correlations is to generate an original model of both weighting functions, left and right, as references for their synthetic models. The overall authentication process is divided into two main stages: pixel predictive coding and weighting function estimation. In the former stage, the set of intensity pairs {Il,Ir} is computed by exploiting a pixel extrapolation technique. The least-squares estimation method is then employed to yield the weighting coefficients. We show the efficacy of the proposed scheme in revealing splining artifacts. We believe that this is the first work that exposes the image splining artifact as evidence of digital tampering.
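
    In the usual formulation of weighted-average splining (our notation, consistent with the abstract's left and right weighting functions), a pixel in the transition zone is a convex combination of the two overlapped images:

```latex
% Gray level in the transition zone, blending the left image I_l and the
% right image I_r with complementary weights
I(x) = w(x)\, I_l(x) + \bigl(1 - w(x)\bigr)\, I_r(x),
\qquad w = 1 \text{ at the left edge}, \; w = 0 \text{ at the right edge}
```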

  8. Theory, computation, and application of exponential splines

    NASA Technical Reports Server (NTRS)

    Mccartin, B. J.

    1981-01-01

    A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape preserving approximant is developed. A sequence of selected curve-fitting examples are presented which clearly demonstrate the advantages of exponential splines over cubic splines.

  9. Mercury vapor air-surface exchange measured by collocated micrometeorological and enclosure methods - Part II: Bias and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2015-05-01

    Dynamic flux chambers (DFCs) and micrometeorological (MM) methods are extensively deployed for gauging air-surface Hg0 gas exchange. However, a systematic evaluation of the precision of the contemporary Hg0 flux quantification methods is not available. In this study, the uncertainty in Hg0 flux measured by the relaxed eddy accumulation (REA) method, the aerodynamic gradient method (AGM), the modified Bowen ratio (MBR) method, as well as DFCs of traditional (TDFC) and novel (NDFC) designs, is assessed using a robust data set from two field intercomparison campaigns. The absolute precision in Hg0 concentration difference (ΔC) measurements is estimated at 0.064 ng m-3 for the gradient-based MBR and AGM systems. For the REA system, the parameter is Hg0 concentration (C) dependent at 0.069 + 0.022C. During the campaigns, 57 and 62% of the individual vertical gradient measurements are found to be significantly different from 0, while for the REA technique, the percentage of significant observations is lower. For the chambers, non-significant fluxes are confined to a few night-time periods with varying ambient Hg0 concentrations. Relative bias for DFC-derived fluxes is estimated to be ~ ±10%, and ~ 85% of the flux bias is within ±2 ng m-2 h-1 in absolute terms. The DFC flux bias follows a diurnal cycle, which is largely affected by the forced temperature and irradiation bias in the chambers. Due to contrasting prevailing micrometeorological conditions, the relative uncertainty (median) in turbulent exchange parameters differs by nearly a factor of 2 between the campaigns, while that in ΔC measurement is fairly consistent. The estimated flux uncertainties for the triad of MM techniques are 16-27, 12-23 and 19-31% (interquartile range) for the AGM, MBR and REA methods, respectively. This study indicates that flux-gradient-based techniques (MBR and AGM) are preferable to REA in quantifying Hg0 flux over ecosystems with low vegetation height. A limitation of all Hg0 flux

  10. Mercury vapor air-surface exchange measured by collocated micrometeorological and enclosure methods - Part II: Bias and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.

    2015-02-01

    Dynamic flux chambers (DFCs) and micrometeorological (MM) methods are extensively deployed for gauging air-surface Hg0 gas exchange. However, a systematic evaluation of the precision of the contemporary Hg0 flux quantification methods is not available. In this study, the uncertainty in Hg0 flux measured by the relaxed eddy accumulation (REA) method, the aerodynamic gradient method (AGM), the modified Bowen ratio (MBR) method, as well as DFCs of traditional (TDFC) and novel (NDFC) designs, is assessed using a robust data set from two field intercomparison campaigns. The absolute precision in Hg0 concentration difference (ΔC) measurements is estimated at 0.064 ng m-3 for the gradient-based MBR and AGM systems. For the REA system, the parameter is Hg0 concentration (C) dependent at 0.069 + 0.022C. During the campaigns, 57 and 62% of the individual vertical gradient measurements were found to be significantly different from zero, while for the REA technique the percentage of significant observations was lower. For the chambers, non-significant fluxes are confined to a few nighttime periods with varying ambient Hg0 concentrations. Relative bias for DFC-derived fluxes is estimated to be ~ ±10%, and ~ 85% of the flux bias is within ±2 ng m-2 h-1 in absolute terms. The DFC flux bias follows a diurnal cycle, which is largely dictated by temperature controls on the enclosed volume. Due to contrasting prevailing micrometeorological conditions, the relative uncertainty (median) in turbulent exchange parameters differs by nearly a factor of two between the campaigns, while that in ΔC measurements is fairly stable. The estimated flux uncertainties for the triad of MM techniques are 16-27, 12-23 and 19-31% (interquartile range) for the AGM, MBR and REA methods, respectively. This study indicates that flux-gradient-based techniques (MBR and AGM) are preferable to REA in quantifying Hg0 flux over ecosystems with low vegetation height. A limitation of all Hg0 flux measurement systems investigated

  11. Justification of the collocation method for the integral equation for a mixed boundary value problem for the Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Khalilov, E. H.

    2016-07-01

    The surface integral equation for a spatial mixed boundary value problem for the Helmholtz equation is considered. At a set of chosen points, the equation is replaced with a system of algebraic equations, and the existence and uniqueness of the solution of this system is established. The convergence of the solutions of this system to the exact solution of the integral equation is proven, and the convergence rate of the method is determined.
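
    Although the paper treats a surface integral equation for the Helmholtz problem, the collocation mechanics can be sketched on a simpler model: a one-dimensional Fredholm equation of the second kind. All concrete choices below (kernel, quadrature, manufactured solution) are ours.

```python
# Collocation/Nystrom sketch for u(x) - \int_0^1 k(x,t) u(t) dt = g(x):
# replacing the integral by quadrature and collocating at the nodes turns
# the equation into the dense linear system (I - K) u = g.
import numpy as np

n = 40
t, w = np.polynomial.legendre.leggauss(n)      # Gauss nodes/weights on [-1, 1]
t = 0.5 * (t + 1.0)
w = 0.5 * w                                    # mapped to [0, 1]

k = lambda x, s: 0.5 * np.exp(-np.abs(x - s))  # illustrative kernel
u_exact = lambda x: np.cos(np.pi * x)          # manufactured solution

K = k(t[:, None], t[None, :]) * w[None, :]     # discretized integral operator
g = u_exact(t) - K @ u_exact(t)                # right-hand side, same quadrature

u = np.linalg.solve(np.eye(n) - K, g)          # collocation at the nodes
# Because g was built with the same quadrature, the discrete solution is
# recovered essentially to machine precision:
print(np.max(np.abs(u - u_exact(t))))
```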

  12. Analysis of crustal structure of Venus utilizing residual Line-of-Sight (LOS) gravity acceleration and surface topography data. A trial of global modeling of Venus gravity field using harmonic spline method

    NASA Technical Reports Server (NTRS)

    Fang, Ming; Bowin, Carl

    1992-01-01

    To construct Venus' gravity disturbance field (or gravity anomaly) from the spacecraft-observer line-of-sight (LOS) acceleration perturbation data, both a global and a local approach can be used. The global approach, e.g., spherical harmonic coefficients, and the local approach, e.g., the integral operator method, based on geodetic techniques, are generally not the same, so they must be used separately for mapping long wavelength features and short wavelength features. Harmonic spline, as an interpolation and extrapolation technique, is intrinsically flexible for both global and local mapping of a potential field. Theoretically, it preserves the information of the potential field up to the bound given by the sampling theorem, regardless of whether the mapping is global or local, and it is not affected by truncation errors. The improvement of harmonic spline methodology for global mapping is reported. New basis functions, a singular value decomposition (SVD) based modification to Parker & Shure's numerical procedure, and preliminary results are presented.

  13. Resonant frequency analysis of a Lamé-mode resonator on a quartz plate by the finite-difference time-domain method using the staggered grid with the collocated grid points of velocities

    NASA Astrophysics Data System (ADS)

    Yasui, Takashi; Hasegawa, Koji; Hirayama, Koichi

    2016-07-01

    The finite-difference time-domain (FD-TD) method using a staggered grid with the collocated grid points of velocities (SGCV) was formulated for elastic waves propagating in anisotropic solids and for a rectangular SGCV. Resonant frequency analysis of Lamé-mode resonators on a quartz plate was carried out to confirm the accuracy and validity of the proposed method. The resonant frequencies for the fundamental and higher-order Lamé-modes calculated by the proposed method agreed very well with their theoretical values.

  14. Parameter Choices for Approximation by Harmonic Splines

    NASA Astrophysics Data System (ADS)

    Gutting, Martin

    2016-04-01

    The approximation by harmonic trial functions allows the construction of solutions of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties, splines make regional modeling, or the improvement of a global model over part of the Earth's surface, possible. Fast multipole methods have been developed for some of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm consists of a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in the number of points, for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation, which also allows the treatment of noisy data, requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter, with and without prior knowledge of the noise level. The performance of these methods is considered for different types of noise in a large simulation study. Applications to gravitational field modeling are presented, as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
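
    Of the (ideally automatic) parameter choice methods mentioned above, generalized cross-validation (GCV) is one standard option; the sketch below applies it to a generic linear smoother, with a synthetic basis standing in for the harmonic-spline system.

```python
# GCV choice of the smoothing parameter for a ridge-type linear smoother
# y_hat = H(lam) y; the basis X and the data y are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 80)
X = np.vander(x, 8, increasing=True)          # stand-in trial functions
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

def gcv_score(lam):
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    resid = y - H @ y
    n = y.size
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

lams = np.logspace(-8, 2, 60)
best = min(lams, key=gcv_score)               # no prior noise level needed
print(f"GCV-selected smoothing parameter: {best:.2e}")
```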

  15. Penalized spline estimation for functional coefficient regression models

    PubMed Central

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z.

    2011-01-01

    The functional coefficient regression models assume that the regression coefficients vary with some “threshold” variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called “curse of dimensionality” in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application. PMID:21516260
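
    A minimal P-spline fit in the sense described above (a B-spline basis shrunk by a difference penalty) can be written in a few lines; basis size, penalty order, and lambda are illustrative assumptions.

```python
# P-spline: B-spline basis B plus a second-order difference penalty D,
# solved as ridge-type shrinkage (B'B + lam D'D) beta = B'y.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

degree, n_basis = 3, 20
knots = np.concatenate([[0.0] * degree,
                        np.linspace(0.0, 1.0, n_basis - degree + 1),
                        [1.0] * degree])
B = BSpline.design_matrix(x, knots, degree).toarray()

D = np.diff(np.eye(n_basis), n=2, axis=0)      # second-order differences
lam = 1.0                                      # fixed here; cf. MCV/GCV/EBBS above
beta = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
fit = B @ beta
```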

  16. Learning Collocations: Do the Number of Collocates, Position of the Node Word, and Synonymy Affect Learning?

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2011-01-01

    This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates…

  17. Evaluation of Least-Squares Collocation and the Reduced Point Mass method using the International Association of Geodesy, Joint Study Group 0.3 test data.

    NASA Astrophysics Data System (ADS)

    Tscherning, Carl Christian; Herceg, Matija

    2014-05-01

    The methods of Least-Squares Collocation (LSC) and the Reduced Point Mass method (RPM) both use radial basis functions for the representation of the anomalous gravity potential (T). LSC uses as many basis functions as the number of observations, while the RPM method uses as many as deemed necessary. Both methods have been evaluated, and for some tests compared, in two areas (Central Europe and the South-East Pacific). For both areas, test data were generated using EGM2008. As observational data, (a) ground gravity disturbances, (b) airborne gravity disturbances, (c) GOCE-like second order radial derivatives and (d) GRACE along-track potential differences were available. The use of these data for the computation of values of (e), T in a grid, was the target of the evaluation and comparison investigation. Because T can in principle only be computed using global data, the remove-restore procedure was used, with EGM2008 subtracted (and later added to T) up to degree 240 for datasets (a) and (b), and up to degree 36 for datasets (c) and (d). The estimated coefficient error was accounted for when using LSC and in the calculation of error estimates. The main result is that T was estimated with an error (computed minus control data (e), from which EGM2008 to degree 240 or 36 had been subtracted) as given in the following table (LSC used; all values in mgal):

    Area Europe           (e)-240    (a)      (b)     (e)-36    (c)      (d)
    Mean                    -0.0     0.0     -0.1      -0.1    -0.3     -1.8
    Standard deviation       4.1     0.8      2.7      32.6     6.0     19.2
    Max. difference         19.9    10.4     16.9      69.9    31.3     47.0
    Min. difference        -16.2    -3.7    -15.5     -92.1   -27.8    -65.5

    Area Pacific          (e)-240    (a)      (b)     (e)-36    (c)      (d)
    Mean                    -0.1    -0.1     -0.1       4.6    -0.2      0.2
    Standard deviation       4.8     0.2      1.9      49.1     6.7     18.6
    Max. difference         22.2     1.8     13.4     115.5    26.9     26.5
    Min. difference        -28.7    -3.1    -15.7    -106.4   -33.6     22.1

    The results using RPM with datasets (a), (b) and (c) were comparable. The use of (d) with the RPM method is being implemented. Tests were also done computing dataset (a) from
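
    For reference, the LSC prediction underlying these numbers has the standard collocation form (standard geodetic notation, not reproduced from the abstract):

```latex
% LSC estimate of T at a point P from the observation vector x,
% with signal covariance C_xx, noise covariance D, and cross-covariance
% vector c_P between T(P) and the observations (one basis function per
% observation, as noted above):
\hat{T}(P) = \mathbf{c}_P^{\mathsf{T}} \bigl( \mathbf{C}_{xx} + \mathbf{D} \bigr)^{-1} \mathbf{x}
```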

  18. Collocations: A Neglected Variable in EFL.

    ERIC Educational Resources Information Center

    Farghal, Mohammed; Obiedat, Hussein

    1995-01-01

    Addresses the issue of collocations as an important and neglected variable in English-as-a-Foreign-Language classes. Two questionnaires, in English and Arabic, involving common collocations relating to food, color, and weather were administered to English majors and English language teachers. Results show both groups deficient in collocations. (36…

  19. Interlanguage Development and Collocational Clash

    ERIC Educational Resources Information Center

    Shahheidaripour, Gholamabbass

    2000-01-01

    Background: Persian learners of English committed mistakes and errors that were due to insufficient knowledge of the different senses of words and of the collocational structures they form. Purpose: The study reported here was conducted for a thesis submitted in partial fulfillment of the requirements for the Master of Arts degree, School of Graduate…

  20. Mr. Stockdale's Dictionary of Collocations.

    ERIC Educational Resources Information Center

    Stockdale, Joseph Gagen, III

    This dictionary of collocations was compiled by an English-as-a-Second-Language (ESL) teacher in Saudi Arabia who teaches adult, native speakers of Arabic. The dictionary is practical in teaching English because it helps to focus on everyday events and situations. The dictionary works as follows: the teacher looks up a word, such as "talk"; next…

  1. Multi-quadric collocation model of horizontal crustal movement

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Zeng, Anmin; Ming, Feng; Jing, Yifan

    2016-05-01

    To establish the horizontal crustal movement velocity field of the Chinese mainland, a Hardy multi-quadric fitting model and collocation are usually used. However, the kernel function, nodes, and smoothing factor are difficult to determine in the Hardy function interpolation. Furthermore, the covariance function of the stochastic signal must be carefully constructed in the collocation model, which is not trivial. In this paper, a new combined estimation method for establishing the velocity field, based on collocation and multi-quadric equation interpolation, is presented. The crustal movement estimation simultaneously takes into consideration an Euler vector as the crustal movement trend and the local distortions as stochastic signals, and a kernel function of the multi-quadric fitting model substitutes for the covariance function of collocation. The velocities of a set of 1070 reference stations were obtained from the Crustal Movement Observation Network of China, and the corresponding velocity field was established using the new combined estimation method. A total of 85 reference stations were used as checkpoints; the precision in the north and east components was 1.25 and 0.80 mm yr-1, respectively. The results obtained by the new method agree with those of the collocation method and multi-quadric interpolation, without requiring the covariance function of the signals.
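
    The structure of the combined model can be sketched in a formula (the symbols are ours): the velocity field is a deterministic Euler-rotation trend plus local distortions represented with Hardy multiquadric kernels in place of a covariance function.

```latex
% Euler vector \Omega gives the plate-motion trend; multiquadric kernels
% centered at nodes x_j model the local distortions (stochastic signals);
% c is the smoothing factor of the Hardy multiquadric
v(\mathbf{x}) = \boldsymbol{\Omega} \times \mathbf{x}
  + \sum_{j=1}^{m} \boldsymbol{\alpha}_j \sqrt{\lVert \mathbf{x} - \mathbf{x}_j \rVert^{2} + c^{2}}
```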

  2. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    SciTech Connect

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  3. Triple collocation: beyond three estimates and separation of structural/non-structural errors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the “multiple” collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is s...
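
    For orientation, the classic three-estimate case that this work generalizes can be coded in a few lines; the covariance-based error-variance formulas below are the standard triple collocation identities, applied here to synthetic data.

```python
# Classic triple collocation: three products x, y, z measure the same truth
# with mutually independent zero-mean errors; error variances follow from
# the pairwise covariances.
import numpy as np

rng = np.random.default_rng(2)
truth = rng.standard_normal(100_000)
x = truth + 0.3 * rng.standard_normal(truth.size)
y = truth + 0.5 * rng.standard_normal(truth.size)
z = truth + 0.7 * rng.standard_normal(truth.size)

def tc_error_variances(x, y, z):
    c = np.cov(np.vstack([x, y, z]))
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex2, ey2, ez2

print(tc_error_variances(x, y, z))   # approximately (0.09, 0.25, 0.49)
```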

  4. Recent advances in (soil moisture) triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To date, triple collocation (TC) analysis is one of the most important methods for the global scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method....

  5. A package for the ab-initio calculation of one- and two-photon cross sections of two-electron atoms, using a CI B-splines method

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, L. A. A.

    2003-02-01

    A package is presented for the fully ab-initio calculation of one- and two-photon ionization cross sections for two-electron atomic systems (H-, He, Mg, Ca, …) under strong laser fields, within lowest-order perturbation theory (LOPT) and in the dipole approximation. The atomic structure is obtained through configuration interaction (CI) of antisymmetrized two-electron states expanded in a B-spline finite basis. The formulation of the theory and the relevant codes presented here represent the accumulation of work over the last ten years [1-11,13-15]. Extension to more than two-photon ionization is straightforward. Calculation is possible for both the length and velocity forms of the laser-atom interaction operator. The package is mainly written in standard FORTRAN and uses the publicly available libraries SLATEC, LAPACK and BLAS.

  6. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  7. Investigating ESL Learners' Lexical Collocations: The Acquisition of Verb + Noun Collocations by Japanese Learners of English

    ERIC Educational Resources Information Center

    Miyakoshi, Tomoko

    2009-01-01

    Although it is widely acknowledged that collocations play an important part in second language learning, especially at intermediate-advanced levels, learners' difficulties with collocations have not been investigated in much detail so far. The present study examines ESL learners' use of verb-noun collocations, such as "take notes," "place an…

  8. Spline screw multiple rotations mechanism

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1993-01-01

    A system for coupling two bodies together and for transmitting torque from one body to another with mechanical timing and sequencing is reported. The mechanical timing and sequencing is handled so that the following criteria are met: (1) the bodies are handled in a safe manner and nothing floats loose in space, (2) electrical connectors are engaged as long as possible so that the internal processes can be monitored throughout by sensors, and (3) electrical and mechanical power and signals are coupled. The first body has a splined driver for providing the input torque. The second body has a threaded drive member capable of rotation and limited translation. The embedded drive member will mate with and fasten to the splined driver. The second body has an embedded bevel gear member capable of rotation and limited translation. This bevel gear member is coaxial with the threaded drive member. A compression spring provides a preload on the rotating threaded member, and a thrust bearing is used for limiting the translation of the bevel gear member so that when the bevel gear member reaches the upward limit of its translation the two bodies are fully coupled and the bevel gear member then rotates due to the input torque transmitted from the splined driver through the threaded drive member to the bevel gear member. An output bevel gear with an attached output drive shaft is embedded in the second body and meshes with the threaded rotating bevel gear member to transmit the input torque to the output drive shaft.

  9. Evaluation of assumptions in soil moisture triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...

  10. A cubic spline approximation for problems in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited to many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
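
    The core device, a spline through the grid values supplying the spatial derivatives, can be sketched for Burgers' equation as follows. The explicit Euler time stepping is a deliberate simplification of the implicit and spline-ADI schemes evaluated in the paper.

```python
# Spline-based spatial derivatives for Burgers' equation u_t + u u_x = nu u_xx:
# at each step a cubic spline through (x, u) supplies u_x and u_xx.
import numpy as np
from scipy.interpolate import CubicSpline

nu = 0.1
x = np.linspace(0.0, 2.0 * np.pi, 101)
u = np.sin(x)
u[-1] = u[0]                                   # exact periodicity for the spline
dt = 1.0e-3

for _ in range(500):                           # explicit Euler (illustrative only)
    s = CubicSpline(x, u, bc_type='periodic')
    u = u + dt * (-u * s(x, 1) + nu * s(x, 2))
    u[-1] = u[0]
```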

  11. Bidirectional Elastic Image Registration Using B-Spline Affine Transformation

    PubMed Central

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao

    2014-01-01

    A registration scheme termed B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of both the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes with large deformation, a bidirectional instead of the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a subdivision strategy is used to achieve reasonable efficiency in registration. The performance of the developed scheme was assessed using both a two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210

  12. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.

  13. Covariance modeling in geodetic applications of collocation

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko

    2014-05-01

    The collocation method is widely applied in geodesy for estimating/interpolating gravity-related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for obtaining reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions has been addressed with an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized), where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit to the corresponding empirical values. Further constraints are applied so as to maintain coherence with the model degree variances and prevent possible solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check the impact that improved covariance modeling has on the collocation estimate.
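
    As a loose illustration of the fitting step's structure, plain nonnegative least squares can stand in for the paper's simplex formulation and its extra coherence constraints: the model covariance is a nonnegative combination of degree-variance terms. All numbers below are synthetic.

```python
# Fit an empirical covariance function C(psi) = sum_n sigma_n^2 P_n(cos psi)
# with nonnegative degree variances sigma_n^2 (NNLS stand-in for simplex).
import numpy as np
from scipy.optimize import nnls

psi = np.linspace(0.0, 0.2, 50)                # spherical distance (rad)
emp_cov = np.exp(-psi / 0.05)                  # synthetic empirical values

basis = np.column_stack([
    np.polynomial.legendre.Legendre.basis(n)(np.cos(psi))
    for n in range(2, 60)                      # degrees 2..59 (assumption)
])

sigma2, _ = nnls(basis, emp_cov)               # nonnegative degree variances
model_cov = basis @ sigma2                     # fitted model covariance
```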

  14. Collocation and Technicality in EAP Engineering

    ERIC Educational Resources Information Center

    Ward, Jeremy

    2007-01-01

    This article explores how collocation relates to lexical technicality, and how the relationship can be exploited for teaching EAP to second-year engineering students. First, corpus data are presented to show that complex noun phrase formation is a ubiquitous feature of engineering text, and that these phrases (or collocations) are highly…

  15. Supporting Collocation Learning with a Digital Library

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Franken, Margaret; Witten, Ian H.

    2010-01-01

    Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…

  16. Approximation and modeling with ambient B-splines

    NASA Astrophysics Data System (ADS)

    Lehmann, N.; Maier, L.-B.; Odathuparambil, S.; Reif, U.

    2016-06-01

    We present a novel technique for solving approximation problems on manifolds in terms of standard tensor product B-splines. This method is easy to implement and provides optimal approximation order. Applications include the representation of smooth surfaces of arbitrary genus.

  17. Orthogonal collocation of the nonlinear Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Morin, T. J.; Hawley, M. C.

    1985-07-01

    A numerical solution to the nonlinear Boltzmann equation for Maxwell molecules, including the momentum-conserving kernel, by the method of orthogonal collocation is presented and compared with the similarity solution of Krupp (1967), Bobylev (1975), and Krook and Wu (1976) (KBKW). Excellent agreement is found between the two for KBKW initial values. The calculations of the evolution of a distribution function from non-KBKW initial conditions are examined. The correlation of the non-KBKW trajectories to the presence of a robust unstable manifold in the eigenspace of the linearized Boltzmann equation is considered. The results of a linear analysis are compared with the work of Wang Chang and Uhlenbeck (1952). The implications of the results for the relaxation of nonequilibrium distribution functions are discussed.

  18. Entropy Stable Spectral Collocation Schemes for the Navier-Stokes Equations: Discontinuous Interfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.

    2013-01-01

    Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.

  19. Flexible coiled spline securely joins mating cylinders

    NASA Technical Reports Server (NTRS)

    Coppernol, R. W.

    1966-01-01

    Mating cylindrical members are joined by spline to form an integral structure. The spline is made of tightly coiled, high tensile-strength steel spiral wire that fits a groove between the mating members. It provides a continuous bearing surface for axial thrust between the members.

  20. Validation of significant wave height product from Envisat ASAR using triple collocation

    NASA Astrophysics Data System (ADS)

    Wang, H.; Shi, C. Y.; Zhu, J. H.; Huang, X. Q.; Chen, C. T.

    2014-03-01

    Nowadays, spaceborne Synthetic Aperture Radar (SAR) has become a powerful tool for providing significant wave height. Traditionally, validation of SAR-derived ocean wave height has been carried out against buoy measurements or model outputs, which only yields an inter-comparison, not an 'absolute' validation. In this study, the triple collocation error model is introduced for the validation of Envisat ASAR level 2 data. Significant wave height data from ASAR were validated against in situ buoy data and wave model hindcast results from WaveWatch III, covering a period of six years. The impact of the collocation distance on the error of the ASAR wave height is discussed. From the triple collocation validation analysis, it is found that the error of the Envisat ASAR significant wave height product is linear in the collocation distance and decreases with decreasing collocation distance. Using a linear regression fit, the absolute error of the Envisat ASAR wave height at zero collocation distance was obtained. This triple collocation validation work yields an absolute Envisat ASAR wave height error of 0.49 m in the deep, open ocean.

  1. Radial spline assembly for antifriction bearings

    NASA Technical Reports Server (NTRS)

    Moore, Jerry H. (Inventor)

    1993-01-01

    An outer race carrier is constructed for receiving an outer race of an antifriction bearing assembly. The carrier in turn is slidably fitted in an opening of a support wall to accommodate slight axial movements of a shaft. A plurality of longitudinal splines on the carrier are disposed to be fitted into matching slots in the opening. A deadband gap is provided between sides of the splines and slots, with a radial gap at ends of the splines and slots and a gap between the splines and slots sized larger than the deadband gap. With this construction, operational distortions (slope) of the support wall are accommodated by the larger radial gaps while the deadband gaps maintain a relatively high springrate of the housing. Additionally, side loads applied to the shaft are distributed between sides of the splines and slots, distributing such loads over a larger surface area than a race carrier of the prior art.

  2. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    SciTech Connect

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.; Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.

    2008-07-14

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences, as well as improving the ELISA microarray process, requires both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method for reliably predicting protein concentrations and estimating their errors. The spline method simplifies model selection and fitting
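
    A monotone spline fit in the spirit of PCLS can be sketched by reparameterizing the B-spline coefficients as cumulative sums of nonnegative increments, which forces the fitted standard curve to be nondecreasing. This is an illustrative stand-in, not the authors' exact algorithm.

```python
# Monotone (nondecreasing) spline fit: beta = L @ theta with theta >= 0
# yields nondecreasing B-spline coefficients, hence a nondecreasing spline.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import nnls

rng = np.random.default_rng(3)
conc = np.linspace(0.0, 4.0, 60)               # e.g., log concentration
intensity = 1.0 / (1.0 + np.exp(-3.0 * (conc - 2.0)))  # sigmoid-like curve
intensity += 0.05 * rng.standard_normal(conc.size)

degree, n_basis = 3, 10
knots = np.concatenate([[0.0] * degree,
                        np.linspace(0.0, 4.0, n_basis - degree + 1),
                        [4.0] * degree])
B = BSpline.design_matrix(conc, knots, degree).toarray()

L = np.tril(np.ones((n_basis, n_basis)))       # cumulative-sum reparameterization
offset = intensity.min()                       # crude free intercept
theta, _ = nnls(B @ L, intensity - offset)
fit = B @ (L @ theta) + offset                 # monotone fitted standard curve
```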

  3. A comparison of thin-plate splines with automatic correspondences and B-splines with uniform grids for multimodal prostate registration

    NASA Astrophysics Data System (ADS)

    Mitra, Jhimli; Marti, Robert; Oliver, Arnau; Llado, Xavier; Vilanova, Joan C.; Meriaudeau, Fabrice

    2011-03-01

    This paper provides a comparison of spline-based registration methods applied to register interventional Trans Rectal Ultrasound (TRUS) and pre-acquired Magnetic Resonance (MR) prostate images for needle guided prostate biopsy. B-splines and Thin-plate Splines (TPS) are the most prevalent spline-based approaches to achieve deformable registration. Pertaining to the strategic selection of correspondences for the TPS registration, we use an automatic method already proposed in our previous work to generate correspondences in the MR and US prostate images. The method exploits the prostate geometry with the principal components of the segmented prostate as the underlying framework and involves a triangulation approach. The correspondences are generated with successive refinements and Normalized Mutual Information (NMI) is employed to determine the optimal number of correspondences required to achieve TPS registration. B-spline registration with successive grid refinements are consecutively applied for a significant comparison of the impact of the strategically chosen correspondences on the TPS registration against the uniform B-spline control grids. The experimental results are validated on 4 patient datasets. Dice Similarity Coefficient (DSC) is used as a measure of the registration accuracy. Average DSC values of 0.97+/-0.01 and 0.95+/-0.03 are achieved for the TPS and B-spline registrations respectively. B-spline registration is observed to be more computationally expensive than the TPS registration with average execution times of 128.09 +/- 21.7 seconds and 62.83 +/- 32.77 seconds respectively for images with maximum width of 264 pixels and a maximum height of 211 pixels.
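
    For reference, the TPS side of such a comparison reduces to one small dense solve built from the kernel U(r) = r^2 log r plus an affine part. The sketch below fits one displacement component to hypothetical correspondences; it is the textbook TPS system, not the authors' pipeline.

```python
# Thin-plate spline fit of a scalar displacement v at 2D points P:
# solve [[K, A], [A^T, 0]] [w; a] = [v; 0] with K_ij = U(|P_i - P_j|).
import numpy as np

def tps_fit(P, v, reg=0.0):
    n = P.shape[0]
    r = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    safe_r = np.where(r > 0.0, r, 1.0)          # avoid log(0); U(0) = 0
    K = r**2 * np.log(safe_r) + reg * np.eye(n)
    A = np.hstack([np.ones((n, 1)), P])         # affine columns [1, x, y]
    sys = np.block([[K, A], [A.T, np.zeros((3, 3))]])
    rhs = np.concatenate([v, np.zeros(3)])
    sol = np.linalg.solve(sys, rhs)
    return sol[:n], sol[n:]                     # kernel weights, affine part

P = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [0.5, 0.5]])
v = np.array([0., 0., 0., 0., 0.3])             # displacement at correspondences
w, a = tps_fit(P, v)
```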

  4. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices in this study. The MARS model, as a nonparametric method, is an adaptive method for regression that suits problems with high dimensions and several variables. The semi-parametric technique used here, smoothing splines, is a nonparametric regression method. In this study, we used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices using the MARS model and the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
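
    MARS builds its fit from hinge functions max(0, x − t) and max(0, t − x). The sketch below uses a fixed knot set and ordinary least squares in place of MARS's adaptive forward/backward knot selection; the data and variable are synthetic.

```python
# MARS-style hinge basis on one predictor, fitted by least squares.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 10.0, 300)                # e.g., one accounting variable
y = np.where(x < 4.0, 2.0 * x, 8.0 + 0.5 * (x - 4.0))
y += rng.standard_normal(x.size)

knots = np.linspace(1.0, 9.0, 9)               # fixed knots (MARS adapts these)
columns = [np.ones_like(x)]
for t in knots:
    columns.append(np.maximum(0.0, x - t))     # right hinge
    columns.append(np.maximum(0.0, t - x))     # left hinge
X = np.column_stack(columns)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef                               # piecewise-linear fit
```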

  5. Combined Spline and B-spline for an improved automatic skin lesion segmentation in dermoscopic images using optimal color channel.

    PubMed

    Abbas, A A; Guo, X; Tan, W H; Jalab, H A

    2014-08-01

    In a computerized image analysis environment, the irregularity of a lesion border has been used to differentiate between malignant melanoma and other pigmented skin lesions. The accuracy of automated lesion border detection is a significant step towards accurate classification at a later stage. In this paper, we propose the use of a combined Spline and B-spline in order to enhance the quality of dermoscopic images before segmentation. Morphological operations and a median filter were first used to remove noise from the original image during pre-processing. We then adjusted the image RGB values to the optimal color channel (the green channel). The combined Spline and B-spline method was subsequently adopted to enhance the image before segmentation. The lesion segmentation was completed based on a threshold value empirically obtained using the optimal color channel. Finally, morphological operations were utilized to merge the smaller regions with the main lesion region. Improvement in the average segmentation accuracy was observed in the experimental results conducted on 70 dermoscopic images. The average segmentation accuracy achieved in this paper was 97.21%, with average sensitivity and specificity of 94% and 98.05%, respectively.

  6. The Effect of Input Enhancement of Collocations in Reading on Collocation Learning and Retention of EFL Learners

    ERIC Educational Resources Information Center

    Goudarzi, Zahra; Moini, M. Raouf

    2012-01-01

    Collocation is one of the most problematic areas in second language learning, and it seems that anyone who wants to improve his or her communication in another language should improve his or her collocational competence. This study attempts to determine the effect of applying three different kinds of collocation on the collocation learning and retention of…

  7. Multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS), is a nonparametric method that estimates complex nonlinear relationships by a seri...

  8. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  9. A Two-Timescale Discretization Scheme for Collocation

    NASA Technical Reports Server (NTRS)

    Desai, Prasun; Conway, Bruce A.

    2004-01-01

    The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization for state variables having higher frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement, with differences of less than 0.5 percent. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

  10. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.

  11. The Impact of Corpus-Based Collocation Instruction on Iranian EFL Learners' Collocation Learning

    ERIC Educational Resources Information Center

    Ashouri, Shabnam; Arjmandi, Masoume; Rahimi, Ramin

    2014-01-01

    Over the past decades, studies of EFL/ESL vocabulary acquisition have identified the significance of collocations in language learning. Due to the fact that collocations have been regarded as one of the major concerns of both EFL teachers and learners for many years, the present study attempts to shed light on the impact of corpus-based…

  12. Higher order methods for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Murphy, J. D.; Prenter, P. M.

    This paper applies C1 cubic Hermite polynomials embedded in an orthogonal collocation scheme to the spatial discretization of the unsteady nonlinear Burgers equation as a model of the equations of fluid mechanics. The temporal discretization is carried out by means of either a noniterative or an iterative finite difference procedure. Results of this method are compared with those of a second-order finite difference scheme and a splined-cubic Taylor's series scheme. Stability limits are derived and the matrix structures of the several schemes are compared.

  13. Registration of sliding objects using direction dependent B-splines decomposition

    NASA Astrophysics Data System (ADS)

    Delmon, V.; Rit, S.; Pinho, R.; Sarrut, D.

    2013-03-01

    Sliding motion is a challenge for deformable image registration because it leads to discontinuities in the sought deformation. In this paper, we present a method to handle sliding motion using multiple B-spline transforms. The proposed method decomposes the sought deformation into sliding regions to allow discontinuities at their interfaces, but prevents unrealistic solutions by forcing those interfaces to match. The method was evaluated on 16 lung cancer patients against a single B-spline transform approach and a multi B-spline transforms approach without the sliding constraint at the interface. The target registration error (TRE) was significantly lower with the proposed method (TRE = 1.5 mm) than with the single B-spline approach (TRE = 3.7 mm) and was comparable to the multi B-spline approach without the sliding constraint (TRE = 1.4 mm). The proposed method was also more accurate along region interfaces, with 37% less gaps and overlaps when compared to the multi B-spline transforms without the sliding constraint. This work was presented in part at the 4th International Workshop on Pulmonary Image Analysis during the Medical Image Computing and Computer Assisted Intervention (MICCAI) in Toronto, Canada (2011).

  14. Data approximation using a blending type spline construction

    SciTech Connect

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

    Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C{sup k}-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
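
    As a toy illustration of the finite-difference step described above, the Python sketch below flags high-curvature samples of a 1-D data set as candidate feature points to interpolate exactly; the percentile cutoff frac is hypothetical, and the paper itself works with tensor-product surfaces rather than curves.

        import numpy as np

        def feature_points(y, h, frac=0.9):
            """Indices of high-curvature samples, from central finite differences."""
            d1 = np.gradient(y, h)                        # first derivative estimate
            d2 = np.gradient(d1, h)                       # second derivative estimate
            kappa = np.abs(d2) / (1.0 + d1**2) ** 1.5     # curvature of the graph (x, y(x))
            return np.flatnonzero(kappa > np.quantile(kappa, frac))

        h = 0.05
        x = np.arange(0.0, 6.0, h)
        idx = feature_points(np.sin(x**2), h)             # points to preserve by interpolation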

  15. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.

  16. Spatial optimum collocation model of urban land and its algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangqiang; Li, Xinyun

    2007-06-01

    Optimizing the allocation of urban land means laying out and positioning the various types of land use in space so as to maximize the overall benefits of urban space (economic, social, and environmental) by appropriate methods and techniques. Two problems must be dealt with in optimizing the allocation of urban land: one is the quantitative structure, the other the spatial structure. To address these problems, and following the principle of spatial coordination, a new optimum collocation model for urban land is put forward in this paper. In the model, we define a target function and a set of "soft" constraint conditions, and the area proportions of the various types of land use are restricted to their corresponding allowed ranges. A spatial genetic algorithm, in which the three basic operations of reproduction, crossover, and mutation all operate on the spatial layout, is used to search the space of urban land allocations, so that the optimum spatial collocation scheme can be gradually approached. Taking the built-up areas of Jinan as an example, we carried out a spatial optimum collocation experiment on urban land; the spatial aggregation of the various land-use types was good, and a satisfactory result was obtained.

  17. Results of laser ranging collocations during 1983

    NASA Technical Reports Server (NTRS)

    Kolenkiewicz, R.

    1984-01-01

    The objective of laser ranging collocations is to compare the ability of two satellite laser ranging systems, located in the vicinity of one another, to measure the distance to an artificial Earth satellite in orbit over the sites. The similar measurement of this distance is essential before a new or modified laser system is deployed to worldwide locations in order to gather the data necessary to meet the scientific goals of the Crustal Dynamics Project. In order to be certain the laser systems are operating properly, they are periodically compared with each other. These comparisons or collocations are performed by locating the lasers side by side when they track the same satellite during the same time or pass. The data is then compared to make sure the lasers are giving essentially the same range results. Results of the three collocations performed during 1983 are given.

  18. High-frequency health data and spline functions.

    PubMed

    Martín-Rodríguez, Gloria; Murillo-Fort, Carlos

    2005-03-30

    Seasonal variations are highly relevant for health service organization. In general, short run movements of medical magnitudes are important features for managers in this field to make adequate decisions. Thus, the analysis of the seasonal pattern in high-frequency health data is an appealing task. The aim of this paper is to propose procedures that allow the analysis of the seasonal component in this kind of data by means of spline functions embedded into a structural model. In the proposed method, useful adaptations of the traditional spline formulation are developed, and the resulting procedures are capable of capturing periodic variations, whether deterministic or stochastic, in a parsimonious way. Finally, these methodological tools are applied to a series of daily emergency service demand in order to capture simultaneous seasonal variations whose periods differ.

  19. Spline-based procedures for dose-finding studies with active control

    PubMed Central

    Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim

    2015-01-01

    In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
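
    A minimal Python sketch of the core estimation step, assuming SciPy and entirely hypothetical trial data: fit a cubic spline to the per-dose mean responses and solve for the dose at which it crosses the active-control effect. The paper's bootstrap confidence interval and bias-minimizing design considerations are omitted.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.optimize import brentq

        doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])       # hypothetical dose arms
        y_mean = np.array([0.1, 0.7, 1.3, 2.0, 2.6, 2.9])      # hypothetical mean responses
        control_effect = 2.3                                   # hypothetical active-control efficacy

        f = CubicSpline(doses, y_mean)                         # smooth dose-response estimate
        d_star = brentq(lambda d: f(d) - control_effect,       # target dose d* where f(d*) = control
                        doses[0], doses[-1])                   # unique if the fitted curve is monotone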

  20. Penalized Spline: a General Robust Trajectory Model for ZIYUAN-3 Satellite

    NASA Astrophysics Data System (ADS)

    Pan, H.; Zou, Z.

    2016-06-01

    Owing to the dynamic imaging system, the trajectory model plays a very important role in the geometric processing of high resolution satellite imagery. However, establishing a trajectory model is difficult when only discrete and noisy data are available. In this manuscript, we propose a general robust trajectory model, the penalized spline model, which can fit trajectory data well and smooth noise. The penalty parameter λ, which controls the trade-off between smoothness and fitting accuracy, can be estimated by generalized cross-validation. Five other trajectory models, including third-order polynomials, Chebyshev polynomials, linear interpolation, Lagrange interpolation and cubic splines, are compared with the penalized spline model. Both the sophisticated ephemeris and the on-board ephemeris are used to compare the orbit models. The penalized spline model can smooth part of the noise, and accuracy decreases as the orbit length increases. The band-to-band misregistration of ZiYuan-3 Dengfeng and Faizabad multispectral images is used to evaluate the proposed method. With the Dengfeng dataset, the third-order polynomials and Chebyshev approximation cannot model the oscillation, and introduce misregistration of 0.57 pixels in the across-track direction and 0.33 pixels in the along-track direction. With the Faizabad dataset, the linear interpolation, Lagrange interpolation and cubic spline models suffer from noise, introducing larger misregistration than the approximation models. Experimental results suggest the penalized spline model can model the oscillation and smooth noise.
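
    The authors' implementation is not reproduced here, but recent SciPy (1.10+) ships a GCV-selected smoothing spline that illustrates the same idea of letting generalized cross-validation pick the penalty λ; the orbit numbers below are toy values.

        import numpy as np
        from scipy.interpolate import make_smoothing_spline

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 60.0, 121)               # seconds along a hypothetical orbit arc
        pos = 7.0e6 + 50.0 * np.sin(0.3 * t)          # one ephemeris coordinate, toy scale (m)
        noisy = pos + 5.0 * rng.standard_normal(t.size)

        spl = make_smoothing_spline(t, noisy)         # lam=None: lambda chosen by GCV
        smoothed = spl(t)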

  1. Gauging the Effects of Exercises on Verb-Noun Collocations

    ERIC Educational Resources Information Center

    Boers, Frank; Demecheleer, Murielle; Coxhead, Averil; Webb, Stuart

    2014-01-01

    Many contemporary textbooks for English as a foreign language (EFL) and books for vocabulary study contain exercises with a focus on collocations, with verb-noun collocations (e.g. "make a mistake") being particularly popular as targets for collocation learning. Common exercise formats used in textbooks and other pedagogic materials…

  2. Corpus-Based versus Traditional Learning of Collocations

    ERIC Educational Resources Information Center

    Daskalovska, Nina

    2015-01-01

    One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…

  3. Estimation of musculotendon kinematics in large musculoskeletal models using multidimensional B-Splines

    PubMed Central

    Sartori, Massimo; Reggiani, Monica; van den Bogert, Antonie J.; Lloyd, David G.

    2011-01-01

    We present a robust and computationally inexpensive method to estimate the lengths and three-dimensional moment arms for a large number of musculotendon actuators of the human lower limb. Using a musculoskeletal model of the lower extremity, a set of values was established for the length of each musculotendon actuator for different lower limb generalized coordinates (joint angles). A multidimensional spline function was then used to fit these data. Muscle moment arms were obtained by differentiating the musculotendon length spline function with respect to the generalized coordinate of interest. This new method was then compared to a previously used polynomial regression method. Compared to the polynomial regression method, the multidimensional spline method produced lower errors for estimating musculotendon lengths and moment arms throughout the whole generalized coordinate workspace. The fitting accuracy was also less affected by the number of dependent degrees of freedom and by the amount of experimental data available. The spline method only requires information on musculotendon lengths to estimate both musculotendon lengths and moment arms, thus relaxing data input requirements, whereas the polynomial regression requires different equations to be used for both musculotendon lengths and moment arms. Finally, we used the spline method in conjunction with an electromyography driven musculoskeletal model to estimate muscle forces under different contractile conditions, which showed the method is suitable for the integration into large scale neuromusculoskeletal models. PMID:22176708
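
    A one-degree-of-freedom Python sketch of the idea, with a toy length model standing in for the musculoskeletal-model samples: fit a spline to musculotendon length over a joint angle and obtain the moment arm by differentiation (the tendon-excursion relation r = -dL/dθ). The multidimensional B-spline of the paper generalizes this to several generalized coordinates at once.

        import numpy as np
        from scipy.interpolate import make_interp_spline

        theta = np.linspace(0.0, 2.0, 25)              # joint angle samples (rad)
        mtu_len = 0.35 + 0.04 * np.cos(theta)          # hypothetical musculotendon length (m)

        L = make_interp_spline(theta, mtu_len, k=5)    # quintic spline fit of the length data
        moment_arm = -L.derivative()(theta)            # tendon excursion: r = -dL/dtheta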

  4. Spline-Screw Multiple-Rotation Mechanism

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Mechanism functions like combined robotic gripper and nut runner. Spline-screw multiple-rotation mechanism related to spline-screw payload-fastening system described in (GSC-13454). Incorporated as subsystem in alternative version of system. Mechanism functions like combination of robotic gripper and nut runner; provides both secure grip and rotary actuation of other parts of system. Used in system in which no need to make or break electrical connections to payload during robotic installation or removal of payload. More complicated version needed to make and break electrical connections. Mechanism mounted in payload.

  5. A space-time collocation scheme for modified anomalous subdiffusion and nonlinear superdiffusion equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.

    2016-01-01

    This paper reports a new spectral collocation technique for solving the time-space modified anomalous subdiffusion equation with a nonlinear source term subject to Dirichlet and Neumann boundary conditions. This model equation governs the evolution of the probability density function that describes anomalously diffusing particles. Anomalous diffusion is ubiquitous in physical and biological systems where trapping and binding of particles can occur. A space-time Jacobi collocation scheme is investigated for solving such problems. The main advantage of the proposed scheme is that shifted Jacobi Gauss-Lobatto collocation and shifted Jacobi Gauss-Radau collocation approximations are employed for the spatial and temporal discretizations, respectively. Thereby, the problem is successfully reduced to a system of algebraic equations. The numerical results obtained by this algorithm have been compared with those of various other numerical methods in order to demonstrate the high accuracy and efficiency of the proposed method. Indeed, for a relatively limited number of Gauss-Lobatto and Gauss-Radau collocation nodes, the absolute error in our numerical solutions is sufficiently small.

  6. New version of hex-ecs, the B-spline implementation of exterior complex scaling method for solution of electron-hydrogen scattering

    NASA Astrophysics Data System (ADS)

    Benda, Jakub; Houfek, Karel

    2016-07-01

    We provide an updated version of the program hex-ecs originally presented in Comput. Phys. Commun. 185 (2014) 2903-2912. The original version used an iterative method preconditioned by the incomplete LU factorization (ILU), which, though very stable and predictable, requires a large amount of working memory. In the new version we implemented a "separated electrons" (or "Kronecker product approximation", KPA) preconditioner as suggested by Bar-On et al., Appl. Num. Math. 33 (2000) 95-104. This preconditioner has much lower memory requirements, though in return it requires more iterations to reach converged results. By careful choice between the ILU and KPA preconditioners one is able to extend the computational feasibility to larger calculations. Secondly, we added the option to run the KPA preconditioner on an OpenCL device (e.g. a GPU). GPUs have generally better memory access times, which speeds up particularly the sparse matrix multiplication.
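
    The point of a Kronecker-product preconditioner is that the inverse factorizes, (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹, so the full product matrix is never formed or factored. The dense Python toy below demonstrates the identity; hex-ecs itself works with large sparse B-spline blocks, so this is only a sketch of the idea, not the program's code.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        rng = np.random.default_rng(3)
        m, n = 40, 50
        A = rng.standard_normal((m, m)) + m * np.eye(m)   # toy diagonally dominant factors
        B = rng.standard_normal((n, n)) + n * np.eye(n)
        b = rng.standard_normal(m * n)

        # Row-major identity: (A kron B) @ X.ravel() == (A @ X @ B.T).ravel(),
        # hence (A kron B)^-1 b is applied factor by factor on the reshaped b.
        luA, luB = lu_factor(A), lu_factor(B)
        X = lu_solve(luA, b.reshape(m, n))                # A^{-1} mat(b)
        x = lu_solve(luB, X.T).T.ravel()                  # ... then B^{-T} on the right

        assert np.allclose(np.kron(A, B) @ x, b)          # check against the dense operator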

  7. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in the governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.

  8. A Spline Regression Model for Latent Variables

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.

    2014-01-01

    Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…

  9. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
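
    The paper's exact algorithms are not reproduced here, but one widely used remedy for the roundoff problem it describes is to compute the diagonal of the Chebyshev differentiation matrix by the "negative sum trick", forcing each row to annihilate constants. A minimal Python sketch:

        import numpy as np

        def cheb_diff_matrix(N):
            """First-derivative matrix on N+1 Chebyshev-Gauss-Lobatto points."""
            j = np.arange(N + 1)
            x = np.cos(np.pi * j / N)                        # collocation nodes
            c = np.where((j == 0) | (j == N), 2.0, 1.0) * (-1.0) ** j
            dx = x[:, None] - x[None, :]
            D = np.outer(c, 1.0 / c) / (dx + np.eye(N + 1))  # off-diagonal entries
            np.fill_diagonal(D, 0.0)
            np.fill_diagonal(D, -D.sum(axis=1))              # negative sum trick for the diagonal
            return D, x

        D, x = cheb_diff_matrix(32)
        err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))      # spectral accuracy check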

  10. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.

  11. Trigonometric quadratic B-spline subdomain Galerkin algorithm for the Burgers' equation

    NASA Astrophysics Data System (ADS)

    Ay, Buket; Dag, Idris; Gorgulu, Melis Zorsahin

    2015-12-01

    A variant of the subdomain Galerkin method has been set up to find numerical solutions of the Burgers' equation. The approximate function consists of a combination of trigonometric B-splines. Integration of the Burgers' equation has been achieved by means of the subdomain Galerkin method based on the trigonometric B-splines as approximating functions. The resulting first-order ordinary differential system has been converted into an iterative algebraic equation by use of the Crank-Nicolson method at two successive time levels. The suggested algorithm is tested on some well-known problems for the Burgers' equation.

  12. The Effect of Taper Angle and Spline Geometry on the Initial Stability of Tapered, Splined Modular Titanium Stems.

    PubMed

    Pierson, Jeffery L; Small, Scott R; Rodriguez, Jose A; Kang, Michael N; Glassman, Andrew H

    2015-07-01

    Design parameters affecting the initial mechanical stability of tapered, splined modular titanium stems (TSMTSs) are not well understood. Furthermore, there is considerable variability in contemporary designs. We asked if spline geometry and stem taper angle could be optimized in TSMTSs to improve mechanical stability to resist axial subsidence and increase torsional stability. Initial stability was quantified with stems of varied taper angle and spline geometry implanted in a foam model replicating 2 cm of diaphyseal engagement. Increased taper angle and a broad spline geometry exhibited significantly greater axial stability (+21%-269%) than other design combinations. Neither taper angle nor spline geometry significantly altered initial torsional stability. PMID:25754255

  13. Discrimination of bed form scales using robust spline filters and wavelet transforms: Methods and application to synthetic signals and bed forms of the Río Paraná, Argentina

    NASA Astrophysics Data System (ADS)

    Gutierrez, Ronald R.; Abad, Jorge D.; Parsons, Daniel R.; Best, James L.

    2013-09-01

    There is no standard nomenclature and procedure to systematically identify the scale and magnitude of bed forms such as bars, dunes, and ripples that are commonly present in many sedimentary environments. This paper proposes a standardization of the nomenclature and symbolic representation of bed forms and details the combined application of robust spline filters and continuous wavelet transforms to discriminate these morphodynamic features, allowing the quantitative recognition of bed form hierarchies. Herein the proposed methodology for bed form discrimination is first applied to synthetic bed form profiles, which are sampled at a Nyquist ratio interval of 2.5-50 and a signal-to-noise ratio interval of 1-20 and subsequently applied to a detailed 3-D bed topography from the Río Paraná, Argentina, which exhibits large-scale dunes with superimposed, smaller bed forms. After discriminating the synthetic bed form signals into three bed form hierarchies that represent bars, dunes, and ripples, the accuracy of the methodology is quantified by estimating the reproducibility, the cross correlation, and the standard deviation ratio of the actual and retrieved signals. For the case of the field measurements, the proposed method is used to discriminate small and large dunes and subsequently obtain and statistically analyze the common morphological descriptors such as wavelength, slope, and amplitude of both stoss and lee sides of these different size bed forms. Analysis of the synthetic signals demonstrates that the Morlet wavelet function is the most efficient in retrieving smaller periodicities such as ripples and smaller dunes and that the proposed methodology effectively discriminates waves of different periods for Nyquist ratios higher than 25 and signal-to-noise ratios higher than 5. The analysis of bed forms in the Río Paraná reveals that, in most cases, a Gamma probability distribution, with a positive skewness, best describes the dimensionless wavelength and

  14. Probabilistic collocation for simulation-based robust concept exploration

    NASA Astrophysics Data System (ADS)

    Rippel, Markus; Choi, Seung-Kyum; Allen, Janet K.; Mistree, Farrokh

    2012-08-01

    In the early stages of an engineering design process it is necessary to explore the design space to find a feasible range that satisfies design requirements. When robustness of the system is among the requirements, the robust concept exploration method can be used. In this method, a global metamodel, such as a global response surface of the design space, is used to evaluate robustness. However, for large design spaces, this is computationally expensive and may be relatively inaccurate for some local regions. In this article, a method is developed for successively generating local response models at points of interest as the design space is explored. This approach is based on the probabilistic collocation method. Although the focus of this article is on the method, it is demonstrated using an artificial performance function and a linear cellular alloy heat exchanger. For these problems, this approach substantially reduces computation time while maintaining accuracy.
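
    A scalar Python sketch of the collocation idea under stated assumptions: the "expensive" model below is a stand-in with one standard-normal input, and the polynomial degree and number of points are arbitrary choices. The model is run only at Gauss-Hermite collocation points, which also yield quadrature estimates of the output statistics needed for robustness assessment.

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss

        def expensive_model(xi):
            """Stand-in for a simulation with one standard-normal input."""
            return np.exp(0.3 * xi) + 0.1 * xi**2

        pts, w = hermegauss(5)                # probabilists' Gauss-Hermite points/weights
        w = w / w.sum()                       # normalize to a probability measure

        y = expensive_model(pts)              # five model runs instead of dense sampling
        coef = np.polynomial.polynomial.polyfit(pts, y, 3)   # local cubic response surface
        surrogate = lambda z: np.polynomial.polynomial.polyval(z, coef)

        mean = np.dot(w, y)                   # collocation estimates of output statistics
        var = np.dot(w, (y - mean) ** 2)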

  15. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
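
    A hedged Python sketch of the paper's central device, the difference between two spline fits: the knot counts and noise level are hypothetical, and the paper's actual estimator involves a more careful construction than this simple absolute difference.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 1.0, 500)
        signal = np.exp(-40.0 * (t - 0.4) ** 2)            # smooth "true" signal
        data = signal + 0.02 * rng.standard_normal(t.size)

        def spline_fit(n_knots):
            knots = np.linspace(t[1], t[-2], n_knots)      # uniform interior knots
            return LSQUnivariateSpline(t, data, knots, k=3)(t)

        coarse, fine = spline_fit(10), spline_fit(20)
        f_error_estimate = np.abs(fine - coarse)           # tracks the signal-dependent error in time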

  16. Adaptive Predistortion Using Cubic Spline Nonlinearity Based Hammerstein Modeling

    NASA Astrophysics Data System (ADS)

    Wu, Xiaofang; Shi, Jianghong

    In this paper, a new Hammerstein predistorter modeling for power amplifier (PA) linearization is proposed. The key feature of the model is that cubic splines, instead of conventional high-order polynomials, are utilized as the static nonlinearities, because splines are able to represent hard nonlinearities accurately while circumventing the numerical instability problem. Furthermore, according to the amplifier's AM/AM and AM/PM characteristics, real-valued cubic spline functions are utilized to compensate the nonlinear distortion of the amplifier, and the following finite impulse response (FIR) filters are utilized to eliminate the memory effects of the amplifier. In addition, the identification algorithm of the Hammerstein predistorter is discussed. The predistorter is implemented on the indirect learning architecture, and the separable nonlinear least squares (SNLS) Levenberg-Marquardt algorithm is adopted because the separation method reduces the dimension of the nonlinear search space and thus greatly simplifies the identification procedure. However, the convergence performance of the iterative SNLS algorithm is sensitive to the initial estimate, so an effective normalization strategy is presented to solve this problem. Simulation experiments were carried out on a single-carrier WCDMA signal. Results show that compared to conventional polynomial predistorters, the proposed Hammerstein predistorter has higher linearization performance when the PA is near saturation and comparable linearization performance when the PA is mildly nonlinear. Furthermore, the proposed predistorter is numerically more stable in all input back-off cases. The results also demonstrate the validity of the convergence scheme.

  17. Spline-Screw Payload-Fastening System

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1994-01-01

    Payload handed off securely between robot and vehicle or structure. Spline-screw payload-fastening system includes mating female and male connector mechanisms. Clockwise (or counter-clockwise) rotation of splined male driver on robotic end effector causes connection between robot and payload to tighten (or loosen) and simultaneously causes connection between payload and structure to loosen (or tighten). Includes mechanisms like those described in "Tool-Changing Mechanism for Robot" (GSC-13435) and "Self-Aligning Mechanical and Electrical Coupling" (GSC-13430). Designed for use in outer space, also useful on Earth in applications needed for secure handling and secure mounting of equipment modules during storage, transport, and/or operation. Particularly useful in machine or robotic applications.

  18. Curvilinear bicubic spline fit interpolation scheme

    NASA Technical Reports Server (NTRS)

    Chi, C.

    1973-01-01

    Modification of the rectangular bicubic spline fit interpolation scheme so as to make it suitable for use with a polar grid pattern. In the proposed modified scheme the interpolation function is expressed in terms of the radial length and the arc length, and the shape of the patch, which is a wedge or a truncated wedge, is taken into account implicitly. Examples are presented in which the proposed interpolation scheme was used to reproduce the equations of a hemisphere.

  19. B-spline calculations of oscillator strengths in noble gases.

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Bartschat, Klaus

    2006-05-01

    The B-spline box-based close-coupling method [1] was applied for extensive calculations of the transition probabilities in the noble gases Ne, Ar, Kr and Xe for energy levels up to n = 12. An individually optimized, term-dependent set of non-orthogonal one-electron radial functions was used to account for the strong term dependence in the valence orbitals. The core-valence correlation was introduced through multi-channel expansions, which include the ns^2np^5, nsnp^6 and ns^2np^4(n+1)l target states. The inner-core correlation was accounted for by employing multi-configuration target states. Energy levels and oscillator strengths for transitions from the np^6 ground-state configuration as well as transitions between excited states were computed in the Breit-Pauli approximation. The inner-core correlation was found to be very important for most of the transitions considered. The good agreement with the available experimental data shows that the B-spline method can be used for accurate calculations of oscillator strengths for states with intermediate n-values, i.e. exactly the region where it is difficult to apply standard MCHF methods. At the same time the accuracy for the low-lying states is close to the accuracy obtained in large-scale MCHF calculations [2]. [1] O. Zatsarinny and C. Froese Fischer, J. Phys. B 35, 4669 (2002). [2] A. Irimia and C. Froese Fischer, J. Phys. B 37, 1659 (2004).

  20. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data themselves to derive the best-fitting model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are applied to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, in which the accuracy of models and forecasts is the main concern of the industry. Here the original idea of P-splines is extended to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. The extension constructs the best-fitted surface and provides sensible predictions of the underlying mortality rates in the Malaysian case.
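
    A one-dimensional Python sketch of the Eilers-Marx machinery the paper extends (the real application is two-dimensional, age by year, and uses a GLM likelihood rather than plain least squares); the mortality numbers are toy values, and SciPy 1.8+ is assumed for the design-matrix helper.

        import numpy as np
        from scipy.interpolate import BSpline

        rng = np.random.default_rng(5)
        age = np.linspace(0.0, 100.0, 101)
        log_rate = -9.0 + 0.085 * age + 0.3 * rng.standard_normal(age.size)  # toy Gompertz-like schedule

        k, n_seg, lam = 3, 20, 10.0                       # cubic basis, 20 segments, penalty weight
        xs, xe = age.min(), age.max()
        t = np.r_[[xs] * k, np.linspace(xs, xe, n_seg + 1), [xe] * k]        # open knot vector
        Bmat = BSpline.design_matrix(age, t, k).toarray() # one column per B-spline basis function
        D = np.diff(np.eye(Bmat.shape[1]), 2, axis=0)     # second-order difference penalty

        a = np.linalg.solve(Bmat.T @ Bmat + lam * D.T @ D, Bmat.T @ log_rate)
        smooth = Bmat @ a                                 # P-spline fit of the mortality schedule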

  1. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.

  2. On the spline-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a spline-based wavelet basis is constructed. Given an n-th order spline basis it is proved that the differentiation matrix is accurate of order 2n + 2 when periodic boundary conditions are assumed. This high accuracy, or superconvergence, is lost when the boundary conditions are no longer periodic. Furthermore, it is shown that spline-based bases generate a class of compact finite difference schemes.

  3. Profiling the Collocation Use in ELT Textbooks and Learner Writing

    ERIC Educational Resources Information Center

    Tsai, Kuei-Ju

    2015-01-01

    The present study investigates the collocational profiles of (1) three series of graded textbooks for English as a foreign language (EFL) commonly used in Taiwan, (2) the written productions of EFL learners, and (3) the written productions of native speakers (NS) of English. These texts were examined against a purpose-built collocation list. Based…

  4. Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks

    ERIC Educational Resources Information Center

    Menon, Sujatha; Mukundan, Jayakaran

    2012-01-01

    This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…

  5. Local Adaptive Calibration of the GLASS Surface Incident Shortwave Radiation Product Using Smoothing Spline

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Liang, S.; Wang, G.

    2015-12-01

    Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements are accurate but spatially sparse, whereas the global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally calibration, but transferring their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation is also considered in the proposed method. The point-based reconstructed surface ISR is used as the response variable, with the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables, to train the thin-plate spline model. We evaluated the performance of the approach using the cross-validation method at both daily and monthly time scales over China. We also evaluated the estimated ISR against independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicate that the thin-plate smoothing spline method can be effectively used to calibrate satellite-derived ISR products against ground measurements to achieve better accuracy.
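
    A rough Python sketch of the residual-correction idea, assuming SciPy's RBFInterpolator (whose default kernel is precisely the thin-plate spline) and entirely synthetic station and grid values; the authors' actual procedure is a locally trained thin-plate smoothing spline with elevation as an explanatory variable, which is only loosely mirrored here by treating elevation as a third coordinate.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(6)

        # Synthetic stations: (lon, lat, elevation km) with ground-measured ISR (W/m^2).
        stations = rng.uniform([100.0, 20.0, 0.0], [120.0, 45.0, 4.0], size=(60, 3))
        ground_isr = 180.0 + 2.0 * stations[:, 2] + rng.standard_normal(60)
        glass_at_stations = ground_isr + 8.0 + 3.0 * rng.standard_normal(60)  # biased product

        # Smoothed thin-plate spline of the station residuals, then grid correction.
        tps = RBFInterpolator(stations, ground_isr - glass_at_stations,
                              kernel='thin_plate_spline', smoothing=1.0)
        grid = rng.uniform([100.0, 20.0, 0.0], [120.0, 45.0, 4.0], size=(500, 3))
        glass_grid = 190.0 + 2.0 * grid[:, 2]                                 # toy GLASS field
        calibrated = glass_grid + tps(grid)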

  6. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    SciTech Connect

    Ruberti, M.; Averbukh, V.; Decleva, P.

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for the Gaussian basis calculations, such as Cooper minima and high-energy tails, are found to be reproduced by the B-spline ADC in a very good agreement with the experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  7. Fatigue crack growth monitoring of idealized gearbox spline component using acoustic emission

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Ozevin, Didem; Hardman, William; Kessler, Seth; Timmons, Alan

    2016-04-01

    The spline component of a gearbox structure is a non-redundant element that requires early detection of flaws to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting active flaws; however, the method suffers from the influence of background noise and from location- and sensor-dependent pattern recognition. It is important to identify the source mechanism and adapt it to different test conditions and sensors. In this paper, the fatigue crack growth of a notched and flattened gearbox spline component is monitored using the AE method in a laboratory environment. The test sample has the major details of the spline component on a flattened geometry. The AE data is collected continuously, together with readings from strain gauges strategically positioned on the structure. The fatigue test characteristics are a frequency of 4 Hz and a ratio of minimum to maximum loading of 0.1 in the tensile regime. It is observed that a significant amount of continuous emission is released from the notch tip due to the formation of plastic deformation and slow crack growth. The frequency spectra of continuous emissions and burst emissions are compared to understand the difference between sudden crack growth and gradual crack growth. The predicted crack growth rate is compared with the AE data using the cumulative AE events at the notch tip. The source mechanism of sudden crack growth is obtained by solving the inverse mathematical problem from output signal to input signal.

  8. Quantitative coronary angiography with deformable spline models.

    PubMed

    Klein, A K; Lee, F; Amini, A A

    1997-10-01

    Although current edge-following schemes can be very efficient in determining coronary boundaries, they may fail when the feature to be followed is disconnected (and the scheme is unable to bridge the discontinuity) or branch points exist where the best path to follow is indeterminate. In this paper, we present new deformable spline algorithms for determining vessel boundaries, and enhancing their centerline features. A bank of even and odd S-Gabor filter pairs of different orientations are convolved with vascular images in order to create an external snake energy field. Each filter pair will give maximum response to the segment of vessel having the same orientation as the filters. The resulting responses across filters of different orientations are combined to create an external energy field for snake optimization. Vessels are represented by B-Spline snakes, and are optimized on filter outputs with dynamic programming. The points of minimal constriction and the percent-diameter stenosis are determined from a computed vessel centerline. The system has been statistically validated using fixed stenosis and flexible-tube phantoms. It has also been validated on 20 coronary lesions with two independent operators, and has been tested for interoperator and intraoperator variability and reproducibility. The system has been found to be especially robust in complex images involving vessel branchings and incomplete contrast filling.

  9. Trajectory control of an articulated robot with a parallel drive arm based on splines under tension

    NASA Astrophysics Data System (ADS)

    Yi, Seung-Jong

    Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires a trajectory planner. In addition, consideration of the joint velocity, acceleration, and jerk trajectories is essential for trajectory planning of industrial robots to obtain smooth operation. A newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., polynomial functions of certain degree, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are the method selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and

  10. Color management with a hammer: the B-spline fitter

    NASA Astrophysics Data System (ADS)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.

  11. Multiresolution Analysis of UTAT B-spline Curves

    NASA Astrophysics Data System (ADS)

    Lamnii, A.; Mraoui, H.; Sbibih, D.; Zidna, A.

    2011-09-01

    In this paper, we describe a multiresolution curve representation based on periodic uniform tension algebraic trigonometric (UTAT) spline wavelets of class C{sup 2} and order four. Then we determine the decomposition and the reconstruction vectors corresponding to UTAT-spline spaces. Finally, we give some applications in order to illustrate the efficiency of the proposed approach.

  12. Convexity preserving C2 rational quadratic trigonometric spline

    NASA Astrophysics Data System (ADS)

    Dube, Mridula; Tiwari, Preeti

    2012-09-01

    A C2 rational quadratic trigonometric spline interpolation has been studied using two kinds of rational quadratic trigonometric splines. It is shown that under some natural conditions the solution of the problem exists and is unique. The necessary and sufficient conditions that constrain the interpolation curves to be convex in the interpolating interval or subintervals are derived.

  13. Direct optimization method for reentry trajectory design

    NASA Astrophysics Data System (ADS)

    Jallade, S.; Huber, P.; Potti, J.; Dutruel-Lecohier, G.

    The software package called `Reentry and Atmospheric Transfer Trajectory' (RATT) was developed under ESA contract for the design of atmospheric trajectories. It includes four software programs. TOP (Trajectory OPtimization) optimizes reentry and aeroassisted transfer trajectories. 6FD and 3FD (6 and 3 degrees of freedom Flight Dynamics) are devoted to the simulation of the trajectory. SCA (Sensitivity and Covariance Analysis) performs covariance analysis on a given trajectory with respect to different uncertainties and error sources. TOP provides the optimum guidance law of a three-degree-of-freedom reentry or aeroassisted transfer (AAOT) trajectory. Deorbit and reorbit impulses (if necessary) can be taken into account in the optimization. A wide choice of cost functions is available to the user, such as the integrated heat flux, the sum of the velocity impulses, or a linear combination of both, for trajectory and vehicle design. The crossrange and the downrange can be maximized during the reentry trajectory. Path constraints are available on the load factor, the heat flux and the dynamic pressure. Results on these proposed options are presented. TOPPHY is the part of the TOP software corresponding to the definition and the computation of the optimization problem physics. TOPPHY can interface with several optimizers with dynamic solvers: TOPOP and TROPIC, using direct collocation methods, and PROMIS, using a direct multiple shooting method. TOPOP was developed in the frame of this contract; it uses Hermite polynomials for the collocation method and the NPSOL optimizer from the NAG library. Both TROPIC and PROMIS were developed by the DLR (Deutsche Forschungsanstalt fuer Luft und Raumfahrt) and use the SLSQP optimizer. For the dynamic equation resolution, TROPIC uses a collocation method with splines and PROMIS uses a multiple shooting method with finite differences. The three different optimizers including dynamics were tested on the reentry trajectory of the

  14. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    SciTech Connect

    Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.

    2012-07-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  15. The Learning Burden of Collocations: The Role of Interlexical and Intralexical Factors

    ERIC Educational Resources Information Center

    Peters, Elke

    2016-01-01

    This study investigates whether congruency (+/- literal translation equivalent), collocate-node relationship (adjective-noun, verb-noun, phrasal-verb-noun collocations), and word length influence the learning burden for EFL learners learning collocations at the initial stage of form-meaning mapping. Eighteen collocations were selected on the basis…

  16. Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting

    2016-01-01

    The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…

  17. The Use of Verb Noun Collocations in Writing Stories among Iranian EFL Learners

    ERIC Educational Resources Information Center

    Bazzaz, Fatemeh Ebrahimi; Samad, Arshad Abd

    2011-01-01

    An important aspect of native speakers' communicative competence is collocational competence which involves knowing which words usually come together and which do not. This paper investigates the possible relationship between knowledge of collocations and the use of verb noun collocation in writing stories because collocational knowledge…

  18. A space-time spectral collocation algorithm for the variable order fractional wave equation.

    PubMed

    Bhrawy, A H; Doha, E H; Alzaidy, J F; Abdelkawy, M A

    2016-01-01

    The variable order wave equation plays a major role in acoustics, electromagnetics, and fluid dynamics. In this paper, we consider the space-time variable order fractional wave equation with variable coefficients. We propose an effective numerical method for solving the aforementioned problem in a bounded domain. The shifted Jacobi polynomials are used as basis functions, and the variable-order fractional derivative is described in the Caputo sense. The proposed method is a combination of the shifted Jacobi-Gauss-Lobatto collocation scheme for the spatial discretization and the shifted Jacobi-Gauss-Radau collocation scheme for the temporal discretization. The aforementioned problem is then reduced to a system of easily solvable algebraic equations. Finally, numerical examples are presented to show the effectiveness of the proposed numerical method. PMID:27536504

  19. Spline-locking screw fastening strategy

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1992-01-01

    A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotics or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced space manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.

  20. Spline-Locking Screw Fastening Strategy (SLSFS)

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    1991-01-01

    A fastener was developed by NASA Goddard for efficiently performing assembly, maintenance, and equipment replacement functions in space using either robotic or astronaut means. This fastener, the 'Spline Locking Screw' (SLS) would also have significant commercial value in advanced manufacturing. Commercial (or DoD) products could be manufactured in such a way that their prime subassemblies would be assembled using SLS fasteners. This would permit machines and robots to disconnect and replace these modules/parts with ease, greatly reducing life cycle costs of the products and greatly enhancing the quality, timeliness, and consistency of repairs, upgrades, and remanufacturing. The operation of the basic SLS fastener is detailed, including hardware and test results. Its extension into a comprehensive fastening strategy for NASA use in space is also outlined. Following this, the discussion turns toward potential commercial and government applications and the potential market significance of same.

  1. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX*

    PubMed Central

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    2015-01-01

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work, even in the independent data case. PMID:26283801

  2. The spline probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Sithiravel, Rajiv; Tharmarasa, Ratnasingham; McDonald, Mike; Pelletier, Michel; Kirubarajan, Thiagalingam

    2012-06-01

    The Probability Hypothesis Density (PHD) filter is a multitarget tracker for recursively estimating the number of targets and their state vectors from a set of observations. The PHD filter is capable of working well in scenarios with false alarms and missed detections. Two distinct PHD filter implementations are available in the literature: the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filters. The SMC-PHD filter uses particles to provide target state estimates, which can lead to a high computational load, whereas the GM-PHD filter does not use particles but is restricted to linear Gaussian mixture models. The SMC-PHD filter provides only weighted samples at discrete points in the state space instead of a continuous estimate of the probability density function of the system state, and thus suffers from the well-known degeneracy problem. This paper proposes a B-spline based Probability Hypothesis Density (S-PHD) filter, which has the capability to model any arbitrary probability density function. The resulting algorithm can handle linear, non-linear, Gaussian, and non-Gaussian models, and the S-PHD filter can also provide continuous estimates of the probability density function of the system state. In addition, by moving the knots dynamically, the S-PHD filter ensures that the splines cover only the region where the probability of the system state is significant, hence the high efficiency of the S-PHD filter is maintained at all times. Also, unlike the SMC-PHD filter, the S-PHD filter is immune to the degeneracy problem due to its continuous nature. The S-PHD filter derivations and simulations are provided in this paper.

  3. Usability Study of Two Collocated Prototype System Displays

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2007-01-01

    Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the 2 new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.

  4. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  5. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    NASA Technical Reports Server (NTRS)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  6. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    With the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
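
    As a concrete illustration of the information-criterion route described above, the following Python sketch fits least-squares cubic B-spline curves with an increasing number of control points to noisy 1D samples and scores each fit with AIC and BIC. The data, the uniform knot placement, and the Gaussian-error forms of the criteria are illustrative assumptions, not the authors' implementation.

      # Choosing the number of B-spline control points via AIC/BIC (sketch).
      import numpy as np
      from scipy.interpolate import make_lsq_spline

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 200)
      y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)

      def criteria(n_ctrl, degree=3):
          # Uniform interior knots; n_ctrl = number of interior knots + degree + 1.
          interior = np.linspace(0.0, 1.0, n_ctrl - degree + 1)[1:-1]
          knots = np.concatenate((np.zeros(degree + 1), interior, np.ones(degree + 1)))
          spl = make_lsq_spline(t, y, knots, k=degree)
          rss = np.sum((y - spl(t)) ** 2)
          n = t.size
          return (n * np.log(rss / n) + 2 * n_ctrl,          # AIC
                  n * np.log(rss / n) + n_ctrl * np.log(n))  # BIC

      for m in range(5, 16):
          print(m, criteria(m))

    The control-point count with the smallest score is taken as optimal; BIC penalizes model complexity more strongly than AIC.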

  7. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.; Franco-Villoria, M.

    2014-06-01

    The quantification of measurement uncertainty of atmospheric parameters is a key factor in assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical contributions to the uncertainty budget is related to the collocation mismatch in space and time among observations made at different locations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or lidar. In this paper we propose a statistical modelling approach capable of explaining the relationship between collocation uncertainty and a set of environmental factors, height and distance between imperfectly collocated trajectories. The new statistical approach is based on the heteroskedastic functional regression (HFR) model which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Along this line, a five-fold decomposition of the total collocation uncertainty is proposed, giving both a profile budget and an integrated column budget. HFR is a data-driven approach valid for any atmospheric parameter, which can be assumed smooth. It is illustrated here by means of the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS reference upper-air network (GRUAN). In this case, 85% of the total collocation uncertainty is ascribed to reducible environmental error, 11% to irreducible environmental error, 3.4% to adjustable bias, 0.1% to sampling error and 0.2% to measurement error.

  8. A Simple and Fast Spline Filtering Algorithm for Surface Metrology

    PubMed Central

    Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei

    2015-01-01

    Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement. PMID:26958443
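
    The idea can be sketched in a few lines, assuming the common ISO-style spline filter form w = (I + alpha^4 D^T D)^(-1) y with D a second-difference operator, which the DCT diagonalizes under reflecting end conditions. The relation between alpha and the cutoff wavelength below follows one common convention and is an assumption to be checked against the standard.

      # DCT-based spline filtering of a surface profile (sketch).
      import numpy as np
      from scipy.fft import dct, idct

      def spline_filter(y, dx, cutoff):
          n = y.size
          alpha = 1.0 / (2.0 * np.sin(np.pi * dx / cutoff))   # assumed convention
          lam = 2.0 - 2.0 * np.cos(np.pi * np.arange(n) / n)  # second-difference eigenvalues
          gain = 1.0 / (1.0 + alpha**4 * lam**2)
          return idct(dct(y, norm="ortho") * gain, norm="ortho")

      x = np.linspace(0.0, 10.0, 1000)
      rng = np.random.default_rng(1)
      profile = np.sin(2 * np.pi * x / 5.0) + 0.05 * rng.standard_normal(x.size)
      waviness = spline_filter(profile, dx=x[1] - x[0], cutoff=2.5)

    Because both transforms are O(n log n), this gives the "simple and fast" alternative to the iterative Cholesky-based solvers mentioned above.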

  9. An Examination of New Paradigms for Spline Approximations.

    PubMed

    Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A

    2006-01-01

    Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation-based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case.

  10. Subcell resolution in simplex stochastic collocation for spatial discontinuities

    SciTech Connect

    Witteveen, Jeroen A.S.; Iaccarino, Gianluca

    2013-10-15

    Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC–SR method is based on resolving the discontinuity location in the probability space explicitly as function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers’ equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC–SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.

  11. Single-grid spectral collocation for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte

    1988-01-01

    The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.

  12. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    PubMed

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase, and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which adopts the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced and higher soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.

  13. Left ventricular motion reconstruction with a prolate spheroidal B-spline model

    NASA Astrophysics Data System (ADS)

    Li, Jin; Denney, Thomas S., Jr.

    2006-02-01

    Tagged cardiac magnetic resonance (MR) imaging can non-invasively image deformation of the left ventricular (LV) wall. Three-dimensional (3D) analysis of tag data requires fitting a deformation model to tag lines in the image data. In this paper, we present a 3D myocardial displacement and strain reconstruction method based on a B-spline deformation model defined in prolate spheroidal coordinates, which more closely matches the shape of the LV wall than existing Cartesian or cylindrical coordinate models. The prolate spheroidal B-spline (PSB) deformation model also enforces smoothness across the apex and can compute strain there. The PSB reconstruction algorithm was evaluated on a previously published data set to allow head-to-head comparison of the PSB model with existing LV deformation reconstruction methods. We conclude that the PSB method can accurately reconstruct deformation and strain in the LV wall from tagged MR images and has several advantages relative to existing techniques.

  14. The Benard problem: A comparison of finite difference and spectral collocation eigenvalue solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd-order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than the finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd-order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher-order finite difference schemes.
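
    To make the accuracy comparison concrete, here is a minimal Chebyshev-Gauss-Lobatto collocation sketch in Python (a standard Trefethen-style differentiation matrix, not the paper's code) applied to a simple two-point boundary value problem; the rapid error decay it exhibits is the behaviour the study reports.

      # Chebyshev collocation for u''(x) = -pi^2 sin(pi x), u(-1) = u(1) = 0.
      import numpy as np

      def cheb(n):
          # Differentiation matrix on the n+1 Chebyshev-Gauss-Lobatto points.
          x = np.cos(np.pi * np.arange(n + 1) / n)
          c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
          dX = x[:, None] - x[None, :]
          D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
          D -= np.diag(D.sum(axis=1))
          return D, x

      n = 16
      D, x = cheb(n)
      D2 = (D @ D)[1:-1, 1:-1]              # delete boundary rows/cols: u(+-1) = 0
      u = np.linalg.solve(D2, -np.pi**2 * np.sin(np.pi * x[1:-1]))
      print(np.max(np.abs(u - np.sin(np.pi * x[1:-1]))))  # near round-off for modest n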

  15. Continuous Groundwater Monitoring Collocated at USGS Streamgages

    NASA Astrophysics Data System (ADS)

    Constantz, J. E.; Eddy-Miller, C.; Caldwell, R.; Wheeer, J.; Barlow, J.

    2012-12-01

    USGS Office of Groundwater funded a 2-year pilot study collocating groundwater wells for monitoring water level and temperature at several existing continuous streamgages in Montana and Wyoming, while the U.S. Army Corps of Engineers funded enhancement of streamgages in Mississippi. To increase spatial relevance within a given watershed, study sites were selected where near-stream groundwater was in connection with an appreciable aquifer, and where logistics and cost of well installations were considered representative. After each well installation and surveying, groundwater level and temperature were easily either radio-transmitted or hardwired to the existing data acquisition system located in the streamgaging shelter. Since USGS field personnel regularly visit streamgages during routine streamflow measurements and streamgage maintenance, the close proximity of observation wells resulted in minimal extra time to verify electronically transmitted measurements. After the field protocol was tuned, stream and nearby groundwater information were concurrently acquired at streamgages and transmitted to satellite from seven pilot-study sites extending over nearly 2,000 miles (3,200 km) of the central US from October 2009 until October 2011, for evaluating the scientific and engineering add-on value of the enhanced streamgage design. Examination of the four-parameter transmission from the seven pilot-study groundwater gaging stations reveals an internally consistent, dynamic data suite of continuous groundwater elevation and temperature in tandem with ongoing stream stage and temperature data. Qualitatively, the graphical information provides appreciation of seasonal trends in stream exchanges with shallow groundwater, as well as thermal issues of concern for topics ranging from ice hazards to suitability of fish refugia, while quantitatively this information provides a means for estimating flux exchanges through the streambed via heat-based inverse-type groundwater modeling. In June…

  16. Applications of B-splines in atomic and molecular physics

    NASA Astrophysics Data System (ADS)

    Bachau, H.; Cormier, E.; Decleva, P.; Hansen, J. E.; Martín, F.

    2001-12-01

    One of the most significant developments in computational atomic and molecular physics in recent years has been the introduction of B-spline basis sets in calculations of atomic and molecular structure and dynamics. B-splines were introduced in applied mathematics more than 50 years ago, but it has been in the 1990s, with the advent of powerful computers, that the number of applications has grown exponentially. In this review we present the main properties of B-splines and discuss why they are useful to solve different problems in atomic and molecular physics. We provide an extensive reference list of theoretical works that have made use of B-spline basis sets up to 2000. Among these, we have focused on those applications that have led to the discovery of new interesting phenomena and pointed out the reasons behind the success of the approach.
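
    The properties that make B-splines so useful as basis sets (strict local support and a partition of unity) follow from the Cox-de Boor recursion; a self-contained sketch, with an illustrative clamped cubic knot vector:

      # Evaluating B-spline basis functions via the Cox-de Boor recursion.
      import numpy as np

      def bspline_basis(i, k, t, x):
          # B_{i,k}(x) of degree k on knot vector t (half-open interval convention).
          if k == 0:
              return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
          out = np.zeros_like(x, dtype=float)
          if t[i + k] > t[i]:
              out += (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
          if t[i + k + 1] > t[i + 1]:
              out += (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) \
                     * bspline_basis(i + 1, k - 1, t, x)
          return out

      knots = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)
      x = np.linspace(0.0, 4.0 - 1e-9, 400)
      B = [bspline_basis(i, 3, knots, x) for i in range(len(knots) - 4)]
      assert np.allclose(np.sum(B, axis=0), 1.0)   # partition of unity on [0, 4)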

  17. Stable coupling between vector and scalar variables for the IDO scheme on collocated grids

    NASA Astrophysics Data System (ADS)

    Imai, Yohsuke; Aoki, Takayuki

    2006-06-01

    The Interpolated Differential Operator (IDO) scheme on collocated grids provides fourth-order discretizations for all the terms of the fluid flow equations. However, computations of fluid flows on collocated grids are not guaranteed to produce accurate solutions because of the poor coupling between velocity vector and scalar variables. A stable coupling method for the IDO scheme on collocated grids is proposed, where a new representation of first-order derivatives is adopted. It is important in deriving the representation to refer to the variables at neighboring grid points, keeping fourth-order truncation error. It is clear that accuracy and stability are drastically improved for shallow water equations in comparison with the conventional IDO scheme. The effects of the stable coupling are confirmed in incompressible flow calculations for DNS of turbulence and a driven cavity problem. The introduction of a rational function into the proposed method makes it possible to calculate shock waves with the initial conditions of extreme density and pressure jumps.

  18. Recent advances in (soil moisture) triple collocation analysis

    NASA Astrophysics Data System (ADS)

    Gruber, A.; Su, C.-H.; Zwieback, S.; Crow, W.; Dorigo, W.; Wagner, W.

    2016-03-01

    To date, triple collocation (TC) analysis is one of the most important methods for the global-scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method. Different notations that are used to formulate the TC problem are shown to be mathematically identical. While many studies have investigated issues related to possible violations of the underlying assumptions, only a few TC modifications have been proposed to mitigate the impact of these violations. Moreover, assumptions that are often understood as limitations unique to TC analysis are shown to be common to other conventional performance metrics as well. Noteworthy advances in TC analysis have been made in the way error estimates are presented, by moving from the investigation of absolute error variance estimates to the investigation of signal-to-noise ratio (SNR) metrics. Here we review existing error presentations and propose the combined investigation of the SNR (expressed in logarithmic units), the unscaled error variances, and the soil moisture sensitivities of the data sets as an optimal strategy for the evaluation of remotely sensed soil moisture data sets.
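
    For reference, the covariance-based estimator at the core of TC, together with the SNR-in-decibels presentation advocated above, can be written in a few lines (assuming three collocated, linearly related series with mutually uncorrelated errors; variable names are illustrative):

      # Covariance-notation triple collocation with SNR output (sketch).
      import numpy as np

      def triple_collocation(x, y, z):
          C = np.cov(np.vstack([x, y, z]))
          signal = np.array([C[0, 1] * C[0, 2] / C[1, 2],    # signal variance seen
                             C[0, 1] * C[1, 2] / C[0, 2],    # by each data set
                             C[0, 2] * C[1, 2] / C[0, 1]])
          err_var = np.array([C[0, 0], C[1, 1], C[2, 2]]) - signal
          snr_db = 10.0 * np.log10(signal / err_var)
          return err_var, snr_db

      rng = np.random.default_rng(2)
      truth = rng.standard_normal(5000)
      obs = [truth + s * rng.standard_normal(truth.size) for s in (0.3, 0.5, 0.8)]
      err_var, snr_db = triple_collocation(*obs)   # err_var ~ (0.09, 0.25, 0.64)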

  19. Analysis of myocardial motion using generalized spline models and tagged magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Chen, Fang; Rose, Stephen E.; Wilson, Stephen J.; Veidt, Martin; Bennett, Cameron J.; Doddrell, David M.

    2000-06-01

    Heart wall motion abnormalities are very sensitive indicators of common heart diseases, such as myocardial infarction and ischemia. Regional strain analysis is especially important in diagnosing local abnormalities and mechanical changes in the myocardium. In this work, we present a complete method for the analysis of cardiac motion and the evaluation of regional strain in the left ventricular wall. The method is based on generalized spline models and tagged magnetic resonance images (MRI) of the left ventricle. The whole method combines dynamic tracking of tag deformation, simulation of cardiac movement, and accurate computation of the regional strain distribution. More specifically, the analysis of cardiac motion is performed in three stages. First, material points within the myocardium are tracked over time using a semi-automated snake-based tag tracking algorithm developed for this purpose. This procedure is repeated in three orthogonal axes so as to generate a set of one-dimensional sample measurements of the displacement field. The 3D displacement field is then reconstructed from this sample set by using a generalized vector spline model. The spline reconstruction of the displacement field is explicitly expressed as a linear combination of a spline kernel function associated with each sample point and a polynomial term. Finally, the strain tensor (linear or nonlinear) with three direct components and three shear components is calculated by applying a differential operator directly to the displacement function. The proposed method is computationally effective and easy to perform on tagged MR images. The preliminary study has shown potential advantages of using this method for the analysis of myocardial motion and the quantification of regional strain.

  20. Optimal rocket thrust profile shaping using third degree spline function interpolation

    NASA Technical Reports Server (NTRS)

    Johnson, I. L.

    1974-01-01

    Optimal solid-rocket thrust profiles for the parallel-burn, solid-rocket-assisted space shuttle are investigated. Solid-rocket thrust profiles are simulated by using third-degree spline functions, with the values of the thrust ordinates defined as parameters. The profiles are optimized parametrically, using the Davidon-Fletcher-Powell penalty function method, by minimizing propellant weight subject to state and control inequality constraints and to terminal boundary conditions. This study shows that optimizing a control variable parametrically by using third-degree spline function interpolation allows the control to be shaped so that inequality constraints are strictly adhered to and all corners are eliminated. The absence of corners, which is realistic in nature, makes this method attractive from the viewpoint of solid rocket grain design.

  1. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using the L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances calibration accuracy with model complexity. Motivated by support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines, and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
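
    The 1-norm penalty on the kernel coefficient vector can be handled by any sparsity-aware solver. Purely as an illustration (not the authors' calibration code), here is a generic proximal-gradient (ISTA) sketch for minimizing ||Kw - y||^2/2 + lam*||w||_1, with a Gaussian kernel matrix standing in for the spline-generating kernel:

      # L1-penalized kernel regression via ISTA soft-thresholding (sketch).
      import numpy as np

      rng = np.random.default_rng(4)
      x = np.linspace(-1.0, 1.0, 60)
      y = np.exp(-x**2) + 0.01 * rng.standard_normal(x.size)   # toy observations

      K = np.exp(-(x[:, None] - x[None, :])**2 / 0.1)           # kernel matrix
      lam = 1e-3
      step = 1.0 / np.linalg.norm(K, 2)**2                      # 1 / Lipschitz constant
      w = np.zeros(x.size)
      for _ in range(2000):
          w -= step * K.T @ (K @ w - y)                         # gradient step
          w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # prox of lam*||.||_1

      print(np.count_nonzero(np.abs(w) > 1e-8), "active kernel functions")

    As in the support vector analogy above, the penalty drives most coefficients to exactly zero, keeping the calibrated function simple.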

  2. Collocation and Pattern Recognition Effects on System Failure Remediation

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Press, Hayes N.

    2007-01-01

    Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning the system displays to find a problem and correct it using the provided checklist. Results indicate that large parameter movement aided detection, while pattern recognition was needed for diagnosis; the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization, given the pattern recognition that develops with training and use.

  3. B-spline parameterization of spatial response in a monolithic scintillation camera

    NASA Astrophysics Data System (ADS)

    Solovov, V.; Morozov, A.; Chepel, V.; Domingos, V.; Martins, R.

    2016-09-01

    A framework for parameterization of the light response functions (LRFs) in a scintillation camera is presented. It is based on approximation of the measured or simulated photosensor response with weighted sums of uniform cubic B-splines or their tensor products. The LRFs represented in this way are smooth, computationally inexpensive to evaluate, and require much less computer memory than non-parametric alternatives. The parameters are found in a straightforward way by the linear least squares method. Several techniques were developed to reduce the storage and processing power requirements. A software library for fitting simulated and measured light response with spline functions was developed and integrated into ANTS2, an open-source software package designed for simulation and data processing for Anger-camera-type detectors.
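
    A minimal 1D version of the fitting step reads as follows: represent the response as a weighted sum of cubic B-splines, build the design matrix, and solve by linear least squares. The synthetic data and SciPy's design-matrix helper (available since SciPy 1.8) are our choices for illustration, not necessarily what ANTS2 uses.

      # Least-squares B-spline fit of a 1D light-response function (sketch).
      import numpy as np
      from scipy.interpolate import BSpline

      deg = 3
      knots = np.concatenate((np.zeros(deg), np.linspace(0.0, 1.0, 11), np.ones(deg)))

      rng = np.random.default_rng(3)
      pos = np.sort(rng.uniform(0.0, 1.0, 500))            # event positions
      resp = 1.0 / (1.0 + 25.0 * (pos - 0.5) ** 2)         # "measured" response
      resp += 0.02 * rng.standard_normal(pos.size)

      A = BSpline.design_matrix(pos, knots, deg).toarray() # rows: events, cols: splines
      w, *_ = np.linalg.lstsq(A, resp, rcond=None)         # linear least squares
      lrf = BSpline(knots, w, deg)                         # smooth, cheap-to-evaluate LRF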

  4. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2015-03-01

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  5. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    SciTech Connect

    Gu, Renliang E-mail: ald@iastate.edu; Dogandžić, Aleksandar E-mail: ald@iastate.edu

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  6. A new wavelet-based thin plate element using B-spline wavelet on the interval

    NASA Astrophysics Data System (ADS)

    Jiawei, Xiang; Xuefeng, Chen; Zhengjia, He; Yinghong, Zhang

    2008-01-01

    By combining wavelet theory with the variational principle of the finite element method, a class of wavelet-based plate elements is constructed. In the construction of the wavelet-based plate element, the element displacement field, represented by the coefficients of wavelet expansions in wavelet space, is transformed into the physical degrees of freedom in finite element space via the corresponding two-dimensional C1-type transformation matrix. Then, based on the associated generalized potential energy functional of thin plate bending and vibration problems, the scaling functions of the B-spline wavelet on the interval (BSWI) at different scales are employed directly to form the multi-scale finite element approximation basis so as to construct the BSWI plate element via the variational principle. The BSWI plate element combines the accuracy of B-spline function approximation with the adaptability of wavelet-based elements for structural analysis. Some static and dynamic numerical examples are studied to demonstrate the performance of the present element.

  7. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.

    2013-08-01

    The uncertainty of important atmospheric parameters is a key factor in assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical points of the uncertainty budget is related to the collocation mismatch in space and time among different observations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or LIDAR. In this paper we consider a statistical modelling approach to understand to what extent collocation uncertainty is related to environmental factors, height, and distance between the trajectories. To do this we introduce a new statistical approach, based on the heteroskedastic functional regression (HFR) model, which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Moreover, using this modelling approach, a five-fold uncertainty decomposition is proposed. Finally, the HFR approach is illustrated by the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS reference upper-air network (GRUAN).

  8. A flexible B-spline model for multiple longitudinal biomarkers and survival.

    PubMed

    Brown, Elizabeth R; Ibrahim, Joseph G; DeGruttola, Victor

    2005-03-01

    Often when jointly modeling longitudinal and survival data, we are interested in a multivariate longitudinal measure that may not be fit well by linear models. To overcome this problem, we propose a joint longitudinal and survival model that has a nonparametric model for the longitudinal markers. We use cubic B-splines to specify the longitudinal model and a proportional hazards model to link the longitudinal measures to the hazard. To fit the model, we use a Markov chain Monte Carlo algorithm. We select the number of knots for the cubic B-spline model using the Conditional Predictive Ordinate (CPO) and the Deviance Information Criterion (DIC). The method and the model selection approach are validated in a simulation. We apply this method to examine the link between viral load, CD4 count, and time to event in data from an AIDS clinical trial. The cubic B-spline model provides a good fit to the longitudinal data that could not be obtained with simple parametric models. PMID:15737079

  9. Smoothing spline ANOVA frailty model for recurrent event data.

    PubMed

    Du, Pang; Jiang, Yihua; Wang, Yuedong

    2011-12-01

    Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and the parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data.

  10. User's guide for Wilson-Fowler spline software: SPLPKG, WFCMPR, WFAPPX - CADCAM-010

    SciTech Connect

    Fletcher, S.K.

    1985-02-01

    The Wilson-Fowler spline is widely used in computer aided manufacturing, but is not available in all commercial CAD/CAM systems. These three programs provide a capability for generating, comparing, and approximating Wilson-Fowler splines. SPLPKG generates a spline passing through given nodes, and computes a piecewise linear approximation to the spline. WFCMPR computes the difference between two splines with common nodes. WFAPPX computes the difference between a spline and a piecewise linear curve. The programs are in Fortran 77 and are machine independent.

  11. A mixed basis density functional approach for one-dimensional systems with B-splines

    NASA Astrophysics Data System (ADS)

    Ren, Chung-Yuan; Chang, Yia-Chung; Hsue, Chen-Shiung

    2016-05-01

    A mixed basis approach based on density functional theory is extended to one-dimensional (1D) systems. The basis functions here are taken to be the localized B-splines for the two finite non-periodic dimensions and the plane waves for the third periodic direction. This approach significantly reduces the number of basis functions and is therefore computationally efficient for the diagonalization of the Kohn-Sham Hamiltonian. For 1D systems, B-spline polynomials are particularly useful and efficient for the two-dimensional spatial integrations involved in the calculations because of their absolute localization. Moreover, B-splines are not associated with atomic positions when the geometry is optimized, making geometry optimization easy to implement. With such a basis set we can directly calculate the total energy of the isolated system instead of using the conventional supercell model with artificial vacuum regions among the replicas along the two non-periodic directions. The spurious Coulomb interaction between a charged defect and its repeated images in the supercell approach for charged systems can also be avoided. A rigorous formalism for the long-range Coulomb potential of both neutral and charged 1D systems under the mixed basis scheme is derived. To test the present method, we apply it to study the infinite carbon-dimer chain, a graphene nanoribbon, a carbon nanotube, and a positively charged carbon-dimer chain. The resulting electronic structures are presented and discussed in detail.

  12. Algebraic grid generation using tensor product B-splines. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Saunders, B. V.

    1985-01-01

    Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This work develops an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation-diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation-diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid (see the sketch below). The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
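
    The transfinite bilinear interpolant referred to above is the classical Coons patch. A short sketch of how it fills a curved quadrilateral from its four boundary curves (the boundary curves here are illustrative, not from the thesis):

      # Transfinite bilinear (Coons) interpolation of four boundary curves.
      import numpy as np

      def coons(xi, eta, bottom, top, left, right):
          # bottom/top parameterize eta = 0, 1; left/right parameterize xi = 0, 1.
          return ((1 - eta) * bottom(xi) + eta * top(xi)
                  + (1 - xi) * left(eta) + xi * right(eta)
                  - (1 - xi) * (1 - eta) * bottom(0.0) - xi * (1 - eta) * bottom(1.0)
                  - (1 - xi) * eta * top(0.0) - xi * eta * top(1.0))

      bottom = lambda s: np.array([s, 0.1 * np.sin(np.pi * s)])
      top    = lambda s: np.array([s, 1.0 + 0.1 * np.sin(np.pi * s)])
      left   = lambda s: np.array([0.0, s])
      right  = lambda s: np.array([1.0, s])

      grid = np.array([[coons(xi, eta, bottom, top, left, right)
                        for xi in np.linspace(0.0, 1.0, 21)]
                       for eta in np.linspace(0.0, 1.0, 21)])   # 21 x 21 x 2 mesh

    In the algorithm above this interpolant is not the final grid; its variation-diminishing B-spline approximation only supplies the initial interior coefficients, which are then optimized.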

  13. Collocational Strategies of Arab Learners of English: A Study in Lexical Semantics.

    ERIC Educational Resources Information Center

    Muhammad, Raji Zughoul; Abdul-Fattah, Hussein S.

    Arab learners of English encounter a serious problem with collocational sequences. The present study purports to determine the extent to which university English language majors can use English collocations properly. A two-form translation test of 16 Arabic collocations was administered to both graduate and undergraduate students of English. The…

  14. Beyond Single Words: The Most Frequent Collocations in Spoken English

    ERIC Educational Resources Information Center

    Shin, Dongkwang; Nation, Paul

    2008-01-01

    This study presents a list of the highest frequency collocations of spoken English based on carefully applied criteria. In the literature, more than forty terms have been used for designating multi-word units, which are generally not well defined. To avoid this confusion, six criteria are strictly applied. The ten million word BNC spoken section…

  15. Redefining Creativity--Analyzing Definitions, Collocations, and Consequences

    ERIC Educational Resources Information Center

    Kampylis, Panagiotis G.; Valtanen, Juri

    2010-01-01

    How holistically is human creativity defined, investigated, and understood? Until recently, most scientific research on creativity has focused on its positive side. However, creativity might not only be a desirable resource but also be a potential threat. In order to redefine creativity we need to analyze and understand definitions, collocations,…

  16. Beyond triple collocation: Applications to satellite soil moisture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...

  17. Simple spline-function equations for fracture mechanics calculations

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1979-01-01

    The paper presents simple spline-function equations for fracture mechanics calculations. A spline function is a sequence of piecewise polynomials of degree n greater than 1 whose coefficients are such that the function and its first n-1 derivatives are continuous. Second-degree spline equations are presented for the compact, three-point bend, and crack-line wedge-loaded specimens. Some expressions can be used directly, so that for a cyclic crack propagation test using a compact specimen, the equation given allows the crack length to be calculated from the slope of the load-displacement curve. For an R-curve test, equations allow the crack length and stress intensity factor to be calculated from the displacement and the displacement ratio.
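
    In the same spirit, though with made-up calibration numbers rather than the paper's actual equations, a second-degree spline through tabulated compliance versus crack-length data can be inverted to recover crack length from a measured load-displacement slope:

      # Second-degree spline calibration curve, inverted for crack length (sketch).
      import numpy as np
      from scipy.interpolate import make_interp_spline
      from scipy.optimize import brentq

      a_over_W = np.linspace(0.3, 0.7, 9)                  # crack length / specimen width
      compliance = 10.0 * np.exp(4.0 * (a_over_W - 0.3))   # placeholder calibration data

      cal = make_interp_spline(a_over_W, compliance, k=2)  # second-degree spline

      def crack_length(measured_compliance, width=50.0):
          # Invert the monotone calibration curve by root finding.
          ratio = brentq(lambda a: cal(a) - measured_compliance, 0.3, 0.7)
          return ratio * width                             # same units as width

      print(crack_length(30.0))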

  18. An L1 smoothing spline algorithm with cross validation

    NASA Astrophysics Data System (ADS)

    Bosworth, Ken W.; Lall, Upmanu

    1993-08-01

    We propose an algorithm for the computation of $L_1$ (LAD) smoothing splines in the spaces $W_M(D)$. We assume one is given data of the form $y_i = f(t_i) + \epsilon_i$, $i = 1, \ldots, N$, with $\{t_i\}_{i=1}^{N} \subset D$, where the $\epsilon_i$ are errors with $E(\epsilon_i) = 0$ and $f$ is assumed to be in $W_M$. The LAD smoothing spline, for fixed smoothing parameter $\lambda \ge 0$, is defined as the solution, $s_\lambda$, of the optimization problem $\min_g \, (1/N) \sum_{i=1}^{N} |y_i - g(t_i)| + \lambda J_M(g)$, where $J_M(g)$ is the seminorm consisting of the sum of the squared $L_2$ norms of the $M$th partial derivatives of $g$. Such an LAD smoothing spline, $s_\lambda$, would be expected to give robust smoothed estimates of $f$ in situations where the $\epsilon_i$ are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing $s_\lambda$ is given which is based on considering a sequence of quadratic programming (QP) problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily, if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a cross-validation score $CV(\lambda)$. The combined LAD-CV smoothing spline algorithm is a continuation scheme in $\lambda \searrow 0$ taken on the above QPs parametrized in $\lambda$, with the optimal smoothing parameter taken to be that value of $\lambda$ at which the $CV(\lambda)$ score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.

  19. How to fly an aircraft with control theory and splines

    NASA Technical Reports Server (NTRS)

    Karlsson, Anders

    1994-01-01

    When trying to fly an aircraft as smoothly as possible, it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory, in a system that tries to model an aircraft. Computer calculations in MATLAB show that it is impossible to obtain sufficiently smooth control signals this way. This is due to the fact that the splines try to approximate not only the test function but also its derivatives. Perfect tracking is achieved, but at the cost of very peaky control signals and accelerations.

  20. The radial basis function finite collocation approach for capturing sharp fronts in time dependent advection problems

    NASA Astrophysics Data System (ADS)

    Stevens, D.; Power, H.

    2015-10-01

    We propose a node-based local meshless method for advective transport problems that is capable of operating on centrally defined stencils and is suitable for shock-capturing purposes. High spatial convergence rates can be achieved; in excess of eighth-order in some cases. Strongly-varying smooth profiles may be captured at infinite Péclet number without instability, and for discontinuous profiles the solution exhibits neutrally stable oscillations that can be damped by introducing a small artificial diffusion parameter, allowing a good approximation to the shock-front to be maintained for long travel times without introducing spurious oscillations. The proposed method is based on local collocation with radial basis functions (RBFs) in a "finite collocation" configuration. In this approach the PDE governing and boundary equations are enforced directly within the local RBF collocation systems, rather than being reconstructed from fixed interpolating functions as is typical of finite difference, finite volume or finite element methods. In this way the interpolating basis functions naturally incorporate information from the governing PDE, including the strength and direction of the convective velocity field. By using these PDE-enhanced interpolating functions an "implicit upwinding" effect is achieved, whereby the flow of information naturally respects the specifics of the local convective field. This implicit upwinding effect allows high-convergence solutions to be obtained on centred stencils for advection problems. The method is formulated using a high-convergence implicit timestepping algorithm based on Richardson extrapolation. The spatial and temporal convergence of the proposed approach is demonstrated using smooth functions with large gradients. The capture of discontinuities is then investigated, showing how the addition of a dynamic stabilisation parameter can damp the neutrally stable oscillations with limited smearing of the shock front.
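
    A heavily simplified, global-collocation relative of this method (the paper's scheme is local and embeds the PDE in the interpolation itself) still shows the essential mechanics: enforce the governing equation at interior nodes and the boundary conditions at end nodes within one linear system for the RBF coefficients. The test problem, multiquadric basis, and parameters below are illustrative assumptions.

      # Global multiquadric RBF collocation for nu*u'' - c*u' = 0 on [0, 1],
      # with u(0) = 0, u(1) = 1 (steady advection-diffusion sketch).
      import numpy as np

      nu, c, shape = 0.05, 1.0, 0.2
      xj = np.linspace(0.0, 1.0, 41)            # centres = collocation nodes
      r = xj[:, None] - xj[None, :]
      phi = np.sqrt(r**2 + shape**2)            # multiquadric and its x-derivatives
      dphi = r / phi
      d2phi = shape**2 / phi**3

      A = nu * d2phi - c * dphi                 # PDE enforced at every node...
      A[0], A[-1] = phi[0], phi[-1]             # ...except boundaries: u(0), u(1)
      b = np.zeros(xj.size)
      b[-1] = 1.0
      lam = np.linalg.solve(A, b)               # RBF coefficients

      u = phi @ lam                             # solution values at the nodes
      exact = (np.exp(c * xj / nu) - 1.0) / (np.exp(c / nu) - 1.0)
      print(np.max(np.abs(u - exact)))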

  1. L1 Influence on the Acquisition of L2 Collocations: Japanese ESL Users and EFL Learners Acquiring English Collocations

    ERIC Educational Resources Information Center

    Yamashita, Junko; Jiang, Nan

    2010-01-01

    This study investigated first language (L1) influence on the acquisition of second language (L2) collocations using a framework based on Kroll and Stewart (1994) and Jiang (2000), by comparing the performance on a phrase-acceptability judgment task among native speakers of English, Japanese English as a second language (ESL) users, and Japanese…

  2. Temi firthiani di linguistica applicata: "Restricted Languages" e "Collocation" (Firthian Themes in Applied Linguistics: "Restricted Languages" and "Collocation")

    ERIC Educational Resources Information Center

    Leonardi, Magda

    1977-01-01

    Discusses the importance of two Firthian themes for language teaching. The first theme, "Restricted Languages," concerns the "microlanguages" of every language (e.g., literary language, scientific, etc.). The second theme, "Collocation," shows that equivalent words in two languages rarely have the same position in both languages. (Text is in…

  3. Numerical solution of the controlled Duffing oscillator by semi-orthogonal spline wavelets

    NASA Astrophysics Data System (ADS)

    Lakestani, M.; Razzaghi, M.; Dehghan, M.

    2006-09-01

    This paper presents a numerical method for solving the controlled Duffing oscillator. The method can be extended to nonlinear calculus of variations and optimal control problems. The method is based upon compactly supported linear semi-orthogonal B-spline wavelets. The differential and integral expressions which arise in the system dynamics, the performance index and the boundary conditions are converted into some algebraic equations which can be solved for the unknown coefficients. Illustrative examples are included to demonstrate the validity and applicability of the technique.

  4. Higher-order numerical methods derived from three-point polynomial interpolation

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and Hermitian finite-difference discretization. The equations generally apply for both uniform and variable meshes. Hybrid schemes resulting from different polynomial approximations for first and second derivatives lead to the nonuniform mesh extension of the so-called compact or Pade difference techniques. A variety of fourth-order methods are described and this concept is extended to sixth-order. Solutions with these procedures are presented for the similar and non-similar boundary layer equations with and without mass transfer, the Burgers equation, and the incompressible viscous flow in a driven cavity. Finally, the interpolation procedure is used to derive higher-order temporal integration schemes and results are shown for the diffusion equation.
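
    As an example of the tridiagonal systems these interpolations produce, here is the classical fourth-order compact (Pade-type) evaluation of a second derivative on a uniform mesh; the explicit one-sided boundary closure is a simplification chosen here, not taken from the paper.

      # Fourth-order compact scheme: (1/12)(g[i-1] + 10 g[i] + g[i+1])
      #                              = (f[i-1] - 2 f[i] + f[i+1]) / h^2.
      import numpy as np
      from scipy.linalg import solve_banded

      def compact_d2(f, h):
          n = f.size
          rhs = np.empty(n)
          rhs[1:-1] = (f[:-2] - 2.0 * f[1:-1] + f[2:]) / h**2
          ab = np.zeros((3, n))                 # banded storage: super, diag, sub
          ab[0, 2:] = 1.0 / 12.0
          ab[1, 1:-1] = 10.0 / 12.0
          ab[2, :-2] = 1.0 / 12.0
          ab[1, 0] = ab[1, -1] = 1.0            # boundary rows: explicit one-sided values
          rhs[0] = (2*f[0] - 5*f[1] + 4*f[2] - f[3]) / h**2
          rhs[-1] = (2*f[-1] - 5*f[-2] + 4*f[-3] - f[-4]) / h**2
          return solve_banded((1, 1), ab, rhs)

      x = np.linspace(0.0, 1.0, 101)
      g = compact_d2(np.sin(2 * np.pi * x), x[1] - x[0])
      print(np.max(np.abs(g + (2 * np.pi)**2 * np.sin(2 * np.pi * x))[1:-1]))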

  5. Cubic spline approximation techniques for parameter estimation in distributed systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.; Kunisch, K.

    1983-01-01

    Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.

  6. Radial Splines Would Prevent Rotation Of Bearing Race

    NASA Technical Reports Server (NTRS)

    Kaplan, Ronald M.; Chokshi, Jaisukhlal V.

    1993-01-01

    Interlocking fine-pitch ribs and grooves formed on otherwise flat mating end faces of housing and outer race of rolling-element bearing to be mounted in housing, according to proposal. Splines bear large torque loads and impose minimal distortion on raceway.

  7. Stable discontinuous grid implementation for collocated-grid finite-difference seismic wave modelling

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenguo; Zhang, Wei; Li, Hong; Chen, Xiaofei

    2013-03-01

    Simulating seismic waves with a uniform grid in heterogeneous media with high velocity contrasts requires a small grid spacing determined by the global minimum velocity, which leads to a huge number of grid points and a small time step. To reduce the computational cost, discontinuous grids that use a finer grid in the shallow low-velocity region and a coarser grid in high-velocity regions are needed. In this paper, we present a discontinuous grid implementation for collocated-grid finite-difference (FD) methods to increase the efficiency of seismic wave modelling. The grid spacing ratio n can be an arbitrary integer n ≥ 2. To downsample the wavefield from the finer grid to the coarser grid, our implementation can simply take the values on the finer grid without employing a downsampling filter for grid spacing ratio n = 2 and achieve stable results for long-time simulation. For grid spacing ratio n ≥ 3, a Gaussian filter should be used as the downsampling filter to obtain a stable simulation. To interpolate the wavefield from the coarse grid to the finer grid, trilinear interpolation is used. Combining the efficiency of the discontinuous grid with the flexibility of the collocated-grid FD method on curvilinear grids, our method can simulate large-scale high-frequency strong ground motion of real earthquakes with consideration of surface topography.

  8. Evaluation of the spline reconstruction technique for PET

    SciTech Connect

    Kastis, George A. Kyriakopoulou, Dimitra; Gaitanis, Anastasios; Fernández, Yolanda; Hutton, Brian F.; Fokas, Athanasios S.

    2014-04-15

    Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of "custom made" cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an 18F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an 18F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real…

  9. A Corpus-Based Study of the Linguistic Features and Processes Which Influence the Way Collocations Are Formed: Some Implications for the Learning of Collocations

    ERIC Educational Resources Information Center

    Walker, Crayton Phillip

    2011-01-01

    In this article I examine the collocational behaviour of groups of semantically related verbs (e.g., "head, run, manage") and nouns (e.g., "issue, factor, aspect") from the domain of business English. The results of this corpus-based study show that much of the collocational behaviour exhibited by these lexical items can be explained by examining…

  10. B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms

    SciTech Connect

    Bueno, G.; Ruiz, M.; Sanchez, S

    2006-10-04

    Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of a B-spline approach based on a gradient scheme, compared with wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of a CAD system. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.

  11. A counterexample concerning the L_2 -projector onto linear spline spaces

    NASA Astrophysics Data System (ADS)

    Oswald, Peter

    2008-03-01

    For the $L_2$-orthogonal projection $P_V$ onto spaces of linear splines over simplicial partitions in polyhedral domains in $\mathbb{R}^d$, $d>1$, we show that, in contrast to the one-dimensional case, where $\|P_V\|_{L_\infty \to L_\infty} \le 3$ independently of the nature of the partition, in higher dimensions the $L_\infty$-norm of $P_V$ cannot be bounded uniformly with respect to the partition. This fact is folklore among specialists in finite element methods and approximation theory but seemingly has never been formally proved.

  12. B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms

    NASA Astrophysics Data System (ADS)

    Bueno, G.; Sánchez, S.; Ruiz, M.

    2006-10-01

    Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of a B-spline approach based on a gradient scheme, compared with wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of a CAD system. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.

  13. BSR: B-spline atomic R-matrix codes

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg

    2006-02-01

    BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summary. Title of program: BSR. Catalogue identifier: ADWY. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWY. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC. Operating systems under which the new version has been tested: UNIX, Windows XP. Programming language used: FORTRAN 95. Memory required to execute with typical data: Typically 256-512 Mwords. Since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem. No. of bits in a word: 8. No. of processors used: 1. Has the code been vectorized or parallelized?: no. No. of lines in distributed program, including test data, etc.: 69 943. No. of bytes in distributed program, including test data, etc.: 746 450. Peripherals used: scratch disk store; permanent disk store. Distribution format: tar.gz. Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput

  14. Composite multi-modal vibration control for a stiffened plate using non-collocated acceleration sensor and piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Li, Shengquan; Li, Juan; Mo, Yueping; Zhao, Rong

    2014-01-01

    A novel active method for multi-mode vibration control of an all-clamped stiffened plate (ACSP) is proposed in this paper, using the extended-state-observer (ESO) approach based on non-collocated acceleration sensors and piezoelectric actuators. Considering the capacity of the ESO to estimate system state variables, the output superposition and control coupling of other modes, external excitation, and model uncertainties simultaneously, a composite control method, i.e., an ESO-based vibration control scheme, is employed to ensure rejection of the lumped disturbances and uncertainties in the closed-loop system. The phenomena of phase hysteresis and time delay caused by non-collocated sensor/actuator pairs degrade the performance of the control system, even inducing instability. To solve this problem, a simple proportional-differential (PD) controller and acceleration feed-forward with an output predictor design produce the control law for each vibration mode. The modal frequencies, phase hysteresis loops and phase lag values due to non-collocated placement of the acceleration sensor and piezoelectric patch actuator are experimentally obtained, and the phase lag is compensated by using the Smith predictor technique. In order to improve the vibration control performance, the chaos optimization method based on logistic mapping is employed to auto-tune the parameters of the feedback channel. The experimental control system for the ACSP is tested using the dSPACE real-time simulation platform. Experimental results demonstrate that the proposed composite active control algorithm is an effective approach for suppressing multi-modal vibrations.

  15. A spline approach to trial wave functions for variational and diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bressanini, Dario; Fabbri, Giordano; Mella, Massimo; Morosi, Gabriele

    1999-10-01

    We describe how to combine the variational Monte Carlo method with a spline description of the wave function to obtain a powerful and flexible method for optimizing electronic and nuclear wave functions. A property of this method is that the optimization is performed "locally": during the optimization, attention is focused on one region of the wave function at a time, with little or no perturbation of faraway regions. This allows fine tuning of the wave function even in cases where there is no experience in how to choose a good functional form and a good basis set. After the optimization, the splines were fitted using more familiar analytical global functions. The flexibility of the method is shown by calculating the electronic wave function for some two- and three-electron systems, and the nuclear wave function for the helium trimer. For 4He3, using a two-body helium-helium potential, we obtained the best variational function to date, which allows us to estimate the exact energy with a very small variance by a diffusion Monte Carlo simulation.
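
    A toy one-dimensional analogue of the spline-parameterized trial function, assuming a harmonic potential V(x) = x²/2 and psi = exp(u(x)) with u a natural cubic spline through adjustable node values. The node values, sampler settings, and box cutoff are hypothetical choices for illustration, not the authors' setup.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Trial function psi(x) = exp(u(x)), u a natural cubic spline through
        # node values (here initialized at the exact ground state of V = x^2/2).
        nodes = np.linspace(-4.0, 4.0, 17)
        u = CubicSpline(nodes, -0.5 * nodes**2, bc_type="natural")

        def local_energy(x):
            # E_L = -(u'' + u'^2)/2 + V(x) for psi = exp(u)
            return -0.5 * (u(x, 2) + u(x, 1) ** 2) + 0.5 * x**2

        rng = np.random.default_rng(0)
        x, samples = 0.0, []
        for step in range(20000):
            trial = x + rng.normal(scale=0.8)
            # Metropolis acceptance on |psi|^2, restricted to the spline's domain
            if abs(trial) <= 4.0 and rng.random() < np.exp(2.0 * (u(trial) - u(x))):
                x = trial
            if step > 2000:          # discard equilibration steps
                samples.append(local_energy(x))

        print("variational energy ~", np.mean(samples),
              "+/-", np.std(samples) / np.sqrt(len(samples)))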

  16. Low speed wind tunnel investigation of span load alteration, forward-located spoilers, and splines as trailing-vortex-hazard alleviation devices on a transport aircraft model

    NASA Technical Reports Server (NTRS)

    Croom, D. R.; Dunham, R. E., Jr.

    1975-01-01

    The effectiveness of a forward-located spoiler, a spline, and span load alteration due to a flap configuration change as trailing-vortex-hazard alleviation methods was investigated. For the transport aircraft model in the normal approach configuration, the results indicate that either a forward-located spoiler or a spline is effective in reducing the trailing-vortex hazard. The results also indicate that large changes in span loading, due to retraction of the outboard flap, may be an effective method of reducing the trailing-vortex hazard.

  17. Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.

    PubMed

    Kainu, Annette; Timonen, Kirsi

    2016-07-01

    Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have therefore been introduced recently. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, in the derived predicted mean, and in individually calculated z-scores using both values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but the maximum difference ranged only from -0.018 to +0.022 litres of FEV1, representing ±0.4% of the predicted mean. Conclusions: The polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.

  18. Fast Simulation of X-ray Projections of Spline-based Surfaces using an Append Buffer

    PubMed Central

    Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca

    2012-01-01

    Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps from tessellation of the splines, projection onto the detector, and drawing are implemented in OpenCL. We introduced a special append buffer for increased performance in order to store the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate. Projections of size 640×480 can be generated within 254 ms. Reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task. Even in the absence of noise, they result in errors up to 9 HU on average, although projection images appear to be correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically. Source code is available at http://conrad.stanford.edu/ PMID:22975431

  19. Control theory and splines, applied to signature storage

    NASA Technical Reports Server (NTRS)

    Enqvist, Per

    1994-01-01

    In this report we study the interpolation of a set of points in the plane using control theory. We discover how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problem. In fact, we will see that the important parameters are the two eigenvalues of the control matrix.

  20. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    NASA Astrophysics Data System (ADS)

    Man, Jun; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a sufficiently large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the polynomial chaos expansion (PCE) to represent and propagate the uncertainties in parameters and states. However, PCKF suffers from the so-called "curse of dimensionality". Its computational cost increases drastically with the increasing number of parameters and system nonlinearity. Furthermore, PCKF may fail to provide accurate estimations due to the joint updating scheme for strongly nonlinear models. Motivated by recent developments in uncertainty quantification and EnKF, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected at each assimilation step; the "restart" scheme is utilized to eliminate the inconsistency between updated model parameters and state variables. The performance of RAPCKF is systematically tested with numerical cases of unsaturated flow models. It is shown that the adaptive approach and restart scheme can significantly improve the performance of PCKF. Moreover, RAPCKF has been demonstrated to be more efficient than EnKF with the same computational cost.
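
    RAPCKF itself involves adaptive PCE basis selection and is not sketched here; the following is a minimal stochastic EnKF analysis step, the baseline method the paper compares against, with a made-up linear observation operator.

        import numpy as np

        def enkf_update(ensemble, obs, obs_err_var, H):
            """One stochastic EnKF analysis step (perturbed observations).

            ensemble : (n_state, n_members) forecast ensemble
            obs      : (n_obs,) observation vector
            H        : (n_obs, n_state) linear observation operator
            """
            n_obs, n_mem = len(obs), ensemble.shape[1]
            rng = np.random.default_rng(42)
            A = ensemble - ensemble.mean(axis=1, keepdims=True)   # anomalies
            HA = H @ A
            P_hh = HA @ HA.T / (n_mem - 1) + obs_err_var * np.eye(n_obs)
            P_xh = A @ HA.T / (n_mem - 1)
            K = P_xh @ np.linalg.inv(P_hh)                        # Kalman gain
            # Perturbed observations, one realization per ensemble member
            D = obs[:, None] + rng.normal(0, np.sqrt(obs_err_var), (n_obs, n_mem))
            return ensemble + K @ (D - H @ ensemble)

        # Example: 3 unknown parameters, 1 observation, 100 members
        ens = np.random.default_rng(0).normal(size=(3, 100))
        H = np.array([[1.0, 0.5, 0.0]])
        print(enkf_update(ens, np.array([0.7]), 0.01, H).mean(axis=1))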

  1. Defining window-boundaries for genomic analyses using smoothing spline techniques

    SciTech Connect

    Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia

    2015-04-17

    High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.
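
    A minimal sketch of the core idea, assuming synthetic per-marker statistics: fit a cubic smoothing spline and place window boundaries at the inflection points of the fit, found as sign changes of its second derivative.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        # Synthetic per-marker statistics along a chromosome
        rng = np.random.default_rng(1)
        pos = np.arange(500.0)                       # marker positions
        signal = np.sin(pos / 40.0) + rng.normal(0, 0.3, pos.size)

        # Cubic smoothing spline; s controls the smoothing level
        spline = UnivariateSpline(pos, signal, k=3, s=pos.size * 0.3**2)

        # Inflection points = sign changes of the second derivative
        d2 = spline.derivative(2)(pos)
        boundaries = pos[1:][np.sign(d2[1:]) != np.sign(d2[:-1])]
        print("window boundaries:", boundaries)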

  2. Defining window-boundaries for genomic analyses using smoothing spline techniques

    DOE PAGESBeta

    Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia

    2015-04-17

    High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.

  3. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match

  4. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  5. Explicit B-spline regularization in diffeomorphic image registration.

    PubMed

    Tustison, Nicholas J; Avants, Brian B

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline "flavored" diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools.

  6. Spline Driven: High Accuracy Projectors for Tomographic Reconstruction From Few Projections.

    PubMed

    Momey, Fabien; Denis, Loïc; Burnier, Catherine; Thiébaut, Éric; Becker, Jean-Marie; Desbat, Laurent

    2015-12-01

    Tomographic iterative reconstruction methods need a very thorough modeling of data. This point becomes critical when the number of available projections is limited. At the core of this issue is the projector design, i.e., the numerical model relating the representation of the object of interest to the projections on the detector. Voxel-driven and ray-driven projection models are widely used for their short execution time in spite of their coarse approximations. The distance-driven model has improved accuracy but makes strong approximations to project voxel basis functions. Cubic voxel basis functions are anisotropic; accurately modeling their projection is, therefore, computationally expensive. Both smoother and more isotropic basis functions better represent the continuous functions and provide simpler projectors. These considerations have led to the development of spherically symmetric volume elements, called blobs. Isotropy aside, blobs are often considered too computationally expensive in practice. In this paper, we consider using separable B-splines as basis functions to represent the object, and we propose to approximate the projection of these basis functions by a 2D separable model. When the degree of the B-splines increases, their isotropy improves and projections can be computed regardless of their orientation. The degree and the sampling of the B-splines can be chosen according to a tradeoff between approximation quality and computational complexity. We quantitatively measure the good accuracy of our model and compare it with other projectors, such as the distance-driven model and the model proposed by Long et al. From the numerical experiments, we demonstrate that our projector, with its improved accuracy, better preserves the quality of the reconstruction as the number of projections decreases. Our projector with cubic B-splines requires about twice as many operations as a model based on voxel basis functions. Higher accuracy projectors can be used to

  7. Minimal multi-element stochastic collocation for uncertainty quantification of discontinuous functions

    SciTech Connect

    Jakeman, John D.; Narayan, Akil; Xiu, Dongbin

    2013-06-01

    We propose a multi-element stochastic collocation method that can be applied in high-dimensional parameter space for functions with discontinuities lying along manifolds of general geometries. The key feature of the method is that the parameter space is decomposed into multiple elements defined by the discontinuities, and thus only the minimal number of elements is utilized. On each of the resulting elements the function is smooth and can be approximated using high-order methods with fast convergence properties. The decomposition strategy is in direct contrast to the traditional multi-element approaches, which define the sub-domains by repeated splitting of the axes in the parameter space. Such methods are more prone to the curse of dimensionality because of the fast growth of the number of elements caused by the axis-based splitting. The present method is a two-step approach. First, a discontinuity detector is used to partition parameter space into disjoint elements in each of which the function is smooth. The detector uses an efficient combination of the high-order polynomial annihilation technique along with adaptive sparse grids, and this allows resolution of general discontinuities with a smaller number of points when the discontinuity manifold is low-dimensional. After partitioning, an adaptive technique based on the least orthogonal interpolant is used to construct a generalized Polynomial Chaos surrogate on each element. The adaptive technique reuses all information from the partitioning and is variance-suppressing. We present numerous numerical examples that illustrate the accuracy, efficiency, and generality of the method. When compared against standard locally adaptive sparse grid methods, the present method uses far fewer collocation samples and is more accurate.

  8. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  9. Flight tests of vortex-attenuating splines

    NASA Technical Reports Server (NTRS)

    Patterson, J. C., Jr.

    1974-01-01

    Visual data on the formation and motion of the lift-induced wingtip vortex were obtained by a stationary airflow-visualization method. The visual data indicated that the vortex cannot be eliminated by merely reshaping the wingtip, and that such a configuration change will likely have only a small effect on the far-field flow.

  10. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    SciTech Connect

    Brown, C; Adcock, A; Azevedo, S; Liebman, J; Bond, E

    2010-12-28

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
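
    The Hutchinson-deHoog algorithm adapted by the authors is not reproduced here. The sketch below illustrates the GCV idea on a single channel: a penalized cubic B-spline regression whose smoothing level is chosen by minimizing the GCV score (assuming SciPy 1.8+ for BSpline.design_matrix), with synthetic data standing in for NIF signals.

        import numpy as np
        from scipy.interpolate import BSpline

        rng = np.random.default_rng(2)
        x = np.sort(rng.uniform(0, 10, 200))
        y = np.sin(x) + rng.normal(0, 0.2, x.size)

        # Cubic B-spline basis with 20 interior knots
        k = 3
        t = np.r_[[x[0]] * (k + 1),
                  np.linspace(x[0], x[-1], 22)[1:-1],
                  [x[-1]] * (k + 1)]
        B = BSpline.design_matrix(x, t, k).toarray()
        D = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # second-difference penalty

        def gcv(lam):
            # Hat matrix H(lam) = B (B'B + lam D'D)^-1 B'
            H = B @ np.linalg.solve(B.T @ B + lam * D.T @ D, B.T)
            resid = y - H @ y
            n = x.size
            return n * (resid @ resid) / (n - np.trace(H)) ** 2

        # Pick the smoothing level that minimizes the GCV score
        lams = 10.0 ** np.linspace(-4, 4, 41)
        print("GCV-optimal lambda:", lams[np.argmin([gcv(l) for l in lams])])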

  11. Railroad inspection based on ACFM employing a non-uniform B-spline approach

    NASA Astrophysics Data System (ADS)

    Chacón Muñoz, J. M.; García Márquez, F. P.; Papaelias, M.

    2013-11-01

    The stresses sustained by rails have increased in recent years due to the use of higher train speeds and heavier axle loads. For this reason surface and near-surface defects generated by Rolling Contact Fatigue (RCF) have become particularly significant, as they can cause unexpected structural failure of the rail, resulting in severe derailments. The accident that took place in Hatfield, UK (2000), is an example of a derailment caused by the structural failure of a rail section due to RCF. Early detection of RCF rail defects is therefore of paramount importance to the rail industry. The performance of existing ultrasonic and magnetic flux leakage techniques in detecting rail surface-breaking defects, such as head checks and gauge corner cracking, is inadequate during high-speed inspection, while eddy current sensors suffer from lift-off effects. The results obtained through rail inspection experiments under simulated conditions using Alternating Current Field Measurement (ACFM) probes suggest that this technique can be applied for the accurate and reliable detection of surface-breaking defects at high inspection speeds. This paper presents the B-spline approach used for accurately filtering the noise of the raw ACFM signal obtained during high-speed tests, improving the reliability of the measurements. A non-uniform B-spline approximation is employed to calculate the exact positions and dimensions of the defects. This method generates a smooth approximation of the ACFM data points associated with the rail surface-breaking defect.
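
    A minimal sketch of non-uniform B-spline approximation of a noisy one-dimensional signal, with interior knots concentrated near a simulated defect; the signal model and knot layout are illustrative assumptions, not the authors' ACFM processing chain.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 1.0, 600)
        defect = np.exp(-((x - 0.5) / 0.02) ** 2)   # simulated ACFM indication
        y = defect + rng.normal(0, 0.05, x.size)

        # Non-uniform interior knots: dense around the defect, sparse elsewhere
        knots = np.r_[np.linspace(0.10, 0.44, 4),
                      np.linspace(0.46, 0.54, 9),
                      np.linspace(0.56, 0.90, 4)]
        fit = LSQUnivariateSpline(x, y, knots, k=3)

        # Defect position from the peak of the smooth approximation
        print("estimated defect position:", x[np.argmax(fit(x))])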

  12. Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression

    NASA Astrophysics Data System (ADS)

    Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.

    2015-12-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of a changing climate, it is possible that the assumption of stationarity will no longer be valid and the results of conventional analysis will become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-splines quantile regression, which allows modelling of non-stationary data that depend on covariates. The dependence on such covariates may be linear or nonlinear. A Markov Chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimation of the proposed model at each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities. Keywords: Quantile regression, B-spline functions, MCMC, Streamflow, Climate indices, Non-stationarity.
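
    The paper's Bayesian B-splines quantile regression with MCMC is not reproduced here. As a hedged frequentist stand-in, the sketch below fits a 0.95 conditional quantile through a cubic B-spline basis in a climate covariate, using statsmodels' QuantReg and patsy on synthetic data.

        import numpy as np
        import statsmodels.api as sm
        from patsy import dmatrix

        # Synthetic annual maxima depending nonlinearly on a climate index
        rng = np.random.default_rng(4)
        covariate = rng.uniform(-2, 2, 300)
        flow = 100 + 30 * np.tanh(covariate) + rng.gumbel(0, 10, 300)

        # Cubic B-spline basis in the covariate (patsy adds the intercept)
        X = dmatrix("bs(c, df=6, degree=3)", {"c": covariate},
                    return_type="dataframe")

        # Non-stationary 0.95 quantile as a function of the covariate
        q95 = sm.QuantReg(flow, X).fit(q=0.95)
        print(q95.predict(X)[:5])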

  13. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    SciTech Connect

    Alwan, Aravind; Aluru, N.R.

    2013-12-15

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.

  14. Corpora and Collocations in Chinese-English Dictionaries for Chinese Users

    ERIC Educational Resources Information Center

    Xia, Lixin

    2015-01-01

    The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…

  15. Cross-Linguistic Influence: Its Impact on L2 English Collocation Production

    ERIC Educational Resources Information Center

    Phoocharoensil, Supakorn

    2013-01-01

    This research study investigated the influence of learners' mother tongue on their acquisition of English collocations. Having drawn the linguistic data from two groups of Thai EFL learners differing in English proficiency level, the researcher found that the native language (L1) plays a significant role in the participants' collocation learning…

  16. Towards a Learner Need-Oriented Second Language Collocation Writing Assistant

    ERIC Educational Resources Information Center

    Ramos, Margarita Alonso; Carlini, Roberto; Codina-Filbà, Joan; Orol, Ana; Vincze, Orsolya; Wanner, Leo

    2015-01-01

    The importance of collocations, i.e. idiosyncratic binary word co-occurrences in the context of second language learning has been repeatedly emphasized by scholars working in the field. Some went even so far as to argue that "vocabulary learning is collocation learning" (Hausmann, 1984, p. 395). Empirical studies confirm this…

  17. Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations

    ERIC Educational Resources Information Center

    Liu, Dilin

    2010-01-01

    Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,…

  18. Study on the Causes and Countermeasures of the Lexical Collocation Mistakes in College English

    ERIC Educational Resources Information Center

    Yan, Hansheng

    2010-01-01

    The lexical collocation in English is an important content in the linguistics theory, and also a research topic which is more and more emphasized in English teaching practice of China. The collocation ability of English decides whether learners could masterly use real English in effective communication. In many years' English teaching practice,…

  19. Symmetrical and Asymmetrical Scaffolding of L2 Collocations in the Context of Concordancing

    ERIC Educational Resources Information Center

    Rezaee, Abbas Ali; Marefat, Hamideh; Saeedakhtar, Afsaneh

    2015-01-01

    Collocational competence is recognized to be integral to native-like L2 performance, and concordancing can be of assistance in gaining this competence. This study reports on an investigation into the effect of symmetrical and asymmetrical scaffolding on the collocational competence of Iranian intermediate learners of English in the context of…

  20. Collocational Links in the L2 Mental Lexicon and the Influence of L1 Intralexical Knowledge

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2011-01-01

    This article assesses the influence of L1 intralexical knowledge on the formation of L2 intralexical collocations. Two tests, a primed lexical decision task (LDT) and a test of receptive collocational knowledge, were administered to a group of non-native speakers (NNSs) (L1 Swedish), with native speakers (NSs) of English serving as controls on the…

  1. Collocation, Semantic Prosody, and Near Synonymy: A Cross-Linguistic Perspective

    ERIC Educational Resources Information Center

    Xiao, Richard; McEnery, Tony

    2006-01-01

    This paper explores the collocational behaviour and semantic prosody of near synonyms from a cross-linguistic perspective. The importance of these concepts to language learning is well recognized. Yet while collocation and semantic prosody have recently attracted much interest from researchers studying the English language, there has been little…

  2. English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information

    ERIC Educational Resources Information Center

    Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji

    2012-01-01

    We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…

  3. Accurate recovery of 4D left ventricular deformations using volumetric B-splines incorporating phase based displacement estimates

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Tustison, Nicholas J.; Amini, Amir A.

    2006-03-01

    In this paper, an improved framework for the estimation of 3-D left-ventricular deformations from tagged MRI is presented. Contiguous short- and long-axis tagged MR images are collected and used within a 4-D B-spline based deformable model to determine 4-D displacements and strains. An initial 4-D B-spline model fitted to sparse tag line data is first constructed by minimizing a 4-D Chamfer distance potential-based energy function for aligning isoparametric planes of the model with tag line locations; subsequently, dense virtual tag lines based on 2-D phase-based displacement estimates and the initial model are created. A final 4-D B-spline model with an increased number of knots is fitted to the virtual tag lines. From the final model, we can extract accurate 3-D myocardial deformation fields and corresponding strain maps, which are local measures of non-rigid deformation. Lagrangian strains derived in simulated data show improvement over our previous work. The method is also applied to 3-D tagged MRI data collected in a canine.

  4. A Logarithmic Complexity Floating Frame of Reference Formulation with Interpolating Splines for Articulated Multi-Flexible-Body Dynamics

    PubMed Central

    Ahn, W.; Anderson, K.S.; De, S.

    2013-01-01

    An interpolating spline-based approach is presented for modeling multi-flexible-body systems in the divide-and-conquer (DCA) scheme. This algorithm uses the floating frame of reference formulation and piecewise spline functions to construct and solve the non-linear equations of motion of the multi-flexible-body system undergoing large rotations and translations. The new approach is compared with the flexible DCA (FDCA) that uses the assumed modes method [1]. The FDCA, in many cases, must resort to sub-structuring to accurately model the deformation of the system. We demonstrate, through numerical examples, that the interpolating spline-based approach is comparable in accuracy and superior in efficiency to the FDCA. The present approach is appropriate for modeling flexible mechanisms with thin 1D bodies undergoing large rotations and translations, including those with irregular shapes. As such, the present approach extends the current capability of the DCA to model deformable systems. The algorithm retains the theoretical logarithmic complexity inherent in the DCA when implemented in parallel. PMID:24124265

  5. Use of tensor product splines in magnet optimization

    SciTech Connect

    Davey, K.R. )

    1999-05-01

    Variational metrics and other direct search techniques have proved useful in magnetic optimization. One technique used in magnetic optimization is to first fit a smooth function to the data of the desired optimization parameter. If this fit is smoothly differentiable, a number of powerful techniques become available for the optimization. The author shows the usefulness of tensor product splines in accomplishing this end. Proper choice of augmented knot placement not only makes the fit very accurate but also allows for differentiation. Thus the gradients required for direct optimization in bivariate and trivariate applications are robustly generated.
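
    A minimal sketch of the idea with SciPy: fit a tensor-product (bicubic) spline to gridded data, then differentiate the fit analytically to obtain the gradient a direct optimizer would need. The sampled objective is a made-up stand-in for magnet data.

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        # Gridded samples of the optimization parameter (stand-in data)
        xg = np.linspace(0.0, 1.0, 25)
        yg = np.linspace(0.0, 1.0, 25)
        X, Y = np.meshgrid(xg, yg, indexing="ij")
        F = np.sin(2 * np.pi * X) * np.cos(np.pi * Y)

        # Tensor-product cubic spline fit, smoothly differentiable
        spl = RectBivariateSpline(xg, yg, F, kx=3, ky=3)

        # Value and gradient of the fit at an arbitrary point
        x0, y0 = 0.3, 0.6
        grad = np.array([spl.ev(x0, y0, dx=1), spl.ev(x0, y0, dy=1)])
        print("value:", spl.ev(x0, y0), "gradient:", grad)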

  6. A stable interface element scheme for the p-adaptive lifting collocation penalty formulation

    NASA Astrophysics Data System (ADS)

    Cagnone, J. S.; Nadarajah, S. K.

    2012-02-01

    This paper presents a procedure for adaptive polynomial refinement in the context of the lifting collocation penalty (LCP) formulation. The LCP scheme is a high-order unstructured discretization method unifying the discontinuous Galerkin, spectral volume, and spectral difference schemes in a single differential formulation. Due to the differential nature of the scheme, the treatment of inter-cell fluxes for spatially varying polynomial approximations is not straightforward. Specially designed elements are proposed to tackle non-conforming polynomial approximations. These elements are constructed such that a conforming interface between polynomial approximations of different degrees is recovered. The stability and conservation properties of the scheme are analyzed, and various inviscid compressible flow calculations are performed to demonstrate the potential of the proposed approach.

  7. Estimating error cross-correlations in soil moisture data sets using extended collocation analysis

    NASA Astrophysics Data System (ADS)

    Gruber, A.; Su, C.-H.; Crow, W. T.; Zwieback, S.; Dorigo, W. A.; Wagner, W.

    2016-02-01

    Global soil moisture records are essential for studying the role of hydrologic processes within the larger earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multisource soil moisture retrievals into a unified data set. However, this requires an appropriate parameterization of the error structures of the underlying data sets. While triple collocation (TC) analysis has been widely recognized as a powerful tool for estimating random error variances of coarse-resolution soil moisture data sets, the estimation of error cross covariances remains an unresolved challenge. Here we propose a method—referred to as extended collocation (EC) analysis—for estimating error cross-correlations by generalizing the TC method to an arbitrary number of data sets and relaxing the therein made assumption of zero error cross-correlation for certain data set combinations. A synthetic experiment shows that EC analysis is able to reliably recover true error cross-correlation levels. Applied to real soil moisture retrievals from Advanced Microwave Scanning Radiometer-EOS (AMSR-E) C-band and X-band observations together with advanced scatterometer (ASCAT) retrievals, modeled data from Global Land Data Assimilation System (GLDAS)-Noah and in situ measurements drawn from the International Soil Moisture Network, EC yields reasonable and strong nonzero error cross-correlations between the two AMSR-E products. Against expectation, nonzero error cross-correlations are also found between ASCAT and AMSR-E. We conclude that the proposed EC method represents an important step toward a fully parameterized error covariance matrix for coarse-resolution soil moisture data sets, which is vital for any rigorous data assimilation framework or data merging scheme.
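
    EC analysis generalizes classical triple collocation (TC). The sketch below shows only the classical TC building block: estimating the three random error variances from sample covariances, under the usual assumption of mutually uncorrelated errors, on synthetic data.

        import numpy as np

        def triple_collocation_errvar(x, y, z):
            """Classical TC estimates of the random error variances of three
            collocated data sets with mutually uncorrelated errors."""
            C = np.cov(np.vstack([x, y, z]))
            ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
            ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
            ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
            return ex, ey, ez

        # Synthetic check: one "truth" observed by three noisy systems
        rng = np.random.default_rng(5)
        truth = rng.normal(0, 1, 5000)
        x = truth + rng.normal(0, 0.2, 5000)
        y = truth + rng.normal(0, 0.3, 5000)
        z = truth + rng.normal(0, 0.4, 5000)
        print(triple_collocation_errvar(x, y, z))   # ~ (0.04, 0.09, 0.16)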

  8. Full-turn symplectic map from a generator in a Fourier-spline basis

    SciTech Connect

    Berg, J.S.; Warnock, R.L.; Ruth, R.D.; Forest, E.

    1993-04-01

    Given an arbitrary symplectic tracking code, one can construct a full-turn symplectic map that approximates the result of the code to high accuracy. The map is defined implicitly by a mixed-variable generating function. The implicit definition is no great drawback in practice, thanks to an efficient use of Newton's method to solve for the explicit map at each iteration. The generator is represented by a Fourier series in angle variables, with coefficients given as B-spline functions of action variables. It is constructed by using results of single-turn tracking from many initial conditions. The method has been applied to a realistic model of the SSC in three degrees of freedom. Orbits can be mapped symplectically for 10{sup 7} turns on an IBM RS6000 model 320 workstation, in a run of about one day.

  9. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.

  10. Local denoising of digital speckle pattern interferometry fringes by multiplicative correlation and weighted smoothing splines.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2005-05-10

    We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.

  11. Systolic algorithms for B-spline patch generation

    SciTech Connect

    Megson, G.M. )

    1991-03-01

    This paper describes a systolic array for constructing the blending functions of B-spline curves and surfaces that is 7k times faster than the equivalent sequential computation. The array requires just 5k inner-product cell equivalents, where k - 1 is the maximum degree of the blending-function polynomials. This array is then used as the basis for a composite systolic architecture for generating single or multiple points on a B-spline curve or surface. The total hardware requirement is bounded by 5 max(k, l) + 3(max(m, n) + 1) inner-product cells and O(mn) registers, where m and n are the numbers of control points in the two available directions. The hardware can be reduced to 5 max(k, l) + max(m, n) + 1 if each component of a point is generated by separate passes of data through the array. Equations for the array speed-up are given, and likely speed-ups for different-sized patches are considered.

  12. Visualizing 3D Turbulence On Temporally Adaptive Wavelet Collocation Grids

    NASA Astrophysics Data System (ADS)

    Goldstein, D. E.; Kadlec, B. J.; Yuen, D. A.; Erlebacher, G.

    2005-12-01

    Today there is an explosion of data from high-resolution computations of nonlinear phenomena in many fields, including the geo- and environmental sciences. The efficient storage and subsequent visualization of these large data sets is a trade-off between storage costs and data quality. New dynamically adaptive simulation methodologies promise significant computational cost savings and have the added benefit of producing results on adapted grids that significantly reduce storage and data manipulation costs. Yet with these adaptive simulation methodologies come new challenges in the visualization of temporally adaptive data sets. In this work, turbulence data sets from Stochastic Coherent Adaptive Large Eddy Simulations (SCALES) are visualized with the open-source tool ParaView, as a challenging case study. SCALES simulations use a temporally adaptive collocation grid defined by wavelet threshold filtering to resolve the most energetic coherent structures in a turbulence field. A subgrid-scale model is used to account for the effect of unresolved subgrid-scale modes. The results from the SCALES simulations are saved on a thresholded dyadic wavelet collocation grid, which by its nature does not include cell information. ParaView is an open-source visualization package developed by Kitware that is based on the widely used VTK graphics toolkit. The efficient generation of cell information, required with current ParaView data formats, is explored using custom algorithms and VTK toolkit routines. Adaptive 3D visualizations using isosurfaces and volume visualizations are compared with non-adaptive visualizations. To explore the localized multiscale structures in the turbulent data sets, the wavelet coefficients are also visualized, allowing visualization of the energy contained in local physical regions as well as in local wavenumber space.

  13. The approximation of the parameters of ASE motion by the spline-functions.

    NASA Astrophysics Data System (ADS)

    Tamarov, V. A.; Serebrennikov, A. G.

    1984-11-01

    Cubic and rational splines have been applied to the approximation of the rectangular coordinates and velocities of artificial Earth satellites. The results are discussed from the point of view of operating speed and the amount of numerical information needed to determine the spline.

  14. The determination of gravity anomalies from geoid heights using the inverse Stokes' formula, Fourier transforms, and least squares collocation

    NASA Technical Reports Server (NTRS)

    Rummel, R.; Sjoeberg, L.; Rapp, R. H.

    1978-01-01

    A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.

  15. Existence and Construction of Simple B-Splines of Class Ck on a Four-Directional Mesh of the Plane

    NASA Astrophysics Data System (ADS)

    Nouisser, O.; Sbibih, D.

    2001-08-01

    In this paper we present a study of spaces of splines in C^k(R^2) whose supports are the square S1 and the lozenge ◊1, formed respectively by four and eight triangles of the uniform four-directional mesh of the plane. Such splines are called S1- and ◊1-splines. We first compute the dimension of the space of S1-splines. Then we prove the existence of a unique S1-spline of minimal degree for any fixed k ≥ 0. By using this last result, we also prove the existence of a unique ◊1-spline of minimal degree. Finally, we describe algorithms allowing the computation of the Bernstein-Bézier coefficients of the S1-spline and the ◊1-spline of minimal degree.

  16. A GENERALIZED STOCHASTIC COLLOCATION APPROACH TO CONSTRAINED OPTIMIZATION FOR RANDOM DATA IDENTIFICATION PROBLEMS

    SciTech Connect

    Webster, Clayton G; Gunzburger, Max D

    2013-01-01

    We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives will be discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/control can be either deterministic or stochastic. Given an objective, we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high-dimensional spaces, particularly in the case when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions) and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation FEM approach. This allows for decoupled, moderately high-dimensional, parameterized computations of the stochastic optimality system, where at each collocation point deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates for the fully discrete problems will be described and used to compare the efficiency of the method with several other techniques. Numerical examples illustrate the theoretical

  17. Interpolation by new B-splines on a four directional mesh of the plane

    NASA Astrophysics Data System (ADS)

    Nouisser, O.; Sbibih, D.

    2004-01-01

    In this paper we construct new simple and composed B-splines on the uniform four-directional mesh of the plane, in order to improve the approximation order of the B-splines studied in Sablonnière (in: Program on Spline Functions and the Theory of Wavelets, Proceedings and Lecture Notes, Vol. 17, University of Montreal, 1998, pp. 67-78). If φ is such a simple B-spline, we first determine the space of polynomials with maximal total degree included in S(φ), and we prove some results concerning the linear independence of the family of integer translates of φ. Next, we show that cardinal interpolation with φ is correct, and we study in S(φ) a Lagrange interpolation problem. Finally, we define composed B-splines by repeated convolution of φ with the characteristic functions of a square or a lozenge, and we give some of their properties.

  18. Generation of global VTEC maps from low latency GNSS observations based on B-spline modelling and Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte

    2015-04-01

    In May 2014 DGFI-TUM (the former DGFI) and the German Space Situational Awareness Centre (GSSAC) started to develop an OPerational Tool for Ionospheric Mapping And Prediction (OPTIMAP); in November 2014 the Institute of Astrophysics at the University of Göttingen (IAG) joined the group as the third partner. This project aims at the computation and prediction of maps of the vertical total electron content (VTEC) and the electron density distribution of the ionosphere on a global scale from various space-geodetic observation techniques, such as GNSS and satellite altimetry, as well as Sun observations. In this contribution we present first results, i.e. a near-real-time processing framework for generating VTEC maps by assimilating GNSS (GPS, GLONASS) based ionospheric data into a two-dimensional global B-spline approach. To be more specific, the spatial variations of VTEC are modelled by trigonometric B-spline functions in longitude and by endpoint-interpolating polynomial B-spline functions in latitude. Since B-spline functions are compactly supported and highly localizing, our approach can handle large data gaps appropriately and thus provides a better approximation of data with heterogeneous density and quality than the commonly used spherical harmonics. The presented method models temporal variations of VTEC inside a Kalman filter. The unknown parameters of the filter state vector are composed of the B-spline coefficients as well as the satellite and receiver DCBs. To propagate these state vector components in time, a dynamical model has to be set up. The current implementation of the filter allows a choice between a random walk process, a Gauss-Markov process and a dynamic process driven by an empirical ionosphere model, e.g. the International Reference Ionosphere (IRI). For running the model, ionospheric input data are acquired from terrestrial GNSS networks through online archive systems
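
    A minimal sketch of the filter structure described above, with B-spline coefficients as the state and a random-walk dynamical model; the design matrix stands in for evaluated B-spline basis functions, and all dimensions and noise levels are invented (this is not the OPTIMAP implementation).

      import numpy as np

      n_coeff = 16                          # number of B-spline coefficients
      x = np.zeros(n_coeff)                 # state: B-spline coefficients
      P = np.eye(n_coeff)                   # state covariance
      Q = 0.01 * np.eye(n_coeff)            # random-walk process noise

      def kalman_step(x, P, H, z, r=0.25):
          """One predict/update cycle. H maps coefficients to VTEC
          observations (rows = evaluated basis functions), z = observations."""
          # Predict: a random walk leaves x unchanged, inflates covariance.
          P = P + Q
          # Update.
          S = H @ P @ H.T + r * np.eye(len(z))
          K = np.linalg.solve(S, H @ P).T   # Kalman gain (S, P symmetric)
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      # Fake epoch: 40 observations with a random design matrix standing in
      # for the evaluated B-spline basis.
      rng = np.random.default_rng(0)
      H = rng.random((40, n_coeff))
      z = H @ np.ones(n_coeff) + 0.1 * rng.standard_normal(40)
      x, P = kalman_step(x, P, H, z)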

  19. Using a radial ultrasound probe's virtual origin to compute midsagittal smoothing splines in polar coordinates.

    PubMed

    Heyne, Matthias; Derrick, Donald

    2015-12-01

    Tongue surface measurements from midsagittal ultrasound scans are effectively arcs with deviations representing tongue shape, but smoothing-spline analyses of variance (SSANOVAs) assume variance around a horizontal line. Therefore, calculating SSANOVA average curves of tongue traces in Cartesian coordinates [Davidson, J. Acoust. Soc. Am. 120(1), 407-415 (2006)] creates errors that are compounded at the tongue tip and root, where average tongue shape deviates most from a horizontal line. This paper introduces a method for transforming data into polar coordinates similar to the technique by Mielke [J. Acoust. Soc. Am. 137(5), 2858-2869 (2015)], but using the virtual origin of a radial ultrasound transducer as the polar origin, allowing data conversion in a manner that is robust against between-subject and between-session variability.
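
    The coordinate transformation itself is a one-liner per point; the sketch below converts Cartesian tongue-trace samples to polar coordinates about an assumed virtual origin (in practice the origin comes from the transducer geometry; all numbers here are invented).

      import numpy as np

      origin = np.array([64.0, -30.0])      # virtual origin (x, y), assumed

      def to_polar(points, origin):
          """points: (n, 2) Cartesian tongue-surface samples."""
          d = points - origin
          r = np.hypot(d[:, 0], d[:, 1])    # radius from the virtual origin
          theta = np.arctan2(d[:, 1], d[:, 0])  # angle; SSANOVA runs over theta
          return r, theta

      trace = np.array([[40.0, 20.0], [60.0, 28.0], [80.0, 22.0]])
      r, theta = to_polar(trace, origin)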

  20. Spline analysis of Holocene sediment magnetic records: Uncertainty estimates for field modeling

    NASA Astrophysics Data System (ADS)

    Panovska, S.; Finlay, C. C.; Donadini, F.; Hirt, A. M.

    2012-02-01

    Sediment and archeomagnetic data spanning the Holocene enable us to reconstruct the evolution of the geomagnetic field on time scales of centuries to millennia. In global field modeling, the reliability of data is taken into account by weighting according to uncertainty estimates. Uncertainties in sediment magnetic records arise from (1) imperfections in the paleomagnetic recording processes, (2) coring and (sub)sampling methods, (3) adopted averaging procedures, and (4) uncertainties in the age-depth models. We take a step toward improved uncertainty estimates by performing a comprehensive statistical analysis of the available global database of Holocene magnetic records. Smoothing spline models that capture the robust aspects of individual records are derived. This involves a cross-validation approach, based on an absolute deviation measure of misfit, to determine the smoothing parameter for each spline model, together with the use of a minimum smoothing time derived from the sedimentation rate and assumed lock-in depth. Departures from the spline models provide information concerning the random variability in each record. Temporal resolution analysis reveals that 50% of the records have smoothing times between 80 and 250 years. We also perform comparisons among the sediment magnetic records and archeomagnetic data, as well as with predictions from the global historical and archeomagnetic field models. Combining these approaches, we arrive at individual uncertainty estimates for each sediment record. These range from 2.5° to 11.2° (median: 5.9°; interquartile range: 5.4° to 7.2°) for inclination, 4.1° to 46.9° (median: 13.4°; interquartile range: 11.4° to 18.9°) for relative declination, and 0.59 to 1.32 (median: 0.93; interquartile range: 0.86 to 1.01) for standardized relative paleointensity. These values suggest that uncertainties may have been underestimated in previous studies. No compelling evidence for systematic inclination shallowing is found.
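
    A minimal sketch of choosing the smoothing parameter by cross-validation with an absolute-deviation misfit, on a synthetic inclination record; scipy's UnivariateSpline stands in for the authors' spline machinery, and the fold count and candidate values are arbitrary.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(1)
      age = np.sort(rng.uniform(0, 10000, 120))            # yr BP, synthetic
      incl = 60 + 5 * np.sin(age / 800) + rng.normal(0, 2, age.size)

      def cv_abs_dev(s, k=5):
          """k-fold CV score (mean absolute deviation) for smoothing factor s."""
          idx = np.arange(age.size)
          err = []
          for fold in np.array_split(idx, k):
              train = np.setdiff1d(idx, fold)
              spl = UnivariateSpline(age[train], incl[train], s=s)
              err.append(np.mean(np.abs(spl(age[fold]) - incl[fold])))
          return np.mean(err)

      candidates = [50, 100, 200, 400, 800, 1600]
      best = min(candidates, key=cv_abs_dev)
      model = UnivariateSpline(age, incl, s=best)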

  1. MUlti-Dimensional Spline-Based Estimator (MUSE) for motion estimation: algorithm development and initial results.

    PubMed

    Viola, Francesco; Coe, Ryan L; Owen, Kevin; Guenther, Drake A; Walker, William F

    2008-12-01

    Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of this central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minimum of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE), which allows accurate and precise estimation of multi-dimensional displacement/strain components from multi-dimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits a maximum bias of 2.6 x 10(-4) samples in range and 2.2 x 10(-3) samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 x 10(-3) samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE is
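
    The one-dimensional precursor of MUSE can be sketched directly: build a continuous spline representation of the reference, then minimize a matching function over the (sub-sample) delay. The pulse, sample rate, and search bounds below are invented for the example.

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.optimize import minimize_scalar

      fs = 40e6                             # sample rate (Hz), illustrative
      t = np.arange(128) / fs
      true_delay = 0.37 / fs                # sub-sample delay to recover
      pulse = lambda tt: (np.sin(2 * np.pi * 2e6 * tt)
                          * np.exp(-((tt - 64 / fs) * 1e6) ** 2))
      ref, delayed = pulse(t), pulse(t - true_delay)

      # Continuous representation of the reference via a cubic spline, then
      # minimize the sum-of-squared-differences matching function over delay.
      spline = CubicSpline(t, ref)
      sse = lambda d: np.sum((spline(t - d) - delayed) ** 2)
      res = minimize_scalar(sse, bounds=(-2 / fs, 2 / fs), method="bounded")
      print(f"estimated delay: {res.x * fs:.3f} samples (true 0.370)")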

  2. Collocated comparisons of continuous and filter-based PM2.5 measurements at Fort McMurray, Alberta, Canada

    PubMed Central

    Hsu, Yu-Mei; Wang, Xiaoliang; Chow, Judith C.; Watson, John G.; Percy, Kevin E.

    2016-01-01

    Collocated comparisons of three PM2.5 monitors were conducted from June 2011 to May 2013 at an air monitoring station in the residential area of Fort McMurray, Alberta, Canada, a city located in the Athabasca Oil Sands Region. Extremely cold winters (down to approximately −40°C) coupled with low PM2.5 concentrations present a challenge for continuous measurements. Both the tapered element oscillating microbalance (TEOM), operated at 40°C (i.e., TEOM40), and the Synchronized Hybrid Ambient Real-time Particulate monitor (SHARP, a Federal Equivalent Method [FEM]) were compared with a Partisol PM2.5 U.S. Federal Reference Method (FRM) sampler. While hourly TEOM40 PM2.5 concentrations were consistently ~20–50% lower than those of SHARP, no statistically significant differences were found between the 24-hr averages for FRM and SHARP. Orthogonal regression (OR) equations derived from FRM and TEOM40 were used to adjust the TEOM40 (i.e., TEOMadj) and improve its agreement with FRM, particularly for the cold season. Hourly TEOMadj values for the 12 years from 1999 to 2011 were derived using the OR equations between SHARP and TEOM40 obtained from the 2-year (2011–2013) collocated measurements. Trend analysis combining the TEOMadj and SHARP measurements showed a statistically significant decrease in PM2.5 concentrations, with a seasonal slope of −0.15 μg m−3 yr−1 from 1999 to 2014. Implications: Consistency in PM2.5 measurements is needed for trend analysis. The collocated comparison among the three PM2.5 monitors demonstrated the difference between FRM and TEOM, as well as between SHARP and TEOM. The orthogonal regression equations can be applied to correct historical TEOM data to examine long-term trends within the network. PMID:26727574
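
    Orthogonal regression treats both variables as error-prone, unlike ordinary least squares. Below is a minimal sketch on synthetic data, assuming equal error variances (the classical special case; the study's actual OR fits may be weighted differently):

      import numpy as np

      rng = np.random.default_rng(2)
      frm = rng.uniform(2, 30, 200)                      # reference-scale PM2.5
      teom = 0.7 * frm - 0.5 + rng.normal(0, 1.0, 200)   # biased-low monitor

      def orthogonal_regression(x, y):
          """Slope/intercept minimizing perpendicular distances."""
          sxx = np.var(x, ddof=1)
          syy = np.var(y, ddof=1)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy**2)) / (2 * sxy)
          intercept = np.mean(y) - slope * np.mean(x)
          return slope, intercept

      # Regress FRM on TEOM, then apply to correct historical TEOM records.
      slope, intercept = orthogonal_regression(teom, frm)
      teom_adj = slope * teom + intercept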

  3. Prediction of longitudinal dispersion coefficient using multivariate adaptive regression splines

    NASA Astrophysics Data System (ADS)

    Haghiabi, Amir Hamzeh

    2016-07-01

    In this paper, multivariate adaptive regression splines (MARS) is developed as a novel soft-computing technique for predicting the longitudinal dispersion coefficient (DL) in rivers. An experimental dataset related to DL, collected from the literature, was used for preparing the MARS model. Results of the MARS model were compared with a multi-layer neural network model and empirical formulas. To identify the most effective parameters on DL, the Gamma test was used. Performance of the MARS model was assessed by calculating standard error indices. The error indices showed that the MARS model performs well and is more accurate than the multi-layer neural network model and the empirical formulas. Results of the Gamma test and the MARS model showed that flow depth (H) and the ratio of mean velocity to shear velocity (u/u*) were the most effective parameters on DL.
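
    The MARS building block is the hinge (truncated linear) basis function. The sketch below hand-picks knots and fits coefficients by ordinary least squares on synthetic data; a real MARS implementation would instead select knots and terms adaptively with forward/backward passes.

      import numpy as np

      def hinges(x, knot):
          """A mirrored pair of truncated linear (hinge) basis functions."""
          return np.maximum(0, x - knot), np.maximum(0, knot - x)

      rng = np.random.default_rng(3)
      H = rng.uniform(0.5, 5.0, 300)        # flow depth (illustrative)
      u_ratio = rng.uniform(1.0, 20.0, 300) # mean velocity / shear velocity
      DL = 30 * np.maximum(0, u_ratio - 8) + 12 * H + rng.normal(0, 5, 300)

      # Design matrix of hinge pairs per predictor, then least squares.
      X = np.column_stack([np.ones_like(H),
                           *hinges(H, 2.5), *hinges(u_ratio, 8.0)])
      coef, *_ = np.linalg.lstsq(X, DL, rcond=None)
      pred = X @ coef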

  4. COLLINARUS: collection of image-derived non-linear attributes for registration using splines

    NASA Astrophysics Data System (ADS)

    Chappelow, Jonathan; Bloch, B. Nicolas; Rofsky, Neil; Genega, Elizabeth; Lenkinski, Robert; DeWolf, William; Viswanath, Satish; Madabhushi, Anant

    2009-02-01

    We present a new method for fully automatic non-rigid registration of multimodal imagery, including structural and functional data, that utilizes multiple textural feature images to drive an automated spline-based non-linear image registration procedure. Multimodal image registration is significantly more complicated than registration of images from the same modality or protocol, on account of the difficulty in quantifying similarity between different structural and functional information, and also due to possible physical deformations resulting from the data acquisition process. The COFEMI technique for feature ensemble selection and combination has previously been demonstrated to improve rigid registration performance over intensity-based mutual information (MI) for images of dissimilar modalities with visible intensity artifacts. Hence, we present here the natural extension of feature ensembles for driving automated non-rigid image registration in our new technique, termed Collection of Image-derived Non-linear Attributes for Registration Using Splines (COLLINARUS). Qualitative and quantitative evaluation of the COLLINARUS scheme is performed on several sets of real multimodal prostate images and synthetic multiprotocol brain images. Multimodal (histology and MRI) prostate image registration is performed for 6 clinical data sets comprising a total of 21 groups of in vivo structural (T2-w) MRI, functional dynamic contrast enhanced (DCE) MRI, and ex vivo whole-mount histology (WMH) images with cancer present. Our method determines a non-linear transformation to align WMH with the high-resolution in vivo T2-w MRI, followed by mapping of the histopathologic cancer extent onto the T2-w MRI. The cancer extent is then mapped from T2-w MRI onto DCE-MRI using the combined non-rigid and affine transformations determined by the registration. Evaluation of prostate registration is performed by comparison with the 3 time point (3TP) representation of functional DCE data, which provides an independent estimate of cancer

  5. A Quadratic Spline based Interface (QUASI) reconstruction algorithm for accurate tracking of two-phase flows

    NASA Astrophysics Data System (ADS)

    Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.

    2009-12-01

    A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves, and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries, using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm have been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently compared with the other interface reconstruction methods available in the literature. A quadratic rate of error reduction with respect to grid size has been observed in all cases with curved interface shapes; only in situations where the interface geometry is primarily flat does the rate of convergence become linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.

  6. Using Spline Functions for the Shape Description of the Surface of Shell Structures

    NASA Astrophysics Data System (ADS)

    Lenda, Grzegorz

    2014-12-01

    The assessment of the cover shape of shell structures is an important issue from the point of view of both safety and functionality of the construction. The most numerous group among this type of construction are objects having the shape of a quadric (cooling towers, tanks for gas and liquids, radio-telescope dishes, etc.). The material from observations of these objects (point sets), collected during periodic measurements, is usually converted into a continuous form in a process of approximation using a quadric surface. The resulting models are then applied in the assessment of the deformation of the surface over a given period of time. Such a procedure has, however, some significant limitations. Approximation with quadrics allows the determination of the basic dimensions and location of the construction, but it yields idealized objects that provide no information on local surface deformations. These can only be identified by comparing the model with the point set of observations, and if the periodic measurements are carried out at independent, separate points, the existing deformations cannot be determined directly. The second problem results from the one-equation character of the idealized approximation model: real deformations of the object change its basic parameters, inter alia the lengths of the semi-axes of the underlying quadric. The third problem appears when the construction is not a quadric and no information on the equation describing its shape is available. Adopting the wrong kind of approximating function produces a model with large deviations from the observed points. All of the above inconveniences can be avoided by applying splines to the description of the surface shape of shell structures. The use of functions of this type, however, encounters other kinds of limitations. This study deals with the above subject, presenting several methods allowing the increase of accuracy and decrease of

  7. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background of spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS-3 altimeter data is examined by determining the shortest recoverable wavelength (corresponding to the cut-off frequency). Data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid, as well as with respect to the GEM-9 surface, is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS-3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.
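
    The payoff of exploiting Toeplitz structure is easy to demonstrate: for a stationary covariance on a regular track, a Levinson-type solver works directly from the first column. A minimal sketch with an invented covariance, using scipy's solve_toeplitz rather than the paper's own block-Toeplitz algorithm:

      import numpy as np
      from scipy.linalg import solve_toeplitz, toeplitz

      # Stationary covariance => Toeplitz normal matrix; Levinson recursion
      # solves it in O(n^2) instead of the O(n^3) of a general dense solve.
      n = 512
      col = np.exp(-np.arange(n) / 40.0)    # illustrative covariance column
      rhs = np.random.default_rng(4).standard_normal(n)

      x_fast = solve_toeplitz(col, rhs)             # Levinson-type solve
      x_ref = np.linalg.solve(toeplitz(col), rhs)   # generic dense solve
      assert np.allclose(x_fast, x_ref)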

  8. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGESBeta

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  10. Variability analysis of device-level photonics using stochastic collocation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xing, Yufei; Spina, Domenico; Li, Ang; Dhaene, Tom; Bogaerts, Wim

    2016-05-01

    Integrated photonics, and especially silicon photonics, has rapidly expanded its catalog of building blocks and functionalities. It is now maturing fast towards circuit-level integration to serve more complex applications in industry. However, performance variability due to the fabrication process and operational conditions can limit the yield of large-scale circuits. It is essential to assess this impact at the design level with an efficient variability analysis: how variations in geometrical, electrical and optical parameters propagate into component performance. In particular, when implementing wavelength-selective filters, many primary functional parameters are affected by fabrication-induced variability. The key functional parameters that we assess in this paper are the waveguide propagation constant (the effective index, essential to define the exact length of a delay line) and the coupling coefficients in coupling structures (necessary to set the power distribution over different delay lines). The Monte Carlo (MC) method is the standard method for variability analysis, thanks to its accuracy and easy implementation. However, due to its slow convergence, it requires a large set of samples (simulations or measurements), making it computationally or experimentally expensive. More efficient methods to assess such variability can be used, such as generalized polynomial chaos (gPC) expansion or stochastic collocation. In this paper, we demonstrate stochastic collocation (SC) as an efficient alternative to MC or gPC to characterize photonic devices under the effect of uncertainty. The idea of SC is to interpolate stochastic solutions in the random space by interpolation polynomials. After sampling the deterministic problem at a pre-defined set of nodes in the random space, the interpolation is constructed. SC drastically reduces computation and measurement cost. Also, like the MC method, sampling-based SC is easy to implement. Its computation cost can be

  11. Variance-based global sensitivity analysis for multiple scenarios and models with implementation using sparse grid collocation

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Ye, Ming

    2015-09-01

    Sensitivity analysis is a vital tool in hydrological modeling for identifying influential parameters for inverse modeling and uncertainty analysis, and variance-based global sensitivity analysis has gained popularity. However, the conventional global sensitivity indices are defined with consideration of only parametric uncertainty. Based on a hierarchical structure of parameter, model, and scenario uncertainties and on recently developed techniques of model- and scenario-averaging, this study derives new global sensitivity indices for multiple models and multiple scenarios. To reduce the computational cost of variance-based global sensitivity analysis, the sparse grid collocation method is used to evaluate the mean and variance terms involved. In a simple synthetic case of groundwater flow and reactive transport, it is demonstrated that the global sensitivity indices vary substantially between the four models and three scenarios. Not considering the model and scenario uncertainties might result in biased identification of important model parameters. This problem is resolved by using the new indices defined for multiple models and/or multiple scenarios, particularly when the sensitivity indices and model/scenario probabilities vary substantially. The sparse grid collocation method dramatically reduces the computational cost in comparison with the popular quasi-random sampling method. The new framework of global sensitivity analysis is mathematically general and can be applied to a wide range of hydrologic and environmental problems.

  12. Experimental procedure for the evaluation of tooth stiffness in spline coupling including angular misalignment

    NASA Astrophysics Data System (ADS)

    Curà, Francesca; Mura, Andrea

    2013-11-01

    Tooth stiffness is a very important parameter in studying both static and dynamic behaviour of spline couplings and gears. Many works concerning tooth stiffness calculation are available in the literature, but experimental results are very rare, above all considering spline couplings. In this work experimental values of spline coupling tooth stiffness have been obtained by means of a special hexapod measuring device. Experimental results have been compared with the corresponding theoretical and numerical ones. Also the effect of angular misalignments between hub and shaft has been investigated in the experimental planning.

  13. Estimates of Mode-S EHS aircraft-derived wind observation errors using triple collocation

    NASA Astrophysics Data System (ADS)

    de Haan, Siebren

    2016-08-01

    Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated observation errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained utilizing information from air traffic control surveillance radar with Selective Mode Enhanced Surveillance capabilities (Mode-S EHS). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind (zonal and meridional) observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
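
    The triple-collocation estimate itself reduces to pairwise covariances. A minimal sketch on synthetic data, with three collocated, independent error sources standing in for Mode-S EHS, radar, and the NWP model (the true error levels are invented and then recovered by the formulas):

      import numpy as np

      rng = np.random.default_rng(5)
      truth = 5 + 3 * rng.standard_normal(20000)           # wind component, m/s
      x = truth + 1.2 * rng.standard_normal(truth.size)    # e.g. Mode-S EHS
      y = truth + 0.9 * rng.standard_normal(truth.size)    # e.g. radar wind
      z = truth + 1.5 * rng.standard_normal(truth.size)    # e.g. NWP model

      # With mutually uncorrelated errors, each error variance follows
      # from the pairwise covariances of the three collocated series.
      C = np.cov(np.vstack([x, y, z]))
      var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
      var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
      var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
      print(np.sqrt([var_x, var_y, var_z]))   # ~ [1.2, 0.9, 1.5]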

  14. Spectral collocation for multiparameter eigenvalue problems arising from separable boundary value problems

    NASA Astrophysics Data System (ADS)

    Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.

    2015-10-01

    In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.

  15. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.

    2016-03-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  16. The Chebyshev-Legendre method: Implementing Legendre methods on Chebyshev points

    NASA Technical Reports Server (NTRS)

    Don, Wai Sun; Gottlieb, David

    1993-01-01

    We present a new collocation method for the numerical solution of partial differential equations. This method uses the Chebyshev collocation points, but because of the way the boundary conditions are implemented, it has all the advantages of the Legendre methods. In particular, L2 estimates can be obtained easily for hyperbolic and parabolic problems.
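
    The Chebyshev side of the construction is standard and compact; the sketch below builds the Chebyshev collocation points and differentiation matrix (after Trefethen's well-known cheb function) and checks spectral accuracy on a smooth function. The Legendre-based boundary treatment that distinguishes the method is not reproduced here.

      import numpy as np

      def cheb(N):
          """Differentiation matrix D and Chebyshev points x on [-1, 1]."""
          if N == 0:
              return np.zeros((1, 1)), np.array([1.0])
          x = np.cos(np.pi * np.arange(N + 1) / N)
          c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
          X = np.tile(x, (N + 1, 1)).T
          dX = X - X.T
          D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
          D -= np.diag(D.sum(axis=1))       # negative row sums on the diagonal
          return D, x

      D, x = cheb(16)
      err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))   # spectral accuracy
      print(f"max derivative error: {err:.2e}")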

  17. On the spectral methods for population balance problems

    NASA Astrophysics Data System (ADS)

    Solsvik, J.; Jakobsen, H. A.

    2013-10-01

    Numerical methods in the weighted residual framework have been evaluated for three population balance (PB) problems. Based on the present solution results, the orthogonal collocation method is recommended over the tau and least-squares methods. The orthogonal collocation method involves the simplest algebra, obtains high accuracy, and is more computationally efficient than the other methods evaluated.

  18. Reverse engineering of complex biological body parts by squared distance enabled non-uniform rational B-spline technique and layered manufacturing.

    PubMed

    Pandithevan, Ponnusamy

    2015-02-01

    In tissue engineering, the successful modeling of a scaffold for the replacement of damaged body parts depends mainly on the external geometry and internal architecture, in order to avoid adverse effects such as pain and inability to transfer load to the surrounding bone. Due to their flexibility in controlling parameters, layered manufacturing processes are widely used for the fabrication of bone tissue engineering scaffolds from a given computer-aided design model. This article presents a squared distance minimization approach for weight optimization of non-uniform rational B-spline curves and surfaces, to modify the geometry so that it exactly fits the defect region automatically and thus to fabricate a scaffold specific to subject and site. The study showed that although the squared distance method reduced the errors in the B-spline curve and surface more than the point distance and tangent distance methods did, the errors could be reduced further for the rational B-spline curve and surface, because the optimal weights can change the shape toward that desired for the defect site. To measure the efficacy of the present approach, the results were compared with the point distance and tangent distance methods in optimizing non-rational and rational B-spline curve and surface fits for the defect site. The optimized geometry was then used to construct the scaffold in a fused deposition modeling system as an example. The results revealed that the squared distance-based weight optimization of the rational curve and surface makes the defect-specific geometry fit the defect region better than the other methods used.

  19. Evaluation of Direct Collocation Optimal Control Problem Formulations for Solving the Muscle Redundancy Problem.

    PubMed

    De Groote, Friedl; Kinney, Allison L; Rao, Anil V; Fregly, Benjamin J

    2016-10-01

    Estimation of muscle forces during motion involves solving an indeterminate problem (more unknown muscle forces than joint moment constraints), frequently via optimization methods. When the dynamics of muscle activation and contraction are modeled for consistency with muscle physiology, the resulting optimization problem is dynamic and challenging to solve. This study sought to identify a robust and computationally efficient formulation for solving these dynamic optimization problems using direct collocation optimal control methods. Four problem formulations were investigated for walking, based on both two- and three-dimensional models. The formulations differed in the use of either an explicit or implicit representation of contraction dynamics, with either muscle length or tendon force as a state variable. The implicit representations introduced additional controls defined as the time derivatives of the states, allowing the nonlinear equations describing contraction dynamics to be imposed as algebraic path constraints, simplifying their evaluation. Problem formulation affected computational speed and robustness to the initial guess. The formulation that used explicit contraction dynamics with muscle length as a state failed to converge in most cases. In contrast, the two formulations that used implicit contraction dynamics converged to an optimal solution in all cases for all initial guesses, with tendon force as a state generally being the fastest. Future work should focus on comparing the present approach to other approaches for computing muscle forces. The present approach lacks some of the major limitations of established methods, such as static optimization and computed muscle control, while remaining computationally efficient.
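
    The implicit trick described above is easy to sketch: treat the state derivatives as extra controls, impose the dynamics as algebraic path constraints, and link states to derivatives with defect constraints. Below is a toy first-order "contraction dynamics" stand-in (all names and numbers invented); an NLP solver would drive both residual sets to zero.

      import numpy as np

      def path_constraints(states, derivs, excitations, f_implicit):
          """Residuals of the implicit dynamics at every mesh point."""
          return np.array([f_implicit(s, ds, e)
                           for s, ds, e in zip(states, derivs, excitations)])

      def trapezoid_defects(states, derivs, h):
          """Defects x_{k+1} - x_k - h/2 (dx_k + dx_{k+1}); zero at a solution."""
          return (states[1:] - states[:-1]
                  - 0.5 * h * (derivs[1:] + derivs[:-1]))

      # Toy implicit dynamics: tau * ds/dt - (e - s) = 0, written implicitly
      # so the nonlinear part becomes a constraint rather than an ODE solve.
      tau = 0.05
      f = lambda s, ds, e: tau * ds - (e - s)

      h, n = 0.01, 50
      e = np.clip(np.sin(np.linspace(0, 3, n)), 0, 1)   # excitation guess
      s = np.linspace(0.1, 0.8, n)                      # state guess
      ds = (e - s) / tau                                # derivative controls
      print(np.max(np.abs(path_constraints(s, ds, e, f))))  # 0 by construction
      print(np.max(np.abs(trapezoid_defects(s, ds, h))))    # nonzero for a guess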

  20. Optimal aeroassisted orbital transfer with plane change using collocation and nonlinear programming

    NASA Technical Reports Server (NTRS)

    Shi, Yun. Y.; Nelson, R. L.; Young, D. H.

    1990-01-01

    The fuel optimal control problem arising in the non-planar orbital transfer employing aeroassisted technology is addressed. The mission involves the transfer from high energy orbit (HEO) to low energy orbit (LEO) with orbital plane change. The basic strategy here is to employ a combination of propulsive maneuvers in space and aerodynamic maneuvers in the atmosphere. The basic sequence of events for the aeroassisted HEO to LEO transfer consists of three phases. In the first phase, the orbital transfer begins with a deorbit impulse at HEO which injects the vehicle into an elliptic transfer orbit with perigee inside the atmosphere. In the second phase, the vehicle is optimally controlled by lift and bank angle modulations to perform the desired orbital plane change and to satisfy heating constraints. Because of the energy loss during the turn, an impulse is required to initiate the third phase to boost the vehicle back to the desired LEO orbital altitude. The third impulse is then used to circularize the orbit at LEO. The problem is solved by a direct optimization technique which uses piecewise polynomial representation for the state and control variables and collocation to satisfy the differential equations. This technique converts the optimal control problem into a nonlinear programming problem which is solved numerically. Solutions were obtained for cases with and without heat constraints and for cases of different orbital inclination changes. The method appears to be more powerful and robust than other optimization methods. In addition, the method can handle complex dynamical constraints.

  1. Quiet Clean Short-haul Experimental Engine (QCSEE). Ball spline pitch change mechanism design report

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Detailed design parameters are presented for a variable-pitch change mechanism. The mechanism is a mechanical system containing a ball screw/spline driving two counteracting master bevel gears that mesh with pinion gears attached to each of the 18 fan blades.

  2. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582

  4. A Spline Approximating Algorithm for the Rezoning (remapping)of Arbitrary Meshes

    NASA Astrophysics Data System (ADS)

    Wang, Ruili

    2001-06-01

    Traditionally, numerical simulation of fluid dynamics has taken the form of Lagrangian or Eulerian methods. Lagrangian methods, in which the computational mesh travels with the fluid, are ideal for the many problems which involve interfaces between materials or free surfaces. However, multidimensional Lagrangian calculations can typically be carried out for only a limited time before severe mesh distortion, or even mesh tangling, destroys the calculation. Eulerian methods, in which the mesh is fixed, are ideal for flows with large deformation, but the sharp resolution of interfaces or free surfaces is lost. Arbitrary Lagrangian-Eulerian (ALE) methods in computational fluid dynamics therefore require the periodic remapping of conserved quantities such as mass, momentum, and energy from one old, distorted mesh to some other arbitrarily defined mesh. This procedure is a type of interpolation which is usually constrained to be conservative and monotone. This report presents remapping algorithms based on spline approximation for numerical simulation codes using unstructured or adaptive meshes. The approach applies not only to structured meshes but also to unstructured meshes, and the techniques preserve the cell-wise distribution of physical quantities more accurately while keeping the procedure simple.
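
    A minimal one-dimensional sketch of a conservative spline remap: reconstruct the cumulative mass with a cubic spline, then difference it at the new cell edges, so total mass is preserved by construction (monotonicity would still need limiting; the data are synthetic).

      import numpy as np
      from scipy.interpolate import CubicSpline

      old_edges = np.linspace(0.0, 1.0, 33)
      centers = 0.5 * (old_edges[:-1] + old_edges[1:])
      rho = 1.0 + 0.5 * np.sin(2 * np.pi * centers)    # old cell densities
      mass = rho * np.diff(old_edges)                  # old cell masses

      # Spline through the cumulative mass function M(x); differencing M
      # at any set of edges redistributes exactly the same total mass.
      M = CubicSpline(old_edges, np.concatenate([[0.0], np.cumsum(mass)]))

      interior = np.sort(np.random.default_rng(6).uniform(0.0, 1.0, 31))
      new_edges = np.concatenate([[0.0], interior, [1.0]])
      new_mass = np.diff(M(new_edges))                 # remapped cell masses
      assert np.isclose(new_mass.sum(), mass.sum())    # conservation holds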

  5. Parallel iterative solution of the Hermite Collocation equations on GPUs II

    NASA Astrophysics Data System (ADS)

    Vilanakis, N.; Mathioudakis, E.

    2014-03-01

    Hermite Collocation is a high-order finite element method for Boundary Value Problems modelling applications in several fields of science and engineering. Application of this integration-free numerical solver to linear BVPs results in a large and sparse general system of algebraic equations, suggesting the use of an efficient iterative solver, especially for realistic simulations. In part I of this work an efficient parallel algorithm of the Schur complement method coupled with the Bi-Conjugate Gradient Stabilized (BiCGSTAB) iterative solver was designed for multicore computing architectures with a Graphics Processing Unit (GPU). In the present work the proposed algorithm has been extended to high performance computing environments consisting of multiprocessor machines with multiple GPUs. Since this is a distributed GPU and shared CPU memory parallel architecture, a hybrid memory treatment is needed for the development of the parallel algorithm. The algorithm was implemented on a multiprocessor HP SL390 machine with Tesla M2070 GPUs using the OpenMP and OpenACC standards. Execution time measurements reveal the efficiency of the parallel implementation.
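
    The solver stage can be sketched with scipy's BiCGSTAB on a stand-in sparse nonsymmetric system (an illustrative tridiagonal stencil, not an actual Hermite collocation matrix, and without the Schur-complement or GPU machinery):

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import bicgstab

      # Large, sparse, nonsymmetric system as a stand-in for the
      # collocation equations (diagonally dominant, so BiCGSTAB converges).
      n = 2000
      A = diags([-1.0, 2.5, -1.3], [-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)

      x, info = bicgstab(A, b)              # info == 0 signals convergence
      residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
      print(info, residual)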

  6. Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.

    PubMed

    Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei

    2014-02-01

    Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-Commerce websites. In applications where response time is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced by the quantization process, in which the visual words are considered individually, ignoring the contextual relations among words. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of the words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by 1,000 times, and under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by a color-moment feature, which reduces the time cost by 9,202% while maintaining performance comparable to the state-of-the-art methods. PMID:26270906

  7. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-11-01

    The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background; we therefore present a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called a baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use the B-spline as the fitting basis due to its advantages of low order and smoothness, which avoid under-fitting and over-fitting effectively. In addition, we also present an automatic, adaptive knot generation method to replace traditional uniform knots. The algorithm achieves the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected, and their baselines corrected, by the proposed method.
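
    A minimal sketch of the cyclic-approximation idea with a low-order B-spline fit: repeatedly fit, then clip the working signal to the fit so the peaks stop pulling the baseline upward. Uniform interior knots are used here, so the paper's adaptive knot generation is not reproduced; the spectrum is synthetic.

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      rng = np.random.default_rng(7)
      shift = np.linspace(400, 1800, 1400)             # Raman shift (cm^-1)
      baseline_true = 1e-6 * (shift - 400) ** 2 + 5    # smooth fluorescence
      peaks = 40 * np.exp(-0.5 * ((shift - 1050) / 6) ** 2)
      signal = baseline_true + peaks + rng.normal(0, 0.3, shift.size)

      knots = np.linspace(shift[0], shift[-1], 8)[1:-1]  # interior knots
      work = signal.copy()
      for _ in range(30):                              # cyclic approximation
          spl = LSQUnivariateSpline(shift, work, knots, k=3)
          # Clip the working signal to the fit so peaks are ignored.
          work = np.minimum(work, spl(shift))
      corrected = signal - spl(shift)                  # baseline-corrected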

  8. Four-dimensional B-spline-based motion analysis of tagged cardiac MR images

    NASA Astrophysics Data System (ADS)

    Ozturk, Cengizhan; McVeigh, Elliot R.

    1999-05-01

    In recent years, with the development of new MRI techniques, noninvasive evaluation of global and regional cardiac function is becoming a reality. One of the methods used for this purpose is MRI tagging, in which spatially encoded magnetic saturation planes (tags) are created within tissues. These act as temporary markers and move with the tissue. In cardiac tagging, the tag deformation pattern provides useful qualitative and quantitative information about the functional properties of the underlying myocardium. The measured deformation of a single tag plane contains only unidirectional information about the past motion. In order to track the motion of a cardiac material point, these sparse, one-dimensional data have to be combined with similar information gathered from other tag sets and all time frames. Previously, several methods have been developed which rely on the specific geometry of the chambers. Here, we employ a simple, image-plane-based Cartesian coordinate system and provide a stepwise method to describe the heart motion using a four-dimensional tensor product of B-splines. The proposed displacement and forward motion fields exhibited sub-pixel accuracy. Since our motion fields are parametric and based on an image-plane-based coordinate system, trajectories or other derived values (velocity, acceleration, strains...) can be calculated for any desired point on the MRI images. This method is sufficiently general that the motion of any tagged structure can be tracked.

  9. Miniaturized Multi-Band Antenna via Element Collocation

    SciTech Connect

    Martin, R P

    2012-06-01

    The resonant frequency of a microstrip patch antenna may be reduced through the addition of slots in the radiating element. Expanding upon this concept in favor of a significant reduction in the tuned width of the radiator, nearly 60% of the antenna metallization is removed, as seen in the top view of the antenna’s radiating element (shown in red, below, left). To facilitate an increase in the gain of the antenna, the radiator is suspended over the ground plane (green) by an air substrate at a height of 0.250" while being mechanically supported by 0.030" thick Rogers RO4003 laminate in the same profile as the element. Although the entire surface of the antenna (red) provides 2.45 GHz operation with insignificant negative effects on performance after material removal, the smaller square microstrip in the middle must be isolated from the additional aperture in order to afford higher frequency operation. A low insertion loss path centered at 2.45 GHz may simultaneously provide considerable attenuation at additional frequencies through the implementation of a series-parallel, resonant reactive path. However, an inductive reactance alone will not permit lower frequency energy to propagate across the intended discontinuity. To mitigate this, a capacitance is introduced in series with the inductor, generating a resonance at 2.45 GHz with minimum forward transmission loss. Four of these reactive pairs are placed between the coplanar elements as shown. Therefore, the aperture of the lower-frequency outer segment includes the smaller radiator while the higher frequency section is isolated from the additional material. In order to avoid cross-polarization losses due to the orientation of a transmitter or receiver in reference to the antenna, circular polarization is realized by a quadrature coupler for each collocated antenna as seen in the bottom view of the antenna (right). To generate electromagnetic radiation concentrically rotating about the direction of propagation

  10. Uncertainty Quantification via Random Domain Decomposition and Probabilistic Collocation on Sparse Grids

    SciTech Connect

    Lin, Guang; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2010-09-01

    Due to lack of knowledge or insufficient data, many physical systems are subject to uncertainty. Such uncertainty occurs on a multiplicity of scales. In this study, we conduct the uncertainty analysis of diffusion in random composites with two dominant scales of uncertainty: large-scale uncertainty in the spatial arrangement of materials and small-scale uncertainty in the parameters within each material. A general two-scale framework that combines random domain decomposition (RDD) and the probabilistic collocation method (PCM) on sparse grids is developed to quantify the large and small scales of uncertainty, respectively. Using sparse grid points instead of standard grids based on full tensor products for both the large and small scales of uncertainty can greatly reduce the overall computational cost, especially for random processes with small correlation length (large number of random dimensions). For the one-dimensional random contact point problem and the random inclusion problem, analytical solutions and Monte Carlo simulations, respectively, were used to verify the accuracy of the combined RDD-PCM approach. Additionally, we employed our combined RDD-PCM approach on two- and three-dimensional examples to demonstrate that it provides efficient, robust and nonintrusive approximations for the statistics of diffusion in random composites.

  11. A scalable framework for the solution of stochastic inverse problems using a sparse grid collocation approach

    SciTech Connect

    Zabaras, N.; Ganapathysubramanian, B.

    2008-04-20

    Experimental evidence suggests that the dynamics of many physical phenomena are significantly affected by the underlying uncertainties associated with variations in properties and fluctuations in operating conditions. Recent developments in stochastic analysis have opened the possibility of realistic modeling of such systems in the presence of multiple sources of uncertainties. These advances raise the possibility of solving the corresponding stochastic inverse problem: the problem of designing/estimating the evolution of a system in the presence of multiple sources of uncertainty given limited information. A scalable, parallel methodology for stochastic inverse/design problems is developed in this article. The representation of the underlying uncertainties and the resultant stochastic dependent variables is performed using a sparse grid collocation methodology. A novel stochastic sensitivity method is introduced based on multiple solutions to deterministic sensitivity problems. The stochastic inverse/design problem is transformed into a deterministic optimization problem in a larger-dimensional space that is subsequently solved using deterministic optimization algorithms. The design framework relies entirely on deterministic direct and sensitivity analysis of the continuum systems, thereby significantly enhancing the range of applicability of the framework for the design, in the presence of uncertainty, of many other systems usually analyzed with legacy codes. Various illustrative examples with multiple sources of uncertainty, including inverse heat conduction problems in random heterogeneous media, are provided to showcase the developed framework.

  13. A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation

    NASA Astrophysics Data System (ADS)

    Rieser, Daniel; Mayer-Guerr, Torsten

    2014-05-01

    The depth of the Moho discontinuity is commonly derived from seismic observations, gravity measurements, or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for the estimation of the Moho depth on a regional scale. Due to its mission configuration and measurement setup, GOCE contributes valuable information in particular in the medium wavelengths of the gravity field spectrum, which is also of special interest for the crust-mantle boundary. In contrast to other studies, we use the full information of the gradient tensor in all three dimensions. The problem is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation, the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are derived directly by LSC on a regional scale, which in turn are converted into Moho depths. First investigations proved the ability of this method to resolve the gravity inversion problem even with a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution, the processing strategy is introduced and the most recent developments and results using the currently available GOCE data are presented.
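
    In its generic textbook form (which may differ in detail from the authors' implementation), the LSC estimate of a signal vector s from an observation vector l is

      \[ \hat{s} \;=\; C_{s\ell}\,\bigl(C_{\ell\ell} + C_{nn}\bigr)^{-1}\,\ell \]

    where C_{\ell\ell} is the signal covariance of the observations (here, gradient-tensor components), C_{nn} the noise covariance, and C_{s\ell} the cross-covariance between the sought surface densities and the observations.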

  14. Vibration suppression in cutting tools using collocated piezoelectric sensors/actuators with an adaptive control algorithm

    SciTech Connect

    Radecki, Peter P; Farinholt, Kevin M; Park, Gyuhae; Bement, Matthew T

    2008-01-01

    Machining is a critical process in many engineering applications. In high-precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties, can vary in real time, resulting in unexpected or undesirable effects on the surface finish of the machined product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high-bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses from machining processes. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control versus baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern-day machining.
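
    Positive position feedback, in its standard textbook form (the adaptive variant used by the authors may differ), passes the collocated position signal y through a second-order filter whose state is fed back positively to the actuator; the compensator transfer function is

      \[ \frac{\eta(s)}{y(s)} \;=\; \frac{g\,\omega_f^{2}}{s^{2} + 2\,\zeta_f\,\omega_f\, s + \omega_f^{2}} \]

    where the filter frequency \omega_f is tuned near the targeted chatter mode, \zeta_f sets the filter damping, and g the feedback gain.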

  15. Surface quality monitoring for process control by on-line vibration analysis using an adaptive spline wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Luo, G. Y.; Osypiw, D.; Irle, M.

    2003-05-01

    The dynamic behaviour of wood machining processes affects the surface finish quality of machined workpieces. In order to meet the requirements of increased production efficiency and improved product quality, surface quality information is needed for enhanced process control. However, current methods relying on expensive devices or sophisticated designs may not be suitable for industrial real-time application. This paper presents a novel approach to surface quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments have been performed to extract the feature of interest: the correlation between amplitude changes in the relevant vibration frequency band(s) and the surface quality. The experimental results demonstrate that the change of amplitude in selected frequency bands with variable resolution (linear and non-linear) reflects the quality of the surface finish, and that the root sum square of the wavelet power spectrum is a good indicator of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed while maintaining a constant spindle motor speed during cutting. This will lead to higher-level control and machining rates while keeping dimensional integrity and surface finish within specification.
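
    The root-sum-square indicator can be sketched with a standard wavelet toolbox. In the sketch below, the biorthogonal spline wavelet 'bior3.3' from PyWavelets stands in for the paper's adaptive B-spline wavelet, and the choice of bands is hypothetical:

      import numpy as np
      import pywt

      def band_rss(vibration, wavelet="bior3.3", levels=5, bands=(2, 3)):
          """Root sum square of wavelet power in the chosen decomposition bands."""
          coeffs = pywt.wavedec(vibration, wavelet, level=levels)
          # coeffs[0] is the approximation; coeffs[1:] are details, coarse to fine.
          return np.sqrt(sum(np.sum(c ** 2) for c in (coeffs[b] for b in bands)))

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 2048)
      vibration = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size)
      print(f"band RSS = {band_rss(vibration):.3f}")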

  16. On the Stability of Collocated Controllers in the Presence of Uncertain Nonlinearities and Other Perils

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1985-01-01

    Robustness properties are investigated for two types of controllers for large flexible space structures, both of which use collocated sensors and actuators. The first type is an attitude controller which uses negative definite feedback of measured attitude and rate, while the second type is a damping enhancement controller which uses only velocity (rate) feedback. It is proved that collocated attitude controllers preserve closed-loop global asymptotic stability when linear actuator/sensor dynamics satisfying certain phase conditions are present, or when monotonically increasing nonlinearities are present. For velocity feedback controllers, global asymptotic stability is proved under much weaker conditions. In particular, they have 90 deg phase margin and can tolerate nonlinearities belonging to the (0, infinity) sector in the actuator/sensor characteristics. The results significantly enhance the viability of both types of collocated controllers, especially when the available information about the large space structure (LSS) parameters is inadequate or inaccurate.

  17. Merging quantum-chemistry with B-splines to describe molecular photoionization

    NASA Astrophysics Data System (ADS)

    Argenti, L.; Marante, C.; Klinker, M.; Corral, I.; Gonzalez, J.; Martin, F.

    2016-05-01

    Theoretical description of observables in attosecond pump-probe experiments requires a good representation of the system's ionization continuum. For polyelectronic atoms and molecules, however, this is still a challenge, due to the complicated short-range structure of correlated electronic wavefunctions. Whereas quantum chemistry packages (QCPs) implementing sophisticated methods to compute bound electronic molecular states are well established, comparable tools for the continuum are not yet widely available. To tackle this problem, we have developed a new approach that, by means of a hybrid Gaussian-B-spline basis, interfaces existing QCPs with close-coupling scattering methods. To illustrate the viability of this approach, we report results for the multichannel ionization of the helium atom and of the hydrogen molecule that are in excellent agreement with existing accurate benchmarks. These findings, together with the flexibility of QCPs, make this approach a good candidate for the theoretical study of the ionization of polyelectronic systems. FP7/ERC Grant XCHEM 290853.

  18. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    SciTech Connect

    Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.

    2007-02-02

    PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k3-weighted EXAFS data.
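
    The processing chain PySpline automates can be outlined in a few lines: polynomial pre-edge subtraction, conversion to photoelectron wavenumber, and an FFT of the k3-weighted oscillations. The scan, edge energy, and fit window below are hypothetical stand-ins for real averaged data:

      import numpy as np

      energy = np.linspace(8800, 9800, 1000)  # eV, hypothetical absorption scan
      mu = np.tanh((energy - 9000) / 20) + 0.02 * np.sin(energy / 15) + 1.0

      # 1. Pre-edge: fit a line well below the edge and subtract it everywhere.
      pre = (energy > 8820) & (energy < 8950)
      mu = mu - np.polyval(np.polyfit(energy[pre], mu[pre], 1), energy)

      # 2. Wavenumber k above the edge energy E0 (k [1/A] = sqrt(0.2625 * E [eV])).
      e0 = 9000.0
      post = energy > e0
      k = np.sqrt(0.2625 * (energy[post] - e0))
      chi = mu[post] / np.max(mu[post])  # crude normalization for the sketch

      # 3. k^3-weight on a uniform k grid, window, and transform to R-space.
      k_grid = np.linspace(k.min(), k.max(), 512)
      chi_k3 = np.interp(k_grid, k, chi) * k_grid**3
      r_space = np.abs(np.fft.rfft(chi_k3 * np.hanning(k_grid.size)))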

  19. Evaluating techniques for multivariate classification of non-collocated spatial data.

    SciTech Connect

    McKenna, Sean Andrew

    2004-09-01

    Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases the variables may be sampled at different locations, with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations, after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set from the upper Dakota aquifer in Kansas (USA), previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach to deal with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed, and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. The resulting classification and uncertainty maps are compared between all non-collocated approaches as well as to the original collocated approach.

  20. On the anomaly of velocity-pressure decoupling in collocated mesh solutions

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook; Vanoverbeke, Thomas

    1991-01-01

    The use of various pressure correction algorithms originally developed for fully staggered meshes can yield a velocity-pressure decoupled solution on collocated meshes. The mechanism that causes velocity-pressure decoupling is identified. It is shown that the use of a partial differential equation for the incremental pressure eliminates this mechanism and yields a velocity-pressure coupled solution. The example flows considered are a three-dimensional lid-driven cavity flow and a laminar flow through a 90 deg bend square duct. Numerical results obtained using the collocated mesh are in good agreement with the measured data and other numerical results.
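
    In generic form (the paper's exact discretization may differ), the incremental pressure p' in such correction schemes satisfies a Poisson equation driven by the divergence of the provisional velocity field u*:

      \[ \nabla^{2} p' \;=\; \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*} \]

    Discretizing this PDE directly on the collocated mesh, instead of transplanting a correction stencil derived for a fully staggered arrangement, is what removes the decoupling mechanism identified in the paper.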

  1. Temporal gravity field modeling based on least square collocation with short-arc approach

    NASA Astrophysics Data System (ADS)

    Ran, Jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to deriving the Earth's gravity field, with two main objectives. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least squares collocation method. This method has received less attention than the least squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters provide valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced with the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed by the Geoforschungszentrum (GFZ), the Center for Space Research (CSR), and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

  2. Formulaic Language and Collocations in German Essays: From Corpus-Driven Data to Corpus-Based Materials

    ERIC Educational Resources Information Center

    Krummes, Cedric; Ensslin, Astrid

    2015-01-01

    Whereas there exists a plethora of research on collocations and formulaic language in English, this article contributes towards a somewhat less developed area: the understanding and teaching of formulaic language in German as a foreign language. It analyses formulaic sequences and collocations in German writing (corpus-driven) and provides modern…

  3. An Automatic Collocation Writing Assistant for Taiwanese EFL Learners: A Case of Corpus-Based NLP Technology

    ERIC Educational Resources Information Center

    Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin

    2008-01-01

    Previous work in the literature reveals that EFL learners were deficient in collocations that are a hallmark of near native fluency in learner's writing. Among different types of collocations, the verb-noun (V-N) one was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…

  4. The Challenge of English Language Collocation Learning in an ES/FL Environment: PRC Students in Singapore

    ERIC Educational Resources Information Center

    Ying, Yang

    2015-01-01

    This study aimed to seek an in-depth understanding about English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…

  5. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    SciTech Connect

    Guo, Y.; Keller, J.; Errichello, R.; Halse, C.

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  6. CT segmentation of dental shapes by anatomy-driven reformation imaging and B-spline modelling.

    PubMed

    Barone, S; Paoli, A; Razionale, A V

    2016-06-01

    Dedicated imaging methods are among the most important tools of modern computer-aided medical applications. In the last few years, cone beam computed tomography (CBCT) has gained popularity in digital dentistry for 3D imaging of jawbones and teeth. However, the anatomy of a maxillofacial region complicates the assessment of tooth geometry and anatomical location when using standard orthogonal views of the CT data set. In particular, a tooth is defined by a sub-region, which cannot be easily separated from surrounding tissues by only considering pixel grey-intensity values. For this reason, an image enhancement is usually necessary in order to properly segment tooth geometries. In this paper, an anatomy-driven methodology to reconstruct individual 3D tooth anatomies by processing CBCT data is presented. The main concept is to generate a small set of multi-planar reformation images along significant views for each target tooth, driven by the individual anatomical geometry of a specific patient. The reformation images greatly enhance the clearness of the target tooth contours. A set of meaningful 2D tooth contours is extracted and used to automatically model the overall 3D tooth shape through a B-spline representation. The effectiveness of the methodology has been verified by comparing some anatomy-driven reconstructions of anterior and premolar teeth with those obtained by using standard tooth segmentation tools. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26418417
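
    The final modelling step, fitting a closed B-spline to the 2D tooth contours extracted from the reformation images, can be sketched with SciPy's parametric spline routines; the contour points below are synthetic stand-ins:

      import numpy as np
      from scipy.interpolate import splprep, splev

      theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
      x = 5 * np.cos(theta) + 0.3 * np.cos(3 * theta)  # hypothetical tooth contour
      y = 7 * np.sin(theta) + 0.2 * np.sin(5 * theta)

      # per=True requests a periodic (closed) spline; s controls the smoothing.
      tck, _ = splprep([x, y], s=0.5, per=True)
      xs, ys = splev(np.linspace(0, 1, 400), tck)  # densely resampled 2D contour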

  7. Linear mixed-effect multivariate adaptive regression splines applied to nonlinear pharmacokinetics data.

    PubMed

    Gries, J M; Verotta, D

    2000-08-01

    In a frequently performed pharmacokinetics study, different subjects are given different doses of a drug. After each dose is given, drug concentrations are observed according to the same sampling design. The goal of the experiment is to obtain a representation for the pharmacokinetics of the drug, and to determine whether drug concentrations observed at different times after a dose are linear with respect to dose. The goal of this paper is to obtain a representation for concentration as a function of time and dose which (a) makes no assumptions about the underlying pharmacokinetics of the drug; (b) takes into account the repeated-measures structure of the data; and (c) detects nonlinearities with respect to dose. To address (a) we use a multivariate adaptive regression splines (MARS) representation, which we recast into a linear mixed-effects model, addressing (b). To detect nonlinearity we describe a general algorithm that obtains nested (mixed-effects) MARS representations. In the pharmacokinetics application, the algorithm obtains representations containing time, and time and dose, respectively, with the property that the basis functions of the first representation are a subset of the second. Standard statistical model selection criteria are used to select representations linear or nonlinear with respect to dose. The method can be applied to a variety of pharmacokinetic (and pharmacodynamic) preclinical and phase I-III trials. Examples of applications of the methodology to real and simulated data are reported.
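
    In generic notation (not necessarily the authors'), a MARS representation is an expansion in products of hinge functions,

      \[ f(x) \;=\; \beta_0 + \sum_{m=1}^{M}\beta_m \prod_{j}\bigl[\pm\,(x_{v(j,m)} - t_{j,m})\bigr]_{+}, \qquad [u]_{+} = \max(u, 0), \]

    where the knots t_{j,m} and the split variables are chosen adaptively from the data; the mixed-effects recasting then treats a subset of the coefficients as random effects across subjects to capture the repeated-measures structure.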

  8. Spline-based image-to-volume registration for three-dimensional electron microscopy.

    PubMed

    Jonić, S; Sorzano, C O S; Thévenaz, P; El-Bez, C; De Carlo, S; Unser, M

    2005-07-01

    This paper presents an algorithm based on a continuous framework for a posteriori angular and translational assignment in three-dimensional electron microscopy (3DEM) of single particles. Our algorithm can be used advantageously to refine the assignment of standard quantized-parameter methods by registering the images to a reference 3D particle model. We achieve the registration by employing a gradient-based iterative minimization of a least-squares measure of dissimilarity between an image and a projection of the volume in the Fourier transform (FT) domain. We compute the FT of the projection using the central-slice theorem (CST). To compute the gradient accurately, we take advantage of a cubic B-spline model of the data in the frequency domain. To improve the robustness of the algorithm, we weight the cost function in the FT domain and apply a "mixed" strategy for the assignment based on the minimum value of the cost function at registration for several different initializations. We validate our algorithm in a fully controlled simulation environment. We show that the mixed strategy improves the assignment accuracy; on our data, the quality of the angular and translational assignment was better than 2 voxels (i.e., 6.54 angstroms). We also test the performance of our algorithm on real EM data. We conclude that our algorithm outperforms a standard projection-matching refinement in terms of both consistency of 3D reconstructions and speed. PMID:15885434

  9. Towards a More General Type of Univariate Constrained Interpolation with Fractal Splines

    NASA Astrophysics Data System (ADS)

    Chand, A. K. B.; Viswanathan, P.; Reddy, K. M.

    2015-09-01

    Recently, in [Electron. Trans. Numer. Anal. 41 (2014) 420-442] the authors introduced a new class of rational cubic fractal interpolation functions with linear denominators via fractal perturbation of traditional nonrecursive rational cubic splines and investigated their basic shape-preserving properties. The main goal of the current paper is to embark on univariate constrained fractal interpolation that is more general than what has been considered so far. To this end, we propose some strategies for selecting the parameters of the rational fractal spline so that the interpolating curves lie strictly above or below a prescribed linear or quadratic spline function. The approximation property of the proposed rational cubic fractal spline is established by using the Peano kernel theorem as an interlude. The paper also provides an illustration of the background theory, supported by examples.

  10. Modeling of complex-valued wiener systems using B-spline neural network.

    PubMed

    Hong, Xia; Chen, Sheng

    2011-05-01

    In this brief, a new complex-valued B-spline neural network is introduced in order to model the complex-valued Wiener system using observational input/output data. The complex-valued nonlinear static function in the Wiener system is represented using the tensor product from two univariate B-spline neural networks, using the real and imaginary parts of the system input. Following the use of a simple least squares parameter initialization scheme, the Gauss-Newton algorithm is applied for the parameter estimation, which incorporates the De Boor algorithm, including both the B-spline curve and the first-order derivatives recursion. Numerical examples, including a nonlinear high-power amplifier model in communication systems, are used to demonstrate the efficacy of the proposed approaches. PMID:21550875
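
    The tensor-product construction can be sketched directly with SciPy's B-spline objects: two univariate cubic bases, one over the real part and one over the imaginary part of the input, combined through a complex coefficient matrix. The knots and coefficients below are hypothetical (the paper estimates them by Gauss-Newton using the De Boor recursion):

      import numpy as np
      from scipy.interpolate import BSpline

      k = 3  # cubic B-splines
      knots = np.concatenate(([-2.0] * k, np.linspace(-2, 2, 9), [2.0] * k))
      n_basis = len(knots) - k - 1

      def basis_matrix(u):
          """Evaluate every univariate B-spline basis function at the points u."""
          eye = np.eye(n_basis)
          return np.column_stack([BSpline(knots, eye[i], k)(u) for i in range(n_basis)])

      rng = np.random.default_rng(1)
      C = rng.standard_normal((n_basis, n_basis)) + 1j * rng.standard_normal((n_basis, n_basis))

      def wiener_static(z):
          """Tensor-product B-spline model of a complex-valued static nonlinearity."""
          br = basis_matrix(np.atleast_1d(z).real)
          bi = basis_matrix(np.atleast_1d(z).imag)
          return np.einsum("nr,rs,ns->n", br, C, bi)

      print(wiener_static(np.array([0.3 + 0.4j])))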

  11. Data-worth analysis through probabilistic collocation-based Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Dai, Cheng; Xue, Liang; Zhang, Dongxiao; Guadagnini, Alberto

    2016-09-01

    We propose a new and computationally efficient data-worth analysis and quantification framework keyed to the characterization of target state variables in groundwater systems. We focus on dynamically evolving plumes of dissolved chemicals migrating in randomly heterogeneous aquifers. An accurate prediction of the detailed features of solute plumes requires collecting a substantial amount of data. However, constraints dictated by the availability of financial resources and ease of access to the aquifer system suggest the importance of assessing the expected value of data before they are actually collected. Data-worth analysis is targeted at quantifying the impact of new potential measurements on the expected reduction of predictive uncertainty based on a given process model. Integration of the Ensemble Kalman Filter method within a data-worth analysis framework enables us to assess data worth sequentially, which is a key desirable feature for monitoring scheme design in a contaminant transport scenario. However, it is remarkably challenging because of the (typically) high computational cost involved, considering that repeated solutions of the inverse problem are required. As a computationally efficient scheme, we embed in the data-worth analysis framework a modified version of the Probabilistic Collocation Method-based Ensemble Kalman Filter proposed by Zeng et al. (2011), so that we take advantage of the ability to assimilate data sequentially in time through a surrogate model constructed via the polynomial chaos expansion. We illustrate our approach on a set of synthetic scenarios involving solute migrating in a two-dimensional random permeability field. Our results demonstrate the computational efficiency of our approach and its ability to quantify the impact of the design of the monitoring network on the reduction of uncertainty associated with the characterization of a migrating contaminant plume.
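
    The surrogate idea can be sketched in one stochastic dimension: fit a polynomial chaos expansion (a Hermite basis in a standard normal input) to a few forward-model runs at collocation points, then query the inexpensive surrogate inside the filter. The forward model and expansion order here are hypothetical:

      import numpy as np

      def forward_model(xi):
          """Hypothetical stand-in for an expensive transport simulation."""
          return np.sin(xi) + 0.5 * xi

      order = 4
      nodes, _ = np.polynomial.hermite_e.hermegauss(order + 2)  # collocation points
      # Design matrix of probabilists' Hermite polynomials He_0..He_order at the nodes.
      H = np.column_stack([np.polynomial.hermite_e.hermeval(nodes, np.eye(order + 1)[i])
                           for i in range(order + 1)])
      coeffs, *_ = np.linalg.lstsq(H, forward_model(nodes), rcond=None)

      def surrogate(xi):
          return np.polynomial.hermite_e.hermeval(xi, coeffs)

      print(surrogate(0.7), forward_model(0.7))  # surrogate vs. full model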

  12. Pattern recognition and lithological interpretation of collocated seismic and magnetotelluric models using self-organizing maps

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Muñoz, G.; Moeck, I.

    2012-05-01

    Joint interpretation of models from seismic tomography and inversion of magnetotelluric (MT) data is an efficient approach to determine the lithology of the subsurface. Statistical methods are well established but were developed for only two types of models so far (seismic P velocity and electrical resistivity). We apply self-organizing maps (SOMs), which have no limitations in the number of parameters considered in the joint interpretation. Our SOM method includes (1) generation of data vectors from the seismic and MT images, (2) unsupervised learning, (3) definition of classes by algorithmic segmentation of the SOM using image processing techniques and (4) application of learned knowledge to classify all data vectors and assign a lithological interpretation for each data vector. We apply the workflow to collocated P velocity, vertical P-velocity gradient and resistivity models derived along a 40 km profile around the geothermal site Groß Schönebeck in the Northeast German Basin. The resulting lithological model consists of eight classes covering Cenozoic, Mesozoic and Palaeozoic sediments down to 5 km depth. There is a remarkable agreement between the litho-type distribution from the SOM analysis and regional marker horizons interpolated from sparse 2-D industrial reflection seismic data. The most interesting features include (1) characteristic properties of the Jurassic (low P-velocity gradients, low resistivity values) interpreted as the signature of shales, and (2) a pattern within the Upper Permian Zechstein layer with low resistivity and increased P-velocity values within the salt depressions and increased resistivity and decreased P velocities in the salt pillows. The latter is explained in our interpretation by flow of less dense salt matrix components to form the pillows while denser and more brittle evaporites such as anhydrite remain in place during the salt mobilization.
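
    The core of step (2), unsupervised SOM learning, is a simple competitive update: each data vector (e.g., P velocity, vertical P-velocity gradient and resistivity at one model cell) pulls its best-matching map unit, and that unit's neighbours, toward itself. A minimal sketch with a hypothetical grid size, decay rates, and synthetic data:

      import numpy as np

      rng = np.random.default_rng(2)
      grid_h, grid_w, dim = 10, 10, 3
      weights = rng.standard_normal((grid_h, grid_w, dim))  # SOM codebook
      data = rng.standard_normal((500, dim))                # stand-in data vectors

      rows, cols = np.mgrid[0:grid_h, 0:grid_w]
      for epoch in range(20):
          lr = 0.5 * np.exp(-epoch / 10)      # decaying learning rate
          radius = 4.0 * np.exp(-epoch / 10)  # shrinking neighbourhood
          for v in data:
              d = np.linalg.norm(weights - v, axis=2)
              bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
              grid_d2 = (rows - bi) ** 2 + (cols - bj) ** 2
              h = np.exp(-grid_d2 / (2 * radius**2))  # Gaussian neighbourhood
              weights += lr * h[..., None] * (v - weights)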

  13. Material approximation of data smoothing and spline curves inspired by slime mould.

    PubMed

    Jones, Jeff; Adamatzky, Andrew

    2014-09-01

    The giant single-celled slime mould Physarum polycephalum is known to approximate a number of network problems via growth and adaptation of its protoplasmic transport network and can serve as an inspiration towards unconventional, material-based computation. In Physarum, predictable morphological adaptation is prevented by its adhesion to the underlying substrate. We investigate what possible computations could be achieved if these limitations were removed and the organism was free to completely adapt its morphology in response to changing stimuli. Using a particle model of Physarum displaying emergent morphological adaptation behaviour, we demonstrate how a minimal approach to collective material computation may be used to transform and summarise properties of spatially represented datasets. We find that the virtual material relaxes more strongly to high-frequency changes in data, which can be used for the smoothing (or filtering) of data by approximating moving average and low-pass filters in 1D datasets. The relaxation and minimisation properties of the model enable the spatial computation of B-spline curves (approximating splines) in 2D datasets. Both clamped and unclamped spline curves of open and closed shapes can be represented, and the degree of spline curvature corresponds to the relaxation time of the material. The material computation of spline curves also includes novel quasi-mechanical properties, including unwinding of the shape between control points and a preferential adhesion to longer, straighter paths. Interpolating splines could not directly be approximated due to the formation and evolution of Steiner points at narrow vertices, but were approximated after rectilinear pre-processing of the source data. This pre-processing was further simplified by transforming the original data to contain the material inside the polyline. These exemplary results expand the repertoire of spatially represented unconventional computing devices by demonstrating a

  14. Seismic porosity mapping in the Ekofisk Field using a new form of collocated cokriging

    SciTech Connect

    Doyen, P.M.; Boer, L.D. den; Pillet, W.R.

    1996-12-31

    An important practical problem in the geosciences is the integration of seismic attribute information in subsurface mapping applications. The aim is to utilize a more densely sampled secondary variable such as seismic impedance to guide the interpolation of a related primary variable such as porosity. The collocated cokriging technique was recently introduced to facilitate the integration process. Here we propose a simplified implementation of collocated cokriging based on a Bayesian updating rule. We demonstrate that the cokriging estimate at one point can be obtained by direct updating of the kriging estimate with the collocated secondary data. The linear update only requires knowledge of the kriging variance and the coefficient(s) of correlation between primary and secondary variables. No cokriging system need be solved and no reference to spatial cross-covariances is required. The new form of collocated cokriging is applied to predict the lateral variations of porosity in a reservoir layer of the Ekofisk Field, Norwegian North Sea. A cokriged porosity map is obtained by combining zone average porosity data at more than one hundred wells and acoustic impedance information extracted from a 3-D seismic survey. Utilization of the seismic information yields a more detailed and reliable image of the porosity distribution along the flanks of the producing structure.
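
    Under the simplifying assumption of a standardized bivariate-Gaussian model with primary-secondary correlation \rho (a reconstruction of the updating idea, not necessarily the paper's exact formula), the kriging estimate z_K with kriging variance \sigma_K^2 is updated by the collocated secondary datum y as

      \[ \hat{z}_{CK} \;=\; z_K + \frac{\rho\,\sigma_K^{2}}{1 - \rho^{2} + \rho^{2}\sigma_K^{2}}\,\bigl(y - \rho\,z_K\bigr), \]

    which indeed requires only the kriging variance and the correlation coefficient, with no cokriging system to solve and no spatial cross-covariances.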

  15. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  16. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  17. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  18. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  19. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  20. Your Participation Is "Greatly/Highly" Appreciated: Amplifier Collocations in L2 English

    ERIC Educational Resources Information Center

    Edmonds, Amanda; Gudmestad, Aarnes

    2014-01-01

    The current study sets out to investigate collocational knowledge for a set of 13 English amplifiers among native and nonnative speakers of English, by providing a partial replication of one of the projects reported on in Granger (1998). The project combines both phraseological and distributional approaches to research into formulaic language to…

  1. Frequent Collocates and Major Senses of Two Prepositions in ESL and ENL Corpora

    ERIC Educational Resources Information Center

    Nkemleke, Daniel

    2009-01-01

    This contribution assesses in quantitative terms frequent collocates and major senses of "between" and "through" in the corpus of Cameroonian English (CCE), the corpus of East-African (Kenya and Tanzania) English which is part of the International Corpus of English (ICE) project (ICE-EA), and the London Oslo/Bergen (LOB) corpus of British English.…

  2. Investigation of Native Speaker and Second Language Learner Intuition of Collocation Frequency

    ERIC Educational Resources Information Center

    Siyanova-Chanturia, Anna; Spina, Stefania

    2015-01-01

    Research into frequency intuition has focused primarily on native (L1) and, to a lesser degree, nonnative (L2) speaker intuitions about single word frequency. What remains a largely unexplored area is L1 and L2 intuitions about collocation (i.e., phrasal) frequency. To bridge this gap, the present study aimed to answer the following question: How…

  3. The Effect of Corpus-Based Activities on Verb-Noun Collocations in EFL Classes

    ERIC Educational Resources Information Center

    Ucar, Serpil; Yükselir, Ceyhun

    2015-01-01

    This current study sought to reveal the impact of corpus-based activities on verb-noun collocation learning in EFL classes. The study was carried out on two groups (experimental and control), each consisting of 15 students. The students were preparatory class students at the School of Foreign Languages, Osmaniye Korkut Ata University.…

  4. Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuators and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.

  5. Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuator and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.

  6. Collocational Processing in Light of the Phraseological Continuum Model: Does Semantic Transparency Matter?

    ERIC Educational Resources Information Center

    Gyllstad, Henrik; Wolter, Brent

    2016-01-01

    The present study investigates whether two types of word combinations (free combinations and collocations) differ in terms of processing by testing Howarth's Continuum Model based on word combination typologies from a phraseological tradition. A visual semantic judgment task was administered to advanced Swedish learners of English (n = 27) and…

  7. The Role of Language for Thinking and Task Selection in EFL Learners' Oral Collocational Production

    ERIC Educational Resources Information Center

    Wang, Hung-Chun; Shih, Su-Chin

    2011-01-01

    This study investigated how English as a foreign language (EFL) learners' types of language for thinking and types of oral elicitation tasks influence their lexical collocational errors in speech. Data were collected from 42 English majors in Taiwan using two instruments: (1) 3 oral elicitation tasks and (2) an inner speech questionnaire. The…

  8. Strategies in Translating Collocations in Religious Texts from Arabic into English

    ERIC Educational Resources Information Center

    Dweik, Bader S.; Shakra, Mariam M. Abu

    2010-01-01

    The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…

  9. Verb-Noun Collocations in Second Language Writing: A Corpus Analysis of Learners' English

    ERIC Educational Resources Information Center

    Laufer, Batia; Waldman, Tina

    2011-01-01

    The present study investigates the use of English verb-noun collocations in the writing of native speakers of Hebrew at three proficiency levels. For this purpose, we compiled a learner corpus that consists of about 300,000 words of argumentative and descriptive essays. For comparison purposes, we selected LOCNESS, a corpus of young adult native…

  10. Collocational Differences between L1 and L2: Implications for EFL Learners and Teachers

    ERIC Educational Resources Information Center

    Sadeghi, Karim

    2009-01-01

    Collocations are one of the areas that produce problems for learners of English as a foreign language. Iranian learners of English are by no means an exception. Teaching experience at schools, private language centers, and universities in Iran suggests that a significant part of EFL learners' problems with producing the language, especially at…

  11. Action Research: Applying a Bilingual Parallel Corpus Collocational Concordancer to Taiwanese Medical School EFL Academic Writing

    ERIC Educational Resources Information Center

    Reynolds, Barry Lee

    2016-01-01

    Lack of knowledge of the conventional usage of collocations in one's respective field of expertise causes Taiwanese students to produce academic writing that is markedly different from more competent writing. This is because Taiwanese students are first and foremost English as a Foreign Language (EFL) readers and may have difficulties picking up on…

  12. Inference in dynamic systems using B-splines and quasilinearized ODE penalties.

    PubMed

    Frasso, Gianluca; Jaeger, Jonathan; Lambert, Philippe

    2016-05-01

    Nonlinear (systems of) ordinary differential equations (ODEs) are common tools in the analysis of complex one-dimensional dynamic systems. We propose a smoothing approach regularized by a quasilinearized ODE-based penalty. Within the quasilinearized spline-based framework, the estimation reduces to a conditionally linear problem for the optimization of the spline coefficients. Furthermore, standard ODE compliance parameter(s) selection criteria are applicable. We evaluate the performances of the proposed strategy through simulated and real data examples. Simulation studies suggest that the proposed procedure ensures more accurate estimates than standard nonlinear least squares approaches when the state (initial and/or boundary) conditions are not known. PMID:26602190
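
    The conditionally linear structure of the spline step can be sketched with a penalized B-spline fit. In the sketch below a second-order difference penalty stands in for the quasilinearized ODE penalty of the paper, and the data and smoothing parameter lam are hypothetical:

      import numpy as np
      from scipy.interpolate import BSpline

      x = np.linspace(0, 10, 200)
      rng = np.random.default_rng(3)
      y = np.exp(-0.3 * x) * np.cos(2 * x) + 0.05 * rng.standard_normal(x.size)

      k = 3
      knots = np.concatenate(([0.0] * k, np.linspace(0, 10, 20), [10.0] * k))
      n = len(knots) - k - 1
      B = np.column_stack([BSpline(knots, np.eye(n)[i], k)(x) for i in range(n)])

      D = np.diff(np.eye(n), n=2, axis=0)  # second-order difference penalty matrix
      lam = 1.0
      coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)  # linear in coef
      fit = B @ coef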

  13. Time Varying Compensator Design for Reconfigurable Structures Using Non-Collocated Feedback

    NASA Technical Reports Server (NTRS)

    Scott, Michael A.

    1996-01-01

    Analysis and synthesis tools are developed to improve the dynamic performance of reconfigurable, nonminimum-phase, nonstrictly positive real, time-variant systems. A novel Spline Varying Optimal (SVO) controller is developed for the kinematically nonlinear system. There are several advantages to using the SVO controller, in which spline functions approximate the system model, observer, and controller gain: the spline function approximation is simply connected, so the SVO controller is more continuous than traditional gain-scheduled controllers when implemented on a time-varying plant; it is easier for real-time implementation in storage and computational effort; where system identification is required, the spline function requires fewer experiments, namely four; and initial startup estimator transients are eliminated. The SVO compensator was evaluated on a high-fidelity simulation of the Shuttle Remote Manipulator System. The SVO controller demonstrated significant improvement over the present arm performance: (1) the damping level was improved by a factor of 3; and (2) peak joint torque was reduced by a factor of 2 following Shuttle thruster firings.

  14. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms: validation against collocated MODIS and CALIOP data

    NASA Astrophysics Data System (ADS)

    Taylor, Thomas E.; O'Dell, Christopher W.; Frankenberg, Christian; Partain, Philip T.; Cronk, Heather Q.; Savtchenko, Andrey; Nelson, Robert R.; Rosenthal, Emily J.; Chang, Albert Y.; Fisher, Brenden; Osterman, Gregory B.; Pollock, Randy H.; Crisp, David; Eldering, Annmarie; Gunson, Michael R.

    2016-03-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by cloud and aerosol contamination within the instrument's field of view. Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 µm O2 A band, neglecting scattering by clouds and aerosols, which introduce photon path-length differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 µm (weak CO2 band) and 2.06 µm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which are sensitive to different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning of algorithmic threshold parameters that allows for processing of ≃ 20-25 % of all OCO-2 soundings

  15. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms; validation against collocated MODIS and CALIOP data

    NASA Astrophysics Data System (ADS)

    Taylor, T. E.; O'Dell, C. W.; Frankenberg, C.; Partain, P.; Cronk, H. Q.; Savtchenko, A.; Nelson, R. R.; Rosenthal, E. J.; Chang, A. Y.; Fisher, B.; Osterman, G.; Pollock, R. H.; Crisp, D.; Eldering, A.; Gunson, M. R.

    2015-12-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols within the instrument's field of view (FOV). Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 μm O2 A-band, neglecting scattering by clouds and aerosols, which introduce photon path-length (PPL) differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 μm (weak CO2 band) and 2.06 μm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which key off of different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning to allow throughputs of ≃ 30 %, agreement between the OCO-2 and MODIS cloud screening methods is found to be

  16. A stochastic approach to estimate the uncertainty of dose mapping caused by uncertainties in b-spline registration

    SciTech Connect

    Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.

    2012-04-15

    Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging is becoming clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution which is to be mapped. Its performance is demonstrated in the context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients, combined with evaluation of the sensitivity of the registration metric with respect to variations of the coefficients. It was evaluated using patient data deformed with a breathing model, where the ground truth of the deformation, and hence the actual dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in the context of other mapping tasks as well.

  17. Cubic smoothing splines background correction in on-line liquid chromatography-Fourier transform infrared spectrometry.

    PubMed

    Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel

    2010-10-22

    A background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry (LC-FTIR) is proposed. The developed approach applies univariate background correction to each variable (i.e., each wavenumber) individually. Spectra measured in the regions before and after each peak cluster are used as knots to model the variation of the eluent absorption intensity with time using cubic smoothing spline (CSS) functions. The new approach has been successfully tested on simulated as well as real data sets obtained from injections of standard mixtures of polyethylene glycols of four different molecular weights in methanol:water, 2-propanol:water and ethanol:water gradients ranging from 30 to 90, 10 to 25, and 10 to 40% (v/v) of organic modifier, respectively. Calibration lines showed high linearity, with coefficients of determination higher than 0.98 and limits of detection between 0.4 and 1.4, 0.9 and 1.8, and 1.1 and 2.7 mg mL⁻¹ in methanol:water, 2-propanol:water and ethanol:water, respectively. Furthermore, the method's performance has been compared with a univariate background correction approach based on the use of a reference spectra matrix (UBC-RSM) to discuss the potential as well as the pitfalls and drawbacks of the proposed approach. This method works without prior variable selection and requires minimal user interaction, thus drastically increasing the feasibility of on-line coupling of gradient LC-FTIR.
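
    The per-wavenumber correction can be sketched with a smoothing spline: take scans before and after the peak cluster as knots, model the eluent background over time, and subtract it. The retention times, peak window, and smoothing factor below are hypothetical:

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      t = np.linspace(0, 20, 400)  # retention time, min (hypothetical)
      rng = np.random.default_rng(4)
      background = 0.2 + 0.05 * t + 0.001 * t**2   # drifting eluent absorbance
      peak = 0.8 * np.exp(-((t - 10) / 0.5) ** 2)  # analyte band
      signal = background + peak + 0.002 * rng.standard_normal(t.size)

      outside = (t < 8) | (t > 12)  # scans before/after the peak cluster
      css = UnivariateSpline(t[outside], signal[outside], k=3, s=0.01)
      corrected = signal - css(t)   # background-corrected single-wavenumber trace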

  18. 3D shape recovery of a newborn skull using thin-plate splines.

    PubMed

    Lapeer, R J; Prager, R W

    2000-01-01

    The objective of this paper is to construct a mesh model of a newborn skull for finite element analysis to study its deformation when subjected to the forces present during labour. The current state of medical imaging technology has reached a level which allows accurate visualisation and shape recovery of biological organs and body parts. However, a sufficiently large set of medical images cannot always be obtained, often for practical or ethical reasons, and the requirement to recover the shape of the biological object of interest has to be met by other means. Such is the case for a newborn skull. A method to recover the three-dimensional (3D) shape from a minimum of two orthogonal atlas images of the object of interest and a homologous object is described. This method is based on matching landmarks and curves on the orthogonal images of the object of interest with corresponding landmarks and curves on the homologous or 'master' object, which is fully defined in 3D space. On the basis of this set of corresponding landmarks, a thin-plate spline function can be derived to warp from the 'master'-object space to the 'slave'-object space. This method is applied to recover the 3D shape of a newborn skull. Images from orthogonal view planes are obtained from an atlas. The homologous object is an adult skull, obtained from CT images made available by the Visible Human Project. After shape recovery, a mesh model of the newborn skull is generated.
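
    The landmark-based warp can be sketched with SciPy's RBF interpolator, whose 'thin_plate_spline' kernel implements a thin-plate spline map; the matched 3D landmark sets below are random stand-ins for anatomically corresponding points:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(5)
      master_pts = rng.uniform(0, 100, size=(12, 3))            # adult-skull landmarks
      slave_pts = 0.4 * master_pts + rng.normal(0, 1, (12, 3))  # matched newborn landmarks

      # Vector-valued thin-plate spline warp from master space to slave space.
      tps = RBFInterpolator(master_pts, slave_pts, kernel="thin_plate_spline")

      surface = rng.uniform(0, 100, size=(1000, 3))  # master-surface mesh vertices
      warped = tps(surface)                          # vertices mapped into newborn space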

  19. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

    There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.

  20. [Spatio-Temporal Bioelectrical Brain Activity Organization during Reading Syntagmatic and Paradigmatic Collocations by Students with Different Foreign Language Proficiency].

    PubMed

    Sokolova, L V; Cherkasova, A S

    2015-01-01

    Texts or words/pseudowords are often used as stimuli in research on human verbal activity. Our study focuses on the decoding of grammatical constructions consisting of two to three words (collocations). Sets of Russian and English collocations without any narrative were presented to Russian-speaking students with different levels of English proficiency. The stimulus material contained two types of collocations: paradigmatic and syntagmatic. Thirty students (average age 20.4 ± 0.22) took part in the study; they were divided into two equal groups depending on their English language skill (linguists/nonlinguists). During reading, the bioelectrical activity of the cortex was registered from 12 electrodes in the alpha, beta, and theta bands. The coherence function, reflecting the cooperation of different cortical areas during the reading of collocations, was analyzed. The increase of interhemispheric and diagonal connections while reading collocations in different languages in the group of students with low knowledge of the foreign language testifies to the importance of functional cooperation between the hemispheres. It was found that the brain bioelectrical activity of students with good foreign language knowledge during reading of all collocation types in Russian and English is characterized by economization of nervous substrate resources compared to nonlinguists. Selective activation of certain cortical areas (depending on the grammatical construction type) was also observed in the nonlinguist group, which is probably related to a special decoding system that processes the presented stimuli. Reading Russian paradigmatic constructions by nonlinguists entailed an increase in connections between left cortical areas, and reading English syntagmatic collocations, between right ones.

  1. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on the theoretical, computational, and graphic techniques used to calculate the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency, and of achieving RCS control with modest structural changes, are becoming of paramount importance in stealth design. A computer code that evaluates the RCS of arbitrarily shaped metallic objects generated with computer-aided design (CAD), together with its validation against measurements carried out at the ALENIA RCS test facilities, is presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to keep the computing time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
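
    Setting up such a bicubic spline representation of a surface patch is straightforward with standard tools. The toy sketch below (a height-field patch with invented geometry, not the code described above) shows how the surface points and unit normals needed by a physical optics integrator come directly from the spline and its partial derivatives; a general target would use one spline per coordinate of a parametric surface.

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        x = np.linspace(-1.0, 1.0, 20)
        y = np.linspace(-1.0, 1.0, 20)
        X, Y = np.meshgrid(x, y, indexing="ij")
        Z = 0.3 * np.exp(-(X**2 + Y**2))              # smooth metallic "hump"

        patch = RectBivariateSpline(x, y, Z, kx=3, ky=3)   # bicubic spline

        # surface height and unit normals at the quadrature nodes
        u = np.array([0.1, -0.4]); v = np.array([0.2, 0.6])
        z  = patch.ev(u, v)
        zx = patch.ev(u, v, dx=1)                     # dz/dx
        zy = patch.ev(u, v, dy=1)                     # dz/dy
        nrm = np.stack([-zx, -zy, np.ones_like(zx)], axis=-1)
        nrm /= np.linalg.norm(nrm, axis=-1, keepdims=True)
        print(z, nrm)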

  2. Explosion Source Location Study Using Collocated Acoustic and Seismic Networks in Israel

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Gitterman, Y.; Arrowsmith, S.; Ben-Horin, Y.

    2013-12-01

    We explore a joint analysis of seismic and infrasonic signals to improve the automatic monitoring of small local/regional events, such as construction and quarry blasts, military chemical explosions, and sonic booms, using collocated seismic and infrasonic networks recently built in Israel (ISIN) within a project sponsored by the Binational USA-Israel Science Foundation (BSF). The overall goal is to create an automatic system that provides detection, location, and identification of explosions in real time or near real time. At present the network comprises 15 stations hosting a microphone and a seismometer (or accelerometer), operated by the Geophysical Institute of Israel (GII), plus two infrasonic arrays operated by the National Data Center, Soreq: IOB in the south (Negev desert) and IMA in the north of Israel (Upper Galilee), collocated with the IMS seismic array MMAI. The study utilizes a ground-truth database of numerous Rotem phosphate quarry blasts, a number of controlled explosions for the demolition of outdated ammunition, and experimental surface explosions for a structure-protection study at the Sayarim Military Range. A special event comprising four military explosions in a neighboring country, which produced both strong seismic waves (up to 400 km) and infrasound waves (up to 300 km), is also analyzed. For all of these events the ground-truth coordinates and/or the results of seismic location by the Israel Seismic Network (ISN) have been provided. For automatic event detection and phase picking we tested a new recursive picker based on a statistically optimal detector, and the results were compared to the manual picks. Several location techniques have been tested using the ground-truth event recordings, and the preliminary results have been compared to the ground-truth locations: 1) a number of events have been located as intersections of azimuths estimated using the wide-band F-K analysis technique applied to the

  3. A B-spline image registration based CAD scheme to evaluate drug treatment response of ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Li, Zheng; Moore, Kathleen; Thai, Theresa; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    Ovarian cancer is the second most common cancer among gynecologic malignancies and has the highest death rate. Since the majority of ovarian cancer patients (>75%) are diagnosed at an advanced stage with tumor metastasis, chemotherapy is often required after surgery to remove the primary ovarian tumors. In order to quickly assess patient response to the chemotherapy in clinical trials, two sets of CT examinations are acquired pre- and post-therapy (e.g., after 6 weeks). Treatment efficacy is then evaluated based on the Response Evaluation Criteria in Solid Tumors (RECIST) guideline, whereby tumor size is measured by the longest diameter on one CT image slice and only a subset of selected tumors is tracked. However, this criterion cannot fully represent the volumetric changes of the tumors and might miss potentially problematic unmarked tumors. Thus, we developed a new CAD approach to measure and analyze volumetric tumor growth/shrinkage using a cubic B-spline deformable image registration method. In this initial study, on 14 sets of pre- and post-treatment CT scans, we registered the two consecutive scans using cubic B-spline registration in a multiresolution (coarse-to-fine) framework. We used the Mattes mutual information metric as the similarity criterion and the L-BFGS-B optimizer. The results show that our method can quantify volumetric changes in the tumors more accurately than RECIST, and can also detect (highlight) potentially problematic regions that were not originally targeted by radiologists. Despite the encouraging results of this preliminary study, further validation of the scheme's performance on large and diverse datasets is required.
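
    The pipeline named in the abstract (cubic B-splines, Mattes mutual information, L-BFGS-B, coarse-to-fine multiresolution) corresponds closely to what general-purpose registration toolkits provide. Below is a minimal SimpleITK sketch of such a setup; it is not the authors' code, and the file names, mesh size, and optimizer settings are placeholders.

        import SimpleITK as sitk

        fixed  = sitk.ReadImage("ct_pre.nii.gz",  sitk.sitkFloat32)   # placeholder
        moving = sitk.ReadImage("ct_post.nii.gz", sitk.sitkFloat32)   # placeholder

        # cubic B-spline transform on a coarse control-point mesh
        tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                                 numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(tx, True)
        reg.SetShrinkFactorsPerLevel([4, 2, 1])       # multiresolution,
        reg.SetSmoothingSigmasPerLevel([2, 1, 0])     # coarse to fine

        outTx = reg.Execute(fixed, moving)
        warped = sitk.Resample(moving, fixed, outTx, sitk.sitkLinear, 0.0)
        sitk.WriteImage(warped, "ct_post_registered.nii.gz")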

  4. Mass preserving nonrigid registration of CT lung images using cubic B-spline.

    PubMed

    Yin, Youbing; Hoffman, Eric A; Lin, Ching-Long

    2009-09-01

    The authors propose a nonrigid image registration approach to align two computed-tomography (CT)-derived lung datasets acquired during breath-holds at two inspiratory levels when the image distortion between the two volumes is large. The goal is to derive a three-dimensional warping function that can be used in association with computational fluid dynamics studies. In contrast to the sum of squared intensity difference (SSD), a new similarity criterion, the sum of squared tissue volume difference (SSTVD), is introduced to take into account changes in reconstructed Hounsfield units (scaled attenuation coefficient, HU) with inflation. This new criterion aims to minimize the local tissue volume difference within the lungs between matched regions, thus preserving the tissue mass of the lungs if the tissue density is assumed to be relatively constant. The local tissue volume difference is contributed by two factors: change in the regional volume due to the deformation, and change in the fractional tissue content in a region due to inflation. The change in the regional volume is calculated from the Jacobian value derived from the warping function, and the change in the fractional tissue content is estimated from reconstructed HU based on quantitative CT measures. A composite of multilevel B-splines is adopted to deform images, and a sufficient condition is imposed to ensure a one-to-one mapping even for a registration pair with a large volume difference. Parameters of the transformation model are optimized by a limited-memory quasi-Newton minimization approach in a multiresolution framework. To evaluate the effectiveness of the new similarity measure, the authors performed registrations for six lung volume pairs. Over 100 annotated landmarks located at vessel bifurcations were generated using a semiautomatic system. The results show that the SSTVD method yields smaller average landmark errors than the SSD method across all six registration pairs.
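
    The criterion can be stated compactly. In the sketch below (notation and the air/tissue HU reference values are illustrative assumptions based on the description above), the Jacobian determinant of the warp supplies the regional volume change and the reconstructed HU supply the fractional tissue content:

        import numpy as np

        def tissue_fraction(hu, hu_air=-1000.0, hu_tissue=55.0):
            # fractional tissue content estimated from reconstructed HU
            # (air/tissue reference values are illustrative assumptions)
            return np.clip((hu - hu_air) / (hu_tissue - hu_air), 0.0, 1.0)

        def sstvd(hu_fixed, hu_warped, jac_det, voxel_vol=1.0):
            # sum of squared tissue volume differences between matched regions
            v1 = voxel_vol * tissue_fraction(hu_fixed)
            v2 = voxel_vol * jac_det * tissue_fraction(hu_warped)
            return float(np.sum((v1 - v2) ** 2))

        # toy call: one voxel, 10% local compression of the moving image
        print(sstvd(np.array([-800.0]), np.array([-770.0]), np.array([0.9])))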

  5. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    PubMed Central

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

    In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately represent the geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to the biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and with real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  6. The convergence problem of collocation solutions in the framework of the stochastic interpretation

    NASA Astrophysics Data System (ADS)

    Sansò, F.; Venuti, G.

    2011-01-01

    The problem of the convergence of the collocation solution to the true gravity field was defined long ago (Tscherning in Boll Geod Sci Affini 39:221-252, 1978), and some results were derived, in particular by Krarup (Boll Geod Sci Affini 40:225-240, 1981). The problem is taken up again in the context of the stochastic interpretation of collocation theory, and some new results are derived, showing that, when the potential T can indeed be continued down to a Bjerhammar sphere, we have a quite general convergence property in the noiseless case. When noise is present in the data, reasonable convergence results still hold true. "Democrito che 'l mondo a caso pone" "Democritus who made the world stochastic" Dante Alighieri, La Divina Commedia, Inferno, IV - 136

  7. Raman lidar profiling of atmospheric water vapor: Simultaneous measurements with two collocated systems

    NASA Technical Reports Server (NTRS)

    Goldsmith, J. E. M.; Bisson, Scott E.; Ferrare, Richard A.; Evans, Keith D.; Whiteman, David N.; Melfi, S. H.

    1994-01-01

    Raman lidar is a leading candidate for providing the detailed space- and time-resolved measurements of water vapor needed by a variety of atmospheric studies. Simultaneous measurements of atmospheric water vapor are described using two collocated Raman lidar systems. These lidar systems, developed at the NASA/Goddard Space Flight Center and Sandia National Laboratories, acquired approximately 12 hours of simultaneous water vapor data during three nights in November 1992, while the systems were collocated at the Goddard Space Flight Center. Although these lidar systems differ substantially in their design, the measured water vapor profiles agreed within 0.15 g/kg between altitudes of 1 and 5 km. Comparisons with coincident radiosondes showed all instruments agreed within 0.2 g/kg in this same altitude range. Both lidars also clearly showed the advection of water vapor in the middle troposphere and the pronounced increase in water vapor in the nocturnal boundary layer that occurred during one night.

  8. Minimum fuel coplanar aeroassisted orbital transfer using collocation and nonlinear programming

    NASA Technical Reports Server (NTRS)

    Shi, Yun Yuan; Young, D. H.

    1991-01-01

    The fuel-optimal control problem arising in coplanar orbital transfer employing aeroassisted technology is addressed. The mission involves the transfer from a high-energy orbit (HEO) to a low-energy orbit (LEO) without a plane change. The basic approach is to employ a combination of propulsive maneuvers in space and aerodynamic maneuvers in the atmosphere. The basic sequence of events for the coplanar aeroassisted HEO-to-LEO transfer consists of three phases. In the first phase, the transfer begins with a deorbit impulse at HEO which injects the vehicle into an elliptic transfer orbit with perigee inside the atmosphere. In the second phase, the vehicle is optimally controlled by lift and drag modulation to satisfy heating constraints and to exit the atmosphere with the desired flight-path angle and velocity so that the apogee of the exit orbit is at the altitude of the desired LEO. Finally, a second impulse is required to circularize the orbit at LEO. The performance index is maximum final mass. Simulation results show that coplanar aerocapture is quite different from the case where orbital plane changes are made inside the atmosphere. In the latter case, the vehicle has to penetrate deeper into the atmosphere to perform the desired orbital plane change. For the coplanar case, the vehicle needs only to penetrate the atmosphere deeply enough to reduce the exit velocity so that the vehicle can be captured at the desired LEO. The peak heating rates are lower and the entry corridor is wider; from the thermal-protection point of view, the coplanar transfer may be desirable. Parametric studies also show that the maximum peak heating rates and the entry corridor width are functions of the maximum lift coefficient. The problem is solved using a direct optimization technique which uses a piecewise polynomial representation for the states and controls and collocation to represent the differential equations. This converts the optimal control problem into a nonlinear programming problem.
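
    The transcription named in the last two sentences is easy to demonstrate on a toy problem. The sketch below (a double integrator, not the aeroassisted transfer; all values invented) uses trapezoidal defect constraints so that the optimal control problem becomes a finite-dimensional nonlinear program:

        import numpy as np
        from scipy.optimize import minimize

        # minimize the integral of u^2 subject to x' = v, v' = u,
        # x(0) = v(0) = 0, x(1) = 1, v(1) = 0
        N = 21
        tau = np.linspace(0.0, 1.0, N)
        h = tau[1] - tau[0]

        def unpack(z):
            return z[:N], z[N:2*N], z[2*N:]

        def objective(z):
            _, _, u = unpack(z)
            return h * np.sum((u[:-1]**2 + u[1:]**2) / 2.0)   # trapezoid rule

        def defects(z):
            x, v, u = unpack(z)
            dx = x[1:] - x[:-1] - h * (v[:-1] + v[1:]) / 2.0  # collocation
            dv = v[1:] - v[:-1] - h * (u[:-1] + u[1:]) / 2.0  # defects
            bc = [x[0], v[0], x[-1] - 1.0, v[-1]]             # boundary conditions
            return np.concatenate([dx, dv, bc])

        z0 = np.concatenate([tau, np.ones(N), np.zeros(N)])
        res = minimize(objective, z0, method="SLSQP",
                       constraints={"type": "eq", "fun": defects})
        x, v, u = unpack(res.x)
        print(res.success, round(u[0], 2))   # analytic optimum has u(0) = 6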

  9. The precision of wet atmospheric deposition data from national atmospheric deposition program/national trends network sites determined with collocated samplers

    USGS Publications Warehouse

    Nilles, M.A.; Gordon, J.D.; Schroder, L.J.

    1994-01-01

    A collocated, wet-deposition sampler program has been operated since October 1988 by the U.S. Geological Survey to estimate the overall sampling precision of wet atmospheric deposition data collected at selected sites in the National Atmospheric Deposition Program and National Trends Network (NADP/NTN). A duplicate set of wet-deposition sampling instruments was installed adjacent to the existing sampling instruments at four different NADP/NTN sites for each year of the study. Wet-deposition samples from the collocated sites were collected and analysed using standard NADP/NTN procedures. Laboratory analyses included determinations of pH, specific conductance, and concentrations of major cations and anions. The estimates of precision included all variability in the data-collection system, from the point of sample collection through storage in the NADP/NTN database. Sampling precision was determined from the absolute value of differences in the analytical results for the paired samples, in terms of median relative and absolute difference. The median relative difference for Mg2+, Na+, K+, and NH4+ concentration and deposition was quite variable between sites and exceeded 10% at most sites. The relative error for analytes whose concentrations typically approached laboratory method detection limits was greater than for analytes that did not typically approach detection limits. The median relative difference for SO42- and NO3- concentration, specific conductance, and sample volume at all sites was less than 7%. Precision for H+ concentration and deposition ranged from less than 10% at sites with typically high levels of H+ concentration to greater than 30% at sites with low H+ concentration. The median difference for analyte concentration and deposition was typically 1.5 to 2 times greater for samples collected during the winter than during other seasons at two northern sites. Likewise, the median relative difference in sample volume for winter samples was more than double the annual median.

  10. Polyvinylidene fluoride film sensors in collocated feedback structural control: application for suppressing impact-induced disturbances.

    PubMed

    Ma, Chien-Ching; Chuang, Kuo-Chih; Pan, Shan-Ying

    2011-12-01

    Polyvinylidene fluoride (PVDF) films are light and flexible, and have high piezoelectricity. Because of these advantages, they have been widely used as sensors in applications such as underwater investigation, nondestructive damage detection, robotics, and active vibration suppression. PVDF sensors are especially preferred over conventional strain gauges in active vibration control because PVDF sensors are easy to cut into the same sizes or shapes as piezoelectric actuators, so that they can be placed as collocated pairs. In this work, to focus on demonstrating the dynamic sensing performance of the PVDF film sensor, we revisit the active vibration control problem of a cantilever beam using a collocated lead zirconate titanate (PZT) actuator/PVDF film sensor pair. Before applying active vibration control, the measurement characteristics of the PVDF film sensor are studied by simultaneous comparison with a strain gauge. The loading effect of the piezoelectric actuator on the cantilever beam is also investigated in this paper. Finally, four simple, robust active vibration controllers are employed with the collocated PZT/PVDF pair to suppress vibration of the cantilever beam subjected to impact loadings. The four controllers are the velocity feedback controller, the integral resonant controller (IRC), the resonant controller, and the positive position feedback (PPF) controller. Suppression of impact disturbances is especially suitable for demonstrating the dynamic sensing performance of the PVDF sensor. The experimental results also provide suggestions for choosing between the previously mentioned controllers, which have been proven to be effective in suppressing impact-induced vibrations. PMID:23443690
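
    Among the four controllers, positive position feedback is particularly compact to state: the position measured by the collocated sensor drives a well-damped second-order filter whose output is fed back positively to the actuator. A toy single-mode simulation (all values illustrative) is:

        import numpy as np
        from scipy.integrate import solve_ivp

        # plant: y'' + 2*z*w*y' + w^2*y = u,  PPF filter:
        # e'' + 2*zf*wf*e' + wf^2*e = wf^2*y,  control: u = g*w^2*e
        w, z = 2*np.pi*20.0, 0.005           # 20 Hz mode, light damping
        wf, zf, g = w, 0.3, 0.15             # filter tuned to the mode; g < 1

        def rhs(t, s):
            y, yd, e, ed = s
            u = g * w**2 * e
            return [yd, -2*z*w*yd - w**2*y + u,
                    ed, -2*zf*wf*ed - wf**2*e + wf**2*y]

        # impact-like disturbance modeled as an initial tip velocity
        sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0, 0.0, 0.0], max_step=1e-4)
        late = np.abs(sol.y[0][sol.t > 0.5]).max()
        print("peak |y| after 0.5 s: %.2e" % late)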

  11. Spectral optical layer properties of cirrus from collocated airborne measurements - a feasibility study

    NASA Astrophysics Data System (ADS)

    Finger, F.; Werner, F.; Klingebiel, M.; Ehrlich, A.; Jäkel, E.; Voigt, M.; Borrmann, S.; Spichtinger, P.; Wendisch, M.

    2015-07-01

    Spectral optical layer properties of cirrus are derived from simultaneous and vertically collocated measurements of spectral upward and downward solar irradiance above and below the cloud layer, together with concurrent in situ microphysical sampling. From the irradiance data, the spectral transmissivity, absorptivity, reflectivity, and cloud-top albedo of the observed cirrus layer are obtained. At the same time, the microphysical properties of the cirrus were sampled. The close collocation of the radiative and microphysical measurements, above, beneath, and inside the cirrus, is obtained by using a research aircraft (Learjet 35A) in tandem with a towed platform called AIRTOSS (AIRcraft TOwed Sensor Shuttle). AIRTOSS can be released from and retracted back to the research aircraft by means of a cable up to a distance of 4 km. Data were collected in two field campaigns above the North and Baltic Sea in spring and late summer 2013. Exemplary results from one measurement flight are discussed, also to illustrate the benefits of collocated sampling. Based on the measured cirrus microphysical properties, radiative transfer simulations were applied to quantify the impact of cloud particle properties such as crystal shape, effective radius reff, and optical thickness τ on the cirrus optical layer properties. The effects of clouds beneath the cirrus are evaluated in addition. They cause differences in the layer properties of the cirrus by a factor of 2 to 3, and in the cirrus radiative forcing by up to a factor of 4. If low-level clouds below the cirrus are not considered, the solar cooling due to the cirrus is significantly overestimated.
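
    The optical layer properties named above follow from simple ratios of the four collocated irradiances; a sketch with invented numbers:

        # collocated irradiances (W m^-2, illustrative values)
        F_dn_top, F_up_top = 950.0, 320.0      # above the cirrus
        F_dn_base, F_up_base = 560.0, 80.0     # below the cirrus

        transmissivity = F_dn_base / F_dn_top
        cloud_top_albedo = F_up_top / F_dn_top
        # absorptivity from the net-flux divergence across the layer; a
        # layer reflectivity additionally requires separating the upwelling
        # flux contributed by the scene below the cloud
        absorptivity = ((F_dn_top - F_up_top) - (F_dn_base - F_up_base)) / F_dn_top
        print(transmissivity, cloud_top_albedo, absorptivity)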

  12. Least squares collocation applied to local gravimetric solutions from satellite gravity gradiometry data

    NASA Technical Reports Server (NTRS)

    Robbins, J. W.

    1985-01-01

    An autonomous spaceborne gravity gradiometer mission is being considered as a post-Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal, from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities, depending on the choice of covariance types. Selected for this study were 30' x 30' mean gravity and height anomalies. Existing software and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30' x 30' mean gravity anomalies to an accuracy of 9.2 mgal with this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions, and satellite mission parameters.
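
    The collocation estimator at the heart of this approach is compact: with signal covariance C and noise covariance D, the signal at new points is predicted as s = C_new,obs (C_obs,obs + D)^-1 t. A one-dimensional toy sketch (Gaussian covariance model; all parameter values invented):

        import numpy as np

        rng = np.random.default_rng(6)

        def cov(a, b, c0=100.0, d=0.5):        # variance and correlation length
            return c0 * np.exp(-((a[:, None] - b[None, :]) / d) ** 2)

        x_obs = rng.uniform(0.0, 5.0, 40)      # observation locations
        t = rng.multivariate_normal(np.zeros(40),
                                    cov(x_obs, x_obs) + 1e-8 * np.eye(40))
        t_noisy = t + 3.0 * rng.standard_normal(40)   # centered observations

        x_new = np.linspace(0.0, 5.0, 101)     # prediction locations
        D = 9.0 * np.eye(40)                   # noise covariance
        s_hat = cov(x_new, x_obs) @ np.linalg.solve(cov(x_obs, x_obs) + D,
                                                    t_noisy)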

  13. Convergence of spectral methods for hyperbolic initial-boundary value systems

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Lustman, L.; Tadmor, E.

    1986-01-01

    A convergence proof for spectral approximations is presented for hyperbolic systems with initial and boundary conditions. The Chebyshev collocation is treated in detail, but the final result is readily applicable to other spectral methods, such as Legendre collocation or tau-methods.

  14. Spectral methods for the Euler equations: Fourier methods and shock-capturing

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Kopriva, D. A.; Salas, M. D.; Zang, T. A.

    1984-01-01

    Spectral methods for compressible flows are introduced in relation to finite difference and finite element techniques within the framework of the method of weighted residuals. Current spectral collocation methods are put in historical context. The basic concepts of Fourier spectral collocation methods are provided. Filtering strategies for shock-capturing approaches are also presented. Fourier shock-capturing techniques are evaluated using a one-dimensional, periodic astrophysical "nozzle" problem.

  15. Application of principal component analysis-multivariate adaptive regression splines for the simultaneous spectrofluorimetric determination of dialkyltins in micellar media

    NASA Astrophysics Data System (ADS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2013-11-01

    A new multicomponent analysis method, based on principal component analysis-multivariate adaptive regression splines (PC-MARS), is proposed for the determination of dialkyltin compounds. In Tween-20 micellar media, dimethyl- and dibutyltin react with morin to give fluorescent complexes with maximum emission peaks at 527 and 520 nm, respectively. Before the MARS models were built, the spectrofluorimetric matrix data were subjected to principal component analysis and decomposed into PC scores as starting points for the MARS algorithm. The algorithm classifies the calibration data into several groups, in each of which a regression line or hyperplane is fitted. The performance of the proposed method was tested in terms of the root mean square error of prediction (RMSEP), using synthetic solutions. The results show the strong potential of PC-MARS, as a multivariate calibration method, to be applied to spectral data for multicomponent determinations. The effect of different experimental parameters on the performance of the method was studied and discussed. The prediction capability of the proposed method was compared with that of a GC-MS method for the determination of dimethyltin and/or dibutyltin.

  16. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    SciTech Connect

    Tenderholt, A.; Hedman, B.; Hodgson, K.O.

    2007-01-08

    PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k^3-weighted EXAFS data.
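
    The processing chain (pre-edge line, post-edge spline background, k^3 weighting, FFT) can be mimicked end to end on synthetic data. The toy sketch below is not PySpline itself; every constant in it is invented except the standard conversion k = 0.5123*sqrt(E - E0) (k in 1/Angstrom, E in eV):

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(2)
        e = np.linspace(8900.0, 9900.0, 2000)          # energy grid (eV)
        e0 = 9000.0                                    # edge position
        k = 0.5123 * np.sqrt(np.clip(e - e0, 0.0, None))
        chi_true = 0.05 * np.sin(2 * 2.5 * k) * np.exp(-0.3 * k)  # "R = 2.5 A"
        mu = (0.2 + 1e-5 * (e - e[0]) + (e > e0) * (1.0 + chi_true)
              + 0.005 * rng.standard_normal(e.size))

        # pre-edge: straight line fitted below the edge, removed everywhere
        pre = np.polyval(np.polyfit(e[e < e0 - 30], mu[e < e0 - 30], 1), e)
        mu_p = mu - pre

        # post-edge "atomic" background: smooth spline over the EXAFS region
        post = e > e0 + 20
        bkg = UnivariateSpline(e[post], mu_p[post], k=3, s=0.5)
        chi_k3 = (mu_p[post] - bkg(e[post])) * k[post]**3   # k^3 weighting

        # FFT to R space on a uniform k grid
        ku = np.linspace(k[post].min(), k[post].max(), 1024)
        amp = np.abs(np.fft.rfft(np.interp(ku, k[post], chi_k3)))
        r = np.fft.rfftfreq(ku.size, d=ku[1] - ku[0]) * np.pi
        print("peak near R =", round(r[np.argmax(amp[1:]) + 1], 2), "A")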

  17. A meshless point collocation treatment of transient bioheat problems.

    PubMed

    Bourantas, G C; Loukopoulos, V C; Burganos, V N; Nikiforidis, G C

    2014-05-01

    A meshless numerical method is proposed for the solution of the transient bioheat equation in two and three dimensions. The Pennes bioheat equation is extended in order to incorporate water evaporation, tissue damage, and temperature-dependent tissue properties during tumor ablation. The conductivity of the tissue is not assumed constant but is treated as a local function, to simulate the local variability due to the usually unclear interfaces between healthy and pathological segments. In this way, one avoids the need for accurate identification of the boundaries between pathological and healthy regions, which is a typical problem in medical practice, and evidently sidesteps the corresponding mathematical treatment of such boundaries, which is usually a tedious procedure with some inevitable degree of approximation. The numerical results of the new method for test applications of the bioheat transfer equation are validated against analytical predictions and predictions of other numerical methods. 3D simulations are presented that involve the modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. An evaluation of the effective medium approximation to homogenize conductivity fields for use with the bioheat equation is also provided.
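
    A minimal example of the point-collocation idea, in the Kansa style with multiquadric radial basis functions, for a steady one-dimensional Pennes balance with constant coefficients (all parameter values illustrative; the method above is far more general):

        import numpy as np

        # k*T'' + w*(Ta - T) + Q = 0 on [0, L], with T(0) = T(L) = 37
        k, w, Ta, Q, L = 0.5, 2000.0, 37.0, 5000.0, 0.05
        n = 41
        x = np.linspace(0.0, L, n)            # nodes (could be scattered)
        c = 2.0 * (x[1] - x[0])               # multiquadric shape parameter

        r2 = (x[:, None] - x[None, :]) ** 2
        phi = np.sqrt(r2 + c * c)             # phi_j(x_i)
        d2phi = c * c / (r2 + c * c) ** 1.5   # d^2 phi_j / dx^2

        A = k * d2phi - w * phi               # PDE collocation rows
        b = np.full(n, -Q - w * Ta)
        A[0], A[-1] = phi[0], phi[-1]         # Dirichlet boundary rows
        b[0] = b[-1] = 37.0

        T = phi @ np.linalg.solve(A, b)
        print("peak tissue temperature: %.2f C" % T.max())   # ~ Ta + Q/w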

  18. B-splines and Hermite-Padé approximants to the exponential function

    NASA Astrophysics Data System (ADS)

    Sablonnière, Paul

    2008-10-01

    This paper is the continuation of a work initiated in [P. Sablonnière, An algorithm for the computation of Hermite-Padé approximations to the exponential function: divided differences and Hermite-Padé forms. Numer. Algorithms 33 (2003) 443-452] about the computation of Hermite-Padé forms (HPF) and associated Hermite-Padé approximants (HPA) to the exponential function. We present an alternative algorithm for their computation, based on the representation of HPF in terms of integral remainders with B-splines as Peano kernels. Using the good properties of discrete B-splines, this algorithm gives rise to a great variety of representations of HPF of higher orders in terms of HPF of lower orders, and in particular of classical Padé forms. We give some examples illustrating this algorithm, in particular, another way of constructing quadratic HPF already described by different authors. Finally, we briefly study a family of cubic HPF.

  19. Convergence of a Fourier-spline representation for the full-turn map generator

    SciTech Connect

    Warnock, R.L.; Ellison, J.A.

    1997-04-01

    Single-turn data from a symplectic tracking code can be used to construct a canonical generator for a full-turn symplectic map. This construction has been carried out numerically in canonical polar coordinates, the generator being obtained as a Fourier series in angle coordinates with coefficients that are spline functions of action coordinates. Here the authors provide a mathematical basis for the procedure, finding sufficient conditions for the existence of the generator and convergence of the Fourier-spline expansion. The analysis gives insight concerning analytic properties of the generator, showing that in general there are branch points as a function of angle and inverse square root singularities at the origin as a function of action.

  1. The Design and Characterization of Wideband Spline-profiled Feedhorns for Advanced ACTPol

    NASA Technical Reports Server (NTRS)

    Simon, Sara M.; Austermann, Jason; Beall, James A.; Choi, Steve K.; Coughlin, Kevin P.; Duff, Shannon M.; Gallardo, Patricio A.; Henderson, Shawn W.; Hills, Felicity B.; Ho, Shuay-Pwu Patty; Hubmayr, Johannes; Josaitis, Alec; Koopman, Brian J.; McMahon, Jeff J.; Nati, Federico; Newburgh, Laura; Niemack, Michael D.; Salatino, Maria; Schillaci, Alessandro; Wollack, Edward J.

    2016-01-01

    Advanced ACTPol (AdvACT) is an upgraded camera for the Atacama Cosmology Telescope (ACT) that will measure the cosmic microwave background in temperature and polarization over a wide range of angular scales and five frequency bands from 28-230 GHz. AdvACT will employ four arrays of feedhorn-coupled, polarization-sensitive multichroic detectors. To accommodate the higher pixel packing densities necessary to achieve AdvACT's sensitivity goals, we have developed and optimized wideband spline-profiled feedhorns for the AdvACT multichroic arrays that maximize coupling efficiency while carefully controlling polarization systematics. We present the design, fabrication, and testing of wideband spline-profiled feedhorns for the multichroic arrays of AdvACT.

  2. Vibration of shear-deformable rectangular plates using a spline-function Rayleigh-Ritz approach

    NASA Astrophysics Data System (ADS)

    Wang, S.; Dawe, D. J.

    1993-02-01

    The prediction of the natural frequencies of vibration of rectangular plates or orthotropic laminates is described, through the use of B-spline functions as trial functions in a Rayleigh-Ritz approach. Through-thickness shear deformation effects are included in the analysis, and hence assumptions have to be made for the spatial variation, over the plate middle surface, of the lateral deflection and of the two rotation components. Two versions of the spline-function Rayleigh-Ritz approach are described: in one of these the deflection and rotations are represented by functions of the same polynomial order, while in the other a lower-order representation is used for each rotation component in one of the coordinate directions. It is shown in a number of applications that the former version leads to shear-locking behavior, while the latter version avoids this behavior and is suitable for the analysis of both thick and thin plates.

  3. Surface evaluation with Ronchi test by using Malacara formula, genetic algorithms, and cubic splines

    NASA Astrophysics Data System (ADS)

    Cordero-Dávila, Alberto; González-García, Jorge

    2010-08-01

    In the manufacturing process of an optical surface with rotational symmetry, the ideal ronchigram is simulated and compared with the experimental ronchigram. From this comparison the technician, based on his or her experience, estimates the error on the surface. Quantitatively, the error on the surface can be described by a polynomial e(ρ²), and the coefficients can be estimated from the data of the ronchigrams (real and ideal) by solving a system of nonlinear differential equations related to the Malacara formula for the transverse aberration. To avoid the problems inherent in the use of polynomials, it is proposed to describe the errors on the surface by means of cubic splines. The coefficients of each spline are estimated from a discrete set of errors (ρi, ei), and these are evaluated by means of genetic algorithms so as to reproduce the experimental ronchigram starting from the ideal one.

  4. Vector splines on the sphere with application to the estimation of vorticity and divergence from discrete, noisy data

    NASA Technical Reports Server (NTRS)

    Wahba, G.

    1982-01-01

    Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.

  5. Estimation of Some Parameters from Morse-Morse-Spline-Van Der Waals Intermolecular Potential

    SciTech Connect

    Coroiu, I.

    2007-04-23

    Some parameters, such as transport cross sections and the isotopic thermal diffusion factor, have been calculated from an improved intermolecular potential, the Morse-Morse-Spline-van der Waals (MMSV) potential proposed by R.A. Aziz et al. The treatment was completely classical and no corrections for quantum effects were made. The results could be employed for isotope separation of different spherical and quasi-spherical molecules.

  6. Cubic spline reflectance estimates using the Viking lander camera multispectral data

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Huck, F. O.

    1976-01-01

    A technique was formulated for constructing spectral reflectance estimates from multispectral data obtained with the Viking lander cameras. The output of each channel was expressed as a linear function of the unknown spectral reflectance, producing a set of linear equations that were used to determine the coefficients in a representation of the spectral reflectance estimate as a natural cubic spline. The technique was used to produce spectral reflectance estimates for a variety of actual and hypothetical spectral reflectances.

  7. The trigonometric interpolation spline surface and its application in image zooming

    NASA Astrophysics Data System (ADS)

    Li, Juncheng; Yang, Lian

    2015-07-01

    The trigonometric polynomial spline surface generated over the space {1, sin t, cos t, sin 2t, cos 2t} is presented in this work. The proposed surface automatically interpolates all the given data points and satisfies C2 continuity without solving systems of equations. Image zooming using the proposed surface is then investigated. Experimental results show that the proposed surface is effective for dealing with image zooming problems.

  8. Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.

    PubMed

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2013-03-01

    This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, outstanding accuracy was achieved with the cubic B-splines. This accuracy was significantly better (p < 0.001) than the one obtained using the best solution proposed in our previous work. When applied to align real image sequences with unknown transformations involved, the alignment based on cubic B-splines also achieved superior results compared with our previous methodology (p < 0.001). The consequences of the temporal alignment for the dynamic center of pressure (COP) displacement were also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences.

  9. Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions

    SciTech Connect

    Jerome Blair

    2008-05-15

    An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
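
    The spline building block itself is standard: a least-squares smoothing spline chooses its knots automatically once a residual budget tied to the noise level is given. This is not the adaptive-bandwidth algorithm described above, only a sketch of the underlying fit (signal and noise level invented):

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 400)
        y = np.exp(-80.0 * (t - 0.3) ** 2) + 0.02 * rng.standard_normal(t.size)

        # s bounds the residual sum of squares; knots are inserted until it
        # is met, so s = n * sigma^2 is a natural choice for a known noise level
        fit = UnivariateSpline(t, y, k=3, s=t.size * 0.02 ** 2)
        rms = float(np.sqrt(np.mean((fit(t) - y) ** 2)))
        print(len(fit.get_knots()), rms)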

  10. Clustered mixed nonhomogeneous Poisson process spline models for the analysis of recurrent event panel data.

    PubMed

    Nielsen, J D; Dean, C B

    2008-09-01

    A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.

  11. Comparing tongue shapes from ultrasound imaging using smoothing spline analysis of variance.

    PubMed

    Davidson, Lisa

    2006-07-01

    Ultrasound imaging of the tongue is increasingly common in speech production research. However, there has been little standardization regarding the quantification and statistical analysis of ultrasound data. In linguistic studies, researchers may want to determine whether the tongue shape for an articulation under two different conditions (e.g., consonants in word-final versus word-medial position) is the same or different. This paper demonstrates how the smoothing spline ANOVA (SS ANOVA) can be applied to the comparison of tongue curves [Gu, Smoothing Spline ANOVA Models (Springer, New York, 2002)]. The SS ANOVA is a technique for determining whether or not there are significant differences between the smoothing splines that are the best fits for the two data sets being compared. If the interaction term of the SS ANOVA model is statistically significant, then the groups have different shapes. Since the interaction may be significant even if only a small section of the curves is different (e.g., the tongue root is the same, but the tip of one group is raised), Bayesian confidence intervals are used to determine which sections of the curves are statistically different. SS ANOVAs are illustrated with data comparing obstruents produced in word-final and word-medial coda position.

  12. A nonrational B-spline profiled horn with high displacement amplification for ultrasonic welding.

    PubMed

    Nguyen, Huu-Tu; Nguyen, Hai-Dang; Uan, Jun-Yen; Wang, Dung-An

    2014-12-01

    A new horn with high displacement amplification for ultrasonic welding is developed. The profile of the horn is a nonrational B-spline curve with an open uniform knot vector. The ultrasonic actuation of the horn exploits the first longitudinal displacement mode of the horn. The horn is designed by an optimization scheme and finite element analyses. The performance of the proposed horn has been evaluated by experiments. The displacement amplification of the proposed horn is 41.4% and 8.6% higher than that of the traditional catenoidal horn and a Bézier-profile horn, respectively, with the same length and end surface diameters. The developed horn has a lower displacement amplification than the nonuniform rational B-spline profiled horn, but a much smoother stress distribution. The developed horn, the catenoidal horn, and the Bézier horn are fabricated and used for ultrasonic welding of lap-shear specimens. The bonding strength of the joints welded by the open uniform nonrational B-spline (OUNBS) horn is the highest among the three horns for the various welding parameters considered. The locations of the failure mode and the distribution of the voids in the specimens are investigated to explain the reason for the high bonding strength achieved by the OUNBS horn.

  13. Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes

    SciTech Connect

    Garrett, W.R.

    1984-03-06

    A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller Belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. One prototype of this design includes a bellows seal instead of the floating seal at the upper end of the tool, with a bellows in the side of the lubricant chamber providing volume compensation. A second lubricant chamber is provided below the pressure seal, the lower end of the second chamber being closed by a bellows seal and a further bellows in the side of the second chamber providing volume compensation. Modifications provide hydraulic jars.

  14. Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes

    SciTech Connect

    Garrett, W.R.

    1981-08-04

    A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Preferably the spring means itself is a double acting compression spring means wherein the same spring means is compressed whether the joint is extended or contracted. The damper has a like low spring rate over a considerable range of deflection, both upon extension and contraction of the joint, but a gradually then rapidly increased spring rate upon approaching the travel limits in each direction. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The spring rings make only such line contact with one of the telescoping members as is required for guidance therefrom, and no contact with the other member. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. Magnetic and electrical means are provided to check for the presence and condition of the lubricant. To increase load capacity the spring means is made of a number of components acting in parallel.

  15. Preliminary results of flight tests of vortex attenuating splines. [evaluation of effectiveness of wingtip vortex attenuating device

    NASA Technical Reports Server (NTRS)

    Hastings, E. C., Jr.; Shanks, R. E.; Champine, R. A.; Copeland, W. L.; Young, D. C.

    1974-01-01

    Flight tests have been conducted to evaluate the effectiveness of a wingtip vortex attenuating device, referred to as a spline. Vortex penetrations were made with a PA-28 behind a C-54 aircraft with and without wingtip splines attached, and the resultant rolling acceleration was measured and related to the roll acceleration capability of the PA-28. Tests were conducted over a range of separation distances from about 5 nautical miles (n. mi.) to less than 1 n. mi. Preliminary results indicate that, with the splines installed, there was a significant reduction in the vortex-induced roll acceleration experienced by the PA-28 probe aircraft, and the distance at which the PA-28 roll control became ineffective was reduced from 2.5 n. mi. to 0.6 n. mi. or less. There was a slight increase in approach noise (approximately 4 dB) with the splines installed, due primarily to the higher engine power used during approach. Although the splines significantly reduced the C-54 rate of climb, the rates available with four engines were acceptable for this test program. The splines did not introduce any noticeable change in the handling qualities of the C-54.

  16. Series-nonuniform rational B-spline signal feedback: From chaos to any embedded periodic orbit or target point

    SciTech Connect

    Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong

    2015-07-15

    The self-controlling feedback control method requires an external periodic oscillator with a special design, which is technically challenging. This paper proposes a chaos control method based on time-series non-uniform rational B-spline (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor from the sampled chaotic time series, and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system's current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to the expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that, compared with delayed feedback control, our method takes less time to reach the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.

  17. MRI Signal Intensity Based B-Spline Nonrigid Registration for Pre- and Intraoperative Imaging During Prostate Brachytherapy

    PubMed Central

    Oguro, Sota; Tokuda, Junichi; Elhawary, Haytham; Haker, Steven; Kikinis, Ron; Tempany, Clare M.C.; Hata, Nobuhiko

    2009-01-01

    Purpose: To apply an intensity-based nonrigid registration algorithm to MRI-guided prostate brachytherapy clinical data and to assess its accuracy. Materials and Methods: A nonrigid registration of preoperative to intraoperative MRI images was carried out in 16 cases using a Basis-Spline algorithm in a retrospective manner. The registration was assessed qualitatively by experts' visual inspection and quantitatively by measuring the Dice similarity coefficient (DSC) for the total gland (TG), central gland (CG), and peripheral zone (PZ), the mutual information (MI) metric, and the fiducial registration error (FRE) between corresponding anatomical landmarks, for both the nonrigid and a rigid registration method. Results: All 16 cases were successfully registered in less than 5 min. After the nonrigid registration, the DSC values for TG, CG, and PZ were 0.91, 0.89, and 0.79, respectively; the MI metric was -0.19 ± 0.07; and the FRE was 2.3 ± 1.8 mm. All the metrics were significantly better than in the case of rigid registration, as determined by one-sided t-tests. Conclusion: The intensity-based nonrigid registration method was demonstrated to be feasible on clinical data and showed statistically improved metrics when compared to rigid registration alone. The method is a valuable tool for integrating pre- and intraoperative images for brachytherapy. PMID:19856437

  18. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    PubMed

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing using a weighted least squares error filter is presented for scanning electron microscope (SEM) images. A diversity of sample images is captured, and the performance is found to be better when compared with the moving average and the standard median filters, with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. A new noise reduction technique, based on cubic spline interpolation with Savitzky-Golay and a weighted least squares error method, is developed. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimation of the corresponding original autocorrelation. In the few test cases involving different images, the efficiency of the developed noise reduction filter is proved to be significantly better than that obtained from the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. PMID:25676007
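
    One simplified reading of the combination, Savitzky-Golay smoothing of subsampled scan-line data followed by cubic-spline reconstruction of the full line, runs as follows (window, order, sampling, and noise level are all illustrative):

        import numpy as np
        from scipy.signal import savgol_filter
        from scipy.interpolate import CubicSpline

        rng = np.random.default_rng(5)
        line = (np.sin(np.linspace(0, 4 * np.pi, 256))
                + 0.2 * rng.standard_normal(256))     # noisy SEM scan line

        xs = np.arange(0, 256, 4)                     # subsampled pixels
        smooth = savgol_filter(line[xs], window_length=11, polyorder=3)
        rec = CubicSpline(xs, smooth)(np.arange(256)) # full-resolution line
        print("residual rms: %.3f" % np.sqrt(np.mean((rec - line) ** 2)))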

  19. Distributed lag and spline modeling for predicting energy expenditure from accelerometry in youth

    PubMed Central

    Chen, Kong Y.; Acra, Sari A.; Buchowski, Maciej S.

    2010-01-01

    Movement sensing using accelerometers is commonly used for the measurement of physical activity (PA) and estimating energy expenditure (EE) under free-living conditions. The major limitation of this approach is a lack of accuracy and precision in estimating EE, especially for low-intensity activities. Thus, the objective of this study was to investigate the benefits of a distributed lag spline (DLS) modeling approach for the prediction of total daily EE (TEE) and EE in sedentary (1.0–1.5 metabolic equivalents; MET), light (1.5–3.0 MET), and moderate/vigorous (≥3.0 MET) intensity activities in 10- to 17-year-old youth (n = 76). We also explored the feasibility of the DLS modeling approach to predict physical activity EE (PAEE) and METs. Movement was measured by Actigraph accelerometers placed on the hip, wrist, and ankle. With a whole-room indirect calorimeter as the reference standard, prediction models (Hip, Wrist, Ankle, Hip+Wrist, Hip+Wrist+Ankle) for TEE, PAEE, and MET were developed and validated using the fivefold cross-validation method. The TEE predictions by these DLS models were not significantly different from the room calorimeter measurements (all P > 0.05). The Hip+Wrist+Ankle predicted TEE better than other models and reduced prediction errors in moderate/vigorous PA for TEE, MET, and PAEE (all P < 0.001). The Hip+Wrist reduced prediction errors for the PAEE and MET at sedentary PA (P = 0.020 and 0.021) compared with the Hip. Models that included Wrist correctly classified time spent at light PA better than other models. The means and standard deviations of the prediction errors for the Hip+Wrist+Ankle and Hip were 0.4 ± 144.0 and 1.5 ± 164.7 kcal for the TEE, 0.0 ± 84.2 and 1.3 ± 104.7 kcal for the PAEE, and −1.1 ± 97.6 and −0.1 ± 108.6 MET min for the MET models. We conclude that the DLS approach for accelerometer data improves detailed EE prediction in youth. PMID:19959770

  1. Spectral optical layer properties of cirrus from collocated airborne measurements and simulations

    NASA Astrophysics Data System (ADS)

    Finger, Fanny; Werner, Frank; Klingebiel, Marcus; Ehrlich, André; Jäkel, Evelyn; Voigt, Matthias; Borrmann, Stephan; Spichtinger, Peter; Wendisch, Manfred

    2016-06-01

    Spectral upward and downward solar irradiances from vertically collocated measurements above and below a cirrus layer are used to derive cirrus optical layer properties such as spectral transmissivity, absorptivity, reflectivity, and cloud top albedo. The radiation measurements are complemented by in situ cirrus crystal size distribution measurements and radiative transfer simulations based on the microphysical data. The close collocation of the radiative and microphysical measurements, above, beneath, and inside the cirrus, is accomplished by using a research aircraft (Learjet 35A) in tandem with the towed sensor platform AIRTOSS (AIRcraft TOwed Sensor Shuttle). AIRTOSS can be released from and retracted back to the research aircraft by means of a cable up to a distance of 4 km. Data were collected from two field campaigns over the North Sea and the Baltic Sea in spring and late summer 2013. One measurement flight over the North Sea proved to be exemplary, and as such the results are used to illustrate the benefits of collocated sampling. The radiative transfer simulations were applied to quantify the impact of cloud particle properties such as crystal shape, effective radius reff, and optical thickness τ on cirrus spectral optical layer properties. Furthermore, the radiative effects of low-level, liquid water (warm) clouds as frequently observed beneath the cirrus are evaluated. They may cause changes in the radiative forcing of the cirrus by a factor of 2. When low-level clouds below the cirrus are not taken into account, the radiative cooling effect (caused by reflection of solar radiation) due to the cirrus in the solar (shortwave) spectral range is significantly overestimated.
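
    For readers unfamiliar with the layer quantities named above, a minimal sketch of how they could be formed from collocated irradiances is given below; the flux names and values are assumptions for illustration, not data from the campaign.

        # F_dn/F_up: downward/upward irradiance above (top) and below (base)
        F_dn_top, F_up_top = 1000.0, 350.0      # W m-2, illustrative
        F_dn_base, F_up_base = 600.0, 80.0      # W m-2, illustrative

        transmissivity = F_dn_base / F_dn_top
        reflectivity = F_up_top / F_dn_top      # also the cloud top albedo,
                                                # neglecting upwelling from below
        # absorptivity from the net flux divergence across the layer
        absorptivity = (F_dn_top - F_up_top - F_dn_base + F_up_base) / F_dn_top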

  2. Application of collocated GPS and seismic sensors to earthquake monitoring and early warning.

    PubMed

    Li, Xingxing; Zhang, Xiaohong; Guo, Bofeng

    2013-10-24

    We explore the use of collocated GPS and seismic sensors for earthquake monitoring and early warning. The GPS and seismic data collected during the 2011 Tohoku-Oki (Japan) and the 2010 El Mayor-Cucapah (Mexico) earthquakes are analyzed using a tightly-coupled integration. The performance of the integrated results is validated by both time- and frequency-domain analysis. We detect the P-wave arrival, observe small-scale features of the movement from the integrated results, and locate the epicenter. Meanwhile, permanent offsets are extracted from the integrated displacements with high accuracy and used for reliable fault slip inversion and magnitude estimation.
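
    The integration in the paper is tightly coupled at the observation level; as a much simpler illustration of the underlying idea, the sketch below fuses accelerometer and GPS data in a loosely coupled Kalman filter, with acceleration driving the state forward and GPS displacements providing updates. All rates and noise levels are invented.

        import numpy as np

        dt, f0 = 0.01, 0.5                        # accel. sample rate (s), test freq (Hz)
        F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [displacement, velocity]
        G = np.array([[0.5 * dt**2], [dt]])       # acceleration input matrix
        H = np.array([[1.0, 0.0]])                # GPS observes displacement
        Q = (G @ G.T) * 1e-2                      # process noise (invented)
        R = np.array([[0.02**2]])                 # 2 cm GPS noise (invented)

        x, P = np.zeros((2, 1)), np.eye(2)
        rng = np.random.default_rng(7)
        w = 2 * np.pi * f0
        for k in range(1, 2001):
            t = k * dt
            acc = np.cos(w * t)                               # synthetic ground accel.
            x = F @ x + G * acc                               # predict with accel.
            P = F @ P @ F.T + Q
            if k % 100 == 0:                                  # 1 Hz GPS displacement
                z = (1.0 - np.cos(w * t)) / w**2 + rng.normal(0, 0.02)
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
                x = x + K @ (np.array([[z]]) - H @ x)
                P = (np.eye(2) - K @ H) @ P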

  3. On the Separation of Bedforms by Using Robust Spline Filter and Wavelet Transform, application on the Parana River, Argentina

    NASA Astrophysics Data System (ADS)

    Gutierrez, R. R.; Abad, J. D.; Parsons, D. R.

    2011-12-01

    The quantification of the variability of bedform geometry is necessary for scientific and practical purposes. For the former, it is necessary for modeling bed roughness, cross-strata sets, vertical sorting, sediment transport rates, the transition between two-dimensional and three-dimensional dunes, velocity pulsations, flow over bedforms, the interaction between flow over bedforms and groundwater, and the transport of contaminants. For practical purposes, the study of the variability of bedforms is important for predicting floods and flow resistance, predicting uplift of manmade structures underneath river beds, tracking future changes of bedforms and biota following dam removal, estimating the relationship between bedform characteristics and biota, and in river restoration, among others. Currently there is no standard nomenclature or procedure to separate bedform features such as sand waves, dunes and ripples, which are commonly present in large rivers. Likewise, there is no standard definition of the scope of the different scales of such bedform features. The present study proposes a standardization of the nomenclature and symbolic representation of bedform features and elaborates on the combined application of a robust spline filter and continuous wavelet transforms to separate the morphodynamic features. A fully automated robust spline procedure for uniformly sampled datasets is used. The algorithm, based on a penalized least squares method, allows fast smoothing of uniformly sampled data by means of the discrete cosine transform. The wavelet transforms, which overcome some limitations of the Fourier transform, are applied to identify the spectrum of bedform wavelengths. The proposed separation method is applied to a 370-m-wide, 1.028-km-long swath of bed morphology data from the Parana River, one of the world's largest rivers, located in Argentina. After the separation is carried out, the descriptors (e.g. wavelength, slope, and amplitude for both
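
    The penalized least squares smoother mentioned above can be written in a few lines for uniformly sampled data, since the second-difference penalty diagonalizes in the discrete cosine transform basis (in the spirit of Garcia's algorithm). The smoothing parameter below is fixed by hand rather than chosen automatically, and the bed profile is a toy signal, so this is a sketch only.

        import numpy as np
        from scipy.fft import dct, idct

        def smooth_dct(y, s=10.0):
            # minimizes ||yhat - y||^2 + s * ||D2 yhat||^2 in the DCT domain
            n = y.size
            lam = -2.0 + 2.0 * np.cos(np.arange(n) * np.pi / n)
            gamma = 1.0 / (1.0 + s * lam ** 2)
            return idct(gamma * dct(y, norm='ortho'), norm='ortho')

        x = np.linspace(0, 10, 400)
        bed = np.sin(x) + 0.2 * np.sin(8 * x)     # toy dunes + ripples profile
        noisy = bed + np.random.default_rng(2).normal(0, 0.05, x.size)
        trend = smooth_dct(noisy, s=500.0)        # large-scale bedforms
        residual = noisy - trend                  # small-scale features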

  4. Molecular detection and Smoothing spline clustering of the IBV strains detected in China during 2011-2012.

    PubMed

    Zhang, Zhikun; Zhou, Yingshun; Wang, Hongning; Zeng, Fanya; Yang, Xin; Zhang, Yi; Zhang, Anyun

    2016-01-01

    Infectious bronchitis virus (IBV) is a highly variable virus with a large number of genotypes. During 2011-2012, nineteen wild IBV strains were isolated in China. Sequence analysis showed that these isolates fell into five sub-clusters: A2-like, CKCHLDL08I-like, SAIBK-like, KM91-like and TW97/4-like. Phylogenetic analysis based on the 1118 sequences available online suggested that all IBVs can be classified into six clusters. The prevalent strains, including all the isolates, were in cluster VI, with a genetic distance of 0.194-0.259 to Mass-type vaccines. In addition, we introduced the smoothing spline clustering (SSC) method to estimate the highly variable sites for some sub-clusters. The results showed that the highly variable sites differ among sub-clusters; the N-terminal sequences of the 4/91-like, TW97/4-like and Arkansas-like sub-clusters are more variable than those of the others. This is the first time that the SSC method has been used for an evolutionary study of IBV.

  5. Improved Leg Tracking Considering Gait Phase and Spline-Based Interpolation during Turning Motion in Walk Tests.

    PubMed

    Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki

    2015-09-04

    Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate elderly individual ambulatory ability in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate the motor function in walk tests, such as the TUG. The system tracks both legs and measures the trajectory of both legs. However, both legs might be close to each other, and one leg might be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than that during straight walking and the moving direction rapidly changes. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull-Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chances of false tracking. In addition, we verify the measurement accuracy of the leg trajectory compared to a three-dimensional motion analysis system (VICON).
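
    Catmull-Rom interpolation across an occluded interval can be sketched as follows; the four control points (two observed leg positions before the occlusion and two after) and the uniform parameterization are illustrative assumptions, not the authors' data.

        import numpy as np

        def catmull_rom(p0, p1, p2, p3, t):
            # position on the segment between p1 and p2, t in [0, 1]
            t2, t3 = t * t, t * t * t
            return 0.5 * ((2.0 * p1)
                          + (-p0 + p2) * t
                          + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                          + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3)

        # two observed positions before the occlusion and two after it (toy)
        p0, p1 = np.array([0.0, 0.0]), np.array([0.1, 0.25])
        p2, p3 = np.array([0.5, 0.4]), np.array([0.8, 0.45])
        gap_fill = [catmull_rom(p0, p1, p2, p3, t) for t in np.linspace(0, 1, 5)]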

  6. ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning.

    PubMed

    Gandola, Emanuele; Antonioli, Manuela; Traficante, Alessio; Franceschini, Simone; Scardi, Michele; Congestri, Roberta

    2016-05-01

    Toxigenic cyanobacteria are one of the main health risks associated with water resources worldwide, as their toxins can affect humans and fauna exposed via drinking water, aquaculture and recreation. Microscopy monitoring of cyanobacteria in water bodies and massive growth systems is a routine operation for cell abundance and growth estimation. Here we present ACQUA (Automated Cyanobacterial Quantification Algorithm), a new fully automated image analysis method designed for filamentous genera in bright-field microscopy. A pre-processing algorithm has been developed to highlight filaments of interest against background signals due to other phytoplankton and dust. A spline-fitting algorithm has been designed to recombine interrupted and crossing filaments in order to perform accurate morphometric analysis and to extract the surface pattern information of highlighted objects. In addition, 17 specific pattern indicators have been developed and used as input data for a machine-learning algorithm dedicated to the recognition of five widespread toxic or potentially toxic filamentous genera in freshwater: Aphanizomenon, Cylindrospermopsis, Dolichospermum, Limnothrix and Planktothrix. The method was validated using freshwater samples from three Italian volcanic lakes, comparing automated vs. manual results. ACQUA proved to be a fast and accurate tool to rapidly assess freshwater quality and to characterize cyanobacterial assemblages in aquatic environments. PMID:27012737

  7. Improved Leg Tracking Considering Gait Phase and Spline-Based Interpolation during Turning Motion in Walk Tests

    PubMed Central

    Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki

    2015-01-01

    Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate elderly individual ambulatory ability in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate the motor function in walk tests, such as the TUG. The system tracks both legs and measures the trajectory of both legs. However, both legs might be close to each other, and one leg might be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than that during straight walking and the moving direction rapidly changes. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull–Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chances of false tracking. In addition, we verify the measurement accuracy of the leg trajectory compared to a three-dimensional motion analysis system (VICON). PMID:26404302

  8. Bundle Block Adjustment with 3D Natural Cubic Splines

    PubMed Central

    Lee, Won Hee; Yu, Kiyun

    2009-01-01

    Point-based methods undertaken by experienced human operators are very effective for traditional photogrammetric activities, but they are not appropriate in the autonomous environment of digital photogrammetry. To develop more reliable and accurate techniques, higher level objects with linear features accommodating elements other than points are alternatively adopted for aerial triangulation. Even though recent advanced algorithms provide accurate and reliable linear feature extraction, the use of such features that can consist of complex curve forms is more difficult than extracting a discrete set of points. Control points that are the initial input data, and break points that are end points of segmented curves, are readily obtained. Employment of high level features increases the feasibility of using geometric information and provides access to appropriate analytical solutions for advanced computer technology. PMID:22303144

  9. When Joy Matters: The Importance of Hedonic Stimulation in Collocated Collaboration with Large-Displays

    NASA Astrophysics Data System (ADS)

    Novak, Jasminko; Schmidt, Susanne

    Hedonic aspects are increasingly considered an important factor in user acceptance of information systems, especially for activities with high self-fulfilling value for the users. In this paper we report on the results of an experiment investigating the hedonic qualities of an interactive large-display workspace for collocated collaboration in sales-oriented travel advisory. The results show a higher hedonic stimulation quality for a touch-based large-display travel advisory workspace than for a traditional workspace with catalogues. Together with the feedback of both customers and travel agents, this suggests the adequacy of using touch-based large displays with visual workspaces to support the hedonic stimulation of user experience in collocated collaboration settings. The relation of high perceived hedonic quality to positive emotional attitudes towards the use of a large-display workspace indicates that even in utilitarian activities (e.g. reaching sales goals for travel agents) hedonic aspects can play an important role. This calls for moving from the traditional divide between hedonic and utilitarian systems in the current literature towards a more balanced view of systems that provide both utilitarian and hedonic sources of value to the user.

  10. Estimated variability of National Atmospheric Deposition Program/Mercury Deposition Network measurements using collocated samplers

    USGS Publications Warehouse

    Wetherbee, G.A.; Gay, D.A.; Brunette, R.C.; Sweet, C.W.

    2007-01-01

    The National Atmospheric Deposition Program/Mercury Deposition Network (MDN) provides long-term, quality-assured records of mercury in wet deposition in the USA and Canada. Interpretation of spatial and temporal trends in the MDN data requires quantification of the variability of the MDN measurements. Variability is quantified for MDN data from collocated samplers at MDN sites in two states, one in Illinois and one in Washington. Median absolute differences in the collocated sampler data for total mercury concentration are approximately 11% of the median mercury concentration for all valid 1999-2004 MDN data. Median absolute differences are between 3.0% and 14% of the median MDN value for collector catch (sample volume) and between 6.0% and 15% of the median MDN value for mercury wet deposition. The overall measurement errors are sufficiently low to resolve between NADP/MDN measurements by ±2 ng L⁻¹ and ±2 µg m⁻² year⁻¹, which are the contour intervals used to display the data on NADP isopleth maps for concentration and deposition, respectively. © Springer Science+Business Media B.V. 2007.

  11. Geographic analysis of the feasibility of collocating algal biomass production with wastewater treatment plants.

    PubMed

    Fortier, Marie-Odile P; Sturm, Belinda S M

    2012-10-16

    Resource demand analyses indicate that algal biodiesel production would require unsustainable amounts of freshwater and fertilizer supplies. Alternatively, municipal wastewater effluent can be used, but this restricts production of algae to areas near wastewater treatment plants (WWTPs), and to date, there has been no geospatial analysis of the feasibility of collocating large algal ponds with WWTPs. The goals of this analysis were to determine the available areas by land cover type within radial extents (REs) up to 1.5 miles from WWTPs; to determine the limiting factor for algal production using wastewater; and to investigate the potential algal biomass production at urban, near-urban, and rural WWTPs in Kansas. Over 50% and 87% of the land around urban and rural WWTPs, respectively, was found to be potentially available for algal production. The analysis highlights a trade-off between urban WWTPs, which are generally land-limited but have excess wastewater effluent, and rural WWTPs, which are generally water-limited but have 96% of the total available land. Overall, commercial-scale algae production collocated with WWTPs is feasible; 29% of the Kansas liquid fuel demand could be met with implementation of ponds within 1 mile of all WWTPs and supplementation of water and nutrients when these are limited. PMID:22970803

  12. Entropy Stable Staggered Grid Spectral Collocation for the Burgers' and Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2015-01-01

    Staggered grid, entropy stable discontinuous spectral collocation operators of any order are developed for Burgers' and the compressible Navier-Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [1, 2] extends the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) points to a combination of tensor-product Legendre-Gauss (LG) and LGL points. The new semi-discrete operators discretely conserve mass, momentum, and energy and satisfy a mathematical entropy inequality for both Burgers' and the compressible Navier-Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly to implement. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinear stability proof for the compressible Navier-Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).
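
    As a small aside for readers, the two point families named above can be generated in a few lines; this is a generic sketch, not the authors' operators.

        import numpy as np
        from numpy.polynomial import legendre

        def lg_points(n):
            # Legendre-Gauss nodes and weights (all points interior)
            return legendre.leggauss(n)

        def lgl_points(n):
            # Legendre-Gauss-Lobatto: endpoints plus the roots of P'_{n-1},
            # with weights w = 2 / (n (n-1) P_{n-1}(x)^2)
            c = np.zeros(n)
            c[-1] = 1.0                               # P_{n-1} in Legendre basis
            x = np.sort(np.r_[-1.0, legendre.legroots(legendre.legder(c)), 1.0])
            w = 2.0 / (n * (n - 1) * legendre.legval(x, c) ** 2)
            return x, w

        print(lg_points(4)[0])    # 4 interior nodes
        print(lgl_points(4)[0])   # includes -1 and +1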

  13. Improving Assimilation of Microwave Radiances in Cloudy Situations with Collocated High Resolution Imager Cloud Mask

    NASA Astrophysics Data System (ADS)

    Han, H.; Li, J.; Goldberg, M.; Wang, P.; Li, Z.

    2014-12-01

    Tropical cyclones (TCs), accompanied by heavy rainfall and strong wind, are high-impact weather systems, often causing extensive property damage and even fatalities when they make landfall. Better prediction of TCs can lead to a substantial reduction in social and economic damage, and there is growing interest in enhanced satellite data assimilation for improving TC forecasts. Accurate cloud detection is one of the most important factors in satellite data assimilation due to the uncertainties of cloud properties and their impacts on satellite-observed radiances. To enhance the accuracy of cloud detection and improve TC forecasting, microwave measurements are collocated with a high spatial resolution imager cloud mask. The collocated advanced microwave sounder measurements are assimilated for forecasts of hurricane Sandy (2012) and typhoon Haiyan (2013) using the Weather Research and Forecasting (WRF) model and the 3DVAR-based Gridpoint Statistical Interpolation (GSI) data assimilation system. Experiments will be carried out to determine a cloud cover threshold to distinguish between cloud-affected and cloud-unaffected footprints. The results indicate that the use of the high spatial resolution imager cloud mask can improve the accuracy of TC forecasts by eliminating cloud-contaminated pixels. The methodology used in this study is applicable to advanced microwave sounders and high spatial resolution imagers, such as ATMS/VIIRS onboard NPP and JPSS, and IASI/AVHRR from Metop, for improved TC track and intensity forecasts.

  14. Uncertainty Propagation for Turbulent, Compressible Flow in a Quasi-1D Nozzle Using Stochastic Methods

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise

    2003-01-01

    This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.
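
    A one-parameter stochastic collocation step can be sketched in a few lines: the deterministic solver is evaluated at Gauss-Hermite nodes, and moments are formed from the quadrature weights. The model f below is an invented stand-in for the nozzle solver, and the uncertain input is assumed Gaussian.

        import numpy as np

        def f(xi):                       # deterministic solver at one input value
            return np.exp(0.3 * xi) + xi ** 2

        mu, sigma, n = 1.0, 0.2, 8       # uncertain parameter ~ N(mu, sigma^2)
        nodes, weights = np.polynomial.hermite_e.hermegauss(n)
        w = weights / np.sqrt(2.0 * np.pi)        # normalize against the N(0,1) pdf
        samples = np.array([f(mu + sigma * z) for z in nodes])
        mean = np.sum(w * samples)                # collocation estimate of E[f]
        var = np.sum(w * samples ** 2) - mean ** 2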

  15. The Norwegian Healthier Goats program--modeling lactation curves using a multilevel cubic spline regression model.

    PubMed

    Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S

    2014-07-01

    In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield.

  16. Spline function approximation techniques for image geometric distortion representation. [for registration of multitemporal remote sensor imagery

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1975-01-01

    Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.

  17. Bivariate spline solution of time dependent nonlinear PDE for a population density over irregular domains.

    PubMed

    Gutierrez, Juan B; Lai, Ming-Jun; Slavov, George

    2015-12-01

    We study a time-dependent partial differential equation (PDE), arising from classic models in ecology involving logistic growth with an Allee effect, by introducing a discrete weak solution. Existence, uniqueness and stability of the discrete weak solutions are discussed. We use bivariate splines to approximate the discrete weak solution of the nonlinear PDE. A computational algorithm is designed to solve this PDE. A convergence analysis of the algorithm is presented. We present some simulations of population development over irregular domains. Finally, we discuss applications in epidemiology and other ecological problems.

  18. On spline and polynomial interpolation of low earth orbiter data: GRACE example

    NASA Astrophysics Data System (ADS)

    Uz, Metehan; Ustun, Aydin

    2016-04-01

    GRACE satellites, which are equipped with specific science instruments such as the K/Ka-band ranging system, have been orbiting the Earth since 17 March 2002. In this study, the kinematic and reduced-dynamic orbits of GRACE-A/B were determined at 10 s intervals using the Bernese 5.2 GNSS software for May 2010, and the daily orbit solutions were validated against the GRACE science orbit, GNV1B. The RMS values of the kinematic and reduced-dynamic orbit validations were about 2.5 and 1.5 cm, respectively. 
Throughout the time period of interest, data gaps were encountered in the kinematic orbits due to missing GPS measurements and satellite manoeuvres. Thus, least-squares polynomial and cubic spline approaches (natural, not-a-knot and clamped) were tested both to bridge small data gaps and to densify the precise orbits to 5 s intervals. The latter is necessary, for example, to use the K/Ka-band observations. The coordinates interpolated to 5 s intervals were also validated against GNV1B orbits. The validation results show that the spline approaches deliver approximately 1 cm RMS values and are better than least-squares polynomial interpolation. When data gaps occur in a daily orbit, the spline validation results worsen depending on the size of the gaps. Hence, the daily orbits were fragmented into small arcs of 30, 40 or 50 knots to evaluate the effect of least-squares polynomial interpolation on data gaps. From randomly selected daily arc sets belonging to different times, 5, 10, 15 and 20 knots were removed independently. While 30-knot arcs were evaluated with a fifth-degree polynomial, a sixth-degree polynomial was employed to interpolate artificial gaps over 40- and 50-knot arcs. The differences between interpolated and removed coordinates were tested against the GNV1B validation RMS result of 2.5 cm. With 95% confidence level, data gaps up to 5 and 10 knots can
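
    The three cubic spline variants tested above are available directly in SciPy; the sketch below densifies a synthetic 10 s series to 5 s epochs under each boundary condition. The orbit values are invented stand-ins, not GRACE data.

        import numpy as np
        from scipy.interpolate import CubicSpline

        t10 = np.arange(0.0, 300.0, 10.0)                  # 10 s epochs
        x10 = 7000e3 * np.cos(2 * np.pi * t10 / 5400.0)    # toy coordinate (m)
        t5 = np.arange(0.0, 290.0, 5.0)                    # densified 5 s epochs

        for bc in ('natural', 'not-a-knot', 'clamped'):
            cs = CubicSpline(t10, x10, bc_type=bc)
            x5 = cs(t5)                                    # interpolated orbit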

  19. Power spectral density estimation by spline smoothing in the frequency domain

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Thompson, J. R.

    1972-01-01

    An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying FFT techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.
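
    A compressed sketch of the procedure (a naive FFT periodogram smoothed by a cubic smoothing spline) is given below; the record and the smoothing factor, chosen by eye, are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(3)
        n, dt = 1024, 0.01
        x = rng.normal(size=n)                               # zero-mean record
        freqs = np.fft.rfftfreq(n, dt)
        periodogram = (np.abs(np.fft.rfft(x)) ** 2) * dt / n # naive estimate
        spl = UnivariateSpline(freqs, periodogram, k=3, s=0.05)
        psd_smooth = spl(freqs)                              # smoothed spectrum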

  20. Power spectral density estimation by spline smoothing in the frequency domain.

    NASA Technical Reports Server (NTRS)

    De Figueiredo, R. J. P.; Thompson, J. R.

    1972-01-01

    An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying Fast Fourier Transform techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.

  1. A unified approach for nonlinear vibration analysis of curved structures using non-uniform rational B-spline representation

    NASA Astrophysics Data System (ADS)

    Askari, H.; Esmailzadeh, E.; Barari, A.

    2015-09-01

    A novel procedure for the nonlinear vibration analysis of curved beams is presented. The Non-Uniform Rational B-Spline (NURBS) is combined with the Euler-Bernoulli beam theory to define the curvature of the structure. The governing equation of motion and a general frequency formula, expressed in terms of the NURBS variables and applicable to any type of curvature, are developed. The Galerkin procedure is implemented to obtain the nonlinear ordinary differential equation of the curved system, and the multiple time scales method is utilized to find the corresponding frequency responses. As a case study, the nonlinear vibration of carbon nanotubes with different shapes of curvature is investigated. The effect of oscillation amplitude and waviness on the natural frequency of the curved nanotube is evaluated, and the primary resonance of the system with respect to the variations of different parameters is discussed. For validation, the natural frequencies evaluated with the proposed approach are compared with molecular dynamics simulation results reported in the literature for several types of carbon nanotubes.

  2. DBSR_HF: A B-spline Dirac-Hartree-Fock program

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg; Froese Fischer, Charlotte

    2016-05-01

    A B-spline version of a general Dirac-Hartree-Fock program is described. The usual differential equations are replaced by a set of generalized eigenvalue problems of the form (Ha -εa B) Pa = 0, where Ha and B are the Hamiltonian and overlap matrices, respectively, and Pa is the two-component relativistic orbital in the B-spline basis. A default universal grid allows for flexible adjustment to different nuclear models. When two orthogonal orbitals are both varied, the energy must also be stationary with respect to orthonormal transformations. At such a stationary point the off-diagonal Lagrange multipliers may be eliminated through projection operators. The self-consistent field procedure exhibits excellent convergence. Several atomic states can be considered simultaneously, including some configuration-interaction calculations. The program provides several options for the treatment of Breit interaction and QED corrections. The information about atoms up to Z = 104 is stored by the program. Along with a simple interface through command-line arguments, this information allows the user to run the program with minimal initial preparations.
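
    The generalized eigenvalue problem quoted above, (Ha − εa B)Pa = 0, is the standard symmetric-definite form; a generic sketch with random stand-ins for the Hamiltonian and overlap matrices follows (this is not the DBSR_HF code).

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(4)
        n = 40
        A = rng.normal(size=(n, n))
        H = (A + A.T) / 2.0                     # symmetric "Hamiltonian" stand-in
        C = rng.normal(size=(n, n))
        B = C @ C.T + n * np.eye(n)             # symmetric positive definite overlap
        eps, P = eigh(H, B)                     # solves H P = eps B P
        # columns of P play the role of orbital coefficients in the B-spline basis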

  3. Transforming wealth: using the inverse hyperbolic sine (IHS) and splines to predict youth's math achievement.

    PubMed

    Friedline, Terri; Masa, Rainier D; Chowa, Gina A N

    2015-01-01

    The natural log and categorical transformations commonly applied to wealth for meeting the statistical assumptions of research may not always be appropriate for adjusting for skewness given wealth's unique properties. Finding and applying appropriate transformations is becoming increasingly important as researchers consider wealth as a predictor of well-being. We present an alternative transformation-the inverse hyperbolic sine (IHS)-for simultaneously dealing with skewness and accounting for wealth's unique properties. Using the relationship between household wealth and youth's math achievement as an example, we apply the IHS transformation to wealth data from US and Ghanaian households. We also explore non-linearity and accumulation thresholds by combining IHS transformed wealth with splines. IHS transformed wealth relates to youth's math achievement similarly when compared to categorical and natural log transformations, indicating that it is a viable alternative to other transformations commonly used in research. Non-linear relationships and accumulation thresholds emerge that predict youth's math achievement when splines are incorporated. In US households, accumulating debt relates to decreases in math achievement whereas accumulating assets relates to increases in math achievement. In Ghanaian households, accumulating assets between the 25th and 50th percentiles relates to increases in youth's math achievement.
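
    The IHS transformation itself is a one-liner, log(x + sqrt(x^2 + 1)); unlike the natural log, it is defined at zero and for negative wealth (debt). A minimal illustration with invented values:

        import numpy as np

        wealth = np.array([-5000.0, 0.0, 250.0, 10000.0])
        ihs_wealth = np.arcsinh(wealth)     # log(x + sqrt(x**2 + 1))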

  4. Transforming wealth: using the inverse hyperbolic sine (IHS) and splines to predict youth's math achievement.

    PubMed

    Friedline, Terri; Masa, Rainier D; Chowa, Gina A N

    2015-01-01

    The natural log and categorical transformations commonly applied to wealth for meeting the statistical assumptions of research may not always be appropriate for adjusting for skewness given wealth's unique properties. Finding and applying appropriate transformations is becoming increasingly important as researchers consider wealth as a predictor of well-being. We present an alternative transformation-the inverse hyperbolic sine (IHS)-for simultaneously dealing with skewness and accounting for wealth's unique properties. Using the relationship between household wealth and youth's math achievement as an example, we apply the IHS transformation to wealth data from US and Ghanaian households. We also explore non-linearity and accumulation thresholds by combining IHS transformed wealth with splines. IHS transformed wealth relates to youth's math achievement similarly when compared to categorical and natural log transformations, indicating that it is a viable alternative to other transformations commonly used in research. Non-linear relationships and accumulation thresholds emerge that predict youth's math achievement when splines are incorporated. In US households, accumulating debt relates to decreases in math achievement whereas accumulating assets relates to increases in math achievement. In Ghanaian households, accumulating assets between the 25th and 50th percentiles relates to increases in youth's math achievement. PMID:25432618

  5. 3D range image resampling using B-spline surface fitting

    NASA Astrophysics Data System (ADS)

    Li, Songtao; Zhao, Dongming

    2000-05-01

    Many optical range sensors use Equal Angle Increment (EAI) sampling. This type of sensor uses rotating mirrors with constant angular velocity for radar and triangulation techniques, where the sensor sends and receives modulated coherent light through the mirror. Such an EAI model generates data for surface geometrical description that has to be converted, in many applications, into data which meet the desired Equal Distance Increment orthographic projection model. For accurate analysis of 3D images, an interpolation scheme is needed to resample the range data into spatially equidistant samples that emulate the Cartesian orthographic projection model. In this paper, a resampling approach using B-Spline surface fitting is proposed. The first step is to select a new scale for the X, Y, Z directions based on the 3D Cartesian coordinates of range data obtained from the sensor parameters. The size of the new range image and the new coordinates of each point are then computed. The new range value is obtained using a B-Spline surface fit based on the new Cartesian coordinates. The experiments show that this resampling approach provides a geometrically accurate solution for many industrial applications which deploy EAI sampling sensors.
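
    A resampling step of this kind can be sketched with SciPy's smoothing bivariate spline: scattered (x, y, z) samples are fitted and then evaluated on an equally spaced grid. The data and smoothing factor below are synthetic assumptions, not the authors' pipeline.

        import numpy as np
        from scipy.interpolate import SmoothBivariateSpline

        rng = np.random.default_rng(5)
        x = rng.uniform(-1, 1, 400)                  # scattered Cartesian samples
        y = rng.uniform(-1, 1, 400)
        z = np.cos(2 * x) * np.sin(2 * y) + rng.normal(0, 0.01, 400)

        spl = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=400 * 0.01 ** 2)
        xi = np.linspace(-0.9, 0.9, 64)              # equal-distance grid
        yi = np.linspace(-0.9, 0.9, 64)
        zi = spl(xi, yi)                             # (64, 64) resampled range image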

  6. Three-dimensional range data interpolation using B-spline surface fitting

    NASA Astrophysics Data System (ADS)

    Li, Songtao; Zhao, Dongming

    2000-05-01

    Many optical range sensors use an Equal Angle Increment (EAI) sampling. This type of sensors uses rotating mirrors with a constant angular velocity using radar and triangulation techniques, where the sensor sends and receives the modulated coherent light through the mirror. Such an EAI model generates data for surface geometrical description that has to be converted, in many applications, into data which meet the desired Equal Distance Increment orthographic projection model. For an accurate analysis in 3D images, a 3D interpolation scheme is needed to resample the range data into spatially equally-distance sampling data that emulate the Cartesian orthographic projection model. In this paper, a resampling approach using a B-Spline surface fitting is proposed. The first step is to select a new scale for all X, Y, Z directions based on the 3D Cartesian coordinates of range data obtained from the sensor parameters. The size of the new range image and the new coordinates of each point are then computed according to the actual references of (X, Y, Z) coordinates and the new scale. The new range data are interpolated using a B-Spline surface fitting based on the new Cartesian coordinates. The experiments show that this 3D interpolation approach provides a geometrically accurate solution for many industrial applications which deploy the EAI sampling sensors.

  7. Fuzzy B-spline optimization for urban slum three-dimensional reconstruction using ENVISAT satellite data

    NASA Astrophysics Data System (ADS)

    Marghany, Maged

    2014-06-01

    A critical challenge in urban areas is slums. In fact, they are considered a source of crime and disease due to poor-quality housing, unsanitary conditions, poor infrastructure and insecure occupancy. The poor in dense urban slums are the most vulnerable to infection due to (i) inadequate and restricted access to safe drinking water and sufficient quantities of water for personal hygiene; (ii) the lack of removal and treatment of excreta; and (iii) the lack of removal of solid waste. This study aims to investigate the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) urban slum reconstruction in developing countries such as Egypt. The main objective of this work is to apply an automatic 3-D detection algorithm, based on the fuzzy B-spline method, to urban slums in ENVISAT ASAR and Google Earth images acquired over Cairo, Egypt. The results show that the fuzzy algorithm is the best indicator for chaotic urban slums, as it can discriminate them from their surrounding environment. The combination of fuzzy and B-spline algorithms was then used to reconstruct the 3-D urban slums. The results show that urban slums, road networks, and infrastructure are clearly discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.

  8. Vs30 and spectral response from collocated shallow, active- and passive-source Vs data at 27 sites in Puerto Rico

    USGS Publications Warehouse

    Odum, Jack K.; Stephenson, William J.; Williams, Robert A.; von Hillebrandt-Andrade, Christa

    2013-01-01

    Shear‐wave velocity (VS) and time‐averaged shear‐wave velocity to 30 m depth (VS30) are the key parameters used in seismic site response modeling and earthquake engineering design. Where VS data are limited, available data are often used to develop and refine map‐based proxy models of VS30 for predicting ground‐motion intensities. In this paper, we present shallow VS data from 27 sites in Puerto Rico. These data were acquired using a multimethod acquisition approach consisting of noninvasive, collocated, active‐source body‐wave (refraction/reflection), active‐source surface wave at nine sites, and passive‐source surface‐wave refraction microtremor (ReMi) techniques. VS‐versus‐depth models are constructed and used to calculate spectral response plots for each site. Factors affecting method reliability are analyzed with respect to site‐specific differences in bedrock VS and spectral response. At many but not all sites, body‐ and surface‐wave methods generally determine similar depths to bedrock, and it is the difference in bedrock VS that influences site amplification. The predicted resonant frequencies for the majority of the sites are observed to be within a relatively narrow bandwidth of 1–3.5 Hz. For a first‐order comparison of peak frequency position, predictive spectral response plots from eight sites are plotted along with seismograph instrument spectra derived from the time series of the 16 May 2010 Puerto Rico earthquake. We show how a multimethod acquisition approach using collocated arrays complements and corroborates VS results, thus adding confidence that reliable site characterization information has been obtained.
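
    For reference, the time-averaged VS30 named above is 30 m divided by the total vertical travel time through the layers in the upper 30 m; the layer model below is invented for illustration.

        import numpy as np

        thickness = np.array([4.0, 8.0, 18.0])    # layer thicknesses (m), sum 30
        vs = np.array([180.0, 320.0, 650.0])      # layer velocities (m/s)
        vs30 = 30.0 / np.sum(thickness / vs)      # ~400 m/s for these values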

  9. Lexical Collocations and Their Impact on the Online Writing of Taiwanese College English Majors and Non-English Majors

    ERIC Educational Resources Information Center

    Hsu, Jeng-yih

    2007-01-01

    The present study investigates the use of English lexical collocations and their relation to the online writing of Taiwanese college English majors and non-English majors. Data for the study were collected from 41 English majors and 21 non-English majors at a national university of science and technology in southern Taiwan. Each student was asked…

  10. Idiomobile for Learners of English: A Study of Learners' Usage of a Mobile Learning Application for Learning Idioms and Collocations

    ERIC Educational Resources Information Center

    Amer, Mahmoud Atiah

    2010-01-01

    This study explored how four groups of English learners used a mobile software application developed by the researcher for learning idiomatic expressions and collocations. A total of 45 learners in the study used the application for a period of one week. Data for this study was collected from a questionnaire, the application, and follow-up…

  11. A Study of Learners' Usage of a Mobile Learning Application for Learning Idioms and Collocations

    ERIC Educational Resources Information Center

    Amer, Mahmoud

    2014-01-01

    This study explored how four groups of language learners used a mobile software application developed by the researcher for learning idiomatic expressions and collocations. A total of 45 participants in the study used the application for a period of one week. Data for this study was collected from the application, a questionnaire, and follow-up…

  12. Stretched Verb Collocations with "Give": Their Use and Translation into Spanish Using the BNC and CREA Corpora

    ERIC Educational Resources Information Center

    Molina-Plaza, Silvia; de Gregorio-Godeo, Eduardo

    2010-01-01

    Within the context of on-going research, this paper explores the pedagogical implications of contrastive analyses of multiword units in English and Spanish based on electronic corpora as a CALL resource. The main tenets of collocations from a contrastive perspective--and the points of contact and departure between both languages--are discussed…

  13. A Corpus-Driven Investigation of Chinese English Learners' Performance of Verb-Noun Collocation: A Case Study of "Ability"

    ERIC Educational Resources Information Center

    Xia, Lixin

    2013-01-01

    The paper presents a contrastive study of the verb-noun collocations produced by Chinese EFL learners, based on the CLEC, ICLE and BNC. First, all the concordance lines with the token "ability" in the CLEC were collected and analyzed. Then, they were tagged manually in order to sort out the sentences in the verb-noun collocation…

  14. A Computational Approach to Detecting Collocation Errors in the Writing of Non-Native Speakers of English

    ERIC Educational Resources Information Center

    Futagi, Yoko; Deane, Paul; Chodorow, Martin; Tetreault, Joel

    2008-01-01

    This paper describes the first prototype of an automated tool for detecting collocation errors in texts written by non-native speakers of English. Candidate strings are extracted by pattern matching over POS-tagged text. Since learner texts often contain spelling and morphological errors, the tool attempts to automatically correct them in order to…

  15. The Effect of Form versus Meaning-Focused Tasks on the Development of Collocations among Iranian Intermediate EFL Learners

    ERIC Educational Resources Information Center

    Pishghadam, Reza; Khodadady, Ebrahim; Rad, Naeemeh Daliry

    2011-01-01

    This study attempts to comprehensively investigate the effect of form-focused versus meaning-focused tasks on the development of collocations among Iranian intermediate EFL learners. To this end, 65 students from high schools in Mashhad, Iran, were selected as the participants. A general language proficiency test, the Nelson test (book 2, Intermediate 200A), was used…

  16. A meshless method using radial basis functions for numerical solution of the two-dimensional KdV-Burgers equation

    NASA Astrophysics Data System (ADS)

    Zabihi, F.; Saffarian, M.

    2016-07-01

    The aim of this article is to obtain the numerical solution of the two-dimensional KdV-Burgers equation. We construct the solution using an approach based on collocation points. The solution uses thin plate spline radial basis functions, which build an approximate solution by discretizing time and space into small steps. We use a predictor-corrector scheme to avoid solving the nonlinear system. The results of numerical experiments are compared with analytical solutions to confirm the accuracy and efficiency of the presented scheme.
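
    The spatial ingredient of such a scheme, thin plate spline RBF collocation, can be sketched as below for a scalar field; a full KdV-Burgers time stepper would reuse the same matrices within the predictor-corrector loop, which is omitted here. The points and test function are invented, and the polynomial augmentation usually added to TPS systems is left out for brevity.

        import numpy as np

        def tps(r):
            # thin plate spline phi(r) = r^2 log r, with phi(0) = 0
            out = np.zeros_like(r)
            nz = r > 0
            out[nz] = r[nz] ** 2 * np.log(r[nz])
            return out

        rng = np.random.default_rng(6)
        pts = rng.uniform(0, 1, (60, 2))                  # collocation points
        f = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])

        r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        c = np.linalg.solve(tps(r), f)                    # RBF coefficients

        xq = np.array([[0.3, 0.4]])                       # query point
        rq = np.linalg.norm(xq[:, None, :] - pts[None, :, :], axis=-1)
        print("interpolant:", tps(rq) @ c,
              "truth:", np.sin(np.pi * 0.3) * np.cos(np.pi * 0.4))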

  17. Absorption of Solar Radiation by the Cloudy Atmosphere Interpretations of Collocated Aircraft Measurements

    NASA Technical Reports Server (NTRS)

    Valero, Francisco P. J.; Cess, Robert D.; Zhang, Minghua; Pope, Shelly K.; Bucholtz, Anthony; Bush, Brett; Vitko, John, Jr.

    1997-01-01

    As part of the Atmospheric Radiation Measurement (ARM) Enhanced Shortwave Experiment (ARESE), we have obtained and analyzed measurements made from collocated aircraft of the absorption of solar radiation within the atmospheric column between the two aircraft. The measurements were taken during October 1995 at the ARM site in Oklahoma. Relative to a theoretical radiative transfer model, we find no evidence for excess solar absorption in the clear atmosphere and significant evidence for its existence in the cloudy atmosphere. This excess cloud solar absorption appears to occur in both visible (0.224-0.68 microns) and near-infrared (0.68-3.30 microns) spectral regions, although not at 0.5 microns for the visible contribution, and it is shown to be true absorption rather than an artifact of sampling errors caused by measuring three-dimensional clouds.

  18. An explicit solution to the optimal LQG problem for flexible structures with collocated rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1993-01-01

    We present a class of compensators in explicit form (not requiring numerical computer calculations) for stabilizing flexible structures with collocated rate sensors. They are based on the explicit solution, valid for both continuum and FEM models, of the LQG problem for minimizing mean square rate. They are robust with respect to system stability (they will not destabilize modes even with parameter mismatch), can be instrumented in state space form suitable for digital controllers, and can be specified directly from the structure modes and mode 'signatures' (displacement vectors at sensor locations). Some simulation results are presented for the NASA LaRC Phase-Zero Evolutionary Model - a modal truss model with 86 modes - showing the damping ratios attainable as a function of compensator design parameters and complexity.

  19. Tropospheric refractivity and zenith path delays from least-squares collocation of meteorological and GNSS data

    NASA Astrophysics Data System (ADS)

    Wilgan, Karina; Hurter, Fabian; Geiger, Alain; Rohm, Witold; Bosy, Jarosław

    2016-08-01

    Precise positioning requires an accurate a priori troposphere model to enhance the solution quality. Several empirical models are available, but they may not properly characterize the state of troposphere, especially in severe weather conditions. Another possible solution is to use regional troposphere models based on real-time or near-real time measurements. In this study, we present the total refractivity and zenith total delay (ZTD) models based on a numerical weather prediction (NWP) model, Global Navigation Satellite System (GNSS) data and ground-based meteorological observations. We reconstruct the total refractivity profiles over the western part of Switzerland and the total refractivity profiles as well as ZTDs over Poland using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zürich. In these two case studies, profiles of the total refractivity and ZTDs are calculated from different data sets. For Switzerland, the data set with the best agreement with the reference radiosonde (RS) measurements is the combination of ground-based meteorological observations and GNSS ZTDs. Introducing the horizontal gradients does not improve the vertical interpolation, and results in slightly larger biases and standard deviations. For Poland, the data set based on meteorological parameters from the NWP Weather Research and Forecasting (WRF) model and from a combination of the NWP model and GNSS ZTDs shows the best agreement with the reference RS data. In terms of ZTD, the combined NWP-GNSS observations and GNSS-only data set exhibit the best accuracy with an average bias (from all stations) of 3.7 mm and average standard deviations of 17.0 mm w.r.t. the reference GNSS stations.
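
    A schematic least-squares collocation step of the kind COMEDIE-type software performs is sketched below: the signal at new points is estimated as s_hat = C_st (C_tt + C_nn)^(-1) (l − mean), here with a Gaussian covariance model. The covariance parameters, station positions, and ZTD values are invented for illustration.

        import numpy as np

        def cov(a, b, var=25.0, corr_len=50.0):       # mm^2, km (assumed model)
            d = np.abs(a[:, None] - b[None, :])
            return var * np.exp(-(d / corr_len) ** 2)

        x_obs = np.array([0.0, 40.0, 90.0, 160.0])        # station positions (km)
        ztd = np.array([2431.0, 2427.0, 2415.0, 2402.0])  # observed ZTD (mm)
        noise = 3.0 ** 2 * np.eye(x_obs.size)             # measurement noise (mm^2)

        x_new = np.linspace(0.0, 160.0, 9)                # prediction points
        mean = ztd.mean()
        s_hat = mean + cov(x_new, x_obs) @ np.linalg.solve(
            cov(x_obs, x_obs) + noise, ztd - mean)        # collocated ZTD estimate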

  20. A Study on the pH and conductivity of rural rainfall employing two collocated samplers

    NASA Astrophysics Data System (ADS)

    Sequeira, R.; Lai, C. C.; Peart, M. R.

    1999-02-01

    A set of about 100 daily rainfall samples was collected over a period of about one year during 1995-1996 using two collocated, automated samplers placed ˜4 m apart at the rural Kadoorie Agricultural Research Centre (KARC) in Hong Kong. The pH and conductivity of the rainwater were measured immediately after sample collection. There is a strong correlation between the two free hydrogen ion concentrations (R2 ≈ 0.92) and an even stronger one between the conductivities (R2 ≈ 0.99). Statistically, there is no difference at the 0.05 level of significance between the means of either the two free hydrogen ion concentrations or the two conductivities. The conductivity results suggest that the total dissolved solids in the two samplers are probably quite similar in magnitude. No relationship is observed between the free acid content and daily rainfall volume in either sampler, a result similar to that obtained in previous studies involving bulk fall at the KARC and wet fall in urban Hong Kong as a whole. A weak hyperbolic relationship exists between rainfall volume and conductivity, and their log-log plot indicates only a somewhat weak inverse linear relationship, with correlation coefficients of -0.60 and -0.61 for the two samplers considered individually. Finally, the unbiased estimates of the product of rainfall volume and conductivity for the collocated samples suggest that the microscale variability (≳4 m) of the mean wet mass flux of total dissolved material in rural Hong Kong rainfall is negligible.