NASA Astrophysics Data System (ADS)
Mohammadi, Reza
2015-05-01
The aim of the present paper is to present a numerical algorithm for the time-dependent generalized regularized long wave (GRLW) equation with boundary conditions. We semi-discretize the continuous problem by means of the Crank-Nicolson finite difference method in the temporal direction and an exponential B-spline collocation method in the spatial direction. The method is shown to be unconditionally stable, and its order of convergence is established. Our scheme leads to a tri-diagonal nonlinear system. This new method has lower computational cost than the Sinc-collocation method. Finally, numerical examples demonstrate the stability and accuracy of the method.
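The GRLW equation itself is nonlinear, but the tridiagonal structure mentioned above is already visible in a linear model problem. Below is a minimal illustrative Python sketch of Crank-Nicolson time stepping on the linear heat equation u_t = u_xx (a simplified stand-in, not the paper's exponential B-spline scheme), showing the banded implicit system solved at each step.

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on [0, 1] with zero Dirichlet boundaries.
# Illustrative stand-in only: the paper's scheme couples Crank-Nicolson in
# time with exponential B-spline collocation in space for the nonlinear GRLW.
N, dt, T = 50, 1e-3, 0.1
x = np.linspace(0.0, 1.0, N + 2)            # grid including the two boundaries
dx = x[1] - x[0]
u = np.sin(np.pi * x[1:-1])                 # interior initial data

off = np.ones(N - 1)
L = (np.diag(np.full(N, -2.0)) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
A = np.eye(N) - 0.5 * dt * L                # implicit (tridiagonal) side
B = np.eye(N) + 0.5 * dt * L                # explicit side

for _ in range(int(round(T / dt))):
    u = np.linalg.solve(A, B @ u)           # one banded solve per step

exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x[1:-1])
err = np.max(np.abs(u - exact))             # second order in dx and dt
```

In the paper's method the spatial operator would instead come from exponential B-spline collocation, but the per-step tridiagonal solve has the same shape.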
The application of cubic B-spline collocation method in impact force identification
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Chen, Xuefeng; Xue, Xiaofeng; Luo, Xinjie; Liu, Ruonan
2015-12-01
The accurate real-time characterization of an impact event is vital during the lifetime of a mechanical product. However, the identified impact force may diverge seriously from the real one due to unknown noise contaminating the measured data, as well as the ill-conditioned system matrix. In this paper, a regularized cubic B-spline collocation (CBSC) method is developed for identifying the impact force time history, which overcomes the deficiency of the ill-posed problem. The cubic B-spline function, by controlling the mesh size of the collocation points, has the profile of a typical impact event. The unknown impact force is approximated by a set of translated cubic B-spline functions, and the original governing equation of force identification is thereby reduced to finding the coefficient of the basis function at each collocation point. Moreover, a modified regularization parameter selection criterion, derived from the generalized cross-validation (GCV) criterion for the truncated singular value decomposition (TSVD), is introduced for the CBSC method to determine the optimal number of cubic B-spline functions. In a numerical simulation of a two-degrees-of-freedom (DOF) system, the regularized CBSC method is validated under different noise levels and frequency bands of exciting forces. Finally, an impact experiment is performed on a clamped-free shell structure to confirm the performance of the regularized CBSC method. Experimental results demonstrate that the peak relative errors of impact forces based on the regularized CBSC method are below 8%, while those based on the TSVD method are approximately 30%.
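The TSVD baseline that the experiments compare against can be sketched in a few lines. The following is an illustrative Python example, with a Hilbert matrix standing in for the ill-conditioned system matrix (an assumption made purely for demonstration; the paper's matrix comes from the impact dynamics):

```python
import numpy as np
from scipy.linalg import hilbert

# Truncated SVD (TSVD) regularization of an ill-posed system A f = b.
# The Hilbert matrix is only a stand-in for the ill-conditioned
# force-identification matrix of the paper.
rng = np.random.default_rng(0)
n = 20
A = hilbert(n)
f_true = np.ones(n)
b = A @ f_true + 1e-6 * rng.standard_normal(n)   # noisy "measurement"

U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-5))                   # keep components above the noise floor
f_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
f_naive = np.linalg.solve(A, b)             # naive solve amplifies the noise

err_tsvd = np.linalg.norm(f_tsvd - f_true)
err_naive = np.linalg.norm(f_naive - f_true)
```

Choosing the truncation index k is exactly the parameter-selection problem the paper's modified GCV criterion addresses; the fixed threshold above is a crude placeholder.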
NASA Astrophysics Data System (ADS)
Li, Xinxiu
2012-10-01
Physical processes with memory and hereditary properties are best described by fractional differential equations, owing to the memory effect of fractional derivatives. For that reason, reliable and efficient techniques for the solution of fractional differential equations are needed. Our aim is to generalize the wavelet collocation method to fractional differential equations using cubic B-spline wavelets. Analytical expressions for the fractional derivatives, in the Caputo sense, of cubic B-spline functions are presented. The main characteristic of the approach is that it converts such problems into a system of algebraic equations, which is suitable for computer programming. It not only simplifies the problem but also speeds up the computation. Numerical results demonstrate the validity and applicability of the method for solving fractional differential equations.
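As a numerical cross-check of such analytical fractional derivatives, one can use the Grünwald-Letnikov series, a finite-difference analogue that agrees with the Caputo derivative for functions vanishing at the origin. A hedged Python sketch (not the paper's B-spline formulas):

```python
import numpy as np
from math import gamma

# Grünwald-Letnikov approximation of the order-alpha derivative of f(t) = t,
# which for f(0) = 0 agrees with both the Riemann-Liouville and Caputo
# derivatives:  D^alpha t = t^(1-alpha) / Gamma(2 - alpha).
# (A finite-difference cross-check; the paper derives closed B-spline forms.)
alpha, t, h = 0.5, 1.0, 1e-3
n = int(round(t / h))

g = np.empty(n + 1)                         # weights g_k = (-1)^k C(alpha, k)
g[0] = 1.0
for k in range(1, n + 1):
    g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)

vals = t - h * np.arange(n + 1)             # f evaluated at t - k h  (f(s) = s)
approx = h**(-alpha) * np.sum(g * vals)
exact = t**(1.0 - alpha) / gamma(2.0 - alpha)
err = abs(approx - exact)                   # first-order accurate in h
```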
NASA Astrophysics Data System (ADS)
Fernandes, Ryan I.; Fairweather, Graeme
2012-08-01
An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.
Quartic B-spline collocation method applied to Korteweg de Vries equation
NASA Astrophysics Data System (ADS)
Zin, Shazalina Mat; Majid, Ahmad Abd; Ismail, Ahmad Izani Md
2014-07-01
The Korteweg-de Vries (KdV) equation is known as a mathematical model of shallow water waves. The general form of this equation is u_t + εuu_x + μu_xxx = 0, where u(x,t) describes the elongation of the wave at displacement x and time t. In this work, the one-soliton solution of the KdV equation has been obtained numerically using the quartic B-spline collocation method in the displacement x and a finite difference approach in the time t. Two test problems have been identified and solved. Approximate solutions and errors for these two problems were obtained for different values of t. To assess the accuracy of the method, the L2-norm and L∞-norm have been calculated. The mass, energy and momentum of the KdV equation have also been computed. The results show that the present method approximates the solution very well, but as time increases, the L2-norm and L∞-norm also increase.
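The conserved-quantity checks mentioned above reduce to numerical quadrature of the solution profile. As a hedged illustration, the "mass" of a sech² soliton profile can be compared against its closed form; the amplitude and width below are arbitrary choices, not the paper's test values:

```python
import numpy as np

# "Mass" invariant of a sech^2 profile, computed with the trapezoidal rule
# and compared against the closed form  ∫ A sech^2(k x) dx = 2 A / k.
amp, k = 3.0, 1.0
x = np.linspace(-20.0, 20.0, 4001)          # wide enough that the tails vanish
u = amp / np.cosh(k * x)**2

mass = np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(x))   # trapezoidal rule
exact_mass = 2.0 * amp / k
```

The momentum and energy invariants are integrals of u² and of a combination of u³ and u_x², and are checked the same way.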
Toronto, University of
Spline Collocation on Adaptive Grids and Non-Rectangular Domains. KitSun Ng. Doctor of Philosophy thesis, Graduate Department of Computer Science, University of Toronto, 2005.
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at tn is known and that at t(sub n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t(sub n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
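The equivalence with implicit Runge-Kutta methods can be seen concretely: a single step of the 2-stage Radau IIA collocation method on the linear test problem y' = -y reproduces exp(-h) to the method's order. A small Python sketch:

```python
import numpy as np

# One step of the 2-stage (order 3) Radau IIA collocation method on y' = -y.
# For a linear problem the stage equations  g = y0 + h*lam*A@g  are a single
# linear solve.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])                # Radau IIA Butcher matrix
b = np.array([3/4, 1/4])                    # quadrature weights
lam, h, y0 = -1.0, 0.1, 1.0

g = np.linalg.solve(np.eye(2) - h * lam * A, y0 * np.ones(2))
y1 = y0 + h * lam * (b @ g)

err = abs(y1 - np.exp(lam * h))             # O(h^4) local error
```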
Aerodynamic influence coefficient method using singularity splines.
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Weber, J. A.; Lesferd, E. P.
1973-01-01
A new numerical formulation, with computed results, is presented. This formulation combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all of the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise (hence the term 'spline'). Boundary conditions are satisfied in a least-squares sense over the surface using a finite summing technique to approximate the integral.
Numerical Method Using Cubic B-Spline for a Strongly Coupled Reaction-Diffusion System
Abbas, Muhammad; Majid, Ahmad Abd.; Md. Ismail, Ahmad Izani; Rashid, Abdur
2014-01-01
In this paper, a numerical method for the solution of a strongly coupled reaction-diffusion system, with suitable initial and Neumann boundary conditions, using a cubic B-spline collocation scheme on a uniform grid is presented. The scheme is based on the usual finite difference scheme to discretize the time derivative, while the cubic B-spline is used as an interpolation function in the space dimension. The scheme is shown to be unconditionally stable using the von Neumann method. The accuracy of the proposed scheme is demonstrated by applying it to a test problem. The performance of the scheme is assessed by computing the L2 and L∞ error norms at different time levels. The numerical results are found to be in good agreement with known exact solutions. PMID:24427270
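A von Neumann analysis of the kind cited here checks that the amplification factor of every Fourier mode stays at or below one in modulus. As an illustration (for Crank-Nicolson applied to the scalar heat equation, not the paper's coupled system):

```python
import numpy as np

# Von Neumann amplification factor of Crank-Nicolson for u_t = D u_xx:
#   g(theta) = (1 - 2 r sin^2(theta/2)) / (1 + 2 r sin^2(theta/2)),
# with r = D dt / dx^2.  |g| <= 1 for every r: unconditional stability.
theta = np.linspace(0.0, np.pi, 1001)
max_amp = []
for r in (0.1, 1.0, 10.0, 1000.0):
    s2 = np.sin(theta / 2.0)**2
    g = (1.0 - 2.0 * r * s2) / (1.0 + 2.0 * r * s2)
    max_amp.append(np.max(np.abs(g)))
```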
Aerodynamic influence coefficient method using singularity splines
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Weber, J. A.; Lesferd, E. P.
1974-01-01
A numerical lifting surface formulation, including computed results for planar wing cases, is presented. This formulation, referred to as the vortex spline scheme, combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise. Boundary conditions are satisfied in a least-squares sense over the surface using a finite summing technique to approximate the integral. The current formulation uses the elementary horseshoe vortex as the basic singularity and is therefore restricted to linearized potential flow. As part of the study, a nonplanar development was considered, but the numerical evaluation of the lifting surface concept was restricted to planar configurations. Also, a second-order sideslip analysis based on an asymptotic expansion was investigated using the singularity spline formulation.
Collocation Method for Numerical Solution of Coupled Nonlinear Schroedinger Equation
Ismail, M. S.
2010-09-30
The coupled nonlinear Schroedinger equation models several interesting physical phenomena and serves as a model equation for optical fibers with linear birefringence. In this paper we use the collocation method to solve this equation, and we test the method for stability and accuracy. Numerical tests using a single soliton and the interaction of three solitons are used to validate the resulting scheme.
Comparison of Implicit Collocation Methods for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)
2001-01-01
We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first is based on an explicit computation of the coefficients of polynomials, and the second relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.
Pseudospectral collocation methods for fourth order differential equations
NASA Technical Reports Server (NTRS)
Malek, Alaeddin; Phillips, Timothy N.
1994-01-01
Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
The chain collocation method: A spectrally accurate calculus of forms
NASA Astrophysics Data System (ADS)
Rufat, Dzhelil; Mason, Gemma; Mullen, Patrick; Desbrun, Mathieu
2014-01-01
Preserving in the discrete realm the underlying geometric, topological, and algebraic structures at stake in partial differential equations has proven to be a fruitful guiding principle for numerical methods in a variety of fields such as elasticity, electromagnetism, or fluid mechanics. However, structure-preserving methods have traditionally used spaces of piecewise polynomial basis functions for differential forms. Yet, in many problems where solutions are smoothly varying in space, a spectral numerical treatment is called for. In an effort to provide structure-preserving numerical tools with spectral accuracy on logically rectangular grids over periodic or bounded domains, we present a spectral extension of the discrete exterior calculus (DEC), with resulting computational tools extending well-known collocation-based spectral methods. Its efficient implementation using fast Fourier transforms is provided as well.
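The collocation-based spectral machinery extended here rests on the FFT: on a periodic grid, differentiation becomes multiplication by ik in Fourier space. A minimal sketch:

```python
import numpy as np

# Spectral differentiation on a periodic grid: multiply by i*k in Fourier
# space.  For a band-limited function the error is at machine precision.
N = 64
x = 2.0 * np.pi * np.arange(N) / N
u = np.sin(x)
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers 0..31, -32..-1
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

err = np.max(np.abs(du - np.cos(x)))        # spectral accuracy
```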
NASA Technical Reports Server (NTRS)
Zhang, Zhimin; Tomlinson, John; Martin, Clyde
1994-01-01
In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including the polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate the properties of different spline approximations.
ADAPTIVE ADER METHODS USING KERNEL-BASED POLYHARMONIC SPLINE WENO RECONSTRUCTION
Aboiyar, Terhemen; Iske, Armin
An adaptive ADER method on unstructured meshes is proposed. The method combines high-order polyharmonic spline WENO reconstruction with high-order time stepping, yielding a WENO reconstruction that is stable, flexible and optimal in the associated Sobolev space.
Three dimensional distorted black holes: using the Galerkin-Collocation method
H. P. de Oliveira; E. L. Rodrigues
2015-02-20
We present an implementation of the Galerkin-Collocation method to determine the initial data for non-rotating distorted three-dimensional black holes in the inversion and puncture schemes. The numerical method combines the key features of the Galerkin and Collocation methods, which produces accurate initial data. We evaluate the ADM mass of the initial data sets, and we provide the angular structure of the gravitational wave distribution at the initial hypersurface by evaluating the scalar $\Psi_4$ for asymptotic observers.
A multidomain spectral collocation method for the Stokes problem
NASA Technical Reports Server (NTRS)
Landriani, G. Sacchi; Vandeven, H.
1989-01-01
A multidomain spectral collocation scheme is proposed for the approximation of the two-dimensional Stokes problem. It is shown that the discrete velocity vector field is exactly divergence-free and we prove error estimates both for the velocity and the pressure.
Efficient mesh selection for collocation methods applied to singular BVPs
Weinmüller, Ewa B.
The method is implemented in our MATLAB code sbvp, which is based on polynomial collocation. We prove that, under realistic assumptions, the method converges despite the singular point t = 0, where a straightforward evaluation of the right-hand side of (1) is not possible. Singular boundary value problems of this type arise, for example, in physics [14,27], mechanics [10,15] and ecology [17,19]; this motivated the design of an efficient mesh selection strategy.
Liu, Yi-Xin; Zhang, Hong-Dong
2014-06-14
We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in temporal domain and spectral accuracy in spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique in a conventional approach. The performance of ETDRK4 is examined in comparison with the operator splitting methods with either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces. PMID:24929368
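The appeal of exponential time differencing is that the stiff linear part is integrated exactly. A first-order ETD sketch (far simpler than the fourth-order ETDRK4 used in the paper) makes this visible on a scalar model problem, where the scheme is exact for a constant forcing term:

```python
import numpy as np

# First-order exponential time differencing (ETD1) for u' = c*u + F:
#   u_{n+1} = e^{c h} u_n + (e^{c h} - 1)/c * F.
# With constant F the scheme is exact, which illustrates why ETD handles
# stiff linear terms so well.  (The paper uses fourth-order ETDRK4.)
c, F, h, u0 = -50.0, 2.0, 0.1, 1.0          # stiff decay plus constant forcing
steps = 20
u = u0
for _ in range(steps):
    u = np.exp(c * h) * u + (np.expm1(c * h) / c) * F

t = steps * h
exact = (u0 + F / c) * np.exp(c * t) - F / c
err = abs(u - exact)                        # roundoff only
```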
Webster, Clayton G; Tran, Hoang A; Trenchea, Catalin S
2013-01-01
In this paper we show how the stochastic collocation method (SCM) can fail to converge for nonlinear differential equations with random coefficients. First, we consider the Navier-Stokes equation with uncertain viscosity and derive error estimates for the stochastic collocation discretization. Our analysis gives some indicators of how the nonlinearity negatively affects the accuracy of the method. The stochastic collocation method is then applied to a noisy Lorenz system. Simulation results demonstrate that the solution of a nonlinear equation can depend highly irregularly on the random data, and in such cases the stochastic collocation method cannot capture the correct solution.
GRADUAL GENERALIZATION OF NAUTICAL CHART CONTOURS WITH A B-SPLINE SNAKE METHOD
New Hampshire, University of
A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Ocean Engineering.
NOKIN1D: one-dimensional neutron kinetics based on a nodal collocation method
NASA Astrophysics Data System (ADS)
Verdú, G.; Ginestar, D.; Miró, R.; Jambrina, A.; Barrachina, T.; Soler, Amparo; Concejal, Alberto
2014-06-01
The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). The advantage compared with a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method developed for the neutron diffusion equation, applied here to the kinetics resolution of the TRAC-BF1 thermal-hydraulics, is an adaptation of traditional collocation methods for the discretization of partial differential equations, based on expanding the solution as a linear combination of analytical functions. We have chosen a nodal collocation method based on a Legendre polynomial expansion of the neutron fluxes in each cell. The qualification is carried out by analyzing the turbine trip transient from the NEA benchmark at the Peach Bottom NPP, using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method.
Solar Convection Simulations using a B-spline method
Thomas Hartlep; Nagi N. Mansour
2008-05-05
This report outlines the development of a B-spline spectral numerical code for the simulation of convection flows. It allows changing the spatial resolution in all three coordinates as a function of depth, which is especially advantageous for simulations of solar convection.
Collocation methods for the solution of von-Karman dynamic non-linear plate systems
Yosibash, Zohar
Collocation methods are presented for the solution of the von Karman dynamic nonlinear plate system (the coupled bending and in-plane equations). The von Karman plate model approximates the elasticity system over a three-dimensional plate.
Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Halem, Milton (Technical Monitor)
2000-01-01
We combine a high-order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
Chen, E Q; Lam, C F
1994-05-01
In single photon emission computed tomography (SPECT), Compton scattered photons degrade image contrast and cause erroneous regional activity quantification. A predictor-corrector and cubic spline (PCCS) method for the compensation of Compton scatter in SPECT is proposed. Using spectral information recorded at four energy windows, the PCCS method estimates scatter counts at each window and constructs the scatter spectrum with cubic spline interpolation. We have shown in simulated noise-free situations that this method provides accurate estimation of scatter fractions. A scatter correction employing PCCS method can be implemented on many existing SPECT systems without hardware modification and complicated calibration. PMID:7924268
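The interpolation step of such a scheme is standard cubic spline fitting. A hedged sketch using SciPy, with a Gaussian profile standing in for a measured scatter spectrum (the energies and counts are purely illustrative, not the paper's data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Cubic spline reconstruction of a smooth "spectrum" from coarse samples.
# The Gaussian profile and the energy grid are illustrative assumptions.
energy = np.linspace(80.0, 160.0, 9)        # coarse sample points ("windows")
counts = np.exp(-0.5 * ((energy - 120.0) / 15.0)**2)

spline = CubicSpline(energy, counts)
fine = np.linspace(80.0, 160.0, 401)
truth = np.exp(-0.5 * ((fine - 120.0) / 15.0)**2)

err = np.max(np.abs(spline(fine) - truth))  # O(h^4) interpolation error
```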
Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At each time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set up for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here only to spectral collocation approximations, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.
Fractional spectral collocation methods for linear and nonlinear variable order FPDEs
NASA Astrophysics Data System (ADS)
Zayernouri, Mohsen; Karniadakis, George Em
2015-07-01
While several high-order methods have been developed for fractional PDEs (FPDEs) of fixed order, there are no such methods for FPDEs with field-variable order. These equations enable seamless multiphysics simulations, e.g. from diffusion to sub-diffusion or from wave dynamics transitioning to diffusion, by simply varying the fractional order as a function of space or time. We develop an exponentially accurate fractional spectral collocation method for solving linear/nonlinear FPDEs with field-variable order. Following the spectral theory developed in [1] for fractional Sturm-Liouville eigenproblems, we introduce a new family of interpolants, called left-/right-sided and central fractional Lagrange interpolants. We employ fractional derivatives of (left-/right-sided) Riemann-Liouville and Riesz type and obtain the corresponding fractional differentiation matrices by collocating the field-variable fractional orders. We solve several FPDEs, including the time- and space-fractional advection equation, the time- and space-fractional advection-diffusion equation, and finally the space-fractional Burgers' equation, to demonstrate the performance of the method. In addition, we develop a spectral penalty method for enforcing inhomogeneous initial conditions. Our numerical results confirm the exponential-like convergence of the proposed fractional collocation methods.
A novel stochastic collocation method for uncertainty propagation in complex mechanical systems
NASA Astrophysics Data System (ADS)
Qi, WuChao; Tian, SuMei; Qiu, ZhiPing
2015-02-01
This paper presents a novel stochastic collocation method based on the equivalent weak form of a multivariate function integral to quantify and manage uncertainties in complex mechanical systems. The proposed method, which combines the advantages of the response surface method and the traditional stochastic collocation method, sets integral points only at the guide lines of the response surface. The statistics of an engineering problem with many uncertain parameters are then transformed into a linear combination of the statistics of simple functions. Furthermore, a simple method for determining the weight-factor sets is discussed in detail, and the weight-factor sets of two commonly used probability distribution types are given in table form. Studies of the computational accuracy and effort show that a good balance between the two is achieved. It should be noted that this is a non-gradient, non-intrusive algorithm with strong portability. To validate the procedure, three numerical examples, concerning a mathematical function with an analytical expression, the structural design of a straight wing, and the flutter analysis of a composite wing, are used to show the effectiveness of the guided stochastic collocation method.
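In its simplest one-dimensional form, stochastic collocation evaluates the model at quadrature nodes and combines the results with quadrature weights. The following Gauss-Hermite sketch illustrates only that basic idea; it is not the paper's guided, response-surface-based point placement:

```python
import numpy as np

# 1-D stochastic collocation as Gauss-Hermite quadrature: the mean of f(xi)
# for xi ~ N(0, 1) is a weighted sum of f at the collocation nodes.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
weights = weights / np.sqrt(2.0 * np.pi)    # normalise to a probability measure

f = lambda xi: xi**2 + 3.0 * xi + 1.0       # exact for polynomials up to deg 15
mean = np.sum(weights * f(nodes))           # E[f] = 1 + 0 + 1 = 2
```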
Global collocation methods for approximation and the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Solomonoff, A.; Turkel, E.
1986-01-01
Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solutions of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of the collocation points is constructed. The approximate derivative is then found by a matrix times vector multiply. The effects of several factors on the performance of these methods including the effect of different collocation points are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative are also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
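A standard instance of such a derivative matrix is the Chebyshev differentiation matrix (Trefethen's construction), which turns differentiation into a matrix-vector multiply and is exact for polynomials up to the grid degree:

```python
import numpy as np

# Chebyshev differentiation matrix: the approximate derivative is D @ f,
# exact for polynomials up to degree N on the Chebyshev points.
def cheb(N):
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                         # negative-sum trick
    return D, x

D, x = cheb(8)
err = np.max(np.abs(D @ x**3 - 3.0 * x**2))             # exact up to roundoff
```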
Inkpen, Diana
Methods surveyed include selection of collocations by frequency and selection of collocations based on mean and variance. The mean-and-variance approach compares observed and expected means, scaled by the variance of the data, and indicates how likely one is to get such a sample. If n words do not form a collocation, the variance (sample deviation) of their offsets will be high.
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation
NASA Technical Reports Server (NTRS)
Jones, Brandon A.; Anderson, Rodney L.
2012-01-01
Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
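The benefit symplectic methods bring to long-term orbit propagation shows up already for the harmonic oscillator: symplectic Euler keeps the energy error bounded, while explicit Euler's energy grows without bound. A minimal sketch:

```python
# Harmonic oscillator H = (p^2 + q^2)/2: compare the energy drift of
# symplectic Euler against explicit Euler over many steps.
h, steps = 0.1, 1000
q_s, p_s = 1.0, 0.0                         # symplectic Euler state
q_e, p_e = 1.0, 0.0                         # explicit Euler state
drift_s = drift_e = 0.0
for _ in range(steps):
    p_s -= h * q_s                          # kick with current q ...
    q_s += h * p_s                          # ... then drift with updated p
    q_e, p_e = q_e + h * p_e, p_e - h * q_e
    drift_s = max(drift_s, abs(0.5 * (p_s**2 + q_s**2) - 0.5))
    drift_e = max(drift_e, abs(0.5 * (p_e**2 + q_e**2) - 0.5))
```

The symplectic map preserves a nearby "modified" energy exactly, which is why its drift stays bounded at O(h) instead of growing.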
Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules
1999-01-01
In this paper we combine finite difference approximations (for spatial derivatives) and collocation techniques (for the time component) to numerically solve the two dimensional heat equation. We employ respectively a second-order and a fourth-order schemes for the spatial derivatives and the discretization method gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers, show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
Legendre spectral-collocation method for solving some types of fractional optimal control problems.
Sweilam, Nasser H; Al-Ajami, Tamer M
2015-05-01
In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh-Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937
NASA Astrophysics Data System (ADS)
Tirani, M. Dadkhah; Sohrabi, F.; Almasieh, H.; Kajani, M. Tavassoli
2015-10-01
In this paper, a collocation method based on Taylor polynomials is developed for solving systems of linear differential-difference equations with variable coefficients defined on large intervals. By using Taylor polynomials and their properties to obtain operational matrices, the solution of the differential-difference equation system with the given conditions is reduced to the solution of a system of linear algebraic equations. We first divide the large interval into M equal subintervals, and Taylor polynomial solutions are then obtained in each subinterval separately. Some numerical examples are given, and the results are compared with analytical solutions and other techniques in the literature to demonstrate the validity and applicability of the proposed method.
Numerical Algorithm Based on Haar-Sinc Collocation Method for Solving the Hyperbolic PDEs
Javadi, H. H. S.; Navidi, H. R.
2014-01-01
The present study investigates the Haar-Sinc collocation method for the solution of hyperbolic telegraph partial differential equations. The advantages of this technique are that not only is the convergence rate of the Sinc approximation exponential, but the computational speed is also high due to the use of the Haar operational matrices. The technique converts the problem to a system of linear algebraic equations by expanding the required approximation in terms of Sinc functions in space and Haar functions in time with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, we presented four examples through which our claim was confirmed. PMID:25485295
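The exponential convergence of Sinc approximation mentioned above can be illustrated with a small stand-alone sketch (a toy unrelated to the Haar operational matrices; the function and parameters are illustrative): truncated sinc interpolation of a smooth, rapidly decaying function on a uniform grid.

```python
import numpy as np

def sinc_approx(f, h, N, x):
    """Truncated sinc (cardinal) expansion: sum_{k=-N}^{N} f(k*h) * S_k(x),
    with S_k(x) = sinc((x - k*h)/h); np.sinc(t) is sin(pi*t)/(pi*t)."""
    nodes = np.arange(-N, N + 1) * h
    return np.sum(f(nodes) * np.sinc((x - nodes) / h))

f = lambda t: np.exp(-t**2)            # smooth and rapidly decaying
err = abs(sinc_approx(f, 0.25, 40, 0.3) - f(0.3))
print(err < 1e-8)                      # error decays exponentially as h shrinks
```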
The assessing method of complete tooth form error based on the spline function
NASA Astrophysics Data System (ADS)
Huang, Fugui; Cui, Changcai; Zhang, Rencheng
2006-11-01
After analyzing the shortcomings of the current method for measuring the tooth form error of involute cylindrical gears, and the sources of that error, a measurement theory and an implementation method for the complete tooth form error of the involute cylindrical gear are proposed. A mathematical model for fitting the actual tooth curve based on cubic spline functions is derived, the determination of the boundary conditions is given, and the feasibility of the measurement and evaluation method for the complete tooth form error is verified by experiment.
Sinc-Chebyshev Collocation Method for a Class of Fractional Diffusion-Wave Equations
Mao, Zhi; Xiao, Aiguo; Yu, Zuguo; Shi, Long
2014-01-01
This paper is devoted to investigating the numerical solution for a class of fractional diffusion-wave equations with a variable coefficient where the fractional derivatives are described in the Caputo sense. The approach is based on the collocation technique where the shifted Chebyshev polynomials in time and the sinc functions in space are utilized, respectively. The problem is reduced to the solution of a system of linear algebraic equations. Through the numerical example, the procedure is tested and the efficiency of the proposed method is confirmed. PMID:24977177
Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method
NASA Technical Reports Server (NTRS)
Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.
1997-01-01
A numerical method based on B-spline polynomials was developed to study incompressible flows in cylindrical geometries. A B-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e., smoothness and boundedness. Because B-splines have compact support, it is also possible to remove B-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic (so-called time-evolving flow) greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that the high flatness levels of the radial component of velocity in the near-wall region are physical; regions of high radial velocity were detected and appear to be related to high-speed streaks in the boundary layer. Budgets of the Reynolds stress transport equations showed close similarity with those of channel flow. However, contrary to channel flow, the log layer of pipe flow is not homogeneous at the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.
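A hedged sketch of the kind of basis involved, using SciPy's `BSpline` rather than anything from the paper: a clamped cubic B-spline basis on [0, 1]. The compact support and partition-of-unity properties checked below are what allow basis functions near the axis to be removed without destroying the approximation elsewhere. The knot layout is an arbitrary illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis on [0, 1] with a clamped (open) knot vector.
k = 3                                       # cubic
breakpoints = np.linspace(0.0, 1.0, 9)
knots = np.concatenate(([0.0] * k, breakpoints, [1.0] * k))
nbasis = len(knots) - k - 1                 # 11 basis functions
x = np.linspace(0.0, 1.0, 101)
# Evaluate each basis function as a spline with a unit coefficient vector.
B = np.column_stack(
    [BSpline(knots, np.eye(nbasis)[i], k)(x) for i in range(nbasis)]
)
print(np.allclose(B.sum(axis=1), 1.0))      # partition of unity on [0, 1]
print(np.count_nonzero(B[50]) <= k + 1)     # compact support: at most k+1 active
```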
NASA Technical Reports Server (NTRS)
Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.
1994-01-01
Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semiconductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method, and a Picard-type iterative scheme.
An iterative finite-element collocation method for parabolic problems using domain decomposition
Curran, M.C.
1992-01-01
Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.
Penalized Splines and Small Area Estimation
Opsomer, Jean
(Slide fragment.) Outline: 1. Introduction; 2. Nonparametric regression using penalized splines; 3. Small area estimation. Penalized-spline smoothing is a nonparametric regression method that can fit very large classes of functions and is adaptive to local features; kernel and local polynomial methods are also available.
Webster, Clayton; Tempone, Raul; Nobile, Fabio
2007-12-01
This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.
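The non-intrusive character of stochastic collocation (each collocation point is one uncoupled deterministic solve) can be sketched in one dimension. This toy uses a plain Gauss-Legendre rule rather than a Smolyak sparse grid, and a scalar ODE with a closed-form solution in place of a finite element solver; all names and parameters are illustrative.

```python
import numpy as np

# Stochastic collocation for u' = -a*u, u(0) = 1, with random coefficient
# a ~ U(0.5, 1.5).  Each node is one deterministic solve (closed form here:
# u(1) = exp(-a)); statistics are quadrature sums over the collocation points.
nodes, weights = np.polynomial.legendre.leggauss(5)   # Gauss-Legendre on [-1, 1]
a = 0.5 * nodes + 1.0                                 # map nodes to [0.5, 1.5]
u1 = np.exp(-a)                                       # "deterministic solves"
mean_u1 = 0.5 * np.dot(weights, u1)                   # E[u(1)], density 1 on [0.5, 1.5]
exact = np.exp(-0.5) - np.exp(-1.5)                   # closed-form expectation
print(abs(mean_u1 - exact) < 1e-8)
```

In higher stochastic dimension, the tensor rule above becomes prohibitively large, which is exactly what the Smolyak-type sparse grid of the paper addresses.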
Chebyshev collocation spectral lattice Boltzmann method for simulation of low-speed flows.
Hejranfar, Kazem; Hajihassanpour, Mahya
2015-01-01
In this study, the Chebyshev collocation spectral lattice Boltzmann method (CCSLBM) is developed and assessed for the computation of low-speed flows. Both steady and unsteady flows are considered here. The discrete Boltzmann equation with the Bhatnagar-Gross-Krook approximation based on the pressure distribution function is considered and the space discretization is performed by the Chebyshev collocation spectral method to achieve a highly accurate flow solver. To provide accurate unsteady solutions, the time integration of the temporal term in the lattice Boltzmann equation is made by the fourth-order Runge-Kutta scheme. To achieve numerical stability and accuracy, physical boundary conditions based on the spectral solution of the governing equations implemented on the boundaries are used. An iterative procedure is applied to provide consistent initial conditions for the distribution function and the pressure field for the simulation of unsteady flows. The main advantage of using the CCSLBM over other high-order accurate lattice Boltzmann method (LBM)-based flow solvers is the decay of the error at exponential rather than at polynomial rates. Note also that the CCSLBM applied does not need any numerical dissipation or filtering for the solution to be stable, leading to highly accurate solutions. Three two-dimensional (2D) test cases are simulated herein that are a regularized cavity, the Taylor vortex problem, and doubly periodic shear layers. The results obtained for these test cases are thoroughly compared with the analytical and available numerical results and show excellent agreement. The computational efficiency of the proposed solution methodology based on the CCSLBM is also examined by comparison with those of the standard streaming-collision (classical) LBM and two finite-difference LBM solvers. 
The study indicates that the CCSLBM provides more accurate and efficient solutions than these LBM solvers in terms of CPU time and memory usage, and that exponential rather than polynomial convergence is achieved. The solution methodology proposed, the CCSLBM, is also extended to three dimensions, and a 3D regularized cavity is simulated; the corresponding results are presented and validated. Indications are that the CCSLBM developed and applied herein is robust, efficient, and accurate for computing 2D and 3D low-speed flows. Note also that the high-accuracy solutions obtained by applying the CCSLBM can be used as benchmark solutions for the assessment of other LBM-based flow solvers. PMID:25679733
The adaptive wavelet collocation method and its application in front simulation
NASA Astrophysics Data System (ADS)
Huang, Wenyu; Wu, Rongsheng; Fang, Juan
2010-05-01
The adaptive wavelet collocation method (AWCM) is a variable grid technology for solving partial differential equations (PDEs) with high singularities. Based on interpolating wavelets, the AWCM adapts the grid so that a higher resolution is automatically attributed to domain regions with high singularities. Accuracy problems with the AWCM have been reported in the literature, and in this paper problems of efficiency with the AWCM are discussed in detail through a simple one-dimensional (1D) nonlinear advection equation whose analytic solution is easily obtained. A simple and efficient implementation of the AWCM is investigated. Through studying the maximum errors at the moment of frontogenesis of the 1D nonlinear advection equation with different initial values and a comparison with the finite difference method (FDM) on a uniform grid, the AWCM shows good potential for modeling the front efficiently. The AWCM is also applied to a two-dimensional (2D) unbalanced frontogenesis model in its first attempt at numerical simulation of a meteorological front. Some important characteristics about the model are revealed by the new scheme.
High-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1975-01-01
The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower-derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. The technique was applied to the Burgers equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.
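The orders quoted above can be checked numerically. The sketch below uses SciPy's `CubicSpline` (plain interpolation with clamped ends, not the authors' collocation reformulation) and exhibits the standard spline behavior that the reformulation improves upon: fourth-order function values but only second-order second derivatives on a uniform mesh.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Convergence of cubic-spline approximations of f = sin on a uniform mesh,
# with clamped (exact end-slope) boundary conditions.
def errors(n):
    x = np.linspace(0.0, np.pi, n)
    cs = CubicSpline(x, np.sin(x), bc_type=((1, 1.0), (1, -1.0)))  # f'(0)=1, f'(pi)=-1
    xf = np.linspace(0.0, np.pi, 1001)
    return (np.max(np.abs(cs(xf) - np.sin(xf))),       # value error
            np.max(np.abs(cs(xf, 2) + np.sin(xf))))    # f'' = -sin

(e0a, e2a), (e0b, e2b) = errors(20), errors(40)
print(np.log2(e0a / e0b), np.log2(e2a / e2b))          # roughly 4 and 2
```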
Spline method for correcting baseplane distortions in two-dimensional NMR spectra
NASA Astrophysics Data System (ADS)
Zolnai, Zsolt; Macura, Slobodan; Markley, John L.
A two-dimensional spline method for correcting baseplane distortions in two-dimensional NMR spectra has been developed. Practical application of the method (selection of the network of reference points, calculation of new values for corrected points, prevention of distortions by diagonal ridges, etc.) is illustrated with two-dimensional cross-relaxation spectra of a small protein (turkey ovomucoid third domain, Mr 6062). Baseplane flattening is particularly important for quantitative analysis of 2D NOE and ROE spectra where baseplane distortion can severely interfere with the measurement of cross-peak volumes; it may also eliminate other artifacts in 2D NMR spectra, such as phase distortion, t1 and t2 ridges, and axial peaks. The procedure significantly improves the visual appearance of 2D spectra and makes them more amenable to automated computer calculation of spectral parameters.
NASA Astrophysics Data System (ADS)
Karkar, Sami; Cochelin, Bruno; Vergez, Christophe
2014-06-01
The high-order purely frequency-based harmonic balance method (HBM) presented by Cochelin and Vergez (2009) [1] and extended by Karkar et al. (2013) [2] now allows one to follow the periodic solutions of regularized non-smooth systems (stiff systems). This paper compares its convergence properties to those of a reference method in applied mathematics: orthogonal collocation with piecewise polynomials. A first test is conducted on a nonlinear smooth two-degree-of-freedom spring-mass system, showing better convergence of the HBM. The second test is conducted on a one-degree-of-freedom vibro-impact system with a very stiff regularization of the impact law. The HBM continuation of the nonlinear mode was found to be very robust, even with a very large number of harmonics. Surprisingly, the HBM was found to have a better convergence than the collocation method for this vibro-impact system. The numerical settings were: absolute threshold on the norm of the residue for the Newton-Raphson corrector ε_NR = 10^-9 (the residue norm is checked at the end of each step, and correction is carried out only if necessary); ANM series threshold used for step-length estimation ε_ANM = 10^-12; ANM series order N_series = 20. The choice of a small correction threshold ensures that the accuracy of a solution is mainly dependent on the accuracy of the discretization method, and not on that of the solver of the quadratic problem. Similarly, the choice of an even smaller ANM threshold ensures that the approximation at the end of each step is accurate enough so that no correction is usually needed at the beginning of the next step. Finally, the choice of the series order is arbitrary and mainly influences the step length.
Butcher, Eric A.
Free Vibration Analysis of Kirchhoff Plates with Damaged Boundaries by the Chebyshev Collocation Method. (Fragment.) Concerns a Chebyshev collocation method for the free vibration analysis of slender Kirchhoff plates with both mixed and damaged boundaries, including the sensitivity of the natural vibration frequencies with respect to the severity of the damaged boundary.
NASA Astrophysics Data System (ADS)
Kajani, M. Tavassoli; Gholampoor, I.
2015-10-01
The purpose of this study is to present a new direct method for the approximate solution, and approximate derivatives up to order k, of kth-order Volterra integro-differential equations with a regular kernel. The method is based on approximation obtained by shifting the original problem onto a sequence of subintervals. A Legendre-Gauss-Lobatto collocation method is proposed for solving the Volterra integro-differential equation. Numerical examples show that the approximate solutions have a good degree of accuracy.
NASA Astrophysics Data System (ADS)
Chen, Shang-Shang; Li, Ben-Wen
2014-12-01
A collocation spectral domain decomposition method (CSDDM) based on the influence matrix technique is developed to solve radiative transfer problems within a participating medium of 2D partitioned domains. In this numerical approach, the spatial domains of interest are decomposed into rectangular sub-domains. The radiative transfer equation (RTE) in each sub-domain is angularly discretized by the discrete ordinates method (DOM) with the SRAPN quadrature scheme and then is solved by the CSDDM directly. Three test geometries that include square enclosure and two enclosures with one baffle and one centered obstruction are used to validate the accuracy of the developed method and their numerical results are compared to the data obtained by other researchers. These comparisons indicate that the CSDDM has a good accuracy for all solutions. Therefore this method can be considered as a useful approach for the solution of radiative heat transfer problems in 2D partitioned domains.
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to polynomial chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
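The collapse to a single one-dimensional summation can be sketched in the simplest possible setting (illustrative only, with Gauss-Hermite points for a Gaussian input; not the report's Riemann-problem setup): the mean and variance of a nonlinear response each become one weighted sum over the collocation nodes.

```python
import numpy as np

# One-dimensional stochastic collocation with Gauss-Hermite points: statistics
# of the response g(Z) = exp(Z), Z ~ N(0, 1), reduce to single weighted sums.
# Lognormal moments give the exact answers: E[g] = e^(1/2), E[g^2] = e^2.
nodes, weights = np.polynomial.hermite_e.hermegauss(20)  # probabilists' Hermite
w = weights / np.sqrt(2.0 * np.pi)                       # normalize to the N(0,1) density
g = np.exp(nodes)                                        # one "deterministic solve" per node
mean = np.dot(w, g)
var = np.dot(w, g**2) - mean**2
print(abs(mean - np.exp(0.5)) < 1e-8,
      abs(var - (np.exp(2.0) - np.exp(1.0))) < 1e-6)
```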
Ray-tracing method for creeping waves on arbitrarily shaped nonuniform rational B-splines surfaces.
Chen, Xi; He, Si-Yuan; Yu, Ding-Feng; Yin, Hong-Cheng; Hu, Wei-Dong; Zhu, Guo-Qiang
2013-04-01
An accurate creeping ray-tracing algorithm is presented in this paper to determine the tracks of creeping waves (or creeping rays) on arbitrarily shaped free-form parametric surfaces [nonuniform rational B-splines (NURBS) surfaces]. The main challenge in calculating the surface diffracted fields on NURBS surfaces is due to the difficulty in determining the geodesic paths along which the creeping rays propagate. On one single parametric surface patch, the geodesic paths need to be computed by solving the geodesic equations numerically. Furthermore, realistic objects are generally modeled as the union of several connected NURBS patches. Due to the discontinuity of the parameter between the patches, it is more complicated to compute geodesic paths on several connected patches than on one single patch. Thus, a creeping ray-tracing algorithm is presented in this paper to compute the geodesic paths of creeping rays on the complex objects that are modeled as the combination of several NURBS surface patches. In the algorithm, the creeping ray tracing on each surface patch is performed by solving the geodesic equations with a Runge-Kutta method. When the creeping ray propagates from one patch to another, a transition method is developed to handle the transition of the creeping ray tracing across the border between the patches. This creeping ray-tracing algorithm can meet practical requirements because it can be applied to the objects with complex shapes. The algorithm can also extend the applicability of NURBS for electromagnetic and optical applications. The validity and usefulness of the algorithm can be verified from the numerical results. PMID:23595326
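On a single smooth parametric patch, the core step, numerically integrating the geodesic equations with a Runge-Kutta method, can be sketched on the unit sphere (a stand-in for a NURBS patch, with `solve_ivp`'s default RK45 playing the role of the Runge-Kutta integrator; everything here is an illustrative assumption, not the paper's algorithm). The check exploits the fact that a sphere geodesic is a great circle, so the track must stay in the plane spanned by the initial position and velocity.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geodesic equations on the unit sphere in (theta, phi) coordinates:
# theta'' = sin(theta)cos(theta) phi'^2,  phi'' = -2 cot(theta) theta' phi'.
def geodesic_rhs(t, y):
    th, ph, dth, dph = y
    return [dth, dph,
            np.sin(th) * np.cos(th) * dph**2,
            -2.0 * (np.cos(th) / np.sin(th)) * dth * dph]

y0 = [np.pi / 3, 0.0, 0.3, 1.0]        # generic start point and direction
sol = solve_ivp(geodesic_rhs, (0.0, 2.0), y0, rtol=1e-10, atol=1e-12)

def embed(th, ph):
    """Map surface parameters to a point in R^3."""
    return np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])

th0, ph0, dth0, dph0 = y0
p0 = embed(th0, ph0)
# Initial velocity in R^3 via the chain rule through the parametrization.
e_th = np.array([np.cos(th0) * np.cos(ph0), np.cos(th0) * np.sin(ph0), -np.sin(th0)])
e_ph = np.array([-np.sin(th0) * np.sin(ph0), np.sin(th0) * np.cos(ph0), 0.0])
v0 = dth0 * e_th + dph0 * e_ph
normal = np.cross(p0, v0)               # normal of the great-circle plane
pts = np.array([embed(th, ph) for th, ph in zip(sol.y[0], sol.y[1])])
print(np.max(np.abs(pts @ normal)) < 1e-6)   # track stays in the plane
```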
Singh, Nikhil; Vialard, François-Xavier; Niethammer, Marc
2015-10-01
This paper develops a method for higher order parametric regression on diffeomorphisms for image regression. We present a principled way to define curves with nonzero acceleration and nonzero jerk. This work extends methods based on geodesics which have been developed during the last decade for computational anatomy in the large deformation diffeomorphic image analysis framework. In contrast to previously proposed methods to capture image changes over time, such as geodesic regression, the proposed method can capture more complex spatio-temporal deformations. We take a variational approach that is governed by an underlying energy formulation, which respects the nonflat geometry of diffeomorphisms. Such an approach of minimal energy curve estimation also provides a physical analogy to particle motion under a varying force field. This gives rise to the notion of the quadratic, the cubic and the piecewise cubic splines on the manifold of diffeomorphisms. The variational formulation of splines also allows for the use of temporal control points to control spline behavior. This necessitates the development of a shooting formulation for splines. The initial conditions of our proposed shooting polynomial paths in diffeomorphisms are analogous to the Euclidean polynomial coefficients. We experimentally demonstrate the effectiveness of using the parametric curves both for synthesizing polynomial paths and for regression of imaging data. The performance of the method is compared to geodesic regression. PMID:25980676
Boyd, T.L.
1995-10-01
Low- and moderate-temperature (20°C to 150°C) geothermal resources are widely distributed throughout the western and central United States. Numerous resources occur in the areas indicated in Figure 1, with individual reservoir areas one to ten square miles in extent. In the northern Great Plains, major aquifers with fluid temperatures exceeding 50°C extend in a continuous manner for thousands of square miles. Geothermal resources also occur at certain locations in the east. The last major effort to assess the national potential of low-temperature geothermal resources occurred in the early 1980s. Since that time, substantial resource information has been gained through drilling for hydrologic, environmental, petroleum and geothermal projects, but there has been no significant effort to update information on low-temperature geothermal resources. While there has been a substantial increase (49%) in direct-heat (excluding geothermal heat pumps) utilization during the last decade, the large resource base (266 Quads; the U.S. uses about 80 Quads/yr) is greatly under-utilized. Since the thermal energy extracted from these resources must be used near the reservoir, collocation of the resource and user is required.
Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora
2009-01-01
This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test. PMID:22291550
NASA Astrophysics Data System (ADS)
Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.
2015-01-01
Reliable quantification of the air-biosphere exchange flux of elemental mercury vapor (Hg0) is crucial for understanding the global biogeochemical cycle of mercury. However, there has not been a standard analytical protocol for flux quantification, and little attention has been devoted to characterizing the temporal variability and comparability of fluxes measured by different methods. In this study, we deployed a collocated set of micrometeorological (MM) and dynamic flux chamber (DFC) measurement systems to quantify Hg0 flux over bare soil and low standing crop in an agricultural field. The techniques include relaxed eddy accumulation (REA), modified Bowen ratio (MBR), and aerodynamic gradient (AGM) methods, as well as dynamic flux chambers of traditional (TDFC) and novel (NDFC) designs. The five systems and their measured fluxes were cross-examined with respect to magnitude, temporal trend and correlation with environmental variables. Fluxes measured by the MM and DFC methods showed distinct temporal trends: the former exhibited a highly dynamic temporal variability, while the latter had much more gradual temporal features. The diurnal characteristics reflected the difference in the fundamental processes driving the measurements. The correlations between NDFC and TDFC fluxes and between MBR and AGM fluxes were significant (R > 0.8, p < 0.05), but the correlation between DFC and MM fluxes was weak to moderate (R = 0.1-0.5). Statistical analysis indicated that the medians of the turbulent fluxes estimated by the three independent MM techniques were not significantly different. The cumulative flux measured by TDFC was considerably lower (42% of the AGM and 31% of the MBR flux), while those measured by NDFC, AGM and MBR were similar (<10% difference). This suggests that incorporating an atmospheric turbulence property such as friction velocity to correct the DFC-measured flux effectively bridges the gap between the Hg0 fluxes measured by enclosure and MM techniques.
Cumulative flux measured by REA was ~60% higher than the gradient-based fluxes. Environmental factors have different degrees of impact on the fluxes observed by the different techniques, possibly because of the underlying assumptions specific to each individual method. Recommendations regarding the application of flux quantification methods are made based on the data obtained in this study.
Extracting Verb-Noun Collocations from Text
Jian, Jia Yan (Department of Computer Science)
(Fragment.) Describes a new method for extracting monolingual collocations, based on statistical methods; automatic extraction of monolingual and bilingual collocations covers names, idioms, and terminology.
Number systems, α-splines and refinement
NASA Astrophysics Data System (ADS)
Zube, Severinas
2004-12-01
This paper is concerned with smooth refinable functions on the plane with respect to a complex scaling factor α. Characteristic functions of certain self-affine tiles related to a given scaling factor are the simplest examples of such refinable functions. We study the smooth refinable functions obtained as convolution powers of such characteristic functions. Dahlke, Dahmen, and Latour obtained some explicit estimates for the smoothness of the resulting convolution products. In the case α = 1+i, we prove better results. We introduce α-splines in two variables, which are linear combinations of shifted basic functions. We derive basic properties of α-splines and proceed with a detailed presentation of refinement methods. We illustrate the application of α-splines to subdivision with several examples. It turns out that α-splines produce well-known subdivision algorithms that are based on box splines: Doo-Sabin, Catmull-Clark, Loop, Midedge and some √2-subdivision schemes with good continuity. The main geometric ingredient in the definition of α-splines is the fundamental domain (a fractal set or a self-affine tile). The properties of the fractal obtained in number theory are important and necessary in order to determine two basic properties of α-splines: partition of unity and the refinement equation.
BaniHani, Suleiman; De, Suvranu
2009-01-01
In this paper we develop the Point Collocation-based Method of Finite Spheres (PCMFS) to simulate the viscoelastic response of soft biological tissues and evaluate the effectiveness of model order reduction methods such as modal truncation, Hankel optimal model and truncated balanced realization techniques for PCMFS. The PCMFS was developed in [1] as a physics-based technique for real time simulation of surgical procedures. It is a meshfree numerical method in which discretization is performed using a set of nodal points with approximation functions compactly supported on spherical subdomains centered at the nodes. The point collocation method is used as the weighted residual technique where the governing differential equations are directly applied at the nodal points. Since computational speed has a significant role in simulation of surgical procedures, model order reduction methods have been compared for relative gains in efficiency and computational accuracy. Of these methods, truncated balanced realization results in the highest accuracy while modal truncation results in the highest efficiency. PMID:20300494
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
NASA Astrophysics Data System (ADS)
Sun, Ya-Song; Ma, Jing; Li, Ben-Wen
2015-10-01
A collocation spectral method (CSM) is developed to solve the fin heat transfer problem in triangular, trapezoidal, exponential, concave parabolic, and convex geometries. In the thermal process of fin heat transfer, the fin dissipates heat to the environment by convection and radiation; internal heat generation, thermal conductivity, the heat transfer coefficient, and surface emissivity are all functions of temperature, and the ambient fluid temperature and radiative sink temperature are considered to be nonzero. The temperature in the fin is approximated by Chebyshev polynomials at spectral collocation points, so the differential form of the energy equation is transformed into a matrix algebraic equation. To test the efficiency and accuracy of the developed method, five types of convective-radiative fins are examined. Results obtained by the CSM are assessed by comparison with results available in the literature. These comparisons indicate that the CSM can be recommended as a good option for simulating and predicting the thermal performance of convective-radiative fins.
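The core of a CSM, representing the unknown by a Chebyshev expansion at collocation points, can be illustrated generically (a toy 1-D approximation, not the fin solver itself; numpy's Chebyshev utilities stand in for the paper's formulation):

```python
import numpy as np

# Chebyshev-Gauss-Lobatto collocation points on [-1, 1]
N = 16
x = np.cos(np.pi * np.arange(N + 1) / N)

# interpolate a smooth mock "temperature" profile at the collocation points
f = np.exp(x)
coef = np.polynomial.chebyshev.chebfit(x, f, N)

# spectral accuracy: the interpolant matches f to near machine precision
xx = np.linspace(-1.0, 1.0, 500)
err = float(np.max(np.abs(np.polynomial.chebyshev.chebval(xx, coef) - np.exp(xx))))
```

This spectral convergence for smooth profiles is what makes the collocation representation competitive with much finer finite-difference grids.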
Small Area Estimation Using Penalized Spline Regression
Outline: 1. Introduction: Northeastern Lakes Survey. 2. Methods: nonparametric regression using penalized splines. Jean Opsomer, Iowa State.
Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D
2012-09-01
Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered by the computational cost associated with the numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use the hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation.
The results show that the aSG-hSC method is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional MCMC simulations. The computational efficiency is expected to be even more beneficial for more computationally expensive groundwater problems.
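The hierarchical surplus used as the error indicator can be illustrated in 1-D (with a linear hat-function basis for simplicity, although the paper uses a higher-order basis): the surplus is the difference between f at a newly added grid point and the interpolant from the coarser level, and it decays rapidly where f is smooth, which is what justifies refining only where the surplus is large.

```python
import math

def surpluses(f, level):
    """Hierarchical surpluses of f on [0, 1] at the new (odd) points of a level:
    surplus = f(x) minus the linear interpolant from the two coarser neighbours."""
    h = 2.0 ** (-level)
    new_pts = [(2 * k + 1) * h for k in range(2 ** (level - 1))]
    return [abs(f(x) - 0.5 * (f(x - h) + f(x + h))) for x in new_pts]

f = lambda x: math.sin(math.pi * x)
s2 = max(surpluses(f, 2))   # coarse level
s5 = max(surpluses(f, 5))   # finer level: surplus shrinks roughly 4x per level
```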
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness. PMID:22376200
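A stripped-down sketch of the trapezoidal estimating-equation idea (one linear ODE, with exact states standing in for the penalized-spline smooth of noisy data):

```python
import math

# states x(t) = exp(-theta t); in practice these would come from a
# penalized-spline smooth of noisy measurements
theta_true, h, n = 0.5, 0.01, 100
x = [math.exp(-theta_true * i * h) for i in range(n + 1)]

# trapezoidal rule for dx/dt = -theta * x:
#   x_{i+1} - x_i = -(theta h / 2) (x_i + x_{i+1})
# so regress y_i = x_{i+1} - x_i on z_i = -(h/2)(x_i + x_{i+1})
y = [x[i + 1] - x[i] for i in range(n)]
z = [-(h / 2.0) * (x[i] + x[i + 1]) for i in range(n)]
theta_hat = sum(yi * zi for yi, zi in zip(y, z)) / sum(zi * zi for zi in z)
```

With exact states the estimate recovers theta up to the O(h^2) trapezoidal discretization error, which is the trade-off the abstract discusses.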
NASA Astrophysics Data System (ADS)
Ng, Pin T.; Maechler, Martin
2015-05-01
COBS (COnstrained B-Splines), written in R, creates constrained regression smoothing splines via linear programming and sparse matrices. The method has two important features: the number and location of knots for the spline fit are established using the likelihood-based Akaike Information Criterion (rather than a heuristic procedure); and fits can be made for quantiles (e.g. 25% and 75% as well as the usual 50%) of the response variable, which is valuable when the scatter is asymmetrical or non-Gaussian. This code is useful, for example, for estimating cluster ages when there is a wide spread in stellar ages at a chosen absorption, since a standard regression line does not give an effective measure of this relationship.
Lin, Guang; Elizondo, Marcelo A.; Lu, Shuai; Wan, Xiaoliang
2014-01-01
This paper proposes a probabilistic collocation method (PCM) to quantify the uncertainties in dynamic simulations of power systems. The approach was tested on a single-machine-infinite-bus system and on the over-15,000-bus Western Electricity Coordinating Council (WECC) system. Compared to the classic Monte Carlo (MC) method, the proposed PCM applies the Smolyak algorithm to reduce the number of simulations that have to be performed, so the computational cost can be greatly reduced. The algorithm and procedures are described in the paper, and comparisons with the MC method are made on the single-machine system as well as the WECC system. The simulation results show that with PCM only a small number of sparse grid points need to be sampled, even when dealing with systems with a relatively large number of uncertain parameters. PCM is, therefore, computationally more efficient than the MC method.
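The essence of collocation-based uncertainty quantification, evaluating the model only at quadrature points instead of at many random MC samples, can be shown for one uncertain parameter (a toy response, not the WECC model; in 1-D the Smolyak sparse grid reduces to plain Gauss quadrature):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# toy model response with one uncertain standard-normal parameter xi
model = lambda xi: np.exp(0.5 * xi)

x, w = He.hermegauss(10)             # 10 collocation (quadrature) points
w = w / math.sqrt(2.0 * math.pi)     # normalise to the N(0,1) density
mean = float(np.sum(w * model(x)))   # collocation estimate of E[model(xi)]
# the analytic mean of exp(0.5 * xi) is exp(0.125)
```

Ten model runs reproduce the analytic mean essentially exactly, whereas plain MC sampling would need orders of magnitude more runs for comparable accuracy.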
Lin, Guang; Tartakovsky, Alexandre M.
2009-05-01
In this study, a probabilistic collocation method (PCM) on sparse grids was used to solve stochastic equations describing flow and transport in three-dimensional, saturated, randomly heterogeneous porous media. Karhunen-Loève (KL) decomposition was used to represent the three-dimensional log hydraulic conductivity Y = ln Ks. The hydraulic head h and average pore velocity v were obtained by solving the three-dimensional continuity equation coupled with Darcy's law with a random hydraulic conductivity field. The concentration was computed by solving a three-dimensional stochastic advection-dispersion equation with the stochastic average pore velocity v computed from Darcy's law. PCM is an extension of the generalized polynomial chaos (gPC) that couples gPC with probabilistic collocation. By using sparse grid points, PCM can handle a random process with a large number of random dimensions at relatively low computational cost compared to full tensor products. Monte Carlo (MC) simulations were also conducted to verify the accuracy of the PCM. Comparison of the MC and PCM results for the mean and standard deviation of concentration makes it evident that the PCM approach is computationally more efficient than Monte Carlo simulations. Unlike the conventional moment-equation approach, there is no limitation on the amplitude of random perturbation in PCM. Furthermore, PCM on sparse grids can efficiently simulate solute transport in randomly heterogeneous porous media with large variances.
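The KL representation of the log-conductivity field can be sketched on a 1-D grid (an illustrative discretization, not the study's 3-D field; an exponential covariance is assumed):

```python
import numpy as np

# exponential covariance C(s,t) = sigma^2 * exp(-|s - t| / l) on a 1-D grid
n, sigma2, corr_len = 200, 1.0, 0.3
t = np.linspace(0.0, 1.0, n)
C = sigma2 * np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)

lam, phi = np.linalg.eigh(C)            # eigenpairs of the covariance matrix
lam, phi = lam[::-1], phi[:, ::-1]      # sort descending

# truncated KL expansion: Y ~ mean + sum_k sqrt(lam_k) * phi_k * xi_k
energy = np.cumsum(lam) / np.sum(lam)
m = int(np.searchsorted(energy, 0.95)) + 1   # modes capturing 95% of variance
```

The rapid eigenvalue decay is what lets the random field be represented by a modest number of random dimensions, which the sparse-grid collocation then exploits.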
Adaptive Basis Sampling for Smoothing Splines
Zhang, Nan
2015-08-03
Smoothing splines provide flexible nonparametric regression estimators. Penalized likelihood method is adopted when responses are from exponential families and multivariate models are constructed with certain analysis of variance decomposition...
Computer program for fitting low-order polynomial splines by method of least squares
NASA Technical Reports Server (NTRS)
Smith, P. J.
1972-01-01
FITLOS is computer program which implements new curve fitting technique. Main program reads input data, calls appropriate subroutines for curve fitting, calculates statistical analysis, and writes output data. Method was devised as result of need to suppress noise in calibration of multiplier phototube capacitors.
Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael
2000-04-11
A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness: small perturbations in the data, arising e.g. from noise, can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-splines are used to approximate the object. If a 'proper' collection of B-splines is chosen such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information; the intuition is that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is adequate to represent the object over spatially smooth areas.
The pruning part is a greedy search algorithm that finds and deletes redundant knots based on the estimation of a weight associated with each basis vector. The overall algorithm iterates by inserting and deleting knots and ends up with far fewer knots than pixels to represent the object, while the estimation error remains within a certain tolerance. Thus, an efficient reconstruction can be obtained which significantly reduces the complexity of the problem. In this thesis, the adaptive B-spline method is applied to a cross-well tomography problem arising from the application of finding underground pollution plumes. Cross-well tomography is applied by placing arrays of electromagnetic transmitters and receivers along the boundaries of the region of interest. By utilizing the inverse scattering method, a linear inverse model is set up, and the adaptive B-spline method described above is then applied. The simulation results show that the B-spline method reduces the dimensional complexity by 90% compared with that of a pixel-based method, and decreases the time complexity by 50%, without significantly degrading the estimation.
Results on Meshless Collocation Techniques
Schaback, Robert
Results on Meshless Collocation Techniques. L. Ling, R. Opfer, and R. Schaback, March 15, 2004. Despite many applications, there were no proven results so far on the unsymmetric meshless collocation method; since it is not safe in general, asymptotic feasibility is proved for a generalized variant using separated trial and test spaces.
NASA Astrophysics Data System (ADS)
Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.
2014-09-01
Reliable quantification of the air-biosphere exchange flux of elemental mercury vapor (Hg0) is crucial for understanding the global biogeochemical cycle of mercury. However, there has not been a standard analytical protocol for flux quantification, and little attention has been devoted to characterizing the temporal variability and comparability of fluxes measured by different methods. In this study, we deployed a collocated set of micro-meteorological (MM) and enclosure measurement systems to quantify Hg0 flux over bare soil and low standing crop in an agricultural field. The techniques include relaxed eddy accumulation (REA), modified Bowen ratio (MBR), and aerodynamic gradient (AGM) methods, as well as dynamic flux chambers of traditional (TDFC) and novel (NDFC) designs. The five systems and their measured fluxes were cross-examined with respect to magnitude, temporal trend, and sensitivity to environmental variables. Fluxes measured by the MM and DFC methods showed distinct temporal trends: the former exhibited highly dynamic temporal variability while the latter had much more gradual temporal features. The diurnal characteristics reflected the difference in the fundamental processes driving the measurements. The correlations between NDFC and TDFC fluxes and between MBR and AGM fluxes were significant (R > 0.8, p < 0.05), but the correlation between DFC and MM instantaneous fluxes was weak to moderate (R = 0.1-0.5). Statistical analysis indicated that the medians of the turbulent fluxes estimated by the three independent MM techniques were not significantly different. The cumulative flux measured by TDFC was considerably lower (42% of the AGM and 31% of the MBR flux), while those measured by NDFC, AGM and MBR were similar (< 10% difference). This indicates that the NDFC technique, which accounts for internal friction velocity, effectively bridged the gap in measured Hg0 flux compared to MM techniques. The cumulative flux measured by REA was ~60% higher than the gradient-based fluxes.
Environmental factors have different degrees of impact on the fluxes observed by different techniques, possibly caused by the underlying assumptions specific to each individual method. Recommendations regarding the application of flux quantification methods are made based on the data obtained in this study.
Kosinka, Jiří; Sabin, Malcolm A.; Dodgson, Neil A.
2014-09-03
Introduction. Splines have their roots in the lofting technique used in the shipbuilding and aircraft industries throughout the first half of the 20th century. The first mathematical reference to the notion of splines is accredited to the work of Schoenberg [1]... (a discontinuous function) without having to remesh. Recently, due to the popularity of Isogeometric Analysis (IgA for short; see [15]), such modifications are ever more important. These modifications, however, break the partition of unity property and cannot...
Spherical splines for scalp potential and current density mapping.
Perrin, F; Pernier, J; Bertrand, O; Echallier, J F
1989-02-01
Description of mapping methods using spherical splines, both to interpolate scalp potentials (SPs), and to approximate scalp current densities (SCDs). Compared to a previously published method using thin plate splines, the advantages are a very simple derivation of the SCD approximation, faster computing times, and greater accuracy in areas with few electrodes. PMID:2464490
NASA Astrophysics Data System (ADS)
Liu, Qianlong
2005-11-01
One of the most practically important problems of computational aeroacoustics is the efficient and accurate calculation of flows around solid obstacles with arbitrary surfaces. To simulate flows in complex domains, we combine two mathematical approaches: the Adaptive Wavelet Collocation Method, which tackles the problem of efficiently resolving localized flow structures in complicated geometries, and the Brinkman Penalization Method, which addresses the problem of efficiently implementing arbitrarily complex solid boundaries. Through them, we can resolve and automatically track all the important flow structures on a computational grid that automatically adapts to the solution. To obtain accurate long-time flow simulations and accurately predict far-field acoustics using a relatively small computational domain, appropriate artificial boundary conditions are critical to minimize contamination by otherwise reflected spurious waves. Once an accurate near-field simulation is available, the Ffowcs Williams and Hawkings (FWH) equations are used to predict the far-field acoustics. The method is applied to a number of acoustics benchmark problems and the results are compared with both the exact and the direct numerical simulation solutions.
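The Brinkman penalization idea can be illustrated with a toy 1-D diffusion problem (not the adaptive wavelet solver): the obstacle is imposed by a volume penalty term rather than by a boundary condition, so the same grid and stencil work for any obstacle shape.

```python
import numpy as np

# 1-D diffusion u_t = u_xx with a solid obstacle on [0.4, 0.6], imposed by
# Brinkman penalization: u_t = u_xx - (1/eta) * chi(x) * u, with eta -> 0
n, eta = 201, 1e-8
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 0.4 * dx * dx                      # explicit diffusion stability limit
chi = (x >= 0.4) & (x <= 0.6)           # obstacle indicator (mask)
u = np.exp(-200.0 * (x - 0.2) ** 2)     # initial bump left of the obstacle

for _ in range(2000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * lap                    # explicit diffusion step
    u[chi] /= (1.0 + dt / eta)          # implicit penalization: forces u ~ 0
```

After the run, u is driven to (nearly) zero inside the obstacle, mimicking the no-slip/no-penetration effect of a solid body.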
NASA Astrophysics Data System (ADS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) has been found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having continuous first and second derivatives; the analytical differentiation of the spline regression then permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, allowing the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, assessed using the coefficient of determination (R2) and the root mean square error (RMSE). The results showed that the Two Term model best described the drying behavior. In addition, the drying rate smoothed using the CS proved to be an effective estimator for moisture-time curves, as well as for the missing moisture content data, of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
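A sketch of the CS rate-estimation idea (a natural cubic spline on a uniform grid, differentiated analytically; the moisture curve below is a mock stand-in, and the paper's risk-minimization smoothing is not reproduced):

```python
import math

def spline_rate(t, y):
    """Derivative of the natural cubic spline through (t, y); uniform grid."""
    n, h = len(t) - 1, t[1] - t[0]
    M = [0.0] * (n + 1)                       # second derivatives; M[0]=M[n]=0
    if n >= 2:                                # Thomas algorithm for interior M
        rhs = [6.0 * (y[i + 1] - 2.0 * y[i] + y[i - 1]) / h for i in range(1, n)]
        diag = [4.0 * h] * (n - 1)
        for i in range(1, n - 1):             # forward elimination
            w = h / diag[i - 1]
            diag[i] -= w * h
            rhs[i] -= w * rhs[i - 1]
        for i in range(n - 2, -1, -1):        # back substitution (M[n] = 0)
            M[i + 1] = (rhs[i] - h * M[i + 2]) / diag[i]
    def rate(x):                              # analytic derivative of the spline
        i = min(max(int((x - t[0]) / h), 0), n - 1)
        u, v = t[i + 1] - x, x - t[i]
        return ((y[i + 1] - y[i]) / h
                - M[i] * (u * u / (2.0 * h) - h / 6.0)
                + M[i + 1] * (v * v / (2.0 * h) - h / 6.0))
    return rate

t = [0.1 * k for k in range(41)]                  # 0..4 days, step 0.1
y = [8.2 + 85.2 * math.exp(-tk) for tk in t]      # mock moisture curve (%)
r = spline_rate(t, y)(2.0)                        # drying rate at t = 2 days
```

Differentiating the fitted spline, rather than finite-differencing the raw data, is what gives a smooth instantaneous drying rate.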
Statistical NLP: Lecture 7 Collocations
Inkpen, Diana
Topics surveyed: selection of collocations by frequency; selection of collocations based on the mean and variance of the distance between two words. The test statistic compares means, scaled by the variance of the data, and tells us how likely one is to get a sample of that mean; if the positions of the two words are scattered, the variance/sample deviation will be high. (n = number of times the two words co-occur.)
Hypothesis Testing in Smoothing Spline Models Anna Liu and Yuedong Wang
Wang, Yuedong
As a popular nonparametric regression method, spline smoothing has attracted considerable attention, alongside local polynomial regression (Cleveland and Devlin 1988). In this paper we consider testing a parametric regression function using smoothing spline models; tests such as the locally most powerful test are discussed.
SURVIVAL ESTIMATION USING SPLINES
A nonparametric maximum likelihood procedure is given for estimating the survivor function from right-censored data. It approximates the hazard rate by a simple function such as a spline, with different approximations yielding different estimators. A special case is that proposed by...
NASA Astrophysics Data System (ADS)
Zhu, W.; Sommar, J.; Lin, C.-J.; Feng, X.
2015-02-01
Dynamic flux chambers (DFCs) and micrometeorological (MM) methods are extensively deployed for gauging air-surface Hg0 gas exchange. However, a systematic evaluation of the precision of the contemporary Hg0 flux quantification methods is not available. In this study, the uncertainty in Hg0 flux measured by the relaxed eddy accumulation (REA) method, the aerodynamic gradient method (AGM), the modified Bowen ratio (MBR) method, as well as DFCs of traditional (TDFC) and novel (NDFC) designs, is assessed using a robust data set from two field intercomparison campaigns. The absolute precision in Hg0 concentration difference (ΔC) measurements is estimated at 0.064 ng m-3 for the gradient-based MBR and AGM systems. For the REA system, the parameter is Hg0 concentration (C) dependent at 0.069 + 0.022C. During the campaigns, 57 and 62% of the individual vertical gradient measurements were found to be significantly different from zero, while for the REA technique the percentage of significant observations was lower. For the chambers, non-significant fluxes are confined to a few nighttime periods with varying ambient Hg0 concentration. The relative bias for DFC-derived fluxes is estimated to be ~ ±10%, and ~85% of the flux bias values are within ±2 ng m-2 h-1 in absolute terms. The DFC flux bias follows a diurnal cycle, which is largely dictated by temperature controls on the enclosed volume. Due to contrasting prevailing micrometeorological conditions, the relative uncertainty (median) in turbulent exchange parameters differs by nearly a factor of two between the campaigns, while that in ΔC measurements is fairly stable. The estimated flux uncertainties for the triad of MM techniques are 16-27, 12-23 and 19-31% (interquartile range) for the AGM, MBR and REA methods, respectively. This study indicates that flux-gradient-based techniques (MBR and AGM) are preferable to REA for quantifying Hg0 flux over ecosystems with low vegetation height.
A limitation of all the Hg0 flux measurement systems investigated is their inability to obtain synchronous samples for the calculation of ΔC. This reduces the precision of flux quantification, particularly for the MM systems under non-stationarity of ambient Hg0 concentration. For future applications, it is recommended that ΔC be derived from simultaneously collected samples.
Optimized Spline Interpolation
Madani, Ramtin; Amini, Arash; Marvasti, Farrokh
2011-01-01
In this paper, we investigate the problem of designing compact support interpolation kernels for a given class of signals. By using the calculus of variations, we simplify the optimization problem from an infinite nonlinear problem to a finite-dimensional linear case, and then find the optimum compact support function that best approximates a given filter in the least-squares sense (l2 norm). The benefit of compact support interpolants is the low computational complexity of the interpolation process, while the optimum compact support interpolant guarantees the highest achievable signal-to-noise ratio (SNR). Our simulation results confirm the superior performance of the proposed splines compared to other conventional compact support interpolants such as the cubic spline.
Spline screw payload fastening system
NASA Technical Reports Server (NTRS)
Vranish, John M. (inventor)
1993-01-01
A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member.
When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the first type electrical connector up to the complementary second type connector for interconnection therewith.
Bayesian B-spline mapping for dynamic quantitative traits.
Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong
2012-04-01
Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expression in the RR framework, B-splines have proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on the maximum likelihood; (2) for the simulated dataset whose complicated growth curve was generated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for the simulated dataset using Legendre polynomials, Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, in which QTLs controlling the growth trajectory of stem diameters in Populus are located. PMID:22624568
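The B-spline basis underlying such RR models can be sketched via the Cox-de Boor recursion (a generic textbook construction, not the paper's implementation); the partition-of-unity check below is one reason B-splines are numerically well behaved as regression bases:

```python
def bspline_basis(i, k, knots, x):
    """Cox-de Boor recursion: the i-th B-spline of order k (degree k-1) at x."""
    if k == 1:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    val = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0.0:                               # skip terms with repeated knots
        val += (x - knots[i]) / d1 * bspline_basis(i, k - 1, knots, x)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0.0:
        val += (knots[i + k] - x) / d2 * bspline_basis(i + 1, k - 1, knots, x)
    return val

# cubic (order 4) basis on [0, 3] with clamped (repeated) end knots
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0, 3.0]
nbasis = len(knots) - 4                        # six basis functions
# partition of unity: basis values sum to one at interior points
sums = [sum(bspline_basis(i, 4, knots, x) for i in range(nbasis))
        for x in (0.3, 0.7, 1.5, 2.9)]
```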
On Curved Simplicial Elements and Best Quadratic Spline Approximation for
Hamann, Bernd
On Curved Simplicial Elements and Best Quadratic Spline Approximation for Hierarchical Data: a method for hierarchical data approximation using curved quadratic simplicial elements for domain decomposition. Curved simplicial elements make possible a better representation of curved geometry and domain boundaries.
Fitting multidimensional splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
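The backward-elimination idea for knot selection can be sketched with a truncated-power basis (a minimal illustration, not the FORTRAN programs described): knots whose removal barely changes the residual sum of squares are deleted, and the knot carrying the real structure survives.

```python
import numpy as np

def design(x, knots):
    """Linear truncated-power spline basis: [1, x, (x - k)_+ per knot]."""
    cols = [np.ones_like(x), x] + [np.clip(x - k, 0.0, None) for k in knots]
    return np.stack(cols, axis=1)

def rss(x, y, knots):
    """Residual sum of squares of the least-squares spline fit."""
    A = design(x, knots)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    res = y - A @ coef
    return float(res @ res)

x = np.linspace(0.0, 1.0, 101)
y = np.where(x < 0.5, x, 0.5 + 3.0 * (x - 0.5))    # kink at x = 0.5
knots = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
tol = 1e-8
while len(knots) > 1:
    # try deleting each knot; keep the deletion that hurts the fit least
    trials = [(rss(x, y, knots[:j] + knots[j + 1:]), j)
              for j in range(len(knots))]
    best_rss, j = min(trials)
    if best_rss > tol:                              # every deletion now hurts
        break
    knots.pop(j)
```

For this noiseless piecewise-linear example, all candidate knots except the true breakpoint at 0.5 are eliminated.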
Multi-quadric collocation model of horizontal crustal movement
NASA Astrophysics Data System (ADS)
Chen, G.; Zeng, A. M.; Ming, F.; Jing, Y. F.
2015-11-01
To establish the horizontal crustal movement velocity field of the Chinese mainland, a Hardy multi-quadric fitting model and collocation are usually used; however, the kernel function, nodes, and smoothing factor are difficult to determine in the Hardy function interpolation, and in the collocation model the covariance function of the stochastic signal must be carefully constructed. In this paper, a new combined estimation method for establishing the velocity field, based on collocation and multi-quadric interpolation, is presented. The crustal movement estimation simultaneously takes into consideration an Euler vector as the crustal movement trend and the local distortions as the stochastic signals, and a kernel function of the multi-quadric fitting model substitutes for the covariance function of collocation. The velocities of a set of 1070 reference stations were obtained from the Crustal Movement Observation Network of China (CMONOC), and the corresponding velocity field was established using the new combined estimation method. A total of 85 reference stations were used as check points; the precision in the north and east directions was 1.25 and 0.80 mm yr-1, respectively. The result obtained by the new method agrees with those of the collocation method and multi-quadric interpolation, without requiring a covariance function for the signals.
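A minimal sketch of Hardy multiquadric interpolation of scattered values (hypothetical 2-D station data, not CMONOC velocities; the paper's combined estimator additionally carries the Euler-vector trend):

```python
import numpy as np

def multiquadric_fit(xy, f, c=0.5):
    """Hardy multiquadric interpolation: kernel phi(r) = sqrt(r^2 + c^2)."""
    # pairwise distances between the data sites
    r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    coef = np.linalg.solve(np.sqrt(r ** 2 + c ** 2), f)
    def interp(p):
        d = np.linalg.norm(xy - np.asarray(p), axis=-1)
        return float(np.sqrt(d ** 2 + c ** 2) @ coef)
    return interp

rng = np.random.default_rng(0)
pts = rng.random((30, 2))                    # hypothetical station positions
vals = np.sin(pts[:, 0]) + pts[:, 1] ** 2    # hypothetical velocity component
g = multiquadric_fit(pts, vals)              # interpolant through all stations
```

The shape parameter c plays the role of the smoothing factor the abstract notes is hard to choose; the interpolant reproduces the data exactly at the stations.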
NASA Technical Reports Server (NTRS)
Fang, Ming; Bowin, Carl
1992-01-01
To construct Venus' gravity disturbance field (or gravity anomaly) with the spacecraft-observer line of sight (LOS) acceleration perturbation data, both a global and a local approach can be used. The global approach, e.g., spherical harmonic coefficients, and the local approach, e.g., the integral operator method, based on geodetic techniques are generally not the same, so they must be used separately for mapping long wavelength features and short wavelength features. Harmonic spline, as an interpolation and extrapolation technique, is intrinsically flexible enough for both global and local mapping of a potential field. Theoretically, it preserves the information of the potential field up to the bound set by the sampling theorem, regardless of whether the mapping is global or local, and avoids truncation errors. The improvement of harmonic spline methodology for global mapping is reported. New basis functions, a singular value decomposition (SVD) based modification to Parker & Shure's numerical procedure, and preliminary results are presented.
Theory, computation, and application of exponential splines
NASA Technical Reports Server (NTRS)
Mccartin, B. J.
1981-01-01
A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape-preserving approximant is developed. A sequence of selected curve-fitting examples is presented which clearly demonstrates the advantages of exponential splines over cubic splines.
Collocations: A Neglected Variable in EFL.
ERIC Educational Resources Information Center
Farghal, Mohammed; Obiedat, Hussein
1995-01-01
Addresses the issue of collocations as an important and neglected variable in English-as-a-Foreign-Language classes. Two questionnaires, in English and Arabic, involving common collocations relating to food, color, and weather were administered to English majors and English language teachers. Results show both groups deficient in collocations. (36…
Flight Testing a Real Time Implementation of a UAV Path Planner Using Direct Collocation
Brian R. …, University Park, PA 16802
Flight tests of a path planning algorithm using direct collocation are described; an unmanned aerial vehicle was used for these tests. The method plans a path that maximizes the time a target is in view.
Analysis of chromatograph systems using orthogonal collocation
NASA Technical Reports Server (NTRS)
Woodrow, P. T.
1974-01-01
Research is generating fundamental engineering design techniques and concepts for the chromatographic separator of a chemical analysis system for an unmanned, Martian roving vehicle. A chromatograph model is developed which incorporates previously neglected transport mechanisms. The numerical technique of orthogonal collocation is studied. To establish the utility of the method, three models of increasing complexity are considered, the latter two being limiting cases of the derived model: (1) a simple, diffusion-convection model; (2) a rate of adsorption limited, inter-intraparticle model; and (3) an inter-intraparticle model with negligible mass transfer resistance.
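The first of the three models, simple diffusion-convection, gives a feel for how orthogonal collocation works. The sketch below (plain Python; the Peclet number and polynomial degree are arbitrary choices, not values from the study) collocates the steady convection-diffusion equation u'' - Pe*u' = 0 on [0,1] at interior Chebyshev points and compares against the exact solution:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

N, Pe = 12, 5.0  # polynomial degree and Peclet number (assumed for the demo)
interior = [0.5 * (1 - math.cos(math.pi * m / N)) for m in range(1, N)]

rows, rhs = [], []
rows.append([1.0] + [0.0] * N)          # boundary condition u(0) = 0
rhs.append(0.0)
for x in interior:                       # residual vanishes at collocation points
    row = []
    for k in range(N + 1):
        d2 = k * (k - 1) * x ** (k - 2) if k >= 2 else 0.0
        d1 = k * x ** (k - 1) if k >= 1 else 0.0
        row.append(d2 - Pe * d1)
    rows.append(row)
    rhs.append(0.0)
rows.append([1.0] * (N + 1))             # boundary condition u(1) = 1
rhs.append(1.0)

a = solve(rows, rhs)
u = lambda x: sum(a[k] * x ** k for k in range(N + 1))
exact = lambda x: (math.exp(Pe * x) - 1) / (math.exp(Pe) - 1)
err = max(abs(u(j / 100) - exact(j / 100)) for j in range(101))
```

With a degree-12 polynomial the maximum error is far below the grid resolution any comparable finite-difference scheme would achieve, which is the usual argument for collocation in chromatograph models.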
Locating CVBEM collocation points for steady state heat transfer problems
NASA Astrophysics Data System (ADS)
Hromadka, T. V., II
1985-06-01
The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst.
NASA Astrophysics Data System (ADS)
Pan, M.; Zhan, W.; Fisher, C. K.; Crow, W. T.; Wood, E. F.
2014-12-01
This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the multiple collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is slightly different from the original inner product solution but easier to extend to multiple collocation cases. The Pythagorean solution is fully equivalent to the original inner product solution for the triple collocation case. The multiple collocation problem turns out to be over-constrained, and a least-squares solution is presented. As the most critical assumption, that of uncorrelated errors, will almost surely fail in multiple collocation problems, we propose to divide the source estimates into structural categories and treat the structural and non-structural errors separately. Such error separation allows the source estimates to have their structural errors fully correlated within the same structural category, which is much more realistic than the original assumption. A new error assessment procedure is developed which performs the collocation twice, once for each type of error, and then sums the two types of errors. The new procedure is also fully backward compatible with the original triple collocation. Error assessment experiments are carried out for surface soil moisture data from multiple remote sensing models, land surface models, and in situ measurements.
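For the classical three-estimate case, the error variances follow directly from cross-covariances of pairwise differences, which is what the Pythagorean constraints reduce to. A minimal synthetic sketch (plain Python; the noise levels are fabricated, and zero-mean mutually uncorrelated errors with unit scaling are assumed):

```python
import random

random.seed(1)
n = 50000
truth = [random.gauss(0.0, 1.0) for _ in range(n)]
sx, sy, sz = 0.2, 0.3, 0.4  # true error standard deviations (assumed for the demo)
x = [t + random.gauss(0.0, sx) for t in truth]
y = [t + random.gauss(0.0, sy) for t in truth]
z = [t + random.gauss(0.0, sz) for t in truth]

def mean(v):
    return sum(v) / len(v)

# Triple-collocation estimates of the error variances:
# E[(x - y)(x - z)] = var(ex) when the three errors are mutually uncorrelated,
# because the truth cancels in both differences.
var_x = mean([(a - b) * (a - c) for a, b, c in zip(x, y, z)])
var_y = mean([(b - a) * (b - c) for a, b, c in zip(x, y, z)])
var_z = mean([(c - a) * (c - b) for a, b, c in zip(x, y, z)])
```

The estimates land near the true error variances 0.04, 0.09, and 0.16; with more than three sources one such constraint per pair over-determines the system, and the least-squares treatment described in the abstract takes over.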
NASA Technical Reports Server (NTRS)
Stacy, J. E.
1984-01-01
Asymmetric spline surfaces appear useful for the design of high-quality general optical systems (systems without symmetries). A spline influence function defined as the actual surface resulting from a simple perturbation in the spline definition array shows that a subarea is independent of others four or more points away. Optimization methods presented in this paper are used to vary a reflective spline surface near the focal plane of a decentered Schmidt-Cassegrain to reduce rms spot radii by a factor of 3 across the field.
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
Functional Coefficient Regression Models for Nonlinear Time Series: A Polynomial Spline Approach
Huang, Jianhua
We propose a global smoothing method based on polynomial splines for the estimation of functional coefficient regression models for nonlinear time series (cf. Rice & Wu, 2001; Huang, Wu & Zhou, 2002). The local polynomial method (Fan & Gijbels, 1996) has been …
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.
2013-01-01
Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.
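The summation-by-parts (SBP) mechanism behind such stability proofs can be checked directly for the simplest member of the family. The sketch below (plain Python; a minimal second-order diagonal-norm operator, not the authors' high-order spectral collocation operators) verifies that Q = P*D satisfies Q + Q^T = B = diag(-1, 0, ..., 0, 1), so the discrete energy rate of u_t = -u_x reduces to boundary terms alone:

```python
import math

n, h = 9, 1.0 / 8

# Second-order first-derivative operator with one-sided boundary closures.
D = [[0.0] * n for _ in range(n)]
D[0][0], D[0][1] = -1 / h, 1 / h
D[n - 1][n - 2], D[n - 1][n - 1] = -1 / h, 1 / h
for i in range(1, n - 1):
    D[i][i - 1], D[i][i + 1] = -1 / (2 * h), 1 / (2 * h)

p = [h] * n                 # diagonal norm (quadrature) matrix P
p[0] = p[n - 1] = h / 2

Q = [[p[i] * D[i][j] for j in range(n)] for i in range(n)]
B = [[0.0] * n for _ in range(n)]
B[0][0], B[n - 1][n - 1] = -1.0, 1.0

sbp_defect = max(abs(Q[i][j] + Q[j][i] - B[i][j])
                 for i in range(n) for j in range(n))

# Energy rate for u_t = -D u:  d/dt (u^T P u) = -u^T (Q + Q^T) u = u0^2 - uN^2,
# i.e. energy changes only through the boundaries.
u = [math.sin(2.0 * i / n) for i in range(n)]
Du = [sum(D[i][j] * u[j] for j in range(n)) for i in range(n)]
rate = -2 * sum(p[i] * u[i] * Du[i] for i in range(n))
boundary = u[0] ** 2 - u[n - 1] ** 2
```

The same algebraic identity, generalized to entropy variables and to high-order collocation norms, is what makes the schemes in the abstract provably stable.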
Covariance modeling in geodetic applications of collocation
NASA Astrophysics Data System (ADS)
Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko
2014-05-01
The collocation method is widely applied in geodesy for estimating/interpolating gravity-related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for obtaining reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions is addressed with an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized), where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit to the corresponding empirical values. Further constraints are applied so as to maintain coherence with the model degree variances and prevent solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check the impact that improved covariance modeling has on the collocation estimate.
Supporting Collocation Learning with a Digital Library
ERIC Educational Resources Information Center
Wu, Shaoqun; Franken, Margaret; Witten, Ian H.
2010-01-01
Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…
Smoothing Strange Attractors Using Splines
Junheng Luo; Dominique Thiebaut
1995-12-31
A noise-reduction algorithm for time-series of non-linear systems is presented. The algorithm smoothes the attractors in phase space using B-splines, allowing a more accurate measure of their dynamics. The algorithm is tested on numerical and experimental data. It is linear in complexity, and can be applied to embeddings of any dimension.
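The smoothing step amounts to a least-squares B-spline fit along the trajectory. The sketch below (plain Python; a noisy sine stands in for one phase-space coordinate, and the knot count and noise level are invented) builds cubic B-splines with the Cox-de Boor recursion and fits them by normal equations:

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def bspl(i, p, t, kn):
    """Cox-de Boor recursion for the i-th degree-p B-spline basis function."""
    if p == 0:
        return 1.0 if kn[i] <= t < kn[i + 1] else 0.0
    out = 0.0
    d = kn[i + p] - kn[i]
    if d > 0:
        out += (t - kn[i]) / d * bspl(i, p - 1, t, kn)
    d = kn[i + p + 1] - kn[i + 1]
    if d > 0:
        out += (kn[i + p + 1] - t) / d * bspl(i + 1, p - 1, t, kn)
    return out

# Clamped uniform cubic knot vector on [0, 1] with 8 interior knots -> 12 basis functions.
kn = [0.0] * 4 + [j / 9 for j in range(1, 9)] + [1.0] * 4
nb = len(kn) - 4

random.seed(2)
m = 200
ts = [j / m for j in range(m)]  # sample in [0, 1) to dodge the t = 1 edge case
clean = [math.sin(2 * math.pi * t) for t in ts]
noisy = [c + random.gauss(0.0, 0.1) for c in clean]

# Least-squares fit: solve the normal equations B^T B c = B^T y.
Bm = [[bspl(i, 3, t, kn) for i in range(nb)] for t in ts]
BtB = [[sum(Bm[r][i] * Bm[r][j] for r in range(m)) for j in range(nb)]
       for i in range(nb)]
Bty = [sum(Bm[r][i] * noisy[r] for r in range(m)) for i in range(nb)]
c = solve(BtB, Bty)
smooth = [sum(c[i] * Bm[r][i] for i in range(nb)) for r in range(m)]

rms = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))
```

Because the spline has far fewer degrees of freedom than there are samples, the fitted curve tracks the underlying signal while averaging out the noise, which is the effect the abstract exploits for attractor dynamics.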
Spline regression hashing for fast image search.
Liu, Yang; Wu, Fei; Yang, Yi; Zhuang, Yueting; Hauptmann, Alexander G
2012-10-01
Techniques for fast image retrieval over large databases have attracted considerable attention due to the rapid growth of web images. One promising way to accelerate image search is to use hashing technologies, which represent images by compact binary codewords. In this way, the similarity between images can be efficiently measured in terms of the Hamming distance between their corresponding binary codes. Although plenty of methods for generating hash codes have been proposed in recent years, there are still two key points that need to be improved: 1) how to precisely preserve the similarity structure of the original data and 2) how to obtain the hash codes of previously unseen data. In this paper, we propose our spline regression hashing method, in which both the local and global data similarity structures are exploited. To better capture the local manifold structure, we introduce splines developed in Sobolev space to find the local data mapping function. Furthermore, our framework simultaneously learns the hash codes of the training data and the hash function for the unseen data, which solves the out-of-sample problem. Extensive experiments conducted on real image datasets consisting of over one million images show that our proposed method outperforms the state-of-the-art techniques. PMID:22801510
Beamforming with collocated microphone arrays
NASA Astrophysics Data System (ADS)
Lockwood, Michael E.; Jones, Douglas L.; Su, Quang; Miles, Ronald N.
2003-10-01
A collocated microphone array, including three gradient microphones with different orientations and one omnidirectional microphone, was used to acquire data in a sound-treated room and in an outdoor environment. This arrangement of gradient microphones represents an acoustic vector sensor used in air. Beamforming techniques traditionally associated with much larger uniformly spaced arrays of omnidirectional sensors are extended to this compact array (1 cm3) with encouraging results. A frequency-domain minimum-variance beamformer was developed to work with this array. After a calibration of the array, the recovery of sources from any direction is achieved with high fidelity, even in the presence of multiple interferers. SNR gains of 5-12 dB with up to four speech sources were obtained with both indoor and outdoor recordings. This algorithm has been developed for new MEMS-type microphones that further reduce the size of the sensor array.
Spline interpolation techniques applied to the study of geophysical data
NASA Astrophysics Data System (ADS)
Mariani, Maria C.; Basu, Kanadpriya
2015-06-01
This work is devoted to the study of geophysical data by using different spline interpolation techniques. A spatial analysis of the California earthquake geological data was performed; some of the methods proved to be more efficient than others, depending on the number of data points considered. Overall, this class of interpolation surface proved to be a very powerful tool for analyzing geophysical data.
ERIC Educational Resources Information Center
Goudarzi, Zahra; Moini, M. Raouf
2012-01-01
Collocation is one of the most problematic areas in second language learning, and it seems that anyone who wants to improve his or her communication in another language should improve his or her collocation competence. This study attempts to determine the effect of applying three different kinds of collocation on collocation learning and retention of…
Stochastic dynamic models and Chebyshev splines
Fan, Ruzong; Zhu, Bin; Wang, Yuedong
2015-01-01
In this article, we establish a connection between a stochastic dynamic model (SDM) driven by a linear stochastic differential equation (SDE) and a Chebyshev spline, which enables researchers to borrow strength across fields both theoretically and numerically. We construct a differential operator for the penalty function and develop a reproducing kernel Hilbert space (RKHS) induced by the SDM and the Chebyshev spline. The general form of the linear SDE allows us to extend the well-known connection between an integrated Brownian motion and a polynomial spline to a connection between more complex diffusion processes and Chebyshev splines. One interesting special case is the connection between an integrated Ornstein–Uhlenbeck process and an exponential spline. We use two real data sets to illustrate the integrated Ornstein–Uhlenbeck process model and the exponential spline model and show that their estimates are almost identical. PMID:26045632
NASA Astrophysics Data System (ADS)
Stevens, D.; Power, H.; Meng, C. Y.; Howard, D.; Cliffe, K. A.
2013-12-01
This work proposes an alternative decomposition for local scalable meshless RBF collocation. The proposed method operates on a dataset of scattered nodes that are placed within the solution domain and on the solution boundary, forming a small RBF collocation system around each internal node. Unlike other meshless local RBF formulations that are based on a generalised finite difference (RBF-FD) principle, in the proposed "finite collocation" method the solution of the PDE is driven entirely by collocation of PDE governing and boundary operators within the local systems. A sparse global collocation system is obtained not by enforcing the PDE governing operator, but by assembling the value of the field variable in terms of the field value at neighbouring nodes. In analogy to full-domain RBF collocation systems, communication between stencils occurs only over the stencil periphery, allowing the PDE governing operator to be collocated in an uninterrupted manner within the stencil interior. The local collocation of the PDE governing operator allows the method to operate on centred stencils in the presence of strong convective fields; the reconstruction weights assigned to nodes in the stencils being automatically adjusted to represent the flow of information as dictated by the problem physics. This "implicit upwinding" effect mitigates the need for ad-hoc upwinding stencils in convective dominant problems. Boundary conditions are also enforced within the local collocation systems, allowing arbitrary boundary operators to be imposed naturally within the solution construction. The performance of the method is assessed using a large number of numerical examples with two steady PDEs; the convection-diffusion equation, and the Lamé-Navier equations for linear elasticity. The method exhibits high-order convergence in each case tested (greater than sixth order), and the use of centred stencils is demonstrated for convective-dominant problems. 
In the case of linear elasticity, the stress fields are reproduced to the same degree of accuracy as the displacement field, and exhibit the same order of convergence. The method is also highly stable towards variations in basis function flatness, demonstrating significantly improved stability in comparison to finite-difference type RBF collocation methods.
NASA Astrophysics Data System (ADS)
Ma, Xiang; Zabaras, Nicholas
2009-05-01
In recent years, there has been a growing interest in analyzing and quantifying the effects of random inputs in the solution of ordinary/partial differential equations. To this end, the spectral stochastic finite element method (SSFEM) is the most popular method due to its fast convergence rate. Recently, the stochastic sparse grid collocation method has emerged as an attractive alternative to SSFEM. It approximates the solution in the stochastic space using Lagrange polynomial interpolation. The collocation method requires only repetitive calls to an existing deterministic solver, similar to the Monte Carlo method. However, both the SSFEM and current sparse grid collocation methods utilize global polynomials in the stochastic space. Thus when there are steep gradients or finite discontinuities in the stochastic space, these methods converge very slowly or even fail to converge. In this paper, we develop an adaptive sparse grid collocation strategy using piecewise multi-linear hierarchical basis functions. Hierarchical surplus is used as an error indicator to automatically detect the discontinuity region in the stochastic space and adaptively refine the collocation points in this region. Numerical examples, especially for problems related to long-term integration and stochastic discontinuity, are presented. Comparisons with Monte Carlo and multi-element based random domain decomposition methods are also given to show the efficiency and accuracy of the proposed method.
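In one dimension the hierarchical-surplus machinery reduces to a short routine. The sketch below (plain Python; the target function and tolerance are arbitrary stand-ins for a stochastic-space response that vanishes at the boundaries) interpolates with piecewise-linear hierarchical hat functions and refines only where the surplus, used as the error indicator, exceeds the tolerance:

```python
import math

f = lambda x: math.sin(math.pi * x)   # smooth response with f(0) = f(1) = 0

def hat(l, i, x):
    """Hierarchical hat basis at level l, odd index i, centred on i / 2**l."""
    return max(0.0, 1.0 - abs((2 ** l) * x - i))

def interp(x, surplus):
    return sum(w * hat(l, i, x) for (l, i), w in surplus.items())

tol, max_level = 1e-3, 12
surplus, level_nodes = {}, [(1, 1)]
while level_nodes:
    nxt = []
    for (l, i) in level_nodes:
        xv = i / 2 ** l
        w = f(xv) - interp(xv, surplus)   # hierarchical surplus = error indicator
        surplus[(l, i)] = w
        if abs(w) > tol and l < max_level:
            nxt += [(l + 1, 2 * i - 1), (l + 1, 2 * i + 1)]  # refine locally
    level_nodes = nxt

err = max(abs(interp(j / 256, surplus) - f(j / 256)) for j in range(257))
```

For a smooth response the surpluses decay rapidly and refinement stops early everywhere; for a response with a discontinuity, refinement would concentrate around it, which is exactly the adaptivity the abstract describes in higher-dimensional stochastic spaces.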
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock price and its changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric smoothing splines technique for predicting stock price in this study. The MARS model is an adaptive nonparametric regression method that is well suited to problems with high dimensions and several variables; smoothing splines, on which the semi-parametric technique is based, is likewise a nonparametric regression method. In this study, we used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with each approach. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
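MARS builds its fit from mirrored hinge functions max(0, x - t) and max(0, t - x), selected greedily by least squares. A toy forward step (plain Python; synthetic one-variable data, not the accounting variables above) searches candidate knots for the pair that best explains the response:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

xs = [j / 20 for j in range(21)]
ys = [abs(x - 0.3) for x in xs]   # synthetic response with a kink at x = 0.3

def sse_for_knot(t):
    """Residual sum of squares for intercept + mirrored hinge pair at knot t."""
    basis = [[1.0, max(0.0, x - t), max(0.0, t - x)] for x in xs]
    # Normal equations for the least-squares coefficients.
    G = [[sum(r[i] * r[j] for r in basis) for j in range(3)] for i in range(3)]
    g = [sum(r[i] * y for r, y in zip(basis, ys)) for i in range(3)]
    beta = solve(G, g)
    return sum((sum(b * c for b, c in zip(r, beta)) - y) ** 2
               for r, y in zip(basis, ys))

knots = [j / 20 for j in range(1, 20)]
best = min(knots, key=sse_for_knot)   # greedy forward step picks the kink
```

Full MARS repeats this step over all variables and existing basis products, then prunes backwards; variable selection, as in the abstract, falls out of which hinges survive.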
NASA Astrophysics Data System (ADS)
Kuczyński, Paweł; Białecki, Ryszard
2014-06-01
The paper deals with the solution of radiation heat transfer problems in enclosures filled with a nonparticipating medium using ray tracing on hierarchical ortho-Cartesian meshes. The idea behind the approach is that radiative heat transfer problems can be solved on much coarser grids than their counterparts from computational fluid dynamics (CFD). The resulting code is designed as an add-on to OpenFOAM, an open-source CFD program. An ortho-Cartesian mesh involving boundary elements is created based upon the CFD mesh. Parametric non-uniform rational basis spline (NURBS) surfaces are used to define the boundaries of the enclosure, allowing domains of complex shape to be handled. An algorithm for determining random, uniformly distributed locations of rays leaving NURBS surfaces is described. The paper presents results of test cases assuming gray diffusive walls. In the current version of the model the radiation is not absorbed within gases. However, the ultimate aim of the work is to extend the functionality of the model to problems in absorbing, emitting and scattering media, iteratively projecting the results of the radiative analysis onto the CFD mesh and the CFD solution onto the radiative mesh.
Technology Transfer Automated Retrieval System (TEKTRAN)
Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS), is a nonparametric method that estimates complex nonlinear relationships by a seri...
Implicit B-spline surface reconstruction.
Rouhani, Mohammad; Sappa, Angel D; Boyer, Edmond
2015-01-01
This paper presents a fast and flexible curve and surface reconstruction technique based on implicit B-splines. This representation does not require any parameterization and is locally supported. This fact is exploited here to propose a reconstruction technique that solves a sparse system of equations. The method is further accelerated by reducing the dimension to the active control lattice. Moreover, surface smoothness constraints and user interaction are supported for controlling the surface. Finally, a novel weighting technique is introduced in order to blend small patches and smooth them in the overlapping regions. The whole framework is very fast and efficient and can handle large clouds of points with very low computational cost. The experimental results show the flexibility and accuracy of the proposed algorithm in describing objects with complex topologies. Comparisons with other fitting methods highlight the superiority of the proposed approach in the presence of noise and missing data. PMID:25373084
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
Mars Mission Optimization Based on Collocation of Resources
NASA Technical Reports Server (NTRS)
Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.
2003-01-01
This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.
Septic spline solutions of sixth-order boundary value problems
NASA Astrophysics Data System (ADS)
Siddiqi, Shahid S.; Akram, Ghazala
2008-05-01
Septic spline is used for the numerical solution of the sixth-order linear, special case boundary value problem. End conditions for the definition of the septic spline are derived, consistent with the sixth-order boundary value problem. The algorithm developed approximates the solution and its higher-order derivatives. The method has also been proved to be second-order convergent. Three examples are considered as numerical illustrations of the method. The method developed in this paper is also compared with that developed in [M. El-Gamel, J.R. Cannon, J. Latour, A.I. Zayed, Sinc-Galerkin method for solving linear sixth order boundary-value problems, Mathematics of Computation 73, 247 (2003) 1325-1343] and is observed to perform better.
Flexible coiled spline securely joins mating cylinders
NASA Technical Reports Server (NTRS)
Coppernol, R. W.
1966-01-01
Mating cylindrical members are joined by spline to form an integral structure. The spline is made of tightly coiled, high tensile-strength steel spiral wire that fits a groove between the mating members. It provides a continuous bearing surface for axial thrust between the members.
Multicategorical Spline Model for Item Response Theory.
ERIC Educational Resources Information Center
Abrahamowicz, Michal; Ramsay, James O.
1992-01-01
A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)
Smoothing splines: Regression, derivatives and deconvolution
NASA Technical Reports Server (NTRS)
Rice, J.; Rosenblatt, M.
1982-01-01
The statistical properties of a cubic smoothing spline and its derivative are analyzed. It is shown that unless unnatural boundary conditions hold, the integrated squared bias is dominated by local effects near the boundary. Similar effects are shown to occur in the regularized solution of a translation-kernel integral equation. These results are derived by developing a Fourier representation for a smoothing spline.
Calculating the 2D motion of lumbar vertebrae using splines.
McCane, Brendan; King, Tamara I; Abbott, J Haxby
2006-01-01
In this study we investigate the use of splines and the ICP method [Besl, P., McKay, N., 1992. A method for registration of 3d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 239-256.] for calculating the transformation parameters for a rigid body undergoing planar motion parallel to the image plane. We demonstrate the efficacy of the method by estimating the finite centre of rotation and angle of rotation from lateral flexion/extension radiographs of the lumbar spine. In an in vitro error study, the method displayed an average error of rotation of 0.44 +/- 0.45 degrees, and an average error in FCR calculation of 7.6 +/- 8.5 mm. The method was shown to be superior to those of Crisco et al. [Two-dimensional rigid-body kinematics using image contour registration. Journal of Biomechanics 28(1), 119-124.] and Brinckmann et al. [Quantification of overload injuries of the thoracolumbar spine in persons exposed to heavy physical exertions or vibration at the workplace: Part I - the shape of vertebrae and intervertebral discs - study of a young, healthy population and a middle-aged control group. Clinical Biomechanics Supplement 1, S5-S83.] for the tests performed here. In general, we believe the use of splines to represent planar shapes to be superior to using digitised curves or landmarks for several reasons. First, with appropriate software, splines require less effort to define and are a compact representation, with most vertebra outlines using fewer than 30 control points. Second, splines are inherently sub-pixel representations of curves, even if the control points are limited to pixel resolutions. Third, there is a well-defined method (the ICP algorithm) for registering shapes represented as splines.
Finally, like digitised curves, splines are able to represent a large class of shapes with little effort, but reduce potential segmentation errors from two dimensions (parallel and perpendicular to the image gradient) to just one (parallel to the image gradient). We have developed an application for performing all the necessary computations which can be downloaded from http://www.claritysmart.com. PMID:16325826
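Once two outlines are represented as point sets (here standing in for samples of the spline curves), the planar rigid transformation has a closed-form least-squares estimate, which is the inner step of the ICP method cited above. A minimal sketch (plain Python; the points are fabricated, not vertebral outlines):

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rotation angle and translation mapping src onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay, bx, by = x - csx, y - csy, u - cdx, v - cdy
        num += ax * by - ay * bx   # sum of cross products of centred pairs
        den += ax * bx + ay * by   # sum of dot products of centred pairs
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# A synthetic outline and its copy under a known rotation and translation.
outline = [(0.0, 0.0), (2.0, 0.1), (2.2, 1.0), (1.0, 1.4), (-0.2, 1.1)]
theta0 = math.radians(4.4)        # a plausible flexion step (assumed)
t0 = (0.5, -0.3)
c0, s0 = math.cos(theta0), math.sin(theta0)
moved = [(c0 * x - s0 * y + t0[0], s0 * x + c0 * y + t0[1]) for (x, y) in outline]

theta, (tx, ty) = fit_rigid_2d(outline, moved)
```

With noiseless correspondences the angle and translation are recovered exactly; ICP alternates this closed-form step with nearest-point matching, which is where the sub-pixel spline representation praised above pays off.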
A fully spectral collocation approximation for multi-dimensional fractional Schrödinger equations
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Abdelkawy, M. A.
2015-08-01
A shifted Legendre collocation method in two consecutive steps is developed and analyzed to numerically solve one- and two-dimensional time fractional Schrödinger equations (TFSEs) subject to initial-boundary and non-local conditions. The first step depends mainly on the shifted Legendre Gauss-Lobatto collocation (SL-GL-C) method for spatial discretization; an expansion in a series of shifted Legendre polynomials for the approximate solution and its spatial derivatives occurring in the TFSE is investigated. In addition, the Legendre-Gauss-Lobatto quadrature rule is established to treat the nonlocal conservation conditions. The expansion coefficients are then determined by reducing the TFSE with its nonlocal conditions to a system of fractional differential equations (SFDEs) for these coefficients. The second step proposes a shifted Legendre Gauss-Radau collocation (SL-GR-C) scheme for temporal discretization, which reduces this system to a system of algebraic equations that is far easier to solve. The proposed collocation scheme, in both the temporal and spatial discretizations, is successfully extended to solve the two-dimensional TFSE. Numerical experiments are carried out to confirm the spectral accuracy and efficiency of the proposed algorithms. By selecting relatively few Legendre Gauss-Lobatto and Gauss-Radau collocation nodes, we are able to obtain very accurate approximations, demonstrating the utility and high accuracy of the new approach over other numerical methods.
Corpus-Based versus Traditional Learning of Collocations
ERIC Educational Resources Information Center
Daskalovska, Nina
2015-01-01
One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…
Gauging the Effects of Exercises on Verb-Noun Collocations
ERIC Educational Resources Information Center
Boers, Frank; Demecheleer, Murielle; Coxhead, Averil; Webb, Stuart
2014-01-01
Many contemporary textbooks for English as a foreign language (EFL) and books for vocabulary study contain exercises with a focus on collocations, with verb-noun collocations (e.g. "make a mistake") being particularly popular as targets for collocation learning. Common exercise formats used in textbooks and other pedagogic materials…
Data approximation using a blending type spline construction
Dalmo, Rune; Bratlie, Jostein
2014-11-18
Generalized expo-rational B-splines (GERBS) are a blending-type spline construction in which local functions at each knot are blended together by C{sup k}-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
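The blending idea above can be sketched in a few lines. This is a simplified stand-in, not actual GERBS: the expo-rational basis is replaced by a C^1 Hermite blending polynomial, and the knots, local functions, and data below are hypothetical.

```python
import numpy as np

def blend(t):
    # C^1 Hermite blending function: rises 0 -> 1 on [0, 1] with zero end slopes
    return 3 * t**2 - 2 * t**3

def blended_spline(x, knots, local_funcs):
    """Evaluate a blending-type spline: on each knot interval the two
    neighbouring local functions are mixed by the blending function,
    so at each knot the curve reproduces that knot's local function."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    y = np.empty_like(x)
    for j, xi in enumerate(x):
        i = int(np.clip(np.searchsorted(knots, xi) - 1, 0, len(knots) - 2))
        t = (xi - knots[i]) / (knots[i + 1] - knots[i])
        b = blend(t)
        y[j] = (1 - b) * local_funcs[i](xi) + b * local_funcs[i + 1](xi)
    return y

# Local functions "fitted" to data subsets near each knot (illustrative only)
knots = np.array([0.0, 1.0, 2.0])
locals_ = [lambda x: 1.0 + 0.0 * x,        # constant near knot 0
           lambda x: x,                    # line near knot 1
           lambda x: 2.0 + 0.5 * (x - 2)]  # line near knot 2

xs = np.linspace(0, 2, 9)
print(blended_spline(xs, knots, locals_))
```

At each knot the blend weight is 0 or 1, so the spline interpolates the local function there; in between, the transition is C^1 by construction of the blending polynomial.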
Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture
NASA Technical Reports Server (NTRS)
Desai, Prasun N.; Conway, Bruce A.
2005-01-01
Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in the governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. With the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimal 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.
Spline-based procedures for dose-finding studies with active control
Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim
2015-01-01
In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
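The core of the target-dose estimation, finding where a spline through the dose-response means crosses the expected control effect, can be sketched as follows. The doses, mean responses, and control effect below are invented for illustration; the actual study uses a more careful spline construction plus bootstrap confidence intervals.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical mean responses at the studied doses (Emax-like shape)
doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
response = doses / (doses + 1.0)      # placeholder dose-response means
control_effect = 0.5                  # expected efficacy of the active control

# Cubic spline through the dose-response means (no parametric model assumed)
cs = CubicSpline(doses, response)

# Target dose d*: smallest dose at which the spline meets the control effect
roots = cs.solve(control_effect, extrapolate=False)
d_star = roots.min()
print(d_star)  # close to 1.0 for these illustrative data
```

`CubicSpline` is a `PPoly`, so `solve` returns all dose values where the fitted curve equals the control efficacy; taking the minimum gives the smallest target dose d*.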
K. Parand; A. R. Rezaei; A. Taghavi
2010-08-16
This paper compares rational Chebyshev (RC) and Hermite function (HF) collocation approaches for solving Volterra's model for population growth of a species within a closed system. This model is a nonlinear integro-differential equation in which the integral term represents the effect of toxin. The approach is based on orthogonal functions, which are defined in the paper. The collocation method reduces the solution of this problem to the solution of a system of algebraic equations. We also compare these methods with other numerical results and show that the present approach is applicable for solving nonlinear integro-differential equations.
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation-diminishing properties of Bezier/B-spline curves. It has the following structure: it tries to fit one Bezier segment to the entire data set and, if that is impossible, subdivides the data set and reconsiders the subset. After accepting a subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the remaining data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy, even in cases with large tolerances.
Analysis of harmonic spline gravity models for Venus and Mars
NASA Technical Reports Server (NTRS)
Bowin, Carl
1986-01-01
Methodology utilizing harmonic splines for determining the true gravity field from Line-Of-Sight (LOS) acceleration data from planetary spacecraft missions was tested. As is well known, the LOS data incorporate errors in the zero reference level that appear to be inherent in the processing procedure used to obtain the LOS vectors. The proposed method offers a solution to this problem. The harmonic spline program was converted from the VAX 11/780 to the Ridge 32C computer. A problem with the matrix inversion routine was solved, improving inversion of the data matrices used in the Optimum Estimate program for global Earth studies. The problem of obtaining a successful matrix inversion for a single rev supplemented by data for the two adjacent revs still remains.
Enhancing Performance of Protein Name Recognizers Using Collocation
Hou, Wen-Juan
… in biological relationship mining. This paper employs protein collocates extracted from a biological corpus … where the results are inconsistent, i.e., overlap partially, one of them is taken as a basis. The experiments show that Yapex-based methods … are employed to extract Chinese personal names, and rule-based methods are used to extract Chinese …
Collocation-based sparse estimation for constructing dynamic gene networks.
Shimamura, Teppei; Imoto, Seiya; Nagasaki, Masao; Yamauchi, Mai; Yamaguchi, Rui; Fujita, André; Tamada, Yoshinori; Gotoh, Noriko; Miyano, Satoru
2010-01-01
One of the open problems in systems biology is to infer dynamic gene networks describing the underlying biological process with mathematical, statistical and computational methods. First-order difference equation-based models, such as dynamic Bayesian networks and vector autoregressive models, have been used to infer time-lagged relationships between genes from time-series microarray data. However, two primary problems greatly reduce the effectiveness of current approaches. The first problem is the tacit assumption that the time lag is stationary. The second is the inseparability between measurement noise and process noise (unmeasured disturbances that pass through the time process). To address these problems, we propose a stochastic differential equation model for inferring continuous-time dynamic gene networks in the situation where both process noise and observation noise exist. We present a collocation-based sparse estimation for simultaneous parameter estimation and model selection in the model. The collocation-based approach requires considerably less computational effort than traditional methods for ordinary stochastic differential equation models. The proposed method also makes it easy to incorporate various kinds of biological knowledge to refine the estimation accuracy. The results using simulated data and real time-series expression data of human primary small airway epithelial cells demonstrate that the proposed approach outperforms competing approaches and can identify significant genes influenced by gefitinib. PMID:22081598
A preliminary harmonic spline model from Magsat data
NASA Technical Reports Server (NTRS)
Shure, L.; Parker, R. L.; Langel, R. A.
1985-01-01
A preliminary main field model for 1980, derived from a carefully selected subset of Magsat vector measurements using the method of harmonic splines, is presented. This model, PHS (80) (preliminary harmonic splines), is the smoothest model (in the sense that the rms radial field at the core surface is minimized) consistent with the measurements (with an rms misfit of 10 nT to account for crustal and external fields as well as noise in the measurement procedure). Therefore PHS (80) is more suitable for studies of the core than models derived with the traditional least squares approach (e.g., GSFC (9/80)). A comparison is conducted of the characteristics of the harmonic spline spectrum, the topology of the core field, and especially the null-flux curves (loci where the radial field is zero) and the flux through patches bounded by such curves. PHS (80) is less complex than GSFC (9/80) and is therefore more representative of that part of the core field that the data constrain.
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
NASA Technical Reports Server (NTRS)
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
A weighted extended B-spline solver for bending and buckling of stiffened plates
Verschaeve, Joris C G
2015-01-01
The weighted extended B-spline method [Hoellig (2003)] is applied to bending and buckling problems of plates with different shapes and stiffener arrangements. The discrete equations are obtained from the energy contributions of the different components constituting the system by means of the Rayleigh-Ritz approach. The pre-buckling or plane stress is computed by means of Airy's stress function. A boundary data extension algorithm for the weighted extended B-spline method is derived in order to handle inhomogeneous Dirichlet boundary conditions. A series of benchmark tests is performed, covering various aspects that influence the accuracy of the method.
Nonparametric Small-area Estimation Using Penalized Splines
Opsomer, Jean (Iowa State)
Outline: 1. Introduction; 2. Nonparametric regression using penalized splines; 3. Nonparametric small-area estimator. The regression model is y_i = m(x_i) + e_i, where the function m(·) is unknown.
Deconvolution using thin-plate splines
Toussaint, Udo v.; Gori, Silvio
2007-11-13
The ubiquitous problem of estimating 2-dimensional profile information from a set of line-integrated measurements is tackled with Bayesian probability theory by exploiting prior information about local smoothness. For this purpose thin-plate splines (the 2-D minimal-curvature analogue of cubic splines in 1-D) are employed. The optimal number of support points required for inversion of 2-D tomographic problems is determined using model comparison. Properties of this approach are discussed and the question of suitable priors is addressed. Finally, we illustrate the properties of this approach with 2-D inversion results using data from line-integrated measurements from fusion experiments.
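A minimal illustration of the thin-plate-spline building block used above, fitting scattered 2-D data with SciPy's `RBFInterpolator`. The sample profile and point set are hypothetical, and the Bayesian model-comparison step for choosing the number of support points is not shown.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Scattered 2-D sample points of a smooth test profile (illustrative only)
pts = rng.uniform(-1, 1, size=(50, 2))
vals = np.exp(-(pts**2).sum(axis=1))

# Thin-plate spline: the 2-D minimal-curvature analogue of the cubic spline
tps = RBFInterpolator(pts, vals, kernel='thin_plate_spline')

# With the default zero smoothing the spline interpolates the data exactly
print(np.max(np.abs(tps(pts) - vals)))  # ~0
```

Setting the `smoothing` parameter to a positive value would trade exact interpolation for smoothness, which is closer in spirit to the regularized inversion described in the abstract.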
Optimization of Low-Thrust Spiral Trajectories by Collocation
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Dankanich, John W.
2012-01-01
As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... space assignment policies and practices must meet the following principles: (A) An incumbent LEC's space assignment policies and practices must not materially increase a requesting carrier's collocation costs. (B) An incumbent LEC's space assignment policies and practices must not materially delay a...
NASA Technical Reports Server (NTRS)
Rogers, David
1991-01-01
G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm and Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.
Applications of the spline filter for areal filtration
NASA Astrophysics Data System (ADS)
Tong, Mingsi; Zhang, Hao; Ott, Daniel; Chu, Wei; Song, John
2015-12-01
This paper proposes a general-purpose isotropic areal spline filter. The new areal spline filter achieves isotropy by approximating the transmission characteristic of the Gaussian filter. It also eliminates the effect of void areas using a weighting factor, and resolves end-effect issues by applying new boundary conditions, which replace the first-order finite difference in the traditional spline formulation. These improvements make the spline filter widely applicable to 3D surfaces and extend its applications in areal filtration.
The Effect of Grouping and Presenting Collocations on Retention
ERIC Educational Resources Information Center
Akpinar, Kadriye Dilek; Bardakçi, Mehmet
2015-01-01
The aim of this study is two-fold. Firstly, it attempts to determine the role of presenting collocations by organizing them based on (i) the keyword, (ii) topic related and (iii) grammatical aspect on retention of collocations. Secondly, it investigates the relationship between participants' general English proficiency and the presentation types…
Collocation Translation Acquisition Using Monolingual Corpora
Lü, Yajuan; Zhou, Ming (Microsoft Research Asia, 5F Sigma Center, No. 49 Zhichun Road, Haidian District, Beijing, China, 100080; mingzhou@microsoft.com)
Abstract: Collocation translation is important for machine translation…
Simultaneous Operation of Multiple Collocated Radios and the Scanning Problem
Barbeau, Michel; Kranakis, Evangelos
Simultaneous operation of multiple collocated radios refers to the capability of a wireless device to operate at the same time in several modes. Each wireless communication unit is called a radio interface. The software-defined radio…
APTS: Nonparametric Smoothing. Lab 1: Splines
Diggle, Peter J.
Great Barrier Reef: this is a univariate version of the data (called gbr) used in the lectures. Example calls: …cartoon(radiocarbon), pspline.cartoon(gbr). The function fits a P-spline model to the data and shows…
Periodic Smoothing Splines
Kano, Hiroyuki (Department of Information Sciences, Tokyo Denki University, Hatoyama, Hiki-gun); Egerstedt, Magnus; Takahashi; Martin, Clyde
… and requires additional machinery. The need for such splines arises whenever there is a need to construct … of papers [6,8,12,14,13]. We use a specific technique developed in [12]; that is, we use the dynamics …
B-spline design of digital FIR filter using evolutionary computation techniques
NASA Astrophysics Data System (ADS)
Swain, Manorama; Panda, Rutuparna
2011-10-01
Digital filters are increasingly becoming a true replacement for analog filter designs. This paper examines a design method for FIR filters using global search optimization techniques, namely evolutionary computation via the genetic algorithm and bacterial foraging, in which the filter design is treated as an optimization problem. An effort is made to design maximally flat filters using a generalized B-spline window. The key to our success is the fact that the bandwidth of the filter response can be modified by changing tuning parameters incorporated within the B-spline function. A direct approach has been deployed to design B-spline-window-based FIR digital filters. Four parameters (order, width, length and tuning parameter) have been optimized using GA and EBFS. It is observed that the desired response can be obtained with lower-order FIR filters with optimal width and tuning parameters.
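The B-spline window itself is easy to sketch: a cardinal B-spline arises from repeated convolution of rectangular windows, and tapering an ideal sinc response with it gives a smooth FIR low-pass design. This is a hand-written illustration, not the GA/EBFS-optimized design from the paper; the resampling step and the parameter values are assumptions.

```python
import numpy as np

def bspline_window(length, order):
    """Discrete B-spline window: (order)-fold convolution of a boxcar,
    resampled back to the requested length (simple illustrative construction)."""
    w = np.ones(length)
    for _ in range(order - 1):
        w = np.convolve(w, np.ones(length))
    w /= w.max()
    # resample the convolved window back to `length` points
    x = np.linspace(0, len(w) - 1, length)
    return np.interp(x, np.arange(len(w)), w)

def fir_lowpass(numtaps, cutoff, order=3):
    """Windowed-sinc FIR low-pass; cutoff given as a fraction of Nyquist."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = cutoff * np.sinc(cutoff * n)      # ideal low-pass impulse response
    h *= bspline_window(numtaps, order)   # taper with the B-spline window
    return h / h.sum()                    # normalize to unit DC gain

h = fir_lowpass(31, 0.25, order=3)
print(h.sum())  # sums to 1 (unit gain at DC)
```

Increasing `order` smooths the window (higher-order B-spline), widening the main lobe but lowering the side lobes, which is the tuning trade-off the abstract's optimization explores.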
Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year
NASA Astrophysics Data System (ADS)
Kamaruddin, Halim Shukri; Ismail, Noriszura
2014-06-01
Nonparametric regression uses the data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are applied to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, in which the accuracy of models and forecasts is the main concern of the industry. The original idea of P-splines is extended here to two-dimensional mortality data. The data are indexed by age of death and year of death, and the large data set was supplied by the Department of Statistics Malaysia. The extension of this idea constructs the best-fitting surface and provides sensible predictions of the underlying mortality rate in Malaysia.
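The Eilers-Marx P-spline itself, a B-spline basis plus a difference penalty on the coefficients, can be sketched in one dimension. The data, basis size, and penalty weight below are synthetic assumptions; the paper's two-dimensional extension uses a tensor product of such bases.

```python
import numpy as np
from scipy.interpolate import BSpline

# Noisy one-dimensional "mortality-like" data (synthetic, for illustration)
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
y_true = np.sin(2 * np.pi * x)
y = y_true + rng.normal(0, 0.1, x.size)

# Cubic B-spline basis on equally spaced knots
k, n_basis = 3, 20
inner = np.linspace(0, 1, n_basis - k + 1)
t = np.r_[[0.0] * k, inner, [1.0] * k]      # clamped knot vector
B = BSpline(t, np.eye(n_basis), k)(x)       # design matrix, shape (100, 20)

# P-spline: least squares plus a second-order difference penalty (Eilers & Marx)
D = np.diff(np.eye(n_basis), 2, axis=0)     # second-difference operator
lam = 1.0                                   # penalty weight (assumed)
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
print(np.sqrt(np.mean((fit - y_true) ** 2)))  # typically well below the noise sd
```

Increasing `lam` pulls neighbouring coefficients together and smooths the fit; the two-dimensional (age x year) case penalizes differences along both index directions.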
Pierson, Jeffery L; Small, Scott R; Rodriguez, Jose A; Kang, Michael N; Glassman, Andrew H
2015-07-01
Design parameters affecting initial mechanical stability of tapered, splined modular titanium stems (TSMTSs) are not well understood. Furthermore, there is considerable variability in contemporary designs. We asked if spline geometry and stem taper angle could be optimized in TSMTS to improve mechanical stability to resist axial subsidence and increase torsional stability. Initial stability was quantified with stems of varied taper angle and spline geometry implanted in a foam model replicating 2 cm of diaphyseal engagement. Increased taper angle and a broad spline geometry exhibited significantly greater axial stability (+21% to +269%) than other design combinations. Neither taper angle nor spline geometry significantly altered initial torsional stability. PMID:25754255
Yankov, A.; Downar, T.
2013-07-01
Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective, they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity, using Newton-Cotes collocation points along with linear hat basis functions. With this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant converges to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
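The refine-until-converged surrogate idea can be sketched in one stochastic dimension with linear hat functions on equidistant (Newton-Cotes) points. This toy omits the Smolyak sparse-grid machinery and sample caching; the model function and tolerance are invented.

```python
import numpy as np

def build_surrogate(code, tol=1e-3, max_level=12):
    """Hierarchical piecewise-linear surrogate of an expensive model `code`
    on [0, 1]: refine the equidistant grid until the largest hierarchical
    surplus (midpoint correction) drops below `tol`."""
    pts = np.array([0.0, 0.5, 1.0])
    vals = code(pts)
    for _ in range(max_level):
        mids = 0.5 * (pts[:-1] + pts[1:])
        # hierarchical surplus: model value minus the current interpolant
        surplus = code(mids) - np.interp(mids, pts, vals)
        if np.max(np.abs(surplus)) < tol:
            break
        pts = np.sort(np.r_[pts, mids])
        vals = code(pts)  # re-evaluate (a real code would cache these)
    return lambda x: np.interp(x, pts, vals), pts.size

# Treat a smooth response as the "computer code" output
model = lambda x: np.exp(x) * np.sin(3 * x)
surrogate, n_samples = build_surrogate(model, tol=1e-3)

xs = np.linspace(0, 1, 1000)
print(n_samples, np.max(np.abs(surrogate(xs) - model(xs))))
```

Once converged, the cheap surrogate can be sampled densely to extract statistics (mean, variance, quantiles) of the code output without further expensive model runs.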
Semisupervised feature selection via spline regression for video semantic recognition.
Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang
2015-02-01
To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. Provided the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms to better identify the relevant video features, which are discriminative to target classes, by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework of video semantic recognition by semisupervised feature selection via spline regression (S(2)FS(2)R). Two scatter matrices are combined to capture both the discriminative information and the local geometry structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos and a spline scatter output from a local spline regression encoding data distribution. An l2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To efficiently solve S(2)FS(2)R, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, namely video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S(2)FS(2)R achieves better performance compared with state-of-the-art methods. PMID:25608288
Construction of Wavelets Basis for the Fibonacci Chain via the Spline Functions of Various Orders
NASA Astrophysics Data System (ADS)
Andrle, Miroslav
2002-06-01
We present wavelets of class C^n(R) living on an aperiodic discretization of R known as the Fibonacci chain, constructed via spline functions. The construction method is based on an algorithm published by G. Bernau. The corresponding multiresolution analysis is defined, and numerical examples of linear scaling functions and wavelets are shown.
Representing flexible endoscope shapes with hermite splines
NASA Astrophysics Data System (ADS)
Chen, Elvis C. S.; Fowler, Sharyle A.; Hookey, Lawrence C.; Ellis, Randy E.
2010-02-01
Navigation of a flexible endoscope is a challenging surgical task: the shape of the end effector of the endoscope, interacting with surrounding tissues, determines the surgical path along which the endoscope is pushed. We present a navigational system that visualizes the shape of the flexible endoscope tube to assist gastrointestinal surgeons in performing Natural Orifice Translumenal Endoscopic Surgery (NOTES). The system used an electromagnetic positional tracker, a catheter embedded with multiple electromagnetic sensors, and a graphical user interface for visualization. Hermite splines were used to interpolate the position and direction outputs of the endoscope sensors. We conducted NOTES experiments on live swine involving 6 gastrointestinal and 6 general surgeons. Participants who used the device first were 14.2% faster than when not using the device. Participants who used the device second were 33.6% faster than in their first session. The trend suggests that spline-based visualization is a promising adjunct during NOTES procedures.
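Interpolating sensor positions with matching direction vectors is exactly what a cubic Hermite spline does; SciPy's `CubicHermiteSpline` gives a reasonable sketch. The sensor readings below are invented, and the real system works in 3-D rather than the 2-D used here.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hypothetical sensor readings along the endoscope: arc-length parameter,
# 2-D positions, and direction vectors reported by each sensor
s = np.array([0.0, 1.0, 2.0, 3.0])
pos = np.array([[0.0, 0.0], [0.9, 0.4], [1.6, 1.1], [2.0, 2.0]])
dirs = np.array([[1.0, 0.0], [0.9, 0.5], [0.6, 0.8], [0.2, 1.0]])

# One Hermite spline per coordinate: positions give the values,
# direction vectors give the slopes at each sensor
curve = CubicHermiteSpline(s, pos, dirs)

# Dense evaluation of the reconstructed tube shape for display
fine = np.linspace(0, 3, 50)
shape = curve(fine)  # (50, 2) points along the endoscope
print(shape[0], shape[-1])  # matches the first and last sensor positions
```

By construction the curve passes through every sensor position with the sensor's reported direction, so the rendered tube stays consistent with the tracker data between sensors.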
Curvilinear bicubic spline fit interpolation scheme
NASA Technical Reports Server (NTRS)
Chi, C.
1973-01-01
Modification of the rectangular bicubic spline fit interpolation scheme so as to make it suitable for use with a polar grid pattern. In the proposed modified scheme the interpolation function is expressed in terms of the radial length and the arc length, and the shape of the patch, which is a wedge or a truncated wedge, is taken into account implicitly. Examples are presented in which the proposed interpolation scheme was used to reproduce the equations of a hemisphere.
NASA Astrophysics Data System (ADS)
Korshunov, Andrei; Shershnev, Vladimir; Korshunova, Ksenia
2015-08-01
Methods of designing blade grids for power machines, such as equal-thickness shapes built on a mid-line arc, or methods based on a target stress distribution, were developed long ago; they are well described and still in use. Science and technology have moved far since then, and the laboriousness of experimental research, which involved unique equipment, requires the development of new robust and flexible design methods that determine the optimal geometry of the flow passage. This investigation provides a simple and universal method of designing blades which, in comparison to currently used methods, requires significantly less input data but still provides accurate results. The described method is purely analytical for both the concave and convex sides of the blade, and therefore allows the curve behaviour along the flow path to be described at any point. Compared with the blade grid designs currently used in industry, the geometric parameters of designs constructed with this method show a maximum deviation below 0.4%.
… the method to a very challenging text pair: a stock market bulletin in Japanese and its abstract. … in the dictionaries of economic terms. 1 Introduction In the field of machine translation, there is a growing … ; the automatic extraction of bilingual collocations is needed. A number of studies have …
Single-grid spectral collocation for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte
1988-01-01
The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.
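A single Gauss-Lobatto grid of Chebyshev type is usually paired with a differentiation matrix; the classic construction (after Trefethen) is a reasonable sketch of the collocation machinery, though the paper's Stokes/Navier-Stokes solver is of course far more involved.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto nodes and differentiation matrix
    (after Trefethen, 'Spectral Methods in MATLAB')."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Gauss-Lobatto nodes
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative row sums on diagonal
    return D, x

# Spectral accuracy check: differentiate exp(x) on the Gauss-Lobatto grid
D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(err)  # near machine precision already for N = 16
```

For smooth functions the error decays faster than any power of N, which is the spectral accuracy the collocation analysis in the paper relies on.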
Usability Study of Two Collocated Prototype System Displays
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.
2007-01-01
Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the two new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.
NASA Astrophysics Data System (ADS)
González-Vidal, J. J.; Pérez-Pueyo, R.; Soneira, M. J.; Ruiz-Moreno, S.
2014-06-01
A new method has been developed for denoising a Raman spectrum using mathematical morphology combined with P-splines fitting, which requires no user input. It was applied to spectra measured on art works, resolving successfully the Raman information.
Subcell resolution in simplex stochastic collocation for spatial discontinuities
Witteveen, Jeroen A.S.; Iaccarino, Gianluca
2013-10-15
Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC–SR method is based on resolving the discontinuity location in the probability space explicitly as function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers’ equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC–SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.
Nonlinear registration using B-spline feature approximation and image similarity
NASA Astrophysics Data System (ADS)
Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il
2001-07-01
Warping methods are broadly classified into image-matching methods, based on similar pixel-intensity distributions, and feature-matching methods, using distinct anatomical features. Feature-based methods may fail to match local variations between two images, although they match features well globally. Similarity-based methods, in turn, can produce false matches corresponding to local minima of the underlying energy functions. To avoid the local-minima problem, we propose a non-linear deformable registration method utilizing the global information of feature matching and the local information of image matching. To define the features, the gray matter and white matter of brain tissue are segmented by the Fuzzy C-Means (FCM) algorithm. A B-spline approximation technique is used for feature matching. We use a multi-resolution B-spline approximation method that modifies the multilevel B-spline interpolation method: it locally changes the resolution of the control lattice in proportion to the distance between features of the two images. Mutual information is used as the similarity measure, and the deformation fields are locally refined until the similarity is maximized. In tests on two 3D T1-weighted MRIs, this method maintained the accuracy of conventional image-matching methods without the local-minimum problem.
The Benard problem: A comparison of finite difference and spectral collocation eigen value solutions
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan
1995-01-01
The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher order finite difference schemes.
A mixed basis density functional approach for low dimensional systems with B-splines
NASA Astrophysics Data System (ADS)
Ren, Chung-Yuan; Hsue, Chen-Shiung; Chang, Yia-Chung
2015-03-01
A mixed basis approach based on density functional theory is employed for low dimensional systems. The basis functions are taken to be plane waves for the periodic direction multiplied by B-spline polynomials in the non-periodic direction. B-splines have the following advantages: (1) the associated matrix elements are sparse, (2) B-splines possess a superior treatment of derivatives, (3) B-splines are not associated with atomic positions when the geometric structure is optimized, making geometry optimization easy to implement. With this mixed basis set we can directly calculate the total energy of the system instead of using the conventional supercell model with a slab sandwiched between vacuum regions. A generalized Lanczos-Krylov iterative method is implemented for the diagonalization of the Hamiltonian matrix. To demonstrate the present approach, we apply it to study the C(001)-(2×1) surface with the norm-conserving pseudopotential, the n-type δ-doped graphene, and graphene nanoribbon with Vanderbilt's ultra-soft pseudopotentials. All the resulting electronic structures were found to be in good agreement with those obtained by the VASP code, but with a reduced number of basis functions.
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
NASA Astrophysics Data System (ADS)
Lakestani, Mehrdad; Dehghan, Mehdi
2010-05-01
Two numerical techniques are presented for solving the Riccati differential equation. These methods use cubic B-spline scaling functions and Chebyshev cardinal functions. The methods consist of expanding the required approximate solution in terms of cubic B-spline scaling functions or Chebyshev cardinal functions. Using the operational matrix of derivatives, we reduce the problem to a set of algebraic equations. Some numerical examples are included to demonstrate the validity and applicability of the new techniques. The methods are easy to implement and produce very accurate results.
ECS 178 Course Notes THE B-SPLINE PATCH
California at Davis, University of
ECS 178 Course Notes THE B-SPLINE PATCH Kenneth I. Joy Institute for Data Analysis and Visualization Department of Computer Science University of California, Davis. Overview: Given an array of control points, the B-spline patch is a function of two variables defined by that array of control points.
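The evaluation the notes describe can be sketched with the Cox-de Boor recursion and a tensor product over the two parameter directions. The function names and the small clamped-knot example below are illustrative, not taken from the course notes:

```python
import numpy as np

def bspline_basis(i, k, t, u):
    """Cox-de Boor recursion: value of the i-th B-spline of order k
    (degree k-1) over knot vector t, evaluated at parameter u."""
    if k == 1:
        return 1.0 if t[i] <= u < t[i + 1] else 0.0
    left = 0.0
    if t[i + k - 1] > t[i]:
        left = (u - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, u)
    right = 0.0
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - u) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, u)
    return left + right

def patch_point(P, tu, tv, k, u, v):
    """Tensor-product B-spline patch: S(u,v) = sum_ij N_i(u) N_j(v) P[i,j]."""
    nu, nv = P.shape[0], P.shape[1]
    Bu = np.array([bspline_basis(i, k, tu, u) for i in range(nu)])
    Bv = np.array([bspline_basis(j, k, tv, v) for j in range(nv)])
    return np.einsum('i,j,ij...->...', Bu, Bv, P)
```

With a clamped knot vector and a regular grid of control points, evaluating at the middle of the parameter domain lands in the middle of the control grid, which is a quick sanity check on the recursion.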
Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA
Davidian, Marie
Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA Chen-Yen Lin. Parametric models for the conditional quantile can be overly restrictive, and nonparametric quantile regression has recently become a viable alternative. We study the approach of nonparametric quantile regression via regularization in the context of smoothing spline ANOVA.
Instant Trend-Seasonal Decomposition of Time Series with Splines
Krivobokova, Tatyana
Instant Trend-Seasonal Decomposition of Time Series with Splines Luis Francisco Rosales, Tatyana Krivobokova. We propose a method to decompose a time series into trend, seasonal and remainder components. This fully data-driven technique is based on penalized splines and makes an explicit characterization of the varying seasonality.
Regression Spline Smoothing using the Minimum Description Length Principle
Lee, Thomas
Regression Spline Smoothing using the Minimum Description Length Principle Thomas C. M. Lee. Many smoothing methods have been proposed in the literature, including kernel/local polynomial regression and smoothing splines. This approach fits a regression spline to the noisy observations, and one important component of the approach is the choice of knots.
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. This algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate, utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
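The downsample-before-coding / upsample-after-decoding idea can be sketched in one dimension. This generic illustration uses plain column subsampling and SciPy's CubicSpline rather than the paper's CSI-based modified JPEG pipeline; the function names are illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def downsample_rows(img, factor):
    """Keep every `factor`-th column; a crude stand-in for the paper's
    CSI downsampling filter."""
    return img[:, ::factor]

def upsample_rows(small, factor, width):
    """Rebuild full-width rows by cubic-spline interpolation through the
    retained sample positions."""
    xs = np.arange(0, width, factor)   # positions that survived downsampling
    xf = np.arange(width)              # full-resolution positions
    out = np.empty((small.shape[0], width))
    for r in range(small.shape[0]):
        out[r] = CubicSpline(xs, small[r])(xf)
    return out
```

For smooth image content the round trip is nearly lossless, which is why the sampling-based scheme wins at low bit rates, where coding fewer samples more accurately beats coding all samples coarsely.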
Efficient Shape Priors for Spline-Based Snakes.
Delgado-Gonzalo, Ricard; Schmitter, Daniel; Uhlmann, Virginie; Unser, Michael
2015-11-01
Parametric active contours are an attractive approach for image segmentation, thanks to their computational efficiency. They are driven by application-dependent energies that reflect the prior knowledge on the object to be segmented. We propose an energy involving shape priors acting in a regularization-like manner. Thereby, the shape of the snake is orthogonally projected onto the space that spans the affine transformations of a given shape prior. The formulation of the curves is continuous, which provides computational benefits when compared with landmark-based (discrete) methods. We show that this approach improves the robustness and quality of spline-based segmentation algorithms, while its computational overhead is negligible. An interactive and ready-to-use implementation of the proposed algorithm is available and was successfully tested on real data in order to segment Drosophila flies and yeast cells in microscopic images. PMID:26353353
Locally Refined Splines Representation for Geospatial Big Data
NASA Astrophysics Data System (ADS)
Dokken, T.; Skytt, V.; Barrowclough, O.
2015-08-01
When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently, a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of the input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.
Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.
Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco
2015-04-20
Differential phase contrast imaging using grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize the phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which adopts the properties of B-spline and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase image with enhanced and higher soft tissue contrast compared to conventional absorption-based approach, which can potentially provide useful information to mammographic investigations. PMID:25969102
Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio J D; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A
2013-10-01
Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. PMID:24108269
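The fixed-effects part of a linear spline growth model reduces to a design matrix with hinge terms, one per knot, so each hinge coefficient is the change in slope after that knot. The sketch below fits such a curve by ordinary least squares and omits the multilevel (random-effects) machinery the paper relies on; the knot location and variable names are illustrative:

```python
import numpy as np

def linear_spline_design(age, knots):
    """Design matrix for a linear spline in `age`: intercept, age, and one
    hinge term max(age - k, 0) per interior knot."""
    cols = [np.ones_like(age), age]
    cols += [np.maximum(age - k, 0.0) for k in knots]
    return np.column_stack(cols)

# Ordinary least squares fit of a piecewise-linear growth curve:
# height grows at 8 cm/yr, slowing to 4 cm/yr after age 2 (synthetic data).
age = np.linspace(0.0, 10.0, 50)
height = 50.0 + 8.0 * age - 4.0 * np.maximum(age - 2.0, 0.0)
X = linear_spline_design(age, knots=[2.0])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)
```

Here `beta` recovers the intercept, the initial slope, and the slope change at the knot, which is exactly the kind of individual summary measure of growth the modelling approach is designed to extract.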
The solution of singular optimal control problems using direct collocation and nonlinear programming
NASA Astrophysics Data System (ADS)
Downey, James R.; Conway, Bruce A.
1992-08-01
This paper describes work on the determination of optimal rocket trajectories which may include singular arcs. In recent years, direct collocation and nonlinear programming have proven to be a powerful method for solving optimal control problems. Difficulties in the application of this method can occur if the problem is singular. Techniques exist for solving singular problems indirectly using the associated adjoint formulation. Unfortunately, the adjoints are not a part of the direct formulation. It is shown how adjoint information can be obtained from the direct method to allow the solution of singular problems.
Wavelets based on splines: an application
NASA Astrophysics Data System (ADS)
Srinivasan, Pramila; Jamieson, Leah H.
1996-10-01
In this paper, we describe the theory and implementation of a variable rate speech coder using the cubic spline wavelet decomposition. In the discrete time wavelet extrema representation, Cvetkovic et al. implement an iterative projection algorithm to reconstruct the wavelet decomposition from the extrema representation. Based on this model, prior to this work, we have described a technique for speech coding using the extrema representation which suggests that the non-decimated extrema representation allows us to exploit the pitch redundancy in speech. A drawback of the above scheme is the audible perceptual distortion due to the iterative algorithm, which fails to converge on some speech frames. This paper attempts to alleviate the problem by showing that for a particular class of wavelets that implements the ladder of spaces consisting of the splines, the iterative algorithm can be replaced by an interpolation procedure. Conditions under which the interpolation reconstructs the transform exactly are identified. One of the advantages of the extrema representation is the 'denoising' effect. A least squares technique to reconstruct the signal is constructed. The effectiveness of the scheme in reproducing significant details of the speech signal is illustrated using an example.
Continuous Groundwater Monitoring Collocated at USGS Streamgages
NASA Astrophysics Data System (ADS)
Constantz, J. E.; Eddy-Miller, C.; Caldwell, R.; Wheeer, J.; Barlow, J.
2012-12-01
USGS Office of Groundwater funded a 2-year pilot study collocating groundwater wells for monitoring water level and temperature at several existing continuous streamgages in Montana and Wyoming, while the U.S. Army Corps of Engineers funded enhancements to streamgages in Mississippi. To increase spatial relevance within a given watershed, study sites were selected where near-stream groundwater was in connection with an appreciable aquifer, and where the logistics and cost of well installations were considered representative. After each well installation and survey, groundwater level and temperature were easily either radio-transmitted or hardwired to the existing data acquisition system located in the streamgaging shelter. Since USGS field personnel regularly visit streamgages during routine streamflow measurements and streamgage maintenance, the close proximity of the observation wells resulted in minimal extra time to verify electronically transmitted measurements. After the field protocol was tuned, stream and nearby groundwater information were concurrently acquired at streamgages and transmitted to satellite from seven pilot-study sites extending over nearly 2,000 miles (3,200 km) of the central US from October 2009 until October 2011, for evaluating the scientific and engineering add-on value of the enhanced streamgage design. Examination of the four-parameter transmission from the seven pilot-study groundwater gaging stations reveals an internally consistent, dynamic data suite of continuous groundwater elevation and temperature in tandem with ongoing stream stage and temperature data. Qualitatively, the graphical information provides an appreciation of seasonal trends in stream exchanges with shallow groundwater, as well as thermal issues of concern for topics ranging from ice hazards to the suitability of fish refugia, while quantitatively this information provides a means for estimating flux exchanges through the streambed via heat-based inverse-type groundwater modeling.
In June USGS Fact Sheet 2012-3054 was released online, summarizing the results of the pilot project.
Collocation and Pattern Recognition Effects on System Failure Remediation
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Press, Hayes N.
2007-01-01
Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning system displays to find a problem and correct it using the provided checklist. Results indicate that large parameter movement aided detection, pattern recognition aided diagnosis, and the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization, because of the pattern recognition developed with training and use.
Approximate merging of B-spline curves via knot adjustment and constrained optimization
Tai, Chiew-Lan
Approximate merging of B-spline curves via knot adjustment and constrained optimization Chiew-Lan Tai. This work considers the problem of approximate merging of two adjacent B-spline curves into one B-spline curve. The basic idea of the approach is to find the conditions for precise merging of two B-spline curves, and perturb the control points.
NASA Astrophysics Data System (ADS)
Hoffman, G. H.
1982-01-01
A model strong interaction problem for two-dimensional laminar flow is solved numerically. The method makes use of the parabolized vorticity approximation in conjunction with fourth-order accurate polynomial splines to resolve the wall shear layer with a relatively sparse grid. A sheared wall-fitted coordinate mapping is used which produces discontinuous coefficients in the governing differential equations. These discontinuities are treated in an exact way numerically. The spline-finite difference equations, which result from the discretization, are solved as a coupled system by single-line overrelaxation plus a Newton-Raphson iteration to take care of the nonlinearity. Numerical results are presented for six cases consisting of five wall geometries and two Reynolds numbers (10,000 and 100,000). Comparisons are made with potential flow-boundary layer calculations. The method is found to be an efficient way of treating the model strong interaction problem even when thin separated zones are present.
Acoustic ranging of small arms fire using a single sensor node collocated with the target.
Lo, Kam W; Ferguson, Brian G
2015-06-01
A ballistic model-based method, which builds upon previous work by Lo and Ferguson [J. Acoust. Soc. Am. 132, 2997-3017 (2012)], is described for ranging small arms fire using a single acoustic sensor node collocated with the target, without a priori knowledge of the muzzle speed and ballistic constant of the bullet except that they belong to a known two-dimensional parameter space. The method requires measurements of the differential time of arrival and differential angle of arrival of the muzzle blast and ballistic shock wave at the sensor node. Its performance is evaluated using both simulated and real data. PMID:26093450
An Exploratory Study of Collocational Use by ESL Students--A Task Based Approach
ERIC Educational Resources Information Center
Fan, May
2009-01-01
Collocation is an aspect of language generally considered arbitrary by nature and problematic to L2 learners who need collocational competence for effective communication. This study attempts, from the perspective of L2 learners, to have a deeper understanding of collocational use and some of the problems involved, by adopting a task based…
Gu, Renliang; Dogandžić, Aleksandar (E-mail: ald@iastate.edu)
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
NASA Astrophysics Data System (ADS)
Wang, Qiuju; Ren, Yu-Xin
2015-03-01
Spline schemes are proposed to simulate compressible flows on non-uniform structured grid in the framework of finite volume methods. The cubic spline schemes in the present paper can achieve fourth and third order accuracy on the uniform and non-uniform grids respectively. Due to the continuity of cubic spline polynomial function, the inviscid flux can be computed directly from the reconstructed spline polynomial without using the Riemann solvers or other flux splitting techniques. Isotropic and anisotropic artificial viscosity models are introduced to damp high frequency numerical disturbances and to enhance the numerical stability. The first derivatives that are used to calculate the viscous flux are directly obtained from the cubic spline polynomials and preserve second order accuracy on both uniform and non-uniform grids. A hybrid scheme, in which the spline scheme is blended with shock-capturing WENO scheme, is developed to deal with flow discontinuities. Benchmark test cases of inviscid/viscous flows are presented to demonstrate the accuracy, robustness and efficiency of the proposed schemes.
Construction of spline functions in spreadsheets to smooth experimental data
Technology Transfer Automated Retrieval System (TEKTRAN)
A previous manuscript detailed how spreadsheet software can be programmed to smooth experimental data via cubic splines. This addendum corrects a few errors in the previous manuscript and provides additional necessary programming steps. ...
Detail view of redwood spline joinery of woodframe section against ...
Detail view of redwood spline joinery of wood-frame section against adobe addition (measuring tape denotes plumb line from center of top board) - First Theatre in California, Southwest corner of Pacific & Scott Streets, Monterey, Monterey County, CA
Wiley, David F; Bertram, Martin; Hamann, Bernd
2004-01-01
We present a method for the hierarchical approximation of functions in one, two, or three variables based on the finite element method (Ritz approximation). Starting with a set of data sites with associated function values, we first determine a smooth (scattered-data) interpolant. Next, we construct an initial triangulation by triangulating the region bounded by the minimal subset of data sites defining the convex hull of all sites. We insert only original data sites, thus reducing storage requirements. For each triangulation, we solve a minimization problem: computing the best linear spline approximation of the interpolant of all data, based on a functional involving function values and first derivatives. The error of a best linear spline approximation is computed in a Sobolev-like norm, leading to element-specific error values. We use these interval/triangle/tetrahedron-specific values to identify the element to subdivide next. The subdivision of an element with the largest error value requires the recomputation of all spline coefficients due to the global nature of the problem. We improve efficiency by (1) subdividing multiple elements simultaneously and (2) using a sparse-matrix representation and system solver. PMID:15794137
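The element-subdivision loop has a simple one-dimensional analogue: repeatedly bisect the interval whose linear interpolant has the largest error. The sketch below is schematic, using interpolation error at data sites rather than the authors' Sobolev-norm finite element formulation, and all names are illustrative:

```python
import numpy as np

def greedy_linear_spline(x, y, n_refine):
    """Greedy hierarchical refinement: start from the two endpoints and
    repeatedly bisect the interval with the largest squared interpolation
    error (a 1-D analogue of error-driven element subdivision).
    Returns the sorted indices of the active data sites (knots)."""
    knots = [0, len(x) - 1]
    for _ in range(n_refine):
        worst, split = 0.0, None
        for a, b in zip(knots[:-1], knots[1:]):
            if b - a < 2:
                continue
            # linear interpolant over this element, evaluated at the data sites
            seg = np.interp(x[a:b + 1], [x[a], x[b]], [y[a], y[b]])
            err = float(np.sum((y[a:b + 1] - seg) ** 2))
            if err > worst:
                worst, split = err, (a + b) // 2
        if split is None:        # all elements already (near-)exact
            break
        knots.append(split)
        knots.sort()
    return knots
```

On data with a single kink, the first refinement inserts a knot at the kink's interval, after which the piecewise-linear fit is exact; this mirrors how error-driven subdivision concentrates degrees of freedom where the function is least smooth.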
Noise correction on LANDSAT images using a spline-like algorithm
NASA Technical Reports Server (NTRS)
Vijaykumar, N. L. (principal investigator); Dias, L. A. V.
1985-01-01
Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, taking the average (simple or weighted) of the neighboring vertical points. In the case of high interference (more than one adjacent line lost), the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the need to solve a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
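The idea of replacing corrupted scan lines by spline interpolation through neighboring rows can be sketched as follows. This generic version uses SciPy's CubicSpline column-wise rather than the four-point weighted splines of the paper, and the function name is illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def repair_lines(img, bad_rows):
    """Replace corrupted scan lines by evaluating, for each column, a cubic
    spline fitted through the surrounding good rows (a simple stand-in for
    the weighted-spline interpolation described in the text)."""
    rows = np.arange(img.shape[0])
    good = np.setdiff1d(rows, bad_rows)
    out = img.astype(float).copy()
    for c in range(img.shape[1]):
        out[bad_rows, c] = CubicSpline(good, img[good, c])(bad_rows)
    return out
```

Unlike simple vertical averaging, the spline remains usable when several adjacent lines are lost, since it draws on a wider neighborhood of good rows.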
The Collocation of Measurement Points in Large Open Indoor Environment
Wang, Xinbing
Different from outdoor localization, which is addressed by the Global Positioning System (GPS), indoor localization relies on measurement points; collocating them is effective and requires no pre-deployment. However, in large …
NASA Astrophysics Data System (ADS)
Fan, Y. R.; Huang, W. W.; Li, Y. P.; Huang, G. H.; Huang, K.
2015-11-01
In this study, a coupled ensemble filtering and probabilistic collocation (EFPC) approach is proposed for uncertainty quantification of hydrologic models. This approach combines the capabilities of the ensemble Kalman filter (EnKF) and the probabilistic collocation method (PCM) to provide a better treatment of uncertainties in hydrologic models. The EnKF method is employed to approximate the posterior probabilities of model parameters and improve the forecasting accuracy based on the observed measurements; the PCM approach is used to construct a model response surface in terms of the posterior probabilities of model parameters, to reveal uncertainty propagation from model parameters to model outputs. The proposed method is applied to the Xiangxi River, located in the Three Gorges Reservoir area of China. The results indicate that the proposed EFPC approach can effectively quantify the uncertainty of hydrologic models. Even for a simple conceptual hydrological model, the EFPC approach is about 10 times faster than the traditional Monte Carlo method, without an obvious decrease in prediction accuracy. Finally, the results can explicitly reveal the contributions of model parameters to the total variance of model predictions during the simulation period.
ERIC Educational Resources Information Center
Yamashita, Junko; Jiang, Nan
2010-01-01
This study investigated first language (L1) influence on the acquisition of second language (L2) collocations using a framework based on Kroll and Stewart (1994) and Jiang (2000), by comparing the performance on a phrase-acceptability judgment task among native speakers of English, Japanese English as a second language (ESL) users, and Japanese…
Algebraic grid generation using tensor product B-splines. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Saunders, B. V.
1985-01-01
Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This work develops an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping, and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid, is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
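The transfinite bilinear interpolant mentioned in the abstract is the classical Coons patch; a minimal sketch (the boundary curves and names below are illustrative):

```python
import numpy as np

def coons(bottom, top, left, right, u, v):
    """Transfinite bilinear (Coons) interpolant of four boundary
    curves; each curve maps [0, 1] to a point, and matching corners
    are assumed (bottom(0) == left(0), etc.)."""
    lin = ((1 - v) * bottom(u) + v * top(u)
           + (1 - u) * left(v) + u * right(v))
    # corner term, counted twice in lin, is subtracted once
    bil = ((1 - u) * (1 - v) * bottom(0.0) + u * (1 - v) * bottom(1.0)
           + (1 - u) * v * top(0.0) + u * v * top(1.0))
    return lin - bil

# Boundaries of the unit square reproduce the identity map.
bottom = lambda u: np.array([u, 0.0])
top    = lambda u: np.array([u, 1.0])
left   = lambda v: np.array([0.0, v])
right  = lambda v: np.array([1.0, v])
p = coons(bottom, top, left, right, 0.3, 0.7)
```

The interpolant matches all four boundary curves exactly, which is why it serves as the starting surface that the interior B-spline coefficients then approximate.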
NASA Astrophysics Data System (ADS)
Meisami-Azad, Mona; Mohammadpour, Javad; Grigoriadis, Karolos M.
2009-03-01
This paper presents explicit solutions for velocity feedback control of structural systems with collocated sensors and actuators to satisfy closed-loop \mathcal{H}_2 and \mathcal{L}_2-\mathcal{L}_\infty norm performance specifications. First, we consider an open-loop collocated structural system and obtain upper bounds for the \mathcal{H}_2 and \mathcal{L}_2-\mathcal{L}_\infty system norms using a solution for the linear matrix inequality formulation of the norm analysis conditions. Next, we address the problem of static output velocity feedback controller design for such systems. By employing simple algebraic tools, we derive an explicit parametrization of the controller gains which guarantee a prescribed level of \mathcal{H}_2 or \mathcal{L}_2-\mathcal{L}_\infty norm performance of the closed-loop system. Finally, numerical examples are presented to demonstrate the advantages of the proposed techniques. The effectiveness of the proposed bounds and output feedback control design methods becomes apparent especially in very large-scale structural systems, where control design methods based on the solution of Lyapunov or Riccati equations are time-consuming or intractable.
Bivariate B-splines and its Applications in Spatial Data Analysis
Pan, Huijun 1987-
2011-08-09
in the southern part of the island is an airport and the one in the north-eastern end is an oil refinery and a water purification plant. These two holes are not part of the domain and the spatial correlations between residency on opposite sides of each hole... domain. One is the soap smoother and the other is the finite element splines from applied mathematics. Now we start to review these methods. 3.2.1 Finite element smoother The finite element technique is a sophisticated method in applied mathematics...
Statistica Sinica 19 (2009), 325-342: Polynomial Spline Confidence Bands for Regression Curves
2009-01-01
Jing … nonparametric techniques are local polynomial/kernel and polynomial spline smoothing. The kernel-type estimators … established in Huang (2003). Confidence bands for polynomial spline regression, however, are available only
B-spline active rays segmentation of microcalcifications in mammography
Arikidis, Nikolaos S.; Skiadopoulos, Spyros; Karahaliou, Anna; Likaki, Eleni; Panayiotakis, George; Costaridou, Lena
2008-11-15
Accurate segmentation of microcalcifications in mammography is crucial for the quantification of morphologic properties by features incorporated in computer-aided diagnosis schemes. A novel segmentation method is proposed implementing active rays (polar-transformed active contours) on B-spline wavelet representation to identify microcalcification contour point estimates in a coarse-to-fine strategy at two levels of analysis. An iterative region growing method is used to delineate the final microcalcification contour curve, with pixel aggregation constrained by the microcalcification contour point estimates. A radial gradient-based method was also implemented for comparative purposes. The methods were tested on a dataset consisting of 149 mainly pleomorphic microcalcification clusters originating from 130 mammograms of the DDSM database. Segmentation accuracy of both methods was evaluated by three radiologists, based on a five-point rating scale. The radiologists' average accuracy ratings were 3.96±0.77, 3.97±0.80, and 3.83±0.89 for the proposed method, and 2.91±0.86, 2.10±0.94, and 2.56±0.76 for the radial gradient-based method, respectively, while the differences in accuracy ratings between the two segmentation methods were statistically significant (Wilcoxon signed-ranks test, p<0.05). The effect of the two segmentation methods in the classification of benign from malignant microcalcification clusters was also investigated. A least square minimum distance classifier was employed based on cluster features reflecting three morphological properties of individual microcalcifications (area, length, and relative contrast). Classification performance was evaluated by means of the area under the ROC curve (A_z). The area and length morphologic features demonstrated a statistically significant (Mann-Whitney U-test, p<0.05) higher patient-based classification performance when extracted from microcalcifications segmented by the proposed method (0.82±0.06 and 0.86±0.05, respectively), as compared to segmentation by the radial gradient-based method (0.71±0.08 and 0.75±0.08). The proposed method demonstrates improved segmentation accuracy, fulfilling human visual criteria, and enhances the ability of morphologic features to characterize microcalcification clusters.
NASA Astrophysics Data System (ADS)
Nicolas, J.; Exertier, P.; Laurain, O.; Bonnefond, P.; Mangin, J. F.; Barlier, F.
Collocation experiments are invaluable for checking geodetic instrument quality and stability. At the Grasse observatory, France, we had the opportunity to have three independent laser ranging stations very close to each other (about 20 m): a Satellite Laser Ranging (SLR) station, a Lunar Laser Ranging (LLR) station, and the French Transportable Laser Ranging Station (FTLRS). We used this unique configuration to perform a triple laser ranging collocation experiment between these three instruments from September to November 2001. The prime objective of this experiment was to qualify, at the millimeter level, the new performance of the FTLRS after its phase of improvements. This validation was of great importance prior to the FTLRS departure to Corsica, where it was to be used to calibrate the altimeter and to validate the orbits of the oceanographic satellite Jason-1 during its initial validation phase (CAL/VAL) in 2002. A secondary objective was to determine, from the common normal point analysis, the systematic biases of the SLR, LLR, and FTLRS systems. Finally, the analysis of the raw data acquired during this collocation also allowed us to study the difference in LAGEOS satellite response between the stations. Our analysis showed millimeter consistency between the three OCA laser stations, a result which demonstrates the strength of the SLR technique. The FTLRS range bias value was confirmed by the 2002 Corsica CAL/VAL campaign analysis. The main part of the biases could be explained by instrumental sources, but raw data and geometrical analysis underscored a difference in LAGEOS satellite signature at the level of 3 mm between the SLR and LLR stations, a difference linked to the photon detection level of each system. Herein, we summarize the analysis methods, and then we present and discuss the main results obtained from this collocation experiment.
How to fly an aircraft with control theory and splines
NASA Technical Reports Server (NTRS)
Karlsson, Anders
1994-01-01
When trying to fly an aircraft as smoothly as possible, it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory, in a system that tries to model an aircraft. Computer calculations in Matlab show that it is impossible to obtain sufficiently smooth control signals in this way. This is because the splines try to approximate not only the test function but also its derivatives. Perfect tracking is achieved, but at the price of very peaky control signals and accelerations.
Reconstruction of egg shape using B-spline
NASA Astrophysics Data System (ADS)
Roslan, Nurshazneem; Yahya, Zainor Ridzuan
2015-05-01
In this paper, the reconstruction of an egg's outline using a piecewise parametric cubic B-spline is proposed. A reverse engineering process has been used to represent the generic shape of an egg, in order to acquire its geometric information in the form of a two-dimensional set of points. For the curve reconstruction, the aim is to optimize the control points of all curves such that the distance from the data points to the curve is minimized. The B-spline curve functions were then used for the curve fitting between the actual and the reconstructed profiles.
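A minimal sketch of such a parametric B-spline fit, using SciPy's FITPACK wrappers on synthetic oval data (the data and the smoothing factor are assumptions, not the paper's measurements):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic egg-like outline: an oval with one tapered end.
t = np.linspace(0.0, 2.0 * np.pi, 200)
r = 1.0 + 0.15 * np.cos(t)                  # mild asymmetry
x, y = r * np.cos(t), 0.8 * r * np.sin(t)

# Fit a closed parametric cubic B-spline; the smoothing factor s
# trades closeness to the data points against curve complexity,
# which is the control-point optimization in least-squares spirit.
tck, u = splprep([x, y], s=1e-3, per=True)

# Evaluate the fitted curve at the data parameters and measure misfit.
xf, yf = splev(u, tck)
rmse = np.sqrt(np.mean((xf - x) ** 2 + (yf - y) ** 2))
```

Here `rmse` plays the role of the point-to-curve distance being minimized; lowering `s` tightens the fit at the cost of more knots.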
Stochastic analysis of unsaturated flow with probabilistic collocation method
Lu, Zhiming
published 18 August 2009. [1] In this study, we present an efficient approach, called the probabilistic collocation method … a high degree of spatial variation over various scales. The properties that control flow and transport … to treat them as spatially varying random fields, characterized by the statistical moments
Efficient implementation of Radau collocation methods
Brugnano, Luigi
The simplified Newton iteration reads (I - hA ⊗ J) Δ^(k) = -G(y^(k)), y^(k+1) = y^(k) + Δ^(k), (4) where I is the identity matrix of dimension sm, J is the Jacobian of the vector field, and a transformation of the Butcher matrix A decouples the sm-dimensional real system into smaller real and complex subsystems. This is the standard procedure used in the codes RADAU5 [29, 36] and RADAU [30, 36], the former a variable-step fixed-order code, and the latter a variable-order variant, both based upon Radau collocation.
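A toy version of the simplified Newton iteration for an implicit Runge-Kutta step, here with the 2-stage Radau IIA tableau (an illustrative sketch; the production codes additionally transform A to decouple the linear algebra):

```python
import numpy as np

# Butcher tableau of the 2-stage Radau IIA method (order 3).
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])
s = A.shape[0]

def radau_step(f, jac, y, h, iters=8):
    """One step y -> y(t+h): solve the stage equations by the
    simplified Newton iteration (I - h A (x) J) dZ = -G(Z), with
    the Jacobian J frozen at the step's starting point."""
    m = y.size
    M = np.eye(s * m) - h * np.kron(A, jac(y))
    Z = np.zeros(s * m)                       # stage increments
    for _ in range(iters):
        F = np.array([f(y + Zi) for Zi in Z.reshape(s, m)])
        G = Z - h * (A @ F).reshape(-1)       # residual of stage equations
        Z -= np.linalg.solve(M, G)
    F = np.array([f(y + Zi) for Zi in Z.reshape(s, m)])
    return y + h * (b @ F)
```

For the linear test problem y' = -y the iteration converges in one step and the result matches exp(-h) to the method's third order.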
Regional VTEC modeling with multivariate adaptive regression splines
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Karslioglu, Mahmut Onur; Nohutcu, Metin
2010-07-01
Different algorithms have been proposed for the modeling of the ionosphere. The most frequently used method is based on spherical harmonic functions, which achieve successful results for global modeling but not for local and regional applications, due to the bounded spherical harmonic representation. Irregular data distribution and data gaps also cause difficulties in the global modeling of the ionosphere. In this paper we propose an efficient algorithm with Multivariate Adaptive Regression Splines (MARS) to represent a new non-parametric approach for regional spatio-temporal mapping of the ionospheric electron density using ground-based GPS observations. MARS can handle very large data sets of observations and is an adaptive and flexible method, which can be applied to both linear and non-linear problems. The basis functions are directly obtained from the observations and have a space partitioning property, resulting in an adaptive model. This property helps to avoid the numerical problems and computational inefficiency caused by the number of coefficients, which has to be increased to detect the local variations of the ionosphere. Since the fitting procedure is additive, it does not require gridding and is able to process large amounts of data with large gaps. Additionally, the model complexity can be controlled by the user via limiting the maximal number of coefficients and the order of products of the basis functions. In this study the MARS algorithm is applied to real data sets over Europe for regional ionosphere modeling. The results are compared with the results of the Bernese GPS Software over the same region.
Novel spline-based approach for robust strain estimation in elastography.
Alam, S Kaisar
2010-04-01
Robust strain estimation is important in elastography. However, a high signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) are sometimes attained by sacrificing resolution. We propose a least-squares-based smoothing-spline strain estimator that can produce elastograms with high SNR and CNR without significant loss of resolution. The proposed method improves strain-estimation quality by deemphasizing displacements with lower correlation in computing strains. Results from finite-element simulation and phantom-experiment data demonstrate that the described strain estimator provides good SNR and CNR without degrading resolution. PMID:20687277
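The idea can be sketched with SciPy's weighted smoothing spline, using the local correlation as the weight (the paper's actual weighting rule and smoothing level are not reproduced; all names and data here are illustrative):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_strain(depth, disp, corr, smooth):
    """Strain as the derivative of a weighted smoothing-spline fit
    to displacement; low-correlation samples get low weight and so
    are deemphasized in the strain estimate."""
    w = np.clip(corr, 0.0, 1.0)
    spl = UnivariateSpline(depth, disp, w=w, k=3, s=smooth)
    return spl.derivative()(depth)

# Toy phantom: uniform 2% strain plus noise and one decorrelated sample.
depth = np.linspace(0.0, 30.0, 61)             # mm
rng = np.random.default_rng(0)
disp = 0.02 * depth + 0.005 * rng.standard_normal(depth.size)
disp[30] += 0.3                                # outlier displacement...
corr = np.ones_like(depth)
corr[30] = 0.05                                # ...downweighted by its correlation
strain = spline_strain(depth, disp, corr, smooth=0.01)
```

Because the outlier's weight is small, the spline does not chase it, and the recovered strain stays close to the true 2% without an explicit outlier-rejection pass.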
B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms
Bueno, G.; Ruiz, M.; Sanchez, S.
2006-10-04
Breast cancer continues to be an important health problem among the female population. Early detection is the only way to improve breast cancer prognosis and significantly reduce female mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of B-spline filtering based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.
Bézier- and B-spline techniques
Prautzsch, Hartmut
Hartmut Prautzsch, Wolfgang Boehm, Marco Paluszny. March 26, 2002. To Paul de Faget de Casteljau. Preface: Computer-aided modeling techniques have been … thorough and final proofreading. Wolfenbüttel, Wolfgang Boehm; Caracas, Marco Paluszny; Karlsruhe, Hartmut
Extreme Value Mixture Modelling: P-Splines+GPD and evmix
Scarrott, Carl
7th International Conference of the ERCIM … University of Canterbury, New Zealand, December 6, 2014. Talk outline: intro to extreme value mixture models; some advice; why use extreme value mixture models?; kernel density estimation; GPD transition
Nonparametric Small Area Estimation Using Penalized Spline Regression
J. D. Opsomer, Iowa State … it is possible to express the nonparametric small area estimation problem as a mixed effect model regression … The goal here is to incorporate nonparametric regression models in small area estimation. In principle
Spline-based deconvolution Amir Averbuch , Valery Zheludev
Averbuch, Amir
Amir Averbuch, Valery Zheludev. School of Computer Science, Tel Aviv … Keywords: deconvolution; 2D data; noised data; harmonic analysis; approximate solutions. Abstract: This paper proposes robust algorithms to perform deconvolution and inversion of the heat equation starting from 1D and 2D
NEW OPTIMIZED SPLINE FUNCTIONS FOR INTERPOLATION ON THE HEXAGONAL LATTICE
Condat, Laurent
… Germany; Dimitri Van De Ville, Biomedical Imaging Group (BIG), École Polytechnique Fédérale de Lausanne. Keywords: … lattices, multi-dimensional splines, linear shift-invariant signal spaces, approximation theory. … is assumed to be band-limited, Shannon's theorem guarantees perfect reconstruction of f(x) using
Fast Selection of Spectral Variables with B-Spline Compression
Rossi, Fabrice
Fabrice Rossi, Damien Francois. Such problems may be encountered in the food (Ozaki et al., 1992), pharmaceutical (Blanco et al., 1999) and textile (Blanco et al., 1997) industries, to cite only a few. Viewed from a statistical or data analysis
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Defining window-boundaries for genomic analyses using smoothing spline techniques
Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia
2015-04-17
High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.
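The boundary-finding step can be sketched in a few lines with SciPy: fit a cubic smoothing spline, then mark sign changes of its second derivative (the smoothing level is the user's choice; the data here are a toy signal, not genomic statistics):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def window_boundaries(pos, stat, smooth):
    """Positions where the fitted spline's second derivative changes
    sign, i.e. the inflection points used as window boundaries."""
    spl = UnivariateSpline(pos, stat, k=3, s=smooth)
    d2 = spl.derivative(2)(pos)
    sign = np.sign(d2)
    idx = np.where(sign[:-1] * sign[1:] < 0)[0]  # sign changes
    return pos[idx]

# Toy signal: sin(x) has inflection points at multiples of pi,
# so three boundaries fall inside (0.5, 9.5).
pos = np.linspace(0.5, 9.5, 181)
stat = np.sin(pos)
bounds = window_boundaries(pos, stat, smooth=0.0)
```

With real, noisy statistics one would set `smooth` above zero, which is exactly where the window count adapts to the data rather than to an arbitrary fixed window size.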
Xia, Shuang; Lin, Shili
2014-01-01
The behavior of a gene can be dynamic; thus, if longitudinal data are available, it is important that we study the dynamic effects of genes on a trait over time. The effect of a haplotype can be expressed by time-varying coefficients. In this paper, we use the natural cubic B-spline to express these coefficients that capture the trends of the effects of haplotypes, some of which may be rare, over time; that is, at different ages. More specifically, to capture disease-associated common and rare haplotypes and environmental factors for data from unrelated individuals, we developed a method of time-varying coefficients that uses the logistic Bayesian LASSO methodology and B-spline by setting proper prior distributions. Haplotype and environmental effect coefficients are obtained by using Markov chain Monte Carlo methods. We applied the method to analyze the MAP4 gene on chromosome 3 and have identified several haplotypes that are associated with hypertension with varying effect sizes in the range of 55 to 85 years of age. PMID:25519413
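As a sketch of the B-spline part of that model: a time-varying coefficient β(t) over ages 55-85 expressed through a cubic B-spline basis (the knot placement and coefficient vector below are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                            # cubic
interior = np.array([62.5, 70.0, 77.5])          # assumed interior knots
knots = np.concatenate(([55.0] * (k + 1), interior, [85.0] * (k + 1)))
nbasis = len(knots) - k - 1                      # 7 basis functions

def basis_matrix(ages):
    """Design matrix: one column per B-spline basis function."""
    B = np.zeros((len(ages), nbasis))
    for j in range(nbasis):
        c = np.zeros(nbasis)
        c[j] = 1.0
        B[:, j] = BSpline(knots, c, k)(ages)
    return B

ages = np.linspace(55.0, 85.0, 61)
B = basis_matrix(ages)
gamma = np.array([0.2, 0.5, 0.9, 1.1, 0.8, 0.4, 0.1])  # spline coefficients
beta_t = B @ gamma                               # haplotype effect vs. age
```

In the paper the coefficients carry prior distributions and are sampled by MCMC within the logistic Bayesian LASSO; only the basis expansion of the time-varying effect is shown here.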
Facilitating Non-Collocated Coexistence for WiFi and 4G Wireless Networks
Sahoo, Anirudha
Punit Rathod, Department of … non-collocated coexistence of WiFi and 4G technologies such as WiMAX and LTE due to adjacent channel … 2.4 GHz ISM band used by WiFi. We show, with measurements on our test-bed and from existing results
NASA Technical Reports Server (NTRS)
Lashmet, P. K.; Woodrow, P. T.
1975-01-01
Numerical instabilities often arise in the use of high-order collocation approximations for numerically solving parabolic partial differential equations. These problems may be reduced by formulations involving evaluation of the collocation polynomials rather than combination of the polynomials into a power series. As an illustration, two formulations using shifted Legendre polynomials of order 26 and less are compared.
On the Effect of Gender and Years of Instruction on Iranian EFL Learners' Collocational Competence
ERIC Educational Resources Information Center
Ganji, Mansoor
2012-01-01
This study investigates Iranian EFL learners' knowledge of lexical collocations at three academic levels: freshmen, sophomores, and juniors. The participants were forty-three English majors doing their B.A. in English Translation studies at Chabahar Maritime University. They took a 50-item fill-in-the-blank test of lexical collocations. The…
English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information
ERIC Educational Resources Information Center
Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji
2012-01-01
We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…
Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations
ERIC Educational Resources Information Center
Liu, Dilin
2010-01-01
Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,…
Degraded Text Recognition Using Word Collocation and Visual Inter-Word Constraints
Tao Hong (cs.buffalo.edu). Abstract: Given a noisy text page, a word recognizer can generate a set of candidates for each word image. A relaxation algorithm was proposed previously by the authors that uses word collocation
Herbin, Raphaèle
Collocated finite volume schemes for the simulation of natural convective flows on unstructured … Marseille 13, France, herbin@cmi.univ-mrs.fr. SUMMARY: We describe here a collocated finite volume scheme … [18, 26] and references therein. An advantage of the finite volume schemes is that the unknowns
Corpora and Collocations in Chinese-English Dictionaries for Chinese Users
ERIC Educational Resources Information Center
Xia, Lixin
2015-01-01
The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…
DRAFT! © January 7, 1999, Christopher Manning & Hinrich Schütze. p. 141, Chapter 5: Collocations
Pratt, Vaughan
(a process called terminology extraction). The reader be warned, though, that the words term, technical term, and terminological phrase … As these names suggest, the latter three are commonly used when collocations are extracted from technical domains (in a process
Evaluation of the spline reconstruction technique for PET
Kastis, George A.; Kyriakopoulou, Dimitra; Gaitanis, Anastasios; Fernández, Yolanda; Hutton, Brian F.; Fokas, Athanasios S.
2014-04-15
Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an 18F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an 18F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated.
Results: The results indicate an improvement in FWHM and FWTM in both simulated and real point-source studies. In all simulated phantoms, the SRT exhibits higher contrast and lower bias than FBP at all noise levels, at the cost of increased COV in the reconstructed images. Finally, in the real studies, whereas the contrast of the cold chambers is similar for both algorithms, the SRT reconstructed images of the NEMA phantom exhibit slightly higher COV values than those of FBP. In the Derenzo phantom, SRT resolves the 2-mm separated holes slightly better than FBP. The small-animal and human reconstructions via SRT exhibit slightly higher resolution and contrast than the FBP reconstructions. Conclusions: The SRT provides images of higher resolution, higher contrast, and lower bias than FBP, at the cost of slightly increased noise in the reconstructed images. Furthermore, it eliminates streak artifacts outside the object boundary. Unlike other analytic algorithms, the reconstruction time of SRT is comparable with that of FBP. The source code for SRT will become available in a future release of STIR.
NASA Astrophysics Data System (ADS)
Powmya, A.; Narasimhan, M. C.
2015-06-01
Solutions, based on the principle of collocating the equations of motion at Chebyshev zeroes, are presented for the free vibration analysis of laminated, polar orthotropic, circular and annular plates. The analysis is restricted to axisymmetric free vibration of the plates and employs first-order shear deformation theory for the displacement field, in terms of the midplane displacements u, ψ and w. The eigenvalue problem is defined in terms of three equations of motion in the radial co-ordinate r, the radial variation of the displacements being represented in polynomial series, with appropriate boundary conditions. Numerical results are presented to show the validity and accuracy of the proposed method. Results of parametric studies for laminated polar orthotropic circular and annular plates with different boundary conditions, orthotropic ratios, lamination sequences, numbers of layers and shear deformation are also presented.
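The flavor of Chebyshev collocation for such eigenproblems can be shown on a scalar model, u'' = -λu with u(±1) = 0, using the classical Chebyshev-Gauss-Lobatto differentiation matrix (the paper collocates the plate equations at Chebyshev zeros; this simplified sketch uses the closely related Lobatto points):

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and differentiation matrix
    (Trefethen's classic construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # rows sum to zero: D of a constant is 0
    return D, x

# u'' = -lambda u, u(+-1) = 0; exact eigenvalues are (n pi / 2)**2.
N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]          # strip boundary rows/columns (Dirichlet)
lam = np.sort(-np.linalg.eigvals(D2).real)
```

The lowest eigenvalues converge spectrally fast, which is the property that makes collocation at Chebyshev-type points attractive for plate vibration problems.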
ERIC Educational Resources Information Center
Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin
2008-01-01
Previous work in the literature reveals that EFL learners are deficient in collocations, a hallmark of near-native fluency in learner writing. Among the different types of collocations, the verb-noun (V-N) type was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…
BSR: B-spline atomic R-matrix codes
NASA Astrophysics Data System (ADS)
Zatsarinny, Oleg
2006-02-01
BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian.
New version program summary
Title of program: BSR
Catalogue identifier: ADWY
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWY
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC
Operating systems under which the new version has been tested: UNIX, Windows XP
Programming language used: FORTRAN 95
Memory required to execute with typical data: typically 256-512 Mwords; since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem
No. of bits in a word: 8
No. of processors used: 1
Has the code been vectorized or parallelized?: no
No. of lines in distributed program, including test data, etc.: 69 943
No. of bytes in distributed program, including test data, etc.: 746 450
Peripherals used: scratch disk store; permanent disk store
Distribution format: tar.gz
Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian.
Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput. Phys. Comm. 92 (1995) 290].
Spline Driven: High Accuracy Projectors for Tomographic Reconstruction From Few Projections.
Momey, Fabien; Denis, Loic; Burnier, Catherine; Thiebaut, Eric; Becker, Jean-Marie; Desbat, Laurent
2015-12-01
Tomographic iterative reconstruction methods need a very thorough modeling of the data. This point becomes critical when the number of available projections is limited. At the core of this issue is the projector design, i.e., the numerical model relating the representation of the object of interest to the projections on the detector. Voxel-driven and ray-driven projection models are widely used for their short execution time, in spite of their coarse approximations. The distance-driven model has improved accuracy but makes strong approximations when projecting voxel basis functions. Cubic voxel basis functions are anisotropic; accurately modeling their projection is therefore computationally expensive. Smoother and more isotropic basis functions better represent continuous functions and lead to simpler projectors. These considerations led to the development of spherically symmetric volume elements, called blobs. Their isotropy aside, blobs are often considered too computationally expensive in practice. In this paper, we consider using separable B-splines as basis functions to represent the object, and we propose to approximate the projection of these basis functions by a 2D separable model. As the degree of the B-splines increases, their isotropy improves and projections can be computed regardless of their orientation. The degree and the sampling of the B-splines can be chosen according to a tradeoff between approximation quality and computational complexity. We quantitatively assess the accuracy of our model and compare it with other projectors, such as the distance-driven model and the model proposed by Long et al. The numerical experiments demonstrate that our more accurate projector better preserves the quality of the reconstruction as the number of projections decreases. Our projector with cubic B-splines requires about twice as many operations as a model based on voxel basis functions.
Higher accuracy projectors can be used to improve the resolution of the existing systems, or to reduce the number of projections required to reach a given resolution, potentially reducing the dose absorbed by the patient. PMID:26259217
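The separable B-spline basis functions discussed above can be built directly with SciPy. The sketch below (illustrative only, not the paper's projector) constructs cardinal B-splines of increasing degree; a degree-p basis function lives on the knots 0, 1, …, p+1, and higher degree gives a smoother, more nearly isotropic footprint when used in tensor products.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cardinal B-spline of degree p: knots 0..p+1, support [0, p+1].
def cardinal_bspline(p):
    return BSpline.basis_element(np.arange(p + 2))

# Peak value at the support centre drops as the basis spreads out:
# 1 for the linear hat, 3/4 for quadratic, 2/3 for cubic.
peaks = {p: float(cardinal_bspline(p)(0.5 * (p + 1))) for p in (1, 2, 3)}
```

The cubic case (peak 2/3) is the degree used for the projector comparison above.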
NASA Technical Reports Server (NTRS)
Rummel, R.; Sjoeberg, L.; Rapp, R. H.
1978-01-01
A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.
Brown, C; Adcock, A; Azevedo, S; Liebman, J; Bond, E
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and de Hoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. To automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
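The GCV-selected smoothing idea can be tried in a few lines with SciPy, which implements a cubic smoothing spline that picks its smoothing parameter by minimizing the GCV score when none is given (this uses `scipy.interpolate.make_smoothing_spline`, available in SciPy 1.10+, on a single clean-grid channel; it is not the multi-channel Hutchinson and de Hoog variant described above, which also handles redundant samples).

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

# Synthetic single-channel signal with zero-mean white noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)

# With lam=None the smoothing parameter is chosen by minimizing GCV.
spl = make_smoothing_spline(t, noisy)
rms_resid = np.sqrt(np.mean((spl(t) - clean) ** 2))
```

Because GCV trades data fidelity against roughness automatically, the fitted spline recovers the underlying signal substantially better than the raw noisy samples.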
Webster, Clayton G; Gunzburger, Max D
2013-01-01
We present a scalable, parallel mechanism for stochastic identification/control of problems constrained by partial differential equations with random input data. Several identification objectives are discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm; the distributed parameters/controls can be either deterministic or stochastic. Given an objective, we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high-dimensional spaces, particularly when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions) and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse-grid stochastic collocation FEM approach. This allows for decoupled, moderately high-dimensional, parameterized computations of the stochastic optimality system, where at each collocation point deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates for the fully discrete problems are described and used to compare the efficiency of the method with several other techniques.
Numerical examples illustrate the theoretical results and demonstrate the distinctions between the various stochastic identification objectives.
Monotonicity preserving splines using rational Ball cubic interpolation
NASA Astrophysics Data System (ADS)
Zakaria, Wan Zafira Ezza Wan; Jamal, Ena; Ali, Jamaludin Md.
2015-10-01
In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity [1]. The required curves have to be smooth, shape-preserving interpolants. In this paper a rational cubic spline in Ball representation is developed to generate an interpolant that preserves monotonicity. Three shape parameters are introduced to control the shape of the interpolant; the shape parameters in the description of the rational cubic interpolant are subjected to monotonicity constraints. The necessary and sufficient conditions for the rational cubic interpolant to be monotone are derived, and visually the proposed rational cubic interpolant gives very pleasing results.
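The goal of monotonicity-preserving interpolation can be demonstrated with a standard scheme available in SciPy: PCHIP, a monotone cubic Hermite interpolant. This is not the rational Ball-form spline of the paper, only an illustration of the same shape-preservation property: the interpolant passes through monotone data without introducing overshoots.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone (non-decreasing) data; an unconstrained cubic spline
# would typically overshoot near the steep middle segment.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.9, 1.0, 1.0])

f = PchipInterpolator(x, y)
xs = np.linspace(0.0, 4.0, 401)
vals = f(xs)
monotone = bool(np.all(np.diff(vals) >= -1e-12))  # interpolant stays monotone
```

Shape parameters in the rational formulation above play a role analogous to PCHIP's derivative limiting: they constrain the interpolant so that monotone data produce a monotone curve.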
Non-rigid surface registration using spherical thin-plate splines.
Zou, Guangyu; Hua, Jing; Muzik, Otto
2007-01-01
Accurate registration of cortical structures plays a fundamental role in statistical analysis of brain images across population. This paper presents a novel framework for the non-rigid intersubject brain surface registration, using conformal structure and spherical thin-plate splines. By resorting to the conformal structure, complete characteristics regarding the intrinsic cortical geometry can be retained as a mean curvature function and a conformal factor function defined on a canonical, spherical domain. In this transformed space, spherical thin-plate splines are firstly used to explicitly match a few prominent homologous landmarks, and in the meanwhile, interpolate a global deformation field. A post-optimization procedure is then employed to further refine the alignment of minor cortical features based on the geometric parameters preserved on the domain. Our experiments demonstrate that the proposed framework is highly competitive with others for brain surface registration and population-based statistical analysis. We have applied our method in the identification of cortical abnormalities in PET imaging of patients with neurological disorders and accurate results are obtained. PMID:18051080
Control theory and splines, applied to signature storage
NASA Technical Reports Server (NTRS)
Enqvist, Per
1994-01-01
In this report the problem we study is the interpolation of a set of points in the plane using control theory. We show how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problem. We will see that the important parameters are the two eigenvalues of the control matrix.
Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa
2013-01-01
Purpose: To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods: AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results: All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. The improvement in image quality was also supported by expert comparison. Conclusions: Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
On the functional equivalence of fuzzy inference systems and spline-based networks.
Hunt, K J; Haas, R; Brown, M
1995-06-01
The conditions under which spline-based networks are functionally equivalent to the Takagi-Sugeno-model of fuzzy inference are formally established. We consider a generalized form of basis function network whose basis functions are splines. The result admits a wide range of fuzzy membership functions which are commonly encountered in fuzzy systems design. We use the theoretical background of functional equivalence to develop a hybrid fuzzy-spline net for inverse dynamic modeling of a hydraulically driven robot manipulator. PMID:7496588
Triple Laser Ranging Collocation Experiment At The Grasse Observatory, France
NASA Astrophysics Data System (ADS)
Nicolas, J.; Bonnefond, P.; Laurain, O.; Pierron, F.; Exertier, P.; Mangin, J.-F.; Barlier, F.
At the Grasse observatory, in the southeast of France, we have the opportunity to operate 3 independent laser ranging stations very close to one another (about 20 m apart). These 3 instruments are: a classical Satellite Laser Ranging (SLR) station, a Lunar Laser Ranging (LLR) station, and the French Transportable Laser Ranging Station (FTLRS). This experiment was first performed to qualify the performance of the FTLRS after a long period of major upgrades, before its departure to Corsica for the calibration and validation campaign of the oceanographic satellite JASON-1 (2001) during the first six months of 2002. Furthermore, we used this unique configuration to estimate the instrumental bias of each station. In this talk we present the main results for the SLR, LLR and FTLRS stations obtained with this triple laser ranging collocation experiment performed between September and December 2001.
Full-turn symplectic map from a generator in a Fourier-spline basis
Berg, J.S.; Warnock, R.L.; Ruth, R.D.; Forest, E.
1993-04-01
Given an arbitrary symplectic tracking code, one can construct a full-turn symplectic map that approximates the result of the code to high accuracy. The map is defined implicitly by a mixed-variable generating function. The implicit definition is no great drawback in practice, thanks to an efficient use of Newton's method to solve for the explicit map at each iteration. The generator is represented by a Fourier series in angle variables, with coefficients given as B-spline functions of action variables. It is constructed by using results of single-turn tracking from many initial conditions. The method has been applied to a realistic model of the SSC in three degrees of freedom. Orbits can be mapped symplectically for 10^7 turns on an IBM RS6000 model 320 workstation, in a run of about one day.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
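The regularization idea that converts the ill-posed matching problem into a well-posed one can be sketched on a generic linear inverse problem (this is only the Tikhonov mechanism in miniature, not the reservoir simulator or the bi-cubic spline parameterization): minimize ||Gm - d||² + α||m||², whose normal equations are (GᵀG + αI)m = Gᵀd.

```python
import numpy as np

# Build an ill-conditioned forward operator G with rapidly decaying
# singular values; without regularization, noise in d is amplified
# by up to 1/s_min (about 1e8 here).
rng = np.random.default_rng(1)
n = 40
s = np.logspace(0, -8, n)                        # singular values 1 .. 1e-8
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
G = U @ np.diag(s) @ V.T

m_true = V[:, 0]                                 # a well-resolved model direction
d = G @ m_true + 1e-6 * rng.standard_normal(n)   # noisy data

alpha = 1e-6                                     # regularization parameter
m_reg = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)
err_reg = float(np.linalg.norm(m_reg - m_true) / np.linalg.norm(m_true))
```

Choosing α well is the crux, which is why the abstract above emphasizes determining a quasi-optimal regularization parameter without prior statistical information.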
NASA Technical Reports Server (NTRS)
Eren, K.
1980-01-01
The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS-3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid, as well as with respect to the GEM-9 surface, is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency-domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS-3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.
NASA Astrophysics Data System (ADS)
Erdogan, Eren; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte
2015-04-01
In May 2014 DGFI-TUM (the former DGFI) and the German Space Situational Awareness Centre (GSSAC) started to develop an OPerational Tool for Ionospheric Mapping And Prediction (OPTIMAP); in November 2014 the Institute of Astrophysics at the University of Göttingen (IAG) joined the group as the third partner. This project aims at the computation and prediction of maps of the vertical total electron content (VTEC) and the electron density distribution of the ionosphere on a global scale, from various space-geodetic observation techniques such as GNSS and satellite altimetry as well as from Sun observations. In this contribution we present first results, i.e. a near-real-time processing framework for generating VTEC maps by assimilating GNSS (GPS, GLONASS) based ionospheric data into a two-dimensional global B-spline approach. To be more specific, the spatial variations of VTEC are modelled by trigonometric B-spline functions in longitude and by endpoint-interpolating polynomial B-spline functions in latitude. Since B-spline functions are compactly supported and highly localizing, our approach can handle large data gaps appropriately and thus provides a better approximation of data with heterogeneous density and quality than the commonly used spherical harmonics. The presented method models temporal variations of VTEC inside a Kalman filter. The unknown parameters of the filter state vector are composed of the B-spline coefficients as well as the satellite and receiver DCBs. To approximate the temporal variation of these state vector components within the filter, a dynamical model has to be set up. The current implementation of the filter allows a choice between a random walk process, a Gauss-Markov process and a dynamic process driven by an empirical ionosphere model, e.g. the International Reference Ionosphere (IRI).
To run the model, ionospheric input data are acquired from terrestrial GNSS networks through online archive systems (such as the IGS) with approximately one hour latency. Before feeding the filter with new hourly data, the raw GNSS observations are downloaded and pre-processed via geometry-free linear combinations to provide signal delay information including the ionospheric effects and the differential code biases. Next steps will implement further space-geodetic techniques and will introduce the Sun observations into the procedure. The final goal is to develop a time-dependent model of the electron density based on different geodetic and solar observations.
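The random-walk option for the filter's dynamical model can be sketched in one dimension (synthetic data and a scalar state, not the OPTIMAP implementation with its B-spline coefficient state vector): predict with x_k = x_{k-1} + w_k and update with each noisy observation.

```python
import numpy as np

# Random-walk state with process noise variance q, observed with
# measurement noise variance r.
rng = np.random.default_rng(2)
q, r = 0.01, 0.25
truth = np.cumsum(np.sqrt(q) * rng.standard_normal(300))
obs = truth + np.sqrt(r) * rng.standard_normal(300)

x, P = 0.0, 1.0          # state estimate and its variance
est = []
for y in obs:
    P += q               # predict: random-walk dynamics inflate the variance
    K = P / (P + r)      # Kalman gain
    x += K * (y - x)     # update with the new observation
    P *= (1.0 - K)
    est.append(x)
est = np.asarray(est)

rmse_filter = float(np.sqrt(np.mean((est - truth) ** 2)))
rmse_raw = float(np.sqrt(np.mean((obs - truth) ** 2)))
```

The same predict/update cycle applies componentwise to the B-spline coefficients and DCBs in the state vector described above; the Gauss-Markov option simply replaces the predict step with a damped transition.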
Shape reconstruction and subsequent deformation of soleus muscle models using B-spline solids
Toronto, University of
B-spline solids can be used to model skeletal muscle for the purpose of building a data library of reusable, deformable muscles that can be used to create deformable models of muscle shape…
Rational Spherical Splines for Genus Zero Shape Modeling
He, Ying; Gu, Xianfeng; Qin, Hong
…a single B-spline (and NURBS) patch can represent only simple open surfaces, cylindrical… …to a single rational spherical spline whose maximal error deviation from the original data is less than a user-specified threshold… …at different scales in terms of data size, the number of control points, the user-specified threshold error…
Task-Space Trajectories via Cubic Spline Optimization
Kolter, J. Zico; Ng, Andrew Y.
…plan optimal task-space trajectories and fit cubic splines to the trajectories, while obeying many… …does not lead to a particularly nice final trajectory. Existing trajectory optimization techniques [3] can help…
On Sampling Related Properties of B-Spline Riesz Sequences
Li, Shidong; Tong, Zhengqing; Yan, Dunyan
Abstract: For B-spline Riesz sequence subspaces X = span{φ_k(· − n) : n ∈ Z}, there is an exact sampling… …theory are closely related, e.g., [3], [4], [5], [7], [8]. Studies of sampling in relation to frames…
Reich, Brian J.; Storlie, Curtis B.; Bondell, Howard D.
2009-01-01
With many predictors, choosing an appropriate subset of the covariates is a crucial, and difficult, step in nonparametric regression. We propose a Bayesian nonparametric regression model for curve fitting and variable selection. We use the smoothing spline ANOVA framework to decompose the regression function into interpretable main effect and interaction functions. Stochastic search variable selection via MCMC sampling is used to search for models that fit the data well. We also show that variable selection is highly sensitive to hyperparameter choice and develop a technique to select hyperparameters that control the long-run false positive rate. The method is used to build an emulator for a complex computer model of two-phase fluid flow. PMID:19789732
TWO-LEVEL TIME MARCHING SCHEME USING SPLINES FOR SOLVING THE ADVECTION EQUATION. (R826371C004)
A new numerical algorithm using quintic splines is developed and analyzed: quintic spline Taylor-series expansion (QSTSE). QSTSE is an Eulerian flux-based scheme that uses quintic splines to compute space derivatives and Taylor series expansion to march in time. The new scheme...
An auroral scintillation observation using precise, collocated GPS receivers
NASA Astrophysics Data System (ADS)
Garner, T. W.; Harris, R. B.; York, J. A.; Herbster, C. S.; Minter, C. F., III; Hampton, D. L.
2011-02-01
On 10 January 2009, an unusual ionospheric scintillation event was observed by a Global Positioning System (GPS) receiver station in Fairbanks, Alaska. The receiver station is part of the National Geospatial-Intelligence Agency's (NGA) Monitoring Station Network (MSN). Each MSN station runs two identical geodetic-grade, dual-frequency, full-code tracking GPS receivers that share a common antenna. At the Fairbanks station, a third separate receiver with a separate antenna is located nearby. During the 10 January event, ionospheric conditions caused two of the receivers to lose lock on a single satellite. The third receiver tracked through the scintillation. The region of scintillation was collocated with an auroral arc and a slant total electron content (TEC) increase of 5.71 TECu (1 TECu = 10^16 electrons/m^2). The response of the full-code tracking receivers to the scintillation is intriguing. One of these receivers lost lock, but the other did not. This fact argues that a receiver's internal state dictates its reaction to scintillation. Additionally, the scintillation only affected the L2 signal. While this caused the L1 signal to be lost on the semicodeless receiver, the full-code tracking receiver only lost the L1 signal when it attempted to reacquire the satellite link.
Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.
Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei
2014-02-01
Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-Commerce websites. In applications where the response-time requirement is critical, however, conventional techniques developed for general-purpose NDR are limited, because expensive post-processing such as spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced by the quantization process, in which the visual words are considered individually, ignoring the contextual relations among words. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling these contextual relations. Binary quadratic programming is used to enforce the contextual consistency of the words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by a factor of 1000, and that, under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by a color-moment feature, reducing the time cost by 92.02% while maintaining performance comparable to state-of-the-art methods. PMID:26270906
Radecki, Peter P; Farinholt, Kevin M; Park, Gyuhae; Bement, Matthew T
2008-01-01
The machining process is very important in many engineering applications. In high-precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties, can vary in real time, resulting in unexpected or undesirable effects on the surface finish of the machined product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high-bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses of the machining process. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control versus baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern-day machining.
Lin, Guang; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.
2010-09-01
Due to lack of knowledge or insufficient data, many physical systems are subject to uncertainty, and such uncertainty occurs on a multiplicity of scales. In this study, we conduct an uncertainty analysis of diffusion in random composites with two dominant scales of uncertainty: large-scale uncertainty in the spatial arrangement of materials and small-scale uncertainty in the parameters within each material. We present a general two-scale framework that combines random domain decomposition (RDD) and the probabilistic collocation method (PCM) on sparse grids to quantify the large and small scales of uncertainty, respectively. Using sparse grid points instead of standard grids based on full tensor products for both scales of uncertainty can greatly reduce the overall computational cost, especially for random processes with small correlation length (a large number of random dimensions). For a one-dimensional random contact point problem and a random inclusion problem, analytical solutions and Monte Carlo simulations, respectively, were used to verify the accuracy of the combined RDD-PCM approach. Additionally, we applied the combined RDD-PCM approach to two- and three-dimensional examples to demonstrate that it provides efficient, robust and nonintrusive approximations for the statistics of diffusion in random composites.
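The collocation half of the approach can be illustrated in a single random dimension (a toy model, not the sparse-grid RDD-PCM code): propagate a Gaussian input through a model and estimate the output mean from a handful of Gauss-Hermite collocation points, rather than from many Monte Carlo samples.

```python
import numpy as np

# Toy stochastic response of one Gaussian random input xi ~ N(0, 1).
model = lambda xi: np.exp(0.3 * xi)

# "Probabilists'" Gauss-Hermite rule integrates against exp(-x^2/2);
# normalizing the weights by sqrt(2*pi) gives expectations under N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
weights = weights / np.sqrt(2 * np.pi)

mean_pcm = float(np.sum(weights * model(nodes)))
exact = float(np.exp(0.3 ** 2 / 2))   # E[exp(a*xi)] = exp(a^2 / 2)
```

Seven deterministic model evaluations suffice here because the rule is exact for polynomials up to degree 13; sparse grids extend this economy to many random dimensions, which is the point of the PCM used above.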
NASA Technical Reports Server (NTRS)
Kirkpatrick, J. C.
1976-01-01
A tabulation of selected altitude-correlated values of pressure, density, speed of sound, and coefficient of viscosity for each of six models of the atmosphere is presented in block data format. Interpolation for the desired atmospheric parameters is performed by using cubic spline functions. The recursive relations necessary to compute the cubic spline function coefficients are derived and implemented in subroutine form. Three companion subprograms, which form the preprocessor and processor, are also presented. These subprograms, together with the data element, compose the spline fit atmosphere package. Detailed FLOWGM flow charts and FORTRAN listings of the atmosphere package are presented in the appendix.
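The table-plus-spline lookup described above can be sketched with SciPy's `CubicSpline` standing in for the report's FORTRAN subroutines; the altitude/pressure values below are hypothetical round numbers for illustration, not taken from the six atmosphere models.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical tabulated atmosphere: altitude (km) vs pressure (kPa),
# roughly exponential in shape.
alt_km = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
p_kpa = np.array([101.3, 54.0, 26.5, 12.1, 5.5])

# Fit once; the spline coefficients play the role of the preprocessor
# output, and evaluation plays the role of the processor lookup.
spline = CubicSpline(alt_km, p_kpa)
p_7500m = float(spline(7.5))   # interpolated pressure at 7.5 km
```

As in the report, the spline reproduces the tabulated values exactly at the knots and gives smooth values in between.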
Determination of airplane model structure from flight data using splines and stepwise regression
NASA Technical Reports Server (NTRS)
Klein, V.; Batterson, J. G.
1983-01-01
A procedure for the determination of airplane model structure from flight data is presented. The model is based on a polynomial spline representation of the aerodynamic coefficients, and the procedure is implemented by use of a stepwise regression. First, a form of the aerodynamic force and moment coefficients amenable to the utilization of splines is developed. Next, expressions for the splines in one and two variables are introduced. Then the steps in the determination of an aerodynamic model structure and the estimation of parameters are discussed briefly. The focus is on the application to flight data of the techniques developed.
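The stepwise-regression part of the procedure can be sketched generically (synthetic regressors, not the spline terms or flight data of the paper): at each step, add the candidate term whose inclusion most reduces the residual sum of squares.

```python
import numpy as np

# Six candidate regressors; only columns 1 and 4 actually enter the model.
rng = np.random.default_rng(3)
n = 200
X = rng.standard_normal((n, 6))
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.standard_normal(n)

selected = []
for _ in range(2):                    # forward selection of two terms
    rss = []
    for j in range(X.shape[1]):
        cols = X[:, selected + [j]]
        beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
        rss.append(float(np.sum((y - cols @ beta) ** 2)))
    selected.append(int(np.argmin(rss)))
```

In the paper's setting the candidates are spline basis terms in the aerodynamic variables, and a statistical stopping criterion decides how many terms to retain; the greedy selection loop is the same.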
NASA Astrophysics Data System (ADS)
Curà, Francesca; Mura, Andrea
2013-11-01
Tooth stiffness is a very important parameter in studying both the static and dynamic behaviour of spline couplings and gears. Many works concerning tooth stiffness calculation are available in the literature, but experimental results are very rare, especially for spline couplings. In this work, experimental values of spline coupling tooth stiffness have been obtained by means of a special hexapod measuring device. Experimental results have been compared with the corresponding theoretical and numerical ones. The effect of angular misalignment between hub and shaft has also been investigated in the experimental planning.
ERIC Educational Resources Information Center
Jaen, Maria Moreno
2007-01-01
This paper reports an assessment of the collocational competence of students of English Linguistics at the University of Granada. This was carried out to meet a two-fold purpose. On the one hand, we aimed to establish a solid corpus-driven approach based upon a systematic and reliable framework for the evaluation of collocational competence in…
ERIC Educational Resources Information Center
Krummes, Cedric; Ensslin, Astrid
2015-01-01
Whereas there exists a plethora of research on collocations and formulaic language in English, this article contributes towards a somewhat less developed area: the understanding and teaching of formulaic language in German as a foreign language. It analyses formulaic sequences and collocations in German writing (corpus-driven) and provides modern…
Sukumar, N.
Common methods for crack analysis are the boundary element method [3], the boundary collocation method [4], the body force method [5] and the integral equation method [6]. The dislocation method is also often used for cracks…
Miniaturized Multi-Band Antenna via Element Collocation
Martin, R. P.
2012-06-01
The resonant frequency of a microstrip patch antenna may be reduced through the addition of slots in the radiating element. Expanding upon this concept in favor of a significant reduction in the tuned width of the radiator, nearly 60% of the antenna metallization is removed, as seen in the top view of the antenna’s radiating element (shown in red, below, left). To facilitate an increase in the gain of the antenna, the radiator is suspended over the ground plane (green) by an air substrate at a height of 0.250″ while being mechanically supported by 0.030″ thick Rogers RO4003 laminate in the same profile as the element. Although the entire surface of the antenna (red) provides 2.45 GHz operation with insignificant negative effects on performance after material removal, the smaller square microstrip in the middle must be isolated from the additional aperture in order to afford higher frequency operation. A low insertion loss path centered at 2.45 GHz may simultaneously provide considerable attenuation at additional frequencies through the implementation of a series-parallel, resonant reactive path. However, an inductive reactance alone will not permit lower frequency energy to propagate across the intended discontinuity. To mitigate this, a capacitance is introduced in series with the inductor, generating a resonance at 2.45 GHz with minimum forward transmission loss. Four of these reactive pairs are placed between the coplanar elements as shown. Therefore, the aperture of the lower-frequency outer segment includes the smaller radiator while the higher frequency section is isolated from the additional material. In order to avoid cross-polarization losses due to the orientation of a transmitter or receiver in reference to the antenna, circular polarization is realized by a quadrature coupler for each collocated antenna as seen in the bottom view of the antenna (right).
To generate electromagnetic radiation concentrically rotating about the direction of propagation, ideally one-half of the power must be delivered to the output of each branch with identical amplitude and a phase shift of 90 degrees. To this end, each arm of the coupler is spaced λ/4 apart.
Detecting Pulsatile Hormone Secretions Using Nonlinear Mixed Effects Partial Spline Models
Wang, Yuedong
The identification of episodic releases of hormonal pulse signals from hormone concentration measurements is of critical importance in endocrinology. In this paper, we propose…
A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline
NASA Astrophysics Data System (ADS)
Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong
2015-11-01
The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background, so in this paper we present a baseline correction algorithm to suppress it. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called a baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use the B-spline as the fitting algorithm because its low order and smoothness effectively avoid both under-fitting and over-fitting. In addition, we present an automatic adaptive knot generation method to replace traditional uniform knots. The algorithm achieves the desired performance for most Raman spectra with varying baselines, without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines are introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method.
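The cyclic-approximation idea can be sketched as follows: fit a smooth spline, clip the signal down to the fit wherever peaks rise above it, and refit until the curve settles under the peaks. This is a hedged sketch only; the knot placement here is scipy's default smoothing spline, not the paper's adaptive knot generation, and the spectrum is synthetic.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def bspline_baseline(x, y, n_iter=20, s=1.0):
    """Iteratively fit a smooth cubic B-spline under the peaks of y(x)."""
    work = y.copy()
    for _ in range(n_iter):
        spl = UnivariateSpline(x, work, k=3, s=s)
        base = spl(x)
        work = np.minimum(work, base)   # clip peaks so the fit sinks to the baseline
    return base

# Synthetic spectrum: slowly varying fluorescent background plus two Raman peaks.
x = np.linspace(0.0, 10.0, 400)
baseline_true = 0.5 + 0.05 * x
peaks = np.exp(-((x - 3.0) ** 2) / 0.01) + np.exp(-((x - 7.0) ** 2) / 0.02)
y = baseline_true + peaks
est = bspline_baseline(x, y)
corrected = y - est                     # baseline-corrected spectrum
```

Subtracting the estimated baseline leaves the peaks approximately intact while removing the background trend.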
NASA Astrophysics Data System (ADS)
Ng-Thow-Hing, Victor; Agur, Anne; Ball, Kevin A.; Fiume, Eugene; McKee, Nancy
1998-05-01
We introduce a mathematical primitive called the B-spline solid that can be used to create deformable models of muscle shape. B-spline solids can be used to model skeletal muscle for the purpose of building a data library of reusable, deformable muscles that are reconstructed from actual muscle data. Algorithms are provided for minimizing shape distortions that may be caused when fitting discrete sampled data to a continuous B-spline solid model. Visible Human image data provides a good indication of the perimeter of a muscle, but is not suitable for providing internal muscle fiber bundle arrangements which are important for physical simulation of muscle function. To obtain these fiber bundle orientations, we obtain 3-D muscle fiber bundle coordinates by triangulating optical images taken from three different camera views of serially dissected human soleus specimens. B-spline solids are represented as mathematical three-dimensional vector functions which can parameterize an enclosed volume as well as its boundary surface. They are based on B-spline basis functions, allowing local deformations via adjustable control points and smooth continuity of shape. After the B-spline solid muscle model is fitted with its external surface and internal volume arrangements, we can subsequently deform its shape to allow simulation of animated muscle tissue.
NASA Astrophysics Data System (ADS)
Bauer, K.; Muñoz, G.; Moeck, I.
2012-05-01
Joint interpretation of models from seismic tomography and inversion of magnetotelluric (MT) data is an efficient approach to determine the lithology of the subsurface. Statistical methods are well established but were developed for only two types of models so far (seismic P velocity and electrical resistivity). We apply self-organizing maps (SOMs), which have no limitations in the number of parameters considered in the joint interpretation. Our SOM method includes (1) generation of data vectors from the seismic and MT images, (2) unsupervised learning, (3) definition of classes by algorithmic segmentation of the SOM using image processing techniques and (4) application of learned knowledge to classify all data vectors and assign a lithological interpretation for each data vector. We apply the workflow to collocated P velocity, vertical P-velocity gradient and resistivity models derived along a 40 km profile around the geothermal site Groß Schönebeck in the Northeast German Basin. The resulting lithological model consists of eight classes covering Cenozoic, Mesozoic and Palaeozoic sediments down to 5 km depth. There is a remarkable agreement between the litho-type distribution from the SOM analysis and regional marker horizons interpolated from sparse 2-D industrial reflection seismic data. The most interesting features include (1) characteristic properties of the Jurassic (low P-velocity gradients, low resistivity values) interpreted as the signature of shales, and (2) a pattern within the Upper Permian Zechstein layer with low resistivity and increased P-velocity values within the salt depressions and increased resistivity and decreased P velocities in the salt pillows. The latter is explained in our interpretation by flow of less dense salt matrix components to form the pillows while denser and more brittle evaporites such as anhydrite remain in place during the salt mobilization.
Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring
NASA Astrophysics Data System (ADS)
Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin
2015-08-01
In research on optical synthetic aperture imaging systems, phase congruency is the main problem, and it is necessary to detect the sub-aperture phase. The edge of the sub-aperture system is more complex than in a traditional optical imaging system. Because of the steep slope of large-aperture optical components, interference fringes may be quite dense in interference imaging, and deep phase gradients may cause a loss of phase information. An efficient edge detection method is therefore needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale character, edge regions are detected with high precision at small scales, while noise is reduced as the scale increases, so the transform also has a noise-suppression effect. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained and a cubic B-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, we compute the local modulus maxima in the gradient directions. Because these still contain noise, the adaptive threshold method is used to select the modulus maxima: points greater than the threshold are taken as boundary points. Finally, erosion and dilation are applied to the resulting image to obtain a continuous image boundary.
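The per-region thresholding step can be sketched in simplified form: a plain gradient magnitude stands in for the cubic B-spline wavelet modulus, and the threshold is set block by block from local statistics. The block size, the factor k, and the synthetic step-edge image are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def adaptive_edges(img, block=16, k=1.5):
    """Mark edge pixels whose gradient magnitude exceeds a per-block threshold."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    edges = np.zeros_like(mag, dtype=bool)
    for i in range(0, mag.shape[0], block):
        for j in range(0, mag.shape[1], block):
            tile = mag[i:i + block, j:j + block]
            thresh = tile.mean() + k * tile.std()   # region-specific threshold
            edges[i:i + block, j:j + block] = tile > thresh
    return edges

# Synthetic test image: a vertical step edge plus weak noise.
rng = np.random.default_rng(1)
row, col = np.mgrid[0:128, 0:128]
img = (col > 64).astype(float) + 0.05 * rng.standard_normal((128, 128))
edges = adaptive_edges(img)
```

Because each block computes its own threshold, the strong step edge is detected even though noisy blocks elsewhere have very different gradient statistics.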
Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1994-01-01
This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuators and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.
Simulation of SeaWinds Measurements in the Presence of Rain Using Collocated TRMM PR Data
Long, David G.
Falling rain droplets, along with ocean surface perturbations due to rain, change the backscatter signature of the waves induced by near-surface winds. A simple model incorporates the effects of rain…
Collocated Interaction with Flying Robots Wai Shan (Florence) Ng, Ehud Sharlin
Sharlin, Ehud
Keywords: interaction, Wizard of Oz evaluation. I. INTRODUCTION: Controllers and pre-planned flight path programs are the norm for controlling unmanned aerial vehicles (UAVs, or drones), which rarely engage with other collocated people or animals. Currently drones are mostly used for remote surveillance…
Collocational Differences between L1 and L2: Implications for EFL Learners and Teachers
ERIC Educational Resources Information Center
Sadeghi, Karim
2009-01-01
Collocations are one of the areas that produce problems for learners of English as a foreign language. Iranian learners of English are by no means an exception. Teaching experience at schools, private language centers, and universities in Iran suggests that a significant part of EFL learners' problems with producing the language, especially at…
Utilizing Lexical Data from a Web-Derived Corpus to Expand Productive Collocation Knowledge
ERIC Educational Resources Information Center
Wu, Shaoqun; Witten, Ian H.; Franken, Margaret
2010-01-01
Collocations are of great importance for second language learners, and a learner's knowledge of them plays a key role in producing language fluently (Nation, 2001: 323). In this article we describe and evaluate an innovative system that uses a Web-derived corpus and digital library software to produce a vast concordance and present it in a way…
The thirteenth AAAI Mobile Robot Competition and Exhibition was once again collocated with
Maxwell, Bruce
The thirteenth AAAI Mobile Robot Competition and Exhibition was once again collocated with AAAI events. It was held last July in California. The primary purpose of the Mobile Robot Competition and Exhibition is to bring together…
Code of Federal Regulations, 2010 CFR
2010-10-01
... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...
Strategies in Translating Collocations in Religious Texts from Arabic into English
ERIC Educational Resources Information Center
Dweik, Bader S.; Shakra, Mariam M. Abu
2010-01-01
The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…
Mamoulis, Nikos
…vultures, etc.), we introduce the problem of discovering collocation episodes in them (e.g., if a puma is moving near a deer, then a vulture is also going to move close to the same deer with high probability). If a puma is moving close to a deer for 1 minute, we expect that a vulture will also move near to this deer in 3…
Frequent Collocates and Major Senses of Two Prepositions in ESL and ENL Corpora
ERIC Educational Resources Information Center
Nkemleke, Daniel
2009-01-01
This contribution assesses in quantitative terms frequent collocates and major senses of "between" and "through" in the corpus of Cameroonian English (CCE), the corpus of East-African (Kenya and Tanzania) English which is part of the International Corpus of English (ICE) project (ICE-EA), and the London Oslo/Bergen (LOB) corpus of British English.…
A seamless approach towards stochastic modeling: Sparse grid collocation and data
Zabaras, Nicholas J.
Baskar Ganapathysubramanian and Nicholas Zabaras, Materials Process Design and Control Laboratory. A seamless approach towards stochastic modeling: sparse grid collocation and data-driven input models, for processes in the presence of multiple sources of uncertainty. A general application of these ideas to many…
Jones, Alan G.
Structure inferred from collocated teleseismic and magnetotelluric observations: Great Slave Lake shear zone, northern Canada. David W. Eaton, Alan G. Jones, and Ian J… The study of the Great Slave Lake shear zone, northern Canada, investigated lithospheric anisotropy and tested structure inferred from the collocated observations.
Your Participation Is "Greatly/Highly" Appreciated: Amplifier Collocations in L2 English
ERIC Educational Resources Information Center
Edmonds, Amanda; Gudmestad, Aarnes
2014-01-01
The current study sets out to investigate collocational knowledge for a set of 13 English amplifiers among native and nonnative speakers of English, by providing a partial replication of one of the projects reported on in Granger (1998). The project combines both phraseological and distributional approaches to research into formulaic language to…
Collocation and inversion for a reentry optimal control problem Tobias NECKEL 1
Efficient trajectory optimization is an enabling technology for versatile real-time trajectory generation.
Investigation of Native Speaker and Second Language Learner Intuition of Collocation Frequency
ERIC Educational Resources Information Center
Siyanova-Chanturia, Anna; Spina, Stefania
2015-01-01
Research into frequency intuition has focused primarily on native (L1) and, to a lesser degree, nonnative (L2) speaker intuitions about single word frequency. What remains a largely unexplored area is L1 and L2 intuitions about collocation (i.e., phrasal) frequency. To bridge this gap, the present study aimed to answer the following question: How…
Prostate segmentation in 3D US images using the cardinal-spline-based discrete dynamic contour
NASA Astrophysics Data System (ADS)
Ding, Mingyue; Chen, Congjin; Wang, Yunqiu; Gyacskov, Igor; Fenster, Aaron
2003-05-01
Our slice-based 3D prostate segmentation method comprises three steps. 1) Initialization: we choose more than three points on the boundary of the prostate along one direction and use a Cardinal spline to interpolate an initial prostate boundary, which is divided into vertices. 2) Boundary deformation: at each vertex, the internal and external forces are calculated; these forces drive the evolving contour to the true boundary of the prostate. 3) 3D prostate segmentation: we propagate the final contour in the initial slice to the adjacent slices and refine it until the prostate boundaries of all slices are segmented. Finally, we calculate the volume of the prostate from a 3D mesh surface of the prostate. Experiments with the 3D US images of six patient prostates demonstrated that our method efficiently avoids being trapped in local minima; the average percentage error was 4.8%, and the average percentage error in measuring the prostate volume was less than 5% with respect to manual planimetry.
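The initialization step, interpolating a closed contour through a few user-picked boundary points with a Cardinal spline, can be sketched as follows. This assumes the common Catmull-Rom case (Cardinal spline with tension 0.5); the paper's tension value and point counts are not specified here.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

# Closed contour through four illustrative boundary points (2-D slice).
pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
contour = []
n = len(pts)
for i in range(n):
    p0, p1, p2, p3 = (pts[(i - 1) % n], pts[i],
                      pts[(i + 1) % n], pts[(i + 2) % n])
    for t in np.linspace(0.0, 1.0, 10, endpoint=False):
        contour.append(catmull_rom(p0, p1, p2, p3, t))
contour = np.array(contour)   # densely sampled initial boundary (vertices)
```

The interpolated contour passes exactly through the picked points, which is why a Cardinal spline suits initialization from sparse user clicks.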
Time Varying Compensator Design for Reconfigurable Structures Using Non-Collocated Feedback
NASA Technical Reports Server (NTRS)
Scott, Michael A.
1996-01-01
Analysis and synthesis tools are developed to improve the dynamic performance of reconfigurable, nonminimum phase, nonstrictly positive real, time-variant systems. A novel Spline Varying Optimal (SVO) controller is developed for the kinematic nonlinear system, in which a spline function approximates the system model, observer, and controller gain. The SVO controller has several advantages: the spline function approximation is simply connected, so the SVO controller is more continuous than traditional gain-scheduled controllers when implemented on a time-varying plant; it is easier for real-time implementation in storage and computational effort; where system identification is required, the spline function requires fewer experiments, namely four; and initial startup estimator transients are eliminated. The SVO compensator was evaluated on a high-fidelity simulation of the Shuttle Remote Manipulator System. The SVO controller demonstrated significant improvement over the present arm performance: (1) the damping level was improved by a factor of 3; and (2) peak joint torque was reduced by a factor of 2 following Shuttle thruster firings.
Explosion Source Location Study Using Collocated Acoustic and Seismic Networks in Israel
NASA Astrophysics Data System (ADS)
Pinsky, V.; Gitterman, Y.; Arrowsmith, S.; Ben-Horin, Y.
2013-12-01
We explore a joint analysis of seismic and infrasonic signals to improve automatic monitoring of small local/regional events, such as construction and quarry blasts, military chemical explosions, and sonic booms, using collocated seismic and infrasonic networks recently built in Israel (ISIN) in the framework of a project sponsored by the Bi-national USA-Israel Science Foundation (BSF). The general target is to create an automatic system that provides detection, location and identification of explosions in real time or close to real time. At the moment the network comprises 15 stations hosting a microphone and a seismometer (or accelerometer), operated by the Geophysical Institute of Israel (GII), plus two infrasonic arrays operated by the National Data Center, Soreq: IOB in the south (Negev desert) and IMA in the north of Israel (Upper Galilee), collocated with the IMS seismic array MMAI. The study utilizes a ground-truth database of numerous Rotem phosphate quarry blasts, a number of controlled explosions for demolition of outdated ammunition, and experimental surface explosions for structure-protection research at the Sayarim Military Range. A special event, comprising four military explosions in a neighboring country, which provided both strong seismic (up to 400 km) and infrasound (up to 300 km) waves, is also analyzed. For all of these events the ground-truth coordinates and/or the results of seismic location by the Israel Seismic Network (ISN) have been provided. For automatic event detection and phase picking we tested a new recursive picker based on a statistically optimal detector, and the results were compared to the manual picks.
Several location techniques have been tested using the ground-truth event recordings, and the preliminary results have been compared to the ground-truth locations: 1) a number of events were located as the intersection of azimuths estimated by the wide-band F-K analysis technique applied to the infrasonic phases of the two distant arrays; 2) a standard robust grid-search location procedure based on phase picks and a constant celerity per phase (tropospheric or stratospheric) was applied; 3) a joint coordinate grid-search procedure using array waveforms and phase picks was tested; and 4) the Bayesian Infrasonic Source Localization (BISL) method, incorporating semi-empirical model-based prior information, was modified for the array+network configuration and applied to the ground-truth events. For this purpose we accumulated data from former observations of air-to-ground infrasonic phases to compute station-specific ground-truth celerity-range histograms (ssgtCRH) and/or model-based CRH (mbCRH), which substantially improve the location results. To build the mbCRH, local meteorological data and ray-tracing modeling in three available azimuth ranges (quadrants North: 315°-45°, South: 135°-225°, East: 45°-135°), accounting for seasonal variations in wind directivity, were used.
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise
2003-01-01
This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.
Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting
NASA Astrophysics Data System (ADS)
Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.
2012-02-01
We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
Regional VTEC Modeling over Turkey Using MARS (Multivariate Adaptive Regression Splines)
NASA Astrophysics Data System (ADS)
Onur Karslioglu, Mahmut; Durmaz, Murat; Nohutcu, Metin
2010-05-01
It is generally known that the spherical harmonic representation of the ionosphere is not suitable for local and regional applications. Additionally, irregular data and gaps cause numerical difficulties in modeling the ionosphere. We propose an efficient algorithm based on Multivariate Adaptive Regression Splines (MARS) as a new non-parametric model for regional spatio-temporal mapping of the ionospheric electron density. MARS can process very large sets of observations and is an adaptive and flexible method applicable to both linear and non-linear problems. The basis functions are derived directly from the observations and have a space-partitioning property, which results in an adaptive model. This property helps avoid the numerical problems and computational inefficiency caused by the number of coefficients, which would otherwise have to be increased to detect local variations of the ionosphere. The model complexity can be controlled by the user by limiting the maximal number of coefficients and the order of products of the basis functions. In this study the MARS algorithm is applied to real data sets over Turkey for regional ionosphere modelling.
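The building block of MARS is the paired hinge function max(0, x − k) / max(0, k − x), whose knots are chosen adaptively from the data. The sketch below shows only the basis and a least-squares fit with a fixed knot; the greedy forward/backward knot search that makes MARS adaptive is omitted, and the one-knot example is an illustration, not the paper's model.

```python
import numpy as np

def hinge_basis(x, knots):
    """Design matrix of an intercept plus paired hinge functions at each knot."""
    cols = [np.ones_like(x)]
    for k in knots:
        cols.append(np.maximum(0.0, x - k))   # right hinge max(0, x - k)
        cols.append(np.maximum(0.0, k - x))   # left hinge  max(0, k - x)
    return np.column_stack(cols)

# A piecewise-linear target with a kink at 0 is represented exactly by
# one hinge pair: |x| = max(0, x) + max(0, -x).
x = np.linspace(-2.0, 2.0, 200)
y = np.abs(x)
B = hinge_basis(x, knots=[0.0])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef
```

Because the target lies in the span of the basis, the least-squares fit recovers it to machine precision; real MARS fits trade off knot count against smoothness.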
Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings
Guo, Y.; Keller, J.; Errichello, R.; Halse, C.
2013-12-01
Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.
Minimum fuel coplanar aeroassisted orbital transfer using collocation and nonlinear programming
NASA Technical Reports Server (NTRS)
Shi, Yun Yuan; Young, D. H.
1991-01-01
The fuel optimal control problem arising in coplanar orbital transfer employing aeroassisted technology is addressed. The mission involves the transfer from high energy orbit (HEO) to low energy orbit (LEO) without plane change. The basic approach here is to employ a combination of propulsive maneuvers in space and aerodynamic maneuvers in the atmosphere. The basic sequence of events for the coplanar aeroassisted HEO to LEO orbit transfer consists of three phases. In the first phase, the transfer begins with a deorbit impulse at HEO which injects the vehicle into an elliptic transfer orbit with perigee inside the atmosphere. In the second phase, the vehicle is optimally controlled by lift and drag modulation to satisfy heating constraints and to exit the atmosphere with the desired flight path angle and velocity so that the apogee of the exit orbit is the altitude of the desired LEO. Finally, a second impulse is required to circularize the orbit at LEO. The performance index is maximum final mass. Simulation results show that the coplanar aerocapture is quite different from the case where orbital plane changes are made inside the atmosphere. In the latter case, the vehicle has to penetrate deeper into the atmosphere to perform the desired orbital plane change. For the coplanar case, the vehicle needs only to penetrate the atmosphere deep enough to reduce the exit velocity so the vehicle can be captured at the desired LEO. The peak heating rates are lower and the entry corridor is wider. From the thermal protection point of view, the coplanar transfer may be desirable. Parametric studies also show that the maximum peak heating rates and the entry corridor width are functions of the maximum lift coefficient. The problem is solved using a direct optimization technique which uses piecewise polynomial representation for the states and controls and collocation to represent the differential equations.
This converts the optimal control problem into a nonlinear programming problem which is solved numerically by using a modified version of NPSOL. Solutions were obtained for the described problem for cases with and without heating constraints. The method appears to be more robust than other optimization methods. In addition, the method can handle complex dynamical constraints.
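The transcription step, collocation turning an optimal control problem into a nonlinear program, can be sketched on a toy problem. This is a hedged illustration only: a double integrator replaces the aeroassisted dynamics, trapezoidal collocation stands in for the paper's piecewise polynomials, and scipy's SLSQP stands in for NPSOL.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize integral of u^2 for x'' = u, moving from rest at
# x(0) = 0 to rest at x(1) = 1 (analytic optimum: u = 6 - 12t, cost 12).
N = 21
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]

def unpack(z):
    return z[:N], z[N:2 * N], z[2 * N:]   # position, velocity, control

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum((u[:-1] ** 2 + u[1:] ** 2) / 2)   # trapezoidal cost

def defects(z):
    """Collocation defects: the ODE and boundary conditions as equalities."""
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h * (v[1:] + v[:-1]) / 2      # x' = v
    dv = v[1:] - v[:-1] - h * (u[1:] + u[:-1]) / 2      # v' = u
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]
    return np.concatenate([dx, dv, bc])

res = minimize(objective, np.zeros(3 * N), method='SLSQP',
               constraints={'type': 'eq', 'fun': defects})
x_opt, v_opt, u_opt = unpack(res.x)
```

The differential equations appear only as algebraic defect constraints, which is exactly the conversion to a nonlinear programming problem that the abstract describes.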
Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.
2012-04-15
Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging is becoming clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the total dose actually delivered to the patient. However, uncertainties in the mapping between images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution to be mapped. Its performance is demonstrated in the context of dose mapping based on B-spline registration. It is based on evaluating the sensitivity of the dose mapping to variations of the B-spline coefficients, combined with evaluating the sensitivity of the registration metric to the same variations. It was evaluated on patient data deformed according to a breathing model, for which the ground truth of the deformation, and hence the actual dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from areas of the same image where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. The method was tested for dose mapping, but it may be applied in the context of other mapping tasks as well.
Penalized splines for smooth representation of high-dimensional Monte Carlo datasets
NASA Astrophysics Data System (ADS)
Whitehorn, Nathan; van Santen, Jakob; Lafebre, Sven
2013-09-01
Detector response to a high-energy physics process is often estimated by Monte Carlo simulation. For purposes of data analysis, the results of this simulation are typically stored in large multi-dimensional histograms, which can quickly become both too large to easily store and manipulate and numerically problematic due to unfilled bins or interpolation artifacts. We describe here an application of the penalized spline technique (Marx and Eilers, 1996) [1] to efficiently compute B-spline representations of such tables and discuss aspects of the resulting B-spline fits that simplify many common tasks in handling tabulated Monte Carlo data in high-energy physics analysis, in particular their use in maximum-likelihood fitting.
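The penalized-spline construction referenced here (Eilers-Marx P-splines: a rich B-spline basis plus a difference penalty on the coefficients) can be sketched in a few lines. The knot count, penalty order, and smoothing parameter below are arbitrary choices for the example, not values from the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)   # noisy stand-in for an MC table

k, n_interior = 3, 20                                    # cubic B-spline basis
t = np.r_[[0.0] * k, np.linspace(0, 1, n_interior), [1.0] * k]  # clamped knots
n_coef = len(t) - k - 1

# design matrix: each column is one B-spline basis function evaluated on x
B = np.column_stack([BSpline(t, np.eye(n_coef)[j], k)(x) for j in range(n_coef)])

# second-order difference penalty on adjacent coefficients (the P-spline idea)
D = np.diff(np.eye(n_coef), n=2, axis=0)
lam = 1.0
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)  # penalized least squares
fit = B @ coef
```

Increasing `lam` trades fidelity for smoothness, which is what suppresses the unfilled-bin and interpolation artifacts the abstract mentions.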
Recovering coseismic point ground tilts from collocated high-rate GPS and accelerometers
NASA Astrophysics Data System (ADS)
Geng, Jianghui; Melgar, Diego; Bock, Yehuda; Pantoli, Elide; Restrepo, José
2013-10-01
Rotational along with translational and strain measurements are essential for a complete description of the motion of a deformable body in a seismic event. We propose a new seismogeodetic approach where collocated high-rate GPS and accelerometer measurements are combined to estimate permanent and dynamic coseismic ground tilts at a point, whereas at present, only dynamic tilts are measured with either a dense seismic array or an expensive ring laser gyroscope. We estimate point tilts for a five-story structure on a shake table subjected to 13 earthquake strong motion records of increasing intensity. For the most intense record from the 2002 M7.9 Denali earthquake, we observe a peak-to-peak dynamic tilt of 0.12° and a permanent tilt of 0.16° for the structure's roof. Point tilts derived from networks of collocated GPS and accelerometers can be used to estimate the rotational component of the seismic wavefield for improved earthquake source characterization.
Material approximation of data smoothing and spline curves inspired by slime mould.
Jones, Jeff; Adamatzky, Andrew
2014-09-01
The giant single-celled slime mould Physarum polycephalum is known to approximate a number of network problems via growth and adaptation of its protoplasmic transport network and can serve as an inspiration towards unconventional, material-based computation. In Physarum, predictable morphological adaptation is prevented by its adhesion to the underlying substrate. We investigate what possible computations could be achieved if these limitations were removed and the organism was free to completely adapt its morphology in response to changing stimuli. Using a particle model of Physarum displaying emergent morphological adaptation behaviour, we demonstrate how a minimal approach to collective material computation may be used to transform and summarise properties of spatially represented datasets. We find that the virtual material relaxes more strongly to high-frequency changes in data, which can be used for the smoothing (or filtering) of data by approximating moving average and low-pass filters in 1D datasets. The relaxation and minimisation properties of the model enable the spatial computation of B-spline curves (approximating splines) in 2D datasets. Both clamped and unclamped spline curves of open and closed shapes can be represented, and the degree of spline curvature corresponds to the relaxation time of the material. The material computation of spline curves also includes novel quasi-mechanical properties, including unwinding of the shape between control points and a preferential adhesion to longer, straighter paths. Interpolating splines could not directly be approximated due to the formation and evolution of Steiner points at narrow vertices, but were approximated after rectilinear pre-processing of the source data. This pre-processing was further simplified by transforming the original data to contain the material inside the polyline. 
These exemplary results expand the repertoire of spatially represented unconventional computing devices by demonstrating a simple, collective and distributed approach to data and curve smoothing. PMID:24979075
Design and Application of a Collocated Capacitance Sensor for Magnetic Bearing Spindle
NASA Technical Reports Server (NTRS)
Shin, Dongwon; Liu, Seon-Jung; Kim, Jongwon
1996-01-01
This paper presents a collocated capacitance sensor for magnetic bearings. The main feature of the sensor is that it is made of a specific compact printed circuit board (PCB). The signal processing unit has been also developed. The results of the experimental performance evaluation on the sensitivity, resolution and frequency response of the sensor are presented. Finally, an application example of the sensor to the active control of a magnetic bearing is described.
Forsberg, C.; Lewis, L.
2013-07-01
It is an accident of history that the current model of the fuel cycle is a separate set of facilities connected by transportation. The question is whether collocation and integration of reprocessing and fuel fabrication with the repository significantly reduce the costs of a closed fuel cycle while improving system performance in terms of safety and long-term repository performance. This paper examines the question in terms of higher-level functional requirements of reprocessing systems and geological repositories.
Generation of Knot Net for Calculation of Quadratic Triangular B-spline Surface of Human Head
NASA Astrophysics Data System (ADS)
Mihalík, Ján
2011-09-01
This paper deals with calculation of the quadratic triangular B-spline surface of the human head for the purpose of its modeling in the standard videocodec MPEG-4 SNHC. In connection with this we propose an algorithm of generation of the knot net and present the results of its application for triangulation of the 3D polygonal model Candide. Then for the model and generated knot net as well as an established distribution of control points we show the results of the calculated quadratic triangular B-spline surface of the human head including its textured version for the texture of the selected avatar.
Ma, Chien-Ching; Chuang, Kuo-Chih; Pan, Shan-Ying
2011-12-01
Polyvinylidene fluoride (PVDF) films are light, flexible, and have high piezoelectricity. Because of these advantages, they have been widely used as sensors in applications such as underwater investigation, nondestructive damage detection, robotics, and active vibration suppression. PVDF sensors are especially preferred over conventional strain gauges in active vibration control because the PVDF sensors are easy to cut into different sizes or shapes as piezoelectric actuators and they can then be placed as collocated pairs. In this work, to focus on demonstrating the dynamic sensing performance of the PVDF film sensor, we revisit the active vibration control problem of a cantilever beam using a collocated lead zirconate titanate (PZT) actuator/PVDF film sensor pair. Before applying active vibration control, the measurement characteristics of the PVDF film sensor are studied by simultaneous comparison with a strain gauge. The loading effect of the piezoelectric actuator on the cantilever beam is also investigated in this paper. Finally, four simple, robust active vibration controllers are employed with the collocated PZT/PVDF pair to suppress vibration of the cantilever beam subjected to impact loadings. The four controllers are the velocity feedback controller, the integral resonant controller (IRC), the resonant controller, and the positive position feedback (PPF) controller. Suppression of impact disturbances is especially suitable for the purpose of demonstrating the dynamic sensing performance of the PVDF sensor. The experimental results also provide suggestions for choosing between the previously mentioned controllers, which have been proven to be effective in suppressing impact-induced vibrations. PMID:23443690
NASA Technical Reports Server (NTRS)
Robbins, J. W.
1985-01-01
An autonomous spaceborne gravity gradiometer mission is being considered as a post-Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities depending on the choice of covariance types. Selected for this study were 30′ × 30′ mean gravity and height anomalies. Existing software and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30′ × 30′ mean gravity anomalies to an accuracy of 9.2 mgal with this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions, and satellite mission parameters.
NASA Astrophysics Data System (ADS)
Finger, F.; Werner, F.; Klingebiel, M.; Ehrlich, A.; Jäkel, E.; Voigt, M.; Borrmann, S.; Spichtinger, P.; Wendisch, M.
2015-07-01
Spectral optical layer properties of cirrus are derived from simultaneous and vertically collocated measurements of spectral upward and downward solar irradiance above and below the cloud layer and concurrent in situ microphysical sampling. From the irradiance data, spectral transmissivity, absorptivity, reflectivity, and cloud top albedo of the observed cirrus layer are obtained. At the same time, microphysical properties of the cirrus were sampled. The close collocation of the radiative and microphysical measurements, above, beneath, and inside the cirrus, is obtained by using a research aircraft (Learjet 35A) in tandem with a towed platform called AIRTOSS (AIRcraft TOwed Sensor Shuttle). AIRTOSS can be released from and retracted back to the research aircraft by means of a cable up to a distance of 4 km. Data were collected in two field campaigns above the North and Baltic Sea in spring and late summer 2013. Exemplary results from one measurement flight are discussed to illustrate the benefits of collocated sampling. Based on the measured cirrus microphysical properties, radiative transfer simulations were applied to quantify the impact of cloud particle properties such as crystal shape, effective radius r_eff, and optical thickness τ on cirrus optical layer properties. In addition, the effects of clouds beneath the cirrus are evaluated. They cause differences in the layer properties of the cirrus by a factor of 2 to 3, and in the cirrus radiative forcing by up to a factor of 4. If low-level clouds below the cirrus are not considered, the solar cooling due to the cirrus is significantly overestimated.
Technology Transfer Automated Retrieval System (TEKTRAN)
Cubic splines can be used to model fixed and random effects of lactation curves. A total of 64,138 test-day observations for first lactation Holstein cows recorded as treated with bovine somatotropin (bST) and 138,008 test-day observations for untreated cows were obtained from Dairy Records Manageme...
Nonparametric Log-Spectrum Estimation using Disconnected Regression Splines and Genetic Algorithms
Lee, Thomas
The approach combines a selection criterion for choosing a "best" fitting model with a genetic algorithm for effective model search. Keywords: genetic algorithms, log-spectrum estimation, regression splines.
Modeling Respiratory Mechanics in the MCAT and Spline-Based MCAT Phantoms
Segars, W. Paul
Duncan, James S.
The changes both phantoms underwent were splined over time to create time-continuous 4D respiratory models. Respiratory motion can cause artifacts in myocardial SPECT, leading to misinterpretation. Unfortunately, it is limited in its ability to realistically model anatomical variations and patient motion
Detecting Pulsatile Hormone Secretions Using Nonlinear Mixed Effects Partial Spline Models
Wang, Yuedong
The identification of episodic releases of hormonal pulse signals from hormone concentration measurements is of critical importance in endocrinology. In this paper, we propose...
Modeling of Hormone Secretion-Generating Mechanisms With Splines: A Pseudo-Likelihood Approach
Wang, Yuedong
A pseudo-likelihood approach is proposed for the investigation of the underlying hormone secretion-generating mechanism. Characterizing hormone time series is a difficult task, as most hormones are secreted in a pulsatile manner and pulses are often masked by the slow...
Recovering Risk-Neutral Probability Density Functions from Options Prices using Cubic Splines
TÃ¼tÃ¼ncÃ¼, Reha
We present a method to estimate the risk-neutral probability density function (pdf) of the future prices of an underlying asset from options prices by numerical optimization software. In the quadratic programming formulation, the positivity of the risk-neutral...
Li, Ke
2012-02-14
with the Inefficiency Analysis technique. In the last essay, we extended the models developed in the previous two essays with regression splines and let the data decide how flexible or complicated the model should be. We showed the improvement of the deterministic frontier...
Representation of Spatial Functions in Geodesy Using B-Spline Wavelets with Compact Support
In geodesy, one of the principal research foci is the efficient representation of spatial functions...
A Thin-Plate Spline Calibration Model for Fingerprint Sensor Interoperability
Ross, Arun Abraham
fingerprint, face, iris, and speech. In the context of fingerprint technology, variations are observed in the acquired images of a fingerprint due to differences in sensor resolution, scanning area, and sensing technology
Fast selection of spectral variables with B-spline compression
Rossi, Fabrice; Francois, Damien
Verleysen, Michel
for example, a near-infrared spectrum. Such problems may be encountered in the food [1], pharmaceutical [2], and textile [3] industries, to cite only a few. Viewed from a statistical or data analysis perspective, the main...
Test Score Reporting Referenced to Doubly-Moderated Cut Scores Using Splines
ERIC Educational Resources Information Center
Schafer, William D.; Hou, Xiaodong
2011-01-01
This study discusses and presents an example of a use of spline functions to establish and report test scores using a moderated system of any number of cut scores. Our main goals include studying the need for and establishing moderated standards and creating a reporting scale that is referenced to all the standards. Our secondary goals are to make…
Estimating snow microphysical properties using collocated multisensor observations
NASA Astrophysics Data System (ADS)
Wood, Norman B.; L'Ecuyer, Tristan S.; Heymsfield, Andrew J.; Stephens, Graeme L.; Hudak, David R.; Rodriguez, Peter
2014-07-01
The ability of ground-based in situ and remote sensing observations to constrain microphysical properties for dry snow is examined using a Bayesian optimal estimation retrieval method. Power functions describing the variation of mass and horizontally projected area with particle size and a parameter related to particle shape are retrieved from near-Rayleigh radar reflectivity, particle size distribution, snowfall rate, and size-resolved particle fall speeds. Algorithm performance is explored in the context of instruments deployed during the Canadian CloudSat CALIPSO Validation Project, but the algorithm is adaptable to other similar combinations of sensors. Critical estimates of observational and forward model uncertainties are developed and used to quantify the performance of the method using synthetic cases developed from actual observations of snow events. In addition to illustrating the technique, the results demonstrate that this combination of sensors provides useful constraints on the mass parameters and on the coefficient of the area power function but only weakly constrains the exponent of the area power function and the shape parameter. Information content metrics show that about two independent quantities are measured by the suite of observations and that the method is able to resolve about eight distinct realizations of the state vector containing the mass and area power function parameters. Alternate assumptions about observational and forward model uncertainties reveal that improved modeling of particle fall speeds could contribute substantial improvements to the performance of the method.
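As a much-simplified stand-in for the retrieval described above, the mass power law m(D) = a·D^b can be estimated from size-resolved data by log-log least squares. The constants and noise model below are invented for the illustration; the paper itself uses Bayesian optimal estimation over several observation types.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 0.005, 2.1              # illustrative prefactor and exponent
D = np.geomspace(0.1, 10.0, 50)          # particle maximum dimension (arbitrary units)
m = a_true * D**b_true * rng.lognormal(0.0, 0.1, D.size)   # noisy particle masses

# log m = log a + b log D  ->  ordinary least squares in log-log space
A = np.column_stack([np.ones_like(D), np.log(D)])
(log_a, b_hat), *_ = np.linalg.lstsq(A, np.log(m), rcond=None)
a_hat = np.exp(log_a)
```

The abstract's point that the exponent of the area power law is only weakly constrained corresponds, in this simplified picture, to a nearly flat cost surface along that parameter.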
Recovering bridge deflections from collocated acceleration and strain measurements
NASA Astrophysics Data System (ADS)
Bell, M.; Ma, T. W.; Xu, N. S.
2015-04-01
In this research, an internal model based method is proposed to estimate the displacement profile of a bridge subjected to a moving traffic load using a combination of acceleration and strain measurements. The structural response is assumed to be within the linear range. The deflection profile is assumed to be dominated by the fundamental mode of the bridge, therefore requiring knowledge of only the first mode. This still holds under multiple-vehicle loading, since the higher mode shapes do not significantly affect the overall response of the structure. Using the structural modal parameters and partial knowledge of the moving vehicle load, the internal models of the structure and the moving load can be respectively established, which can be used to form an autonomous state-space representation of the system. The structural displacements, velocities, and accelerations are the states of such a system, and it is fully observable when the measured output contains structural accelerations and strains. Reliable estimates of structural displacements are obtained using the standard Kalman filtering technique. The effectiveness and robustness of the proposed method have been demonstrated and evaluated via numerical simulation of a simply supported single-span concrete bridge subjected to a moving traffic load.
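A minimal sketch of the estimation idea, assuming a single dominant mode: a Kalman filter fuses a strain-like (displacement-proportional) channel with an accelerometer channel to recover displacement. The one-mode free-decay model and all noise levels are invented for the example; this is not the paper's internal-model formulation of the moving load.

```python
import numpy as np

wn, zeta, dt = 2 * np.pi * 1.0, 0.02, 0.005      # 1 Hz mode, light damping
A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
F = np.eye(2) + A * dt                            # forward-Euler state transition
H = np.array([[1.0, 0.0],                         # strain ~ displacement
              [-wn**2, -2 * zeta * wn]])          # accelerometer ~ x'' (free decay)
Q = 1e-8 * np.eye(2)                              # small process noise
R = np.diag([1e-6, 1e-2])                         # strain / accel noise variances

rng = np.random.default_rng(2)
x_true = np.array([0.01, 0.0])                    # released from 10 mm, at rest
x_est, P = np.zeros(2), 1e-4 * np.eye(2)
errs = []
for _ in range(2000):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, [1e-3, 1e-1])   # noisy strain + acceleration
    x_est = F @ x_est                             # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                           # update
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
    errs.append(x_est[0] - x_true[0])
rmse = float(np.sqrt(np.mean(np.square(errs))))
```

The displacement estimate ends up considerably less noisy than the strain channel alone, which is the benefit of fusing the two sensor types.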
Application of collocated GPS and seismic sensors to earthquake monitoring and early warning.
Li, Xingxing; Zhang, Xiaohong; Guo, Bofeng
2013-01-01
We explore the use of collocated GPS and seismic sensors for earthquake monitoring and early warning. The GPS and seismic data collected during the 2011 Tohoku-Oki (Japan) and the 2010 El Mayor-Cucapah (Mexico) earthquakes are analyzed by using a tightly-coupled integration. The performance of the integrated results is validated by both time and frequency domain analysis. We detect the P-wave arrival and observe small-scale features of the movement from the integrated results and locate the epicenter. Meanwhile, permanent offsets are extracted from the integrated displacements highly accurately and used for reliable fault slip inversion and magnitude estimation. PMID:24284765
Shiau, J.J.; Wahba, G.; Johnson, D.R.
1986-12-01
A new method, based on partial spline models, is developed for including specified discontinuities in otherwise smooth two- and three-dimensional objective analyses. The method is appropriate for including tropopause height information in two- and three-dimensional temperature analyses, using the O'Sullivan-Wahba physical variational method for analysis of satellite radiance data, and may in principle be used in a combined variational analysis of observed, forecast, and climate information. A numerical method for its implementation is described, and a prototype two-dimensional analysis based on simulated radiosonde and tropopause height data is shown. The method may also be appropriate for other geophysical problems, such as modeling the ocean thermocline, fronts, discontinuities, etc. 39 references.
Energy Science and Technology Software Center (ESTSC)
2012-08-01
Interatomic force and energy calculation subroutine to be used with the molecular dynamics simulation code LAMMPS (Ref a.). The code evaluates the total energy and atomic forces (energy gradient) according to a cubic spline-based variant (Ref b.) of the Modified Embedded Atom Method (MEAM) with an additional Stillinger-Weber (SW) contribution.
Transfer coefficients for evaporation of a system with a Lennard-Jones long-range spline potential
Kjelstrup, Signe
liquid-vapor interface. In experiments the problem becomes at least two-dimensional, and phenomena like convection may occur. Transfer coefficients are computed by nonequilibrium molecular dynamics simulations for a Lennard-Jones fluid with a long-range spline potential
Tenderholt, A.; Hedman, B.; Hodgson, K.O.
2007-01-08
PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data.
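The final two steps PySpline performs (k³ weighting, then an FFT to R-space) can be sketched on a synthetic single-shell χ(k); the shell distance, damping envelope, and grid below are invented for the example and do not come from PySpline itself.

```python
import numpy as np

k = np.linspace(0.0, 12.0, 512)                    # photoelectron wavenumber grid
R_shell = 2.0                                      # synthetic scattering distance
chi = np.sin(2 * R_shell * k) * np.exp(-0.02 * k**2) / (k + 1e-9)  # toy chi(k)

w = k**3 * chi                                     # k^3-weighted EXAFS signal
n_fft = 2048
ft = np.fft.rfft(w, n=n_fft)                       # zero-padded FFT to R-space
dk = k[1] - k[0]
R = np.pi * np.fft.rfftfreq(n_fft, d=dk)           # chi ~ sin(2kR): R = pi * f
R_peak = R[np.argmax(np.abs(ft))]                  # dominant shell distance
```

The magnitude of `ft` peaks near the shell distance, which is exactly the R-space view an analyst inspects while tuning the spline parameters.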
NASA Astrophysics Data System (ADS)
Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong
2015-07-01
The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.
Inversion of the strain-life and strain-stress relationships for use in metal fatigue analysis
NASA Technical Reports Server (NTRS)
Manson, S. S.
1979-01-01
The paper presents closed-form solutions (collocation method and spline-function method) for the constants of the cyclic fatigue life equation so that they can be easily incorporated into cumulative damage analysis. The collocation method involves conformity with the experimental curve at specific life values. The spline-function method is such that the basic life relation is expressed as a two-part function, one applicable at strains above the transition strain (strain at intersection of elastic and plastic lines), the other below. An illustrative example is treated by both methods. It is shown that while the collocation representation has the advantage of simplicity of form, the spline-function representation can be made more accurate over a wider life range, and is simpler to use.
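The two-part life relation described above, one power law above the transition strain and one below, can be sketched with Coffin-Manson-style elastic and plastic terms; the material constants here are illustrative, not values from the paper.

```python
# Elastic part: (sig_f / E) * (2N)**b ; plastic part: eps_f * (2N)**c.
# The transition life is where the two strain amplitudes are equal.
sig_f, E, b = 900.0, 200e3, -0.09    # illustrative fatigue strength constants
eps_f, c = 0.6, -0.56                # illustrative fatigue ductility constants

def strain_amplitude(two_N):
    elastic = sig_f / E * two_N**b
    plastic = eps_f * two_N**c
    return elastic + plastic

# setting elastic == plastic and solving for 2N gives the transition life
two_Nt = (eps_f * E / sig_f) ** (1.0 / (b - c))
```

Below `two_Nt` the plastic term dominates and above it the elastic term does, which is why a two-part representation can be made more accurate over a wide life range than a single fitted form.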
NASA Technical Reports Server (NTRS)
Wahba, G.
1982-01-01
Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.
Wetherbee, G.A.; Gay, D.A.; Brunette, R.C.; Sweet, C.W.
2007-01-01
The National Atmospheric Deposition Program/Mercury Deposition Network (MDN) provides long-term, quality-assured records of mercury in wet deposition in the USA and Canada. Interpretation of spatial and temporal trends in the MDN data requires quantification of the variability of the MDN measurements. Variability is quantified for MDN data from collocated samplers at two MDN sites, one in Illinois and one in Washington. Median absolute differences in the collocated sampler data for total mercury concentration are approximately 11% of the median mercury concentration for all valid 1999-2004 MDN data. Median absolute differences are between 3.0% and 14% of the median MDN value for collector catch (sample volume) and between 6.0% and 15% of the median MDN value for mercury wet deposition. The overall measurement errors are sufficiently low to resolve between NADP/MDN measurements by ±2 ng L⁻¹ and ±2 µg m⁻² year⁻¹, which are the contour intervals used to display the data on NADP isopleth maps for concentration and deposition, respectively. © Springer Science+Business Media B.V. 2007.
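The precision statistic used here, the median absolute difference between collocated samplers expressed as a percentage of the median network value, is straightforward to compute; the numbers below are synthetic stand-ins, not MDN data.

```python
import numpy as np

rng = np.random.default_rng(3)
network = rng.lognormal(np.log(10.0), 0.5, 500)      # all network concentrations
pair_a = rng.lognormal(np.log(10.0), 0.5, 60)        # primary sampler values
pair_b = pair_a * rng.normal(1.0, 0.08, 60)          # collocated sampler, ~8% scatter

# median absolute collocated difference as a percent of the network median
mad_pct = 100.0 * np.median(np.abs(pair_a - pair_b)) / np.median(network)
```

Using medians rather than means keeps the statistic robust to the occasional outlier sample, which matters for skewed deposition data.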
Fortier, Marie-Odile P; Sturm, Belinda S M
2012-10-16
Resource demand analyses indicate that algal biodiesel production would require unsustainable amounts of freshwater and fertilizer supplies. Alternatively, municipal wastewater effluent can be used, but this restricts production of algae to areas near wastewater treatment plants (WWTPs), and to date, there has been no geospatial analysis of the feasibility of collocating large algal ponds with WWTPs. The goals of this analysis were to determine the available areas by land cover type within radial extents (REs) up to 1.5 miles from WWTPs; to determine the limiting factor for algal production using wastewater; and to investigate the potential algal biomass production at urban, near-urban, and rural WWTPs in Kansas. Over 50% and 87% of the land around urban and rural WWTPs, respectively, was found to be potentially available for algal production. The analysis highlights a trade-off between urban WWTPs, which are generally land-limited but have excess wastewater effluent, and rural WWTPs, which are generally water-limited but have 96% of the total available land. Overall, commercial-scale algae production collocated with WWTPs is feasible; 29% of the Kansas liquid fuel demand could be met with implementation of ponds within 1 mile of all WWTPs and supplementation of water and nutrients when these are limited. PMID:22970803
NASA Technical Reports Server (NTRS)
Hastings, E. C., Jr.; Shanks, R. E.; Champine, R. A.; Copeland, W. L.; Young, D. C.
1974-01-01
Flight tests have been conducted to evaluate the effectiveness of a wingtip vortex attenuating device, referred to as a spline. Vortex penetrations were made with a PA-28 behind a C-54 aircraft with and without wingtip splines attached, and the resultant rolling acceleration was measured and related to the roll acceleration capability of the PA-28. Tests were conducted over a range of separation distances from about 5 nautical miles (n. mi.) to less than 1 n. mi. Preliminary results indicate that, with the splines installed, there was a significant reduction in the vortex-induced roll acceleration experienced by the PA-28 probe aircraft, and the distance at which the PA-28 roll control became ineffective was reduced from 2.5 n. mi. to 0.6 n. mi. or less. There was a slight increase in approach noise (approximately 4 dB) with the splines installed, due primarily to the higher engine power used during approach. Although splines significantly reduced the C-54 rate of climb, the rates available with four engines were acceptable for this test program. Splines did not introduce any noticeable change in the handling qualities of the C-54.
B-splines and Hermite-Padé approximants to the exponential function
NASA Astrophysics Data System (ADS)
Sablonnière, Paul
2008-10-01
This paper is the continuation of a work initiated in [P. Sablonnière, An algorithm for the computation of Hermite-Padé approximations to the exponential function: divided differences and Hermite-Padé forms. Numer. Algorithms 33 (2003) 443-452] about the computation of Hermite-Padé forms (HPF) and associated Hermite-Padé approximants (HPA) to the exponential function. We present an alternative algorithm for their computation, based on the representation of HPF in terms of integral remainders with B-splines as Peano kernels. Using the good properties of discrete B-splines, this algorithm gives rise to a great variety of representations of HPF of higher orders in terms of HPF of lower orders, and in particular of classical Padé forms. We give some examples illustrating this algorithm, in particular, another way of constructing quadratic HPF already described by different authors. Finally, we briefly study a family of cubic HPF.
Pauchard, Y; Smith, M; Mintchev, M
2004-01-01
Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. Quantitative assessment of the goodness of the proposed compensation technique is presented. PMID:17272049
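Point-based thin-plate spline registration of this kind can be sketched with SciPy's `RBFInterpolator`, which provides the thin-plate spline kernel; the landmark coordinates and displacements below are invented for illustration, not taken from the paper's phantom.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark pairs: grid-phantom points in the distorted image
# and their true (reference) locations; coordinates are illustrative.
distorted = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                      [1.0, 1.0], [0.5, 0.5], [0.2, 0.8]])
reference = distorted + np.array([[0.05, -0.02], [0.00, 0.03],
                                  [-0.04, 0.00], [0.02, 0.02],
                                  [0.10, -0.05], [0.01, 0.04]])

# Thin-plate spline mapping from distorted to reference coordinates;
# with zero smoothing the TPS interpolates its control points exactly.
tps = RBFInterpolator(distorted, reference, kernel='thin_plate_spline')

corrected = tps(distorted)
print(np.allclose(corrected, reference, atol=1e-6))
```

Applying `tps` to every pixel coordinate of the distorted image would then resample it onto the reference geometry.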
B-spline finite elements for plane elasticity problems
Aggarwal, Bhavya
2007-04-25
The finite element method since its development in the 1950s has been used extensively in solving complex problems involving partial differential equations. The conventional finite element methods use piecewise Lagrange interpolation functions...
Estimation of Some Parameters from Morse-Morse-Spline-Van Der Waals Intermolecular Potential
Coroiu, I.
2007-04-23
Some parameters such as transport cross-sections and isotopic thermal diffusion factor have been calculated from an improved intermolecular potential, Morse-Morse-Spline-van der Waals (MMSV) potential proposed by R.A. Aziz et al. The treatment was completely classical and no corrections for quantum effects were made. The results would be employed for isotope separations of different spherical and quasi-spherical molecules.
Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.
Oliveira, Francisco P M; Tavares, João Manuel R S
2013-03-01
This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, an outstanding accuracy was achieved with the cubic B-splines. This accuracy was significantly better (p < 0.001) than the one obtained using the best solution proposed in our previous work. When applied to align real image sequences with unknown transformation involved, the alignment based on cubic B-splines also achieved superior results than our previous methodology (p < 0.001). The consequences of the temporal alignment on the dynamic center of pressure (COP) displacement was also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that the cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences. PMID:23135784
Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions
Jerome Blair
2008-05-15
An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
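A minimal least-squares cubic-spline fit of the kind described above can be sketched with SciPy; here the interior knots are fixed rather than selected adaptively as in the paper's time-varying-bandwidth algorithm, and the signal and noise level are invented for illustration.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * t)                 # "unknown" underlying signal
noisy = signal + rng.normal(0.0, 0.1, t.size)  # oscilloscope-like noise

# Interior knots control the local bandwidth of the fit: more knots where
# the signal changes quickly, fewer where it is smooth.
knots = np.linspace(0.1, 0.9, 7)
spline = LSQUnivariateSpline(t, noisy, knots, k=3)  # cubic least-squares fit

rmse = np.sqrt(np.mean((spline(t) - signal) ** 2))
print(rmse)
```

Knot selection then amounts to choosing `knots` to minimize an estimate of this error, which is the optimization problem the paper addresses.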
Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A
1996-04-01
This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second and third order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second and third order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model. PMID:8641156
A nonrational B-spline profiled horn with high displacement amplification for ultrasonic welding.
Nguyen, Huu-Tu; Nguyen, Hai-Dang; Uan, Jun-Yen; Wang, Dung-An
2014-12-01
A new horn with high displacement amplification for ultrasonic welding is developed. The profile of the horn is a nonrational B-spline curve with an open uniform knot vector. The ultrasonic actuation of the horn exploits the first longitudinal displacement mode of the horn. The horn is designed by an optimization scheme and finite element analyses. Performances of the proposed horn have been evaluated by experiments. The displacement amplification of the proposed horn is 41.4% and 8.6% higher than that of the traditional catenoidal horn and a Bézier-profile horn, respectively, with the same length and end surface diameters. The developed horn has a lower displacement amplification than the nonuniform rational B-spline profiled horn but a much smoother stress distribution. The developed horn, the catenoidal horn, and the Bézier horn are fabricated and used for ultrasonic welding of lap-shear specimens. The bonding strength of the joints welded by the open uniform nonrational B-spline (OUNBS) horn is the highest among the three horns for the various welding parameters considered. The locations of the failure mode and the distribution of the voids of the specimens are investigated to explain the reason for the high bonding strength achieved by the OUNBS horn. PMID:25081407
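The defining ingredient above, a B-spline curve on an open (clamped) uniform knot vector, can be sketched as follows; the control radii are invented placeholders, not the paper's optimized horn profile. With end knots repeated degree+1 times, the curve interpolates its first and last control values, which is what lets the designer pin the horn's two end-surface diameters.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative horn radius profile: control radii (mm) along the horn axis.
control = np.array([20.0, 18.0, 10.0, 6.0, 5.0])
degree = 3

# Open (clamped) uniform knot vector: end knots repeated degree+1 times,
# interior knots uniformly spaced, so the curve starts and ends exactly
# at the first and last control values.
n = control.size
interior = np.linspace(0.0, 1.0, n - degree + 1)[1:-1]
knots = np.concatenate([np.zeros(degree + 1), interior, np.ones(degree + 1)])

profile = BSpline(knots, control, degree)
print(profile(0.0), profile(1.0))  # endpoint interpolation of a clamped curve
```

An optimization scheme would then treat the interior entries of `control` as design variables subject to the fixed end diameters.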
Nonholonomic motion planning for a free-falling cat using spline approximation
NASA Astrophysics Data System (ADS)
Ge, XinSheng; Guo, ZhengXiong
2012-11-01
An optimal motion planning of a free-falling cat based on spline approximation is investigated. Nonholonomicity arises in a free-falling cat subjected to nonintegrable velocity constraints or nonintegrable conservation laws. The equation of dynamics of a free-falling cat is obtained by using the model of two symmetric rigid bodies. The control of the system can be converted to a motion planning problem for a driftless system. A cost function is used to incorporate the final errors and the control energy. The motion planning problem is to determine the control inputs that minimize the cost function, and it is formulated as an infinite-dimensional optimal control problem. By using control parameterization, the infinite-dimensional optimal control problem can be transformed into a finite-dimensional one. The particle swarm optimization (PSO) algorithm with cubic spline approximation is proposed to solve the finite-dimensional optimal control problem. The cubic spline approximation is introduced to realize the control parameterization. The resulting controls are smooth, and the initial and terminal values of the control inputs are zero, so they can be easily generated in experiment. Simulations are also performed for the nonholonomic motion planning of a free-falling cat. Simulation results show that the proposed algorithm is more effective than the Newtonian algorithm.
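The control-parameterization step can be sketched as below: a control input u(t) is represented by its values at a few interior knots, with the endpoint values pinned to zero so the control starts and ends at rest, as in the abstract. The knot count and parameter values are invented; a PSO would treat `params` as its decision vector.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Represent a control input u(t) on [0, T] by values at six uniform knots;
# the first and last values are fixed at zero (zero initial/terminal input).
T = 1.0
knots = np.linspace(0.0, T, 6)
params = np.array([0.3, -0.2, 0.5, 0.1])         # PSO decision variables
values = np.concatenate([[0.0], params, [0.0]])  # pin both endpoints to zero

u = CubicSpline(knots, values)  # smooth control, zero at t=0 and t=T
print(u(0.0), u(T))
```

Evaluating the cost function then only requires simulating the driftless dynamics under `u`, so the infinite-dimensional problem reduces to optimizing the four entries of `params`.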
NASA Astrophysics Data System (ADS)
Gutierrez, R. R.; Abad, J. D.; Parsons, D. R.
2011-12-01
The quantification of the variability of bedform geometry is necessary for scientific and practical purposes. For the former, it is needed to model bed roughness, cross-strata sets, vertical sorting, sediment transport rates, the transition between two-dimensional and three-dimensional dunes, velocity pulsations, flow over bedforms, the interaction between flow over bedforms and groundwater, and the transport of contaminants. For practical purposes, the study of the variability of bedforms is important to predict floods and flow resistance, to predict uplifting of manmade structures underneath river beds, to track future changes of bedforms and biota following dam removal, and to estimate the relationship between bedform characteristics and biota in river restoration, among others. Currently there is no standard nomenclature or procedure to separate bedform features such as sand waves, dunes and ripples, which are commonly present in large rivers. Likewise, there is no standard definition of the scope of the different scales of such bedform features. The present study proposes a standardization of the nomenclature and symbolic representation of bedform features and elaborates on the combined application of a robust spline filter and continuous wavelet transforms to separate the morphodynamic features. A fully automated robust spline procedure for uniformly sampled datasets is used. The algorithm, based on a penalized least squares method, allows fast smoothing of uniformly sampled data by means of the discrete cosine transform. The wavelet transforms, which overcome some limitations of Fourier transforms, are applied to identify the spectrum of bedform wavelengths. The proposed separation method is applied to a 370-m-wide, 1.028-km-long swath of bed morphology data from the Parana River, one of the world's largest rivers, located in Argentina. After the separation is carried out, the descriptors (e.g. wavelength, slope, and amplitude for both stoss and lee sides) of the dunes and ripples are statistically analyzed. Thus, a complete hierarchization and quantitative description of the spatial configuration of the bed is obtained. To the best of our knowledge, no previous study has retrieved such information.
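A one-dimensional penalized least-squares smoother of the DCT-based kind referred to above can be sketched as follows; the robust iterative reweighting of the published algorithm is omitted, the smoothing parameter `s` is an assumed value, and the "bed profile" is synthetic.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_smooth_1d(y, s):
    """Penalized least-squares smoothing via the type-II DCT (sketch).
    s is the smoothing parameter; larger s gives a smoother result."""
    n = y.size
    # Eigenvalues of the second-difference penalty operator, which the
    # type-II DCT diagonalizes under reflective boundary conditions.
    lam = -2.0 + 2.0 * np.cos(np.arange(n) * np.pi / n)
    gamma = 1.0 / (1.0 + s * lam ** 2)  # per-frequency attenuation
    return idct(gamma * dct(y, norm='ortho'), norm='ortho')

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 500)
bed = np.sin(2 * np.pi * 3 * x)             # large-scale "bedform" signal
noisy = bed + rng.normal(0.0, 0.3, x.size)  # small-scale roughness

smooth = dct_smooth_1d(noisy, s=10.0)
print(np.sqrt(np.mean((smooth - bed) ** 2)))
```

Subtracting `smooth` from the raw profile leaves the small-scale features, which the wavelet transform then classifies by wavelength.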
ERIC Educational Resources Information Center
Futagi, Yoko; Deane, Paul; Chodorow, Martin; Tetreault, Joel
2008-01-01
This paper describes the first prototype of an automated tool for detecting collocation errors in texts written by non-native speakers of English. Candidate strings are extracted by pattern matching over POS-tagged text. Since learner texts often contain spelling and morphological errors, the tool attempts to automatically correct them in order to…
ERIC Educational Resources Information Center
Molina-Plaza, Silvia; de Gregorio-Godeo, Eduardo
2010-01-01
Within the context of on-going research, this paper explores the pedagogical implications of contrastive analyses of multiword units in English and Spanish based on electronic corpora as a CALL resource. The main tenets of collocations from a contrastive perspective--and the points of contact and departure between both languages--are discussed…
Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki
2015-01-01
Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate elderly individual ambulatory ability in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate the motor function in walk tests, such as the TUG. The system tracks both legs and measures the trajectory of both legs. However, both legs might be close to each other, and one leg might be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than that during straight walking and the moving direction rapidly changes. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull–Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chances of false tracking. In addition, we verify the measurement accuracy of the leg trajectory compared to a three-dimensional motion analysis system (VICON). PMID:26404302
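A uniform Catmull-Rom interpolant of the kind used to bridge occlusions can be sketched as below; the leg positions are invented for illustration. The curve passes exactly through its two middle control points, so the interpolated trajectory joins the last position observed before the occlusion to the first one after it.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom interpolation between p1 (t=0) and p2 (t=1)."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)

# Illustrative leg positions (x, y in metres) around an occlusion:
# two samples before the leg was hidden and two after it reappeared.
before2, before1 = [0.0, 0.0], [0.1, 0.05]
after1, after2 = [0.3, 0.15], [0.4, 0.18]

# Estimate the hidden leg position halfway through the occlusion.
mid = catmull_rom(before2, before1, after1, after2, 0.5)
print(mid)
```

The outer points `before2` and `after2` only shape the tangents, which keeps the filled-in segment smooth through a rapid turning motion.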
Zhang, Zhikun; Zhou, Yingshun; Wang, Hongning; Zeng, Fanya; Yang, Xin; Zhang, Yi; Zhang, Anyun
2016-01-01
Infectious bronchitis virus (IBV) is a highly variable virus with a large number of genotypes. During 2011-2012, nineteen wild IBV strains were isolated in China. Sequence analysis showed that these isolates were divided into five sub-clusters: A2-like, CKCHLDL08I-like, SAIBK-like, KM91-like and TW97/4-like. Phylogenetic analysis based on the 1118 sequences available online suggested that all IBVs can be classified into six clusters. The prevalent strains, including all the isolates, were in cluster VI, with a genetic distance of 0.194-0.259 to Mass-type vaccines. In addition, we introduced the smoothing spline clustering (SSC) method to estimate the highly variable sites for some sub-clusters. The results showed that the highly variable sites differ among sub-clusters; the N-terminal sequences of the 4/91-like, TW97/4-like and Arkansas-like sub-clusters are more variable than those of other sub-clusters. This is the first time that the SSC method has been used in an evolutionary study of IBV. PMID:26494165
Cameron, Andrew; Lui, Dorothy; Boroomand, Ameneh; Glaister, Jeffrey; Wong, Alexander; Bizheva, Kostadinka
2013-01-01
Optical coherence tomography (OCT) allows for non-invasive 3D visualization of biological tissue at cellular level resolution. Often hindered by speckle noise, the visualization of important biological tissue details in OCT that can aid disease diagnosis can be improved by speckle noise compensation. A challenge with handling speckle noise is its inherent non-stationary nature, where the underlying noise characteristics vary with the spatial location. In this study, an innovative speckle noise compensation method is presented for handling the non-stationary traits of speckle noise in OCT imagery. The proposed approach centers on a non-stationary spline-based speckle noise modeling strategy to characterize the speckle noise. The novel method was applied to ultra high-resolution OCT (UHROCT) images of the human retina and corneo-scleral limbus acquired in-vivo that vary in tissue structure and optical properties. Test results showed improved performance of the proposed novel algorithm compared to a number of previously published speckle noise compensation approaches in terms of higher signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and better overall visual assessment. PMID:24049697
Distributed lag and spline modeling for predicting energy expenditure from accelerometry in youth
Chen, Kong Y.; Acra, Sari A.; Buchowski, Maciej S.
2010-01-01
Movement sensing using accelerometers is commonly used for the measurement of physical activity (PA) and estimating energy expenditure (EE) under free-living conditions. The major limitation of this approach is lack of accuracy and precision in estimating EE, especially in low-intensity activities. Thus the objective of this study was to investigate benefits of a distributed lag spline (DLS) modeling approach for the prediction of total daily EE (TEE) and EE in sedentary (1.0–1.5 metabolic equivalents; MET), light (1.5–3.0 MET), and moderate/vigorous (≥3.0 MET) intensity activities in 10- to 17-year-old youth (n = 76). We also explored feasibility of the DLS modeling approach to predict physical activity EE (PAEE) and METs. Movement was measured by Actigraph accelerometers placed on the hip, wrist, and ankle. With a whole-room indirect calorimeter as the reference standard, prediction models (Hip, Wrist, Ankle, Hip+Wrist, Hip+Wrist+Ankle) for TEE, PAEE, and MET were developed and validated using the fivefold cross-validation method. The TEE predictions by these DLS models were not significantly different from the room calorimeter measurements (all P > 0.05). The Hip+Wrist+Ankle predicted TEE better than other models and reduced prediction errors in moderate/vigorous PA for TEE, MET, and PAEE (all P < 0.001). The Hip+Wrist reduced prediction errors for the PAEE and MET at sedentary PA (P = 0.020 and 0.021) compared with the Hip. Models that included Wrist correctly classified time spent at light PA better than other models. The means and standard deviations of the prediction errors for the Hip+Wrist+Ankle and Hip were 0.4 ± 144.0 and 1.5 ± 164.7 kcal for the TEE, 0.0 ± 84.2 and 1.3 ± 104.7 kcal for the PAEE, and −1.1 ± 97.6 and −0.1 ± 108.6 MET min for the MET models. We conclude that the DLS approach for accelerometer data improves detailed EE prediction in youth. PMID:19959770
Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models
Gomez, Hector
Our algorithms for phase-field models are based on Isogeometric Analysis, a new technology that has since been applied to a variety of physical problems, including condensed matter physics [36, 37], fluid mechanics [16, 44], and solid mechanics [20, 61, 62].
Kirby, Mike
Cardiac Position Sensitivity Study in the Electrocardiographic Forward Problem Using Stochastic Collocation
The electrocardiogram (ECG) is ubiquitously employed as a diagnostic and monitoring tool for patients. The goal of this study was to evaluate the impact of positional changes of the heart, i.e., cardiac displacement, on the ECG.
Baldi, F; Alencar, M M; Albuquerque, L G
2010-12-01
The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects, were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and with a multitrait model. Results from the different models of analysis were compared using the REML form of the Akaike information criterion and Schwarz's Bayesian information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a larger number of parameters needs to be estimated with B-spline functions. PMID:21077967
NASA Technical Reports Server (NTRS)
Valero, Francisco P. J.; Cess, Robert D.; Zhang, Minghua; Pope, Shelly K.; Bucholtz, Anthony; Bush, Brett; Vitko, John, Jr.
1997-01-01
As part of the Atmospheric Radiation Measurement (ARM) Enhanced Shortwave Experiment (ARESE), we have obtained and analyzed measurements made from collocated aircraft of the absorption of solar radiation within the atmospheric column between the two aircraft. The measurements were taken during October 1995 at the ARM site in Oklahoma. Relative to a theoretical radiative transfer model, we find no evidence for excess solar absorption in the clear atmosphere and significant evidence for its existence in the cloudy atmosphere. This excess cloud solar absorption appears to occur in both visible (0.224-0.68 microns) and near-infrared (0.68-3.30 microns) spectral regions, although not at 0.5 microns for the visible contribution, and it is shown to be true absorption rather than an artifact of sampling errors caused by measuring three-dimensional clouds.
NASA Astrophysics Data System (ADS)
Crow, Wade T.; Lei, Fangni; Hain, Christopher; Anderson, Martha C.; Scott, Russell L.; Billesbach, David; Arkebauer, Timothy
2015-10-01
Land surface models (LSMs) are often applied to predict the one-way coupling strength between surface soil moisture (SM) and latent heat (LH) flux. However, the ability of LSMs to accurately represent such coupling has not been adequately established. Likewise, the estimation of SM/LH coupling strength using ground-based observational data is potentially compromised by the impact of independent SM and LH measurement errors. Here we apply a new statistical technique to acquire estimates of one-way SM/LH coupling strength that are unbiased in the presence of random error, using a triple collocation approach based on leveraging the simultaneous availability of independent SM and LH estimates acquired from (1) LSMs, (2) satellite remote sensing, and (3) ground-based observations. Results suggest that LSMs do not generally overestimate the strength of one-way surface SM/LH coupling.
Maltsev, I A; Tupitsyn, I I; Shabaev, V M; Kozhedub, Y S; Plunien, G; Stoehlker, Th
2013-01-01
A new approach for solving the time-dependent two-center Dirac equation is presented. The method is based on using a finite basis set of cubic Hermite splines on a two-dimensional lattice. The Dirac equation is treated in a rotating reference frame. The collision of U92+ (as a projectile) and U91+ (as a target) is considered at the energy E_lab = 6 MeV/u. The charge transfer probabilities are calculated for different values of the impact parameter. The obtained results are compared with previous calculations [I. I. Tupitsyn et al., Phys. Rev. A 82, 042701 (2010)], where a method based on atomic-like Dirac-Sturm orbitals was employed. This work can provide a new tool for the investigation of quantum electrodynamics effects in heavy-ion collisions near the supercritical regime.
NASA Astrophysics Data System (ADS)
Alemohammad, S. H.; McColl, K. A.; Konings, A. G.; Entekhabi, D.; Stoffelen, A.
2015-02-01
Validation of precipitation estimates from various products is a challenging problem, since the true precipitation is unknown. However, with the increased availability of precipitation estimates from a wide range of instruments (satellite, ground-based radar, and gauge), it is now possible to apply the Triple Collocation (TC) technique to characterize the uncertainties in each of the products. Classical TC takes advantage of three collocated data products of the same variable and estimates the mean squared error of each, without requiring knowledge of the truth. In this study, triplets among NEXRAD-IV, TRMM 3B42, GPCP and GPI products are used to quantify the associated spatial error characteristics across a central part of the continental US. This is the first study of its kind to explore precipitation estimation errors using TC across the United States (US). A multiplicative (logarithmic) error model is incorporated in the original TC formulation to relate the precipitation estimates to the unknown truth. For precipitation applications, this is more realistic than the additive error model used in the original TC derivations, which is generally appropriate for existing applications such as wind vector components and soil moisture comparisons. This study provides error estimates of the precipitation products that can be incorporated into hydrological and meteorological models, especially those used in data assimilation. Physical interpretations of the error fields (related to topography, climate, etc.) are explored. The methodology presented in this study could be used to quantify the uncertainties associated with precipitation estimates from each satellite of the GPM constellation. Such quantification is a prerequisite to optimally merging these estimates.
NASA Astrophysics Data System (ADS)
Alemohammad, S. H.; McColl, K. A.; Konings, A. G.; Entekhabi, D.; Stoffelen, A.
2015-08-01
Validation of precipitation estimates from various products is a challenging problem, since the true precipitation is unknown. However, with the increased availability of precipitation estimates from a wide range of instruments (satellite, ground-based radar, and gauge), it is now possible to apply the triple collocation (TC) technique to characterize the uncertainties in each of the products. Classical TC takes advantage of three collocated data products of the same variable and estimates the mean squared error of each, without requiring knowledge of the truth. In this study, triplets among NEXRAD-IV, TRMM 3B42RT, GPCP 1DD, and GPI products are used to quantify the associated spatial error characteristics across a central part of the continental US. Data are aggregated to biweekly accumulations from January 2002 through April 2014 across a 2° × 2° spatial grid. This is the first study of its kind to explore precipitation estimation errors using TC across the US. A multiplicative (logarithmic) error model is incorporated in the original TC formulation to relate the precipitation estimates to the unknown truth. For precipitation applications, this is more realistic than the additive error model used in the original TC derivations, which is generally appropriate for existing applications such as wind vector components and soil moisture comparisons. This study provides error estimates of the precipitation products that can be incorporated into hydrological and meteorological models, especially those used in data assimilation. Physical interpretations of the error fields (related to topography, climate, etc.) are explored. The methodology presented in this study could be used to quantify the uncertainties associated with precipitation estimates from each satellite of the GPM constellation. Such quantification is a prerequisite to optimally merging these estimates.
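Classical TC with a multiplicative (log) error model can be sketched on synthetic data as below; the covariance-based estimator is the standard textbook form, while the products, error magnitudes, and gamma-distributed "truth" are illustrative, not the paper's data or exact implementation.

```python
import numpy as np

def triple_collocation_ev(x, y, z):
    """Classical triple-collocation error variances for three collocated
    estimates with mutually independent, zero-mean additive errors."""
    c = np.cov(np.vstack([x, y, z]))
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex, ey, ez

rng = np.random.default_rng(2)
truth = rng.gamma(2.0, 5.0, 20000)  # synthetic "true" rain accumulations

# Multiplicative error model: work in log space, where errors are additive.
logt = np.log(truth)
x = logt + rng.normal(0.0, 0.2, truth.size)  # e.g. radar-like product
y = logt + rng.normal(0.0, 0.3, truth.size)  # e.g. satellite-like product
z = logt + rng.normal(0.0, 0.4, truth.size)  # e.g. gauge-like product

print(triple_collocation_ev(x, y, z))
```

The recovered values approximate the true log-error variances (0.04, 0.09, 0.16) without the truth ever entering the estimator, which is the core appeal of TC for precipitation validation.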
NASA Astrophysics Data System (ADS)
Zyta Hakuba, Maria; Folini, Doris; Wild, Martin; Schaepmann-Strub, Gabriela
2014-05-01
Solar radiation is the primary source of energy for the Earth's climate system. While the incoming and outgoing solar fluxes at the top-of-atmosphere can be quantified with high accuracy, large uncertainties still exist in the partitioning of solar absorption between surface and atmosphere. To compute best estimates of absorbed solar radiation at the surface and within the atmosphere representative for Europe during 2000-2010, we combine ground-based observations of surface downwelling solar radiation (GEBA, BSRN) with collocated satellite-retrieved surface albedo (MODIS) and top-of-atmosphere net irradiance (CERES EBAF, 1° resolution). The combination of these datasets over European land yields best estimates of annual mean surface and atmospheric absorption of 117 ± 6 Wm−2 (42 ± 2% of TOA incident irradiance) and 65 ± 3 Wm−2 (23 ± 1%). The fractional atmospheric absorption of 23% is a robust estimate, largely unaffected by variations in latitude and season, which makes it a potentially useful quantity for first-order validation of regional climate models. These estimates are based on quality-assessed surface data. First, we examine the temporal homogeneity of the monthly GEBA time series beyond 2000 and find the vast majority to be suitable for our purposes. We then assess the spatial representativeness of the GEBA and BSRN sites for their collocated 1° CERES EBAF grid cells by using a satellite-derived surface solar radiation product (CM SAF) at 0.03° spatial resolution, finding representation errors of, on average, 3 Wm−2 or 2% (normalized by point values). Care is taken to identify and quantify uncertainties, which arise mostly from the measurements themselves, in particular surface albedo and ground-based solar radiation data.
Other sources of uncertainty, like the spatial coverage by surface sites, the multiplicative combination of spatially averaged surface solar radiation and surface albedo, and the spatial representativeness of the point observations, are either negligibly small or can be corrected for.
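The surface/atmosphere partitioning described above reduces to simple flux arithmetic once the collocated quantities are in hand. A minimal sketch; the function name and the illustrative numbers (chosen to land near the European means quoted above) are ours, not values from the datasets themselves:

```python
def solar_absorption_partition(ssr, albedo, toa_net, toa_incident):
    """Partition absorbed solar radiation (ASR) between surface and atmosphere.

    ssr          -- surface downwelling solar radiation, W m-2 (e.g. GEBA/BSRN)
    albedo       -- surface albedo, 0..1 (e.g. MODIS)
    toa_net      -- net incoming solar radiation at TOA, W m-2 (e.g. CERES EBAF)
    toa_incident -- TOA incident solar irradiance, W m-2
    """
    asr_surf = ssr * (1.0 - albedo)  # part of the downwelling flux the surface keeps
    asr_atm = toa_net - asr_surf     # remainder of the TOA net flux
    return (asr_surf, asr_atm,
            100.0 * asr_surf / toa_incident,  # percent of TOA incident
            100.0 * asr_atm / toa_incident)

# Illustrative inputs only, picked so the result lands near the
# 117 / 65 W m-2 (42% / 23%) European means reported in the abstract.
surf, atm, surf_pct, atm_pct = solar_absorption_partition(
    ssr=137.6, albedo=0.15, toa_net=182.0, toa_incident=278.6)
```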
NASA Astrophysics Data System (ADS)
Askari, H.; Esmailzadeh, E.; Barari, A.
2015-09-01
A novel procedure for the nonlinear vibration analysis of curved beams is presented. The Non-Uniform Rational B-Spline (NURBS) is combined with the Euler-Bernoulli beam theory to define the curvature of the structure. The governing equation of motion and a general frequency formula, expressed in the NURBS variables and applicable to any type of curvature, are developed. The Galerkin procedure is implemented to obtain the nonlinear ordinary differential equation of the curved system, and the multiple time scales method is utilized to find the corresponding frequency responses. As a case study, the nonlinear vibration of carbon nanotubes with different shapes of curvature is investigated. The effect of oscillation amplitude and waviness on the natural frequency of the curved nanotube is evaluated, and the primary resonance of the system with respect to variations of different parameters is discussed. For comparison with molecular dynamics simulation, the natural frequencies evaluated from the proposed approach are compared with those reported in the literature for several types of carbon nanotubes.
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. PMID:23034770
Keith, Scott W.; Allison, David B.
2014-01-01
This paper details the design, evaluation, and implementation of a framework for detecting and modeling nonlinearity between a binary outcome and a continuous predictor variable adjusted for covariates in complex samples. The framework provides familiar-looking parameterizations of output in terms of linear slope coefficients and odds ratios. Estimation methods focus on maximum likelihood optimization of piecewise linear free-knot splines formulated as B-splines. Correctly specifying the optimal number and positions of the knots improves the model, but is marked by computational intensity and numerical instability. Our inference methods utilize both parametric and nonparametric bootstrapping. Unlike other nonlinear modeling packages, this framework is designed to incorporate multistage survey sample designs common to nationally representative datasets. We illustrate the approach and evaluate its performance in specifying the correct number of knots under various conditions with an example using body mass index (BMI; kg/m2) and the complex multistage sampling design from the Third National Health and Nutrition Examination Survey to simulate binary mortality outcomes data having realistic nonlinear sample-weighted risk associations with BMI. BMI and mortality data provide a particularly apt example and area of application since BMI is commonly recorded in large health surveys with complex designs, often categorized for modeling, and nonlinearly related to mortality. When complex sample design considerations were ignored, our method was generally similar to or more accurate than two common model selection procedures, Schwarz’s Bayesian Information Criterion (BIC) and Akaike’s Information Criterion (AIC), in terms of selecting the correct number of knots. Our approach provided accurate knot selections when complex sampling weights were incorporated, while AIC and BIC were not effective under these conditions. PMID:25610831
A method to correct coordinate distortion in EBSD maps
Zhang, Y.B.; Elbrønd, A.; Lin, F.X.
2014-10-15
Drift during electron backscatter diffraction mapping leads to coordinate distortions in the resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method based on thin plate splines is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction, as well as theoretical and practical aspects of using the thin plate spline method, is discussed in detail. By comparison with other correction methods, it is shown that the thin plate spline method is the most efficient at correcting different local distortions in the electron backscatter diffraction maps. - Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors of less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data is possible after this correction.
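A thin plate spline correction of this kind amounts to fitting a smooth 2-D warp through control points whose true positions are known and then applying it to every map coordinate. A minimal sketch using SciPy's RBF interpolator; the synthetic drift field and point counts here are invented for illustration, and the paper's own implementation may differ:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_corrector(distorted_pts, reference_pts):
    """Fit a thin plate spline warp from drift-distorted map coordinates to
    reference coordinates; returns a callable that corrects any coordinates."""
    return RBFInterpolator(distorted_pts, reference_pts,
                           kernel='thin_plate_spline')

# Synthetic example: control points displaced by a smooth drift field.
rng = np.random.default_rng(1)
ref = rng.uniform(0.0, 100.0, size=(20, 2))            # true positions
distorted = ref + np.column_stack([0.05 * ref[:, 0],   # slow drift in x
                                   0.02 * ref[:, 1]])  # slow drift in y
correct = tps_corrector(distorted, ref)
corrected = correct(distorted)                          # maps back onto ref
```

Because the default smoothing is zero, the warp interpolates the control points exactly; between them the thin plate spline gives the minimum-bending-energy deformation.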
on Sustainable Manufacturing for Emerging Technologies Sponsored by the ASME Manufacturing Engineering Division's Sustainable Production Automation Technical Committee 2013 ASME Manufacturing Science and Engineering* The conference is collocated with the 41st North American Manufacturing Research Conference
Random regression analyses using B-spline functions to model growth of Nellore cattle.
Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G
2012-02-01
The objective of this study was to estimate (co)variance components using random regression on B-spline functions to weight records obtained from birth to adulthood. A total of 82,064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effects) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as a random covariate. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped into five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots (three segments) for the direct additive genetic and animal permanent environmental effects and two knots for the maternal additive genetic and maternal permanent environmental effects, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight at young ages should be performed taking into account an increase in mature cow weight. This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained on range conditions.
There is, however, limited scope for modifying the growth curve of Nellore cattle so as to select for rapid growth at young ages while maintaining constant adult weight. PMID:22436178
NASA Technical Reports Server (NTRS)
Anuta, P. E.
1975-01-01
Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.
Implementation of B-splines in a Conventional Finite Element Framework
Owens, Brian C.
2010-01-16
also be expressed as

$$\frac{\partial u_i}{\partial q_\alpha} = \begin{bmatrix} N_1 & 0 & 0 & N_2 & 0 & 0 & N_3 & 0 & 0 & \cdots & N_n & 0 & 0 \\ 0 & N_1 & 0 & 0 & N_2 & 0 & 0 & N_3 & 0 & \cdots & 0 & N_n & 0 \\ 0 & 0 & N_1 & 0 & 0 & N_2 & 0 & 0 & N_3 & \cdots & 0 & 0 & N_n \end{bmatrix} \qquad (2.21)$$

Therefore the load vector $F_\alpha$ may be expressed as

$$F_\alpha = \int_\Omega f_i \frac{\partial u_i}{\partial q_\alpha}\, d\Omega + \int_\Gamma T_i \frac{\partial u_i}{\partial q_\alpha}\, d\Gamma \qquad (2\ldots)$$

[Excerpted table of contents: Overview; Knot Vector; Basis Functions; Non-recursive Calculation of B-spline Basis…]
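The B-spline basis functions referenced in the excerpt above can be evaluated with the standard Cox–de Boor recursion; a compact sketch of that recursion (our own, not code from the thesis, which also treats a non-recursive formulation):

```python
def bspline_basis(i, p, knots, x):
    """Value of the i-th B-spline basis function of degree p at x,
    via the Cox-de Boor recursion (half-open span convention)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:          # skip zero-width spans
        left = (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Clamped cubic knot vector on [0, 3]: n = len(knots) - p - 1 = 6 basis functions.
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
values = [bspline_basis(i, 3, knots, 1.5) for i in range(6)]
```

On the interior of the knot span the basis functions are non-negative and sum to one (partition of unity), which is what makes them well suited as finite element shape functions.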
Deng, Shirong; Liu, Li; Zhao, Xingqiu
2015-09-01
This article discusses the statistical analysis of panel count data when the underlying recurrent event process and observation process may be correlated. For the recurrent event process, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates. For inference on the model parameters, a monotone spline-based least squares estimation approach is developed, and the resulting estimators are consistent and asymptotically normal. In particular, our new approach does not rely on the model specification of the observation process. The proposed inference procedure performs well through simulation studies, and it is illustrated by the analysis of bladder tumor data. PMID:26095984
NASA Astrophysics Data System (ADS)
Ferreira, A. J. M.; Carrera, E.; Cinefra, M.; Roque, C. M. C.
2011-07-01
In this paper, the static and free vibration analysis of laminated shells is performed by radial basis function (RBF) collocation, according to a layerwise deformation theory (LW). The present LW theory accounts for through-the-thickness deformation by considering a Mindlin-like evolution of all displacements in each layer. The equations of motion and the boundary conditions are obtained by Carrera's unified formulation, and further interpolated by collocation with RBFs.
Bantis, Leonidas E; Tsimikas, John V; Georgiou, Stelios D
2013-09-01
The use of ROC curves in evaluating a continuous or ordinal biomarker for the discrimination of two populations is commonplace. However, in many settings, marker measurements above or below a certain value cannot be obtained. In this paper, we study the construction of a smooth ROC curve (or surface in the case of three populations) when there is a lower or upper limit of detection. We propose the use of spline models that incorporate monotonicity constraints for the cumulative hazard function of the marker distribution. The proposed technique is computationally stable and simulation results showed a satisfactory performance. Other observed covariates can be also accommodated by this spline-based approach. PMID:23553499
Waterworth, John A
2002-01-01
The current experiment was carried out to extend our knowledge about the relative importance of stereoscopic display and hand-image collocation for dextrous interaction. We devised a new task, the Volumetric Dexterity Test (VDT), which quite accurately duplicates the way professional personnel such as surgeons and radiologists interact with detailed medical data in a VR environment. Our results were surprising. Stereo vision was very important to both accuracy and speed of task completion, as we found previously. But the presence of hand-image collocation did not improve accuracy, despite the fact that this was a truly three-dimensional task. If this finding is borne out it has important implications for the volumetric presentation of medical data to individual practitioners and in group settings. PMID:15458152
NASA Astrophysics Data System (ADS)
Zyta Hakuba, Maria; Folini, Doris; Wild, Martin; Sanchez-Lorenzo, Arturo
2013-04-01
Anthropogenic climate change is, physically speaking, a perturbation of the atmospheric energy budget through the insertion of constituents such as greenhouse gases or aerosols. Changes in the atmospheric energy budget largely affect the global climate and hydrological cycle, but the quantification of the different energy balance components is still afflicted with large uncertainties. The overall aim of the present study is the assessment of the mean state and the spatio-temporal variations in the solar energy disposition, with a focus on obtaining an accurate partitioning of absorbed solar radiation between the surface and the atmosphere. Surface-based measurements of solar radiation (GEBA, BSRN) are combined with collocated satellite-retrieved surface albedo (MODIS, CERES FSW, or CM SAF GAC-SAL) and top-of-atmosphere net incoming solar radiation (CERES EBAF) to quantify the absorbed solar radiation (ASR) at the surface and within the atmosphere over Europe for the period 2001-2005. In a first step, we examine the quality and temporal homogeneity of the monthly time series beyond 2000 provided by GEBA in order to identify a subset of sufficient quality. We find the vast majority of monthly time series to be suitable for our purposes. Using the satellite-derived CM SAF surface solar radiation product at 0.03° spatial resolution, we assess the spatial representativeness of the GEBA and BSRN sites for their collocated 1° grid cells, as we intend to combine the point measurements with the coarser resolved CERES EBAF products (1° resolution), and we find spatial sampling errors of, on average, 3 Wm-2 or 2% (normalized by point values).
Based on the combination of 134 GEBA surface solar radiation (SSR) time series with MODIS white-sky albedo and CERES EBAF top-of-atmosphere net radiation (TOAnet), we obtain a European mean partitioning (2001-2005) of absorbed solar radiation (relative to total incoming radiation) of: ASRsurf= 41% and ASRatm= 25%, together equaling TOAnet=66%. Based on 4 BSRN sites in combination with CERES FSW surface albedo and CERES EBAF TOAnet radiation, the partitioning is: ASRsurf=42% and ASRatm= 24%, equaling TOAnet=66%. During 2001-2005, we find a significant brightening in SSR and ASRsurf over Europe (GEBA) of around 1.3 Wm-2yr-1, and a decrease in ASRatm of -1 Wm-2yr-1. The mean-state of the energy balance components is thus largely determined by the period of consideration and varies by up to 5 Wm-2 from year to year. We apply the spatial interpolation technique kriging to the annual mean SSR (based on CM SAF) within 1° grid cells as collocated to the 134 GEBA sites and find the gridded data to be in very good agreement with the original satellite-derived SSR (aggregated onto 1° grid). This result suggests good spatial coverage of the GEBA data, and the possibility of generating a gridded data set based on the in-situ measurements. The possibility of expanding these analyses to the global scale and the application of the obtained data for the validation of global and regional climate models are discussed as well.
An Analysis of Peak Wind Speed Data from Collocated Mechanical and Ultrasonic Anemometers
NASA Technical Reports Server (NTRS)
Short, David A.; Wells, Leonard; Merceret, Francis J.; Roeder, William P.
2007-01-01
This study compared peak wind speeds reported by mechanical and ultrasonic anemometers at Cape Canaveral Air Force Station and Kennedy Space Center (CCAFS/KSC) on the east central coast of Florida and Vandenberg Air Force Base (VAFB) on the central coast of California. Launch Weather Officers, forecasters, and Range Safety analysts need to understand the performance of wind sensors at CCAFS/KSC and VAFB for weather warnings, watches, advisories, special ground processing operations, launch pad exposure forecasts, user Launch Commit Criteria (LCC) forecasts and evaluations, and toxic dispersion support. The legacy CCAFS/KSC and VAFB weather tower wind instruments are being changed from propeller-and-vane (CCAFS/KSC) and cup-and-vane (VAFB) sensors to ultrasonic sensors under the Range Standardization and Automation (RSA) program. Mechanical and ultrasonic wind measuring techniques are known to cause differences in the statistics of peak wind speed as shown in previous studies. The 45th Weather Squadron (45 WS) and the 30th Weather Squadron (30 WS) requested the Applied Meteorology Unit (AMU) to compare data between the RSA ultrasonic and legacy mechanical sensors to determine if there are significant differences. Note that the instruments were sited outdoors under naturally varying conditions and that this comparison was not designed to verify either technology. Approximately 3 weeks of mechanical and ultrasonic wind data from each range from May and June 2005 were used in this study. The CCAFS/KSC data spanned the full diurnal cycle, while the VAFB data were confined to 1000-1600 local time. The sample of 1-minute data from numerous levels on five different towers on each range totaled more than 500,000 minutes of data (482,979 minutes of data after quality control). The ten towers were instrumented at several levels, ranging from 12 ft to 492 ft above ground level. 
The ultrasonic sensors were collocated at the same vertical levels as the mechanical sensors and typically within 15 ft horizontally of each other. Data from a total of 53 RSA ultrasonic sensors collocated with mechanical sensors were compared. The 1-minute average wind speed/direction and the 1-second peak wind speed/direction were compared.
Multispectral cloud-clearing using IASI sounding and collocated AVHRR imager measurements
NASA Astrophysics Data System (ADS)
Maddy, E. S.; King, T. S.; Sun, H.; Wolf, W.; Barnet, C.; Heidinger, A. K.; Cheng, Z.; Gambacorta, A.
2010-12-01
There are several approaches for handling the effect of clouds in the IR, the most common of which include: avoiding the clouds by screening for clear-sky footprints; directly modeling the radiative effect of the clouds using sophisticated radiative transfer and cloud microphysical models; and estimating the clear-sky portion of an IR scene by using a number of adjacent and variably cloudy footprints coupled with an estimate of the clear-sky radiance from a forecast model or a collocated satellite instrument less likely to be affected by clouds. The last approach, termed cloud-clearing, is currently used at NOAA/NESDIS for operational IASI processing. NOAA currently operationally processes 100% of IASI data from calibrated and apodized L1C spectral measurements to geophysical L2 products and distributes these products to the NOAA Comprehensive Large Array-data Stewardship System (CLASS) (available at http://class.ngdc.noaa.gov). The current algorithm used to produce the L2 products from IASI is largely based on the AIRS science team (AST) algorithm, including the fast Radiative Transfer Algorithm (RTA), fast eigenvector regression, and the cloud-clearing and physical retrieval methodologies, which rely on microwave measurements from collocated AMSU to handle the effects of clouds in the IR. We will describe future upgrades to the operational cloud-clearing algorithm being used for IASI processing within NOAA/NESDIS. Specifically, our new cloud-clearing algorithm leverages the MetOp-A AVHRR Clouds from AVHRR (CLAVR-x) cloud mask to provide the high-quality, high-spatial-resolution infrared (IR) window clear-sky scene radiance estimates required as cloud-clearing inputs and for quality assurance.
The direct use of AVHRR clear-sky measurements reduces the limitations of the current algorithm, providing high-quality clear-sky radiance estimates throughout the atmospheric column, and especially near the surface. In turn, this enables the IASI sounder to provide high-quality, high-vertical- and spatial-resolution soundings of temperature and trace gases for the study of weather and climate processes.
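The adjacent-footprint cloud-clearing idea can be illustrated with the classic two-field-of-view form: an extrapolation parameter η is derived in a window channel where an independent clear-sky estimate is available (the role played here by the AVHRR-based estimate), then applied across the spectrum. This is a schematic sketch of the principle, not the operational NOAA algorithm:

```python
import numpy as np

def cloud_clear_two_fov(r1, r2, r_clear_window, window_idx):
    """Two-FOV cloud clearing: r1 and r2 are spectra from adjacent footprints
    with different cloud amounts (r1 the less cloudy). eta is fixed by forcing
    agreement with a clear-sky estimate in one window channel, then the same
    extrapolation R_cc = R1 + eta*(R1 - R2) is applied to every channel."""
    eta = (r_clear_window - r1[window_idx]) / (r1[window_idx] - r2[window_idx])
    return r1 + eta * (r1 - r2)
```

When a single cloud formation with different fractional coverage fills the two footprints, this extrapolation recovers the clear-sky spectrum exactly; real scenes with multiple cloud layers require the more elaborate machinery described in the abstract.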
NASA Astrophysics Data System (ADS)
Weber, F.; Distl, H.
2015-11-01
This paper derives an approximate collocated control solution for the mitigation of multi-mode cable vibration by semi-active damping with negative stiffness based on the control force characteristics of clipped linear quadratic regulator (LQR). The control parameters are derived from optimal modal viscous damping and corrected in order to guarantee that both the equivalent viscous damping coefficient and the equivalent stiffness coefficient of the semi-active cable damper force are equal to their desired counterparts. The collocated control solution with corrected control parameters is numerically validated by free decay tests of the first four cable modes and combinations of these modes. The results of the single-harmonic tests demonstrate that the novel approach yields 1.86 times more cable damping than optimal modal viscous damping and 1.87 to 2.33 times more damping compared to a passive oil damper whose viscous damper coefficient is optimally tuned to the targeted mode range of the first four modes. The improvement in case of the multi-harmonic vibration tests, i.e. when modes 1 and 3 and modes 2 and 4 are vibrating at the same time, is between 1.55 and 3.81. The results also show that these improvements are obtained almost independent of the cable anti-node amplitude. Thus, the proposed approximate real-time applicable collocated semi-active control solution which can be realized by magnetorheological dampers represents a promising tool for the efficient mitigation of stay cable vibrations.
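The clipped-control idea can be sketched as follows: compute the desired force of an active controller with viscous damping plus negative stiffness, and realize it only when it is dissipative (opposing the cable's local velocity), since that is all a semi-active device can deliver. A schematic sketch with invented parameter names and sign conventions, not the authors' corrected control law:

```python
def clipped_semi_active_force(x, v, c_des, k_des, f_max=float('inf')):
    """Desired active force f_des = -c_des*v - k_des*x (k_des < 0 gives
    negative stiffness). A semi-active damper can only oppose the velocity,
    so non-dissipative demands are clipped to zero."""
    f_des = -c_des * v - k_des * x
    if f_des * v < 0.0:                  # dissipative: the damper can track it
        mag = min(abs(f_des), f_max)     # saturate at device capacity
        return mag if f_des > 0.0 else -mag
    return 0.0                           # clipped: the demand was active
```

A magnetorheological damper realizes the dissipative branch by modulating its controllable friction/viscous force in real time; the clipping is what makes the emulated negative stiffness only approximate.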
Centers, P.W.
1995-05-01
Dependent upon molecular weight and distribution, concentration, temperature, air flow, and test details or field application, polydimethylsiloxane (PDMS) may be neutral, profoamant or antifoamant in polyolesters. This understanding was critical in the solution of a turbine engine lubrication system foaming problem occurring at several military locations. Suspect turbine engine-accessory gearbox assembly materials gathered from several sites were evaluated. One non-specification PDMS-based spline lubricant caused copious foaming of the lubricant at less than ten parts-per-million concentration, while a specification polymethyl-phenylsiloxane (PMPS)-based lubricant required a concentration nearly 2000 times greater to generate equivalent foam. Use of the profoamant PDMS spline lubricant was then prohibited. Since prohibition, foaming of turbine engine lubricants used in the particular application has not been reported. PMPS impact on foaming of ester lubricants is similar to a much more viscous PDMS attributed to the reduced interaction of PMPS in esters due to pendant phenyl structure of PMPS absent in PDMS. These data provide significant additional insight and methodology to investigate foaming tendencies of partially miscible silicone-ester and other fluid systems. 7 refs., 2 figs., 1 tab.
NASA Astrophysics Data System (ADS)
Marghany, Maged
2014-06-01
A critical challenge in urban areas is slums. In fact, they are considered a source of crime and disease due to poor-quality housing, unsanitary conditions, poor infrastructure and insecure tenure. The poor in dense urban slums are the most vulnerable to infection due to (i) inadequate and restricted access to safe drinking water and sufficient quantities of water for personal hygiene; (ii) the lack of removal and treatment of excreta; and (iii) the lack of removal of solid waste. This study aims to investigate the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) reconstruction of urban slums in developing countries such as Egypt. The main objective of this work is to apply a 3-D automatic detection algorithm, based on fuzzy B-splines, to ENVISAT ASAR and Google Earth images acquired over Cairo, Egypt. The results show that the fuzzy algorithm is the best indicator for chaotic urban slums, as it can discriminate them from their surrounding environment. The combination of fuzzy sets and B-splines is then used to reconstruct the slums in 3-D. The results show that urban slums, road networks, and infrastructure are well discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.
Nakashima, Eiji
2015-07-01
Using the all solid cancer mortality data set of the Life Span Study (LSS) cohort from 1950 to 2003 (LSS Report 14) among atomic bomb survivors, excess relative risk (ERR) analyses were performed using second-degree polynomial, threshold, and restricted cubic spline (RCS) dose-response models. For the RCS models with 3 to 7 knots placed at equally spaced percentiles (with margins) in the dose range greater than 50 mGy, the dose response was assumed to be linear below 70 to 90 mGy. Due to the skewed dose distribution of atomic bomb survivors, the current knot system for the RCS analysis results in a detailed depiction of the dose response below approximately 0.5 Gy. The 6-knot RCS models for the all-solid-cancer mortality dose response over the whole dose range, or restricted to less than 2 Gy, were selected with the AIC model selection criterion and fit significantly better (p < 0.05) than the linear (L) model. The usual RCS includes the L global model but not the quadratic (Q) or linear-quadratic (LQ) global models. The authors extended the RCS to include L or LQ global models by putting L or LQ constraints on the cubic spline in the lower and upper tails, and the best RCS model selected with the AIC criterion was the usual RCS with L constraints in both the lower and upper tails. The selected RCS had a linear dose response in the lower dose range (i.e., < 0.2-0.3 Gy) and was compatible with the linear no-threshold (LNT) model in this dose range. The proposed method is also useful in describing the dose response of a specific cancer or of non-cancer disease incidence/mortality. PMID:26011495
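The restricted cubic spline basis used in such dose-response analyses (linear in both tails, cubic between knots) has a closed form; a sketch in Harrell's parameterization follows. This is our own illustrative implementation, not the authors' code:

```python
import numpy as np

def rcs_basis(x, knots):
    """Design columns [x, s_1(x), ..., s_{k-2}(x)] of a restricted cubic
    spline with knots t_1 < ... < t_k: cubic between knots, constrained to
    be linear below t_1 and above t_k (Harrell's scaling by (t_k - t_1)^2)."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    pp3 = lambda u: np.maximum(u, 0.0) ** 3          # truncated cubic
    norm = (t[-1] - t[0]) ** 2
    cols = [x]
    for j in range(len(t) - 2):
        s = (pp3(x - t[j])
             - pp3(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
             + pp3(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(s / norm)
    return np.column_stack(cols)
```

The two constraints on each truncated-cubic term are chosen so that the cubic and quadratic coefficients cancel beyond the last knot, which is exactly the tail-linearity the abstract exploits when adding L or LQ constraints.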
Abdoun, Oussama; Joucla, Sébastien; Mazzocco, Claire; Yvert, Blaise
2010-01-01
A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrode array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several square millimeters) of whole neural structures combined with microscopic resolution (about 50 µm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods currently in use. First, any MEA design can be considered, including 2D or 3D, regular or irregular, arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better localization of current sources. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under the GNU General Public License and available at http://sites.google.com/site/neuromapsoftware. PMID:21344013
NASA Technical Reports Server (NTRS)
Huang, Jingfeng; Hsu, N. Christina; Tsay, Si-Chee; Holben, Brent N.; Welton, Ellsworth J.; Smirnov, Alexander; Jeong, Myeong-Jae; Hansell, Richard A.; Berkoff, Timothy A.
2012-01-01
Cirrus clouds, particularly subvisual high thin cirrus with low optical thickness, are difficult to screen in operational aerosol retrieval algorithms. Collocated aerosol and cirrus observations from ground measurements, such as the Aerosol Robotic Network (AERONET) and the Micro-Pulse Lidar Network (MPLNET), provide us with an unprecedented opportunity to examine the susceptibility of operational aerosol products to thin cirrus contamination. Quality-assured aerosol optical thickness (AOT) measurements were also tested against the CALIPSO vertical feature mask (VFM) and the MODIS-derived thin cirrus screening parameters for the purpose of evaluating thin cirrus contamination. Key results of this study include: (1) quantitative evaluations of data uncertainties in AERONET AOT retrievals; although AERONET cirrus screening schemes are successful in removing most cirrus contamination, strong residuals with strong spatial and seasonal variability still exist, particularly over thin-cirrus-prevalent regions during cirrus peak seasons; (2) challenges in matching up different data for analysis are highlighted and corresponding solutions proposed; and (3) estimates of the relative contributions of cirrus contamination to aerosol retrievals are discussed. The results are valuable for better understanding and further improving ground aerosol measurements that are critical for aerosol-related climate research.
A domain adaptive stochastic collocation approach for analysis of MEMS under uncertainties
Agarwal, Nitin; Aluru, N.R.
2009-11-01
This work proposes a domain adaptive stochastic collocation approach for uncertainty quantification, suitable for effective handling of discontinuities or sharp variations in the random domain. The basic idea of the proposed methodology is to adaptively decompose the random domain into subdomains. Within each subdomain, a sparse grid interpolant is constructed using the classical Smolyak construction [S. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, Soviet Math. Dokl. 4 (1963) 240-243], to approximate the stochastic solution locally. The adaptive strategy is governed by the hierarchical surpluses, which are computed as part of the interpolation procedure. These hierarchical surpluses then serve as an error indicator for each subdomain and trigger subdivision whenever they exceed a threshold value. The hierarchical surpluses also provide information about the more important dimensions, and accordingly the random elements can be split along those dimensions. The proposed adaptive approach is employed to quantify the effect of uncertainty in input parameters on the performance of micro-electromechanical systems (MEMS). Specifically, we study the effect of uncertain material properties and geometrical parameters on the pull-in behavior and actuation properties of a MEMS switch. Using the adaptive approach, we resolve the pull-in instability in MEMS switches. The results from the proposed approach are verified using Monte Carlo simulations, and it is demonstrated that the approach computes the required statistics effectively.
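The hierarchical-surplus refinement idea can be illustrated in one dimension with piecewise-linear interpolation (the full method uses Smolyak sparse grids in many dimensions). The function name and the kink at x = 0.3, a stand-in for a pull-in discontinuity, are hypothetical:

```python
import numpy as np

def hierarchical_surpluses(f, levels=5):
    # Level-0 nodes of a dyadic grid on [0, 1]
    xs, ys = [0.0, 1.0], [f(0.0), f(1.0)]
    surpluses = {}
    for level in range(1, levels + 1):
        new = [(2 * k + 1) / 2 ** level for k in range(2 ** (level - 1))]
        # Surplus = sampled value minus the piecewise-linear interpolant
        # built from the coarser nodes only
        for x in new:
            surpluses[x] = f(x) - np.interp(x, xs, ys)
        xs += new
        ys += [f(x) for x in new]
        order = np.argsort(xs)
        xs = [xs[i] for i in order]
        ys = [ys[i] for i in order]
    return surpluses

# Sharp variation at x = 0.3, mimicking a pull-in instability boundary
surp = hierarchical_surpluses(lambda x: abs(x - 0.3))
flagged = [x for x, s in surp.items() if abs(s) > 0.01]
```

Away from the kink the surpluses vanish (the function is locally linear), so only nodes near x = 0.3 would be flagged for subdivision, which is exactly the error-indicator behavior described in the abstract.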
NASA Astrophysics Data System (ADS)
Li, S. P.; Chen, G.; Li, J. W.
2015-11-01
By fitting the observed velocity field of the Tianshan Mountains from 1992 to 2006 with least-squares collocation, we established a velocity field model for this region. The model reflects the crustal deformation characteristics of the Tianshan reasonably well. From the Tarim Basin to the Junggar Basin and Kazakh platform, crustal deformation decreases gradually. Divided at 82° E, the convergence rates in the west are obviously higher than those in the east. We also calculated crustal strain parameters for the Tianshan Mountains. The results for maximum shear strain exhibit a concentration of significantly high values at Wuqia and the regions to its west, reaching a maximum of 4.4×10-8 a-1. According to the isogram distributions of the surface expansion rate, we found evidence that the Tianshan Mountains have been subjected to strong lateral extrusion by the basins on both sides. Combining this analysis with existing focal mechanism solutions from 1976 to 2014, we conclude that earthquake events tend to concentrate in regions where maximum shear strain accumulates or changes abruptly. For the Tianshan Mountains, the possibility of strong earthquakes in Wuqia-Jiashi and Lake Issyk-Kul will persist over the long term.
Sun, Weiwei; Huang, Weizhang; Russell, Robert D.
1996-12-01
[Abstract text unrecoverable: the extraction yielded only garbled numerical tables and SIAM journal watermark residue; the article concerns preconditioning for collocation systems.]
The importance of temporal collocation for the evaluation of aerosol models with observations
NASA Astrophysics Data System (ADS)
Schutgens, N. A. J.; Partridge, D. G.; Stier, P.
2015-09-01
It is often implicitly assumed that over suitably long periods the means of observations and models should be comparable, even if they have different temporal sampling. We assess the errors incurred by ignoring temporal sampling and show they are of similar magnitude to (but smaller than) actual model errors (20-60 %). Using temporal sampling from remote sensing datasets (the satellite imager MODIS and the ground-based sun photometer network AERONET) and three different global aerosol models, we compare annual and monthly averages of full model data to sampled model data. Our results show that sampling errors as large as 100 % in AOT (Aerosol Optical Thickness), 0.4 in AE (Ångström Exponent) and 0.05 in SSA (Single Scattering Albedo) are possible. Even in daily averages, sampling errors can be significant. Moreover, these sampling errors are often correlated over long distances, giving rise to artificial contrasts between pristine and polluted events and regions. Additionally, we provide evidence suggesting that models will underestimate these errors. To prevent sampling errors, model data should be temporally collocated to the observations before any analysis is made. We also discuss how this work has consequences for in-situ measurements (e.g. aircraft campaigns or surface measurements) in model evaluation.
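The effect of temporal sampling on averages can be sketched with synthetic data. The daily AOT series and the 40 % observation coverage below are illustrative assumptions, not any of the models or networks studied:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical year of daily model AOT: clean background plus episodic spikes
model_aot = 0.1 + 0.4 * (rng.random(365) < 0.1) * rng.random(365)
# Observation availability (e.g. cloud-free retrieval days, ~40% of the year)
observed = rng.random(365) < 0.4

full_mean = model_aot.mean()                  # "model" annual mean
collocated_mean = model_aot[observed].mean()  # model sampled like the observations
sampling_error = abs(full_mean - collocated_mean) / full_mean
```

Comparing full_mean with an observed mean conflates model error and sampling error; comparing collocated_mean with the observations isolates the model error, which is the collocation step the abstract recommends.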
A spline-based non-linear diffeomorphism for multimodal prostate registration
Kato, Zoltan
…of established correspondences should be included in the system of equations.
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective was to estimate genetic parameters for individual test-day milk, fat, and protein yields with a cubic spline model. A total of 196,687 test-day records in the first 305-d of 38,172 first lactation Holstein cows that calved between 1994 and early 1999 were obtained from Dairy Records Ma...
Martin, Fernando; Horner, Daniel A.; Vanroose, Wim; Rescigno,Thomas N.; McCurdy, C. William
2005-11-04
We report a fully ab initio implementation of exterior complex scaling in B-splines to evaluate total, singly and triply differential cross sections in double photoionization problems. Results for He and H2 double photoionization are presented and compared with experiment.
Baltazar, J. C.; Claridge, D. E.
2002-01-01
The paper presents seventeen approaches that use cubic splines and Fourier series for restoring short-term missing data in time series of building energy use and weather data. The study is based on twenty samples of hourly data, each at least one...
Baltazar-Cervantes, J. C.; Claridge, D. E.
2002-01-01
This paper presents seventeen approaches that use cubic splines and Fourier series for restoring short-term missing data in time series of building energy use and weather data. The study is based on twenty samples of hourly data, each at least one...
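The cubic-spline restoration of a short gap can be sketched with scipy.interpolate.CubicSpline. The sinusoidal "load" series and the 4-hour gap are hypothetical, standing in for hourly building-energy data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical hourly energy-use series (kWh) with a daily cycle
t = np.arange(72, dtype=float)
load = 50 + 20 * np.sin(2 * np.pi * t / 24)
missing = (t >= 30) & (t < 34)          # short missing stretch

# Fit a cubic spline to the surviving samples, then fill the gap
spline = CubicSpline(t[~missing], load[~missing])
restored = load.copy()
restored[missing] = spline(t[missing])
max_err = np.abs(restored[missing] - load[missing]).max()
```

For a gap that is short relative to the dominant cycle, the spline tracks the underlying signal closely, which is the regime these papers target.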
Chen, Sheng
The system identification and control of Hammerstein systems using non-uniform rational B-spline neural network and particle swarm optimization
Technology Transfer Automated Retrieval System (TEKTRAN)
Genetic parameters were estimated with REML for individual test-day milk, fat, and protein yields and SCS with a random regression cubic spline model. Test-day records of Holstein cows that calved from 1994 through early 1999 were obtained from Dairy Records Management Systems in Raleigh, North Car...
Optimal Trajectory Generation for a Glider in Time-Varying 2D Ocean Flows B-spline Model
Ober-Blöbaum, Sina
Lagrangian Coherent Structures (LCS) are shown to be useful in determining near-optimal trajectories for autonomous gliders; a Nonlinear Trajectory Generation (NTG) algorithm is used to find the optimal trajectory of the glider in time-varying 2D ocean flows modeled with B-splines.
NASA Astrophysics Data System (ADS)
Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza
2014-10-01
The aim of this research work is to build a regression model of the particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at a local scale. This research work explores the use of a nonparametric regression algorithm known as multivariate adaptive regression splines (MARS), which has the ability to approximate the relationship between the inputs and outputs and express the relationship mathematically. In this sense, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, the experimental dataset of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) was collected over 3 years (2006-2008) and used to create a highly nonlinear model of the PM10 in the Oviedo urban nucleus (Northern Spain) based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence of PM10 on the other pollutants in the Oviedo urban area at a local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establishes the limit values of the main pollutants in the atmosphere in order to ensure the health of healthy people. Firstly, this MARS regression model captures the main perception of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency.
Finally, on the basis of these numerical calculations using the multivariate adaptive regression splines (MARS) technique, the conclusions of this research work are presented.
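The hinge-function basis at the heart of MARS can be sketched by hand. The single predictor, the knot locations, and the piecewise-linear response below are illustrative; a real MARS implementation selects knots and interaction terms adaptively:

```python
import numpy as np

def hinge_basis(x, knots):
    """MARS-style basis: intercept plus the hinge pair
    max(0, x - t) and max(0, t - x) at each candidate knot t."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

# Hypothetical predictor (e.g. a NOx level) and a response (e.g. PM10)
# whose slope changes at x = 4
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 4, 2 * x, 8 + 0.5 * (x - 4))

B = hinge_basis(x, knots=[2, 4, 6])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
resid = np.abs(B @ coef - y).max()
```

Because the basis contains a hinge pair at the true slope-change point x = 4, the least-squares fit reproduces the piecewise-linear response essentially exactly, illustrating why MARS models are easy to interpret: each hinge contributes one local slope change.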
Generalized splines for Radon transform on compact Lie groups with applications to crystallography
Swanhild Bernstein; Svend Ebert; Isaac Z. Pesenson
2012-04-27
The Radon transform Rf of functions f on SO(3) has recently been applied extensively in texture analysis, i.e. the analysis of preferred crystallographic orientation. In practice one has to determine the orientation probability density function f ∈ L²(SO(3)) from Rf ∈ L²(S² × S²), which is known only on a discrete set of points. Since one has only partial information about Rf, the inversion of the Radon transform becomes an ill-posed inverse problem. Motivated by this problem we define a new notion of the Radon transform Rf of functions f on general compact Lie groups and introduce two approximate inversion algorithms which utilize our previously developed generalized variational splines on manifolds. Our new algorithms fit very well to the application of the Radon transform on SO(3) to texture analysis.
An Efficient Operator for the Change Point Estimation in Partial Spline Model
Han, Sung Won; Zhong, Hua; Putt, Mary
2015-01-01
In bioinformatics applications, estimating the starting and ending points of a drop-down in longitudinal data is important. One possible approach to estimating such change times is the partial spline model with change points. To estimate the change time, the minimum operator in terms of a smoothing parameter has been widely used, but we showed that the minimum operator causes large MSE in change point estimates. In this paper, we propose the summation operator in terms of a smoothing parameter, and our simulation study showed that the summation operator gives smaller MSE for estimated change points than the minimum one. We also applied the proposed approach to experimental data on blood flow during photodynamic cancer therapy. PMID:25705072
Ionospheric scintillation modeling for high- and mid-latitude using B-spline technique
NASA Astrophysics Data System (ADS)
Priyadarshi, S.
2015-09-01
Ionospheric scintillation is a significant component of space-weather studies and serves as an estimate of the level of perturbation in satellite radio wave signals caused by small-scale ionospheric irregularities. B-spline functions are applied to ground-based GPS data collected during the years 2007-2012 to model high- and mid-latitude ionospheric scintillation. The proposed model covers Hornsund, Svalbard and Warsaw, Poland. The input data used in this model were recorded by GSV 4004b receivers. For validation, the results of this model are compared with observations and with other existing models. The physical behavior of ionospheric scintillation during different seasons and geomagnetic conditions is discussed. The model is found to be in good agreement with ionospheric scintillation theory as well as with the accepted scintillation mechanisms for high and mid latitudes.
Baltazar-Cervantes, Juan-Carlos
2000-01-01
To define a single technique that can be reliably used for evaluation of missing data in time series of weather and building energy use data is a complex task. This thesis evaluates the application of spline and Fourier series mathematical...
Birchler, W.D.; Schilling, S.A.
2001-02-01
The purpose of this report is to demonstrate that modern computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems can be used in the Department of Energy (DOE) Nuclear Weapons Complex (NWC) to design new and remodel old products, fabricate old and new parts, and reproduce legacy data within the inspection uncertainty limits. In this study, two two-dimensional splines are compared with several modern CAD curve-fitting modeling algorithms. The first curve-fitting algorithm is called the Wilson-Fowler Spline (WFS), and the second is called a parametric cubic spline (PCS). Modern CAD systems usually utilize parametric cubic splines and/or B-splines.
NASA Astrophysics Data System (ADS)
Dvorska, Alice; Váňa, Milan; Hanuš, Vlastimil; Pavelka, Marian
2013-04-01
The collocated station Košetice - Křešín u Pacova, central Czech Republic, is a major research and monitoring infrastructure in the Czech Republic and central Europe. It consists of two basic components: the observatory Košetice, run since 1988 by the Czech Hydrometeorological Institute, and the atmospheric station (AS) Křešín u Pacova, which started operation in 2013. The AS is built and run by CzechGlobe - Global Change Research Centre, Academy of Sciences of the Czech Republic, and is situated 100 m from the observatory. There are three research and monitoring activities at the collocated station providing data necessary for research on climate and related changes. The AS Křešín u Pacova consists of a 250 m tall tower serving for ground-based and vertical gradient measurements of (i) concentrations of CO2, CO, CH4, total gaseous mercury and tropospheric ozone (continuously), (ii) elemental and organic carbon (semicontinuously), (iii) carbon and oxygen isotopes, radon, N2O, SF6 and other species (episodically), (iv) optical properties of atmospheric aerosols and (v) meteorological parameters and the boundary layer height. Further, eddy covariance measurements in the nearby agroecosystem provide data on CO2 and H2O fluxes between the atmosphere and the ecosystem. Finally, monitoring activities at the nearby small hydrological catchment Anenské povodí, run under the GEOMON network, enable the study of local hydrological and biogeochemical cycles. These measurements are supported by long-term monitoring of meteorological and air quality parameters at the observatory Košetice, which are representative of the central European background.
The collocated station provides a significant research opportunity and challenge due to (i) a broad spectrum of monitored chemical species and meteorological, hydrological and other parameters, (ii) measurements in various environmental compartments, especially the atmosphere, (iii) provision of data suitable for conducting multidisciplinary research activities and (iv) participation in a number of international programmes and projects, i.e. ICOS (AS Křešín u Pacova), ACTRIS, ACCENT, CLRTAP/EMEP, GAW and ICP-IM (Košetice), and others. Finally, the collocated station has potential for successful participation in the planned network of European superstations covering both climate and air quality issues, one of the key areas in the European Strategy Forum on Research Infrastructures (ESFRI) process. Acknowledgement: This work is supported by the CzechGlobe (CZ.1.05/1.1.00/02.0073) and CZ.1.07/2.4.00/31.0056 projects.
NASA Astrophysics Data System (ADS)
Liu, Yutong; Uberti, Mariano; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael D.
2009-02-01
Coregistration of in vivo magnetic resonance imaging (MRI) with histology provides validation of disease biomarker and pathobiology studies. Although thin-plate splines are widely used in such image registration, point landmark selection is error prone and often time-consuming. We present a technique to optimize landmark selection for thin-plate splines and demonstrate its usefulness in warping rodent brain MRI to histological sections. In this technique, contours are drawn on the corresponding MRI slices and images of histological sections. The landmarks are extracted from the contours by equal spacing and then optimized by minimizing a cost function consisting of the landmark displacement and contour curvature. The technique was validated using simulation data and brain MRI-histology coregistration in a murine model of HIV-1 encephalitis. Registration error was quantified by calculating the target registration error (TRE). Without optimization, the TRE was approximately 8 pixels and remained stable across 20-80 landmarks. The optimized results were more accurate at low landmark numbers (TRE of approximately 2 pixels for 50 landmarks), while accuracy decreased (TRE of approximately 8 pixels) for larger numbers of landmarks (70-80). The results demonstrate that registration accuracy decreases as landmark number increases, and that the optimization offers more confidence in MRI-histology registration using thin-plate splines.
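The equal-spacing landmark extraction step described above can be sketched as arc-length resampling of a contour. The function name and the half-circle contour, a stand-in for a brain-section outline, are hypothetical:

```python
import numpy as np

def equally_spaced_landmarks(contour, n):
    """Resample a polyline contour at n equally spaced arc-length positions,
    as in the landmark-extraction step before thin-plate spline warping."""
    seg = np.linalg.norm(np.diff(contour, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)
    x = np.interp(targets, s, contour[:, 0])
    y = np.interp(targets, s, contour[:, 1])
    return np.column_stack([x, y])

# Hypothetical contour: a half circle sampled at uneven angles
theta = np.sort(np.random.default_rng(3).uniform(0, np.pi, 100))
contour = np.column_stack([np.cos(theta), np.sin(theta)])
marks = equally_spaced_landmarks(contour, 20)
spacing = np.linalg.norm(np.diff(marks, axis=0), axis=1)
```

The resampled landmarks are nearly uniformly spaced regardless of how unevenly the contour was digitized; the paper's cost-function optimization would then adjust these positions using displacement and curvature terms.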
Application Of Prony's Method To Data On Viscoelasticity
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1988-01-01
Prony coefficients found by computer program, without trial and error. Computational method and computer program developed to exploit full potential of Prony's interpolation method in analysis of experimental data on relaxation moduli of viscoelastic material. Prony interpolation curve chosen to give least-squares best fit to "B-spline" interpolation of experimental data.
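Textbook Prony fitting, with the exponents recovered via linear prediction and polynomial roots, can be sketched as follows. This is a generic sketch, not the NTRS program itself, and the two-term relaxation series is synthetic:

```python
import numpy as np

def prony(y, dt, m):
    """Textbook Prony fit of y[k] ~ sum_i a_i * exp(s_i * k * dt).

    Linear prediction gives the characteristic polynomial, whose roots
    yield the exponents; linear least squares then gives the amplitudes."""
    n = len(y)
    # Linear prediction: y[k] = -(c1*y[k-1] + ... + cm*y[k-m]) for k = m..n-1
    A = np.column_stack([y[m - 1 - j:n - 1 - j] for j in range(m)])
    c, *_ = np.linalg.lstsq(A, -y[m:], rcond=None)
    roots = np.roots(np.concatenate([[1.0], c]))
    s = np.log(roots.astype(complex)) / dt          # continuous-time exponents
    V = np.exp(np.outer(np.arange(n) * dt, s))      # exponential "Vandermonde"
    a, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return a, s

# Synthetic relaxation modulus: a two-term Prony series
t = np.arange(40) * 0.1
y = 3.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t)
a, s = prony(y, 0.1, 2)
```

On noise-free data the recovered exponents and amplitudes match the generating series; no trial-and-error guessing of exponents is needed, which is the point the abstract makes.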
Hernandez, Andrew M.; Boone, John M.
2014-01-01
Purpose: Monte Carlo methods were used to generate lightly filtered high resolution x-ray spectra spanning from 20 kV to 640 kV. Methods: X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Results: Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R2) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, “Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector,” Phys. Med. Biol. 24, 505–517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). 
The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9 keV at 20 kV and 169 keV at 640 kV. Conclusions: Ranging from 20 kV to 640 kV, 621 x-ray spectra were produced and are available at 1 kV tube potential intervals. The spectra are tabulated at 1 keV intervals. TASMICS spectra were shown to be largely equivalent to published spectral models and are available in spreadsheet format for interested users by emailing the corresponding author (JMB). PMID:24694149
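The interpolation scheme, one cubic spline per energy bin across tube potentials, can be sketched with toy numbers. The three-bin "fluence" table below is invented, not TASMICS data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical basis "spectra": photon fluence in 3 energy bins, simulated
# at coarse tube potentials (analogous to TASMICS's 35 Monte Carlo spectra)
kv_basis = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
fluence = np.array([[1.0, 0.2, 0.0],
                    [2.0, 0.9, 0.1],
                    [2.8, 1.8, 0.5],
                    [3.3, 2.6, 1.1],
                    [3.6, 3.2, 1.8]])

# One piecewise cubic polynomial per energy bin, evaluated at 1 kV steps
spline = CubicSpline(kv_basis, fluence, axis=0)
kv_fine = np.arange(20, 101)
spectra = spline(kv_fine)        # shape (81, 3): one spectrum per kV
```

Interpolating along the tube-potential axis rather than the energy axis is the key design choice: each energy bin's fluence varies smoothly with kV, so a handful of Monte Carlo basis spectra suffice to generate spectra at every intermediate tube potential.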
NASA Astrophysics Data System (ADS)
Han, In Su; Kim, Eung Sik; Min, Suk Won; Hur, Don; Park, Jong Keun
2004-10-01
In this paper, the electric field at the spacer in a 170 kV gas-insulated switchgear (GIS) is optimized. Initially, the tangential and total electric fields around the original shape of the 170 kV GIS produced by a Korean company are calculated using a combination of the charge simulation method (CSM) and the surface charge method (SCM). The contour of the spacer in the 170 kV GIS is represented by a non-uniform rational B-spline (NURB) curve, the effectiveness of which has been proven. By moving some control points of the NURB curve, the initial shape of the 170 kV GIS can be modified so that the electric field can be optimized. Owing to the proposed algorithm, the overall process converges stably. The objects we want to design are the upper and lower parts of the spacer. Finally, we find the shapes for which the tangential and total electric fields are optimized.
NASA Astrophysics Data System (ADS)
Martinez, Leslie A.; Narea, Freddy J.; Cedeño, Fernando; Muñoz, Aaron A.; Reigosa, Aldo; Bravo, Kelly
2013-11-01
Noninvasive optical techniques have attracted considerable interest in recent years because they quickly and painlessly provide a wealth of information on the structure and composition of biological tissues. This study classifies the degrees of histological differentiation of neoplastic breast tissue in white adipose tissue samples through numerical parameterization of the diffuse reflection spectra using a Fourier series approximation. The white adipose tissue, irradiated with the MiniScan XEplus spectrophotometer, came from mastectomies of patients aged 38 and 50 with cancerous breast lesions. The samples were provided by the pathologist together with their medical reports, which indicate the histological grade of the tumor. We implemented a parameterization algorithm whose classification criterion is the modulus of the minimum difference between the numerical approximation coefficients ai and the average numerical approximation coefficients āl obtained for each histological grade. It is confirmed that cubic spline interpolation, with its low computational cost, allows the tissues under study to be classified into histological grades with 91% certainty from |ai - āl|.
N-dimensional non uniform rational B-splines for metamodeling
Turner, Cameron J; Crawford, Richard H
2008-01-01
Non Uniform Rational B-splines (NURBs) have unique properties that make them attractive for engineering metamodeling applications. NURBs are known to accurately model many different continuous curve and surface topologies in 1- and 2-variate spaces. However, engineering metamodels of the design space often require hypervariate representations of multidimensional outputs. In essence, design space metamodels are hyperdimensional constructs with a dimensionality determined by their input and output variables. To use NURBs as the basis for a metamodel in a hyperdimensional space, traditional geometric fitting techniques must be adapted to hypervariate and hyperdimensional spaces composed of both continuous and discontinuous variable types. In this paper, we describe the necessary adaptations for the development of a NURBs-based metamodel called a Hyperdimensional Performance Model or HyPerModel. HyPerModels are capable of accurately and reliably modeling nonlinear hyperdimensional objects defined by both continuous and discontinuous variables of a wide variety of topologies, such as those that define typical engineering design spaces. We demonstrate this ability by successfully generating accurate HyPerModels of 10 trial functions, laying the foundation for future work with N-dimensional NURBs in design space applications.
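A NURBS curve is the ratio of a B-spline on weighted control points to a B-spline on the weights alone. The classic quarter-circle example below (a standard exact-conic construction, not taken from the paper) shows the rational form reproducing a shape no polynomial B-spline can represent exactly:

```python
import numpy as np
from scipy.interpolate import BSpline

def nurbs_point(u, knots, ctrl, w, degree):
    """Evaluate a NURBS curve as the ratio of two B-splines: one built on
    the weighted control points, one on the weights."""
    num = BSpline(knots, (ctrl.T * w).T, degree)(u)
    den = BSpline(knots, w, degree)(u)
    return num / den

# Quadratic NURBS quarter circle: clamped knots, middle weight sqrt(2)/2
knots = np.array([0, 0, 0, 1, 1, 1], float)
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(0.5), 1.0])
pts = np.array([nurbs_point(u, knots, ctrl, w, 2) for u in np.linspace(0, 1, 9)])
radii = np.linalg.norm(pts, axis=1)
```

Every evaluated point lies exactly on the unit circle, a property that depends on the rational weights; setting all weights to 1 recovers an ordinary (non-rational) B-spline, which can only approximate the arc.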
Vijayaraghavan, Pramila; Veezhinathan, Mahesh
2015-01-01
Spirometry evaluates the integrated function of lung capacity and chest wall mechanics by measuring the total volume of air forcefully exhaled from a fully inflated lung. This non-invasive, informative technique for characterizing pulmonary function has an important role in clinical trials to narrow the differential diagnosis of subjects being assessed for pulmonary disorders. The test, however, requires patient cooperation, and submaximal effort sometimes affects the results, potentially leading to incomplete tests and misdiagnosis. The aim of this work is to develop a prediction model based on the multivariate adaptive regression splines (MARS) technique to estimate the spirometric parameter Peak Expiratory Flow (PEF). In the present study, flow-volume data from N = 220 subjects are considered. Model performance is evaluated statistically with the coefficient of determination (R2) and the Root Mean Squared Error (RMSE). The significant spirometric features captured in the model were FEV1, FEF50, FEF25 and the demographic parameter weight. Bland-Altman plots for the estimated PEF values showed minimal bias. The MARS model successfully adopted the important features for prediction of the PEF parameter with an overall good fit, and these findings can assist clinicians with enhanced spirometric investigations of respiratory disorders. PMID:26684569
Spline-based Study of the Extragalactic Background Light Spectrum using Gamma-Ray Observations
NASA Astrophysics Data System (ADS)
Bose, Anoushka; Rathmann-Bloch, Julia; Biteau, Jonathan; Williams, David A.
2016-01-01
The extragalactic background light (EBL) is made up of all the light emitted by stars and galaxies throughout cosmic history. Expanding on the work of Biteau & Williams 2015, we develop a novel natural cubic spline model of the local EBL spectrum and constrain its parameters using the gamma-ray spectra of 38 blazars measured in the high-energy (HE, 0.1 to 100 GeV) and very-high-energy (VHE, 0.1 to 20 TeV) bands. Starting from this best-fit model, we then study the so-called "delta gamma" (Δγ) observable, defined as the difference between the VHE and HE photon indices. This second study is focused on a subset of nine BL Lac objects. The application of a scaling factor to the cosmic optical background (0.1-10 μm) significantly impacts the predicted Δγ as a function of redshift, whereas a similar modification of the cosmic infrared background (10-1000 μm) has no impact. We conclude that the simple delta gamma approach can only constrain part of the EBL spectrum, while a detailed study of the spectra, such as presented in the first part of this research, is needed to constrain the cosmic infrared background.
Nieto, P J García; Antón, J C Álvarez; Vilán, J A Vilán; García-Gonzalo, E
2015-05-01
The aim of this research work is to build a regression model of air quality by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (northern Spain) at a local scale. To accomplish the objective of this study, the experimental data set made up of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3), and dust (PM10) was collected over 3 years (2006-2008). The US National Ambient Air Quality Standards (NAAQS) establishes the limit values of the main pollutants in the atmosphere in order to ensure the health of healthy people. Firstly, this MARS regression model captures the main perception of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of these numerical calculations using the MARS technique, the conclusions of this research work are presented. PMID:25414030
A spline-based tool to assess and visualize the calibration of multiclass risk predictions.
Van Hoorde, K; Van Huffel, S; Timmerman, D; Bourne, T; Van Calster, B
2015-04-01
When validating risk models (or probabilistic classifiers), calibration is often overlooked. Calibration refers to the reliability of the predicted risks, i.e. whether the predicted risks correspond to observed probabilities. In medical applications this is important because treatment decisions often rely on the estimated risk of disease. The aim of this paper is to present generic tools to assess the calibration of multiclass risk models. We describe a calibration framework based on a vector spline multinomial logistic regression model. This framework can be used to generate calibration plots and calculate the estimated calibration index (ECI) to quantify lack of calibration. We illustrate these tools in relation to risk models used to characterize ovarian tumors. The outcome of the study is the surgical stage of the tumor when relevant and the final histological outcome, which is divided into five classes: benign, borderline malignant, stage I, stage II-IV, and secondary metastatic cancer. The 5909 patients included in the study are randomly split into equally large training and test sets. We developed and tested models using the following algorithms: logistic regression, support vector machines, k nearest neighbors, random forest, naive Bayes and nearest shrunken centroids. Multiclass calibration plots are interesting as an approach to visualizing the reliability of predicted risks. The ECI is a convenient tool for comparing models, but is less informative and interpretable than calibration plots. In our case study, logistic regression and random forest showed the highest degree of calibration, and the naive Bayes the lowest. PMID:25579635
Boundary Value Technique for Initial Value Problems Based on Adams-Type Second Derivative Methods
ERIC Educational Resources Information Center
Jator, S. N.; Sahi, R. K.
2010-01-01
In this article, we propose a family of second derivative Adams-type methods (SDAMs) of order up to 2k + 2 ("k" is the step number) for initial value problems. The methods are constructed through a continuous approximation of the SDAM which is obtained by multistep collocation. The continuous approximation is used to obtain initial value methods,…
A Hermite cubic collocation scheme for plane strain hydraulic fractures
Peirce, Anthony
2010-01-01
A Hermite cubic collocation scheme is presented for simulating the propagation of a hydraulic fracture in a state of plane strain, using special blended cubic Hermite-power-law basis functions. (Accepted 13 February 2010; available online 4 March 2010.)
ERIC Educational Resources Information Center
Peters, Elke
2014-01-01
This article examines how form recall of target lexical items by learners of English as a foreign language (EFL) is affected (1) by repetition (1, 3, or 5 occurrences), (2) by the type of target item (single words versus collocations), and (3) by the time of post-test administration (immediately or one week after the learning session).…
NASA Astrophysics Data System (ADS)
Deschenes, Sylvain; Godbout, Benoit; Branchaud, Dominic; Mitton, David; Pomero, Vincent; Bleau, Andre; Skalli, Wafa; de Guise, Jacques A.
2003-05-01
We propose a new fast stereoradiographic 3D reconstruction method for the spine. User input is limited to a few points passing through the spine on two radiographs and two line segments representing the end plates of the limiting vertebrae. A 3D spline that suggests the positions of the vertebrae in space is then generated. We then use wavelet multi-scale analysis (WMSA) to automatically localize specific features in both lateral and frontal radiographs. The WMSA gives an elegant spectral investigation that leads to gradient generation and edge extraction. Analysis of the information contained at several scales leads to the detection of 1) two curves enclosing the vertebral bodies' walls and 2) inter-vertebral spaces along the spine. From these data, we extract four points per vertebra per view, corresponding to the corners of the vertebral bodies. These points delimit a hexahedron in space in which we can match the vertebral body. This hexahedron is then passed through a 3D statistical database built using local and global information generated from a bank of normal and scoliotic spines. Finally, models of the vertebrae are positioned with respect to these landmarks, completing the 3D reconstruction.
NASA Astrophysics Data System (ADS)
Hansell, Richard A.; Tsay, Si-Chee; Ji, Qiang; Liou, K. N.; Ou, Szu-Cheng
2003-09-01
An approach is presented to estimate the surface aerosol radiative forcing by use of collocated cloud-screened narrowband spectral and thermal-offset-corrected radiometric observations during the Puerto Rico Dust Experiment 2000, South African Fire Atmosphere Research Initiative (SAFARI) 2000, and Aerosol Characterization Experiment-Asia 2001. We show that aerosol optical depths from the Multiple-Filter Rotating Shadowband Radiometer data match closely with those from the Cimel sunphotometer data for two SAFARI-2000 dates. The observed aerosol radiative forcings were interpreted on the basis of results from the Fu-Liou radiative transfer model, and, in some cases, cross checked with satellite-derived forcing parameters. Values of the aerosol radiative forcing and forcing efficiency, which quantifies the sensitivity of the surface fluxes to the aerosol optical depth, were generated on the basis of a differential technique for all three campaigns, and their scientific significance is discussed.
A spectral element method for the simulation of unsteady incompressible flows with heat transfer
NASA Technical Reports Server (NTRS)
Karniadakis, George E.; Patera, Anthony T.
1986-01-01
The spectral element method is a high-order finite element technique for solution of the Navier-Stokes and energy equations. In the isoparametric spectral element discretization, the domain is broken up into general brick elements, and the dependent and independent variables represented as high-order tensor-product Lagrangian interpolants through Chebyshev collocation points. The nonlinear and convective terms in the governing equations are treated with explicit collocation, while the pressure and diffusive contributions are handled implicitly using variational projection operators. The method is applied to flow past a cylinder, flow in grooved channels, and natural convection in an enclosure.
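Differentiation at Chebyshev collocation points can be sketched with the standard dense differentiation matrix (a generic textbook construction in the style of Trefethen, not the paper's spectral element implementation):

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points on [-1, 1] and the differentiation matrix."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)                      # Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))               # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                                   # negative row sums on diagonal
    return D, x

D, x = cheb(16)
f = x ** 3
err = np.abs(D @ f - 3.0 * x ** 2).max()   # exact for polynomials up to degree N
```

Spectral differentiation is exact (to rounding) for polynomials of degree up to N, which is the property collocation-based solvers exploit.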
Stat 7334 Nonparametric and Robust Methods Miscellaneous References
Serfling, Robert
Eubanks, R. L., Spline Smoothing and Nonparametric Regression, Marcel Dekker, 1988; Fraser, D. A. S., Nonparametric Methods in Statistics, Wiley, 1957; Green, P. J. and Silverman, B. W., Nonparametric Regression and Generalized Linear Models, Chapman and Hall, 1994; Härdle, W., Applied Nonparametric Regression, Cambridge…
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in figure modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and the spline calculates the number of points required so that, when displayed on a computer screen, the result is a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose complex structure does not allow them to be defined by a regular rectangular grid. On the other hand, surface subdivision methods, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that, during the last fifteen years, subdivision methods have taken the lead over regular spline methods in all areas of modeling, in both industry and research. The cost of executing computer software developed to read control points and calculate the surface is its run time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs have been developed to implement subdivision surfaces; however, not many algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. The Catmull-Clark scheme, the most popular of the subdivision methods, is employed to illustrate the algorithm.
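One Catmull-Clark refinement step computes a face point per face, an edge point per edge, and an updated position per original vertex. A minimal numpy sketch on a cube, geometry only (the rewiring of the new quad faces, which is where the run-time cost of arbitrary topology shows up, is omitted):

```python
import numpy as np

# Cube: 8 vertices, 6 quad faces (vertex index = 4*(x==1) + 2*(y==1) + (z==1))
V = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
F = [(0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4), (2, 3, 7, 6), (0, 2, 6, 4), (1, 3, 7, 5)]

# Face points: centroid of each face
face_pts = {i: V[list(f)].mean(axis=0) for i, f in enumerate(F)}

# Map each edge to its two adjacent faces
edge_faces = {}
for i, f in enumerate(F):
    for a, b in zip(f, f[1:] + f[:1]):
        edge_faces.setdefault(frozenset((a, b)), []).append(i)

# Edge points: average of the two endpoints and the two adjacent face points
edge_pts = {e: (V[list(e)].sum(axis=0) + sum(face_pts[i] for i in fs)) / 4.0
            for e, fs in edge_faces.items()}

# Updated vertices: (Q + 2R + (n-3)S) / n with Q = mean adjacent face point,
# R = mean incident edge midpoint, S = old position, n = valence
new_V = []
for v in range(len(V)):
    fs = [i for i, f in enumerate(F) if v in f]
    es = [e for e in edge_faces if v in e]
    n = len(es)
    Q = np.mean([face_pts[i] for i in fs], axis=0)
    R = np.mean([V[list(e)].mean(axis=0) for e in es], axis=0)
    new_V.append((Q + 2.0 * R + (n - 3.0) * V[v]) / n)
new_V = np.array(new_V)
```

One step turns the cube's 8 vertices into 6 + 12 + 8 = 26 new points, and the corner vertices are pulled toward the limit sphere-like surface.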
Spectral methods in fluid dynamics
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Zang, T. A.
1986-01-01
Fundamental aspects of spectral methods are introduced. Recent developments in spectral methods are reviewed with an emphasis on collocation techniques. Their applications to both compressible and incompressible flows, to viscous as well as inviscid flows, and also to chemically reacting flows are surveyed. The key role that these methods play in the simulation of stability, transition, and turbulence is brought out. A perspective is provided on some of the obstacles that prohibit a wider use of these methods, and how these obstacles are being overcome.
Zatsarinny, O.; Bartschat, K.; Allan, M.
2011-03-15
In a joint experimental and theoretical effort, we carried out a detailed study of elastic electron scattering from Kr atoms. Absolute angle-differential cross sections for elastic electron scattering were measured over the energy range 0.3-9.8 eV with an energy width of about 13 meV at scattering angles between 0 deg. and 180 deg. Excellent agreement is obtained between our experimental data and predictions from a fully relativistic Dirac B-spline R-matrix (close-coupling) model that accounts for the atomic dipole polarizability through a specially designed pseudostate.
Time-Spectral Method for CFD Prediction of Helicopter Rotor Vibratory Loads
Choi, Seongim; Alonso, Juan J.
A Fourier collocation based time-spectral method is used to simulate helicopter rotor flows under high-speed transonic and highly-loaded dynamic stall conditions. Computations are performed using fully…
The Bi-CGSTAB Method with Red-Black Gauss-Seidel Preconditioner Applied to the Hermite Collocation…
Brill, Stephen H.
To minimize this burden, we solve these equations using the Bi-CGSTAB method with a red-black Gauss-Seidel preconditioner.
On the Hybrid Method with Three Off-Step Points for Initial Value Problems
ERIC Educational Resources Information Center
Jator, S. N.
2010-01-01
A continuous representation of a hybrid method with three "off-step" points is developed via interpolation and collocation procedures, and used to obtain initial value methods (IVMs) for solving initial value problems. The IVMs are assembled into a single block matrix equation which is convergent and A-stable. We note that accuracy is improved by…
Bueler, Ed
…and Optimal Stable Immersion Levels, by Eric A. Butcher and Oleg A. Bobrenkov, includes an in-depth investigation of the optimal stable immersion levels for down-milling in the vicinity…
Park, Hyunjin; Park, Jun-Sung; Seong, Joon-Kyung; Na, Duk L; Lee, Jong-Min
2012-04-30
Analysis of cortical patterns requires accurate cortical surface registration. Many researchers map the cortical surface onto a unit sphere and perform registration of two images defined on the unit sphere. Here we have developed a novel registration framework for the cortical surface based on spherical thin-plate splines. Small-scale composition of spherical thin-plate splines was used as the geometric interpolant to avoid folding in the geometric transform. Using an automatic algorithm based on anisotropic skeletons, we extracted seven sulcal lines, which we then incorporated as landmark information. Mean curvature was chosen as an additional feature for matching between spherical maps. We employed a two-term cost function to encourage matching of both sulcal lines and the mean curvature between the spherical maps. Application of our registration framework to fifty pairwise registrations of T1-weighted MRI scans resulted in improved registration accuracy, which was computed from sulcal lines. Our registration approach was tested as an additional procedure to improve an existing surface registration algorithm. Our registration framework maintained an accurate registration over the sulcal lines while significantly increasing the cross-correlation of mean curvature between the spherical maps being registered. PMID:22366330
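The planar analogue of a thin-plate-spline landmark warp can be sketched with scipy; the paper works with spherical thin-plate splines and small-scale composition to avoid folding, so this Euclidean sketch only illustrates the interpolation property, and the landmark coordinates are made up:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Source landmarks (e.g., sulcal points in a flat 2D chart) and their target positions
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5], [0.2, 0.8]])
dst = src + np.array([[0.05, 0.0], [0.0, 0.02], [-0.03, 0.0],
                      [0.0, -0.04], [0.02, 0.03], [0.0, 0.0]])

# Thin-plate-spline interpolant mapping source landmarks to target landmarks
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# The interpolant reproduces the landmark correspondences exactly...
err = np.abs(warp(src) - dst).max()
# ...and smoothly extends the deformation to any other point in the plane
other = warp(np.array([[0.5, 0.25]]))
```

With zero smoothing (the default) the thin-plate spline passes through every landmark; a positive `smoothing` trades that exactness for a stiffer, fold-resistant transform.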
ERIC Educational Resources Information Center
Moses, Tim
2013-01-01
The purpose of this study was to evaluate the use of adjoined and piecewise linear approximations (APLAs) of raw equipercentile equating functions as a postsmoothing equating method. APLAs are less familiar than other postsmoothing equating methods (i.e., cubic splines), but their use has been described in historical equating practices of…
Butte, Nancy F.; Wong, William W.; Adolph, Anne L.; Puyau, Maurice R.; Vohra, Firoz A.; Zakeri, Issa F.
2010-01-01
Accurate, nonintrusive, and inexpensive techniques are needed to measure energy expenditure (EE) in free-living populations. Our primary aim in this study was to validate cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS) models based on observable participant characteristics, heart rate (HR), and accelerometer counts (AC) for prediction of minute-by-minute EE, and hence 24-h total EE (TEE), against a 7-d doubly labeled water (DLW) method in children and adolescents. Our secondary aim was to demonstrate the utility of CSTS and MARS to predict awake EE, sleep EE, and activity EE (AEE) from 7-d HR and AC records, because these shorter periods are not verifiable by DLW, which provides an estimate of the individual's mean TEE over a 7-d interval. CSTS and MARS models were validated in 60 normal-weight and overweight participants (ages 5–18 y). The Actiheart monitor was used to simultaneously measure HR and AC. For prediction of TEE, mean absolute errors were 10.7 ± 307 kcal/d and 18.7 ± 252 kcal/d for CSTS and MARS models, respectively, relative to DLW. Corresponding root mean square error values were 305 and 251 kcal/d for CSTS and MARS models, respectively. Bland-Altman plots indicated that the predicted values were in good agreement with the DLW-derived TEE values. Validation of CSTS and MARS models based on participant characteristics, HR monitoring, and accelerometry for the prediction of minute-by-minute EE, and hence 24-h TEE, against the DLW method indicated no systematic bias and acceptable limits of agreement for pediatric groups and individuals under free-living conditions. PMID:20573939
NASA Astrophysics Data System (ADS)
Pirpinia, Kleopatra; Bosman, Peter A. N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja
2015-03-01
The use of gradient information is well-known to be highly useful in single-objective optimization-based image registration methods. However, its usefulness has not yet been investigated for deformable image registration from a multi-objective optimization perspective. To this end, within a previously introduced multi-objective optimization framework, we use a smooth B-spline-based dual-dynamic transformation model that allows us to derive gradient information analytically, while still being able to account for large deformations. Within the multi-objective framework, we previously employed a powerful evolutionary algorithm (EA) that computes and advances multiple outcomes at once, resulting in a set of solutions (a so-called Pareto front) that represents efficient trade-offs between the objectives. With the addition of the B-spline-based transformation model, we studied the usefulness of gradient information in multi-objective deformable image registration using three different optimization algorithms: the (gradient-less) EA, a gradient-only algorithm, and a hybridization of these two. We evaluated the algorithms by registering highly deformed images: 2D MRI slices of the breast in prone and supine positions. Results demonstrate that gradient-based multi-objective optimization significantly speeds up optimization in its initial stages. However, given sufficient computational resources, better results could still be obtained with the EA. Ultimately, the hybrid EA found the best overall approximation of the optimal Pareto front, further indicating that adding gradient-based optimization to multi-objective optimization-based deformable image registration can indeed be beneficial.
Spectral methods for time dependent partial differential equations
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1983-01-01
The theory of spectral methods for time dependent partial differential equations is reviewed. When the domain is periodic Fourier methods are presented while for nonperiodic problems both Chebyshev and Legendre methods are discussed. The theory is presented for both hyperbolic and parabolic systems using both Galerkin and collocation procedures. While most of the review considers problems with constant coefficients the extension to nonlinear problems is also discussed. Some results for problems with shocks are presented.
Yao, Xin
Target Shape Design Optimization by Evolving Splines, by Pan Zhang, Xin Yao, and Lei Jia.
Myers, Carl W; Elkins, Ned Z
2008-01-01
Underground collocation of nuclear power reactors and the nuclear waste management facilities supporting those reactors, termed an underground nuclear park (UNP), appears to have several advantages compared to the conventional approach to siting reactors and waste management facilities. These advantages include the potential to lower reactor capital and operating cost, lower nuclear waste management cost, and increase margins of physical security and safety. Environmental impacts related to worker health, facility accidents, waste transportation, and sabotage and terrorism appear to be lower for UNPs compared to the current approach. In-place decommissioning of UNP reactors appears to have cost, safety, environmental and waste disposal advantages. The UNP approach has the potential to lead to greater public acceptance for the deployment of new power reactors. Use of the UNP during the post-nuclear renaissance time frame has the potential to enable a greater expansion of U.S. nuclear power generation than might otherwise result. Technical and economic aspects of the UNP concept need more study to determine the viability of the concept.
NASA Astrophysics Data System (ADS)
Islam, Tanvir; Srivastava, Prashant K.
2015-08-01
The cloud ice water path (IWP) is one of the major parameters that have a strong influence on earth's radiation budget. Onboard satellite sensors are recognized as valuable tools to measure the IWP on a global scale. However, active sensors such as the Cloud Profiling Radar (CPR) onboard the CloudSat satellite have a better capability to measure the ice water content profile, and thus its vertical integral, the IWP, than any passive microwave (MW) or infrared (IR) sensor. In this study, we investigate the retrieval of IWP from MW and IR sensors, including the AMSU-A, MHS, and HIRS instruments onboard the N19 satellite, such that the retrieval is consistent with the CloudSat IWP estimates. This is achieved through collocations between the passive satellite measurements and CloudSat scenes. The potential benefit of synergistic multi-sensor, multi-frequency retrieval is investigated. Two modeling approaches are explored for the IWP retrieval: a generalized linear model (GLM) and a neural network (NN). The investigation has been carried out over both ocean and land surface types. The MW/IR synergy is found to retrieve more accurate IWP than the individual AMSU-A, MHS, or HIRS measurements, and both the GLM and NN approaches are able to exploit the synergistic retrievals.
An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.
2015-04-01
Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images.
In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for the development of high-quality MRI-guided radiation therapy.
System and method of analyzing vibrations and identifying failure signatures in the vibrations
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor); Salvino, Liming W. (Inventor)
2008-01-01
An apparatus, computer program product and method of analyzing structures. Intrinsic Mode Functions (IMFs) are extracted from the data and the most energetic IMF is selected. A spline is fit to the envelope for the selected IMF. The spline derivative is determined. A stability spectrum is developed by separating the positive and negative results into two different spectra representing stable (positive) and unstable (negative) damping factors. The stability spectrum and the non-linearity indicator are applied to the data to isolate unstable vibrations.
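The envelope step above can be sketched generically: fit a cubic spline through the local maxima of a selected IMF (the signal and names here are illustrative, not the patented implementation):

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

t = np.linspace(0.0, 20.0, 4000)
imf = (2.0 + np.cos(0.3 * t)) * np.sin(8.0 * t)   # amplitude-modulated oscillation

peaks, _ = find_peaks(imf)                        # indices of local maxima
env = CubicSpline(t[peaks], imf[peaks])           # upper envelope through the maxima

# The spline interpolates the maxima exactly; its derivative is the kind of
# quantity a stability analysis would examine for sign changes
env_at_peaks = env(t[peaks])
env_slope = env.derivative()(t)
```

A negative envelope slope corresponds to decaying (stable) oscillation, a positive one to growth, which is the intuition behind splitting the results into stable and unstable spectra.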
Boundary knot method: a meshless, exponential convergence, integration-free, and boundary-only collocation technique
…an inherently meshless, exponential convergence, integration-free, boundary-only collocation technique in which boundary knots are used. The efficiency and utility of this new technique are validated through a number…
Cell averaging Chebyshev methods for hyperbolic problems
NASA Technical Reports Server (NTRS)
Wei, Cai; Gottlieb, David; Harten, Ami
1990-01-01
A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations, and numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods are presented.
NASA Astrophysics Data System (ADS)
Song, Hongwei; Li, Yong
2008-12-01
The calculation of the radial matrix elements of alkali metal Rydberg states with principal quantum number n up to 70 is of interest. Until now, calculations of the radial matrix elements have mainly concerned states with n < 30. We use a B-spline expansion technique and model potentials to calculate the radial matrix elements by numerical integration with 16 decimal digits of precision. We are able to obtain the radial matrix elements of alkali metal Rydberg states with n up to 145, with five significant digits. As a test example, we also compute the positions and widths of the anticrossings for highly excited Stark states of Na with principal quantum number n up to 70.
NASA Astrophysics Data System (ADS)
Jiang, Wei Xiang; Cui, Tie Jun; Cheng, Qiang; Chin, Jessie Yao; Yang, Xin Mi; Liu, Ruopeng; Smith, David R.
2008-06-01
We study the design of arbitrarily shaped electromagnetic (EM) concentrators and their potential applications. To obtain closed-form formulas of EM parameters for an arbitrarily shaped concentrator, we employ nonuniform rational B-splines (NURBS) to represent the geometrical boundary. Using the conformally optical transformation of NURBS surfaces, we propose the analytical design of arbitrarily shaped concentrators, which are composed of anisotropic and inhomogeneous metamaterials with closed-form constitutive tensors. The designed concentrators are numerically validated by full-wave simulations, which show perfectly directed EM behaviors. As one of the potential applications, we demonstrate a way to amplify plane waves using a rectangular concentrator, which is much more efficient and easier than the existing techniques. Using NURBS expands the generality of the transformation optics and could lead toward making a very general tool that would interface with commercial software such as 3D STUDIOMAX and MAYA.
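The "rational" part of NURBS can be sketched with scipy's BSpline by carrying the weights in a separate denominator spline: with weights (1, √2/2, 1), a quadratic segment traces an exact quarter circle, something polynomial B-splines cannot do. The control points and knots below are the standard textbook arc, not the paper's concentrator geometry:

```python
import numpy as np
from scipy.interpolate import BSpline

# Quadratic NURBS segment: control points and weights for an exact quarter circle
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2.0) / 2.0, 1.0])
knots = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # triple end knots => Bezier segment

num = BSpline(knots, P * w[:, None], 2)   # weighted numerator spline
den = BSpline(knots, w, 2)                # scalar denominator spline

u = np.linspace(0.0, 1.0, 50)
curve = num(u) / den(u)[:, None]          # rational evaluation
radii = np.linalg.norm(curve, axis=1)     # all points lie on the unit circle
```

Dividing the weighted-point spline by the weight spline is exactly the projective trick that lets NURBS represent conics, which is why they are the natural choice for smooth closed-form boundaries.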
Optimal Hermite Collocation Applied to a One-Dimensional Convection-Diffusion Equation Using…
Brill, Stephen H.
…to the steady-state one-dimensional convection-diffusion equation (which can be used to model transport…), a mesh refinement is sought. A hybrid method that utilizes a genetic algorithm and a hill-climbing approach is used; the genetic algorithm is used to determine a mesh refinement that is close to a locally optimal feasible…
…the optimal control problem into a nonlinear programming problem by discretizing the trajectory into a number… The method has also been used in determining finite-thrust spacecraft trajectories (Ref. 3) and optimal trajectories for multi-stage rockets in Ref. 4. The problem of low-thrust spacecraft trajectories is investigated in Refs.…
NASA Astrophysics Data System (ADS)
Farrell, M. E.; Russo, R. M.
2013-12-01
The installation of Earthscope Transportable Array-style geophysical observatories in Chile expands open data seismic recording capabilities in the southern hemisphere by nearly 30%, and has nearly tripled the number of seismic stations providing freely-available data in southern South America. Through the use of collocated seismic and atmospheric sensors at these stations we are able to analyze how local atmospheric conditions generate seismic noise, which can degrade data in seismic frequency bands at stations in the 'roaring forties' (S latitudes). Seismic vaults that are climate-controlled and insulated from the local environment are now employed throughout the world in an attempt to isolate seismometers from as many noise sources as possible. However, this is an expensive solution that is neither practical nor possible for all seismic deployments; and also, the increasing number and scope of temporary seismic deployments have resulted in the collection and archiving of terabytes of seismic data that is affected to some degree by natural seismic noise sources such as wind and atmospheric pressure changes. Changing air pressure can result in a depression and subsequent rebound of Earth's surface - which generates low frequency noise in seismic frequency bands - and even moderate winds can apply enough force to ground-coupled structures or to the surface above the seismometers themselves, resulting in significant noise. The 10 stations of the permanent Geophysical Reporting Observatories (GRO Chile), jointly installed during 2011-12 by IRIS and the Chilean Servicio Sismológico, include instrumentation in addition to the standard three seismic components. These stations, spaced approximately 300 km apart along the length of the country, continuously record a variety of atmospheric data including infrasound, air pressure, wind speed, and wind direction.
The collocated seismic and atmospheric sensors at each station allow us to analyze both datasets together, to gain insight into how local atmospheric conditions couple with the ground to generate seismic noise, and to explore strategies for reducing this noise post data collection. Comparison of spectra of atmospheric data streams to the three broadband seismic channels for continuous signals recorded during May and June of 2013 shows high coherence between infrasound signals and time variation of air pressure (dP/dt) that we calculated from the air pressure data stream. Coherence between these signals is greatest for the east-west component of the seismic data in northern Chile. Although coherence between seismic, infrasound, and dP/dt is lower for all three seismic channels at other GRO Chile stations, for some of the data streams coherence can jump as much as 6-fold for certain frequency bands, with a common 3-fold increase for periods shorter than 10 seconds and the occasional 6-fold increase at long or very long periods.
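The kind of seismic/pressure coherence analysis described above can be sketched with scipy.signal.coherence on synthetic data; the frequencies, amplitudes, and thresholds here are illustrative, not the GRO Chile values:

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                                  # sample rate, Hz
t = np.arange(0, 400.0, 1.0 / fs)
rng = np.random.default_rng(7)

# A shared 1 Hz component (e.g., pressure loading) buried in independent noise
common = np.sin(2 * np.pi * 1.0 * t)
pressure = common + 0.5 * rng.standard_normal(t.size)
seismic = 0.3 * common + 0.5 * rng.standard_normal(t.size)

# Welch-averaged magnitude-squared coherence between the two channels
f, Cxy = coherence(pressure, seismic, fs=fs, nperseg=2048)
c_shared = Cxy[np.argmin(np.abs(f - 1.0))]    # high where the channels share signal
c_noise = Cxy[np.argmin(np.abs(f - 20.0))]    # low where only independent noise exists
```

Coherence near 1 in a band indicates the seismic channel is being driven by the atmospheric channel there, which is what flags the band as atmospherically contaminated.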
Cho, Hyoun-Myoung; Yang, Ping; Kattawar, George W; Nasiri, Shaima L; Hu, Yongxiang; Minnis, Patrick; Trepte, Charles; Winker, David
2008-03-17
This paper reports on the relationship between lidar backscatter and the corresponding depolarization ratio for nine types of cloud systems. The data used in this study are the lidar returns measured by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) aboard the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite and the collocated cloud products derived from the observations made by the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite. Specifically, the operational MODIS cloud optical thickness and cloud-top pressure products are used to classify cloud types on the basis of the International Satellite Cloud Climatology Project (ISCCP) cloud classification scheme. While the CALIPSO observations provide information for up to 10 cloud layers, in the present study only the uppermost clouds are considered. The layer-averaged attenuated backscatter (gamma') and layer-averaged depolarization ratio (delta) from the CALIPSO measurements show both water- and ice-phase features for the global cirrus, cirrostratus, and deep convective cloud classes. Furthermore, we screen both the MODIS and CALIPSO data to eliminate cases in which CALIPSO detected two- or multi-layered clouds. It is shown that low gamma' values corresponding to uppermost thin clouds are largely eliminated in the CALIPSO delta-gamma' relationship for single-layered clouds. For mid-latitude and polar regions corresponding, respectively, to latitude belts 30-60 degrees and 60-90 degrees in both hemispheres, a mixture of water and ice is also observed in the case of the altostratus class. MODIS cloud phase flags are also used to screen ice clouds.
The resultant water clouds flagged by the MODIS algorithm show only water phase feature in the delta-gamma' relation observed by CALIOP; however, in the case of the ice clouds flagged by the MODIS algorithm, the co-existence of ice- and water-phase clouds is still observed in the CALIPSO delta-gamma' relationship. PMID:18542490
Kirby, Mike
…and reduction of quadrature errors in MPM: solving Newton's laws of motion for the internal force… of motion can hamper spatial convergence of the method. We propose the use of a quadratic B-spline basis for the equations of motion. MPM attempts to marry the best of both worlds: the use of Lagrangian particles…
Baltazar-Cervantes, J.C.; Claridge, D.E.
2002-01-01
A study of cubic splines and Fourier series as interpolation techniques for filling in short periods of missing building energy use and weather data is presented. The procedure followed created artificially missing points (pseudo-gaps) in measured…
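The pseudo-gap procedure can be sketched with scipy; the synthetic hourly series, gap position, and gap length below are illustrative, not the study's measured data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Two weeks of hourly "energy use" with a smooth daily pattern
t = np.arange(0, 14 * 24, dtype=float)
y = 50.0 + 20.0 * np.sin(2 * np.pi * t / 24.0)

# Create an artificial 6-hour gap (a pseudo-gap), then fill it by splining
# through the surrounding observed points
gap = slice(100, 106)
observed = np.ones(t.size, dtype=bool)
observed[gap] = False

filler = CubicSpline(t[observed], y[observed])
filled = filler(t[gap])
max_err = np.abs(filled - y[gap]).max()   # compare against the withheld truth
```

Because the pseudo-gap is created from measured (here, known) data, the fill error can be evaluated directly, which is how interpolation techniques are compared in such studies.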
NASA Astrophysics Data System (ADS)
Wentz, T.; Fayad, H.; Bert, J.; Pradier, O.; Clement, J. F.; Vourch, S.; Boussion, N.; Visvikis, D.
2012-07-01
Time-of-flight (ToF) camera technology provides a real-time depth map of a scene with adequate frequency for the monitoring of physiological patient motion. However, dynamic surface motion estimation using a ToF camera is limited by issues such as the raw measurement accuracy and the absence of fixed anatomical landmarks. In this work we propose to overcome these limitations using surface modeling through B-splines. This approach was assessed in terms of both motion estimation accuracy and associated variability improvements using acquisitions of an anthropomorphic surface phantom for a range of observation distances (0.6-1.4 m). In addition, feasibility was demonstrated on patient acquisitions. Using the proposed B-spline modeling, the mean motion estimation error and associated repeatability with respect to the raw measurements decreased by a factor of 3. Significant correlation was found between patients' surface motion extracted using the proposed B-spline approach applied to the ToF data and that extracted from synchronized 4D-CT acquisitions as the ground truth. ToF cameras represent a promising alternative for contact-less patient surface monitoring for respiratory motion synchronization or modeling in imaging and/or radiotherapy applications.
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
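For contrast with MNP, the Richardson extrapolation baseline mentioned above can be sketched in a few lines: two numerical solutions on systematically refined grids yield an error estimate for the fine-grid result. Here a second-order central difference stands in for "the numerical method"; the function and step sizes are assumptions for the sketch:

```python
import numpy as np

def central_diff(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

f, x, exact = np.sin, 1.0, np.cos(1.0)
d_coarse = central_diff(f, x, 0.10)   # coarse "grid"
d_fine = central_diff(f, x, 0.05)     # systematically refined grid

# For a method of order p = 2 and refinement ratio 2, the discretization
# error of the fine solution is estimated as (d_fine - d_coarse) / (2**p - 1)
est_error = (d_fine - d_coarse) / 3.0
actual_error = exact - d_fine
print(abs(est_error / actual_error - 1.0) < 0.01)
```

The estimate agrees with the actual error to well under a percent here, but it required a second grid, which is exactly the cost MNP/defect correction avoids by reusing the same grid.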
NASA Astrophysics Data System (ADS)
Smith, P. M.; Kempler, S. J.; Leptoukh, G. G.; Savtchenko, A. K.; Kummerer, R. J.
2008-12-01
The Goddard Earth Sciences DISC (Data and Information Services Center) actively supports A-train mission researchers by providing display and data download access to a substantial number of cloud/aerosol, temperature and pressure parameters measured by multiple sensors for platforms in the A-train satellite constellation. Instruments supported include Cloudsat, CALIPSO, MODIS, AIRS, OMI, MLS, and POLDER together with model data from GDAS and ECMWF with temporal coverage June 2006 through present. Our Giovanni tool provides users with the capability of accessing, displaying and downloading subsetted multi-parameter data that have been automatically collocated both spatially and temporally with the Cloudsat instrument's sub-orbital track. Image inter-comparison products are provided for both vertical profiles and narrow horizontal data swaths. This subsetted data may be downloaded in HDF4, PNG or Google Earth KMZ file format. Users may also download time series collocated data from an FTP site. Sample cloud precipitation products measured by multiple A-train instruments will be presented.
A conservative staggered-grid Chebyshev multidomain method for compressible flows
NASA Technical Reports Server (NTRS)
Kopriva, David A.; Kolias, John H.
1995-01-01
We present a new multidomain spectral collocation method that uses staggered grids for the solution of compressible flow problems. The solution unknowns are defined at the nodes of a Gauss quadrature rule. The fluxes are evaluated at the nodes of a Gauss-Lobatto rule. The method is conservative, free-stream preserving, and exponentially accurate. A significant advantage of the method is that subdomain corners are not included in the approximation, making solutions in complex geometries easier to compute.
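The two node families used by the staggered-grid scheme are standard Chebyshev quadrature nodes. A small sketch computes Chebyshev-Gauss nodes (where the solution unknowns live, all interior, so subdomain corners and endpoints are excluded) and Chebyshev-Gauss-Lobatto nodes (where the fluxes live, including the endpoints):

```python
import numpy as np

def chebyshev_gauss(n):
    """Chebyshev-Gauss nodes: the n zeros of T_n, all interior to (-1, 1)."""
    k = np.arange(n)
    return np.cos((2 * k + 1) * np.pi / (2 * n))

def chebyshev_gauss_lobatto(n):
    """Chebyshev-Gauss-Lobatto nodes: the n+1 extrema of T_n, including ±1."""
    k = np.arange(n + 1)
    return np.cos(k * np.pi / n)

g = chebyshev_gauss(8)
gl = chebyshev_gauss_lobatto(8)
print(np.all(np.abs(g) < 1.0), np.isclose(gl.max(), 1.0), np.isclose(gl.min(), -1.0))
```

Because the Gauss set contains no endpoints, unknowns never sit on subdomain corners, which is the property the abstract highlights for complex geometries.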
Simultaneous Collocated Photometry
NASA Astrophysics Data System (ADS)
Calderwood, T.; Getz, E.; Holcomb, E.
2015-12-01
Two telescopes equipped with single channel photometers are operated side-by-side, observing the same stars, to evaluate the consistency of their results. In fifteen paired V band observations, we find that the median absolute difference between the two systems is 6 mmag, and that they always agree within 2σ errors.
NASA Astrophysics Data System (ADS)
Zhang, Zhijun; Liu, Feng; Deng, Fuqin; Tsui, Hungtat
2014-11-01
Due to the variance between subjects, there is usually ambiguity in intensity-based intersubject registration. The topological constraint on the brain cortical surface might be violated because of the highly convolved nature of the human cerebral cortex. We propose an intersubject brain registration method by combining the intensity and the geodesic closest point-based similarity measurements. Each brain hemisphere is topologically equivalent to a sphere, and a one-to-one mapping of the points on the spherical surfaces of the two subjects can be achieved. The correspondences on the cortical surface are obtained by searching for the geodesic closest points on the spherical surface. The corresponding features on the cortical surfaces between subjects are then used as anatomical landmarks for intersubject registration. By adding these anatomical constraints of the cortical surfaces, the intersubject registration results are more anatomically plausible and accurate. We validate our method by using real human datasets. Experimental results in visual inspection and alignment error show that the proposed method performs better than the typical joint intensity- and landmark-distance-based methods.
Comparison of the constant and linear boundary element method for EEG and MEG forward modeling
Mosher, J.C.; Chang, C.H.; Leahy, R.M.
1996-07-01
We present a comparison of boundary element methods for solving the forward problem in EEG and MEG. We use the method of weighted residuals and focus on the collocation and Galerkin forms for constant and linear basis functions. We also examine the effect of the isolated skull approach for reducing numerical errors due to the low conductivity of the skull. We demonstrate the improvement that a linear Galerkin approach may yield in solving the forward problem.
NASA Astrophysics Data System (ADS)
Emamgolizadeh, S.; Bateni, S. M.; Shahsavani, D.; Ashrafi, T.; Ghorbani, H.
2015-10-01
The soil cation exchange capacity (CEC) is one of the main soil chemical properties, which is required in various fields such as environmental and agricultural engineering as well as soil science. In situ measurement of CEC is time consuming and costly. Hence, numerous studies have used traditional regression-based techniques to estimate CEC from more easily measurable soil parameters (e.g., soil texture, organic matter (OM), and pH). However, these models may not be able to adequately capture the complex and highly nonlinear relationship between CEC and its influential soil variables. In this study, Genetic Expression Programming (GEP) and Multivariate Adaptive Regression Splines (MARS) were employed to estimate CEC from more readily measurable soil physical and chemical variables (e.g., OM, clay, and pH) by developing functional relations. The GEP- and MARS-based functional relations were tested at two field sites in Iran. Results showed that GEP and MARS can provide reliable estimates of CEC. Also, it was found that the MARS model (with root-mean-square error (RMSE) of 0.318 Cmol+ kg-1 and coefficient of determination (R2) of 0.864) generated slightly better results than the GEP model (with RMSE of 0.270 Cmol+ kg-1 and R2 of 0.807). The performance of the GEP and MARS models was compared with two existing approaches, namely artificial neural networks (ANN) and multiple linear regression (MLR). The comparison indicated that MARS and GEP outperformed the MLR model, but they did not perform as well as ANN. Finally, a sensitivity analysis was conducted to determine the most and the least influential variables affecting CEC. It was found that OM and pH have the most and least significant effects on CEC, respectively.
From Particle Tracks to Velocity and Acceleration Fields Using B-Splines and Penalties
Gesemann, Sebastian
2015-01-01
In this work, a method is described for reconstructing velocity and acceleration fields from scattered particle-tracking data of flow experiments. The goal is to reconstruct these fields faithfully with a limited amount of compute time, exploiting known flow properties, such as a divergence-free velocity field for incompressible flows and a rotation-free acceleration field when the acceleration is known to be dominated by the pressure gradient, in order to improve the spatial resolution of the reconstruction.
Singularity Preserving Numerical Methods for Boundary Integral Equations
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki (Principal Investigator)
1996-01-01
In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity-preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
A Linear Mixed Model Spline Framework for Analysing Time Course ‘Omics’ Data
Straube, Jasmin; Gorse, Alain-Dominique
2015-01-01
Time course ‘omics’ experiments are becoming increasingly important to study system-wide dynamic regulation. Despite their high information content, analysis remains challenging. ‘Omics’ technologies capture quantitative measurements on tens of thousands of molecules. Therefore, in a time course ‘omics’ experiment molecules are measured for multiple subjects over multiple time points. This results in a large, high-dimensional dataset, which requires computationally efficient approaches for statistical analysis. Moreover, methods need to be able to handle missing values and various levels of noise. We present a novel, robust and powerful framework to analyze time course ‘omics’ data that consists of three stages: quality assessment and filtering, profile modelling, and analysis. The first step consists of removing molecules for which expression or abundance is highly variable over time. The second step models each molecular expression profile in a linear mixed model framework which takes into account subject-specific variability. The best model is selected through a serial model selection approach and results in dimension reduction of the time course data. The final step includes two types of analysis of the modelled trajectories, namely, clustering analysis to identify groups of correlated profiles over time, and differential expression analysis to identify profiles which differ over time and/or between treatment groups. Through simulation studies we demonstrate the high sensitivity and specificity of our approach for differential expression analysis. We then illustrate how our framework can bring novel insights on two time course ‘omics’ studies in breast cancer and kidney rejection. The methods are publicly available, implemented in the R CRAN package lmms. PMID:26313144
NASA Technical Reports Server (NTRS)
Gatewood, B. E.
1971-01-01
The linearized integral equation for the Foucault test of a solid mirror was solved by various methods: power series, Fourier series, collocation, iteration, and inversion integral. The case of the Cassegrain mirror was solved by a particular power series method, collocation, and inversion integral. The inversion integral method appears to be the best overall method for both the solid and Cassegrain mirrors. Certain particular types of power series and Fourier series are satisfactory for the Cassegrain mirror. Numerical integration of the nonlinear equation for selected surface imperfections showed that results start to deviate from those given by the linearized equation at a surface deviation of about 3 percent of the wavelength of light. Several possible procedures for calibrating and scaling the input data for the integral equation are described.
Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G
2013-01-01
Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The latter model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. PMID:23084890
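A toy sketch of the polynomial basis used by such random regression models: days in milk standardized to [-1, 1], followed by a third-order Legendre fit to a synthetic lactation-like curve. The curve shape and tolerance are illustrative assumptions, not the paper's data:

```python
import numpy as np
from numpy.polynomial import legendre

# Days in milk (DIM), standardized to [-1, 1] as is usual for Legendre covariates
dim = np.linspace(5.0, 305.0, 61)
x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0

# Synthetic lactation-like curve: a broad early peak followed by a slow decline
y = 20.0 + 8.0 * np.exp(-((dim - 60.0) / 120.0) ** 2) - 0.02 * dim

# Third-order Legendre fit (four coefficients), as in the random regression models
coef = legendre.legfit(x, y, deg=3)
fitted = legendre.legval(x, coef)
rmse = np.sqrt(np.mean((fitted - y) ** 2))
print(rmse < 2.0)
```

In the genetic models, one such set of regression coefficients is estimated per animal, so the order of the polynomial directly controls how heavily the model is parameterized.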
Black hole evolution by spectral methods
NASA Astrophysics Data System (ADS)
Kidder, Lawrence E.; Scheel, Mark A.; Teukolsky, Saul A.; Carlson, Eric D.; Cook, Gregory B.
2000-10-01
Current methods of evolving a spacetime containing one or more black holes are plagued by instabilities that prohibit long-term evolution. Some of these instabilities may be due to the numerical method used, traditionally finite differencing. In this paper, we explore the use of a pseudospectral collocation (PSC) method for the evolution of a spherically symmetric black hole spacetime in one dimension using a hyperbolic formulation of Einstein's equations. We demonstrate that our PSC method is able to evolve a spherically symmetric black hole spacetime forever without enforcing constraints, even if we add dynamics via a Klein-Gordon scalar field. We find that, in contrast with finite-differencing methods, black hole excision is a trivial operation using PSC applied to a hyperbolic formulation of Einstein's equations. We discuss the extension of this method to three spatial dimensions.
Nitsche's method for two and three dimensional NURBS patch coupling
NASA Astrophysics Data System (ADS)
Nguyen, Vinh Phu; Kerfriden, Pierre; Brino, Marco; Bordas, Stéphane P. A.; Bonisoli, Elvio
2014-06-01
We present a Nitsche-type method to couple non-conforming two- and three-dimensional non-uniform rational B-spline (NURBS) patches in the context of isogeometric analysis. We present results for linear elastostatics in two and three dimensions. The method can deal with surface-surface or volume-volume coupling, and we show how it can be used to handle heterogeneities such as inclusions. We also present preliminary results on modal analysis. This simple coupling method has the potential to increase the applicability of NURBS-based isogeometric analysis for practical applications.
A discrete time method to the first variation of fractional order variational functionals
NASA Astrophysics Data System (ADS)
Pooseh, Shakoor; Almeida, Ricardo; Torres, Delfim F. M.
2013-10-01
The fact that the first variation of a variational functional must vanish along an extremizer is the basis of most effective schemes for solving problems of the calculus of variations. We generalize the method to variational problems involving fractional order derivatives. First order splines are used as variations, for which fractional derivatives are known. The Grünwald-Letnikov definition of the fractional derivative is used, because its intrinsic discrete nature leads to straightforward approximations.
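The Grünwald-Letnikov approximation referred to above admits a compact implementation: the weights are the signed binomial coefficients, built recursively. The grid, test function, and step size below are assumptions for the sketch; as a sanity check, order alpha = 1 reduces to a backward difference:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), built recursively."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f, t, alpha, h):
    """Approximate the GL fractional derivative of order alpha of f at time t."""
    n = int(t / h)
    w = gl_weights(alpha, n)
    samples = f(t - np.arange(n + 1) * h)
    return h ** (-alpha) * np.dot(w, samples)

# Sanity check: for alpha = 1 the weights collapse to (1, -1, 0, 0, ...), i.e. a
# backward difference; the derivative of f(t) = t^2 at t = 1 is 2.
approx = gl_derivative(lambda s: s ** 2, 1.0, 1.0, 1e-4)
print(round(approx, 2))  # → 2.0
```

For non-integer alpha the same weight recursion yields an infinite tail of nonzero weights, which is why the definition discretizes so naturally.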
NASA Technical Reports Server (NTRS)
Bartolone, Anthony; Trujillo, Anna
2002-01-01
NASA Langley Research Center has been researching ways to improve flight crew decision aiding for systems management. Our current investigation is how to display a wide variety of aircraft parameters in ways that will improve the flight crew's situation awareness. To accomplish this, new means are being explored that will monitor the overall health of a flight and report the current status of the aircraft and forecast impending problems to the pilots. The initial step in this research was to conduct a survey addressing how current glass-cockpit commercial pilots would value a prediction of the status of critical aircraft systems. We also addressed how this new type of data ought to be conveyed and utilized. Therefore, two other items associated with predictive information were also included in the survey. The first addressed the need for system status, alerts and procedures, and system controls to be more logically grouped together, or collocated, on the flight deck. The second solicited the survey respondents' opinions on the functionality of mission status graphics: a display methodology that groups a variety of parameters onto a single display that can instantaneously convey a complete overview of both an aircraft's system and mission health.
Kolli, R Prakash; Seidman, David N
2014-12-01
The compositions of co-precipitated and collocated NbC carbide precipitates, Fe3C iron carbide (cementite), and Cu-rich precipitates are studied experimentally by atom-probe tomography (APT). The Cu-rich precipitates located at a grain boundary (GB) are also studied. The APT results for the carbides are supplemented with computational thermodynamics predictions of composition at thermodynamic equilibrium. Two types of NbC carbide precipitates are distinguished based on their stoichiometric ratio and size. The Cu-rich precipitates at the periphery of the iron carbide and at the GB are larger than those distributed in the α-Fe (body-centered cubic) matrix, which is attributed to short-circuit diffusion of Cu along the GB. Manganese segregation is not observed at the heterophase interfaces of the Cu-rich precipitates that are located at the periphery of the iron carbide or at the GB, which is unlike those located at the edge of the NbC carbide precipitates or distributed in the α-Fe matrix. This suggests the presence of two populations of NiAl-type (B2 structure) phases at the heterophase interfaces in multicomponent Fe-Cu steels. PMID:25254942
NASA Technical Reports Server (NTRS)
Huang, Xianglei; Yang, Wenze; Loeb, Norman G.; Ramaswamy, V.
2008-01-01
Spectrally resolved outgoing IR flux, the integrand of the outgoing longwave radiation (OLR), has unique value in evaluating model simulations. Here we describe an algorithm for deriving such clear-sky outgoing spectral flux through the whole IR region from the collocated Atmospheric Infrared Sounder (AIRS) and the Clouds & the Earth's Radiant Energy System (CERES) measurements over the tropical oceans. Based on the scene types and corresponding angular distribution models (ADMs) used in the CERES Single Satellite Footprint (SSF) dataset, spectrally-dependent ADMs are developed and used to estimate the spectral flux at each AIRS channel. A multivariate linear prediction scheme is then used to estimate spectral fluxes at frequencies not covered by the AIRS instrument. The whole algorithm is validated using synthetic spectra as well as the CERES OLR measurements. Using the GFDL AM2 model simulation as a case study, the application of the derived clear-sky outgoing spectral flux in model evaluation is illustrated. By comparing the observed and simulated spectral flux in 2004, compensating errors in the simulated OLR from different absorption bands can be revealed, as can errors from frequencies within a given absorption band. Discrepancies between the simulated and observed spatial distributions and seasonal evolutions of the spectral fluxes at different spectral ranges are further discussed. The methodology described in this study can be applied to other surface types as well as cloudy-sky observations and corresponding model evaluations.
NASA Astrophysics Data System (ADS)
Swaidan, Waleeda; Hussin, Amran
2015-10-01
Most direct methods solve finite-time-horizon optimal control problems with a nonlinear programming solver. In this paper, we propose a numerical method for solving nonlinear optimal control problems with state and control inequality constraints. This method uses the quasilinearization technique and the Haar wavelet operational matrix to convert the nonlinear optimal control problem into a quadratic programming problem. The linear inequality constraints on the trajectory variables are converted to quadratic programming constraints by using the Haar wavelet collocation method. The proposed method has been applied to solve the optimal control of a multi-item inventory model. The accuracy of the states, controls and cost can be improved by increasing the Haar wavelet resolution.
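The Haar operational-matrix machinery rests on the Haar basis itself. A small sketch builds an orthonormal Haar matrix by the standard Kronecker recursion (this is the generic construction, not the paper's specific operational matrix):

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix of size n x n (n must be a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    # Scaling rows: copy each coarse row across pairs of samples
    top = np.kron(h, [1.0, 1.0])
    # Wavelet rows: local differences over each pair of samples
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
    m = np.vstack([top, bottom])
    # Normalize rows to unit length so that H @ H.T = I
    return m / np.linalg.norm(m, axis=1, keepdims=True)

H = haar_matrix(8)
print(np.allclose(H @ H.T, np.eye(8)))  # → True
```

Orthonormality is what makes expanding trajectory variables in the Haar basis, and inverting that expansion, numerically trivial, so refining the resolution just means doubling n.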
Spectral methods and their implementation to solution of aerodynamic and fluid mechanic problems
NASA Technical Reports Server (NTRS)
Streett, C. L.
1987-01-01
Fundamental concepts underlying spectral collocation methods, especially pertaining to their use in the solution of partial differential equations, are outlined. Theoretical accuracy results are reviewed and compared with results from test problems. A number of practical aspects of the construction and use of spectral methods are detailed, along with several solution schemes which have found utility in applications of spectral methods to practical problems. Results from a few of the successful applications of spectral methods to problems of aerodynamic and fluid mechanic interest are then outlined, followed by a discussion of the problem areas in spectral methods and the current research under way to overcome these difficulties.
Frey, Pascal
Numerical simulation of a water bubble rising in a liquid column using the combination of level set and heat transfer methods [fragmentary record: received in revised form 25 February 2012; accepted 10 April 2012; available online 14 May 2012; keywords: level set]. In this paper, the bubble behaviours are studied using the combination of Level Set...
Schaly, B; Bauman, G S; Battista, J J; Van Dyk, J
2005-02-01
The goal of this study is to validate a deformable model using contour-driven thin-plate splines for application to radiation therapy dose mapping. Our testing includes a virtual spherical phantom as well as real computed tomography (CT) data from ten prostate cancer patients with radio-opaque markers surgically implanted into the prostate and seminal vesicles. In the spherical mathematical phantom, homologous control points generated automatically from input contour data in CT slice geometry were compared to homologous control point placement using analytical geometry as the ground truth. The dose delivered to specific voxels, as driven by each set of homologous control points, was compared to determine the accuracy of dose tracking via the deformable model. A 3D analytical spherically symmetric dose distribution with a dose gradient of approximately 10% per mm was used for this phantom. This test showed that the uncertainty in calculating the delivered dose to a tissue element depends on slice thickness and the variation in defining homologous landmarks, where dose agreement of 3-4% in high dose gradient regions was achieved. In the patient data, radio-opaque marker positions driven by the thin-plate spline algorithm were compared to the actual marker positions as identified in the CT scans. It is demonstrated that the deformable model is accurate (approximately 2.5 mm) to within the intra-observer contouring variability. This work shows that the algorithm is appropriate for describing changes in pelvic anatomy and for the dose mapping application with dose gradients characteristic of conformal and intensity modulated radiation therapy. PMID:15773723
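A minimal sketch of control-point-driven thin-plate-spline warping, using SciPy's RBF interpolator with a thin-plate-spline kernel on synthetic homologous control points. The deformation field and point counts are invented for illustration, not taken from the study:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Homologous control points: positions in the planning geometry (cm) ...
src = rng.uniform(0.0, 10.0, size=(30, 2))
# ... and a smooth synthetic deformation giving their treatment-day positions
disp = np.column_stack([0.2 * np.sin(src[:, 0] / 3.0),
                        0.2 * np.cos(src[:, 1] / 3.0)])
dst = src + disp

# Thin-plate spline: maps any point in the source geometry to the deformed one
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# The warp reproduces the homologous control points exactly (interpolating spline)
err = np.max(np.abs(tps(src) - dst))
print(err < 1e-6)
```

Once fitted, the same mapping can be evaluated at voxel centers, which is how a dose grid defined in one anatomy is tracked into the deformed anatomy.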
NASA Astrophysics Data System (ADS)
Rubio-Zuazo, J. R.; Castro, G. R.
2005-07-01
In this contribution we present the current status of the SpLine project devoted to the implementation of hard (up to 15 keV) X-ray photoelectron spectroscopy (HAXPES) in combination with surface X-ray diffraction (SXRD) at the Spanish CRG beamline (SpLine) at the European Synchrotron Radiation Facility (ESRF). The beamline is located at the bending magnet D25 at the ESRF and can be operated in the X-ray energy range 5-45 keV. The main project goals are the detection of very high kinetic energy photoelectrons up to 15 keV, in particular the simultaneous detection of the diffracted photons and photo-emitted electrons. Therefore, special effort has been devoted to developing a novel electron analyzer, capable of working at very high as well as low energies. The analyzer is a sector of a cylindrical mirror analyzer (CSA300HV) with a five-element retarding-lens system and a very compact size compared to standard hemispherical analyzers. Additionally, an ultra-high-vacuum system has been constructed which will simultaneously fulfill the requirements for HAXPES and SXRD. The vacuum chamber has two Be windows so that the incoming and outgoing X-ray beams will hit the sample and the X-ray detector, respectively. The complete system will be installed on a massive 2S+3D diffractometer. Photoelectron spectroscopy and SXRD can be operated either simultaneously or independently from each other. Test experiments with a UV discharge lamp and a RHEED electron gun have been conducted, demonstrating that the analyzer performs satisfactorily. The whole set-up is in the commissioning phase and full operation is expected in the course of 2005.
Evaluation of preferred lightness rescaling methods for colour reproduction
NASA Astrophysics Data System (ADS)
Chang, Yerin
2012-01-01
In cross-media colour reproduction, achieving media-relative reproduction is a common goal. Following the ICC specification, this is often accomplished by linearly scaling XYZ data so that the media white of the source data matches that of the destination data. However, in this approach the media black points are not explicitly aligned. To compensate for this, it is common to apply a black point compensation (BPC) procedure to improve the mapping of the black points. First, three lightness rescaling methods were chosen: linear, sigmoidal, and spline. CIECAM02 was also implemented as a lightness rescaling method: lightness values produced by CIECAM02 were treated as the reproduced lightness values of an output image. The five methods above were applied to a chosen image set, and a paired-comparison psychophysical experiment was performed to evaluate their performance. In most images, Adobe's BPC, linear, and spline lightness rescaling methods were preferred over the CIECAM02 and sigmoidal lightness rescaling methods. The confidence interval for a single image set is +/-0.36; with this interval, it is difficult to conclude that Adobe's BPC method works significantly better. However, for the overall results, because every observation is independent of the others, the confidence interval narrows to +/-0.0763. Based on the overall result, Adobe's BPC method performs best.
[Fragmentary record] Young, Andrea Bunt (Computer Science, University of Manitoba, Winnipeg, Manitoba, Canada), and Susan Prentice (Sociology, University of Manitoba, Winnipeg, Manitoba): Gender studies is a field of inquiry that uses a person's sex or gender identity...
An advanced panel method for analysis of arbitrary configurations in unsteady subsonic flow
NASA Technical Reports Server (NTRS)
Dusto, A. R.; Epton, M. A.
1980-01-01
An advanced method is presented for solving the linear integral equations for subsonic unsteady flow in three dimensions. The method is applicable to flows about arbitrary, nonplanar boundary surfaces undergoing small amplitude harmonic oscillations about their steady mean locations. The problem is formulated with a wake model wherein unsteady vorticity can be convected by the steady mean component of flow. The geometric location of the unsteady source and doublet distributions can be located on the actual surfaces of thick bodies in their steady mean locations. The method is an outgrowth of a recently developed steady flow panel method and employs the linear source and quadratic doublet splines of that method.
A Fourier-Legendre spectral element method in polar coordinates
NASA Astrophysics Data System (ADS)
Qiu, Zhouhua; Zeng, Zhong; Mei, Huan; Li, Liang; Yao, Liping; Zhang, Liangqi
2012-01-01
In this paper, a new Fourier-Legendre spectral element method based on the Galerkin formulation is proposed to solve Poisson-type equations in polar coordinates. The 1/r singularity at r = 0 is avoided by using Gauss-Radau type quadrature points. In order to break the time-step restriction in time-dependent problems, the clustering of collocation points near the pole is prevented through the technique of domain decomposition in the radial direction. A number of Poisson-type equations subject to Dirichlet or Neumann boundary conditions are computed and compared with results in the literature, showing good agreement.
An adaptive pseudo-spectral method for reaction diffusion problems
NASA Technical Reports Server (NTRS)
Bayliss, A.; Matkowsky, B. J.; Gottlieb, D.; Minkoff, M.
1989-01-01
The spectral interpolation error was considered for both the Chebyshev pseudo-spectral and Galerkin approximations. A family of functionals I_r(u) was developed, with the property that the maximum norm of the error is bounded by I_r(u)/J^r, where r is an integer and J is the degree of the polynomial approximation. These functionals are used in the adaptive procedure whereby the problem is dynamically transformed to minimize I_r(u). The number of collocation points is then chosen to maintain a prescribed error bound. The method is illustrated by various examples from combustion problems in one and two dimensions.
An adaptive pseudo-spectral method for reaction diffusion problems
NASA Technical Reports Server (NTRS)
Bayliss, A.; Gottlieb, D.; Matkowsky, B. J.; Minkoff, M.
1987-01-01
The spectral interpolation error was considered for both the Chebyshev pseudo-spectral and Galerkin approximations. A family of functionals I_r(u) was developed, with the property that the maximum norm of the error is bounded by I_r(u)/J^r, where r is an integer and J is the degree of the polynomial approximation. These functionals are used in the adaptive procedure whereby the problem is dynamically transformed to minimize I_r(u). The number of collocation points is then chosen to maintain a prescribed error bound. The method is illustrated by various examples from combustion problems in one and two dimensions.
NASA Technical Reports Server (NTRS)
Witek, Marcin L.; Garay, Michael J.; Diner, David J.; Smirnov, Alexander
2013-01-01
In this study, aerosol optical depths over oceans are analyzed from satellite and surface perspectives. Multiangle Imaging SpectroRadiometer (MISR) aerosol retrievals are investigated and validated primarily against Maritime Aerosol Network (MAN) observations. Furthermore, AErosol RObotic NETwork (AERONET) data from 19 island and coastal sites are incorporated in this study. The 270 MISR-MAN comparison points scattered across all oceans were identified. MISR on average overestimates aerosol optical depths (AODs) by 0.04 as compared to MAN; the correlation coefficient and root-mean-square error are 0.95 and 0.06, respectively. A new screening procedure based on retrieval region characterization is proposed, which is capable of substantially reducing MISR retrieval biases. Over 1000 additional MISR-AERONET comparison points are added to the analysis to confirm the validity of the method. The bias reduction is effective within all AOD ranges. Setting a clear flag fraction threshold to 0.6 reduces the bias to below 0.02, which is close to a typical ground-based measurement uncertainty. Twelve years of MISR data are analyzed with the new screening procedure. The average over-ocean AOD is reduced by 0.03, from 0.15 to 0.12. The largest AOD decrease is observed in high latitudes of both hemispheres, regions with climatologically high cloud cover. It is postulated that the screening procedure eliminates spurious retrieval errors associated with cloud contamination and cloud adjacency effects. The proposed filtering method can be used for validating aerosol and chemical transport models.
Meshless Local Petrov-Galerkin Method for Bending Problems
NASA Technical Reports Server (NTRS)
Phillips, Dawn R.; Raju, Ivatury S.
2002-01-01
Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.
Designing illumination lenses and mirrors by the numerical solution of Monge-Ampère equations.
Brix, Kolja; Hafizogullari, Yasemin; Platen, Andreas
2015-11-01
We consider the inverse refractor and the inverse reflector problem. The task is to design a free-form lens or a free-form mirror that, when illuminated by a point light source, produces a given illumination pattern on a target. Both problems can be modeled by strongly nonlinear second-order partial differential equations of Monge-Ampère type. In [Math. Models Methods Appl. Sci. 25, 803 (2015), doi:10.1142/S0218202515500190], the authors have proposed a B-spline collocation method, which has been applied to the inverse reflector problem. Now this approach is extended to the inverse refractor problem. We explain in depth the collocation method and how to handle boundary conditions and constraints. The paper concludes with numerical results of refracting and reflecting optical surfaces and their verification via ray tracing. PMID:26560938
Treatment of domain integrals in boundary element methods
Nintcheu Fata, Sylvain
2012-01-01
A systematic and rigorous technique to calculate domain integrals without a volume-fitted mesh has been developed and validated in the context of a boundary element approximation. In the proposed approach, a domain integral involving a continuous or weakly-singular integrand is first converted into a surface integral by means of straight-path integrals that intersect the underlying domain. Then, the resulting surface integral is carried out either via analytic integration over boundary elements or by use of standard quadrature rules. This domain-to-boundary integral transformation is derived from an extension of the fundamental theorem of calculus to higher dimension, and the divergence theorem. In establishing the method, it is shown that the higher-dimensional version of the first fundamental theorem of calculus corresponds to the well-known Poincaré lemma. The proposed technique can be employed to evaluate integrals defined over simply- or multiply-connected domains with Lipschitz boundaries which are embedded in a Euclidean space of arbitrary but finite dimension. Combined with the singular treatment of surface integrals that is widely available in the literature, this approach can also be utilized to effectively deal with boundary-value problems involving non-homogeneous source terms by way of a collocation or a Galerkin boundary integral equation method using only the prescribed surface discretization. Sample problems associated with the three-dimensional Poisson equation and featuring the Newton potential are successfully solved by a constant element collocation method to validate this study.
Ubiquitous Media for Collocated Interaction
NASA Astrophysics Data System (ADS)
Jacucci, Giulio; Peltonen, Peter; Morrison, Ann; Salovaara, Antti; Kurvinen, Esko; Oulasvirta, Antti
Has ubiquitous computing entered our lives as anticipated in the early 90s or at the turn of the millennium? In this last decade, the processing of media combined with sensing and communication capabilities has been slowly entering our lives through powerful smartphones, multimodal game consoles, instrumented cars, and large displays pervading public spaces. However, the visionary formulations (Weiser 1991) and updated scenarios (Abowd and Mynatt 2000) have not been realized, despite the fact that the technology has become increasingly accessible.
Zubac, Z; Fostier, J; De Zutter, D; Vande Ginste, D
2015-11-30
It is well-known that geometrical variations due to manufacturing tolerances can degrade the performance of optical devices. In recent literature, polynomial chaos expansion (PCE) methods were proposed to model this statistical behavior. Nonetheless, traditional PCE solvers require a lot of memory, and their computational complexity leads to prohibitively long simulation times, making these methods intractable for large optical systems. The uncertainty quantification (UQ) of various types of large, two-dimensional lens systems is presented in this paper, leveraging a novel parallelized Multilevel Fast Multipole Method (MLFMM) based Stochastic Galerkin Method (SGM). It is demonstrated that this technique can handle large optical structures in reasonable time, e.g., a stochastic lens system with more than 10 million unknowns was solved in less than an hour by using 3 compute nodes. The SGM, an intrusive PCE method, guarantees accuracy. High efficiency is achieved by combining the SGM with the MLFMM, using a preconditioner, and constructing and implementing a parallelized algorithm. This is demonstrated with parallel scalability graphs. The novel approach is illustrated for different types of lens systems, and numerical results are validated against a collocation method, which relies on reusing a traditional deterministic solver. The last example concerns a Cassegrain system with five random variables, for which a speed-up of more than 12× compared to a collocation method is achieved. PMID:26698717
An image mosaic method based on corner
NASA Astrophysics Data System (ADS)
Jiang, Zetao; Nie, Heting
2015-08-01
In view of the shortcomings of traditional image mosaicking, this paper describes a new image mosaic algorithm based on the Harris corner. First, a Harris operator combining a spline-based low-pass smoothing filter with a circular window search is applied to detect image corners, which gives better localisation performance and effectively avoids corner clustering. Second, correlation-based feature registration is used to find registration pairs, and false registrations are removed by random sample consensus (RANSAC). Finally, weighted trigonometric blending combined with an interpolation function is used for image fusion. Experiments show that this method can effectively remove splicing ghosting and improve the accuracy of the image mosaic.
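As a rough sketch of the corner-detection step, the snippet below computes a plain Harris response in NumPy. The paper's spline-based low-pass filter is replaced here by a Gaussian window, and the circular window search and cluster suppression are omitted, so this is an illustrative stand-in rather than the authors' operator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=2.0, k=0.04):
    # Image gradients and Gaussian-smoothed structure tensor
    # (a Gaussian window stands in for the spline-based filter)
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    # Harris corner measure R = det(S) - k * trace(S)^2
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A bright square on a dark background: the strongest responses
# should appear near its four corners
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)
```

Edges yield a negative or near-zero measure, while true corners (both eigenvalues of the structure tensor large) yield strong positive peaks.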
Fairweather, G.; Vedha-Nayagam, M.
1987-01-01
The use of robust general-purpose user-oriented software packages for solving systems of two-point boundary-value problems for ordinary differential equations (BVODEs) is considered for the solution of BVODEs that commonly arise in the modeling of problems in heat transfer. This investigation involves three packages, DVCPR, COLSYS, and DTPTB, which implement the trapezoidal rule with deferred corrections, the method of spline collocation at Gaussian points, and multiple shooting, respectively. The effectiveness of the packages is demonstrated by examining their performance on a variety of BVODEs that have appeared in the literature.
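A modern analogue of the packages compared above is SciPy's solve_bvp, which, like COLSYS, is a collocation-based BVODE solver. The sketch below applies it to a simple conduction-convection fin equation u'' = m^2 u with u(0) = 1, u'(1) = 0; this example problem is an assumption, chosen because its exact solution cosh(m(1 - x))/cosh(m) allows the error to be checked.

```python
import numpy as np
from scipy.integrate import solve_bvp

m = 2.0  # fin parameter (assumed)

def rhs(x, y):
    # First-order system: y[0] = u, y[1] = u'
    return np.vstack([y[1], m ** 2 * y[0]])

def bc(ya, yb):
    # u(0) = 1 and u'(1) = 0
    return np.array([ya[0] - 1.0, yb[1]])

x = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))
exact = np.cosh(m * (1.0 - sol.x)) / np.cosh(m)
```

The solver refines the mesh automatically, much as the deferred-correction and collocation packages in the study adapt their grids.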
Zakeri, Issa F; Adolph, Anne L; Puyau, Maurice R; Vohra, Firoz A; Butte, Nancy F
2013-01-01
Prediction equations of energy expenditure (EE) using accelerometers and miniaturized heart rate (HR) monitors have been developed in older children and adults but not in preschool-aged children. Because the relationships between accelerometer counts (ACs), HR, and EE are confounded by growth and maturation, age-specific EE prediction equations are required. We used advanced technology (fast-response room calorimetry, Actiheart and Actigraph accelerometers, and miniaturized HR monitors) and sophisticated mathematical modeling [cross-sectional time series (CSTS) and multivariate adaptive regression splines (MARS)] to develop models for the prediction of minute-by-minute EE in 69 preschool-aged children. CSTS and MARS models were developed by using participant characteristics (gender, age, weight, height), Actiheart (HR+AC_x) or ActiGraph parameters (AC_x, AC_y, AC_z, steps, posture) [x, y, and z represent the directional axes of the accelerometers], and their significant 1- and 2-min lag and lead values, and significant interactions. Relative to EE measured by calorimetry, mean percentage errors predicting awake EE (-1.1 ± 8.7%, 0.3 ± 6.9%, and -0.2 ± 6.9%) with CSTS models were slightly higher than with MARS models (-0.7 ± 6.0%, 0.3 ± 4.8%, and -0.6 ± 4.6%) for Actiheart, ActiGraph, and ActiGraph+HR devices, respectively. Predicted awake EE values were within ±10% for 81-87% of individuals for CSTS models and for 91-98% of individuals for MARS models. Concordance correlation coefficients were 0.936, 0.931, and 0.943 for CSTS EE models and 0.946, 0.948, and 0.940 for MARS EE models for Actiheart, ActiGraph, and ActiGraph+HR devices, respectively. CSTS and MARS models should prove useful in capturing the complex dynamics of EE and movement that are characteristic of preschool-aged children. PMID:23190760
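The 1- and 2-min lag and lead predictors used by the CSTS and MARS models can be illustrated with a small pandas helper. The column names and demo frame below are invented for illustration; the actual models also include participant characteristics and interaction terms.

```python
import pandas as pd

def add_lag_lead(df, cols, lags=(1, 2), leads=(1, 2)):
    # Augment minute-by-minute predictors with their 1- and 2-min
    # lag and lead values, then drop edge rows with missing values
    out = df.copy()
    for c in cols:
        for k in lags:
            out[f"{c}_lag{k}"] = df[c].shift(k)
        for k in leads:
            out[f"{c}_lead{k}"] = df[c].shift(-k)
    return out.dropna()

# Invented demo frame: one accelerometer axis and heart rate
demo = pd.DataFrame({"ac_x": range(10), "hr": range(100, 110)})
feats = add_lag_lead(demo, ["ac_x", "hr"])
```

The resulting frame could then feed a regression-spline or time-series model of minute-by-minute EE.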
The surface Laplacian technique in EEG: Theory and methods.
Carvalhaes, Claudio; de Barros, J Acacio
2015-09-01
This paper reviews the method of surface Laplacian differentiation to study EEG. We focus on topics that are helpful for a clear understanding of the underlying concepts and its efficient implementation, which is especially important for EEG researchers unfamiliar with the technique. The popular methods of finite difference and splines are reviewed in detail. The former has the advantage of simplicity and low computational cost, but its estimates are prone to a variety of errors due to discretization. The latter eliminates all issues related to discretization and incorporates a regularization mechanism to reduce spatial noise, but at the cost of increasing mathematical and computational complexity. These and several other issues deserving further development are highlighted, some of which we address to the extent possible. Here we develop a set of discrete approximations for Laplacian estimates at peripheral electrodes. We also provide the mathematical details of finite difference approximations that are missing in the literature, and discuss the problem of computational performance, which is particularly important in the context of EEG splines where data sets can be very large. Along this line, the matrix representation of the surface Laplacian operator is carefully discussed and some figures are given illustrating the advantages of this approach. In the final remarks, we briefly sketch a possible way to incorporate finite-size electrodes into Laplacian estimates that could guide further developments. PMID:25962714
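For the finite-difference approach reviewed here, a minimal interior estimate on a regular grid is the familiar five-point stencil; the electrode-specific and peripheral corrections developed in the paper are not reproduced, and the regular-grid assumption is an idealization of real electrode layouts.

```python
import numpy as np

def five_point_laplacian(v, h=1.0):
    # Second-order finite-difference Laplacian at interior grid
    # points: (v_N + v_S + v_E + v_W - 4*v_C) / h^2
    v = np.asarray(v, dtype=float)
    lap = np.zeros_like(v)
    lap[1:-1, 1:-1] = (v[2:, 1:-1] + v[:-2, 1:-1] +
                       v[1:-1, 2:] + v[1:-1, :-2] -
                       4.0 * v[1:-1, 1:-1]) / h ** 2
    return lap

# Sanity check on a paraboloid, whose exact Laplacian is 4 everywhere
yy, xx = np.mgrid[0:20, 0:20].astype(float)
lap = five_point_laplacian(xx ** 2 + yy ** 2)
```

For quadratic potentials the stencil is exact, which makes it a convenient unit test before applying it to measured EEG grids.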
NASA Astrophysics Data System (ADS)
Engwell, S. L.; Aspinall, W. P.; Sparks, R. S. J.
2015-07-01
Characterization of explosive volcanic eruptive processes from interpretation of deposits is key to assessing volcanic hazard and risk, particularly for infrequent large explosive eruptions and those whose deposits are transient in the geological record. While eruption size—determined by measurement and interpretation of tephra fall deposits—is of particular importance, uncertainties for such measurements and volume estimates are rarely presented. Here, tephra volume estimates are derived from isopach maps produced by modeling raw thickness data as cubic B-spline curves under tension. Isopachs are objectively determined in relation to the original data and enable limitations in volume estimates from published maps to be investigated. The eruption volumes derived using spline isopachs differ from selected published estimates by 15-40 %, reflecting uncertainties in the volume estimation process. The formalized analysis enables identification of sources of uncertainty; eruptive volume uncertainties (>30 %) are much greater than thickness measurement uncertainties (~10 %). The number of measurements is a key factor in volume estimate uncertainty, regardless of the method utilized for isopach production. Deposits processed using the cubic B-spline method are well described by 60 measurements distributed across each deposit; however, this figure is deposit and distribution dependent, increasing for geometrically complex deposits, such as those exhibiting bilobate dispersion.
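One way to see how isopach maps lead to a volume estimate: treat the deposit volume as V = ∫ A(T) dT over the enclosed-area-versus-thickness curve and apply the trapezoidal rule. The isopach values below are invented, and the distal tail beyond the outermost isopach is ignored, which is one of the uncertainty sources discussed above.

```python
import numpy as np

def isopach_volume(thickness_m, area_km2):
    # V = integral of enclosed area A(T) over thickness T by the
    # trapezoidal rule; thicknesses in m, areas in km^2 -> km^3
    T = np.asarray(thickness_m, dtype=float) / 1000.0  # m -> km
    A = np.asarray(area_km2, dtype=float)
    order = np.argsort(T)
    T, A = T[order], A[order]
    return float(np.sum(0.5 * (A[1:] + A[:-1]) * np.diff(T)))

# Invented isopach data for a hypothetical deposit
vol = isopach_volume([0.01, 0.1, 0.5, 1.0, 2.0],
                     [4000.0, 1500.0, 400.0, 150.0, 20.0])
```

Fitting the (T, A) pairs with a spline before integrating, as the paper does with B-splines under tension, replaces the piecewise-linear assumption made here.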
A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis
Kang, Mengjun
2015-01-01
A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691
A hybrid biomechanical model-based image registration method for sliding objects
NASA Astrophysics Data System (ADS)
Han, Lianghao; Hawkes, David; Barratt, Dean
2014-03-01
The sliding motion between two anatomic structures, such as the lung against the chest wall or the liver against surrounding tissues, produces a discontinuous displacement field between their boundaries. Capturing the sliding motion is quite challenging for intensity-based image registration methods, in which a smoothness condition has commonly been applied to ensure the deformation consistency of neighborhood voxels. Such a smoothness constraint contradicts motion physiology at the boundaries of these anatomic structures. Although various regularisation schemes have been developed to handle sliding motion under the framework of non-rigid intensity-based image registration, the recovered displacement field may still not be physically plausible. In this study, a new framework that incorporates a patient-specific biomechanical model with a non-rigid image registration scheme for motion estimation of sliding objects has been developed. The patient-specific model provides the motion estimation with an explicit simulation of sliding motion, while the subsequent non-rigid image registration compensates for smaller residuals of the deformation due to the inaccuracy of the physical model. The algorithm was tested against the results of the published literature using 4D CT data from 10 lung cancer patients. The target registration error (TRE) of 3000 landmarks with the proposed method (1.37 ± 0.89 mm) was significantly lower than that with the popular B-spline based free form deformation (FFD) registration (4.5 ± 3.9 mm), and was smaller than that using the B-spline based FFD registration with the sliding constraint (1.66 ± 1.14 mm) or using the B-spline based FFD registration on segmented lungs (1.47 ± 1.1 mm). A paired t-test showed that the improvement of registration performance with the proposed method was significant (p<0.01). The proposed method also achieved the best registration performance on the landmarks near lung surfaces.
Since biomechanical models captured most of the lung deformation, the final estimated deformation field was more physically plausible.
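The target registration error quoted above is simply the Euclidean distance between corresponding landmarks after registration; a minimal sketch with hypothetical landmark coordinates:

```python
import numpy as np

def target_registration_error(warped, reference):
    # Per-landmark Euclidean distances (mm) between the warped
    # landmark positions and their reference positions
    d = np.linalg.norm(np.asarray(warped) - np.asarray(reference),
                       axis=1)
    return d.mean(), d.std()

# Hypothetical 3-D landmark sets (mm), for illustration only
ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
est = ref + np.array([[1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0],
                      [0.0, 0.0, 2.0]])
tre_mean, tre_std = target_registration_error(est, ref)
```

The figures reported in the abstract (e.g., 1.37 ± 0.89 mm over 3000 landmarks) correspond to this mean-and-spread summary.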
Computation of Sound Propagation by Boundary Element Method
NASA Technical Reports Server (NTRS)
Guo, Yueping
2005-01-01
This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows.
The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA); all of these solutions are used for comparison with the BEM results. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.
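The sub-triangle idea for keeping field and evaluation points apart can be sketched for the simplest case, a field point sitting at a triangle vertex: subdivide geometrically toward the singular vertex and apply a centroid rule on each sub-triangle, so no quadrature point ever touches the field point. This is an illustrative stand-in under that assumption, not the report's actual surface-integration scheme.

```python
import numpy as np

def area(a, b, c):
    # Signed-area formula for a planar triangle
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) -
                     (b[1] - a[1]) * (c[0] - a[0]))

def centroid_rule(f, a, b, c):
    # One-point quadrature at the centroid, which is strictly interior
    return f((a + b + c) / 3.0) * area(a, b, c)

def integrate_singular(f, a, b, c, depth=8):
    # Field point at vertex `a`: peel off two sub-triangles whose
    # centroids stay away from `a`, shrinking the remaining corner
    # triangle geometrically toward the singular point
    total = 0.0
    for _ in range(depth):
        mab, mac = 0.5 * (a + b), 0.5 * (a + c)
        total += centroid_rule(f, mab, b, c)
        total += centroid_rule(f, mab, c, mac)
        b, c = mab, mac
    return total + centroid_rule(f, a, b, c)

tri = (np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
const_val = integrate_singular(lambda q: 1.0, *tri)          # exact area
sing_val = integrate_singular(lambda q: 1.0 / np.hypot(*q), *tri)
```

Because each split is an exact partition, constant integrands are integrated exactly at any depth, while the weakly singular 1/r integrand remains finite since no evaluation point coincides with the vertex.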
Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but consequently includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
PMID:24747568
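The interpolation-versus-smoothing trade-off described above can be demonstrated on synthetic count data: with a shallow density gradient plus sampling noise, Gaussian kernel smoothing recovers the underlying pattern better than the raw values that interpolation would honor exactly. All densities and parameters below are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# Synthetic cell-density map: a shallow horizontal gradient
# (cells per sampling site) corrupted by sampling noise
yy, xx = np.mgrid[0:50, 0:50]
true_density = 100.0 + 0.5 * xx
noisy_counts = true_density + rng.normal(0.0, 10.0, true_density.shape)
# Interpolation would reproduce noisy_counts exactly; Gaussian
# kernel smoothing suppresses the noise and exposes the gradient
smoothed = gaussian_filter(noisy_counts, sigma=3.0)
err_raw = np.abs(noisy_counts - true_density).mean()
err_smooth = np.abs(smoothed - true_density).mean()
```

As the paper notes, the kernel width (here sigma) controls how aggressively local variation is traded for a cleaner view of the dominant distribution.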
Balshi, M. S.; McGuire, A.D.; Duffy, P.; Flannigan, M.; Walsh, J.; Melillo, J.
2009-01-01
Fire is a common disturbance in the North American boreal forest that influences ecosystem structure and function. The temporal and spatial dynamics of fire are likely to be altered as climate continues to change. In this study, we ask the question: how will area burned in boreal North America by wildfire respond to future changes in climate? To evaluate this question, we developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was substantially more predictable in the western portion of boreal North America than in eastern Canada. Burned area was also not very predictable in areas of substantial topographic relief and in areas along the transition between boreal forest and tundra. At the scale of Alaska and western Canada, the empirical fire models explain on the order of 82% of the variation in annual area burned for the period 1960-2002. July temperature was the most frequently occurring predictor across all models, but the fuel moisture codes for the months June through August (as a group) entered the models as the most important predictors of annual area burned. To predict changes in the temporal and spatial dynamics of fire under future climate, the empirical fire models used output from the Canadian Climate Center CGCM2 global climate model to predict annual area burned through the year 2100 across Alaska and western Canada. Relative to 1991-2000, the results suggest that average area burned per decade will double by 2041-2050 and will increase on the order of 3.5-5.5 times by the last decade of the 21st century. 
To better predict wildfire across Alaska and Canada, future research should focus on incorporating additional effects of long-term and successional vegetation changes on area burned to account more fully for interactions among fire, climate, and vegetation dynamics.
Ahmed, B.; Ahmad, J.; Guy, G.
1994-09-01
A finite element method coupled with the Preisach model of hysteresis is used to compute the ferrite losses in medium power transformers (10-60 kVA) working at relatively high frequencies (20-60 kHz) and with an excitation level of about 0.3 Tesla. The dynamic evolution of the permeability is taken into account. Single and double cubic spline functions are used to account for temperature effects on the electric and magnetic parameters of the ferrite cores, respectively. The results are compared with test data obtained with 3C8 and B50 ferrites at different frequencies.
Calibration method of microgrid polarimeters with image interpolation.
Chen, Zhenyue; Wang, Xia; Liang, Rongguang
2015-02-10
Microgrid polarimeters have large advantages over conventional polarimeters because of their snapshot nature and lack of moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera with uniform illumination so that the response of the sensor is uniform over the entire field of view without IFOV error. Then a spline interpolation method is implemented to minimize IFOV error. Experimental results show the proposed method can effectively minimize the FPN and PRNU. PMID:25968013
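A sketch of the interpolation step: in a microgrid sensor each polarization orientation occupies a 2x2-decimated grid, and spline interpolation puts all four channels back on the full sensor grid before Stokes parameters are formed, which is how IFOV error is reduced. The microgrid layout, the uniform test scene, and the helper name below are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def demosaic_channel(raw, oy, ox):
    # One polarization orientation lives on a 2x2-decimated grid;
    # spline-interpolate it back onto the full sensor grid so all
    # four channels are co-registered
    ys = np.arange(oy, raw.shape[0], 2)
    xs = np.arange(ox, raw.shape[1], 2)
    spl = RectBivariateSpline(ys, xs, raw[oy::2, ox::2], kx=3, ky=3)
    return spl(np.arange(raw.shape[0]), np.arange(raw.shape[1]))

# Synthetic uniform scene; assumed layout: 0/45 deg on even rows,
# 135/90 deg on odd rows of each 2x2 superpixel
raw = np.empty((32, 32))
raw[0::2, 0::2], raw[0::2, 1::2] = 1.0, 0.5   # I0,  I45
raw[1::2, 0::2], raw[1::2, 1::2] = 0.5, 0.0   # I135, I90
I0 = demosaic_channel(raw, 0, 0)
I45 = demosaic_channel(raw, 0, 1)
I90 = demosaic_channel(raw, 1, 1)
I135 = demosaic_channel(raw, 1, 0)
S0, S1, S2 = I0 + I90, I0 - I90, I45 - I135   # linear Stokes parameters
```

For this uniform, fully horizontally polarized scene the reconstruction should give S0 = S1 = 1 and S2 = 0 everywhere.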
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). The SBM has been proposed to overcome the main drawback of the MFS: its controversial fictitious boundary outside the domain. The key idea of the SBM is to introduce origin intensity factors that isolate singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with the SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by the MFS or the direct BEM, indicating the efficiency of all three methods. In the second numerical experiment, the SBM is used to derive the geopotential and its first derivatives from the Tzz components of the disturbing gravity tensor observed by the GOCE satellite mission. A determination of the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
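A minimal sketch of the MFS side of this comparison (the SBM differs in placing the sources on the boundary itself and regularizing them via origin intensity factors): expand a harmonic function in 2-D free-space fundamental solutions placed on a fictitious circle outside the domain, and collocate the Dirichlet data on the unit circle. The geometry, source radius, and test function are assumptions for illustration.

```python
import numpy as np

# Collocation nodes on the unit circle (the "real" boundary) and
# fictitious source points on a circle of radius 1.5 outside it
n = 40
th = 2.0 * np.pi * np.arange(n) / n
bdry = np.column_stack([np.cos(th), np.sin(th)])
src = 1.5 * bdry

def G(x, s):
    # 2-D free-space fundamental solution of the Laplace equation
    return -np.log(np.linalg.norm(x - s, axis=-1)) / (2.0 * np.pi)

# Collocate the Dirichlet data of the harmonic test function u = x
A = G(bdry[:, None, :], src[None, :, :])
coef = np.linalg.solve(A, bdry[:, 0])

# Evaluate the expansion at an interior point
p = np.array([0.3, -0.2])
u_num = G(p, src) @ coef
```

This makes the MFS's dependence on the fictitious boundary explicit: moving the source circle changes the conditioning of A, which is precisely the drawback the SBM removes.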
Zhao, J.M.; Tan, J.Y.; Liu, L.H.
2013-01-01
A new second order form of the radiative transfer equation (named MSORTE) is proposed, which overcomes the singularity problem of a previously proposed second order radiative transfer equation [J.E. Morel, B.T. Adams, T. Noh, J.M. McGhee, T.M. Evans, T.J. Urbatsch, Spatial discretizations for self-adjoint forms of the radiative transfer equations, J. Comput. Phys. 214 (1) (2006) 12-40 (where it was termed SAAI); J.M. Zhao, L.H. Liu, Second order radiative transfer equation and its properties of numerical solution using finite element method, Numer. Heat Transfer B 51 (2007) 391-409] in dealing with inhomogeneous media where some locations have a very small or zero extinction coefficient. The MSORTE contains a naturally introduced diffusion (or second order) term which provides better numerical properties than the classic first order radiative transfer equation (RTE). The stability and convergence characteristics of the MSORTE discretized by a central difference scheme are analyzed theoretically, and the second order radiative transfer equations are proved to have better numerical stability than the RTE when discretized by central-difference-type methods. A collocation meshless method is developed based on the MSORTE to solve radiative transfer in inhomogeneous media. Several critical test cases are taken to verify the performance of the presented method. The collocation meshless method based on the MSORTE is demonstrated to be capable of stably and accurately solving radiative transfer in strongly inhomogeneous media, media with void regions, and even media with discontinuous extinction coefficients.
A MR-TRUS registration method for ultrasound-guided prostate interventions
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Rossi, Peter; Mao, Hui; Jani, Ashesh B.; Ogunleye, Tomi; Curran, Walter J.; Liu, Tian
2015-03-01
In this paper, we report an MR-TRUS prostate registration method that uses a subject-specific prostate strain model to improve MR-targeted, US-guided prostate interventions (e.g., biopsy and radiotherapy). The proposed algorithm combines a subject-specific prostate strain model with a B-spline transformation to register the prostate gland of the MRI to the TRUS images. The prostate strain model was obtained through US elastography, and a 3D strain map of the prostate was generated. The B-spline transformation was calculated by minimizing the Euclidean distance between the MR and TRUS prostate surfaces. The prostate strain map was used to constrain the B-spline-based transformation to predict and compensate for the internal prostate-gland deformation. This method was validated with a prostate-phantom experiment and a pilot study of 5 prostate-cancer patients. For the phantom study, the mean target registration error (TRE) was 1.3 mm. MR-TRUS registration was also successfully performed for the 5 patients, with a mean TRE less than 2 mm. The proposed registration method may provide an accurate and robust means of estimating internal prostate-gland deformation, and could be valuable for prostate-cancer diagnosis and treatment.
Design of a Variational Multiscale Method for Turbulent Compressible Flows
NASA Technical Reports Server (NTRS)
Diosady, Laslo Tibor; Murman, Scott M.
2013-01-01
A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.
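The matrix-free Newton-Krylov idea above can be sketched with SciPy: the Jacobian-vector product is approximated by a finite difference of the residual and wrapped in a LinearOperator for GMRES, so the Jacobian matrix is never formed. The toy residual below stands in for the compressible-flow residual, and the ADI-based preconditioner is omitted.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    # Toy nonlinear residual standing in for the flow residual:
    # each component solves u_i^2 + u_i - 2 = 0 (root u_i = 1)
    return u ** 2 + u - 2.0

def newton_krylov(u, tol=1e-10, eps=1e-7, max_iter=50):
    for _ in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # Matrix-free Jacobian-vector product via finite differences:
        # J @ v ~= (F(u + eps*v) - F(u)) / eps
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (F(u + eps * v) - F(u)) / eps)
        du, _ = gmres(J, -r)
        u = u + du
    return u

u = newton_krylov(np.zeros(8))
```

The memory footprint is that of a few residual vectors, which is the property the solver design above exploits at much larger scale.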
NASA Technical Reports Server (NTRS)
Smith, Ralph C.
1994-01-01
A Galerkin method for systems of PDEs in circular geometries is presented, with motivating problems drawn from structural, acoustic, and structural acoustic applications. Depending upon the application under consideration, piecewise splines or Legendre polynomials are used when approximating the system dynamics, with modifications included to incorporate the analytic solution decay near the coordinate singularity. This provides an efficient method which retains its accuracy throughout the circular domain without degradation at the singularity. Because the problems under consideration are linear or weakly nonlinear with constant or piecewise constant coefficients, transform methods for the problems are not investigated. While the specific method is developed for the two-dimensional wave equation on a circular domain and the equation of transverse motion for a thin circular plate, examples demonstrating the extension of the techniques to a fully coupled structural acoustic system are used to illustrate the flexibility of the method when approximating the dynamics of more complex systems.